Documentation ¶
Overview ¶
Package agent implements the ReAct (Reason + Act) pattern for AI agents.
A ReAct agent runs a bounded Think → Act → Observe loop: the model thinks (generates a response), acts (calls tools), and observes (results are appended to the history), repeating until it produces a final answer or exhausts the step limit.
The pattern is based on "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022, https://arxiv.org/abs/2210.03629).
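In pseudo-Go, the loop has roughly this shape (a minimal sketch; the lowercase helpers are illustrative, not package API, and the real loop lives behind Agent.Run):

for step := 0; step < maxSteps; step++ {
    resp := think(ctx)           // LLM generates a response
    calls := toolCallsIn(resp)   // tool calls the model requested, if any
    if len(calls) == 0 {
        return finalAnswer(resp) // no tool calls: this is the final answer
    }
    observe(act(ctx, calls))     // execute the tools, append results to history
}
return ErrMaxStepsReached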
Building an agent ¶
Use the fluent builder to compose an agent from an LLM client, tool definitions, and a tool executor:
a := agent.New(client, toolDefs, executor).
    WithInstructions("You are a precise research assistant.").
    WithMaxSteps(15)
Running an agent ¶
Agent.Run executes the full loop for a single user question. It returns a Result, a replayable rxgo.Observable of AgentEvent values, and any error:
result, events, err := a.Run(ctx, "Who won the 2025 Nobel Prize in Physics?")
if err != nil {
    log.Fatal(err)
}
fmt.Println(result.Output)
Observable event stream ¶
The returned observable is a cold, replayable stream of everything that happened during the run. Subscribe by calling Observe():
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
    case agent.ToolExecEvent:
        slog.Info("tool exec", "tools", e.ToolNames)
    case agent.RunEndEvent:
        slog.Info("run end", "err", e.Err)
    }
}
Calling Observe() again replays all events from the beginning, so the stream is safe for multiple independent subscribers (loggers, metrics, tracing).
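For example, a second subscriber can make an independent pass over the same completed run to derive metrics:

toolSteps := 0
for item := range events.Observe() {
    if _, ok := item.V.(agent.ToolExecEvent); ok {
        toolSteps++
    }
}
fmt.Println("steps that executed tools:", toolSteps)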
Execution history ¶
The full conversation is available via result.Context.Events(). Each model.Event has an Author ("user", "agent", or "tools"), a timestamp, and typed model.ContentItem values (model.Message, model.ToolCall, model.ToolResult).
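For example, to print a readable transcript of a run:

for _, ev := range result.Context.Events() {
    for _, item := range ev.Content {
        switch v := item.(type) {
        case model.Message:
            fmt.Printf("[%s] %s\n", ev.Author, v.Content)
        case model.ToolCall:
            fmt.Printf("[%s] call %s %s\n", ev.Author, v.Name, v.Arguments)
        case model.ToolResult:
            fmt.Printf("[%s] %s -> %s\n", ev.Author, v.Name, v.Status)
        }
    }
}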
Bringing your own tools ¶
Implement model.ToolExecutor to connect any tool-running backend:
type myExecutor struct{ /* your registry, MCP session, etc. */ }
func (e *myExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, call := range calls {
        out, err := e.dispatch(ctx, call.Name, call.Arguments)
        if err != nil {
            results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "error", Content: []string{err.Error()}}
            continue
        }
        results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "success", Content: []string{out}}
    }
    return results, nil
}
For MCP-based tools (github.com/v8tix/mcp-toolkit), use the ready-made adapter in the [mcpadapter] sub-package.
Manual step control ¶
Agent.Step is exported so callers can drive the loop themselves, which is useful for streaming, checkpointing, or human-in-the-loop interrupts:
execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", model.Message{Role: "user", Content: question})

for execCtx.CurrentStep() < 20 {
    if err := a.Step(ctx, execCtx); err != nil {
        break
    }
    if execCtx.Done() {
        break
    }
    execCtx.IncrementStep()
}
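Think and Act are exported too, so a caller can interpose between reasoning and tool execution. A sketch of a human-in-the-loop gate (approve is a hypothetical policy callback, e.g. a CLI prompt; Step handles the no-tools case for you):

resp, err := a.Think(ctx, execCtx)
if err != nil {
    log.Fatal(err)
}
var calls []model.ToolCall
for _, item := range resp.Content {
    if c, ok := item.(model.ToolCall); ok {
        calls = append(calls, c)
    }
}
if len(calls) > 0 && approve(calls) { // approve is your own gate, not package API
    if err := a.Act(ctx, execCtx, calls); err != nil {
        log.Fatal(err)
    }
}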
Index ¶
- Variables
- type Agent
- func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
- func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
- func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
- func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)
- func (a *Agent) WithInstructions(s string) *Agent
- func (a *Agent) WithMaxSteps(n int) *Agent
- type AgentEvent
- type ExecutionContext
- func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
- func (ec *ExecutionContext) CurrentStep() int
- func (ec *ExecutionContext) Done() bool
- func (ec *ExecutionContext) Events() []model.Event
- func (ec *ExecutionContext) FinalResult() (string, bool)
- func (ec *ExecutionContext) GetState(key string) (any, bool)
- func (ec *ExecutionContext) ID() string
- func (ec *ExecutionContext) IncrementStep()
- func (ec *ExecutionContext) SetState(key string, value any)
- type LLMCallEvent
- type LLMClient
- type LiteLLMClient
- type Result
- type RunEndEvent
- type RunStartEvent
- type StepEndEvent
- type StepStartEvent
- type ToolExecEvent
Examples ¶
- New
- Agent.Run
- Agent.Run (EventStream)
- Agent.Run (ReasoningTrail)
- Agent.Run (WithTools)
- Agent.Step
Constants ¶
This section is empty.
Variables ¶
var ErrMaxStepsReached = errors.New("agent: max steps reached without final answer")
ErrMaxStepsReached is returned when Run exhausts maxSteps without a final answer.
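Callers can distinguish an exhausted step budget from other failures with errors.Is:

_, _, err := a.Run(ctx, question)
if errors.Is(err, agent.ErrMaxStepsReached) {
    // Out of budget before a final answer: raise WithMaxSteps or fall back.
}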
Functions ¶
This section is empty.
Types ¶
type Agent ¶
type Agent struct {
    // contains filtered or unexported fields
}
Agent is the ReAct orchestrator. It runs a Think → Act → Observe loop until the LLM produces a final answer or maxSteps is exhausted.
func New ¶
func New(client LLMClient, defs []model.ToolDefinition, executor model.ToolExecutor) *Agent
New creates an Agent with sensible defaults (maxSteps=10).
- defs: tool definitions the LLM can call (pass nil or empty for no tools)
- executor: executes tool calls concurrently (pass nil when defs is empty)
Use the builder methods to customise the agent:
agent.New(client, defs, executor).
    WithInstructions("You are helpful.").
    WithMaxSteps(15)
Example ¶
ExampleNew shows how to construct an agent with the fluent builder. Replace demoLLM with agent.NewLiteLLMClient(openaiClient, model) to target a real LLM.
package main

import (
    "context"
    "fmt"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    // Hypothetical: ask an assistant to look up a stock price.
    llm := &demoLLM{} // swap for agent.NewLiteLLMClient(...)
    defs := []model.ToolDefinition{
        {
            Name:        "search_web",
            Description: "Search the web for up-to-date information",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "query": map[string]any{"type": "string"},
                },
                "required": []string{"query"},
            },
        },
    }
    _ = agent.New(llm, defs, demoExecutor{}).
        WithInstructions("You are a precise research assistant.").
        WithMaxSteps(15)
}

Output:
func (*Agent) Act ¶
func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
Act executes all requested tool calls via ToolExecutor and records the results. The agent's tool-call decision is appended as an "agent" event BEFORE execution, then tool results are appended as a "tools" event AFTER execution. Note: events are not emitted when calling Act directly; use Run for observability.
func (*Agent) Run ¶
func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
Run executes the full ReAct loop for a single user message. It returns a Result, a replayable Observable of AgentEvent values, and any error.
The Observable is a cold, replayable stream: each call to Observe() replays all events from the completed run. It is safe for multiple subscribers.
Example ¶
ExampleAgent_Run demonstrates a single-step run where the LLM answers directly without calling any tools.
package main

import (
    "context"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "The capital of France is Paris."},
            }},
        },
    }

    a := agent.New(llm, nil, nil).
        WithInstructions("You are a helpful assistant.")

    result, _, err := a.Run(context.Background(), "What is the capital of France?")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(result.Output)
    fmt.Println(result.ToolCalled)
}

Output:

The capital of France is Paris.
false
Example (EventStream) ¶
ExampleAgent_Run_eventStream shows how to consume the observable event stream returned by Run to build logging, metrics, or tracing.
The observable is cold and replayable: calling Observe() again replays all events from the beginning, safe for multiple independent subscribers.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.ToolCall{ID: "c1", Name: "search_web", Arguments: json.RawMessage(`{}`)},
            }},
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Done."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:       "search_web",
        Parameters: map[string]any{"type": "object", "properties": map[string]any{"query": map[string]any{"type": "string"}}, "required": []string{"query"}},
    }}

    a := agent.New(llm, defs, demoExecutor{})
    _, events, err := a.Run(context.Background(), "What is the weather in Paris?")
    if err != nil {
        log.Fatal(err)
    }

    for item := range events.Observe() {
        switch e := item.V.(type) {
        case agent.RunStartEvent:
            fmt.Println("run started")
        case agent.LLMCallEvent:
            fmt.Printf("llm call step=%d\n", e.Step)
        case agent.ToolExecEvent:
            fmt.Printf("tool exec: %v\n", e.ToolNames)
        case agent.RunEndEvent:
            fmt.Println("run ended")
        }
    }
}

Output:

run started
llm call step=0
tool exec: [search_web]
llm call step=1
run ended
Example (ReasoningTrail) ¶
ExampleAgent_Run_reasoningTrail shows how to inspect the full conversation history (every message, tool call, and tool result) after a run. Useful for debugging, audit logs, or displaying the chain of thought.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.ToolCall{ID: "c1", Name: "lookup", Arguments: json.RawMessage(`{"id":"42"}`)},
            }},
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Found it."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:       "lookup",
        Parameters: map[string]any{"type": "object", "properties": map[string]any{"id": map[string]any{"type": "string"}}, "required": []string{"id"}},
    }}

    result, _, err := agent.New(llm, defs, demoExecutor{}).
        Run(context.Background(), "Look up record 42.")
    if err != nil {
        log.Fatal(err)
    }

    for _, event := range result.Context.Events() {
        for _, item := range event.Content {
            switch v := item.(type) {
            case model.Message:
                fmt.Printf("[%s] %s\n", event.Author, v.Content)
            case model.ToolCall:
                fmt.Printf("[%s] call %s\n", event.Author, v.Name)
            case model.ToolResult:
                fmt.Printf("[%s] result %s=%s\n", event.Author, v.Name, v.Content[0])
            }
        }
    }
}

Output:

[user] Look up record 42.
[agent] call lookup
[tools] result lookup=result_of_lookup
[agent] Found it.
Example (WithTools) ¶
ExampleAgent_Run_withTools demonstrates a two-step run: the LLM first calls a tool, then produces a final answer once it has the search result.
This mirrors the classic ReAct scenario: the agent reasons about what information it needs, fetches it, then synthesises an answer.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            // Step 1: LLM decides to call search_web
            {Content: []model.ContentItem{
                model.ToolCall{
                    ID:        "call_1",
                    Name:      "search_web",
                    Arguments: json.RawMessage(`{"query":"AAPL stock price January 9 2007"}`),
                },
            }},
            // Step 2: LLM reads the search result and gives the final answer
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Apple stock was $11.74 on January 9, 2007."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:        "search_web",
        Description: "Search the web for current information",
        Parameters: map[string]any{
            "type":     "object",
            "required": []string{"query"},
            "properties": map[string]any{
                "query": map[string]any{"type": "string"},
            },
        },
    }}

    a := agent.New(llm, defs, demoExecutor{}).
        WithInstructions("You are a research assistant. Verify facts before answering.")

    result, _, err := a.Run(context.Background(), "What was Apple's stock price the day the iPhone was announced?")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(result.Output)
    fmt.Println("tool called:", result.ToolCalled)
}

Output:

Apple stock was $11.74 on January 9, 2007.
tool called: true
func (*Agent) Step ¶
func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
Step executes one Think → (optionally) Act cycle, mutating execCtx in place. It is exported so callers can drive the loop manually for checkpointing or human-in-the-loop interrupts. Use execCtx.Done() to check for a final answer. Note: events are not emitted when calling Step directly; use Run for observability.
Example ¶
ExampleAgent_Step shows how to drive the ReAct loop manually step-by-step. This gives you control between steps, useful for streaming output to a UI, checkpointing long runs, or pausing for human approval before the agent acts.
package main

import (
    "context"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "The answer is 42."},
            }},
        },
    }
    a := agent.New(llm, nil, nil).WithMaxSteps(10)

    execCtx := agent.NewExecutionContextForTest()
    execCtx.AddEvent("user", model.Message{Role: "user", Content: "What is the answer to life, the universe and everything?"})

    for execCtx.CurrentStep() < 10 {
        if err := a.Step(context.Background(), execCtx); err != nil {
            log.Fatal(err)
        }
        if execCtx.Done() {
            break
        }
        execCtx.IncrementStep()
    }

    answer, _ := execCtx.FinalResult()
    fmt.Println(answer)
    fmt.Println("done:", execCtx.Done())
}

Output:

The answer is 42.
done: true
func (*Agent) Think ¶
func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)
Think calls the LLM with the current execution context and returns its response. Note: events are not emitted when calling Think directly; use Run for observability.
func (*Agent) WithInstructions ¶
func (a *Agent) WithInstructions(s string) *Agent
WithInstructions sets the system prompt sent on every LLM request.
func (*Agent) WithMaxSteps ¶
func (a *Agent) WithMaxSteps(n int) *Agent
WithMaxSteps overrides the default step limit (10). Panics if n < 1: zero or negative steps is a programming error.
type AgentEvent ¶
type AgentEvent interface {
    // contains filtered or unexported methods
}
AgentEvent is the sealed sum type for all agent lifecycle events. Use a type switch to handle specific event types:
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "latency_ms", e.Latency.Milliseconds())
    case agent.RunEndEvent:
        fmt.Println(e.Result.Output)
    }
}
type ExecutionContext ¶
type ExecutionContext struct {
    // contains filtered or unexported fields
}
ExecutionContext is the central mutable state for one agent run. It records all Events across steps and holds the final result once the agent produces a terminal response. All public methods are safe for concurrent use.
func NewExecutionContextForTest ¶
func NewExecutionContextForTest() *ExecutionContext
NewExecutionContextForTest exposes newExecutionContext for white-box unit tests.
func (*ExecutionContext) AddEvent ¶
func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
AddEvent appends an event authored by author with the given content items. ID and Timestamp are generated automatically. Safe for concurrent use.
func (*ExecutionContext) CurrentStep ¶
func (ec *ExecutionContext) CurrentStep() int
CurrentStep returns the current step index. Safe for concurrent use.
func (*ExecutionContext) Done ¶
func (ec *ExecutionContext) Done() bool
Done reports whether the agent has produced a final answer. Safe for concurrent use.
func (*ExecutionContext) Events ¶
func (ec *ExecutionContext) Events() []model.Event
Events returns a defensive copy of the event log. Each Event's Content slice is independently copied so callers cannot corrupt internal state by mutating returned slices. Safe for concurrent use.
func (*ExecutionContext) FinalResult ¶
func (ec *ExecutionContext) FinalResult() (string, bool)
FinalResult returns the agent's final answer and true once Done() is true. Returns ("", false) if the agent has not finished yet. Safe for concurrent use.
func (*ExecutionContext) GetState ¶
func (ec *ExecutionContext) GetState(key string) (any, bool)
GetState retrieves a value from the run-scoped key-value store. Safe for concurrent use.
func (*ExecutionContext) ID ¶
func (ec *ExecutionContext) ID() string
ID returns the unique identifier for this execution. Safe for concurrent use.
func (*ExecutionContext) IncrementStep ¶
func (ec *ExecutionContext) IncrementStep()
IncrementStep advances the step counter by one. Safe for concurrent use.
func (*ExecutionContext) SetState ¶
func (ec *ExecutionContext) SetState(key string, value any)
SetState stores a value in the run-scoped key-value store. Safe for concurrent use.
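A small sketch of run-scoped bookkeeping when driving the loop manually (the key name and values are illustrative, not a package convention):

execCtx.SetState("tool_budget", 5)

if v, ok := execCtx.GetState("tool_budget"); ok {
    if budget, ok := v.(int); ok && budget <= 0 {
        // e.g. stop stepping, or require approval before the next Act
    }
}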
type LLMCallEvent ¶
LLMCallEvent is emitted after every Generate() call, including on error. Latency covers only the LLM network round-trip. The full request and response content are available via result.Context.Events().
type LLMClient ¶
type LLMClient interface {
    Generate(ctx context.Context, req model.Request) (model.Response, error)
}
LLMClient abstracts communication with a language model. Implement this interface to support any LLM provider.
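Because the interface has a single method, cross-cutting concerns compose naturally as decorators. A sketch of a hypothetical retry wrapper (not part of this package; assumes failures are transient and worth retrying):

type retryClient struct {
    inner   agent.LLMClient
    retries int
}

func (r *retryClient) Generate(ctx context.Context, req model.Request) (model.Response, error) {
    var lastErr error
    for attempt := 0; attempt <= r.retries; attempt++ {
        resp, err := r.inner.Generate(ctx, req)
        if err == nil {
            return resp, nil
        }
        lastErr = err
        select {
        case <-ctx.Done():
            return model.Response{}, ctx.Err()
        case <-time.After(time.Duration(attempt+1) * 200 * time.Millisecond): // simple linear backoff
        }
    }
    return model.Response{}, lastErr
}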
type LiteLLMClient ¶
type LiteLLMClient struct {
    // contains filtered or unexported fields
}
LiteLLMClient adapts the openai-go client to the LLMClient interface. Works with OpenAI directly or with a LiteLLM proxy (same API surface).
func NewLiteLLMClient ¶
func NewLiteLLMClient(client *openai.Client, model openai.ChatModel) *LiteLLMClient
NewLiteLLMClient creates a LiteLLMClient wrapping the provided openai-go client.
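A wiring sketch, assuming the openai-go option package; the base URL is a hypothetical local LiteLLM proxy, and whether NewClient returns a pointer or a value depends on your openai-go version:

oc := openai.NewClient(
    option.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
    option.WithBaseURL("http://localhost:4000/v1"), // hypothetical LiteLLM proxy address
)
llm := agent.NewLiteLLMClient(oc, openai.ChatModelGPT4o) // take &oc on openai-go versions where NewClient returns a value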
type Result ¶
type Result struct {
    // Output is the final answer produced by the LLM.
    Output string

    // ToolCalled reports whether at least one tool was invoked during the run.
    ToolCalled bool

    // Context is the full execution history for this run.
    Context *ExecutionContext
}
Result is the output of a successful Agent.Run() call.
type RunEndEvent ¶
RunEndEvent is emitted once when Run returns, whether it succeeded or failed. Result is nil when Err is non-nil.
type RunStartEvent ¶
RunStartEvent is emitted once before the ReAct loop begins.
type StepEndEvent ¶
StepEndEvent is emitted after each Think → Act cycle. Err is non-nil when the step failed.
type StepStartEvent ¶
StepStartEvent is emitted at the beginning of each Think → Act cycle.
Directories ¶

| Path | Synopsis |
|---|---|
| mcpadapter | Package mcpadapter bridges github.com/v8tix/mcp-toolkit with react-agent. |
| model | Package model contains the core data types shared across react-agent and its sub-packages. |