agent

package module
v1.0.0
Published: Apr 23, 2026 License: MIT Imports: 12 Imported by: 0

README ¶

🤖 react-agent

Go Reference Go Report Card

Build AI agents that think before they act.

Most LLM integrations are one-shot: send a prompt, get an answer. But complex questions require reasoning — looking things up, checking results, adjusting the plan. react-agent implements the ReAct pattern (Reason + Act), a technique where the model alternates between thinking out loud and using tools until it's confident enough to answer.

📄 Based on "ReAct: Synergizing Reasoning and Acting in Language Models" — Yao et al., 2022


🧠 What is the ReAct pattern?

Think of it like a detective 🕵️ who never guesses. Instead of jumping to a conclusion, they follow a strict method: form a hypothesis → gather evidence → revise → repeat until the case is solved.

flowchart TD
    Q(["โ“ User Question"])
    THINK["๐Ÿง  Think\nWhat do I need to find out?"]
    DECIDE{{"๐Ÿค” Need\na tool?"}}
    ACT["๐Ÿ”ง Act\nCall a tool"]
    OBSERVE["๐Ÿ‘๏ธ Observe\nRead tool output"]
    ANSWER["โœ… Answer\nReturn final response"]
    LIMIT{{"๐Ÿšง Max steps\nreached?"}}
    ERR(["โŒ ErrMaxStepsReached"])

    Q --> THINK
    THINK --> DECIDE
    DECIDE -->|"Yes ๐Ÿ› ๏ธ"| ACT
    ACT --> OBSERVE
    OBSERVE --> LIMIT
    LIMIT -->|"No"| THINK
    LIMIT -->|"Yes"| ERR
    DECIDE -->|"No, I know enough โœจ"| ANSWER

๐Ÿ” A Concrete Example

"What was Apple's stock price the day the iPhone was announced?"

A one-shot model will guess. A ReAct agent will reason:

sequenceDiagram
    participant U as 👤 User
    participant A as 🤖 Agent
    participant L as 🧠 LLM
    participant T as 🔧 search_web

    U->>A: "What was Apple's stock price the day the iPhone was announced?"

    A->>L: Think 💭
    L-->>A: I need the announcement date first
    A->>T: search_web("first iPhone announcement date")
    T-->>A: "January 9, 2007" 📅

    A->>L: Think 💭 (now I have the date)
    L-->>A: Now I need the stock price on that date
    A->>T: search_web("AAPL stock price January 9 2007")
    T-->>A: "$11.74" 📈

    A->>L: Think 💭 (I have everything I need)
    L-->>A: ✅ Final answer

    A-->>U: "Apple's stock was $11.74 on Jan 9, 2007 — the day Steve Jobs unveiled the iPhone."

💡 Notice how the agent builds on previous observations — each step's result is fed back into the next Think. The LLM never loses context.


📦 Installation

go get github.com/v8tix/react-agent

🚀 Quick Start

Here's a complete example: a research assistant 🔬 that can search the web and do math to answer complex questions.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/openai/openai-go"
    agent "github.com/v8tix/react-agent"
)

func main() {
    ctx := context.Background()

    // 1️⃣ Wrap your LLM (any OpenAI-compatible endpoint, including LiteLLM)
    openaiClient := openai.NewClient() // reads OPENAI_API_KEY from env
    client := agent.NewLiteLLMClient(openaiClient, "gpt-4o-mini")

    // 2️⃣ Declare the tools the agent can use (OpenAI JSON-schema format)
    defs := []agent.ToolDefinition{
        {
            Name:        "search_web",
            Description: "Search the web for up-to-date information",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "query": map[string]any{"type": "string", "description": "The search query"},
                },
                "required": []string{"query"},
            },
        },
        {
            Name:        "calculator",
            Description: "Evaluate a simple arithmetic expression",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "expression": map[string]any{"type": "string", "description": "e.g. '42 * 1.2'"},
                },
                "required": []string{"expression"},
            },
        },
    }

    // 3️⃣ Wire up the executor — your code that actually runs the tools
    executor := &myExecutor{}

    // 4️⃣ Build the agent with a fluent builder chain
    a := agent.New(client, defs, executor).
        WithInstructions("You are a precise research assistant. Always verify facts before answering.").
        WithMaxSteps(10)

    // 5️⃣ Run! Returns result, a replayable event stream, and any error.
    result, events, err := a.Run(ctx, "How many seconds did it take the Voyager 1 spacecraft to travel 1 AU?")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("๐Ÿ Answer:", result.Output)
    _ = events // see "Observability" section below
}

๐Ÿ—๏ธ Architecture

graph TB
    subgraph "Your Code"
        U(["๐Ÿ‘ค Caller"])
        EX["๐Ÿ”ง ToolExecutor\nimpl"]
    end

    subgraph "react-agent"
        AG["๐Ÿค– Agent\norchestrator"]
        EC["๐Ÿ“‹ ExecutionContext\nmessage history"]
        LP["๐Ÿ” ReAct Loop\nstep controller"]
    end

    subgraph "External"
        LLM["๐Ÿง  LLM\n(OpenAI / LiteLLM)"]
        TOOLS["๐Ÿ› ๏ธ Tools\n(search, DB, APIs...)"]
    end

    U -->|"New(...).WithX().Run()"| AG
    AG --> EC
    AG --> LP
    LP -->|"Generate(messages)"| LLM
    LLM -->|"ToolCall or Answer"| LP
    LP -->|"Execute(calls)"| EX
    EX -->|"dispatch"| TOOLS
    TOOLS -->|"results"| EX
    EX -->|"ToolResult[]"| LP
    LP -->|"*Result"| AG
    AG -->|"*Result, Observable, error"| U

🔧 Implementing ToolExecutor

The only interface you must implement:

type ToolExecutor interface {
    Execute(ctx context.Context, calls []agent.ToolCall) ([]agent.ToolResult, error)
}

A typical implementation dispatches by tool name:

type myExecutor struct {
    searcher WebSearcher
    calc     Calculator
}

func (e *myExecutor) Execute(ctx context.Context, calls []agent.ToolCall) ([]agent.ToolResult, error) {
    results := make([]agent.ToolResult, len(calls))
    for i, call := range calls {
        output, err := e.dispatch(ctx, call.Name, string(call.Arguments))
        if err != nil {
            results[i] = agent.ToolResult{
                ID: call.ID, Name: call.Name,
                Status: "error", Content: []string{err.Error()},
            }
            continue
        }
        results[i] = agent.ToolResult{
            ID: call.ID, Name: call.Name,
            Status: "success", Content: []string{output},
        }
    }
    return results, nil
}

func (e *myExecutor) dispatch(ctx context.Context, name, args string) (string, error) {
    switch name {
    case "search_web":
        return e.searcher.Search(ctx, args)
    case "calculator":
        return e.calc.Eval(args)
    default:
        return "", fmt.Errorf("unknown tool: %s", name)
    }
}

💡 ToolExecutor is the integration seam — plug in MCP, LangChain tools, a local SQLite, a REST API, anything. The agent doesn't care what's behind it.
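
For instance, here is a minimal sketch of an executor backed by a plain REST service. Everything specific here is an assumption for illustration: the restExecutor type, the POST /tools/<name> route, and the response handling are ours, not part of react-agent (the snippet needs context, fmt, io, net/http, and strings imported). The only contract used is the Execute method:

type restExecutor struct {
    baseURL string
    client  *http.Client
}

func (e *restExecutor) Execute(ctx context.Context, calls []agent.ToolCall) ([]agent.ToolResult, error) {
    results := make([]agent.ToolResult, len(calls))
    for i, call := range calls {
        // Hypothetical convention: POST the call's raw JSON arguments to /tools/<name>.
        out, err := e.post(ctx, call.Name, string(call.Arguments))
        if err != nil {
            results[i] = agent.ToolResult{ID: call.ID, Name: call.Name, Status: "error", Content: []string{err.Error()}}
            continue
        }
        results[i] = agent.ToolResult{ID: call.ID, Name: call.Name, Status: "success", Content: []string{out}}
    }
    return results, nil
}

func (e *restExecutor) post(ctx context.Context, name, args string) (string, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodPost, e.baseURL+"/tools/"+name, strings.NewReader(args))
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", "application/json")
    resp, err := e.client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("tool %s: HTTP %d", name, resp.StatusCode)
    }
    return string(body), nil
}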


📡 Observability — the Event Stream

Run() returns three values:

result, events, err := a.Run(ctx, question)
//        ^^^^^^
//        rxgo.Observable — a replayable stream of everything the agent did
🌊 What gets emitted

timeline
    title Events emitted during one agent run
    RunStart    : 🚀 RunStartEvent
    Step 1      : 📝 StepStartEvent
                : 🧠 LLMCallEvent (Think)
                : 🔧 ToolExecEvent (Act)
                : 📝 StepEndEvent
    Step 2      : 📝 StepStartEvent
                : 🧠 LLMCallEvent (Think)
                : 🔧 ToolExecEvent (Act)
                : 📝 StepEndEvent
    Final step  : 📝 StepStartEvent
                : 🧠 LLMCallEvent (final answer — no tool call)
                : 📝 StepEndEvent
    RunEnd      : 🏁 RunEndEvent (carries *Result)

Event reference

Event           Payload highlights         When emitted
RunStartEvent   RunID, UserMessage         Before the loop begins
StepStartEvent  Step number                At the start of each Think→Act cycle
LLMCallEvent    Latency, Err               After every Generate() call
ToolExecEvent   ToolNames, Latency, Err    After every Execute() batch
StepEndEvent    Step number                At the end of each Think→Act cycle
RunEndEvent     *Result, Err               On completion or error

Consuming events
result, events, err := a.Run(ctx, question)
if err != nil {
    log.Fatal(err)
}

// 🔭 Subscribe — cold observable, safe to call multiple times (full replay each time)
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.RunStartEvent:
        slog.Info("๐Ÿš€ agent started", "run_id", e.RunID, "question", e.Question)
    case agent.LLMCallEvent:
        slog.Info("๐Ÿง  llm call", "latency_ms", e.Latency.Milliseconds())
    case agent.ToolExecEvent:
        slog.Info("๐Ÿ”ง tool exec", "tools", e.ToolNames, "latency_ms", e.Latency.Milliseconds())
    case agent.RunEndEvent:
        slog.Info("๐Ÿ run finished", "err", e.Err)
    }
}

fmt.Println(result.Output)

🧊 Cold & replayable — the observable uses rxgo.Defer. Nothing is emitted until you call Observe(). Each Observe() call replays all events from scratch, so two separate subscribers (e.g. a logger and a metrics exporter) each see the full picture independently. A sketch of that two-subscriber pattern follows; only the event types and their documented fields (Step, Latency, ToolNames) come from the package, the aggregation is ours:
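
// First pass: structured logging.
for item := range events.Observe() {
    if e, ok := item.V.(agent.LLMCallEvent); ok {
        slog.Info("llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
    }
}

// Second pass: aggregate latency metrics. Observe() replays the whole run,
// so this loop sees every event again, independently of the first pass.
var llmTotal, toolTotal time.Duration
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        llmTotal += e.Latency
    case agent.ToolExecEvent:
        toolTotal += e.Latency
    }
}
fmt.Printf("time in LLM: %s, time in tools: %s\n", llmTotal, toolTotal)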


๐Ÿ•น๏ธ Manual Step Control

Step is exported so you can drive the loop yourself — useful for streaming UI updates, checkpointing long runs, or human-in-the-loop interrupts:

execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", agent.Message{Role: "user", Content: "Plan a 3-day trip to Kyoto"})

for execCtx.CurrentStep() < 15 {
    if err := a.Step(ctx, execCtx); err != nil {
        break
    }

    // 🔍 Inspect what just happened before the next step
    latest := execCtx.Events()[len(execCtx.Events())-1]
    fmt.Printf("Step %d: %s said something\n", execCtx.CurrentStep(), latest.Author)

    // 🛑 Human-in-the-loop: pause and ask for approval
    if needsApproval(latest) {
        if !getUserApproval() {
            break
        }
    }

    // ✅ Stop once the agent has produced its final answer
    if execCtx.Done() {
        break
    }

    execCtx.IncrementStep()
}

๐Ÿ” Inspecting the Reasoning Trail

Every run keeps a full, ordered history of messages, tool calls, and tool results in result.Context:

for _, event := range result.Context.Events() {
    fmt.Printf("[%s] at %s\n", event.Author, event.Timestamp.Format(time.RFC3339))
    for _, item := range event.Content {
        switch v := item.(type) {
        case agent.Message:
            fmt.Printf("  ๐Ÿ’ฌ message: %s\n", v.Content)
        case agent.ToolCall:
            fmt.Printf("  ๐Ÿ”ง tool_call: %s(%s)\n", v.Name, v.Arguments)
        case agent.ToolResult:
            fmt.Printf("  ๐Ÿ“Š tool_result: [%s] %v\n", v.Status, v.Content)
        }
    }
}

💡 Use this for debugging, audit logs, or displaying the agent's "chain of thought" to end users.
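
As a sketch of the audit-log idea, the snippet below flattens the trail into one JSON object per content item. The record shape and field names are ours, not part of react-agent; it assumes encoding/json, log, os, and time are imported:

enc := json.NewEncoder(os.Stdout) // swap for a file or log shipper
for _, event := range result.Context.Events() {
    for _, item := range event.Content {
        rec := map[string]any{
            "author": event.Author,
            "at":     event.Timestamp.Format(time.RFC3339),
        }
        switch v := item.(type) {
        case agent.Message:
            rec["type"], rec["text"] = "message", v.Content
        case agent.ToolCall:
            rec["type"], rec["tool"], rec["args"] = "tool_call", v.Name, v.Arguments
        case agent.ToolResult:
            rec["type"], rec["tool"], rec["status"] = "tool_result", v.Name, v.Status
        }
        if err := enc.Encode(rec); err != nil {
            log.Fatal(err)
        }
    }
}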


๐Ÿ›๏ธ Design Reference

classDiagram
    class Agent {
        +Run(ctx, question) Result, Observable, error
        +Step(ctx, execCtx) error
        +Think(ctx, execCtx) Response, error
        +Act(ctx, execCtx, calls) error
        +WithInstructions(string) Agent
        +WithMaxSteps(int) Agent
    }

    class LLMClient {
        <<interface>>
        +Generate(ctx, req) Response, error
    }

    class ToolExecutor {
        <<interface>>
        +Execute(ctx, calls) ToolResults, error
    }

    class ExecutionContext {
        +Events() []Event
        +AddEvent(author, items)
        +CurrentStep int
        +IncrementStep()
    }

    class AgentEvent {
        <<interface sealed>>
        RunStartEvent
        StepStartEvent
        LLMCallEvent
        ToolExecEvent
        StepEndEvent
        RunEndEvent
    }

    Agent --> LLMClient : uses
    Agent --> ToolExecutor : uses
    Agent --> ExecutionContext : owns
    Agent ..> AgentEvent : emits via Observable
Type              Role
Agent             🤖 Orchestrator — owns the loop
ExecutionContext  📋 Mutable run state — thread-safe event log
Event             📝 Timestamped history entry (author + content items)
LLMClient         🧠 Interface — swap any provider
ToolExecutor      🔧 Interface — bring your own dispatch strategy
LiteLLMClient     🔌 Concrete adapter for openai-go / LiteLLM proxy
AgentEvent        📡 Sealed sum type — emitted on the observable stream

License

Apache 2.0

Documentation ¶

Overview ¶

Package agent implements the ReAct (Reason + Act) pattern for AI agents.

A ReAct agent runs a bounded Think → Act → Observe loop: the model thinks (generates a response), acts (calls tools), and observes (results are appended to the history), repeating until it produces a final answer or exhausts the step limit.

The pattern is based on "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022 — https://arxiv.org/abs/2210.03629).

Building an agent ¶

Use the fluent builder to compose an agent from an LLM client, tool definitions, and a tool executor:

a := agent.New(client, toolDefs, executor).
         WithInstructions("You are a precise research assistant.").
         WithMaxSteps(15)

Running an agent ¶

Agent.Run executes the full loop for a single user question. It returns a Result, a replayable rxgo.Observable of AgentEvent values, and any error:

result, events, err := a.Run(ctx, "Who won the 2025 Nobel Prize in Physics?")
if err != nil {
    log.Fatal(err)
}
fmt.Println(result.Output)

Observable event stream ¶

The returned observable is a cold, replayable stream of everything that happened during the run. Subscribe by calling Observe():

for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
    case agent.ToolExecEvent:
        slog.Info("tool exec", "tools", e.ToolNames)
    case agent.RunEndEvent:
        slog.Info("run end", "err", e.Err)
    }
}

Calling Observe() again replays all events from the beginning — safe for multiple independent subscribers (loggers, metrics, tracing).

Execution history ¶

The full conversation is available via result.Context.Events(). Each model.Event has an Author ("user", "agent", or "tools"), a timestamp, and typed model.ContentItem values (model.Message, model.ToolCall, model.ToolResult).

Bringing your own tools ¶

Implement model.ToolExecutor to connect any tool-running backend:

type myExecutor struct{ /* your registry, MCP session, etc. */ }

func (e *myExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, call := range calls {
        out, err := e.dispatch(ctx, call.Name, call.Arguments)
        if err != nil {
            results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "error", Content: []string{err.Error()}}
            continue
        }
        results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "success", Content: []string{out}}
    }
    return results, nil
}

For MCP-based tools (github.com/v8tix/mcp-toolkit), use the ready-made adapter in the [mcpadapter] sub-package.

Manual step control ¶

Agent.Step is exported so callers can drive the loop themselves — useful for streaming, checkpointing, or human-in-the-loop interrupts:

execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", model.Message{Role: "user", Content: question})

for execCtx.CurrentStep() < 20 {
    if err := a.Step(ctx, execCtx); err != nil {
        break
    }
    if execCtx.Done() {
        break
    }
    execCtx.IncrementStep()
}

Index ¶

Examples ¶

Constants ¶

This section is empty.

Variables ¶

var ErrMaxStepsReached = errors.New("agent: max steps reached without final answer")

ErrMaxStepsReached is returned when Run exhausts maxSteps without a final answer.
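
A typical caller can separate the step-limit case from other failures with errors.Is. A minimal sketch; whether Run returns a partial Result alongside this error is not documented, so treat result as unusable when err is non-nil:

result, _, err := a.Run(ctx, question)
switch {
case errors.Is(err, agent.ErrMaxStepsReached):
    // The agent ran out of steps before producing a final answer;
    // retry with a higher WithMaxSteps limit, or report the failure.
    log.Printf("step limit hit: %v", err)
case err != nil:
    log.Fatal(err)
default:
    fmt.Println(result.Output)
}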

Functions ¶

This section is empty.

Types ¶

type Agent ¶

type Agent struct {
	// contains filtered or unexported fields
}

Agent is the ReAct orchestrator. It runs a Think โ†’ Act โ†’ Observe loop until the LLM produces a final answer or maxSteps is exhausted.

func New ¶

func New(client LLMClient, defs []model.ToolDefinition, executor model.ToolExecutor) *Agent

New creates an Agent with sensible defaults (maxSteps=10).

  • defs: tool definitions the LLM can call (pass nil or empty for no tools)
  • executor: executes tool calls concurrently (pass nil when defs is empty)

Use the builder methods to customise the agent:

agent.New(client, defs, executor).
    WithInstructions("You are helpful.").
    WithMaxSteps(15)
Example ¶

ExampleNew shows how to construct an agent with the fluent builder. Replace demoLLM with agent.NewLiteLLMClient(openaiClient, model) to target a real LLM.

package main

import (
	"context"
	"fmt"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
	results := make([]model.ToolResult, len(calls))
	for i, c := range calls {
		results[i] = model.ToolResult{
			ID:      c.ID,
			Name:    c.Name,
			Status:  "success",
			Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
		}
	}
	return results, nil
}

func main() {
	// Hypothetical: ask an assistant to look up a stock price.
	llm := &demoLLM{} // swap for agent.NewLiteLLMClient(...)

	defs := []model.ToolDefinition{
		{
			Name:        "search_web",
			Description: "Search the web for up-to-date information",
			Parameters: map[string]any{
				"type": "object",
				"properties": map[string]any{
					"query": map[string]any{"type": "string"},
				},
				"required": []string{"query"},
			},
		},
	}

	_ = agent.New(llm, defs, demoExecutor{}).
		WithInstructions("You are a precise research assistant.").
		WithMaxSteps(15)
}

func (*Agent) Act ¶

func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error

Act executes all requested tool calls via ToolExecutor and records the results. The agent's tool-call decision is appended as an "agent" event BEFORE execution, then tool results are appended as a "tools" event AFTER execution. Note: events are not emitted when calling Act directly; use Run for observability.
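
For callers driving the loop by hand, a minimal sketch of pairing Think with Act follows, assuming (as the package examples show) that tool-call decisions arrive as model.ToolCall items in the response's Content:

resp, err := a.Think(ctx, execCtx)
if err != nil {
    log.Fatal(err)
}

// Collect any tool calls the model requested in this step.
var calls []model.ToolCall
for _, item := range resp.Content {
    if tc, ok := item.(model.ToolCall); ok {
        calls = append(calls, tc)
    }
}

if len(calls) > 0 {
    // Act records the decision and the tool results on execCtx.
    if err := a.Act(ctx, execCtx, calls); err != nil {
        log.Fatal(err)
    }
}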

func (*Agent) Run ¶

func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)

Run executes the full ReAct loop for a single user message. It returns a Result, a replayable Observable of AgentEvents, and any error.

The Observable is a cold, replayable stream: each call to Observe() replays all events from the completed run. It is safe for multiple subscribers.

Example ¶

ExampleAgent_Run demonstrates a single-step run where the LLM answers directly without calling any tools.

package main

import (
	"context"
	"fmt"
	"log"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

func main() {
	llm := &demoLLM{
		responses: []model.Response{
			{Content: []model.ContentItem{
				model.Message{Role: "assistant", Content: "The capital of France is Paris."},
			}},
		},
	}

	a := agent.New(llm, nil, nil).
		WithInstructions("You are a helpful assistant.")

	result, _, err := a.Run(context.Background(), "What is the capital of France?")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(result.Output)
	fmt.Println(result.ToolCalled)
}
Output:
The capital of France is Paris.
false
Example (EventStream) ¶

ExampleAgent_Run_eventStream shows how to consume the observable event stream returned by Run to build logging, metrics, or tracing.

The observable is cold and replayable — calling Observe() again replays all events from the beginning, safe for multiple independent subscribers.

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
	results := make([]model.ToolResult, len(calls))
	for i, c := range calls {
		results[i] = model.ToolResult{
			ID:      c.ID,
			Name:    c.Name,
			Status:  "success",
			Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
		}
	}
	return results, nil
}

func main() {
	llm := &demoLLM{
		responses: []model.Response{
			{Content: []model.ContentItem{
				model.ToolCall{ID: "c1", Name: "search_web", Arguments: json.RawMessage(`{}`)},
			}},
			{Content: []model.ContentItem{
				model.Message{Role: "assistant", Content: "Done."},
			}},
		},
	}

	defs := []model.ToolDefinition{{
		Name:       "search_web",
		Parameters: map[string]any{"type": "object", "properties": map[string]any{"query": map[string]any{"type": "string"}}, "required": []string{"query"}},
	}}

	a := agent.New(llm, defs, demoExecutor{})

	_, events, err := a.Run(context.Background(), "What is the weather in Paris?")
	if err != nil {
		log.Fatal(err)
	}

	for item := range events.Observe() {
		switch e := item.V.(type) {
		case agent.RunStartEvent:
			fmt.Println("run started")
		case agent.LLMCallEvent:
			fmt.Printf("llm call step=%d\n", e.Step)
		case agent.ToolExecEvent:
			fmt.Printf("tool exec: %v\n", e.ToolNames)
		case agent.RunEndEvent:
			fmt.Println("run ended")
		}
	}
}
Output:
run started
llm call step=0
tool exec: [search_web]
llm call step=1
run ended
Example (ReasoningTrail) ¶

ExampleAgent_Run_reasoningTrail shows how to inspect the full conversation history — every message, tool call, and tool result — after a run. Useful for debugging, audit logs, or displaying the chain of thought.

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
	results := make([]model.ToolResult, len(calls))
	for i, c := range calls {
		results[i] = model.ToolResult{
			ID:      c.ID,
			Name:    c.Name,
			Status:  "success",
			Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
		}
	}
	return results, nil
}

func main() {
	llm := &demoLLM{
		responses: []model.Response{
			{Content: []model.ContentItem{
				model.ToolCall{ID: "c1", Name: "lookup", Arguments: json.RawMessage(`{"id":"42"}`)},
			}},
			{Content: []model.ContentItem{
				model.Message{Role: "assistant", Content: "Found it."},
			}},
		},
	}

	defs := []model.ToolDefinition{{
		Name:       "lookup",
		Parameters: map[string]any{"type": "object", "properties": map[string]any{"id": map[string]any{"type": "string"}}, "required": []string{"id"}},
	}}

	result, _, err := agent.New(llm, defs, demoExecutor{}).
		Run(context.Background(), "Look up record 42.")
	if err != nil {
		log.Fatal(err)
	}

	for _, event := range result.Context.Events() {
		for _, item := range event.Content {
			switch v := item.(type) {
			case model.Message:
				fmt.Printf("[%s] %s\n", event.Author, v.Content)
			case model.ToolCall:
				fmt.Printf("[%s] call %s\n", event.Author, v.Name)
			case model.ToolResult:
				fmt.Printf("[%s] result %s=%s\n", event.Author, v.Name, v.Content[0])
			}
		}
	}
}
Output:
[user] Look up record 42.
[agent] call lookup
[tools] result lookup=result_of_lookup
[agent] Found it.
Example (WithTools) ¶

ExampleAgent_Run_withTools demonstrates a two-step run: the LLM first calls a tool, then produces a final answer once it has the search result.

This mirrors the classic ReAct scenario — the agent reasons about what information it needs, fetches it, then synthesises an answer.

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
	results := make([]model.ToolResult, len(calls))
	for i, c := range calls {
		results[i] = model.ToolResult{
			ID:      c.ID,
			Name:    c.Name,
			Status:  "success",
			Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
		}
	}
	return results, nil
}

func main() {
	llm := &demoLLM{
		responses: []model.Response{
			// Step 1: LLM decides to call search_web
			{Content: []model.ContentItem{
				model.ToolCall{
					ID:        "call_1",
					Name:      "search_web",
					Arguments: json.RawMessage(`{"query":"AAPL stock price January 9 2007"}`),
				},
			}},
			// Step 2: LLM reads the search result and gives the final answer
			{Content: []model.ContentItem{
				model.Message{Role: "assistant", Content: "Apple stock was $11.74 on January 9, 2007."},
			}},
		},
	}

	defs := []model.ToolDefinition{{
		Name:        "search_web",
		Description: "Search the web for current information",
		Parameters: map[string]any{
			"type":     "object",
			"required": []string{"query"},
			"properties": map[string]any{
				"query": map[string]any{"type": "string"},
			},
		},
	}}

	a := agent.New(llm, defs, demoExecutor{}).
		WithInstructions("You are a research assistant. Verify facts before answering.")

	result, _, err := a.Run(context.Background(), "What was Apple's stock price the day the iPhone was announced?")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(result.Output)
	fmt.Println("tool called:", result.ToolCalled)
}
Output:
Apple stock was $11.74 on January 9, 2007.
tool called: true

func (*Agent) Step ¶

func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error

Step executes one Think → (optionally) Act cycle, mutating execCtx in place. It is exported so callers can drive the loop manually for checkpointing or human-in-the-loop interrupts. Use execCtx.Done() to check for a final answer. Note: events are not emitted when calling Step directly; use Run for observability.

Example ¶

ExampleAgent_Step shows how to drive the ReAct loop manually step-by-step. This gives you control between steps — useful for streaming output to a UI, checkpointing long runs, or pausing for human approval before the agent acts.

package main

import (
	"context"
	"fmt"
	"log"

	agent "github.com/v8tix/react-agent"
	"github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
	responses []model.Response
	n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
	resp := d.responses[d.n]
	d.n++
	return resp, nil
}

func main() {
	llm := &demoLLM{
		responses: []model.Response{
			{Content: []model.ContentItem{
				model.Message{Role: "assistant", Content: "The answer is 42."},
			}},
		},
	}

	a := agent.New(llm, nil, nil).WithMaxSteps(10)

	execCtx := agent.NewExecutionContextForTest()
	execCtx.AddEvent("user", model.Message{Role: "user", Content: "What is the answer to life, the universe and everything?"})

	for execCtx.CurrentStep() < 10 {
		if err := a.Step(context.Background(), execCtx); err != nil {
			log.Fatal(err)
		}
		if execCtx.Done() {
			break
		}
		execCtx.IncrementStep()
	}

	answer, _ := execCtx.FinalResult()
	fmt.Println(answer)
	fmt.Println("done:", execCtx.Done())
}
Output:
The answer is 42.
done: true

func (*Agent) Think ¶

func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)

Think calls the LLM with the current execution context and returns its response. Note: events are not emitted when calling Think directly; use Run for observability.

func (*Agent) WithInstructions ¶

func (a *Agent) WithInstructions(s string) *Agent

WithInstructions sets the system prompt sent on every LLM request.

func (*Agent) WithMaxSteps ¶

func (a *Agent) WithMaxSteps(n int) *Agent

WithMaxSteps overrides the default step limit (10). Panics if n < 1 — zero or negative steps is a programming error.

type AgentEvent ¶

type AgentEvent interface {
	// contains filtered or unexported methods
}

AgentEvent is the sealed sum type for all agent lifecycle events. Use a type switch to handle specific event types:

for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "latency_ms", e.Latency.Milliseconds())
    case agent.RunEndEvent:
        fmt.Println(e.Result.Output)
    }
}

type ExecutionContext ¶

type ExecutionContext struct {
	// contains filtered or unexported fields
}

ExecutionContext is the central mutable state for one agent run. It records all Events across steps and holds the final result once the agent produces a terminal response. All public methods are safe for concurrent use.
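
Since the accessors are safe for concurrent use, a watcher goroutine can poll a live run while another goroutine drives the loop. A minimal sketch; the polling interval and output format are ours, and a real watcher would also need a stop condition for runs that fail before Done() becomes true:

done := make(chan struct{})
go func() {
    defer close(done)
    for !execCtx.Done() {
        // Both reads are documented as concurrency-safe.
        fmt.Printf("step %d, %d events so far\n", execCtx.CurrentStep(), len(execCtx.Events()))
        time.Sleep(200 * time.Millisecond)
    }
}()
// ... drive the loop with a.Step(ctx, execCtx) on this goroutine ...
<-done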

func NewExecutionContextForTest ¶

func NewExecutionContextForTest() *ExecutionContext

NewExecutionContextForTest exposes newExecutionContext for white-box unit tests.

func (*ExecutionContext) AddEvent ¶

func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)

AddEvent appends an event authored by author with the given content items. ID and Timestamp are generated automatically. Safe for concurrent use.

func (*ExecutionContext) CurrentStep ¶

func (ec *ExecutionContext) CurrentStep() int

CurrentStep returns the current step index. Safe for concurrent use.

func (*ExecutionContext) Done ¶

func (ec *ExecutionContext) Done() bool

Done reports whether the agent has produced a final answer. Safe for concurrent use.

func (*ExecutionContext) Events ¶

func (ec *ExecutionContext) Events() []model.Event

Events returns a defensive copy of the event log. Each Event's Content slice is independently copied so callers cannot corrupt internal state by mutating returned slices. Safe for concurrent use.

func (*ExecutionContext) FinalResult ¶

func (ec *ExecutionContext) FinalResult() (string, bool)

FinalResult returns the agent's final answer and true once Done() is true. Returns ("", false) if the agent has not finished yet. Safe for concurrent use.

func (*ExecutionContext) GetState ¶

func (ec *ExecutionContext) GetState(key string) (any, bool)

GetState retrieves a value from the run-scoped key-value store. Safe for concurrent use.

func (*ExecutionContext) ID ¶

func (ec *ExecutionContext) ID() string

ID returns the unique identifier for this execution. Safe for concurrent use.

func (*ExecutionContext) IncrementStep ¶

func (ec *ExecutionContext) IncrementStep()

IncrementStep advances the step counter by one. Safe for concurrent use.

func (*ExecutionContext) SetState ¶

func (ec *ExecutionContext) SetState(key string, value any)

SetState stores a value in the run-scoped key-value store. Safe for concurrent use.
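
Neither GetState nor SetState appears in the package examples, so here is a hedged sketch of run-scoped bookkeeping between manual steps; the key name and budget logic are purely illustrative:

const budgetKey = "search_budget" // our own key, not used by the library

execCtx.SetState(budgetKey, 3)

// Between steps, check and decrement the remaining budget:
if v, ok := execCtx.GetState(budgetKey); ok {
    remaining := v.(int)
    if remaining == 0 {
        return // or break out of the manual Step loop
    }
    execCtx.SetState(budgetKey, remaining-1)
}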

type LLMCallEvent ¶

type LLMCallEvent struct {
	RunID   string
	Step    int
	Latency time.Duration
	Err     error
}

LLMCallEvent is emitted after every Generate() call, including on error. Latency covers only the LLM network round-trip. The full request and response content are available via result.Context.Events().

type LLMClient ¶

type LLMClient interface {
	Generate(ctx context.Context, req model.Request) (model.Response, error)
}

LLMClient abstracts communication with a language model. Implement this interface to support any LLM provider.
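
Because the interface is a single method, cross-cutting concerns compose naturally as decorators. A sketch of a retry wrapper; the backoff policy is illustrative, and only Generate itself comes from the package:

type retryClient struct {
    inner   agent.LLMClient
    retries int
}

func (c *retryClient) Generate(ctx context.Context, req model.Request) (model.Response, error) {
    var lastErr error
    for attempt := 0; attempt <= c.retries; attempt++ {
        resp, err := c.inner.Generate(ctx, req)
        if err == nil {
            return resp, nil
        }
        lastErr = err
        // Simple linear backoff; respect cancellation between attempts.
        select {
        case <-ctx.Done():
            return model.Response{}, ctx.Err()
        case <-time.After(time.Duration(attempt+1) * 500 * time.Millisecond):
        }
    }
    return model.Response{}, lastErr
}

Pass it anywhere an LLMClient is accepted, e.g. agent.New(&retryClient{inner: client, retries: 2}, defs, executor).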

type LiteLLMClient ¶

type LiteLLMClient struct {
	// contains filtered or unexported fields
}

LiteLLMClient adapts the openai-go client to the LLMClient interface. Works with OpenAI directly or with a LiteLLM proxy (same API surface).

func NewLiteLLMClient ¶

func NewLiteLLMClient(client *openai.Client, model openai.ChatModel) *LiteLLMClient

NewLiteLLMClient creates a LiteLLMClient wrapping the provided openai-go client.
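
To target a LiteLLM proxy rather than OpenAI directly, a sketch: point the underlying openai-go client at the proxy via its option package (option.WithBaseURL, option.WithAPIKey from github.com/openai/openai-go/option). The URL and environment variable are placeholders:

openaiClient := openai.NewClient(
    option.WithBaseURL("http://localhost:4000/"), // your LiteLLM proxy
    option.WithAPIKey(os.Getenv("LITELLM_API_KEY")),
)
// Note: openai-go versions differ on whether NewClient returns a value or a
// pointer; take its address if needed to satisfy the *openai.Client parameter.
client := agent.NewLiteLLMClient(openaiClient, "gpt-4o-mini")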

func (*LiteLLMClient) Generate ¶

func (c *LiteLLMClient) Generate(ctx context.Context, req model.Request) (model.Response, error)

Generate translates a Request into an OpenAI chat completion and maps the response back to ContentItem types.

type Result ¶

type Result struct {
	// Output is the final answer produced by the LLM.
	Output string
	// ToolCalled reports whether at least one tool was invoked during the run.
	ToolCalled bool
	// Context is the full execution history for this run.
	Context *ExecutionContext
}

Result is the output of a successful Agent.Run() call.

type RunEndEvent ¶

type RunEndEvent struct {
	RunID  string
	Result *Result
	Err    error
}

RunEndEvent is emitted exactly once when Run returns, whether it succeeded or failed. Result is nil when Err is non-nil.

type RunStartEvent ¶

type RunStartEvent struct {
	RunID       string
	UserMessage string
}

RunStartEvent is emitted once before the ReAct loop begins.

type StepEndEvent ¶

type StepEndEvent struct {
	RunID string
	Step  int
	Err   error
}

StepEndEvent is emitted after each Think→Act cycle. Err is non-nil when the step failed.

type StepStartEvent ¶

type StepStartEvent struct {
	RunID string
	Step  int
}

StepStartEvent is emitted at the beginning of each Think→Act cycle.

type ToolExecEvent ¶

type ToolExecEvent struct {
	RunID     string
	Step      int
	ToolNames []string
	Latency   time.Duration
	Err       error
}

ToolExecEvent is emitted after every executor.Execute() batch. ToolNames lists the names of tools that were called. Latency is 0 when the executor is nil.

Directories ¶

Path          Synopsis
mcpadapter    Package mcpadapter bridges github.com/v8tix/mcp-toolkit with react-agent.
model         Package model contains the core data types shared across react-agent and its sub-packages.
