Documentation
Overview ¶
Package agent implements the ReAct (Reason + Act) pattern for AI agents.
A ReAct agent runs a bounded Think → Act → Observe loop: the model thinks (generates a response), acts (calls tools), and observes (results are appended to the history), repeating until it produces a final answer or exhausts the step limit.
The pattern is based on "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022 — https://arxiv.org/abs/2210.03629).
Building an agent ¶
Use the fluent builder to compose an agent from an LLM client, tool definitions, and a tool executor:
a := agent.New(client, toolDefs, executor).
    WithInstructions("You are a precise research assistant.").
    WithMaxSteps(15)
Running an agent ¶
Agent.Run executes the full loop for a single user question. It returns a Result, a replayable rxgo.Observable of AgentEvent values, and any error:
result, events, err := a.Run(ctx, "Who won the 2025 Nobel Prize in Physics?")
if err != nil {
    log.Fatal(err)
}
fmt.Println(result.Output)
Observable event stream ¶
The returned observable is a cold, replayable stream of everything that happened during the run. Subscribe by calling Observe():
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
    case agent.ToolExecEvent:
        slog.Info("tool exec", "tools", e.ToolNames)
    case agent.RunEndEvent:
        slog.Info("run end", "err", e.Err)
    }
}
Calling Observe() again replays all events from the beginning — safe for multiple independent subscribers (loggers, metrics, tracing).
Execution history ¶
The full conversation is available via result.Context.Events(). Each model.Event has an Author ("user", "agent", or "tools"), a timestamp, and typed model.ContentItem values (model.Message, model.ToolCall, model.ToolResult).
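For example, the typed content items make it easy to pull just the tool results out of the history (a sketch against the types listed above, assuming a completed result from Agent.Run):

```go
// Collect every tool result recorded during the run.
var toolResults []model.ToolResult
for _, event := range result.Context.Events() {
    for _, item := range event.Content {
        if tr, ok := item.(model.ToolResult); ok {
            toolResults = append(toolResults, tr)
        }
    }
}
```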
Bringing your own tools ¶
Implement model.ToolExecutor to connect any tool-running backend:
type myExecutor struct{ /* your registry, MCP session, etc. */ }

func (e *myExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, call := range calls {
        out, err := e.dispatch(ctx, call.Name, call.Arguments)
        if err != nil {
            results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "error", Content: []string{err.Error()}}
            continue
        }
        results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "success", Content: []string{out}}
    }
    return results, nil
}
For MCP-based tools (github.com/v8tix/mcp-toolkit/v2), use the ready-made adapter in the [mcpadapter] sub-package.
Stateful sessions and approvals ¶
Use SessionRunner when a conversation must persist across multiple user-facing turns. It replays prior model.Event values from a SessionManager, runs the agent, and saves the updated state after each call. If a callback suspends the run, SessionRunner.Run returns StatusPending plus a pending interaction payload that your app can surface in a UI or API before resuming:
sessions := agent.NewInMemorySessionManager()
runner := agent.NewSessionRunner(
    agent.New(client, defs, executor).
        WithBeforeToolCallbacks(agent.NewConfirmationCallback(agent.StaticApprovalPolicy{
            "delete_file": {MessageTemplate: "Approve file deletion?"},
        })),
    sessions,
    8,
)

first, _ := runner.Run(ctx, "chat-1", "user-7", "My name is Alice")
next, _ := runner.Run(ctx, "chat-1", "user-7", "What's my name?")
_, _ = first, next
Approval callbacks use Suspend under the hood and can be resumed with Agent.Resume or SessionRunner.Resume. The built-in ConfirmationCallback also redacts sensitive tool arguments from the interaction payload.
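A suspend/resume round-trip with Agent.Run might be sketched as follows. This is a sketch, not package output: requestID stands in for the pending request's ID that your app surfaced to the user, and the approval decision would come from your UI or API:

```go
_, _, err := a.Run(ctx, "Delete the old report file.")

var ir *agent.InteractionRequestedError
if errors.As(err, &ir) {
    approved := true // decision collected from your UI or API
    result, _, err := a.Resume(ctx, ir.Suspended, agent.InteractionResponse{
        RequestID: requestID, // ID of the pending InteractionRequest shown to the user
        Approved:  &approved,
    })
    _, _ = result, err
}
```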
Request mutation and context memory ¶
MutatingLLMClient lets you rewrite a request immediately before it is sent to the underlying LLMClient. This is the extension point for prompt hygiene, context-window management, and memory injection.
Common building blocks:
- ContextOptimizer applies one or more OptimizationStrategy values once a TokenCounter threshold is exceeded.
- SlidingWindowStrategy preserves the latest user turn and a recent tail of events.
- CompactionStrategy replaces bulky tool payloads with short sanitized summaries.
- SummarizationStrategy moves older history into a generated summary in the instructions.
- WithMutatorLogger adds structured logs around any RequestMutator.
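A plausible wiring of these pieces (a sketch: counter stands for your TokenCounter implementation, baseClient for the real LLMClient, and the 8000-token threshold is illustrative):

```go
optimizer := agent.NewContextOptimizer(
    counter, // your TokenCounter implementation
    8000,    // apply strategies once a request exceeds ~8k tokens
    agent.NewCompactionStrategy(),
)
client := agent.NewMutatingLLMClient(baseClient, optimizer)
a := agent.New(client, defs, executor)
```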
Long-term task memory ¶
TaskMemoryManager stores solved tasks in a pluggable VectorStore so future requests can retrieve similar work. Pair it with MemoryInjector to inject the most relevant prior records into the prompt before each LLM call:
memories := agent.NewTaskMemoryManager(embedder, agent.NewInMemoryVectorStore(), agent.SimpleDuplicateChecker{})
clientWithMemory := agent.NewMutatingLLMClient(
    client,
    agent.NewMemoryInjector(memories, 3),
)
_, _, _ = memories, clientWithMemory, agent.New(clientWithMemory, defs, executor)
Manual step control ¶
Agent.Step is exported so callers can drive the loop themselves — useful for streaming, checkpointing, or human-in-the-loop interrupts:
execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", model.Message{Role: "user", Content: question})

for execCtx.CurrentStep() < 20 {
    if err := a.Step(ctx, execCtx); err != nil {
        break
    }
    if execCtx.Done() {
        break
    }
    execCtx.IncrementStep()
}
Index ¶
- Variables
- func Suspend(req InteractionRequest) error
- type AfterToolCallback
- type Agent
- func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
- func (a *Agent) Resume(ctx context.Context, suspended SuspendedRun, response InteractionResponse) (*Result, rxgo.Observable, error)
- func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
- func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
- func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)
- func (a *Agent) WithAfterToolCallbacks(callbacks ...AfterToolCallback) *Agent
- func (a *Agent) WithBeforeToolCallbacks(callbacks ...BeforeToolCallback) *Agent
- func (a *Agent) WithInstructions(s string) *Agent
- func (a *Agent) WithMaxSteps(n int) *Agent
- type AgentEvent
- type ApprovalPolicy
- type ApprovalRule
- type BeforeToolCallback
- type CallbackEvent
- type CallbackPhase
- type CallbackStage
- type CompactionStrategy
- type ConfirmationCallback
- type ContextOptimizer
- type DuplicateChecker
- type Embedder
- type ExecutionContext
- func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
- func (ec *ExecutionContext) CurrentStep() int
- func (ec *ExecutionContext) Done() bool
- func (ec *ExecutionContext) Events() []model.Event
- func (ec *ExecutionContext) FinalResult() (string, bool)
- func (ec *ExecutionContext) GetState(key string) (any, bool)
- func (ec *ExecutionContext) ID() string
- func (ec *ExecutionContext) IncrementStep()
- func (ec *ExecutionContext) InteractionResponse(requestID string) (InteractionResponse, bool)
- func (ec *ExecutionContext) PendingInteraction() (*InteractionRequest, bool)
- func (ec *ExecutionContext) SetState(key string, value any)
- type InMemorySessionManager
- func (m *InMemorySessionManager) Create(sessionID, userID string) (*Session, error)
- func (m *InMemorySessionManager) Get(sessionID string) (*Session, error)
- func (m *InMemorySessionManager) GetOrCreate(sessionID, userID string) (*Session, error)
- func (m *InMemorySessionManager) Save(session *Session) error
- type InMemoryVectorStore
- type InteractionRequest
- type InteractionRequestedError
- type InteractionRequestedEvent
- type InteractionResponse
- type InteractionResumedEvent
- type LLMCallEvent
- type LLMClient
- type LiteLLMClient
- type MemoryInjector
- type MemorySearcher
- type MutatingLLMClient
- type OptimizationStrategy
- type RequestMutator
- type RequestTokenCounter
- type Result
- type RunEndEvent
- type RunResult
- type RunStartEvent
- type RunStatus
- type Session
- type SessionManager
- type SessionRunner
- func (r *SessionRunner) Resume(ctx context.Context, sessionID, userID string, response InteractionResponse) (*RunResult, error)
- func (r *SessionRunner) Run(ctx context.Context, sessionID, userID, userInput string) (*RunResult, error)
- func (r *SessionRunner) WithLogger(logger *slog.Logger) *SessionRunner
- type SimpleDuplicateChecker
- type SlidingWindowStrategy
- type StaticApprovalPolicy
- type StepEndEvent
- type StepStartEvent
- type SummarizationStrategy
- type SummaryGenerator
- type SuspendedRun
- type TaskMemory
- type TaskMemoryManager
- type TokenCounter
- type ToolExecEvent
- type VectorDocument
- type VectorStore
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var ErrInteractionRequested = errors.New("agent: interaction requested")
ErrInteractionRequested signals that the agent suspended awaiting external input.
var ErrMaxStepsReached = errors.New("agent: max steps reached without final answer")
ErrMaxStepsReached is returned when Run exhausts maxSteps without a final answer.
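Callers can distinguish the step-limit failure from other errors with the standard errors.Is check (a sketch; a, ctx, and question come from the surrounding code):

```go
if _, _, err := a.Run(ctx, question); errors.Is(err, agent.ErrMaxStepsReached) {
    // the loop hit the step limit before the model produced a final answer
}
```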
Functions ¶
func Suspend ¶ added in v1.0.1
func Suspend(req InteractionRequest) error
Suspend requests external interaction from inside a callback.
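A before-tool callback might use Suspend like this (a sketch, not part of the package; the tool name "deploy" and the request ID scheme are illustrative):

```go
type pauseOnDeploy struct{}

func (pauseOnDeploy) BeforeTool(_ context.Context, _ *agent.ExecutionContext, call model.ToolCall) (*model.ToolResult, error) {
    if call.Name != "deploy" {
        return nil, nil // let the executor run the tool normally
    }
    return nil, agent.Suspend(agent.InteractionRequest{
        ID:         "approve-" + call.ID,
        Kind:       "approval",
        Prompt:     "Approve deployment?",
        ToolCallID: call.ID,
        ToolName:   call.Name,
    })
}
```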
Types ¶
type AfterToolCallback ¶ added in v1.0.1
type AfterToolCallback interface {
    AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
}
AfterToolCallback can replace a tool result after the executor (or a before-tool callback) produced it. Returning a non-nil ToolResult replaces the current result for that call.
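For example, a hypothetical callback that truncates oversized tool output before it re-enters the context (a sketch, not part of the package):

```go
type truncateResults struct{ max int }

func (t truncateResults) AfterTool(_ context.Context, _ *agent.ExecutionContext, result model.ToolResult) (*model.ToolResult, error) {
    for i, c := range result.Content {
        if len(c) > t.max {
            result.Content[i] = c[:t.max] + " …[truncated]"
        }
    }
    return &result, nil // non-nil: replaces the original result for this call
}
```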
type Agent ¶
type Agent struct {
    // contains filtered or unexported fields
}
Agent is the ReAct orchestrator. It runs a Think → Act → Observe loop until the LLM produces a final answer or maxSteps is exhausted.
func New ¶
func New(client LLMClient, defs []model.ToolDefinition, executor model.ToolExecutor) *Agent
New creates an Agent with sensible defaults (maxSteps=10).
- defs: tool definitions the LLM can call (pass nil or empty for no tools)
- executor: executes tool calls concurrently (pass nil when defs is empty)
Use the builder methods to customise the agent:
agent.New(client, defs, executor).
WithInstructions("You are helpful.").
WithMaxSteps(15)
Example ¶
ExampleNew shows how to construct an agent with the fluent builder. Replace demoLLM with agent.NewLiteLLMClient(openaiClient, model) to target a real LLM.
package main

import (
    "context"
    "fmt"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    // Hypothetical: ask an assistant to look up a stock price.
    llm := &demoLLM{} // swap for agent.NewLiteLLMClient(...)
    defs := []model.ToolDefinition{
        {
            Name:        "search_web",
            Description: "Search the web for up-to-date information",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "query": map[string]any{"type": "string"},
                },
                "required": []string{"query"},
            },
        },
    }
    _ = agent.New(llm, defs, demoExecutor{}).
        WithInstructions("You are a precise research assistant.").
        WithMaxSteps(15)
}
Output:
func (*Agent) Act ¶
func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
Act executes all requested tool calls via the ToolExecutor and records the results. The agent's tool-call decision is appended as an "agent" event BEFORE execution, then tool results are appended as a "tools" event AFTER execution. Note: events are not emitted when calling Act directly; use Run for observability.
func (*Agent) Resume ¶ added in v1.0.1
func (a *Agent) Resume(ctx context.Context, suspended SuspendedRun, response InteractionResponse) (*Result, rxgo.Observable, error)
Resume continues a suspended run after an external interaction response arrives.
func (*Agent) Run ¶
func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
Run executes the full ReAct loop for a single user message. It returns a Result, a replayable Observable of AgentEvents, and any error.
The Observable is a cold, replayable stream: each call to Observe() replays all events from the completed run. It is safe for multiple subscribers.
Example ¶
ExampleAgent_Run demonstrates a single-step run where the LLM answers directly without calling any tools.
package main

import (
    "context"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "The capital of France is Paris."},
            }},
        },
    }

    a := agent.New(llm, nil, nil).
        WithInstructions("You are a helpful assistant.")

    result, _, err := a.Run(context.Background(), "What is the capital of France?")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Output)
    fmt.Println(result.ToolCalled)
}
Output:
The capital of France is Paris.
false
Example (EventStream) ¶
ExampleAgent_Run_eventStream shows how to consume the observable event stream returned by Run to build logging, metrics, or tracing.
The observable is cold and replayable — calling Observe() again replays all events from the beginning, safe for multiple independent subscribers.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.ToolCall{ID: "c1", Name: "search_web", Arguments: json.RawMessage(`{}`)},
            }},
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Done."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:       "search_web",
        Parameters: map[string]any{"type": "object", "properties": map[string]any{"query": map[string]any{"type": "string"}}, "required": []string{"query"}},
    }}

    a := agent.New(llm, defs, demoExecutor{})
    _, events, err := a.Run(context.Background(), "What is the weather in Paris?")
    if err != nil {
        log.Fatal(err)
    }

    for item := range events.Observe() {
        switch e := item.V.(type) {
        case agent.RunStartEvent:
            fmt.Println("run started")
        case agent.LLMCallEvent:
            fmt.Printf("llm call step=%d\n", e.Step)
        case agent.ToolExecEvent:
            fmt.Printf("tool exec: %v\n", e.ToolNames)
        case agent.RunEndEvent:
            fmt.Println("run ended")
        }
    }
}
Output:
run started
llm call step=0
tool exec: [search_web]
llm call step=1
run ended
Example (ReasoningTrail) ¶
ExampleAgent_Run_reasoningTrail shows how to inspect the full conversation history — every message, tool call, and tool result — after a run. Useful for debugging, audit logs, or displaying the chain of thought.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.ToolCall{ID: "c1", Name: "lookup", Arguments: json.RawMessage(`{"id":"42"}`)},
            }},
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Found it."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:       "lookup",
        Parameters: map[string]any{"type": "object", "properties": map[string]any{"id": map[string]any{"type": "string"}}, "required": []string{"id"}},
    }}

    result, _, err := agent.New(llm, defs, demoExecutor{}).
        Run(context.Background(), "Look up record 42.")
    if err != nil {
        log.Fatal(err)
    }

    for _, event := range result.Context.Events() {
        for _, item := range event.Content {
            switch v := item.(type) {
            case model.Message:
                fmt.Printf("[%s] %s\n", event.Author, v.Content)
            case model.ToolCall:
                fmt.Printf("[%s] call %s\n", event.Author, v.Name)
            case model.ToolResult:
                fmt.Printf("[%s] result %s=%s\n", event.Author, v.Name, v.Content[0])
            }
        }
    }
}
Output:
[user] Look up record 42.
[agent] call lookup
[tools] result lookup=result_of_lookup
[agent] Found it.
Example (WithTools) ¶
ExampleAgent_Run_withTools demonstrates a two-step run: the LLM first calls a tool, then produces a final answer once it has the search result.
This mirrors the classic ReAct scenario — the agent reasons about what information it needs, fetches it, then synthesises an answer.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}

func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
    results := make([]model.ToolResult, len(calls))
    for i, c := range calls {
        results[i] = model.ToolResult{
            ID:      c.ID,
            Name:    c.Name,
            Status:  "success",
            Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
        }
    }
    return results, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            // Step 1: LLM decides to call search_web
            {Content: []model.ContentItem{
                model.ToolCall{
                    ID:        "call_1",
                    Name:      "search_web",
                    Arguments: json.RawMessage(`{"query":"AAPL stock price January 9 2007"}`),
                },
            }},
            // Step 2: LLM reads the search result and gives the final answer
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "Apple stock was $11.74 on January 9, 2007."},
            }},
        },
    }
    defs := []model.ToolDefinition{{
        Name:        "search_web",
        Description: "Search the web for current information",
        Parameters: map[string]any{
            "type":     "object",
            "required": []string{"query"},
            "properties": map[string]any{
                "query": map[string]any{"type": "string"},
            },
        },
    }}

    a := agent.New(llm, defs, demoExecutor{}).
        WithInstructions("You are a research assistant. Verify facts before answering.")

    result, _, err := a.Run(context.Background(), "What was Apple's stock price the day the iPhone was announced?")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result.Output)
    fmt.Println("tool called:", result.ToolCalled)
}
Output:
Apple stock was $11.74 on January 9, 2007.
tool called: true
func (*Agent) Step ¶
func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
Step executes one Think → (optionally) Act cycle, mutating execCtx in place. It is exported so callers can drive the loop manually for checkpointing or human-in-the-loop interrupts. Use execCtx.Done() to check for a final answer. Note: events are not emitted when calling Step directly; use Run for observability.
Example ¶
ExampleAgent_Step shows how to drive the ReAct loop manually step-by-step. This gives you control between steps — useful for streaming output to a UI, checkpointing long runs, or pausing for human approval before the agent acts.
package main

import (
    "context"
    "fmt"
    "log"

    agent "github.com/v8tix/react-agent"
    "github.com/v8tix/react-agent/model"
)

// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
    responses []model.Response
    n         int
}

func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
    resp := d.responses[d.n]
    d.n++
    return resp, nil
}

func main() {
    llm := &demoLLM{
        responses: []model.Response{
            {Content: []model.ContentItem{
                model.Message{Role: "assistant", Content: "The answer is 42."},
            }},
        },
    }
    a := agent.New(llm, nil, nil).WithMaxSteps(10)

    execCtx := agent.NewExecutionContextForTest()
    execCtx.AddEvent("user", model.Message{Role: "user", Content: "What is the answer to life, the universe and everything?"})

    for execCtx.CurrentStep() < 10 {
        if err := a.Step(context.Background(), execCtx); err != nil {
            log.Fatal(err)
        }
        if execCtx.Done() {
            break
        }
        execCtx.IncrementStep()
    }

    answer, _ := execCtx.FinalResult()
    fmt.Println(answer)
    fmt.Println("done:", execCtx.Done())
}
Output:
The answer is 42.
done: true
func (*Agent) Think ¶
func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)
Think calls the LLM with the current execution context and returns its response. Note: events are not emitted when calling Think directly; use Run for observability.
func (*Agent) WithAfterToolCallbacks ¶ added in v1.0.1
func (a *Agent) WithAfterToolCallbacks(callbacks ...AfterToolCallback) *Agent
WithAfterToolCallbacks appends tool callbacks that run after a tool result is produced.
func (*Agent) WithBeforeToolCallbacks ¶ added in v1.0.1
func (a *Agent) WithBeforeToolCallbacks(callbacks ...BeforeToolCallback) *Agent
WithBeforeToolCallbacks appends tool callbacks that run before executor dispatch.
func (*Agent) WithInstructions ¶
func (a *Agent) WithInstructions(s string) *Agent
WithInstructions sets the system prompt sent on every LLM request.
func (*Agent) WithMaxSteps ¶
func (a *Agent) WithMaxSteps(n int) *Agent
WithMaxSteps overrides the default step limit (10). Panics if n < 1; zero or negative steps is a programming error.
type AgentEvent ¶
type AgentEvent interface {
    // contains filtered or unexported methods
}
AgentEvent is the sealed sum type for all agent lifecycle events. Use a type switch to handle specific event types:
for item := range events.Observe() {
    switch e := item.V.(type) {
    case agent.LLMCallEvent:
        slog.Info("llm call", "latency_ms", e.Latency.Milliseconds())
    case agent.RunEndEvent:
        fmt.Println(e.Result.Output)
    }
}
type ApprovalPolicy ¶ added in v1.0.1
type ApprovalPolicy interface {
    RuleForTool(name string) (ApprovalRule, bool)
}
ApprovalPolicy decides whether a given tool requires human approval.
type ApprovalRule ¶ added in v1.0.1
ApprovalRule defines how a tool approval request should be presented and denied.
type BeforeToolCallback ¶ added in v1.0.1
type BeforeToolCallback interface {
    BeforeTool(ctx context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
}
BeforeToolCallback can short-circuit a tool call before the executor runs. Returning a non-nil ToolResult skips executor execution for that call.
type CallbackEvent ¶ added in v1.0.1
type CallbackEvent struct {
    RunID      string
    Step       int
    Phase      CallbackPhase
    Stage      CallbackStage
    Callback   string
    ToolCallID string
    ToolName   string
    Overrode   bool
    Latency    time.Duration
    Err        error
}
CallbackEvent is emitted before and after each callback invocation.
type CallbackPhase ¶ added in v1.0.1
type CallbackPhase string
CallbackPhase identifies which callback phase (before-tool or after-tool) emitted the event.
const (
    CallbackPhaseBeforeTool CallbackPhase = "before_tool"
    CallbackPhaseAfterTool  CallbackPhase = "after_tool"
)
type CallbackStage ¶ added in v1.0.1
type CallbackStage string
CallbackStage identifies whether the event was emitted before invoking the callback or after it returned.
const (
    CallbackStageStart  CallbackStage = "start"
    CallbackStageFinish CallbackStage = "finish"
)
type CompactionStrategy ¶ added in v1.0.1
type CompactionStrategy struct{}
CompactionStrategy replaces bulky tool payloads with short, sanitized summaries.
func NewCompactionStrategy ¶ added in v1.0.1
func NewCompactionStrategy() CompactionStrategy
NewCompactionStrategy creates a compaction optimizer for large tool payloads.
type ConfirmationCallback ¶ added in v1.0.1
type ConfirmationCallback struct {
    // contains filtered or unexported fields
}
ConfirmationCallback suspends execution until selected tools are explicitly approved.
func NewConfirmationCallback ¶ added in v1.0.1
func NewConfirmationCallback(policy ApprovalPolicy) ConfirmationCallback
NewConfirmationCallback creates a before-tool callback backed by an approval policy.
func (ConfirmationCallback) BeforeTool ¶ added in v1.0.1
func (c ConfirmationCallback) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
BeforeTool requests approval for matching tools and redacts sensitive arguments in the payload.
func (ConfirmationCallback) WithLogger ¶ added in v1.0.1
func (c ConfirmationCallback) WithLogger(logger *slog.Logger) ConfirmationCallback
WithLogger attaches structured approval lifecycle logs.
type ContextOptimizer ¶ added in v1.0.1
type ContextOptimizer struct {
    // contains filtered or unexported fields
}
ContextOptimizer applies a list of optimization strategies once a token threshold has been exceeded.
func NewContextOptimizer ¶ added in v1.0.1
func NewContextOptimizer(counter TokenCounter, threshold int, strategies ...OptimizationStrategy) *ContextOptimizer
NewContextOptimizer builds a request mutator that conditionally applies optimization strategies.
func (*ContextOptimizer) Mutate ¶ added in v1.0.1
Mutate applies optimization strategies when the request exceeds the configured threshold.
func (*ContextOptimizer) WithLogger ¶ added in v1.0.1
func (o *ContextOptimizer) WithLogger(logger *slog.Logger) *ContextOptimizer
WithLogger attaches structured optimization logs.
type DuplicateChecker ¶ added in v1.0.1
type DuplicateChecker interface {
    IsDuplicate(memory TaskMemory, existing []TaskMemory) bool
}
DuplicateChecker decides whether a memory is already represented in the store.
type Embedder ¶ added in v1.0.1
Embedder converts one or more texts into vectors for semantic search.
type ExecutionContext ¶
type ExecutionContext struct {
    // contains filtered or unexported fields
}
ExecutionContext is the central mutable state for one agent run. It records all Events across steps and holds the final result once the agent produces a terminal response. All public methods are safe for concurrent use.
func NewExecutionContextForTest ¶
func NewExecutionContextForTest() *ExecutionContext
NewExecutionContextForTest exposes newExecutionContext for white-box unit tests.
func (*ExecutionContext) AddEvent ¶
func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
AddEvent appends an event authored by author with the given content items. ID and Timestamp are generated automatically. Safe for concurrent use.
func (*ExecutionContext) CurrentStep ¶
func (ec *ExecutionContext) CurrentStep() int
CurrentStep returns the current step index. Safe for concurrent use.
func (*ExecutionContext) Done ¶
func (ec *ExecutionContext) Done() bool
Done reports whether the agent has produced a final answer. Safe for concurrent use.
func (*ExecutionContext) Events ¶
func (ec *ExecutionContext) Events() []model.Event
Events returns a defensive copy of the event log. Each Event's Content slice is independently copied so callers cannot corrupt internal state by mutating returned slices. Safe for concurrent use.
func (*ExecutionContext) FinalResult ¶
func (ec *ExecutionContext) FinalResult() (string, bool)
FinalResult returns the agent's final answer and true once Done() is true. Returns ("", false) if the agent has not finished yet. Safe for concurrent use.
func (*ExecutionContext) GetState ¶
func (ec *ExecutionContext) GetState(key string) (any, bool)
GetState retrieves a value from the run-scoped key-value store. Safe for concurrent use.
func (*ExecutionContext) ID ¶
func (ec *ExecutionContext) ID() string
ID returns the unique identifier for this execution. Safe for concurrent use.
func (*ExecutionContext) IncrementStep ¶
func (ec *ExecutionContext) IncrementStep()
IncrementStep advances the step counter by one. Safe for concurrent use.
func (*ExecutionContext) InteractionResponse ¶ added in v1.0.1
func (ec *ExecutionContext) InteractionResponse(requestID string) (InteractionResponse, bool)
InteractionResponse returns a previously supplied external response, if present.
func (*ExecutionContext) PendingInteraction ¶ added in v1.0.1
func (ec *ExecutionContext) PendingInteraction() (*InteractionRequest, bool)
PendingInteraction returns the active external interaction request, if any.
func (*ExecutionContext) SetState ¶
func (ec *ExecutionContext) SetState(key string, value any)
SetState stores a value in the run-scoped key-value store. Safe for concurrent use.
type InMemorySessionManager ¶ added in v1.0.1
type InMemorySessionManager struct {
    // contains filtered or unexported fields
}
InMemorySessionManager is a thread-safe in-process SessionManager.
func NewInMemorySessionManager ¶ added in v1.0.1
func NewInMemorySessionManager() *InMemorySessionManager
NewInMemorySessionManager creates an empty in-memory session store.
func (*InMemorySessionManager) Create ¶ added in v1.0.1
func (m *InMemorySessionManager) Create(sessionID, userID string) (*Session, error)
Create inserts a new session for the given session and user identifiers.
func (*InMemorySessionManager) Get ¶ added in v1.0.1
func (m *InMemorySessionManager) Get(sessionID string) (*Session, error)
Get loads a previously saved session, or nil when it does not exist.
func (*InMemorySessionManager) GetOrCreate ¶ added in v1.0.1
func (m *InMemorySessionManager) GetOrCreate(sessionID, userID string) (*Session, error)
GetOrCreate returns an existing session for the same user or creates one.
func (*InMemorySessionManager) Save ¶ added in v1.0.1
func (m *InMemorySessionManager) Save(session *Session) error
Save upserts the supplied session snapshot.
type InMemoryVectorStore ¶ added in v1.0.1
type InMemoryVectorStore struct {
// contains filtered or unexported fields
}
InMemoryVectorStore is a thread-safe in-process vector store.
func NewInMemoryVectorStore ¶ added in v1.0.1
func NewInMemoryVectorStore() *InMemoryVectorStore
NewInMemoryVectorStore creates an empty in-memory vector store.
func (*InMemoryVectorStore) Add ¶ added in v1.0.1
func (s *InMemoryVectorStore) Add(_ context.Context, docs []VectorDocument) error
Add appends new vector documents to the store.
func (*InMemoryVectorStore) Search ¶ added in v1.0.1
func (s *InMemoryVectorStore) Search(_ context.Context, query []float64, topK int) ([]VectorDocument, error)
Search returns the top-K documents ranked by cosine similarity.
type InteractionRequest ¶ added in v1.0.1
type InteractionRequest struct {
ID string
Kind string
Prompt string
ToolCallID string
ToolName string
Payload map[string]any
}
InteractionRequest describes a prompt that must be answered from outside the agent.
type InteractionRequestedError ¶ added in v1.0.1
type InteractionRequestedError struct {
Suspended SuspendedRun
}
InteractionRequestedError exposes a suspended run while still matching ErrInteractionRequested via errors.Is.
func (*InteractionRequestedError) Error ¶ added in v1.0.1
func (e *InteractionRequestedError) Error() string
func (*InteractionRequestedError) Unwrap ¶ added in v1.0.1
func (e *InteractionRequestedError) Unwrap() error
type InteractionRequestedEvent ¶ added in v1.0.1
type InteractionRequestedEvent struct {
RunID string
Step int
Request InteractionRequest
}
InteractionRequestedEvent is emitted when the agent suspends to await external input.
type InteractionResponse ¶ added in v1.0.1
type InteractionResponse struct {
RequestID string
Approved *bool
Value string
Metadata map[string]any
}
InteractionResponse carries the external answer to a pending interaction request.
type InteractionResumedEvent ¶ added in v1.0.1
type InteractionResumedEvent struct {
RunID string
Step int
Response InteractionResponse
}
InteractionResumedEvent is emitted when a suspended interaction receives a response.
type LLMCallEvent ¶
LLMCallEvent is emitted after every Generate() call, including on error. Latency covers only the LLM network round-trip. The full request and response content are available via result.Context.Events().
type LLMClient ¶
type LLMClient interface {
Generate(ctx context.Context, req model.Request) (model.Response, error)
}
LLMClient abstracts communication with a language model. Implement this interface to support any LLM provider.
type LiteLLMClient ¶
type LiteLLMClient struct {
// contains filtered or unexported fields
}
LiteLLMClient adapts the openai-go client to the LLMClient interface. Works with OpenAI directly or with a LiteLLM proxy (same API surface).
func NewLiteLLMClient ¶
func NewLiteLLMClient(client *openai.Client, model openai.ChatModel) *LiteLLMClient
NewLiteLLMClient creates a LiteLLMClient wrapping the provided openai-go client.
type MemoryInjector ¶ added in v1.0.1
type MemoryInjector struct {
// contains filtered or unexported fields
}
MemoryInjector adds retrieved long-term memories to the prompt instructions.
func NewMemoryInjector ¶ added in v1.0.1
func NewMemoryInjector(searcher MemorySearcher, topK int) MemoryInjector
NewMemoryInjector creates a request mutator backed by semantic memory search.
type MemorySearcher ¶ added in v1.0.1
type MemorySearcher interface {
Search(ctx context.Context, query string, topK int) ([]TaskMemory, error)
}
MemorySearcher retrieves similar task memories for prompt injection.
type MutatingLLMClient ¶ added in v1.0.1
type MutatingLLMClient struct {
// contains filtered or unexported fields
}
MutatingLLMClient applies request mutators and then forwards the request to another LLMClient.
func NewMutatingLLMClient ¶ added in v1.0.1
func NewMutatingLLMClient(delegate LLMClient, mutators ...RequestMutator) *MutatingLLMClient
NewMutatingLLMClient wraps an LLMClient with one or more request mutators.
type OptimizationStrategy ¶ added in v1.0.1
OptimizationStrategy rewrites a request to reduce context size or noise.
type RequestMutator ¶ added in v1.0.1
RequestMutator rewrites a request immediately before the delegated LLM call.
func WithMutatorLogger ¶ added in v1.0.1
func WithMutatorLogger(mutator RequestMutator, logger *slog.Logger) RequestMutator
WithMutatorLogger wraps a request mutator with structured start/finish logging.
type RequestTokenCounter ¶ added in v1.0.1
type RequestTokenCounter struct {
// contains filtered or unexported fields
}
RequestTokenCounter uses tiktoken-compatible tokenization to count request size.
func NewRequestTokenCounter ¶ added in v1.0.1
func NewRequestTokenCounter(modelName string) (*RequestTokenCounter, error)
NewRequestTokenCounter constructs a token counter for the given model name.
type Result ¶
type Result struct {
// Output is the final answer produced by the LLM.
Output string
// ToolCalled reports whether at least one tool was invoked during the run.
ToolCalled bool
// Context is the full execution history for this run.
Context *ExecutionContext
}
Result is the output of a successful Agent.Run() call.
type RunEndEvent ¶
RunEndEvent is emitted once when Run returns, whether or not it succeeded. Result is nil when Err is non-nil.
type RunResult ¶ added in v1.0.1
type RunResult struct {
Output string
ToolCalled bool
Status RunStatus
SessionID string
PendingInteraction *InteractionRequest
}
RunResult summarizes one SessionRunner invocation.
type RunStartEvent ¶
RunStartEvent is emitted once before the ReAct loop begins.
type RunStatus ¶ added in v1.0.1
type RunStatus string
RunStatus reports whether a session run finished or is waiting for input.
type Session ¶ added in v1.0.1
type Session struct {
SessionID string
UserID string
Events []model.Event
State map[string]any
CreatedAt time.Time
UpdatedAt time.Time
}
Session stores the persisted conversation history and runner state for one user-facing conversation.
type SessionManager ¶ added in v1.0.1
type SessionManager interface {
Create(sessionID, userID string) (*Session, error)
Get(sessionID string) (*Session, error)
Save(session *Session) error
GetOrCreate(sessionID, userID string) (*Session, error)
}
SessionManager persists and reloads sessions for SessionRunner.
type SessionRunner ¶ added in v1.0.1
type SessionRunner struct {
// contains filtered or unexported fields
}
SessionRunner replays prior events from a session, executes the agent loop, and persists the updated state after each run or resume.
func NewSessionRunner ¶ added in v1.0.1
func NewSessionRunner(agent *Agent, sessions SessionManager, maxSteps int) *SessionRunner
NewSessionRunner builds a session-aware wrapper around Agent.
func (*SessionRunner) Resume ¶ added in v1.0.1
func (r *SessionRunner) Resume(ctx context.Context, sessionID, userID string, response InteractionResponse) (*RunResult, error)
Resume continues a previously suspended session using an external response.
func (*SessionRunner) Run ¶ added in v1.0.1
func (r *SessionRunner) Run(ctx context.Context, sessionID, userID, userInput string) (*RunResult, error)
Run appends the new user input to the stored conversation, executes until the run completes or suspends, and then persists the updated session state.
func (*SessionRunner) WithLogger ¶ added in v1.0.1
func (r *SessionRunner) WithLogger(logger *slog.Logger) *SessionRunner
WithLogger attaches structured lifecycle logging to the runner.
type SimpleDuplicateChecker ¶ added in v1.0.1
type SimpleDuplicateChecker struct{}
SimpleDuplicateChecker treats an identical TaskMemory payload as a duplicate.
func (SimpleDuplicateChecker) IsDuplicate ¶ added in v1.0.1
func (SimpleDuplicateChecker) IsDuplicate(memory TaskMemory, existing []TaskMemory) bool
IsDuplicate reports whether memory matches any candidate exactly.
type SlidingWindowStrategy ¶ added in v1.0.1
type SlidingWindowStrategy struct {
// contains filtered or unexported fields
}
SlidingWindowStrategy keeps the latest user message plus a bounded tail of recent events.
func NewSlidingWindowStrategy ¶ added in v1.0.1
func NewSlidingWindowStrategy(windowSize int) SlidingWindowStrategy
NewSlidingWindowStrategy creates a sliding-window optimizer.
type StaticApprovalPolicy ¶ added in v1.0.1
type StaticApprovalPolicy map[string]ApprovalRule
StaticApprovalPolicy maps tool names directly to approval rules.
func (StaticApprovalPolicy) RuleForTool ¶ added in v1.0.1
func (p StaticApprovalPolicy) RuleForTool(name string) (ApprovalRule, bool)
RuleForTool returns the configured rule for name, if any.
type StepEndEvent ¶
StepEndEvent is emitted after each Think→Act cycle. Err is non-nil when the step failed.
type StepStartEvent ¶
StepStartEvent is emitted at the beginning of each Think→Act cycle.
type SummarizationStrategy ¶ added in v1.0.1
type SummarizationStrategy struct {
// contains filtered or unexported fields
}
SummarizationStrategy replaces older middle-history events with a generated summary.
func NewSummarizationStrategy ¶ added in v1.0.1
func NewSummarizationStrategy(generator SummaryGenerator, keepRecent int) SummarizationStrategy
NewSummarizationStrategy creates a summary-based optimization strategy.
type SummaryGenerator ¶ added in v1.0.1
type SummaryGenerator interface {
Summarize(ctx context.Context, events []model.Event) (string, error)
}
SummaryGenerator compresses older events into a summary string.
type SuspendedRun ¶ added in v1.0.1
type SuspendedRun struct {
Context *ExecutionContext
Interaction InteractionRequest
}
SuspendedRun contains the paused execution state and the pending interaction.
type TaskMemory ¶ added in v1.0.1
type TaskMemory struct {
TaskSummary string
Approach string
FinalAnswer string
IsCorrect bool
ErrorAnalysis string
}
TaskMemory stores a reusable record of how a prior task was solved.
func (TaskMemory) EmbeddingText ¶ added in v1.0.1
func (m TaskMemory) EmbeddingText() string
EmbeddingText returns the text used to embed this memory for similarity search.
type TaskMemoryManager ¶ added in v1.0.1
type TaskMemoryManager struct {
// contains filtered or unexported fields
}
TaskMemoryManager saves and retrieves semantically indexed task memories.
func NewTaskMemoryManager ¶ added in v1.0.1
func NewTaskMemoryManager(embedder Embedder, store VectorStore, duplicateChecker DuplicateChecker) *TaskMemoryManager
NewTaskMemoryManager creates a semantic memory manager from its pluggable components.
func (*TaskMemoryManager) Save ¶ added in v1.0.1
func (m *TaskMemoryManager) Save(ctx context.Context, memory TaskMemory) (string, bool, error)
Save embeds, de-duplicates, and persists a task memory.
func (*TaskMemoryManager) Search ¶ added in v1.0.1
func (m *TaskMemoryManager) Search(ctx context.Context, query string, topK int) ([]TaskMemory, error)
Search retrieves similar task memories for a natural-language query.
func (*TaskMemoryManager) WithLogger ¶ added in v1.0.1
func (m *TaskMemoryManager) WithLogger(logger *slog.Logger) *TaskMemoryManager
WithLogger attaches structured save and search logs.
type TokenCounter ¶ added in v1.0.1
TokenCounter estimates the prompt size of a request before it is sent to the LLM.
type ToolExecEvent ¶
type ToolExecEvent struct {
RunID string
Step int
ToolNames []string
Latency time.Duration
Err error
}
ToolExecEvent is emitted after every executor.Execute() batch. ToolNames lists the names of tools that were called. Latency is 0 when the executor is nil.
type VectorDocument ¶ added in v1.0.1
type VectorDocument struct {
ID string
Vector []float64
Memory TaskMemory
}
VectorDocument binds an embedding to its original task memory payload.
type VectorStore ¶ added in v1.0.1
type VectorStore interface {
Add(ctx context.Context, docs []VectorDocument) error
Search(ctx context.Context, query []float64, topK int) ([]VectorDocument, error)
}
VectorStore persists vectorized memories and supports nearest-neighbor lookup.
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| mcpadapter | Package mcpadapter bridges github.com/v8tix/mcp-toolkit with react-agent. |
| model | Package model contains the core data types shared across react-agent and its sub-packages. |