Documentation
Overview ¶
Package agent implements the ReAct (Reason + Act) pattern for AI agents.
A ReAct agent runs a bounded Think → Act → Observe loop: the model thinks (generates a response), acts (calls tools), and observes (results are appended to the history), repeating until it produces a final answer or exhausts the step limit.
The pattern is based on "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022 — https://arxiv.org/abs/2210.03629).
Building an agent ¶
Use the fluent builder to compose an agent from an LLM client, tool definitions, and a tool executor:
a := agent.New(client, toolDefs, executor).
WithInstructions("You are a precise research assistant.").
WithMaxSteps(15)
When a workflow should expose only a subset of tools at a given step, add Agent.WithDynamicToolsCallback:
a := agent.New(client, toolDefs, executor).
WithDynamicToolsCallback(func(execCtx *agent.ExecutionContext) []model.ToolDefinition {
_ = execCtx // inspect current events or state here
return toolDefs[:1]
})
Running an agent ¶
Agent.Run executes the full loop for a single user question. It returns a Result, a replayable rxgo.Observable of AgentEvent values, and any error:
result, events, err := a.Run(ctx, "Who won the 2025 Nobel Prize in Physics?")
if err != nil {
log.Fatal(err)
}
fmt.Println(result.Output)
Observable event stream ¶
The returned observable is a cold, replayable stream of everything that happened during the run. Subscribe by calling Observe():
for item := range events.Observe() {
switch e := item.V.(type) {
case agent.LLMCallEvent:
slog.Info("llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
case agent.ToolExecEvent:
slog.Info("tool exec", "tools", e.ToolNames)
case agent.RunEndEvent:
slog.Info("run end", "err", e.Err)
}
}
Calling Observe() again replays all events from the beginning — safe for multiple independent subscribers (loggers, metrics, tracing).
If you want live logging while the run is still executing, attach a sink with Agent.WithLiveEventSink:
a := agent.New(client, toolDefs, executor).
WithLiveEventSink(func(event agent.AgentEvent) {
if e, ok := event.(agent.LLMCallEvent); ok {
slog.Info("live llm call", "step", e.Step, "latency_ms", e.Latency.Milliseconds())
}
})
Execution history ¶
The full conversation is available via result.Context.Events(). Each model.Event has an Author ("user", "agent", or "tools"), a timestamp, and typed model.ContentItem values (model.Message, model.ToolCall, model.ToolResult).
Bringing your own tools ¶
Implement model.ToolExecutor to connect any tool-running backend:
type myExecutor struct{ /* your registry, MCP session, etc. */ }
func (e *myExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
results := make([]model.ToolResult, len(calls))
for i, call := range calls {
out, err := e.dispatch(ctx, call.Name, call.Arguments)
if err != nil {
results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "error", Content: []string{err.Error()}}
continue
}
results[i] = model.ToolResult{ID: call.ID, Name: call.Name, Status: "success", Content: []string{out}}
}
return results, nil
}
For MCP-based tools (github.com/v8tix/mcp-toolkit/v2), use the ready-made adapter in the [mcpadapter] sub-package.
Stateful sessions and approvals ¶
Use SessionRunner when a conversation must persist across multiple user-facing turns. It replays prior model.Event values from a SessionManager, runs the agent, and saves the updated state after each call. If a callback suspends the run, SessionRunner.Run returns StatusPending plus a pending interaction payload that your app can surface in a UI or API before resuming. Use NewPersistedSessionManager with a SessionPersister when that state must survive process boundaries:
sessions := agent.NewInMemorySessionManager()
runner := agent.NewSessionRunner(
agent.New(client, defs, executor).
WithBeforeToolCallbacks(agent.NewConfirmationCallback(agent.StaticApprovalPolicy{
"delete_file": {MessageTemplate: "Approve file deletion?"},
})),
sessions,
8,
)
first, _ := runner.Run(ctx, "chat-1", "user-7", "My name is Alice")
next, _ := runner.Run(ctx, "chat-1", "user-7", "What's my name?")
_, _ = first, next
Approval callbacks use Suspend under the hood and can be resumed with Agent.Resume or SessionRunner.Resume. The built-in ConfirmationCallback also redacts sensitive tool arguments from the interaction payload. Read and write [Session.State] when later turns depend on facts captured earlier — it works well as shared scratch space for multi-step workflows.
Planning and reflection policies ¶
For tasks that benefit from explicit planning, pair NewPlanningExecutor with PlanningToolDefinition. Add NewPlanningReflectionTracker plus NewPlanningReflectionPolicy when you want the agent to revise its task list before finalizing. Use WithPlanningReflectionStagnationThreshold to turn on a stricter loop where repeated planning-only churn triggers an explicit reflection step before more planning is allowed. When final answers must be grounded in gathered facts, add NewVerificationGate after an EvidenceCollector has started recording support for the answer.
Workflow-owned control ¶
Some applications need more than generic tool use. They need a bounded workflow where the model can still reason, but the application controls the critical phases. A friendly way to think about this is:
plan -> deterministic step -> fallback if needed -> gather evidence -> grounded answer
`react-agent` keeps those workflow rules out of the core runtime. Instead it exposes small seams so the application can own the policy:
- Agent.WithDynamicToolsCallback can hide tools that should not be visible in the current phase.
- BeforeToolCallback can block illegal tool choices, trigger circuit breakers, or queue corrective user messages.
- AfterToolCallback can record state transitions after success or failure.
- FinalAnswerCallback can reject an answer that is not grounded in the facts already gathered by the workflow.
- QueueDeferredUserMessage lets callbacks steer the next turn without breaking the event ordering guarantees of the loop.
A typical workflow-controlled setup looks like:
phaseTracker := newMyWorkflowTracker()
a := agent.New(client, defs, executor).
WithDynamicToolsCallback(func(execCtx *agent.ExecutionContext) []model.ToolDefinition {
return phaseTracker.AllowedTools(defs)
}).
WithBeforeToolCallbacks(phaseTracker).
WithAfterToolCallbacks(phaseTracker).
WithFinalAnswerCallbacks(myWorkflowGate{tracker: phaseTracker})
In that pattern, the library still owns the ReAct loop, history, events, and suspension/resume flow, while the application owns the business-specific workflow phases.
Request mutation and context memory ¶
MutatingLLMClient lets you rewrite a request immediately before it is sent to the underlying LLMClient. This is the extension point for prompt hygiene, context-window management, and memory injection.
Common building blocks:
- ContextOptimizer applies one or more OptimizationStrategy values once a TokenCounter threshold is exceeded.
- SlidingWindowStrategy preserves the latest user turn and a recent tail of events.
- CompactionStrategy replaces bulky tool payloads with short sanitized summaries.
- SummarizationStrategy moves older history into a generated summary in the instructions.
- WithMutatorLogger adds structured logs around any RequestMutator.
- StablePrefixDetector is a small seam for apps that want to identify the reusable prefix of a request for caching-friendly workflows. This is most useful when your provider supports prompt caching and the request has a large stable setup section.
Long-term task memory ¶
TaskMemoryManager stores solved tasks in a pluggable VectorStore so future requests can retrieve similar work. Pair it with MemoryInjector to inject the most relevant prior records into the prompt before each LLM call:
memories := agent.NewTaskMemoryManager(embedder, agent.NewInMemoryVectorStore(), agent.SimpleDuplicateChecker{})
clientWithMemory := agent.NewMutatingLLMClient(
client,
agent.NewMemoryInjector(memories, 3),
)
_, _, _ = memories, clientWithMemory, agent.New(clientWithMemory, defs, executor)
Attach [WithWritePolicy] to TaskMemoryManager when only higher-value task completions should be stored as long-term memory. The built-in ThresholdMemoryWritePolicy is a good default when you want to skip trivial or low-detail task outcomes instead of saving every successful run.
Retrieval-heavy applications can keep retrieval logic outside the agent while still sharing common contracts. HybridRetriever expresses a query-to-candidate retrieval step, Reranker refines those candidates, and ChunkContextEnricher can add document-aware context before indexing.
Retrieval terminology ¶
A few retrieval words show up often when building agent systems:
- A "chunk" is a small piece of source text that can be stored and retrieved later. For example, one paragraph from a refund policy.
- "Chunk context enrichment" means attaching source-level context so the chunk still makes sense by itself. For example, turning "refunds accepted within 30 days" into "Refund Policy — refunds accepted within 30 days".
- "Lexical retrieval" means matching exact words or phrases.
- "Semantic retrieval" means matching by meaning, even when wording changes.
- "Hybrid retrieval" means combining more than one retrieval signal into a single shortlist.
- "Reranking" means taking that rough shortlist and reordering it with a slower, more precise second pass.
- "Dynamic tools" means showing the model only the tools that make sense in the current phase of the workflow.
- "Grounding" means requiring the final answer to rely on authoritative facts already captured in the run.
- A "circuit breaker" means stopping a repeated bad action instead of letting the loop retry the same blocked path forever.
`react-agent` does not force one retrieval stack. Instead it exposes small contracts so applications can plug in their own lexical search, vector search, reranking model, or indexing pipeline without changing the core agent loop.
Approval and compression terminology ¶
Two other concepts appear frequently in production agents:
- An "approval loop" pauses before a risky action, asks an external system or human to decide, and then resumes or denies the action. In this package that path runs through ConfirmationCallback, Suspend, InteractionRequest, InteractionResponse, and Agent.Resume.
- "Compression" or "context optimization" means shrinking noisy history before the next model call so the useful parts stay visible. Common tools here are ContextOptimizer, CompactionStrategy, and SummarizationStrategy.
Manual step control ¶
Agent.Step is exported so callers can drive the loop themselves — useful for streaming, checkpointing, or human-in-the-loop interrupts:
execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", model.Message{Role: "user", Content: question})
for execCtx.CurrentStep() < 20 {
if err := a.Step(ctx, execCtx); err != nil {
break
}
if execCtx.Done() {
break
}
execCtx.IncrementStep()
}
Index ¶
- Variables
- func FormatPlanTasks(tasks []PlanTask) string
- func MarshalPlanTasks(tasks []PlanTask) (json.RawMessage, error)
- func PlanningToolDefinition() model.ToolDefinition
- func QueueDeferredUserMessage(execCtx *ExecutionContext, content string)
- func Suspend(req InteractionRequest) error
- type AfterToolCallback
- type Agent
- func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
- func (a *Agent) Resume(ctx context.Context, suspended SuspendedRun, response InteractionResponse) (*Result, rxgo.Observable, error)
- func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
- func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
- func (a *Agent) Think(ctx context.Context, execCtx *ExecutionContext) (model.Response, error)
- func (a *Agent) WithAfterToolCallbacks(callbacks ...AfterToolCallback) *Agent
- func (a *Agent) WithBeforeToolCallbacks(callbacks ...BeforeToolCallback) *Agent
- func (a *Agent) WithDynamicToolsCallback(cb DynamicToolsCallback) *Agent
- func (a *Agent) WithFinalAnswerCallbacks(callbacks ...FinalAnswerCallback) *Agent
- func (a *Agent) WithInstructions(s string) *Agent
- func (a *Agent) WithLiveEventSink(sinks ...LiveEventSink) *Agent
- func (a *Agent) WithMaxSteps(n int) *Agent
- type AgentEvent
- type ApprovalPolicy
- type ApprovalRule
- type BeforeToolCallback
- type CallbackEvent
- type CallbackPhase
- type CallbackStage
- type ChunkContextEnricher
- type CompactionStrategy
- type ConfirmationCallback
- type ContextOptimizer
- type DuplicateChecker
- type DynamicToolsCallback
- type Embedder
- type EvidenceCollector
- type EvidenceItem
- type EvidenceTracker
- type ExecutionContext
- func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
- func (ec *ExecutionContext) CurrentStep() int
- func (ec *ExecutionContext) Done() bool
- func (ec *ExecutionContext) Events() []model.Event
- func (ec *ExecutionContext) FinalResult() (string, bool)
- func (ec *ExecutionContext) GetState(key string) (any, bool)
- func (ec *ExecutionContext) ID() string
- func (ec *ExecutionContext) IncrementStep()
- func (ec *ExecutionContext) InteractionResponse(requestID string) (InteractionResponse, bool)
- func (ec *ExecutionContext) PendingInteraction() (*InteractionRequest, bool)
- func (ec *ExecutionContext) SetState(key string, value any)
- type FinalAnswerCallback
- type HybridRetriever
- type InMemorySessionManager
- func (m *InMemorySessionManager) Create(sessionID, userID string) (*Session, error)
- func (m *InMemorySessionManager) Get(sessionID string) (*Session, error)
- func (m *InMemorySessionManager) GetOrCreate(sessionID, userID string) (*Session, error)
- func (m *InMemorySessionManager) Save(session *Session) error
- type InMemoryVectorStore
- type InteractionRequest
- type InteractionRequestedError
- type InteractionRequestedEvent
- type InteractionResponse
- type InteractionResumedEvent
- type LLMCallEvent
- type LLMClient
- type LiteLLMClient
- type LiveEventSink
- type MemoryInjector
- type MemorySearcher
- type MemoryWriteDecision
- type MemoryWritePolicy
- type MutatingLLMClient
- type Observation
- type OptimizationStrategy
- type PlanRevision
- type PlanRevisionEvent
- type PlanTask
- type PlanTaskStatus
- type PlanningExecutor
- func (e *PlanningExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error)
- func (e *PlanningExecutor) LatestPlan() (string, bool)
- func (e *PlanningExecutor) Plans() []string
- func (e *PlanningExecutor) Revisions() []PlanRevision
- func (e *PlanningExecutor) TaskCounts() []int
- func (e *PlanningExecutor) WithObservers(observers ...PlanningObserver) *PlanningExecutor
- type PlanningObserver
- type PlanningPolicy
- type PlanningReflectionEvent
- type PlanningReflectionEventKind
- type PlanningReflectionOption
- type PlanningReflectionPolicy
- type PlanningReflectionTracker
- func (t *PlanningReflectionTracker) AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
- func (t *PlanningReflectionTracker) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
- func (t *PlanningReflectionTracker) LatestReflection() string
- func (t *PlanningReflectionTracker) NeedsReflection() bool
- func (t *PlanningReflectionTracker) NeedsRevision() bool
- func (t *PlanningReflectionTracker) RecordReflection(ctx context.Context, execCtx *ExecutionContext, reflection string, ...)
- type PolicyDecision
- type PolicyEvent
- type RecoveryAttempt
- type RecoveryEvent
- type RecoveryEventKind
- type RecoveryFailure
- type RecoveryPolicy
- type RecoveryTracker
- func (r *RecoveryTracker) AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
- func (r *RecoveryTracker) Attempts() []RecoveryAttempt
- func (r *RecoveryTracker) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
- func (r *RecoveryTracker) Failures() []RecoveryFailure
- func (r *RecoveryTracker) HasUnresolvedFailures() bool
- func (r *RecoveryTracker) LatestReflection() string
- func (r *RecoveryTracker) RecordReflection(ctx context.Context, execCtx *ExecutionContext, reflection string) error
- func (r *RecoveryTracker) RequiresReflection() bool
- type RequestMutator
- type RequestTokenCounter
- type Reranker
- type Result
- type RetrievalCandidate
- type RunEndEvent
- type RunResult
- type RunStartEvent
- type RunStatus
- type Session
- type SessionManager
- type SessionPersister
- type SessionRunner
- func (r *SessionRunner) Resume(ctx context.Context, sessionID, userID string, response InteractionResponse) (*RunResult, error)
- func (r *SessionRunner) Run(ctx context.Context, sessionID, userID, userInput string) (*RunResult, error)
- func (r *SessionRunner) WithLogger(logger *slog.Logger) *SessionRunner
- type SimpleDuplicateChecker
- type SlidingWindowStrategy
- type StablePrefixDetector
- type StaticApprovalPolicy
- type StepEndEvent
- type StepStartEvent
- type SummarizationStrategy
- type SummaryGenerator
- type SuspendedRun
- type SynthesisEvent
- type SynthesisEventKind
- type SynthesisPolicy
- type SynthesisRecord
- type SynthesisTracker
- func (s *SynthesisTracker) AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
- func (s *SynthesisTracker) HasIncompleteAnalysis() bool
- func (s *SynthesisTracker) MarkSynthesisComplete(ctx context.Context, execCtx *ExecutionContext) error
- func (s *SynthesisTracker) Observations() []Observation
- func (s *SynthesisTracker) SynthesisHistory() []SynthesisRecord
- type TaskMemory
- type TaskMemoryManager
- func (m *TaskMemoryManager) Save(ctx context.Context, memory TaskMemory) (string, bool, error)
- func (m *TaskMemoryManager) Search(ctx context.Context, query string, topK int) ([]TaskMemory, error)
- func (m *TaskMemoryManager) WithLogger(logger *slog.Logger) *TaskMemoryManager
- func (m *TaskMemoryManager) WithWritePolicy(policy MemoryWritePolicy) *TaskMemoryManager
- type ThresholdMemoryWritePolicy
- type TokenCounter
- type ToolExecEvent
- type VectorDocument
- type VectorStore
- type VerificationGate
- func (g *VerificationGate) BeforeFinalAnswer(_ context.Context, _ *ExecutionContext, answer string) error
- func (g *VerificationGate) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
- func (g *VerificationGate) LatestReflection() string
- func (g *VerificationGate) NeedsReflection() bool
- type VerificationOption
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var ErrInteractionRequested = errors.New("agent: interaction requested")
ErrInteractionRequested signals that the agent suspended awaiting external input.
var ErrMaxStepsReached = errors.New("agent: max steps reached without final answer")
ErrMaxStepsReached is returned when Run exhausts maxSteps without a final answer.
Functions ¶
func FormatPlanTasks ¶ added in v1.0.2
func FormatPlanTasks(tasks []PlanTask) string
FormatPlanTasks renders a full plan in the format expected by the planning flow.
func MarshalPlanTasks ¶ added in v1.0.2
func MarshalPlanTasks(tasks []PlanTask) (json.RawMessage, error)
MarshalPlanTasks marshals plan tasks into the JSON payload expected by the create_tasks tool.
func PlanningToolDefinition ¶ added in v1.0.2
func PlanningToolDefinition() model.ToolDefinition
PlanningToolDefinition returns a reusable ToolDefinition for the create_tasks planning flow.
func QueueDeferredUserMessage ¶ added in v1.0.2
func QueueDeferredUserMessage(execCtx *ExecutionContext, content string)
QueueDeferredUserMessage schedules a user message to be appended after the current tool result event finishes. Use this from callbacks that need to steer the next LLM turn without breaking the event ordering invariants of the run.
func Suspend ¶ added in v1.0.1
func Suspend(req InteractionRequest) error
Suspend requests external interaction from inside a callback.
Returning Suspend(req) from a callback is what turns a normal run into an approval loop or any other human-in-the-loop step.
Types ¶
type AfterToolCallback ¶ added in v1.0.1
type AfterToolCallback interface {
AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
}
AfterToolCallback can replace a tool result after the executor (or a before-tool callback) produced it. Returning a non-nil ToolResult replaces the current result for that call.
type Agent ¶
type Agent struct {
// contains filtered or unexported fields
}
Agent is the ReAct orchestrator. It runs a Think → Act → Observe loop until the LLM produces a final answer or maxSteps is exhausted.
func New ¶
func New(client LLMClient, defs []model.ToolDefinition, executor model.ToolExecutor) *Agent
New creates an Agent with sensible defaults (maxSteps=10).
- defs: tool definitions the LLM can call (pass nil or empty for no tools)
- executor: executes tool calls concurrently (pass nil when defs is empty)
Use the builder methods to customise the agent:
agent.New(client, defs, executor).
WithInstructions("You are helpful.").
WithMaxSteps(15)
Example ¶
ExampleNew shows how to construct an agent with the fluent builder. Replace demoLLM with agent.NewLiteLLMClient(openaiClient, model) to target a real LLM.
package main
import (
"context"
"fmt"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}
func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
results := make([]model.ToolResult, len(calls))
for i, c := range calls {
results[i] = model.ToolResult{
ID: c.ID,
Name: c.Name,
Status: "success",
Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
}
}
return results, nil
}
func main() {
// Hypothetical: ask an assistant to look up a stock price.
llm := &demoLLM{} // swap for agent.NewLiteLLMClient(...)
defs := []model.ToolDefinition{
{
Name: "search_web",
Description: "Search the web for up-to-date information",
Parameters: map[string]any{
"type": "object",
"properties": map[string]any{
"query": map[string]any{"type": "string"},
},
"required": []string{"query"},
},
},
}
_ = agent.New(llm, defs, demoExecutor{}).
WithInstructions("You are a precise research assistant.").
WithMaxSteps(15)
}
Output:
func (*Agent) Act ¶
func (a *Agent) Act(ctx context.Context, execCtx *ExecutionContext, calls []model.ToolCall) error
Act executes all requested tool calls via ToolExecutor and records the results. The agent's tool-call decision is appended as an "agent" event BEFORE execution, then tool results are appended as a "tools" event AFTER execution. Note: events are not emitted when calling Act directly; use Run for observability.
func (*Agent) Resume ¶ added in v1.0.1
func (a *Agent) Resume(ctx context.Context, suspended SuspendedRun, response InteractionResponse) (*Result, rxgo.Observable, error)
Resume continues a suspended run after an external interaction response arrives.
func (*Agent) Run ¶
func (a *Agent) Run(ctx context.Context, userMessage string) (*Result, rxgo.Observable, error)
Run executes the full ReAct loop for a single user message. It returns a Result, a replayable Observable of AgentEvents, and any error.
The Observable is a cold, replayable stream: each call to Observe() replays all events from the completed run. It is safe for multiple subscribers.
Example ¶
ExampleAgent_Run demonstrates a single-step run where the LLM answers directly without calling any tools.
package main
import (
"context"
"fmt"
"log"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
func main() {
llm := &demoLLM{
responses: []model.Response{
{Content: []model.ContentItem{
model.Message{Role: "assistant", Content: "The capital of France is Paris."},
}},
},
}
a := agent.New(llm, nil, nil).
WithInstructions("You are a helpful assistant.")
result, _, err := a.Run(context.Background(), "What is the capital of France?")
if err != nil {
log.Fatal(err)
}
fmt.Println(result.Output)
fmt.Println(result.ToolCalled)
}
Output:

The capital of France is Paris.
false
Example (EventStream) ¶
ExampleAgent_Run_eventStream shows how to consume the observable event stream returned by Run to build logging, metrics, or tracing.
The observable is cold and replayable — calling Observe() again replays all events from the beginning, safe for multiple independent subscribers.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}
func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
results := make([]model.ToolResult, len(calls))
for i, c := range calls {
results[i] = model.ToolResult{
ID: c.ID,
Name: c.Name,
Status: "success",
Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
}
}
return results, nil
}
func main() {
llm := &demoLLM{
responses: []model.Response{
{Content: []model.ContentItem{
model.ToolCall{ID: "c1", Name: "search_web", Arguments: json.RawMessage(`{}`)},
}},
{Content: []model.ContentItem{
model.Message{Role: "assistant", Content: "Done."},
}},
},
}
defs := []model.ToolDefinition{{
Name: "search_web",
Parameters: map[string]any{"type": "object", "properties": map[string]any{"query": map[string]any{"type": "string"}}, "required": []string{"query"}},
}}
a := agent.New(llm, defs, demoExecutor{})
_, events, err := a.Run(context.Background(), "What is the weather in Paris?")
if err != nil {
log.Fatal(err)
}
for item := range events.Observe() {
switch e := item.V.(type) {
case agent.RunStartEvent:
fmt.Println("run started")
case agent.LLMCallEvent:
fmt.Printf("llm call step=%d\n", e.Step)
case agent.ToolExecEvent:
fmt.Printf("tool exec: %v\n", e.ToolNames)
case agent.RunEndEvent:
fmt.Println("run ended")
}
}
}
Output:

run started
llm call step=0
tool exec: [search_web]
llm call step=1
run ended
Example (ReasoningTrail) ¶
ExampleAgent_Run_reasoningTrail shows how to inspect the full conversation history — every message, tool call, and tool result — after a run. Useful for debugging, audit logs, or displaying the chain of thought.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}
func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
results := make([]model.ToolResult, len(calls))
for i, c := range calls {
results[i] = model.ToolResult{
ID: c.ID,
Name: c.Name,
Status: "success",
Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
}
}
return results, nil
}
func main() {
llm := &demoLLM{
responses: []model.Response{
{Content: []model.ContentItem{
model.ToolCall{ID: "c1", Name: "lookup", Arguments: json.RawMessage(`{"id":"42"}`)},
}},
{Content: []model.ContentItem{
model.Message{Role: "assistant", Content: "Found it."},
}},
},
}
defs := []model.ToolDefinition{{
Name: "lookup",
Parameters: map[string]any{"type": "object", "properties": map[string]any{"id": map[string]any{"type": "string"}}, "required": []string{"id"}},
}}
result, _, err := agent.New(llm, defs, demoExecutor{}).
Run(context.Background(), "Look up record 42.")
if err != nil {
log.Fatal(err)
}
for _, event := range result.Context.Events() {
for _, item := range event.Content {
switch v := item.(type) {
case model.Message:
fmt.Printf("[%s] %s\n", event.Author, v.Content)
case model.ToolCall:
fmt.Printf("[%s] call %s\n", event.Author, v.Name)
case model.ToolResult:
fmt.Printf("[%s] result %s=%s\n", event.Author, v.Name, v.Content[0])
}
}
}
}
Output:

[user] Look up record 42.
[agent] call lookup
[tools] result lookup=result_of_lookup
[agent] Found it.
Example (WithTools) ¶
ExampleAgent_Run_withTools demonstrates a two-step run: the LLM first calls a tool, then produces a final answer once it has the search result.
This mirrors the classic ReAct scenario — the agent reasons about what information it needs, fetches it, then synthesises an answer.
package main
import (
"context"
"encoding/json"
"fmt"
"log"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
// demoExecutor echoes back a canned result for every tool call it receives.
// Suitable for examples that need a tool round-trip without a real backend.
type demoExecutor struct{}
func (demoExecutor) Execute(_ context.Context, calls []model.ToolCall) ([]model.ToolResult, error) {
results := make([]model.ToolResult, len(calls))
for i, c := range calls {
results[i] = model.ToolResult{
ID: c.ID,
Name: c.Name,
Status: "success",
Content: []string{fmt.Sprintf("result_of_%s", c.Name)},
}
}
return results, nil
}
func main() {
llm := &demoLLM{
responses: []model.Response{
// Step 1: LLM decides to call search_web
{Content: []model.ContentItem{
model.ToolCall{
ID: "call_1",
Name: "search_web",
Arguments: json.RawMessage(`{"query":"AAPL stock price January 9 2007"}`),
},
}},
// Step 2: LLM reads the search result and gives the final answer
{Content: []model.ContentItem{
model.Message{Role: "assistant", Content: "Apple stock was $11.74 on January 9, 2007."},
}},
},
}
defs := []model.ToolDefinition{{
Name: "search_web",
Description: "Search the web for current information",
Parameters: map[string]any{
"type": "object",
"required": []string{"query"},
"properties": map[string]any{
"query": map[string]any{"type": "string"},
},
},
}}
a := agent.New(llm, defs, demoExecutor{}).
WithInstructions("You are a research assistant. Verify facts before answering.")
result, _, err := a.Run(context.Background(), "What was Apple's stock price the day the iPhone was announced?")
if err != nil {
log.Fatal(err)
}
fmt.Println(result.Output)
fmt.Println("tool called:", result.ToolCalled)
}
Output:
Apple stock was $11.74 on January 9, 2007.
tool called: true
func (*Agent) Step ¶
func (a *Agent) Step(ctx context.Context, execCtx *ExecutionContext) error
Step executes one Think → (optionally) Act cycle, mutating execCtx in place. It is exported so callers can drive the loop manually for checkpointing or human-in-the-loop interrupts. Use execCtx.Done() to check for a final answer. Note: events are not emitted when calling Step directly; use Run for observability.
Example ¶
ExampleAgent_Step shows how to drive the ReAct loop manually step-by-step. This gives you control between steps — useful for streaming output to a UI, checkpointing long runs, or pausing for human approval before the agent acts.
package main
import (
"context"
"fmt"
"log"
agent "github.com/v8tix/react-agent"
"github.com/v8tix/react-agent/model"
)
// demoLLM is a scripted LLM stub: it returns responses[0], responses[1], …
// in sequence. Use it to write deterministic, self-contained documentation
// examples that produce a known output without any network calls.
type demoLLM struct {
responses []model.Response
n int
}
func (d *demoLLM) Generate(_ context.Context, _ model.Request) (model.Response, error) {
resp := d.responses[d.n]
d.n++
return resp, nil
}
func main() {
llm := &demoLLM{
responses: []model.Response{
{Content: []model.ContentItem{
model.Message{Role: "assistant", Content: "The answer is 42."},
}},
},
}
a := agent.New(llm, nil, nil).WithMaxSteps(10)
execCtx := agent.NewExecutionContextForTest()
execCtx.AddEvent("user", model.Message{Role: "user", Content: "What is the answer to life, the universe and everything?"})
for execCtx.CurrentStep() < 10 {
if err := a.Step(context.Background(), execCtx); err != nil {
log.Fatal(err)
}
if execCtx.Done() {
break
}
execCtx.IncrementStep()
}
answer, _ := execCtx.FinalResult()
fmt.Println(answer)
fmt.Println("done:", execCtx.Done())
}
Output:
The answer is 42.
done: true
func (*Agent) Think ¶
Think calls the LLM with the current execution context and returns its response. Note: events are not emitted when calling Think directly; use Run for observability.
func (*Agent) WithAfterToolCallbacks ¶ added in v1.0.1
func (a *Agent) WithAfterToolCallbacks(callbacks ...AfterToolCallback) *Agent
WithAfterToolCallbacks appends tool callbacks that run after a tool result is produced.
func (*Agent) WithBeforeToolCallbacks ¶ added in v1.0.1
func (a *Agent) WithBeforeToolCallbacks(callbacks ...BeforeToolCallback) *Agent
WithBeforeToolCallbacks appends tool callbacks that run before executor dispatch.
func (*Agent) WithDynamicToolsCallback ¶ added in v1.0.2
func (a *Agent) WithDynamicToolsCallback(cb DynamicToolsCallback) *Agent
WithDynamicToolsCallback overrides the tool definitions sent to the LLM for each turn.
Returning nil or an empty slice hides all tools for that turn.
func (*Agent) WithFinalAnswerCallbacks ¶ added in v1.0.2
func (a *Agent) WithFinalAnswerCallbacks(callbacks ...FinalAnswerCallback) *Agent
WithFinalAnswerCallbacks appends callbacks that validate a proposed final answer before the run is allowed to finish.
func (*Agent) WithInstructions ¶
WithInstructions sets the system prompt sent on every LLM request.
func (*Agent) WithLiveEventSink ¶ added in v1.0.2
func (a *Agent) WithLiveEventSink(sinks ...LiveEventSink) *Agent
WithLiveEventSink appends callbacks that receive agent events while the run is active.
The replayable observable returned by Run/Resume remains unchanged; this hook is for live logging, metrics, or tracing during execution.
func (*Agent) WithMaxSteps ¶
WithMaxSteps overrides the default step limit (10). Panics if n < 1 — zero or negative steps is a programming error.
type AgentEvent ¶
type AgentEvent interface {
// contains filtered or unexported methods
}
AgentEvent is the sealed sum type for all agent lifecycle events. Events are collected during a run and replayed afterward through a cold observable, so multiple Observe() calls are safe and deterministic. Use a type switch to handle specific event types:
for item := range events.Observe() {
switch e := item.V.(type) {
case agent.LLMCallEvent:
slog.Info("llm call", "latency_ms", e.Latency.Milliseconds())
case agent.RunEndEvent:
fmt.Println(e.Result.Output)
}
}
type ApprovalPolicy ¶ added in v1.0.1
type ApprovalPolicy interface {
RuleForTool(name string) (ApprovalRule, bool)
}
ApprovalPolicy decides whether a given tool requires human approval.
type ApprovalRule ¶ added in v1.0.1
ApprovalRule defines how a tool approval request should be shown to a human and what message should come back if the action is denied.
type BeforeToolCallback ¶ added in v1.0.1
type BeforeToolCallback interface {
BeforeTool(ctx context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
}
BeforeToolCallback can short-circuit a tool call before the executor runs. Returning a non-nil ToolResult skips executor execution for that call.
type CallbackEvent ¶ added in v1.0.1
type CallbackEvent struct {
RunID string
Step int
Phase CallbackPhase
Stage CallbackStage
Callback string
ToolCallID string
ToolName string
Overrode bool
Latency time.Duration
Err error
}
CallbackEvent is emitted before and after each callback invocation.
type CallbackPhase ¶ added in v1.0.1
type CallbackPhase string
CallbackPhase identifies which callback stage emitted the event.
const (
	CallbackPhaseBeforeTool CallbackPhase = "before_tool"
	CallbackPhaseAfterTool  CallbackPhase = "after_tool"
)
type CallbackStage ¶ added in v1.0.1
type CallbackStage string
CallbackStage identifies whether the event was emitted before invoking the callback or after it returned.
const (
	CallbackStageStart  CallbackStage = "start"
	CallbackStageFinish CallbackStage = "finish"
)
type ChunkContextEnricher ¶ added in v1.0.2
type ChunkContextEnricher interface {
EnrichChunk(context.Context, string, string, map[string]string) (string, error)
}
ChunkContextEnricher adds source-level context to a chunk before downstream indexing or retrieval so the chunk still makes sense on its own.
Typical enrichments include document title, URL, section heading, or other metadata that would be lost if the raw chunk were indexed by itself.
type CompactionStrategy ¶ added in v1.0.1
type CompactionStrategy struct{}
CompactionStrategy replaces bulky tool payloads with short, sanitized summaries.
func NewCompactionStrategy ¶ added in v1.0.1
func NewCompactionStrategy() CompactionStrategy
NewCompactionStrategy creates a compaction optimizer for large tool payloads.
type ConfirmationCallback ¶ added in v1.0.1
type ConfirmationCallback struct {
// contains filtered or unexported fields
}
ConfirmationCallback pauses execution before selected tools so an external UI, API, or human can approve or deny the action.
func NewConfirmationCallback ¶ added in v1.0.1
func NewConfirmationCallback(policy ApprovalPolicy) ConfirmationCallback
NewConfirmationCallback creates a before-tool callback backed by an approval policy.
func (ConfirmationCallback) BeforeTool ¶ added in v1.0.1
func (c ConfirmationCallback) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
BeforeTool requests approval for matching tools and redacts sensitive arguments before they are exposed in the interaction payload.
func (ConfirmationCallback) WithLogger ¶ added in v1.0.1
func (c ConfirmationCallback) WithLogger(logger *slog.Logger) ConfirmationCallback
WithLogger attaches structured approval lifecycle logs.
type ContextOptimizer ¶ added in v1.0.1
type ContextOptimizer struct {
// contains filtered or unexported fields
}
ContextOptimizer applies a list of optimization strategies once a token threshold has been exceeded, letting callers compress or summarize history before the next model call.
func NewContextOptimizer ¶ added in v1.0.1
func NewContextOptimizer(counter TokenCounter, threshold int, strategies ...OptimizationStrategy) *ContextOptimizer
NewContextOptimizer builds a request mutator that conditionally applies optimization strategies when a request grows past the configured budget.
func (*ContextOptimizer) Mutate ¶ added in v1.0.1
Mutate applies optimization strategies when the request exceeds the configured threshold.
func (*ContextOptimizer) WithLogger ¶ added in v1.0.1
func (o *ContextOptimizer) WithLogger(logger *slog.Logger) *ContextOptimizer
WithLogger attaches structured optimization logs.
type DuplicateChecker ¶ added in v1.0.1
type DuplicateChecker interface {
IsDuplicate(memory TaskMemory, existing []TaskMemory) bool
}
DuplicateChecker decides whether a memory is already represented in the store.
type DynamicToolsCallback ¶ added in v1.0.2
type DynamicToolsCallback func(*ExecutionContext) []model.ToolDefinition
DynamicToolsCallback can tailor the tool list visible to the LLM for the next turn.
type Embedder ¶ added in v1.0.1
Embedder converts one or more texts into vectors for semantic search.
type EvidenceCollector ¶ added in v1.0.2
type EvidenceCollector interface {
Evidence() []EvidenceItem
}
EvidenceCollector exposes the current set of gathered evidence items.
type EvidenceItem ¶ added in v1.0.2
EvidenceItem represents one supporting fact that can justify a final answer.
type EvidenceTracker ¶ added in v1.0.2
type EvidenceTracker struct {
// contains filtered or unexported fields
}
EvidenceTracker records supporting evidence from successful tool results.
func NewEvidenceTracker ¶ added in v1.0.2
func NewEvidenceTracker(mapper func(model.ToolResult) (EvidenceItem, bool)) *EvidenceTracker
NewEvidenceTracker creates a reusable evidence tracker with a caller-provided mapper.
func (*EvidenceTracker) AfterTool ¶ added in v1.0.2
func (t *EvidenceTracker) AfterTool(_ context.Context, _ *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
AfterTool implements AfterToolCallback.
func (*EvidenceTracker) Evidence ¶ added in v1.0.2
func (t *EvidenceTracker) Evidence() []EvidenceItem
Evidence returns the recorded evidence items.
type ExecutionContext ¶
type ExecutionContext struct {
// contains filtered or unexported fields
}
ExecutionContext is the central mutable state for one agent run. It records all Events across steps and holds the final result once the agent produces a terminal response. All public methods are safe for concurrent use.
func NewExecutionContextForTest ¶
func NewExecutionContextForTest() *ExecutionContext
NewExecutionContextForTest exposes newExecutionContext for white-box unit tests.
func (*ExecutionContext) AddEvent ¶
func (ec *ExecutionContext) AddEvent(author string, content ...model.ContentItem)
AddEvent appends an event authored by author with the given content items. ID and Timestamp are generated automatically. Safe for concurrent use.
func (*ExecutionContext) CurrentStep ¶
func (ec *ExecutionContext) CurrentStep() int
CurrentStep returns the current step index. Safe for concurrent use.
func (*ExecutionContext) Done ¶
func (ec *ExecutionContext) Done() bool
Done reports whether the agent has produced a final answer. Safe for concurrent use.
func (*ExecutionContext) Events ¶
func (ec *ExecutionContext) Events() []model.Event
Events returns a defensive copy of the event log. Each Event's Content slice is independently copied so callers cannot corrupt internal state by mutating returned slices. Safe for concurrent use.
func (*ExecutionContext) FinalResult ¶
func (ec *ExecutionContext) FinalResult() (string, bool)
FinalResult returns the agent's final answer and true once Done() is true. Returns ("", false) if the agent has not finished yet. Safe for concurrent use.
func (*ExecutionContext) GetState ¶
func (ec *ExecutionContext) GetState(key string) (any, bool)
GetState retrieves a value from the run-scoped key-value store. Safe for concurrent use.
func (*ExecutionContext) ID ¶
func (ec *ExecutionContext) ID() string
ID returns the unique identifier for this execution. Safe for concurrent use.
func (*ExecutionContext) IncrementStep ¶
func (ec *ExecutionContext) IncrementStep()
IncrementStep advances the step counter by one. Safe for concurrent use.
func (*ExecutionContext) InteractionResponse ¶ added in v1.0.1
func (ec *ExecutionContext) InteractionResponse(requestID string) (InteractionResponse, bool)
InteractionResponse returns a previously supplied external response, if present.
func (*ExecutionContext) PendingInteraction ¶ added in v1.0.1
func (ec *ExecutionContext) PendingInteraction() (*InteractionRequest, bool)
PendingInteraction returns the active external interaction request, if any.
func (*ExecutionContext) SetState ¶
func (ec *ExecutionContext) SetState(key string, value any)
SetState stores a value in the run-scoped key-value store. Safe for concurrent use.
type FinalAnswerCallback ¶ added in v1.0.2
type FinalAnswerCallback interface {
BeforeFinalAnswer(ctx context.Context, execCtx *ExecutionContext, answer string) error
}
FinalAnswerCallback can reject a proposed final answer before the agent ends the run. Returning an error keeps the loop alive and lets the caller inject a corrective message back into the conversation.
type HybridRetriever ¶ added in v1.0.2
type HybridRetriever interface {
Retrieve(context.Context, string, int) ([]RetrievalCandidate, error)
}
HybridRetriever returns a ranked candidate list for a query.
Implementations commonly combine lexical search, semantic search, metadata filters, or any other retrieval signal behind one method.
type InMemorySessionManager ¶ added in v1.0.1
type InMemorySessionManager struct {
// contains filtered or unexported fields
}
InMemorySessionManager is a thread-safe in-process SessionManager.
func NewInMemorySessionManager ¶ added in v1.0.1
func NewInMemorySessionManager() *InMemorySessionManager
NewInMemorySessionManager creates an empty in-memory session store.
func (*InMemorySessionManager) Create ¶ added in v1.0.1
func (m *InMemorySessionManager) Create(sessionID, userID string) (*Session, error)
Create inserts a new session for the given session and user identifiers.
func (*InMemorySessionManager) Get ¶ added in v1.0.1
func (m *InMemorySessionManager) Get(sessionID string) (*Session, error)
Get loads a previously saved session, or nil when it does not exist.
func (*InMemorySessionManager) GetOrCreate ¶ added in v1.0.1
func (m *InMemorySessionManager) GetOrCreate(sessionID, userID string) (*Session, error)
GetOrCreate returns an existing session for the same user or creates one.
func (*InMemorySessionManager) Save ¶ added in v1.0.1
func (m *InMemorySessionManager) Save(session *Session) error
Save upserts the supplied session snapshot.
type InMemoryVectorStore ¶ added in v1.0.1
type InMemoryVectorStore struct {
// contains filtered or unexported fields
}
InMemoryVectorStore is a thread-safe in-process vector store.
func NewInMemoryVectorStore ¶ added in v1.0.1
func NewInMemoryVectorStore() *InMemoryVectorStore
NewInMemoryVectorStore creates an empty in-memory vector store.
func (*InMemoryVectorStore) Add ¶ added in v1.0.1
func (s *InMemoryVectorStore) Add(_ context.Context, docs []VectorDocument) error
Add appends new vector documents to the store.
func (*InMemoryVectorStore) Search ¶ added in v1.0.1
func (s *InMemoryVectorStore) Search(_ context.Context, query []float64, topK int) ([]VectorDocument, error)
Search returns the top-K documents ranked by cosine similarity.
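The ranking Search documents — cosine similarity, highest first, truncated to K — can be sketched with stand-in types (the real store's VectorDocument fields are not assumed here):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

type doc struct {
	ID     string
	Vector []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK ranks docs by cosine similarity to the query, highest first.
func topK(query []float64, docs []doc, k int) []doc {
	out := append([]doc(nil), docs...)
	sort.SliceStable(out, func(i, j int) bool {
		return cosine(query, out[i].Vector) > cosine(query, out[j].Vector)
	})
	if k < len(out) {
		out = out[:k]
	}
	return out
}

func main() {
	docs := []doc{
		{"a", []float64{1, 0}},
		{"b", []float64{0, 1}},
		{"c", []float64{0.9, 0.1}},
	}
	for _, d := range topK([]float64{1, 0}, docs, 2) {
		fmt.Println(d.ID)
	}
}
```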
type InteractionRequest ¶ added in v1.0.1
type InteractionRequest struct {
ID string
Kind string
Prompt string
ToolCallID string
ToolName string
Payload map[string]any
}
InteractionRequest describes a question that must be answered from outside the agent before the run can continue.
Common examples are "approve this tool call?" or "which account should be used for this action?"
type InteractionRequestedError ¶ added in v1.0.1
type InteractionRequestedError struct {
Suspended SuspendedRun
}
InteractionRequestedError exposes a suspended run while still matching ErrInteractionRequested.
func (*InteractionRequestedError) Error ¶ added in v1.0.1
func (e *InteractionRequestedError) Error() string
func (*InteractionRequestedError) Unwrap ¶ added in v1.0.1
func (e *InteractionRequestedError) Unwrap() error
type InteractionRequestedEvent ¶ added in v1.0.1
type InteractionRequestedEvent struct {
RunID string
Step int
Request InteractionRequest
}
InteractionRequestedEvent is emitted when the agent suspends to await external input.
type InteractionResponse ¶ added in v1.0.1
type InteractionResponse struct {
RequestID string
Approved *bool
Value string
Metadata map[string]any
}
InteractionResponse carries the external answer to a pending interaction request.
type InteractionResumedEvent ¶ added in v1.0.1
type InteractionResumedEvent struct {
RunID string
Step int
Response InteractionResponse
}
InteractionResumedEvent is emitted when a suspended interaction receives a response.
type LLMCallEvent ¶
LLMCallEvent is emitted after every Generate() call, including on error. Latency covers only the LLM network round-trip. The full request and response content are available via result.Context.Events().
type LLMClient ¶
type LLMClient interface {
Generate(ctx context.Context, req model.Request) (model.Response, error)
}
LLMClient abstracts communication with a language model. Implement this interface to support any LLM provider.
type LiteLLMClient ¶
type LiteLLMClient struct {
// contains filtered or unexported fields
}
LiteLLMClient adapts the openai-go client to the LLMClient interface. Works with OpenAI directly or with a LiteLLM proxy (same API surface).
func NewLiteLLMClient ¶
func NewLiteLLMClient(client *openai.Client, model openai.ChatModel) *LiteLLMClient
NewLiteLLMClient creates a LiteLLMClient wrapping the provided openai-go client.
type LiveEventSink ¶ added in v1.0.2
type LiveEventSink func(AgentEvent)
LiveEventSink consumes agent events as they are emitted during a run.
Unlike the replayable observable returned by Agent.Run, a live sink receives events while the run is still in progress. Sinks should stay lightweight and non-blocking because they execute on the event collector goroutine.
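Because sinks execute on the collector goroutine, a sink whose handler might block should hand events off to its own goroutine. A minimal buffered hand-off, using a hypothetical event type in place of AgentEvent, might look like:

```go
package main

import (
	"fmt"
	"sync"
)

// event stands in for agent.AgentEvent.
type event struct{ Kind string }

// newAsyncSink returns a sink that enqueues events on a buffered
// channel drained by its own goroutine, so the event collector never
// blocks on a slow handler. When the buffer is full, events are
// dropped rather than blocking, which is usually acceptable for
// logging and metrics.
func newAsyncSink(handle func(event)) (sink func(event), done func()) {
	ch := make(chan event, 128)
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for e := range ch {
			handle(e)
		}
	}()
	sink = func(e event) {
		select {
		case ch <- e:
		default: // buffer full: drop instead of blocking the collector
		}
	}
	done = func() {
		close(ch)
		wg.Wait()
	}
	return sink, done
}

func main() {
	var seen []string
	sink, done := newAsyncSink(func(e event) { seen = append(seen, e.Kind) })
	sink(event{"llm_call"})
	sink(event{"tool_exec"})
	done()
	fmt.Println(seen)
}
```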
type MemoryInjector ¶ added in v1.0.1
type MemoryInjector struct {
// contains filtered or unexported fields
}
MemoryInjector adds retrieved long-term memories to the prompt instructions so the model can reuse prior approaches instead of starting from scratch.
func NewMemoryInjector ¶ added in v1.0.1
func NewMemoryInjector(searcher MemorySearcher, topK int) MemoryInjector
NewMemoryInjector creates a request mutator backed by semantic memory search.
type MemorySearcher ¶ added in v1.0.1
type MemorySearcher interface {
Search(ctx context.Context, query string, topK int) ([]TaskMemory, error)
}
MemorySearcher retrieves similar task memories for prompt injection.
type MemoryWriteDecision ¶ added in v1.0.2
MemoryWriteDecision captures whether a long-term memory record should be stored.
type MemoryWritePolicy ¶ added in v1.0.2
type MemoryWritePolicy interface {
Decide(context.Context, TaskMemory) (MemoryWriteDecision, error)
}
MemoryWritePolicy decides whether a task memory is worth persisting.
type MutatingLLMClient ¶ added in v1.0.1
type MutatingLLMClient struct {
// contains filtered or unexported fields
}
MutatingLLMClient applies request mutators and then forwards the request to another LLMClient.
func NewMutatingLLMClient ¶ added in v1.0.1
func NewMutatingLLMClient(delegate LLMClient, mutators ...RequestMutator) *MutatingLLMClient
NewMutatingLLMClient wraps an LLMClient with one or more request mutators.
type Observation ¶ added in v1.0.2
Observation is a tool result captured during synthesis.
type OptimizationStrategy ¶ added in v1.0.1
OptimizationStrategy rewrites a request to reduce context size or noise.
type PlanRevision ¶ added in v1.0.2
PlanRevision represents a captured planning snapshot.
type PlanRevisionEvent ¶ added in v1.0.2
type PlanRevisionEvent struct {
RunID string
Step int
Revision PlanRevision
}
PlanRevisionEvent is emitted when a planning tool call records a new revision.
type PlanTask ¶ added in v1.0.2
type PlanTask struct {
Content string
Status PlanTaskStatus
}
PlanTask represents a single item in a planning tool result.
func ParsePlanTasks ¶ added in v1.0.2
ParsePlanTasks unmarshals the JSON payload used by the create_tasks tool into the typed plan representation.
type PlanTaskStatus ¶ added in v1.0.2
type PlanTaskStatus string
PlanTaskStatus describes the execution state of a task in a generated plan.
const (
	// PlanTaskPending is used for tasks that have not started yet.
	PlanTaskPending PlanTaskStatus = "pending"
	// PlanTaskInProgress is used for the next task the agent should work on.
	PlanTaskInProgress PlanTaskStatus = "in_progress"
	// PlanTaskCompleted is used for tasks that are already finished.
	PlanTaskCompleted PlanTaskStatus = "completed"
)
type PlanningExecutor ¶ added in v1.0.2
type PlanningExecutor struct {
// contains filtered or unexported fields
}
PlanningExecutor executes create_tasks calls, returning the formatted plan and capturing each plan snapshot for later inspection. Unknown tools can be delegated to another executor when composition is needed.
func NewPlanningExecutor ¶ added in v1.0.2
func NewPlanningExecutor(delegate model.ToolExecutor) *PlanningExecutor
NewPlanningExecutor creates a PlanningExecutor that optionally delegates non-planning tools to another executor.
func (*PlanningExecutor) Execute ¶ added in v1.0.2
func (e *PlanningExecutor) Execute(ctx context.Context, calls []model.ToolCall) ([]model.ToolResult, error)
Execute handles planning tool calls directly and delegates all other calls to the underlying executor when present.
func (*PlanningExecutor) LatestPlan ¶ added in v1.0.2
func (e *PlanningExecutor) LatestPlan() (string, bool)
LatestPlan returns the most recently captured plan snapshot.
func (*PlanningExecutor) Plans ¶ added in v1.0.2
func (e *PlanningExecutor) Plans() []string
Plans returns the recorded formatted plan snapshots.
func (*PlanningExecutor) Revisions ¶ added in v1.0.2
func (e *PlanningExecutor) Revisions() []PlanRevision
Revisions returns the recorded planning snapshots in a typed form that is easier to reuse in policies, tests, and observers.
func (*PlanningExecutor) TaskCounts ¶ added in v1.0.2
func (e *PlanningExecutor) TaskCounts() []int
TaskCounts returns the recorded task count for each captured plan snapshot.
func (*PlanningExecutor) WithObservers ¶ added in v1.0.2
func (e *PlanningExecutor) WithObservers(observers ...PlanningObserver) *PlanningExecutor
WithObservers registers planning observers that are notified for each new captured revision.
type PlanningObserver ¶ added in v1.0.2
type PlanningObserver interface {
OnPlanRevision(revision PlanRevision)
}
PlanningObserver receives each captured planning revision as it is recorded.
type PlanningPolicy ¶ added in v1.0.2
type PlanningPolicy struct {
// contains filtered or unexported fields
}
PlanningPolicy enforces a minimum number of planning revisions before the agent can return a final answer.
func NewPlanningPolicy ¶ added in v1.0.2
func NewPlanningPolicy(source planRevisionSource, minRevisions int) *PlanningPolicy
NewPlanningPolicy creates a reusable planning-specific final-answer policy. A minimum revision count of zero or less defaults to two revisions.
func (*PlanningPolicy) BeforeFinalAnswer ¶ added in v1.0.2
func (p *PlanningPolicy) BeforeFinalAnswer(_ context.Context, _ *ExecutionContext, _ string) error
BeforeFinalAnswer implements FinalAnswerCallback.
type PlanningReflectionEvent ¶ added in v1.0.2
type PlanningReflectionEvent struct {
RunID string
Step int
Kind PlanningReflectionEventKind
Content string
}
PlanningReflectionEvent is emitted when the unified planning/reflection tracker detects insufficient progress or records a reflection about plan revision needs.
type PlanningReflectionEventKind ¶ added in v1.0.2
type PlanningReflectionEventKind string
PlanningReflectionEventKind describes the type of planning/reflection event.
const (
	PlanningReflectionEventInsufficientProgress PlanningReflectionEventKind = "insufficient_progress"
	PlanningReflectionEventStagnationObserved   PlanningReflectionEventKind = "stagnation_observed"
	PlanningReflectionEventReflectionRecorded   PlanningReflectionEventKind = "reflection_recorded"
	PlanningReflectionEventRevisionNeeded       PlanningReflectionEventKind = "revision_needed"
	PlanningReflectionEventRevisionResolved     PlanningReflectionEventKind = "revision_resolved"
)
type PlanningReflectionOption ¶ added in v1.0.2
type PlanningReflectionOption func(*PlanningReflectionTracker)
PlanningReflectionOption configures a PlanningReflectionTracker.
func WithPlanningReflectionStagnationThreshold ¶ added in v1.0.2
func WithPlanningReflectionStagnationThreshold(n int) PlanningReflectionOption
WithPlanningReflectionStagnationThreshold enables stagnation-aware reflection after repeated planning-only revisions without meaningful progress.
type PlanningReflectionPolicy ¶ added in v1.0.2
type PlanningReflectionPolicy struct {
// contains filtered or unexported fields
}
PlanningReflectionPolicy enforces that the agent revises its plan after an early draft answer or a stagnation-triggered reflection before it can finalize.
func NewPlanningReflectionPolicy ¶ added in v1.0.2
func NewPlanningReflectionPolicy(source planRevisionSource, tracker *PlanningReflectionTracker, minRevisions int) *PlanningReflectionPolicy
NewPlanningReflectionPolicy creates a unified planning/reflection policy.
func (*PlanningReflectionPolicy) BeforeFinalAnswer ¶ added in v1.0.2
func (p *PlanningReflectionPolicy) BeforeFinalAnswer(ctx context.Context, execCtx *ExecutionContext, answer string) error
BeforeFinalAnswer implements FinalAnswerCallback.
type PlanningReflectionTracker ¶ added in v1.0.2
type PlanningReflectionTracker struct {
// contains filtered or unexported fields
}
PlanningReflectionTracker coordinates plan revision after either an early answer or repeated planning-only stagnation.
func NewPlanningReflectionTracker ¶ added in v1.0.2
func NewPlanningReflectionTracker(options ...PlanningReflectionOption) *PlanningReflectionTracker
NewPlanningReflectionTracker creates a new unified planning/reflection tracker.
func (*PlanningReflectionTracker) AfterTool ¶ added in v1.0.2
func (t *PlanningReflectionTracker) AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
AfterTool records when planning stagnates and when a later plan revision resolves a previously requested revision.
func (*PlanningReflectionTracker) BeforeTool ¶ added in v1.0.2
func (t *PlanningReflectionTracker) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
BeforeTool blocks continued tool churn while a reflection is required.
func (*PlanningReflectionTracker) LatestReflection ¶ added in v1.0.2
func (t *PlanningReflectionTracker) LatestReflection() string
LatestReflection returns the most recently recorded planning reflection text.
func (*PlanningReflectionTracker) NeedsReflection ¶ added in v1.0.2
func (t *PlanningReflectionTracker) NeedsReflection() bool
NeedsReflection reports whether the agent must reflect before continuing.
func (*PlanningReflectionTracker) NeedsRevision ¶ added in v1.0.2
func (t *PlanningReflectionTracker) NeedsRevision() bool
NeedsRevision reports whether the current plan still must be revised before a final answer can be accepted.
func (*PlanningReflectionTracker) RecordReflection ¶ added in v1.0.2
func (t *PlanningReflectionTracker) RecordReflection(ctx context.Context, execCtx *ExecutionContext, reflection string, revisions int)
RecordReflection stores explicit reflection after a stagnation block and then requires a revised plan before the final answer.
type PolicyDecision ¶ added in v1.0.2
type PolicyDecision string
PolicyDecision is the outcome of a final-answer policy check.
const (
	PolicyDecisionAccept PolicyDecision = "accept"
	PolicyDecisionReject PolicyDecision = "reject"
)
type PolicyEvent ¶ added in v1.0.2
type PolicyEvent struct {
RunID string
Step int
PolicyName string
Decision PolicyDecision
Answer string
Reason string
Latency time.Duration
}
PolicyEvent is emitted after a final-answer callback evaluates a proposed answer. It makes policy decisions observable in the same stream as other agent lifecycle events.
type RecoveryAttempt ¶ added in v1.0.2
RecoveryAttempt captures a successful retry after a previous failure.
type RecoveryEvent ¶ added in v1.0.2
type RecoveryEvent struct {
RunID string
Step int
Kind RecoveryEventKind
ToolCallID string
ToolName string
Reason string
}
RecoveryEvent is emitted when a recovery tracker observes a failed tool result or a successful retry.
type RecoveryEventKind ¶ added in v1.0.2
type RecoveryEventKind string
RecoveryEventKind describes where the agent is in an error-recovery flow.
const (
	RecoveryEventFailureObserved    RecoveryEventKind = "failure_observed"
	RecoveryEventRecovered          RecoveryEventKind = "recovered"
	RecoveryEventReflectionRecorded RecoveryEventKind = "reflection_recorded"
)
type RecoveryFailure ¶ added in v1.0.2
RecoveryFailure captures a failed tool result that may require reflection and a retry before the agent can finish.
type RecoveryPolicy ¶ added in v1.0.2
type RecoveryPolicy struct {
// contains filtered or unexported fields
}
RecoveryPolicy blocks final answers while unresolved tool failures remain.
func NewRecoveryPolicy ¶ added in v1.0.2
func NewRecoveryPolicy(source recoveryStateSource) *RecoveryPolicy
NewRecoveryPolicy creates a reusable recovery policy.
func (*RecoveryPolicy) BeforeFinalAnswer ¶ added in v1.0.2
func (p *RecoveryPolicy) BeforeFinalAnswer(ctx context.Context, execCtx *ExecutionContext, answer string) error
BeforeFinalAnswer implements FinalAnswerCallback.
type RecoveryTracker ¶ added in v1.0.2
type RecoveryTracker struct {
// contains filtered or unexported fields
}
RecoveryTracker records failed tool results and successful retries. It can be plugged directly into the agent as an AfterToolCallback.
func NewRecoveryTracker ¶ added in v1.0.2
func NewRecoveryTracker() *RecoveryTracker
NewRecoveryTracker creates a reusable recovery tracker.
func (*RecoveryTracker) AfterTool ¶ added in v1.0.2
func (r *RecoveryTracker) AfterTool(ctx context.Context, execCtx *ExecutionContext, result model.ToolResult) (*model.ToolResult, error)
AfterTool records failed tool results and successful recovery attempts.
func (*RecoveryTracker) Attempts ¶ added in v1.0.2
func (r *RecoveryTracker) Attempts() []RecoveryAttempt
Attempts returns the recorded successful recovery attempts.
func (*RecoveryTracker) BeforeTool ¶ added in v1.0.2
func (r *RecoveryTracker) BeforeTool(_ context.Context, execCtx *ExecutionContext, call model.ToolCall) (*model.ToolResult, error)
BeforeTool blocks retries until a reflection message has been recorded after a failure.
func (*RecoveryTracker) Failures ¶ added in v1.0.2
func (r *RecoveryTracker) Failures() []RecoveryFailure
Failures returns the recorded failures.
func (*RecoveryTracker) HasUnresolvedFailures ¶ added in v1.0.2
func (r *RecoveryTracker) HasUnresolvedFailures() bool
HasUnresolvedFailures reports whether any tool failures still lack a successful follow-up attempt.
func (*RecoveryTracker) LatestReflection ¶ added in v1.0.2
func (r *RecoveryTracker) LatestReflection() string
LatestReflection returns the most recently recorded reflection text.
func (*RecoveryTracker) RecordReflection ¶ added in v1.0.2
func (r *RecoveryTracker) RecordReflection(ctx context.Context, execCtx *ExecutionContext, reflection string) error
RecordReflection stores the recovery reflection text and clears the reflection requirement.
func (*RecoveryTracker) RequiresReflection ¶ added in v1.0.2
func (r *RecoveryTracker) RequiresReflection() bool
RequiresReflection reports whether a reflection message is still required before retrying.
type RequestMutator ¶ added in v1.0.1
RequestMutator rewrites a request immediately before the delegated LLM call.
func WithMutatorLogger ¶ added in v1.0.1
func WithMutatorLogger(mutator RequestMutator, logger *slog.Logger) RequestMutator
WithMutatorLogger wraps a request mutator with structured start/finish logging.
type RequestTokenCounter ¶ added in v1.0.1
type RequestTokenCounter struct {
// contains filtered or unexported fields
}
RequestTokenCounter uses tiktoken-compatible tokenization to count request size.
func NewRequestTokenCounter ¶ added in v1.0.1
func NewRequestTokenCounter(modelName string) (*RequestTokenCounter, error)
NewRequestTokenCounter constructs a token counter for the given model name.
type Reranker ¶ added in v1.0.2
type Reranker interface {
Rerank(context.Context, string, []RetrievalCandidate, int) ([]RetrievalCandidate, error)
}
Reranker makes a second, usually more precise pass over an existing candidate set for a query.
A common pattern is "retrieve 20 quickly, rerank to 5 carefully".
type Result ¶
type Result struct {
// Output is the final answer produced by the LLM.
Output string
// ToolCalled reports whether at least one tool was invoked during the run.
ToolCalled bool
// Context is the full execution history for this run.
Context *ExecutionContext
}
Result is the output of a successful Agent.Run() call.
type RetrievalCandidate ¶ added in v1.0.2
type RetrievalCandidate struct {
ID string
Content string
Metadata map[string]string
Score float64
}
RetrievalCandidate is a normalized shortlist item that gives lexical, semantic, and hybrid retrieval stages a common shape before reranking.
type RunEndEvent ¶
RunEndEvent is emitted once when Run returns, whether it succeeded or failed. Result is nil when Err is non-nil.
type RunResult ¶ added in v1.0.1
type RunResult struct {
Output string
ToolCalled bool
Status RunStatus
SessionID string
PendingInteraction *InteractionRequest
}
RunResult summarizes one SessionRunner invocation.
type RunStartEvent ¶
RunStartEvent is emitted once before the ReAct loop begins.
type RunStatus ¶ added in v1.0.1
type RunStatus string
RunStatus reports whether a session run finished or is waiting for input.
type Session ¶ added in v1.0.1
type Session struct {
SessionID string
UserID string
Events []model.Event
State map[string]any
CreatedAt time.Time
UpdatedAt time.Time
}
Session stores the persisted conversation history and runner state for one user-facing conversation.
type SessionManager ¶ added in v1.0.1
type SessionManager interface {
Create(sessionID, userID string) (*Session, error)
Get(sessionID string) (*Session, error)
Save(session *Session) error
GetOrCreate(sessionID, userID string) (*Session, error)
}
SessionManager persists and reloads sessions for SessionRunner.
func NewPersistedSessionManager ¶ added in v1.0.2
func NewPersistedSessionManager(persister SessionPersister) SessionManager
NewPersistedSessionManager adapts a SessionPersister to the SessionManager interface.
type SessionPersister ¶ added in v1.0.2
type SessionPersister interface {
SaveSession(context.Context, Session) error
LoadSession(context.Context, string) (Session, error)
}
SessionPersister stores raw session snapshots for durable runners.
type SessionRunner ¶ added in v1.0.1
type SessionRunner struct {
// contains filtered or unexported fields
}
SessionRunner replays prior events from a session, executes the agent loop, and persists the updated state after each run or resume so conversations can continue across separate calls.
func NewSessionRunner ¶ added in v1.0.1
func NewSessionRunner(agent *Agent, sessions SessionManager, maxSteps int) *SessionRunner
NewSessionRunner builds a session-aware wrapper around Agent for chat-style or workflow-style conversations that span multiple turns.
func (*SessionRunner) Resume ¶ added in v1.0.1
func (r *SessionRunner) Resume(ctx context.Context, sessionID, userID string, response InteractionResponse) (*RunResult, error)
Resume continues a previously suspended session using an external response.
func (*SessionRunner) Run ¶ added in v1.0.1
func (r *SessionRunner) Run(ctx context.Context, sessionID, userID, userInput string) (*RunResult, error)
Run appends the new user input to the stored conversation, executes until the run completes or suspends, and then persists the updated session state.
func (*SessionRunner) WithLogger ¶ added in v1.0.1
func (r *SessionRunner) WithLogger(logger *slog.Logger) *SessionRunner
WithLogger attaches structured lifecycle logging to the runner.
type SimpleDuplicateChecker ¶ added in v1.0.1
type SimpleDuplicateChecker struct{}
SimpleDuplicateChecker treats an identical TaskMemory payload as a duplicate.
func (SimpleDuplicateChecker) IsDuplicate ¶ added in v1.0.1
func (SimpleDuplicateChecker) IsDuplicate(memory TaskMemory, existing []TaskMemory) bool
IsDuplicate reports whether memory matches any candidate exactly.
type SlidingWindowStrategy ¶ added in v1.0.1
type SlidingWindowStrategy struct {
// contains filtered or unexported fields
}
SlidingWindowStrategy keeps the latest user message plus a bounded tail of recent events.
func NewSlidingWindowStrategy ¶ added in v1.0.1
func NewSlidingWindowStrategy(windowSize int) SlidingWindowStrategy
NewSlidingWindowStrategy creates a sliding-window optimizer.
type StablePrefixDetector ¶ added in v1.0.2
StablePrefixDetector identifies cache-friendly prefixes in evolving requests.
type StaticApprovalPolicy ¶ added in v1.0.1
type StaticApprovalPolicy map[string]ApprovalRule
StaticApprovalPolicy maps tool names directly to approval rules.
func (StaticApprovalPolicy) RuleForTool ¶ added in v1.0.1
func (p StaticApprovalPolicy) RuleForTool(name string) (ApprovalRule, bool)
RuleForTool returns the configured rule for name, if any.
type StepEndEvent ¶
StepEndEvent is emitted after each Think→Act cycle. Err is non-nil when the step failed.
type StepStartEvent ¶
StepStartEvent is emitted at the beginning of each Think→Act cycle.
type SummarizationStrategy ¶ added in v1.0.1
type SummarizationStrategy struct {
// contains filtered or unexported fields
}
SummarizationStrategy replaces older middle-history events with a generated summary.
func NewSummarizationStrategy ¶ added in v1.0.1
func NewSummarizationStrategy(generator SummaryGenerator, keepRecent int) SummarizationStrategy
NewSummarizationStrategy creates a summary-based optimization strategy.
type SummaryGenerator ¶ added in v1.0.1
type SummaryGenerator interface {
Summarize(ctx context.Context, events []model.Event) (string, error)
}
SummaryGenerator compresses older events into a summary string.
type SuspendedRun ¶ added in v1.0.1
type SuspendedRun struct {
Context *ExecutionContext
Interaction InteractionRequest
}
SuspendedRun contains the paused execution state plus the interaction that must be answered before the run can resume.
type SynthesisEvent ¶ added in v1.0.2
type SynthesisEvent struct {
RunID string
Step int
Kind SynthesisEventKind
ToolCallID string
ToolName string
Content string
}
SynthesisEvent is emitted when a synthesis tracker records an observation or completes a synthesis.
type SynthesisEventKind ¶ added in v1.0.2
type SynthesisEventKind string
SynthesisEventKind describes the type of synthesis event.
const (
	SynthesisEventObservationRecorded SynthesisEventKind = "observation_recorded"
	SynthesisEventSynthesisComplete   SynthesisEventKind = "synthesis_complete"
)
type SynthesisPolicy ¶ added in v1.0.2
type SynthesisPolicy struct {
// contains filtered or unexported fields
}
SynthesisPolicy blocks final answers while analysis remains incomplete.
func NewSynthesisPolicy ¶ added in v1.0.2
func NewSynthesisPolicy(source synthesisStateSource) *SynthesisPolicy
NewSynthesisPolicy creates a reusable synthesis policy.
func (*SynthesisPolicy) BeforeFinalAnswer ¶ added in v1.0.2
func (p *SynthesisPolicy) BeforeFinalAnswer(
	ctx context.Context,
	execCtx *ExecutionContext,
	_ string,
) error
BeforeFinalAnswer implements FinalAnswerCallback.
type SynthesisRecord ¶ added in v1.0.2
type SynthesisRecord struct {
Observations []Observation
}
SynthesisRecord captures a completed synthesis with its observations.
type SynthesisTracker ¶ added in v1.0.2
type SynthesisTracker struct {
// contains filtered or unexported fields
}
SynthesisTracker records tool observations and tracks synthesis completion. It can be plugged directly into the agent as an AfterToolCallback.
func NewSynthesisTracker ¶ added in v1.0.2
func NewSynthesisTracker() *SynthesisTracker
NewSynthesisTracker creates a reusable synthesis tracker.
func (*SynthesisTracker) AfterTool ¶ added in v1.0.2
func (s *SynthesisTracker) AfterTool(
	ctx context.Context,
	execCtx *ExecutionContext,
	result model.ToolResult,
) (*model.ToolResult, error)
AfterTool records tool results as observations.
func (*SynthesisTracker) HasIncompleteAnalysis ¶ added in v1.0.2
func (s *SynthesisTracker) HasIncompleteAnalysis() bool
HasIncompleteAnalysis reports whether analysis remains incomplete.
func (*SynthesisTracker) MarkSynthesisComplete ¶ added in v1.0.2
func (s *SynthesisTracker) MarkSynthesisComplete(ctx context.Context, execCtx *ExecutionContext) error
MarkSynthesisComplete marks the current synthesis as complete and starts tracking the next one.
func (*SynthesisTracker) Observations ¶ added in v1.0.2
func (s *SynthesisTracker) Observations() []Observation
Observations returns the current observations.
func (*SynthesisTracker) SynthesisHistory ¶ added in v1.0.2
func (s *SynthesisTracker) SynthesisHistory() []SynthesisRecord
SynthesisHistory returns all completed synthesis records.
type TaskMemory ¶ added in v1.0.1
type TaskMemory struct {
TaskSummary string
Approach string
FinalAnswer string
IsCorrect bool
ErrorAnalysis string
}
TaskMemory stores a reusable record of how a prior task was solved.
func (TaskMemory) EmbeddingText ¶ added in v1.0.1
func (m TaskMemory) EmbeddingText() string
EmbeddingText returns the text used to embed this memory for similarity search.
type TaskMemoryManager ¶ added in v1.0.1
type TaskMemoryManager struct {
// contains filtered or unexported fields
}
TaskMemoryManager saves and retrieves semantically indexed task memories.
Think of it as "we solved a similar problem before; bring that pattern back when a new request looks close enough."
func NewTaskMemoryManager ¶ added in v1.0.1
func NewTaskMemoryManager(embedder Embedder, store VectorStore, duplicateChecker DuplicateChecker) *TaskMemoryManager
NewTaskMemoryManager creates a semantic memory manager from its pluggable components.
func (*TaskMemoryManager) Save ¶ added in v1.0.1
func (m *TaskMemoryManager) Save(ctx context.Context, memory TaskMemory) (string, bool, error)
Save embeds, de-duplicates, and persists a task memory.
func (*TaskMemoryManager) Search ¶ added in v1.0.1
func (m *TaskMemoryManager) Search(ctx context.Context, query string, topK int) ([]TaskMemory, error)
Search retrieves similar task memories for a natural-language query.
func (*TaskMemoryManager) WithLogger ¶ added in v1.0.1
func (m *TaskMemoryManager) WithLogger(logger *slog.Logger) *TaskMemoryManager
WithLogger attaches structured save and search logs.
func (*TaskMemoryManager) WithWritePolicy ¶ added in v1.0.2
func (m *TaskMemoryManager) WithWritePolicy(policy MemoryWritePolicy) *TaskMemoryManager
WithWritePolicy attaches an optional policy that can skip low-value memories.
type ThresholdMemoryWritePolicy ¶ added in v1.0.2
type ThresholdMemoryWritePolicy struct {
// contains filtered or unexported fields
}
ThresholdMemoryWritePolicy stores only memories whose heuristic score meets the configured threshold.
func NewThresholdMemoryWritePolicy ¶ added in v1.0.2
func NewThresholdMemoryWritePolicy(threshold float64) ThresholdMemoryWritePolicy
NewThresholdMemoryWritePolicy creates a simple heuristic write policy.
func (ThresholdMemoryWritePolicy) Decide ¶ added in v1.0.2
func (p ThresholdMemoryWritePolicy) Decide(_ context.Context, memory TaskMemory) (MemoryWriteDecision, error)
Decide scores the supplied memory and reports whether it should be stored.
type TokenCounter ¶ added in v1.0.1
TokenCounter estimates the prompt size of a request before it is sent to the LLM.
type ToolExecEvent ¶
type ToolExecEvent struct {
RunID string
Step int
ToolNames []string
Latency time.Duration
Err error
}
ToolExecEvent is emitted after every executor.Execute() batch. ToolNames lists the names of tools that were called. Latency is 0 when the executor is nil.
type VectorDocument ¶ added in v1.0.1
type VectorDocument struct {
ID string
Vector []float64
Memory TaskMemory
}
VectorDocument binds an embedding to its original task memory payload.
type VectorStore ¶ added in v1.0.1
type VectorStore interface {
Add(ctx context.Context, docs []VectorDocument) error
Search(ctx context.Context, query []float64, topK int) ([]VectorDocument, error)
}
VectorStore persists vectorized memories and supports nearest-neighbor lookup.
type VerificationGate ¶ added in v1.0.2
type VerificationGate struct {
// contains filtered or unexported fields
}
VerificationGate blocks final answers until enough evidence has been gathered.
func NewVerificationGate ¶ added in v1.0.2
func NewVerificationGate(collector EvidenceCollector, minItems int, options ...VerificationOption) *VerificationGate
NewVerificationGate creates a reusable evidence gate.
func (*VerificationGate) BeforeFinalAnswer ¶ added in v1.0.2
func (g *VerificationGate) BeforeFinalAnswer(
	_ context.Context,
	_ *ExecutionContext,
	answer string,
) error
BeforeFinalAnswer implements FinalAnswerCallback.
func (*VerificationGate) BeforeTool ¶ added in v1.0.2
func (g *VerificationGate) BeforeTool(
	_ context.Context,
	execCtx *ExecutionContext,
	call model.ToolCall,
) (*model.ToolResult, error)
BeforeTool blocks further evidence gathering until an actionable reflection is recorded.
func (*VerificationGate) LatestReflection ¶ added in v1.0.2
func (g *VerificationGate) LatestReflection() string
LatestReflection returns the most recently recorded reflection, if any.
func (*VerificationGate) NeedsReflection ¶ added in v1.0.2
func (g *VerificationGate) NeedsReflection() bool
NeedsReflection reports whether the gate is waiting for an actionable reflection.
type VerificationOption ¶ added in v1.0.2
type VerificationOption func(*VerificationGate)
VerificationOption configures a VerificationGate.
func WithActionableVerificationReflection ¶ added in v1.0.2
func WithActionableVerificationReflection() VerificationOption
WithActionableVerificationReflection requires a short reflection before more evidence can be gathered after an insufficiently verified answer attempt.
Source Files
¶
- agent.go
- callbacks.go
- context.go
- doc.go
- interaction.go
- llm.go
- logging.go
- memory_approval.go
- memory_context.go
- memory_longterm.go
- memory_retrieval.go
- memory_sanitize.go
- memory_selective.go
- memory_session.go
- observer.go
- planning.go
- planning_policy.go
- planning_reflection.go
- planning_verification.go
- recovery.go
- result.go
- synthesis.go
Directories
¶
| Path | Synopsis |
|---|---|
| internal | |
| mcpadapter | Package mcpadapter bridges github.com/v8tix/mcp-toolkit with react-agent. |
| model | Package model contains the core data types shared across react-agent and its sub-packages. |