Documentation ¶
Overview ¶
Package embedded implements the EyrieClaw embedded agent — an agent that runs inside the Eyrie process as a goroutine rather than as a separate framework process. It calls LLM APIs directly and executes tools as Go functions, streaming responses through the standard adapter.ChatEvent model.
The provider layer is pluggable: LLMProvider is the interface, and OpenAICompatProvider is the first implementation (covers OpenRouter, OpenAI, Ollama, and any OpenAI-compatible endpoint).
Index ¶
- type AgentLoop
- type Event
- type KeyStore
- type LLMProvider
- type LogBuffer
- type LogEntry
- type LoopConfig
- type Message
- type OpenAICompatProvider
- type Response
- type SessionStore
- func (ss *SessionStore) Append(key string, msg Message) error
- func (ss *SessionStore) AppendMany(key string, msgs []Message) error
- func (ss *SessionStore) Clear(key string) error
- func (ss *SessionStore) Delete(key string) error
- func (ss *SessionStore) Get(key string) []Message
- func (ss *SessionStore) Keys() []string
- type Tool
- type ToolCall
- type ToolDef
- type ToolFunction
- type ToolRegistry
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AgentLoop ¶
type AgentLoop struct {
    // contains filtered or unexported fields
}
AgentLoop orchestrates the send-tool-repeat cycle for an embedded agent. It calls the LLM provider, executes tool calls, appends results, and repeats until the model produces a text-only response or a budget is hit.
func NewAgentLoop ¶
func NewAgentLoop(provider LLMProvider, tools *ToolRegistry, cfg LoopConfig, logBuf *LogBuffer) *AgentLoop
NewAgentLoop creates a loop with the given provider, tools, and config.
type Event ¶
type Event struct {
    Type    string // "delta", "tool_start", "tool_result", "done", "error"
    Content string
    Tool    string
    ToolID  string
    Args    map[string]any
    Output  string
    Success *bool
    Error   string
    // Usage stats (populated on "done" events)
    InputTokens  int
    OutputTokens int
}
Event is the embedded agent's internal event type, converted to adapter.ChatEvent by the EmbeddedAdapter. This avoids an import cycle between embedded and adapter packages.
type KeyStore ¶
type KeyStore struct {
    // contains filtered or unexported fields
}
KeyStore reads API keys from ~/.eyrie/keys.json and environment variables. Environment variables take precedence over the JSON file. Keys are looked up by provider name (e.g. "openrouter", "anthropic", "openai").
func NewKeyStore ¶
func NewKeyStore() *KeyStore
NewKeyStore creates a KeyStore, loading any existing keys from ~/.eyrie/keys.json. Missing or unreadable files are silently ignored.
type LLMProvider ¶
type LLMProvider interface {
    // Chat sends messages to the LLM and returns the complete response.
    Chat(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any) (*Response, error)
    // ChatStream sends messages and calls onDelta for each text chunk.
    // Tool calls are accumulated and returned in the final Response.
    ChatStream(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any, onDelta func(delta string)) (*Response, error)
}
LLMProvider is the interface for calling an LLM chat completions API. Implementations handle serialization, HTTP transport, and response parsing.
type LogBuffer ¶
type LogBuffer struct {
    // contains filtered or unexported fields
}
LogBuffer is a thread-safe ring buffer for embedded agent log entries. Uses a head index and count to avoid O(n) copy-shift on every Add().
func NewLogBuffer ¶
func NewLogBuffer(capacity int) *LogBuffer
NewLogBuffer creates a buffer with the given capacity.
type LoopConfig ¶
type LoopConfig struct {
    // MaxIterations caps the number of LLM round-trips per turn.
    // Default: 20. Prevents infinite tool-call loops.
    MaxIterations int
    // TurnTimeout is the hard deadline for an entire turn.
    // Default: 5 minutes.
    TurnTimeout time.Duration
    // ToolTimeout is the per-tool execution timeout.
    // Default: 60 seconds.
    ToolTimeout time.Duration
    // MaxContextTokens is the approximate token budget. When exceeded,
    // the oldest messages (after system) are dropped. 0 = no limit.
    MaxContextTokens int
}
LoopConfig holds the safety budgets for the agent loop.
func DefaultLoopConfig ¶
func DefaultLoopConfig() LoopConfig
DefaultLoopConfig returns conservative defaults.
type Message ¶
type Message struct {
    Role       string     `json:"role"` // "system", "user", "assistant", "tool"
    Content    string     `json:"content,omitempty"`
    ToolCalls  []ToolCall `json:"tool_calls,omitempty"`
    ToolCallID string     `json:"tool_call_id,omitempty"` // set when Role == "tool"
    Name       string     `json:"name,omitempty"`
}
Message represents a chat message in the OpenAI chat completions format.
type OpenAICompatProvider ¶
type OpenAICompatProvider struct {
    // contains filtered or unexported fields
}
OpenAICompatProvider implements LLMProvider using raw HTTP against any OpenAI-compatible chat completions endpoint. This avoids pulling in the openai-go SDK (which adds significant dependency weight) in favor of a lean implementation following PicoClaw's proven pattern.
func NewOpenAICompatProvider ¶
func NewOpenAICompatProvider(apiKey, apiBase string) *OpenAICompatProvider
NewOpenAICompatProvider creates a provider for the given base URL and key.
func (*OpenAICompatProvider) Chat ¶
func (p *OpenAICompatProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any) (*Response, error)
Chat sends a non-streaming chat completion request.
func (*OpenAICompatProvider) ChatStream ¶
func (p *OpenAICompatProvider) ChatStream(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any, onDelta func(delta string)) (*Response, error)
ChatStream sends a streaming chat completion request and calls onDelta for each text content chunk. Returns the accumulated response with all tool calls assembled.
type Response ¶
type Response struct {
    Content      string     `json:"content"`
    ToolCalls    []ToolCall `json:"tool_calls,omitempty"`
    FinishReason string     `json:"finish_reason"`
    InputTokens  int        `json:"input_tokens,omitempty"`
    OutputTokens int        `json:"output_tokens,omitempty"`
}
Response is the parsed result of an LLM chat completion.
type SessionStore ¶
type SessionStore struct {
    // contains filtered or unexported fields
}
SessionStore manages conversation history for embedded agents. Sessions are held in memory for fast access and persisted to JSONL files in the workspace so they survive Eyrie restarts.
func NewSessionStore ¶
func NewSessionStore(sessionDir string) *SessionStore
NewSessionStore creates a store backed by the given directory. Any existing JSONL session files are loaded into memory.
func (*SessionStore) Append ¶
func (ss *SessionStore) Append(key string, msg Message) error
Append adds a message to the session and persists it to disk. Disk write happens first — if it fails, in-memory state is not modified so the two stay consistent.
func (*SessionStore) AppendMany ¶
func (ss *SessionStore) AppendMany(key string, msgs []Message) error
AppendMany adds multiple messages to the session and persists them. Opens the file once for all messages instead of N times. All messages are written to disk first; only after success is the in-memory state updated.
func (*SessionStore) Clear ¶
func (ss *SessionStore) Clear(key string) error
Clear removes all messages from a session (in memory and on disk) without deleting the session key. Disk truncation happens first to avoid inconsistency on failure.
func (*SessionStore) Delete ¶
func (ss *SessionStore) Delete(key string) error
Delete removes a session from memory and disk. Disk removal happens first to avoid orphaned files on failure.
func (*SessionStore) Get ¶
func (ss *SessionStore) Get(key string) []Message
Get returns the message history for a session key. Returns nil if the session does not exist.
func (*SessionStore) Keys ¶
func (ss *SessionStore) Keys() []string
Keys returns the keys of all sessions in the store.
type Tool ¶
type Tool struct {
    Name        string
    Description string
    Parameters  map[string]any
    Execute     func(ctx context.Context, args map[string]any) (string, error)
}
Tool is a single executable tool that can be registered with the agent loop.
type ToolCall ¶
type ToolCall struct {
    ID       string `json:"id"`
    Type     string `json:"type"` // "function"
    Function struct {
        Name      string `json:"name"`
        Arguments string `json:"arguments"` // raw JSON string
    } `json:"function"`
}
ToolCall represents a tool invocation returned by the model.
type ToolDef ¶
type ToolDef struct {
    Type     string       `json:"type"` // always "function"
    Function ToolFunction `json:"function"`
}
ToolDef defines a tool the model can invoke, matching the OpenAI tool format.
type ToolFunction ¶
type ToolFunction struct {
    Name        string         `json:"name"`
    Description string         `json:"description"`
    Parameters  map[string]any `json:"parameters"`
}
ToolFunction describes the function within a ToolDef.
type ToolRegistry ¶
type ToolRegistry struct {
    // contains filtered or unexported fields
}
ToolRegistry manages the set of tools available to an embedded agent. Tools are opt-in — only those listed in the agent config are registered.
func NewToolRegistry ¶
func NewToolRegistry() *ToolRegistry
NewToolRegistry creates an empty registry.
func (*ToolRegistry) Definitions ¶
func (r *ToolRegistry) Definitions() []ToolDef
Definitions returns the registered tools as ToolDef values suitable for passing to the LLM provider.
func (*ToolRegistry) Get ¶
func (r *ToolRegistry) Get(name string) *Tool
Get returns a tool by name, or nil if not found.
func (*ToolRegistry) List ¶
func (r *ToolRegistry) List() []string
List returns all registered tool names.
func (*ToolRegistry) Register ¶
func (r *ToolRegistry) Register(t *Tool)
Register adds a tool to the registry, replacing any existing tool with the same name.
func (*ToolRegistry) RegisterBuiltins ¶
func (r *ToolRegistry) RegisterBuiltins(names []string, workspace string)
RegisterBuiltins registers the subset of built-in tools specified by names. Unknown tool names are silently ignored. workspace is the root directory for file-system tools (read_file, write_file, list_dir, exec).