package embedded

v0.2.1 (not in the latest version of its module)
Published: Apr 28, 2026 License: MIT Imports: 18 Imported by: 0

Documentation

Overview

Package embedded implements the EyrieClaw embedded agent — an agent that runs inside the Eyrie process as a goroutine rather than as a separate framework process. It calls LLM APIs directly and executes tools as Go functions, streaming responses through the standard adapter.ChatEvent model.

The provider layer is pluggable: LLMProvider is the interface, and OpenAICompatProvider is the first implementation (covers OpenRouter, OpenAI, Ollama, and any OpenAI-compatible endpoint).

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ParseToolArgs

func ParseToolArgs(raw string) map[string]any

ParseToolArgs unmarshals the raw JSON arguments string from a tool call into a map. Returns an empty map on parse failure. Exported because the commander package also needs to parse tool args from stored history.
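The documented contract (empty map on parse failure) can be sketched in a few lines; `parseToolArgs` here is a hypothetical minimal re-implementation, not the package's actual source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseToolArgs is a minimal sketch of ParseToolArgs' documented contract:
// unmarshal the raw JSON arguments string into a map, and return an empty
// (non-nil) map if the JSON does not parse.
func parseToolArgs(raw string) map[string]any {
	args := map[string]any{}
	if err := json.Unmarshal([]byte(raw), &args); err != nil {
		return map[string]any{} // parse failure: discard any partial fill
	}
	return args
}

func main() {
	fmt.Println(parseToolArgs(`{"path":"a.txt"}`)["path"]) // a.txt
	fmt.Println(len(parseToolArgs(`not json`)))            // 0
}
```

Returning an empty map rather than nil means callers can index into the result without a nil check.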

Types

type AgentLoop

type AgentLoop struct {
	// contains filtered or unexported fields
}

AgentLoop orchestrates the send-tool-repeat cycle for an embedded agent. It calls the LLM provider, executes tool calls, appends results, and repeats until the model produces a text-only response or a budget is hit.
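The send-tool-repeat cycle can be sketched with stand-in function types for the provider and the tool executor (the real loop works with LLMProvider and ToolRegistry; these signatures are illustrative only):

```go
package main

import "fmt"

type toolCall struct{ Name, Args string }

type response struct {
	Content   string
	ToolCalls []toolCall
}

// runLoop sketches the documented cycle: call the provider, execute any
// tool calls, append their results to the history, and repeat until the
// model returns a text-only response or the iteration budget is hit.
func runLoop(call func(history []string) response,
	exec func(tc toolCall) string, maxIter int) (string, error) {
	var history []string
	for i := 0; i < maxIter; i++ {
		resp := call(history)
		if len(resp.ToolCalls) == 0 {
			return resp.Content, nil // text-only response: terminal
		}
		for _, tc := range resp.ToolCalls {
			history = append(history, exec(tc))
		}
	}
	return "", fmt.Errorf("budget exhausted after %d iterations", maxIter)
}

func main() {
	calls := 0
	// Fake provider: asks for one tool call, then answers with text.
	provider := func(history []string) response {
		calls++
		if calls == 1 {
			return response{ToolCalls: []toolCall{{Name: "read_file", Args: `{"path":"a"}`}}}
		}
		return response{Content: "done: " + history[len(history)-1]}
	}
	exec := func(tc toolCall) string { return "result of " + tc.Name }
	out, err := runLoop(provider, exec, 20)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // done: result of read_file
}
```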

func NewAgentLoop

func NewAgentLoop(provider LLMProvider, tools *ToolRegistry, cfg LoopConfig, logBuf *LogBuffer) *AgentLoop

NewAgentLoop creates a loop with the given provider, tools, and config.

func (*AgentLoop) Run

func (al *AgentLoop) Run(ctx context.Context, systemPrompt string, history []Message, model string) <-chan Event

Run executes the agent loop for a single turn, emitting Events to the returned channel. The channel is closed after a terminal event ("done" or "error").
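Because the channel is closed after the terminal event, a plain `range` loop is enough to consume a turn. This sketch uses a local Event type mirroring a subset of the documented fields and a fake producer in place of Run:

```go
package main

import "fmt"

// Event mirrors a subset of this package's documented Event fields.
type Event struct {
	Type, Content, Tool, Error string
}

// fakeRun stands in for AgentLoop.Run: it emits events on a channel and
// closes it after the terminal event, per the documented contract.
func fakeRun() <-chan Event {
	ch := make(chan Event)
	go func() {
		defer close(ch)
		ch <- Event{Type: "delta", Content: "Hello"}
		ch <- Event{Type: "tool_start", Tool: "read_file"}
		ch <- Event{Type: "done"}
	}()
	return ch
}

func main() {
	// range exits when the channel closes, i.e. after "done" or "error".
	for ev := range fakeRun() {
		switch ev.Type {
		case "delta":
			fmt.Print(ev.Content)
		case "tool_start":
			fmt.Printf("\n[tool: %s]\n", ev.Tool)
		case "error":
			fmt.Println("turn failed:", ev.Error)
		case "done":
			fmt.Println("turn complete")
		}
	}
}
```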

type AnthropicProvider

type AnthropicProvider struct {
	// contains filtered or unexported fields
}

AnthropicProvider implements LLMProvider against Anthropic's native /v1/messages API. It translates between our OpenAI-shaped internal types (Message, ToolDef, ToolCall) and Anthropic's content-block format on the way in and out.

WHY translate (not switch our internal types): the commander and EyrieClaw are already built around OpenAI-shaped Message/ToolDef types. Changing those would force a refactor across both consumers. Translating at the provider boundary is the minimum-disruption path and means either provider can be swapped in behind the same interface.

Differences from OpenAICompatProvider that this provider handles:

  • system prompt is a top-level request parameter, not a message
  • tool definitions are flatter: {name, description, input_schema}
  • assistant content is an array of typed blocks (text | tool_use)
  • tool results live inside user messages as tool_result blocks
  • streaming is event-typed (content_block_start/delta/stop) rather than a single delta stream
  • auth uses x-api-key + anthropic-version header, not Bearer
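The second bullet — flatter tool definitions — is the simplest of these translations to illustrate. A sketch of the boundary conversion, with local struct names (`toolDef`, `anthropicTool`) standing in for the package's types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OpenAI-shaped tool definition (mirrors ToolDef/ToolFunction here).
type toolDef struct {
	Type     string       `json:"type"`
	Function toolFunction `json:"function"`
}

type toolFunction struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"`
}

// anthropicTool is the flatter {name, description, input_schema} shape.
type anthropicTool struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	InputSchema map[string]any `json:"input_schema"`
}

// toAnthropic sketches the provider-boundary translation: unwrap the
// nested "function" object and rename "parameters" to "input_schema".
func toAnthropic(defs []toolDef) []anthropicTool {
	out := make([]anthropicTool, 0, len(defs))
	for _, d := range defs {
		out = append(out, anthropicTool{
			Name:        d.Function.Name,
			Description: d.Function.Description,
			InputSchema: d.Function.Parameters,
		})
	}
	return out
}

func main() {
	defs := []toolDef{{Type: "function", Function: toolFunction{
		Name:        "read_file",
		Description: "Read a file",
		Parameters:  map[string]any{"type": "object"},
	}}}
	b, _ := json.Marshal(toAnthropic(defs))
	fmt.Println(string(b))
}
```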

func NewAnthropicProvider

func NewAnthropicProvider(apiKey, apiBase string) *AnthropicProvider

NewAnthropicProvider creates a provider for the given base URL and API key. apiBase should be the URL without the /v1/messages suffix. Defaults to https://api.anthropic.com when empty.

func (*AnthropicProvider) Chat

func (p *AnthropicProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any) (*Response, error)

Chat sends a non-streaming request to /v1/messages. Returns the response translated back into our OpenAI-shaped Response type.

func (*AnthropicProvider) ChatStream

func (p *AnthropicProvider) ChatStream(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any, onDelta func(delta string)) (*Response, error)

ChatStream sends a streaming request to /v1/messages and calls onDelta for each text chunk. Returns the accumulated Response once the stream completes.

Anthropic's streaming format is fundamentally different from OpenAI's. OpenAI emits one kind of event (a chat.completion.chunk) whose delta may contain either text or tool-call fragments, with tool_calls addressed by a numeric index. Anthropic emits a small state machine of typed events:

message_start         — usage/metadata; we capture input_tokens here
content_block_start   — "block N is going to be text" or "tool_use id=...
                        name=..." — tool_use IDs arrive here, not on delta
content_block_delta   — the actual chunk: text_delta.text or
                        input_json_delta.partial_json
content_block_stop    — block N is finished
message_delta         — stop_reason + final output_tokens
message_stop          — terminator

We accumulate per-block state in a map keyed by block index, then assemble the final Response when message_stop arrives.
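The per-block accumulation can be sketched against a simplified event struct (real Anthropic SSE events carry more fields; `streamEvent` and `assemble` here are illustrative names):

```go
package main

import "fmt"

// block holds per-index accumulation state while the stream is open.
type block struct {
	kind string // "text" or "tool_use"
	name string // tool name, for tool_use blocks
	buf  string // accumulated text_delta.text or input_json_delta.partial_json
}

// streamEvent is a simplified stand-in for the typed events listed above.
type streamEvent struct {
	typ   string // "content_block_start", "content_block_delta", "content_block_stop"
	index int
	kind  string
	name  string
	delta string
}

// assemble keys per-block state by block index and folds finished blocks
// into the final text and tool-input map when each block stops.
func assemble(events []streamEvent) (text string, tools map[string]string) {
	blocks := map[int]*block{}
	tools = map[string]string{}
	for _, ev := range events {
		switch ev.typ {
		case "content_block_start":
			blocks[ev.index] = &block{kind: ev.kind, name: ev.name}
		case "content_block_delta":
			blocks[ev.index].buf += ev.delta
		case "content_block_stop":
			b := blocks[ev.index]
			if b.kind == "text" {
				text += b.buf
			} else {
				tools[b.name] = b.buf // complete tool input JSON
			}
		}
	}
	return text, tools
}

func main() {
	events := []streamEvent{
		{typ: "content_block_start", index: 0, kind: "text"},
		{typ: "content_block_delta", index: 0, delta: "Hi "},
		{typ: "content_block_delta", index: 0, delta: "there"},
		{typ: "content_block_stop", index: 0},
		{typ: "content_block_start", index: 1, kind: "tool_use", name: "read_file"},
		{typ: "content_block_delta", index: 1, delta: `{"path":"a"}`},
		{typ: "content_block_stop", index: 1},
	}
	text, tools := assemble(events)
	fmt.Println(text)               // Hi there
	fmt.Println(tools["read_file"]) // {"path":"a"}
}
```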

type Event

type Event struct {
	Type    string // "delta", "tool_start", "tool_result", "done", "error"
	Content string
	Tool    string
	ToolID  string
	Args    map[string]any
	Output  string
	Success *bool
	Error   string
	// Usage stats (populated on "done" events)
	InputTokens  int
	OutputTokens int
}

Event is the embedded agent's internal event type, converted to adapter.ChatEvent by the EmbeddedAdapter. This avoids an import cycle between embedded and adapter packages.

type LLMProvider

type LLMProvider interface {
	// Chat sends messages to the LLM and returns the complete response.
	Chat(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any) (*Response, error)

	// ChatStream sends messages and calls onDelta for each text chunk.
	// Tool calls are accumulated and returned in the final Response.
	ChatStream(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any, onDelta func(delta string)) (*Response, error)
}

LLMProvider is the interface for calling an LLM chat completions API. Implementations handle serialization, HTTP transport, and response parsing.

type LogBuffer

type LogBuffer struct {
	// contains filtered or unexported fields
}

LogBuffer is a thread-safe ring buffer for embedded agent log entries. Uses a head index and count to avoid O(n) copy-shift on every Add().
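The head-index-and-count scheme makes Add O(1): the oldest slot is overwritten in place instead of shifting every element. A self-contained sketch of that scheme (not the package's actual source):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type logEntry struct {
	Timestamp time.Time
	Level     string
	Message   string
}

// ringBuffer sketches the documented design: a fixed slice, a head index
// pointing at the oldest entry, and a count of live entries.
type ringBuffer struct {
	mu      sync.Mutex
	entries []logEntry
	head    int // index of the oldest entry
	count   int
}

func newRingBuffer(maxSize int) *ringBuffer {
	return &ringBuffer{entries: make([]logEntry, maxSize)}
}

// Add writes into the slot after the newest entry; when full, the head
// advances so the oldest entry is overwritten — no copy-shift.
func (rb *ringBuffer) Add(level, message string) {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	tail := (rb.head + rb.count) % len(rb.entries)
	rb.entries[tail] = logEntry{time.Now(), level, message}
	if rb.count < len(rb.entries) {
		rb.count++
	} else {
		rb.head = (rb.head + 1) % len(rb.entries)
	}
}

// Entries returns a chronological copy, oldest first.
func (rb *ringBuffer) Entries() []logEntry {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	out := make([]logEntry, rb.count)
	for i := 0; i < rb.count; i++ {
		out[i] = rb.entries[(rb.head+i)%len(rb.entries)]
	}
	return out
}

func main() {
	rb := newRingBuffer(3)
	for _, m := range []string{"a", "b", "c", "d"} {
		rb.Add("info", m)
	}
	for _, e := range rb.Entries() {
		fmt.Println(e.Message) // b, c, d — "a" was overwritten
	}
}
```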

func NewLogBuffer

func NewLogBuffer(maxSize int) *LogBuffer

NewLogBuffer creates a buffer with the given capacity.

func (*LogBuffer) Add

func (lb *LogBuffer) Add(level, message string)

Add appends a log entry with the current timestamp.

func (*LogBuffer) Entries

func (lb *LogBuffer) Entries() []LogEntry

Entries returns a copy of all buffered log entries in chronological order.

type LogEntry

type LogEntry struct {
	Timestamp time.Time
	Level     string
	Message   string
}

LogEntry is the embedded agent's internal log entry type.

type LoopConfig

type LoopConfig struct {
	// MaxIterations caps the number of LLM round-trips per turn.
	// Default: 20. Prevents infinite tool-call loops.
	MaxIterations int

	// TurnTimeout is the hard deadline for an entire turn.
	// Default: 5 minutes.
	TurnTimeout time.Duration

	// ToolTimeout is the per-tool execution timeout.
	// Default: 60 seconds.
	ToolTimeout time.Duration

	// MaxContextTokens is the approximate token budget. When exceeded,
	// the oldest messages (after system) are dropped. 0 = no limit.
	MaxContextTokens int
}

LoopConfig holds the safety budgets for the agent loop.

func DefaultLoopConfig

func DefaultLoopConfig() LoopConfig

DefaultLoopConfig returns conservative defaults.

type Message

type Message struct {
	Role       string     `json:"role"` // "system", "user", "assistant", "tool"
	Content    string     `json:"content,omitempty"`
	ToolCalls  []ToolCall `json:"tool_calls,omitempty"`
	ToolCallID string     `json:"tool_call_id,omitempty"` // set when Role == "tool"
	Name       string     `json:"name,omitempty"`
}

Message represents a chat message in the OpenAI chat completions format.

type OpenAICompatProvider

type OpenAICompatProvider struct {
	// contains filtered or unexported fields
}

OpenAICompatProvider implements LLMProvider using raw HTTP against any OpenAI-compatible chat completions endpoint. This avoids pulling in the openai-go SDK (which adds significant dependency weight) in favor of a lean implementation following PicoClaw's proven pattern.

func NewOpenAICompatProvider

func NewOpenAICompatProvider(apiKey, apiBase string) *OpenAICompatProvider

NewOpenAICompatProvider creates a provider for the given base URL and key.

func (*OpenAICompatProvider) Chat

func (p *OpenAICompatProvider) Chat(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any) (*Response, error)

Chat sends a non-streaming chat completion request.

func (*OpenAICompatProvider) ChatStream

func (p *OpenAICompatProvider) ChatStream(ctx context.Context, messages []Message, tools []ToolDef, model string, opts map[string]any, onDelta func(delta string)) (*Response, error)

ChatStream sends a streaming chat completion request and calls onDelta for each text content chunk. Returns the accumulated response with all tool calls assembled.

type Response

type Response struct {
	Content      string     `json:"content"`
	ToolCalls    []ToolCall `json:"tool_calls,omitempty"`
	FinishReason string     `json:"finish_reason"`
	InputTokens  int        `json:"input_tokens,omitempty"`
	OutputTokens int        `json:"output_tokens,omitempty"`
}

Response is the parsed result of an LLM chat completion.

type SessionStore

type SessionStore struct {
	// contains filtered or unexported fields
}

SessionStore manages conversation history for embedded agents. Sessions are held in memory for fast access and persisted to JSONL files in the workspace so they survive Eyrie restarts.

func NewSessionStore

func NewSessionStore(sessionDir string) *SessionStore

NewSessionStore creates a store backed by the given directory. Any existing JSONL session files are loaded into memory.

func (*SessionStore) Append

func (ss *SessionStore) Append(key string, msg Message) error

Append adds a message to the session and persists it to disk. Disk write happens first — if it fails, in-memory state is not modified so the two stay consistent.

func (*SessionStore) AppendMany

func (ss *SessionStore) AppendMany(key string, msgs []Message) error

AppendMany adds multiple messages to the session and persists them. Opens the file once for all messages instead of N times. All messages are written to disk first; only after success is the in-memory state updated.

func (*SessionStore) Clear

func (ss *SessionStore) Clear(key string) error

Clear removes all messages from a session (in memory and on disk) without deleting the session key. Disk truncation happens first to avoid inconsistency on failure.

func (*SessionStore) Delete

func (ss *SessionStore) Delete(key string) error

Delete removes a session from memory and disk. Disk removal happens first to avoid orphaned files on failure.

func (*SessionStore) Get

func (ss *SessionStore) Get(key string) []Message

Get returns the message history for a session key. Returns nil if the session does not exist.

func (*SessionStore) Keys

func (ss *SessionStore) Keys() []string

Keys returns all session keys.

type Tool

type Tool struct {
	Name        string
	Description string
	Parameters  map[string]any
	Execute     func(ctx context.Context, args map[string]any) (string, error)
}

Tool is a single executable tool that can be registered with the agent loop.

type ToolCall

type ToolCall struct {
	ID       string `json:"id"`
	Type     string `json:"type"` // "function"
	Function struct {
		Name      string `json:"name"`
		Arguments string `json:"arguments"` // raw JSON string
	} `json:"function"`
}

ToolCall represents a tool invocation returned by the model.

type ToolDef

type ToolDef struct {
	Type     string       `json:"type"` // always "function"
	Function ToolFunction `json:"function"`
}

ToolDef defines a tool the model can invoke, matching the OpenAI tool format.

type ToolFunction

type ToolFunction struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"`
}

ToolFunction describes the function within a ToolDef.

type ToolRegistry

type ToolRegistry struct {
	// contains filtered or unexported fields
}

ToolRegistry manages the set of tools available to an embedded agent. Tools are opt-in — only those listed in the agent config are registered.

func NewToolRegistry

func NewToolRegistry() *ToolRegistry

NewToolRegistry creates an empty registry.

func (*ToolRegistry) Definitions

func (r *ToolRegistry) Definitions() []ToolDef

Definitions returns the registered tools as ToolDef values suitable for passing to the LLM provider.

func (*ToolRegistry) Get

func (r *ToolRegistry) Get(name string) *Tool

Get returns a tool by name, or nil if not found.

func (*ToolRegistry) List

func (r *ToolRegistry) List() []string

List returns all registered tool names.

func (*ToolRegistry) Register

func (r *ToolRegistry) Register(t *Tool)

Register adds a tool to the registry, replacing any existing tool with the same name.

func (*ToolRegistry) RegisterBuiltins

func (r *ToolRegistry) RegisterBuiltins(names []string, workspace string)

RegisterBuiltins registers the subset of built-in tools specified by names. Unknown tool names are silently ignored. workspace is the root directory for file-system tools (read_file, write_file, list_dir, exec).
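The opt-in model — a catalog of builtins of which only the configured names are registered — can be sketched as follows (local names `registry` and `registerBuiltins` are illustrative, and the workspace parameter is omitted for brevity):

```go
package main

import "fmt"

type tool struct {
	Name string
	Run  func(args map[string]any) (string, error)
}

// registry sketches ToolRegistry's opt-in model: the catalog holds every
// builtin, but only names listed in the agent config are registered.
type registry struct{ tools map[string]*tool }

func newRegistry() *registry { return &registry{tools: map[string]*tool{}} }

// register replaces any existing tool with the same name.
func (r *registry) register(t *tool) { r.tools[t.Name] = t }

func (r *registry) registerBuiltins(names []string, catalog map[string]*tool) {
	for _, n := range names {
		if t, ok := catalog[n]; ok { // unknown names silently ignored
			r.register(t)
		}
	}
}

func main() {
	catalog := map[string]*tool{
		"read_file": {Name: "read_file"},
		"exec":      {Name: "exec"},
	}
	r := newRegistry()
	r.registerBuiltins([]string{"read_file", "no_such_tool"}, catalog)
	fmt.Println(len(r.tools)) // 1
}
```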
