package agent
v1.0.2
Published: Jan 15, 2026 License: Apache-2.0 Imports: 43 Imported by: 0

Documentation

Overview

Copyright 2026 Teradata

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Package agent provides MCP integration for the Loom agent framework and dynamic tool discovery for MCP servers.

Index

Constants

const (
	StagePatternSelection = types.StagePatternSelection
	StageSchemaDiscovery  = types.StageSchemaDiscovery
	StageLLMGeneration    = types.StageLLMGeneration
	StageToolExecution    = types.StageToolExecution
	StageSynthesis        = types.StageSynthesis
	StageHumanInTheLoop   = types.StageHumanInTheLoop
	StageGuardrailCheck   = types.StageGuardrailCheck
	StageSelfCorrection   = types.StageSelfCorrection
	StageCompleted        = types.StageCompleted
	StageFailed           = types.StageFailed
)

Re-export ExecutionStage constants for backward compatibility

Variables

var ProfileDefaults = map[loomv1.WorkloadProfile]CompressionProfile{
	loomv1.WorkloadProfile_WORKLOAD_PROFILE_BALANCED: {
		Name:                     "balanced",
		MaxL1Messages:            8,
		MinL1Messages:            4,
		WarningThresholdPercent:  60,
		CriticalThresholdPercent: 75,
		NormalBatchSize:          3,
		WarningBatchSize:         5,
		CriticalBatchSize:        7,
	},
	loomv1.WorkloadProfile_WORKLOAD_PROFILE_DATA_INTENSIVE: {
		Name:                     "data_intensive",
		MaxL1Messages:            5,
		MinL1Messages:            3,
		WarningThresholdPercent:  50,
		CriticalThresholdPercent: 70,
		NormalBatchSize:          2,
		WarningBatchSize:         4,
		CriticalBatchSize:        6,
	},
	loomv1.WorkloadProfile_WORKLOAD_PROFILE_CONVERSATIONAL: {
		Name:                     "conversational",
		MaxL1Messages:            12,
		MinL1Messages:            6,
		WarningThresholdPercent:  70,
		CriticalThresholdPercent: 85,
		NormalBatchSize:          4,
		WarningBatchSize:         6,
		CriticalBatchSize:        8,
	},
}

ProfileDefaults provides preset profiles for common workload types.
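
Example (an illustrative sketch; backend and llmProvider are assumed to be initialized elsewhere):

profile := ProfileDefaults[loomv1.WorkloadProfile_WORKLOAD_PROFILE_DATA_INTENSIVE]
profile.WarningThresholdPercent = 45 // tighten the warning threshold for this deployment

if err := profile.Validate(); err != nil {
    log.Fatal(err)
}

agent := NewAgent(backend, llmProvider, WithCompressionProfile(&profile))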

Functions

func ContextWithProgressCallback

func ContextWithProgressCallback(ctx context.Context, callback ProgressCallback) context.Context

ContextWithProgressCallback stores a progress callback in the context so that nested operations (like tool executions) can emit progress events.

func GetAvailableROMs

func GetAvailableROMs() []string

GetAvailableROMs returns a list of available ROM identifiers. Useful for documentation and validation.

func GetBaseROM

func GetBaseROM() []byte

GetBaseROM returns the raw base ROM content (START_HERE.md). This is the single source of truth for the base ROM, used by both:

  • Agent ROM loading (via LoadROMContent)
  • Deployment to ~/.loom/START_HERE.md (via the embedded package)

func GetBaseROMSize

func GetBaseROMSize() int

GetBaseROMSize returns the size of the base ROM (START_HERE.md).

func GetDomainROMSize

func GetDomainROMSize(romID string) int

GetDomainROMSize returns the size of a specific domain ROM. Returns 0 if ROM doesn't exist.

func GetROMSize

func GetROMSize(romID string) int

GetROMSize returns the total size of the composed ROM in bytes, including the base ROM plus the domain ROM if applicable.

func LoadAgentConfig

func LoadAgentConfig(path string) (*loomv1.AgentConfig, error)

LoadAgentConfig loads agent configuration from a YAML file and converts it to proto.

func LoadConfigFromString

func LoadConfigFromString(yamlContent string) (*loomv1.AgentConfig, error)

LoadConfigFromString loads agent configuration from a YAML string and converts it to proto. This is used by the meta-agent factory to spawn agents from generated YAML configs. Supports both legacy format (agent:) and k8s-style format (apiVersion/kind/metadata/spec).
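
Example (a hedged sketch using the legacy format; the field values are illustrative):

yamlCfg := `
agent:
  name: analyst
  description: Exploratory data analysis agent
  llm:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
    max_tokens: 4096
  system_prompt: You are a careful data analyst.
`
cfg, err := LoadConfigFromString(yamlCfg)
if err != nil {
    log.Fatal(err)
}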

func LoadROMContent

func LoadROMContent(romID string, backendPath string) string

LoadROMContent loads ROM (Read-Only Memory) content based on configuration. ROM provides operational guidance and optional domain-specific knowledge.

Architecture:

  • Base ROM (START_HERE.md): Always included for all agents (5KB) Provides: tool discovery, communication patterns, artifacts, memory usage
  • Domain ROMs: Optional specialized knowledge (e.g., TD.rom for Teradata SQL) Automatically composed with base ROM using clear separators

Parameters:

  • romID: ROM identifier from agent config ("TD", "teradata", "auto", "none", or "")
  • backendPath: Backend path from agent metadata (for auto-detection)

Returns composed ROM content (markdown format).

ROM Composition Rules:

  1. Base ROM is ALWAYS included (operational guidance)
  2. Domain ROM is added if specified (with separator)
  3. Use romID="none" to opt-out of ALL ROMs (rare)
  4. Empty romID="" = base ROM only (no domain knowledge)

Examples:

romID=""         → Base ROM only (5KB)
romID="TD"       → Base + Teradata ROM (5KB + 11KB = 16KB)
romID="auto"     → Base + auto-detected domain ROM
romID="none"     → No ROM at all (explicit opt-out)

func LoadWorkflowAgents

func LoadWorkflowAgents(path string, llmProvider LLMProvider) ([]*loomv1.AgentConfig, error)

LoadWorkflowAgents loads a workflow file and extracts ALL agent configs (coordinator + sub-agents). Returns a slice of AgentConfigs with proper namespacing:

  • Coordinator: registered as {workflow-name}
  • Sub-agents: registered as {workflow-name}:{agent-id}

Supports two formats:

  1. Orchestration format (apiVersion/kind/spec) - used by looms workflow run
  2. Weaver format (agent config with embedded workflow section)

This allows connecting to individual agents while ensuring the workflow uses the same registered instances.
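
Example (an illustrative sketch; the workflow path is a placeholder and llmProvider is assumed to be an initialized LLMProvider):

configs, err := LoadWorkflowAgents("workflows/sales-analysis.yaml", llmProvider)
if err != nil {
    log.Fatal(err)
}
// One config per agent: the coordinator plus each namespaced sub-agent.
fmt.Printf("loaded %d agent configs\n", len(configs))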

func LoadWorkflowCoordinator

func LoadWorkflowCoordinator(path string, llmProvider LLMProvider) (*loomv1.AgentConfig, error)

LoadWorkflowCoordinator loads a workflow file and extracts only the coordinator agent. This is a convenience wrapper around LoadWorkflowAgents for backward compatibility. Deprecated: Use LoadWorkflowAgents to register all agents in the workflow.

func OpenDB

func OpenDB(config DBConfig) (*sql.DB, error)

OpenDB opens a SQLite database with optional encryption support. Returns a *sql.DB connection or an error.

Uses SQLCipher driver for all connections (handles both encrypted and unencrypted). When encryption is disabled (default), no key is set. When encryption is enabled, uses SQLCipher with the provided key.

Example without encryption (default):

db, err := OpenDB(DBConfig{Path: "sessions.db"})

Example with encryption:

db, err := OpenDB(DBConfig{
    Path: "sessions.db",
    EncryptDatabase: true,
    EncryptionKey: os.Getenv("LOOM_DB_KEY"),
})

func SaveAgentConfig

func SaveAgentConfig(config *loomv1.AgentConfig, path string) error

SaveAgentConfig saves an agent configuration to a YAML file

func ValidateAgentConfig

func ValidateAgentConfig(config *loomv1.AgentConfig) error

ValidateAgentConfig validates an agent configuration

func ValidatePatternConfig

func ValidatePatternConfig(cfg *PatternConfig) error

ValidatePatternConfig validates pattern configuration

Types

type Agent

type Agent struct {
	// contains filtered or unexported fields
}

Agent is the core conversation agent that orchestrates LLM calls, tool execution, and backend interactions. It's designed to be backend-agnostic and work with any ExecutionBackend implementation (SQL databases, REST APIs, documents, etc.).

func NewAgent

func NewAgent(backend fabric.ExecutionBackend, llmProvider LLMProvider, opts ...Option) *Agent

NewAgent creates a new Agent instance.

For comprehensive observability, pass instrumented LLM and executor:

llmProvider = llm.NewInstrumentedProvider(baseProvider, tracer)
// Then create agent with WithTracer(tracer)

The agent will automatically use instrumented versions if provided, enabling end-to-end tracing of conversations, LLM calls, and tool executions.

func (*Agent) Chat

func (a *Agent) Chat(ctx context.Context, sessionID string, userMessage string) (*Response, error)

Chat processes a user message and returns a response. This is the main entry point for conversational interaction.
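
Example (a minimal sketch; ctx, backend, and llmProvider are assumed to be an existing context, fabric.ExecutionBackend, and LLMProvider):

agent := NewAgent(backend, llmProvider, WithName("analyst"))
defer agent.CleanupMCPClients()

resp, err := agent.Chat(ctx, "session-1", "Which tables grew the most last week?")
if err != nil {
    log.Fatal(err)
}
_ = resp // inspect the Response for the assistant's reply and any tool executions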

func (*Agent) ChatWithProgress

func (a *Agent) ChatWithProgress(ctx context.Context, sessionID string, userMessage string, progressCallback ProgressCallback) (*Response, error)

ChatWithProgress is like Chat but supports streaming progress updates. The progressCallback will be called at key execution stages to report progress. This is used by StreamWeave to provide real-time feedback to clients.

func (*Agent) CleanupMCPClients

func (a *Agent) CleanupMCPClients() error

CleanupMCPClients closes all MCP clients that were registered with AutoClose=true. This should be called when the agent is done to properly cleanup resources.

Example:

defer agent.CleanupMCPClients()

func (*Agent) CreateSession

func (a *Agent) CreateSession(sessionID string) *Session

CreateSession creates a new session without sending a message to the LLM. Use this for session initialization; use Chat() for actual conversations.

func (*Agent) DeleteSession

func (a *Agent) DeleteSession(sessionID string)

DeleteSession removes a session.

func (*Agent) EnableDynamicDiscovery

func (a *Agent) EnableDynamicDiscovery(mcpMgr *manager.Manager)

EnableDynamicDiscovery enables dynamic tool discovery on the agent.

When enabled, if a tool is not found in the registered tools, the agent will attempt to discover it from MCP servers at runtime.

Example:

agent := NewAgent(backend, llmProvider, WithConfig(config))
agent.EnableDynamicDiscovery(mcpMgr)

// Don't register tools upfront.
// Tools are discovered on demand during conversations.

func (*Agent) GetCircuitBreakers

func (a *Agent) GetCircuitBreakers() *fabric.CircuitBreakerManager

GetCircuitBreakers returns the circuit breaker manager for failure isolation (may be nil if not enabled).

func (*Agent) GetConfig

func (a *Agent) GetConfig() *Config

GetConfig returns a copy of the agent configuration.

func (*Agent) GetDescription

func (a *Agent) GetDescription() string

GetDescription returns the agent description from configuration.

func (*Agent) GetGuardrails

func (a *Agent) GetGuardrails() *fabric.GuardrailEngine

GetGuardrails returns the guardrail engine for pre-flight validation (may be nil if not enabled).

func (*Agent) GetLLMModel

func (a *Agent) GetLLMModel() string

GetLLMModel returns the model identifier (e.g., "claude-3-5-sonnet-20241022").

func (*Agent) GetLLMProviderName

func (a *Agent) GetLLMProviderName() string

GetLLMProviderName returns the name of the LLM provider (e.g., "anthropic", "bedrock", "ollama").

func (*Agent) GetName

func (a *Agent) GetName() string

GetName returns the agent name from configuration.

func (*Agent) GetOrchestrator

func (a *Agent) GetOrchestrator() *patterns.Orchestrator

GetOrchestrator returns the pattern orchestrator for intent classification.

func (*Agent) GetSession

func (a *Agent) GetSession(sessionID string) (*Session, bool)

GetSession retrieves a session by ID.

func (*Agent) ListSessions

func (a *Agent) ListSessions() []*Session

ListSessions returns all active sessions.

func (*Agent) ListTools

func (a *Agent) ListTools() []string

ListTools returns a list of all registered tool names.

func (*Agent) Receive

func (a *Agent) Receive(ctx context.Context, msg *loomv1.CommunicationMessage) (interface{}, error)

Receive receives and resolves a message from another agent. If the message uses reference semantics, the reference is resolved to actual data.

func (*Agent) ReceiveWithTimeout

func (a *Agent) ReceiveWithTimeout(ctx context.Context, timeout time.Duration) (*loomv1.CommunicationMessage, error)

ReceiveWithTimeout receives a message with a timeout. Returns nil if no message is available within the timeout period.

func (*Agent) RegisterMCPServer

func (a *Agent) RegisterMCPServer(ctx context.Context, mcpMgr *manager.Manager, serverName string) error

RegisterMCPServer registers all tools from ONE specific server in the manager.

This provides selective registration at the server level instead of registering all servers at once. Useful for controlling context window usage.

Example:

err := agent.RegisterMCPServer(ctx, mcpMgr, "filesystem")
// Only filesystem tools registered, not github, postgres, etc.

func (*Agent) RegisterMCPServers

func (a *Agent) RegisterMCPServers(ctx context.Context, configs ...MCPServerConfig) error

RegisterMCPServers is a convenience method to register multiple MCP servers at once.

Example:

err := agent.RegisterMCPServers(ctx,
    MCPServerConfig{Name: "filesystem", Client: fsClient},
    MCPServerConfig{Name: "github", Client: ghClient},
    MCPServerConfig{Name: "postgres", Client: pgClient},
)

func (*Agent) RegisterMCPTool

func (a *Agent) RegisterMCPTool(ctx context.Context, mcpMgr *manager.Manager, serverName, toolName string) error

RegisterMCPTool registers ONE specific tool from a server.

This provides the finest-grained control over tool registration. Useful when you need just 1-2 specific tools from a server.

Example:

err := agent.RegisterMCPTool(ctx, mcpMgr, "filesystem", "read_file")
// Only filesystem:read_file registered

func (*Agent) RegisterMCPTools

func (a *Agent) RegisterMCPTools(ctx context.Context, config MCPServerConfig) error

RegisterMCPTools connects to an MCP server and registers all its tools with the agent.

This is a convenience method that:

  1. Lists all tools from the MCP server
  2. Converts them to shuttle.Tool instances
  3. Registers them with the agent

Example usage:

// Create MCP client
trans := transport.NewStdioTransport(config)
mcpClient := client.NewClient(client.Config{Transport: trans})
mcpClient.Initialize(ctx, clientInfo)

// Register all MCP tools with agent
err := agent.RegisterMCPTools(ctx, MCPServerConfig{
    Name:   "filesystem",
    Client: mcpClient,
})

Tools will be namespaced by server name (e.g., "filesystem:read_file")

func (*Agent) RegisterMCPToolsFromManager

func (a *Agent) RegisterMCPToolsFromManager(ctx context.Context, mcpMgr *manager.Manager) error

RegisterMCPToolsFromManager registers tools from a manager using config-based filtering.

This is the recommended method for production use. It respects the tool filters defined in the manager's configuration.

Example config:

mcp:
  servers:
    filesystem:
      enabled: true
      tools:
        include: [read_file, write_file]
    github:
      enabled: true
      tools:
        all: true
        exclude: [delete_repository]

Example usage:

err := agent.RegisterMCPToolsFromManager(ctx, mcpMgr)
// Only tools matching config filters are registered

func (*Agent) RegisterTool

func (a *Agent) RegisterTool(tool shuttle.Tool)

RegisterTool registers a tool with the agent.

func (*Agent) RegisterTools

func (a *Agent) RegisterTools(tools ...shuttle.Tool)

RegisterTools registers multiple tools.

func (*Agent) RegisteredTools

func (a *Agent) RegisteredTools() []shuttle.Tool

RegisteredTools returns all registered tools.

func (*Agent) RegisteredToolsByBackend

func (a *Agent) RegisteredToolsByBackend(backend string) []shuttle.Tool

RegisteredToolsByBackend returns all tools registered for a specific backend. Pass empty string to get backend-agnostic tools.

func (*Agent) Send

func (a *Agent) Send(ctx context.Context, toAgent string, messageType string, data interface{}) (*loomv1.CommunicationMessage, error)

Send sends a message to another agent using value or reference semantics. The communication policy determines whether to use direct value or reference.

func (*Agent) SendAndReceive

func (a *Agent) SendAndReceive(ctx context.Context, toAgent string, messageType string, data interface{}, timeout time.Duration) (interface{}, error)

SendAndReceive sends a message and waits for a response (RPC-style). Blocks until response is received or timeout occurs.
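
Example (a hedged sketch; inter-agent communication must already be configured, e.g. via WithReferenceStore or WithMessageQueue; the peer agent name and message type are illustrative):

reply, err := agent.SendAndReceive(ctx, "report-writer", "analysis_request",
    map[string]interface{}{"table": "sales"}, 30*time.Second)
if err != nil {
    log.Fatal(err)
}
_ = reply // response payload from the peer agent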

func (*Agent) SendAsync

func (a *Agent) SendAsync(ctx context.Context, toAgent string, messageType string, data interface{}) (string, error)

SendAsync sends a message asynchronously (fire-and-forget). If the destination agent is offline, the message is queued for later delivery. Returns immediately without waiting for the message to be delivered.

func (*Agent) SendWithAck

func (a *Agent) SendWithAck(ctx context.Context, toAgent string, messageType string, data interface{}, timeout time.Duration) error

SendWithAck sends a message and waits for acknowledgment. Returns nil if message was successfully delivered and acknowledged.

func (*Agent) SetCommunicationPolicy

func (a *Agent) SetCommunicationPolicy(policy *communication.PolicyManager)

SetCommunicationPolicy configures the communication policy manager. This determines when to use references vs values in inter-agent communication.

func (*Agent) SetLLMProvider

func (a *Agent) SetLLMProvider(llm LLMProvider)

SetLLMProvider switches the LLM provider for this agent. This allows mid-session model switching while preserving conversation context. The new provider will be used for all future LLM calls in all sessions.

func (*Agent) SetReferenceStore

func (a *Agent) SetReferenceStore(store communication.ReferenceStore)

SetReferenceStore configures the reference store for inter-agent communication. This enables Send/Receive methods for agent-to-agent messaging.

func (*Agent) SetSQLResultStore

func (a *Agent) SetSQLResultStore(sqlStore *storage.SQLResultStore)

SetSQLResultStore configures SQL result store for this agent. This enables queryable storage for large SQL results, preventing context blowout.

func (*Agent) SetSharedMemory

func (a *Agent) SetSharedMemory(sharedMemory *storage.SharedMemoryStore)

SetSharedMemory configures shared memory for this agent. This injects the shared memory store into:

  • The agent itself (for formatToolResult to store large results)
  • All existing sessions' segmented memory
  • The tool executor (for automatic large-result handling)
  • Future sessions created by this agent

It also re-registers GetToolResultTool with the new store.

func (*Agent) SetToolRegistryForDynamicDiscovery

func (a *Agent) SetToolRegistryForDynamicDiscovery(toolRegistry shuttle.ToolRegistry, mcpManager shuttle.MCPManager)

SetToolRegistryForDynamicDiscovery configures the tool registry for dynamic tool discovery. When enabled, agents can use tools discovered via tool_search without explicit registration. MCP tools found in the registry will be dynamically registered when first used.

func (*Agent) ToolCount

func (a *Agent) ToolCount() int

ToolCount returns the number of registered tools.

func (*Agent) UnregisterTool

func (a *Agent) UnregisterTool(name string)

UnregisterTool unregisters a tool by name.

type AgentConfigYAML

type AgentConfigYAML struct {
	Agent struct {
		Name         string                 `yaml:"name"`
		Description  string                 `yaml:"description"`
		BackendPath  string                 `yaml:"backend_path"`
		LLM          LLMConfigYAML          `yaml:"llm"`
		SystemPrompt string                 `yaml:"system_prompt"`
		ROM          string                 `yaml:"rom"` // ROM identifier: "TD", "teradata", "auto", or ""
		Tools        ToolsConfigYAML        `yaml:"tools"`
		Memory       MemoryConfigYAML       `yaml:"memory"`
		Behavior     BehaviorConfigYAML     `yaml:"behavior"`
		Metadata     map[string]interface{} `yaml:"metadata"`
	} `yaml:"agent"`
}

AgentConfigYAML represents the YAML structure for agent configuration. This struct mirrors the proto AgentConfig but uses YAML-friendly types. Legacy format with "agent:" as root key.

type AgentInstanceInfo

type AgentInstanceInfo struct {
	ID             string
	Name           string
	Status         string // "running", "stopped", "error", "initializing"
	CreatedAt      time.Time
	UpdatedAt      time.Time
	ActiveSessions int
	TotalMessages  int64
	Error          string
}

AgentInstanceInfo tracks runtime information about an agent instance

type AnthropicCompressor

type AnthropicCompressor struct {
	// contains filtered or unexported fields
}

AnthropicCompressor is a production-ready LLM caller for Anthropic's Claude. Implements LLMCaller interface using the official Anthropic SDK.

Example usage:

import "github.com/anthropics/anthropic-sdk-go"

client := anthropic.NewClient(option.WithAPIKey("your-key"))
compressor := NewAnthropicCompressor(client, "claude-3-haiku-20240307")
memCompressor := NewLLMCompressor(compressor)

Note: This is a reference implementation. Users should adapt based on their LLM provider and SDK. The key is implementing the LLMCaller interface.

func NewAnthropicCompressor

func NewAnthropicCompressor(client interface{}, modelName string) *AnthropicCompressor

NewAnthropicCompressor creates an Anthropic-based compressor. This is a reference implementation - adapt for your LLM provider.

func (*AnthropicCompressor) CompressConversation

func (a *AnthropicCompressor) CompressConversation(ctx context.Context, conversationText string) (string, error)

CompressConversation implements LLMCaller for Anthropic's Claude. Note: This is a skeleton implementation. Full implementation requires the anthropic-sdk-go and proper error handling.

type BehaviorConfigYAML

type BehaviorConfigYAML struct {
	MaxIterations      int                `yaml:"max_iterations"`
	TimeoutSeconds     int                `yaml:"timeout_seconds"`
	AllowCodeExecution bool               `yaml:"allow_code_execution"`
	AllowedDomains     []string           `yaml:"allowed_domains"`
	MaxTurns           int                `yaml:"max_turns"`
	MaxToolExecutions  int                `yaml:"max_tool_executions"`
	Patterns           *PatternConfigYAML `yaml:"patterns"`
}

BehaviorConfigYAML represents behavior configuration in YAML

type CachedToolResult

type CachedToolResult struct {
	ToolName      string
	Args          map[string]interface{}
	Result        string // Brief summary of result (for small results)
	Timestamp     time.Time
	DataReference *loomv1.DataReference // For large results stored in shared memory
}

CachedToolResult represents a recent tool execution stored in memory.

type ClearRecalledContextTool

type ClearRecalledContextTool struct {
	// contains filtered or unexported fields
}

ClearRecalledContextTool removes promoted messages from context. Allows agents to reclaim token budget after using recalled context.

func NewClearRecalledContextTool

func NewClearRecalledContextTool(memory *Memory) *ClearRecalledContextTool

NewClearRecalledContextTool creates a new clear recalled context tool.

func (*ClearRecalledContextTool) Backend

func (t *ClearRecalledContextTool) Backend() string

Backend returns the backend type this tool requires (empty = backend-agnostic).

func (*ClearRecalledContextTool) Description

func (t *ClearRecalledContextTool) Description() string

Description returns the tool description.

func (*ClearRecalledContextTool) Execute

func (t *ClearRecalledContextTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute clears promoted context.

func (*ClearRecalledContextTool) InputSchema

func (t *ClearRecalledContextTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for tool parameters.

func (*ClearRecalledContextTool) Name

func (t *ClearRecalledContextTool) Name() string

Name returns the tool name.

type CompressionProfile

type CompressionProfile struct {
	// Profile name (for logging and debugging)
	Name string

	// Maximum messages in L1 cache before compression triggers
	MaxL1Messages int

	// Minimum messages to keep in L1 after compression
	MinL1Messages int

	// Warning threshold as percentage (0-100)
	// Compression triggers when token usage exceeds this
	WarningThresholdPercent int

	// Critical threshold as percentage (0-100)
	// Aggressive compression when token usage exceeds this
	CriticalThresholdPercent int

	// Number of messages to compress in normal conditions
	NormalBatchSize int

	// Number of messages to compress under warning threshold
	WarningBatchSize int

	// Number of messages to compress under critical threshold
	CriticalBatchSize int
}

CompressionProfile defines memory compression behavior for a specific workload type. Profiles provide preset values for thresholds, batch sizes, and L1 cache limits.

func ResolveCompressionProfile

func ResolveCompressionProfile(config *loomv1.MemoryCompressionConfig) (CompressionProfile, error)

ResolveCompressionProfile resolves a compression configuration into a final profile. Precedence: Explicit config values > Profile defaults > Balanced profile defaults

func (CompressionProfile) Validate

func (p CompressionProfile) Validate() error

Validate checks if the profile has valid values.

type Config

type Config struct {
	// Name is the agent name (used for identification and logging)
	Name string

	// Description is a human-readable description of the agent's purpose
	Description string

	// MaxTurns is the maximum number of conversation turns before forcing completion
	MaxTurns int

	// MaxToolExecutions is the maximum number of tool executions per conversation
	MaxToolExecutions int

	// SystemPrompt is the direct system prompt text (takes precedence over SystemPromptKey)
	SystemPrompt string

	// SystemPromptKey is the key for loading the system prompt from promptio
	SystemPromptKey string

	// ROM identifier for domain-specific knowledge ("TD", "teradata", "auto", or "")
	Rom string

	// Metadata for agent configuration (includes backend_path for ROM auto-detection)
	Metadata map[string]string

	// EnableTracing enables observability tracing
	EnableTracing bool

	// PatternsDir is the directory containing pattern YAML files (optional)
	PatternsDir string

	// Backend configuration
	BackendConfig map[string]interface{}

	// Retry configuration for LLM calls
	Retry RetryConfig

	// MaxContextTokens is the model's context window size (0 = use defaults/auto-detect)
	MaxContextTokens int

	// ReservedOutputTokens is the number of tokens reserved for model output (0 = use defaults, typically 10%)
	ReservedOutputTokens int

	// PatternConfig controls pattern injection (nil = use defaults)
	PatternConfig *PatternConfig
}

Config holds agent configuration.

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns a Config with sensible defaults.
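
Example (illustrative; backend and llmProvider are assumed to be initialized elsewhere):

cfg := DefaultConfig()
cfg.Name = "analyst"
cfg.Description = "Exploratory data analysis agent"
cfg.MaxTurns = 20

agent := NewAgent(backend, llmProvider, WithConfig(cfg))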

type Context

type Context = types.Context

type CustomToolConfigYAML

type CustomToolConfigYAML struct {
	Name           string `yaml:"name"`
	Implementation string `yaml:"implementation"`
}

CustomToolConfigYAML represents custom tool configuration in YAML

type DBConfig

type DBConfig struct {
	// Path to the SQLite database file
	Path string

	// EncryptDatabase enables SQLCipher encryption at rest.
	// When true, requires EncryptionKey to be set.
	// Default: false (opt-in for enterprise deployments)
	EncryptDatabase bool

	// EncryptionKey is the encryption key for SQLCipher.
	// Can be provided directly or via LOOM_DB_KEY environment variable.
	// Required when EncryptDatabase is true.
	EncryptionKey string
}

DBConfig holds database configuration including optional encryption.

type DynamicToolDiscovery

type DynamicToolDiscovery struct {
	// contains filtered or unexported fields
}

DynamicToolDiscovery enables runtime tool discovery using simple text search.

Instead of registering all tools upfront, tools are discovered on-demand based on user intent. This prevents context window bloat while maintaining access to all available tools.

Search Strategy:

  • Simple text matching (case-insensitive) on tool name and description
  • No complex NLP or embedding required
  • Results cached for future use

Example:

discovery := NewDynamicToolDiscovery(mcpMgr, logger)
tool, err := discovery.Search(ctx, "read file")
// Finds "filesystem:read_file" by matching "read" and "file"

func NewDynamicToolDiscovery

func NewDynamicToolDiscovery(mcpMgr *manager.Manager, logger *zap.Logger) *DynamicToolDiscovery

NewDynamicToolDiscovery creates a new dynamic tool discovery system.

func (*DynamicToolDiscovery) CacheSize

func (d *DynamicToolDiscovery) CacheSize() int

CacheSize returns the current cache size.

func (*DynamicToolDiscovery) ClearCache

func (d *DynamicToolDiscovery) ClearCache()

ClearCache clears the discovery cache.

func (*DynamicToolDiscovery) Search

func (d *DynamicToolDiscovery) Search(ctx context.Context, intent string) (shuttle.Tool, error)

Search finds a tool matching the user intent using simple text search.

Search process:

  1. Check cache for previously discovered tools
  2. Search all MCP servers for matching tools
  3. Use simple text matching on tool name and description
  4. Cache the result for future use
  5. Return the first matching tool

Returns an error if no matching tool is found.

func (*DynamicToolDiscovery) SearchMultiple

func (d *DynamicToolDiscovery) SearchMultiple(ctx context.Context, intent string) ([]shuttle.Tool, error)

SearchMultiple finds multiple tools matching the intent.

Unlike Search which returns the first match, this returns all matching tools. Useful when you want to give the LLM multiple options.

func (*DynamicToolDiscovery) SetSQLResultStore

func (d *DynamicToolDiscovery) SetSQLResultStore(store *storage.SQLResultStore)

SetSQLResultStore configures SQL result store for dynamically discovered tools.

func (*DynamicToolDiscovery) SetSharedMemory

func (d *DynamicToolDiscovery) SetSharedMemory(store *storage.SharedMemoryStore)

SetSharedMemory configures shared memory store for dynamically discovered tools.

type ErrorFilters

type ErrorFilters struct {
	SessionID string    // Filter by session
	ToolName  string    // Filter by tool
	StartTime time.Time // Time range start
	EndTime   time.Time // Time range end
	Limit     int       // Max results (0 = unlimited)
}

ErrorFilters for querying errors.

type ErrorStore

type ErrorStore interface {
	// Store saves an error and returns a unique ID
	Store(ctx context.Context, err *StoredError) (string, error)

	// Get retrieves a specific error by ID
	Get(ctx context.Context, errorID string) (*StoredError, error)

	// List returns errors matching filters (for analytics/debugging)
	List(ctx context.Context, filters ErrorFilters) ([]*StoredError, error)
}

ErrorStore provides persistent storage for tool execution errors. Errors are stored with full details allowing agents to retrieve them on demand.

type ExecutionStage

type ExecutionStage = types.ExecutionStage

type FailureEscalationConfig

type FailureEscalationConfig struct {
	MaxConsecutiveFailures int  // Threshold for escalation (default: 2)
	TrackFailureSignature  bool // Whether to track failure signatures (default: true)
}

FailureEscalationConfig holds configuration for failure escalation.

func DefaultFailureEscalationConfig

func DefaultFailureEscalationConfig() FailureEscalationConfig

DefaultFailureEscalationConfig returns default failure escalation configuration.

type Finding

type Finding struct {
	Path      string      `json:"path"`      // Hierarchical key: "table.statistics.row_count"
	Value     interface{} `json:"value"`     // The actual data: numbers, strings, arrays, objects
	Category  string      `json:"category"`  // Type: "statistic", "schema", "observation", "distribution"
	Note      string      `json:"note"`      // Optional explanation/context
	Timestamp time.Time   `json:"timestamp"` // When recorded
	Source    string      `json:"source"`    // Which tool_call_id produced this (optional)
}

Finding represents a structured piece of information discovered during analysis. Findings are stored in the Kernel layer to provide working memory for agents, preventing hallucination by maintaining verified facts from tool executions.
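
Example (illustrative values; the Source tool_call_id is hypothetical):

f := Finding{
    Path:      "orders.statistics.row_count",
    Value:     1284055,
    Category:  "statistic",
    Note:      "verified via row-count tool",
    Timestamp: time.Now(),
    Source:    "toolcall_123", // hypothetical tool_call_id
}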

type GetErrorDetailsTool

type GetErrorDetailsTool struct {
	// contains filtered or unexported fields
}

GetErrorDetailsTool is a built-in tool that fetches complete error information for a previously failed tool execution.

This tool is automatically registered on all agents to support the error submission channel pattern where error summaries are sent to the LLM by default, and full details are fetched on demand.

func NewGetErrorDetailsTool

func NewGetErrorDetailsTool(store ErrorStore) *GetErrorDetailsTool

NewGetErrorDetailsTool creates a new GetErrorDetailsTool.

func (*GetErrorDetailsTool) Backend

func (t *GetErrorDetailsTool) Backend() string

Backend returns the backend type this tool requires. Empty string means backend-agnostic (works with any agent).

func (*GetErrorDetailsTool) Description

func (t *GetErrorDetailsTool) Description() string

Description returns the tool description for the LLM.

func (*GetErrorDetailsTool) Execute

func (t *GetErrorDetailsTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute fetches the error details from the error store.

func (*GetErrorDetailsTool) InputSchema

func (t *GetErrorDetailsTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for the tool input.

func (*GetErrorDetailsTool) Name

func (t *GetErrorDetailsTool) Name() string

Name returns the tool name.

type GetToolResultTool

type GetToolResultTool struct {
	// contains filtered or unexported fields
}

GetToolResultTool retrieves METADATA about large tool results. BREAKING CHANGE in v1.0.1: Now returns ONLY metadata, never full data. Use query_tool_result to retrieve filtered/paginated data.

This implements progressive disclosure - agents inspect metadata before retrieving data, preventing context blowout from accidentally loading 50MB results.

func NewGetToolResultTool

func NewGetToolResultTool(memoryStore *storage.SharedMemoryStore, sqlStore *storage.SQLResultStore) *GetToolResultTool

NewGetToolResultTool creates a new GetToolResultTool.

func (*GetToolResultTool) Backend

func (t *GetToolResultTool) Backend() string

Backend returns the backend type this tool requires. Empty string means backend-agnostic (works with any agent).

func (*GetToolResultTool) Description

func (t *GetToolResultTool) Description() string

Description returns the tool description for the LLM.

func (*GetToolResultTool) Execute

func (t *GetToolResultTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute retrieves metadata from either shared memory or SQL store.

func (*GetToolResultTool) InputSchema

func (t *GetToolResultTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for the tool input.

func (*GetToolResultTool) Name

func (t *GetToolResultTool) Name() string

Name returns the tool name.

type HITLRequestInfo

type HITLRequestInfo = types.HITLRequestInfo

type K8sStyleAgentConfig

type K8sStyleAgentConfig struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
	Metadata   struct {
		Name        string                 `yaml:"name"`
		Version     string                 `yaml:"version"`
		Description string                 `yaml:"description"`
		Labels      map[string]interface{} `yaml:"labels"`
	} `yaml:"metadata"`
	Spec struct {
		Backend struct {
			Name   string                 `yaml:"name"`
			Type   string                 `yaml:"type"`
			Config map[string]interface{} `yaml:"config"`
		} `yaml:"backend"`
		LLM           LLMConfigYAML          `yaml:"llm"`
		Tools         interface{}            `yaml:"tools"` // Can be ToolsConfigYAML or []interface{}
		SystemPrompt  string                 `yaml:"system_prompt"`
		ROM           string                 `yaml:"rom"` // ROM identifier: "TD", "teradata", "auto", or ""
		Config        BehaviorConfigYAML     `yaml:"config"`
		Memory        MemoryConfigYAML       `yaml:"memory"`
		Observability map[string]interface{} `yaml:"observability"`
	} `yaml:"spec"`
}

K8sStyleAgentConfig represents the new k8s-style YAML format with apiVersion, kind, metadata, spec.
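
Example (an illustrative config in this format; only the field names are taken from this struct, while the apiVersion and kind values and all other values are hypothetical):

apiVersion: loom/v1
kind: Agent
metadata:
  name: analyst
  description: Exploratory data analysis agent
spec:
  backend:
    name: warehouse
    type: sql
  llm:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
  system_prompt: You are a careful data analyst.
  rom: TD
  config:
    max_turns: 20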

type LLMCaller

type LLMCaller interface {
	// CompressConversation takes conversation text and returns a concise summary.
	// Should limit output to 512 tokens for cost efficiency.
	CompressConversation(ctx context.Context, conversationText string) (string, error)
}

LLMCaller defines the interface for calling an LLM to compress messages. Implementations should provide cheap, fast compression calls.
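
Example (a minimal sketch of a custom caller for tests; it truncates by bytes rather than calling a real model):

type truncatingCaller struct{}

func (truncatingCaller) CompressConversation(ctx context.Context, conversationText string) (string, error) {
    const max = 400
    if len(conversationText) > max {
        conversationText = conversationText[:max] + "..."
    }
    return "Summary (truncated): " + conversationText, nil
}

// compressor := NewLLMCompressor(truncatingCaller{})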

type LLMCompressor

type LLMCompressor struct {
	// contains filtered or unexported fields
}

LLMCompressor is a concrete implementation of MemoryCompressor that uses an LLM to create intelligent summaries of conversation history.

Provides 50-80% token reduction through LLM-powered summarization.

func NewLLMCompressor

func NewLLMCompressor(llmCaller LLMCaller) *LLMCompressor

NewLLMCompressor creates a new LLM-powered memory compressor. If llmCaller is nil, falls back to simple text extraction.

func (*LLMCompressor) CompressMessages

func (c *LLMCompressor) CompressMessages(ctx context.Context, messages []Message) (string, error)

CompressMessages compresses a slice of messages into a concise summary. Uses LLM if available, otherwise falls back to simple extraction.

LLM compression typically achieves:

  • 50-80% token reduction
  • 2-3 sentence summaries
  • Preservation of key context (tables, queries, findings)

func (*LLMCompressor) IsEnabled

func (c *LLMCompressor) IsEnabled() bool

IsEnabled returns whether LLM-powered compression is enabled.

func (*LLMCompressor) SetLLMCaller

func (c *LLMCompressor) SetLLMCaller(llmCaller LLMCaller)

SetLLMCaller updates the LLM caller for the compressor. Useful for lazy initialization after agent is fully set up.

type LLMConfigYAML

type LLMConfigYAML struct {
	Provider             string   `yaml:"provider"`
	Model                string   `yaml:"model"`
	Temperature          float64  `yaml:"temperature"`
	MaxTokens            int      `yaml:"max_tokens"`
	StopSequences        []string `yaml:"stop_sequences"`
	TopP                 float64  `yaml:"top_p"`
	TopK                 int      `yaml:"top_k"`
	MaxContextTokens     int      `yaml:"max_context_tokens"`
	ReservedOutputTokens int      `yaml:"reserved_output_tokens"`
}

LLMConfigYAML represents LLM configuration in YAML

type LLMProvider

type LLMProvider = types.LLMProvider

type LLMResponse

type LLMResponse = types.LLMResponse

type MCPClientRef

type MCPClientRef struct {
	Client     interface{ Close() error } // MCP client with Close method
	ServerName string
}

MCPClientRef holds a reference to an MCP client for cleanup

type MCPServerConfig

type MCPServerConfig struct {
	// Name is the unique identifier for this MCP server
	// Used for tool namespacing (e.g., "filesystem" -> "filesystem:read_file")
	Name string

	// Client is the initialized MCP client
	Client *client.Client

	// AutoClose determines if the client should be closed when agent is done
	// Default: false (client lifecycle managed externally)
	AutoClose bool
}

MCPServerConfig configures an MCP server connection

type MCPToolConfigYAML

type MCPToolConfigYAML struct {
	Server string   `yaml:"server"`
	Tools  []string `yaml:"tools"`
}

MCPToolConfigYAML represents MCP tool configuration in YAML

type Memory

type Memory struct {
	// contains filtered or unexported fields
}

Memory manages conversation sessions and history. Supports optional persistent storage via SessionStore.

func NewMemory

func NewMemory() *Memory

NewMemory creates a new in-memory session manager.

func NewMemoryWithStore

func NewMemoryWithStore(store *SessionStore) *Memory

NewMemoryWithStore creates a memory manager with persistent storage.

func (*Memory) AddMessage

func (m *Memory) AddMessage(sessionID string, msg Message)

AddMessage adds a message to a session and notifies observers. This is the preferred way to add messages when real-time updates are needed. Falls back to session.AddMessage if session not found in Memory.

func (*Memory) ClearAll

func (m *Memory) ClearAll()

ClearAll removes all sessions from memory (does not affect persistent store).

func (*Memory) CountSessions

func (m *Memory) CountSessions() int

CountSessions returns the number of active sessions.

func (*Memory) DeleteSession

func (m *Memory) DeleteSession(sessionID string)

DeleteSession removes a session.

func (*Memory) GetOrCreateSession

func (m *Memory) GetOrCreateSession(sessionID string) *Session

GetOrCreateSession gets an existing session or creates a new one. If persistent storage is configured, attempts to load from database first.

func (*Memory) GetOrCreateSessionWithAgent

func (m *Memory) GetOrCreateSessionWithAgent(sessionID, agentID, parentSessionID string) *Session

GetOrCreateSessionWithAgent gets an existing session or creates a new one with agent metadata. This is used for multi-agent workflows where sub-agents need to access parent sessions. Parameters:

  • sessionID: Unique session identifier
  • agentID: Agent identity (e.g., "coordinator", "analyzer-sub-agent")
  • parentSessionID: Parent session ID (for sub-agents to access coordinator session)

func (*Memory) GetSession

func (m *Memory) GetSession(sessionID string) (*Session, bool)

GetSession retrieves a session by ID.

func (*Memory) GetStore

func (m *Memory) GetStore() *SessionStore

GetStore returns the SessionStore if persistence is enabled, nil otherwise. Used for registering cleanup hooks and accessing persistence layer.

func (*Memory) ListSessions

func (m *Memory) ListSessions() []*Session

ListSessions returns all active sessions.

func (*Memory) PersistMessage

func (m *Memory) PersistMessage(ctx context.Context, sessionID string, msg Message) error

PersistMessage saves a message to persistent storage if configured.

func (*Memory) PersistSession

func (m *Memory) PersistSession(ctx context.Context, session *Session) error

PersistSession saves a session to persistent storage if configured.

func (*Memory) PersistToolExecution

func (m *Memory) PersistToolExecution(ctx context.Context, sessionID string, exec ToolExecution) error

PersistToolExecution saves a tool execution to persistent storage if configured.

func (*Memory) RegisterObserver

func (m *Memory) RegisterObserver(agentID string, observer MemoryObserver)

RegisterObserver registers an observer for a specific agent's memory updates. The observer will be notified when messages are added to any session for this agent. This enables real-time cross-session updates.

func (*Memory) SetCompressionProfile

func (m *Memory) SetCompressionProfile(profile *CompressionProfile)

SetCompressionProfile sets the compression profile for new sessions. This controls compression behavior (thresholds, batch sizes) for memory management. If profile is nil, balanced profile defaults will be used.

func (*Memory) SetContextLimits

func (m *Memory) SetContextLimits(maxContextTokens, reservedOutputTokens int)

SetContextLimits sets the context window size and output reservation for new sessions. If maxContextTokens is 0, defaults will be used (200K for backwards compatibility). If reservedOutputTokens is 0, it will be calculated as 10% of maxContextTokens.

func (*Memory) SetLLMProvider

func (m *Memory) SetLLMProvider(llm LLMProvider)

SetLLMProvider sets the LLM provider for semantic search reranking (existing and future sessions). This enables LLM-based relevance scoring to improve search quality beyond BM25 keyword matching.

func (*Memory) SetSharedMemory

func (m *Memory) SetSharedMemory(sharedMemory *storage.SharedMemoryStore)

SetSharedMemory configures shared memory for all sessions. This will inject the shared memory into all existing sessions and ensure future sessions also get it.

func (*Memory) SetSystemPromptFunc

func (m *Memory) SetSystemPromptFunc(fn SystemPromptFunc)

SetSystemPromptFunc sets a function to generate system prompts for new sessions. This allows dynamic prompt loading from PromptRegistry or other sources.

func (*Memory) SetTracer

func (m *Memory) SetTracer(tracer observability.Tracer)

SetTracer sets the observability tracer for all sessions (existing and future). This enables error logging and metrics collection for memory operations.

func (*Memory) UnregisterObserver

func (m *Memory) UnregisterObserver(agentID string, observer MemoryObserver)

UnregisterObserver removes an observer for a specific agent. Note: This does a simple identity comparison, so the same observer instance must be passed.

type MemoryCompressionBatchSizesYAML

type MemoryCompressionBatchSizesYAML struct {
	Normal   int `yaml:"normal"`
	Warning  int `yaml:"warning"`
	Critical int `yaml:"critical"`
}

MemoryCompressionBatchSizesYAML represents compression batch sizes in YAML

type MemoryCompressionConfigYAML

type MemoryCompressionConfigYAML struct {
	WorkloadProfile          string                           `yaml:"workload_profile"`
	MaxL1Messages            int                              `yaml:"max_l1_messages"`
	MinL1Messages            int                              `yaml:"min_l1_messages"`
	WarningThresholdPercent  int                              `yaml:"warning_threshold_percent"`
	CriticalThresholdPercent int                              `yaml:"critical_threshold_percent"`
	BatchSizes               *MemoryCompressionBatchSizesYAML `yaml:"batch_sizes"`
}

MemoryCompressionConfigYAML represents memory compression configuration in YAML

type MemoryCompressor

type MemoryCompressor interface {
	CompressMessages(ctx context.Context, messages []Message) (string, error)
	IsEnabled() bool
}

MemoryCompressor defines the interface for LLM-powered memory compression. Implementations should compress message history into brief summaries.

type MemoryConfigYAML

type MemoryConfigYAML struct {
	Type              string                       `yaml:"type"`
	Path              string                       `yaml:"path"`
	DSN               string                       `yaml:"dsn"`
	MaxHistory        int                          `yaml:"max_history"`
	MemoryCompression *MemoryCompressionConfigYAML `yaml:"memory_compression"`
}

MemoryConfigYAML represents memory configuration in YAML

type MemoryLayer

type MemoryLayer string

MemoryLayer represents different tiers of context memory

const (
	LayerROM    MemoryLayer = "rom"    // Read-only: Documentation, system prompt (never changes)
	LayerKernel MemoryLayer = "kernel" // Tool definitions, recent tool results (per conversation)
	LayerL1     MemoryLayer = "l1"     // Hot: Recent messages (last 5-10 exchanges)
	LayerL2     MemoryLayer = "l2"     // Warm: Summarized history (compressed older messages)
	LayerSwap   MemoryLayer = "swap"   // Cold: Long-term storage (database-backed)
)

type MemoryObserver

type MemoryObserver interface {
	// OnMessageAdded is called when a message is added to any session for this agent
	OnMessageAdded(agentID string, sessionID string, msg Message)
}

MemoryObserver is called when messages are added to sessions. This enables real-time updates across multiple sessions viewing the same agent's memory.

type MemoryObserverFunc

type MemoryObserverFunc func(agentID string, sessionID string, msg Message)

MemoryObserverFunc is a function adapter for MemoryObserver.
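
Example (illustrative; memory is assumed to be an existing *Memory and "analyst" is a placeholder agent ID):

obs := MemoryObserverFunc(func(agentID, sessionID string, msg Message) {
    log.Printf("agent=%s session=%s received a new message", agentID, sessionID)
})
memory.RegisterObserver("analyst", obs)
defer memory.UnregisterObserver("analyst", obs)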

func (MemoryObserverFunc) OnMessageAdded

func (f MemoryObserverFunc) OnMessageAdded(agentID string, sessionID string, msg Message)

OnMessageAdded implements MemoryObserver.

type MemorySnapshot

type MemorySnapshot struct {
	ID           int
	SessionID    string
	SnapshotType string
	Content      string
	TokenCount   int
	CreatedAt    time.Time
}

MemorySnapshot represents a saved memory snapshot (e.g., L2 summary).

type Message

type Message = types.Message

Type aliases for backward compatibility with code that imports pkg/agent. These types are now defined in pkg/types to break import cycles.

type ModelContextLimits

type ModelContextLimits struct {
	MaxContextTokens     int // Total context window size
	ReservedOutputTokens int // Tokens reserved for model output (typically 10%)
}

ModelContextLimits defines the context window and output reservation for a model

func GetModelContextLimits

func GetModelContextLimits(modelName string) *ModelContextLimits

GetModelContextLimits returns the context limits for a given model name. Returns the limits if found, or nil if the model is not in the lookup table.

func GetProviderDefaultLimits

func GetProviderDefaultLimits(provider string) ModelContextLimits

GetProviderDefaultLimits returns sensible defaults for a provider. Used when model-specific limits are not available.

func ResolveContextLimits

func ResolveContextLimits(provider, model string, configuredMax, configuredReserved int32) ModelContextLimits

ResolveContextLimits determines the context limits to use, with the following fallback precedence:

  1. Explicit configuration (if maxContextTokens > 0)
  2. Model lookup table
  3. Provider defaults
  4. System-wide default (200K for backwards compatibility)
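
For example, a sketch of resolving limits when no explicit configuration is provided (the provider and model strings are illustrative):

	// 0, 0 means "no explicit configuration", so the model lookup table and
	// provider defaults apply before the system-wide default.
	limits := ResolveContextLimits("anthropic", "claude-sonnet-4-5", 0, 0)
	fmt.Println(limits.MaxContextTokens, limits.ReservedOutputTokens)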

type Option

type Option func(*Agent)

Option is a functional option for configuring an Agent.
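
Purely to illustrate the functional-options pattern, the sketch below applies a few options by hand; in normal use the options are passed to whatever constructs the Agent (the values shown are illustrative):

	// Illustrative only: each Option is a plain function over *Agent.
	opts := []Option{
		WithName("analyst"),
		WithDescription("Summarizes recent warehouse activity"),
		WithSystemPrompt("You are a concise data analyst."),
	}
	a := &Agent{}
	for _, opt := range opts {
		opt(a)
	}
	_ = a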

func WithCircuitBreakers

func WithCircuitBreakers(breakers *fabric.CircuitBreakerManager) Option

WithCircuitBreakers enables failure isolation for tools.

func WithCommunicationPolicy

func WithCommunicationPolicy(policy *communication.PolicyManager) Option

WithCommunicationPolicy sets the policy for determining reference vs value communication.

func WithCompressionProfile

func WithCompressionProfile(profile *CompressionProfile) Option

WithCompressionProfile sets the compression profile for memory management. This controls compression thresholds and batch sizes for conversation history.

func WithConfig

func WithConfig(config *Config) Option

WithConfig sets the agent configuration.

func WithDescription

func WithDescription(description string) Option

WithDescription sets the agent description.

func WithErrorStore

func WithErrorStore(store ErrorStore) Option

WithErrorStore enables an error submission channel for storing full error details. When set, tool execution errors are stored in SQLite and only summaries are sent to the LLM. The get_error_details built-in tool is automatically registered.

func WithGuardrails

func WithGuardrails(guardrails *fabric.GuardrailEngine) Option

WithGuardrails enables pre-flight validation and error tracking.

func WithMemory

func WithMemory(memory *Memory) Option

WithMemory sets a custom memory manager.

func WithMessageQueue

func WithMessageQueue(queue *communication.MessageQueue) Option

WithMessageQueue enables async agent-to-agent messaging. When set, agents can send/receive messages via the queue, enabling fire-and-forget, request-response, and acknowledgment-based communication.

func WithName

func WithName(name string) Option

WithName sets the agent name in the configuration.

func WithPatternConfig

func WithPatternConfig(cfg *PatternConfig) Option

WithPatternConfig sets pattern configuration.

func WithPatternInjection

func WithPatternInjection(enabled bool) Option

WithPatternInjection enables/disables pattern injection.

func WithPermissionChecker

func WithPermissionChecker(checker *shuttle.PermissionChecker) Option

WithPermissionChecker sets the permission checker for tool execution.

func WithPrompts

func WithPrompts(registry prompts.PromptRegistry) Option

WithPrompts sets the prompt registry.

func WithReferenceStore

func WithReferenceStore(store communication.ReferenceStore) Option

WithReferenceStore enables inter-agent communication via reference store. When set, agents can send/receive messages using value or reference semantics.

func WithSharedMemory

func WithSharedMemory(sharedMemory interface{}) Option

WithSharedMemory sets the SharedMemoryStore for large tool result storage. This enables agents to store and reference large tool outputs efficiently.

func WithSystemPrompt

func WithSystemPrompt(prompt string) Option

WithSystemPrompt sets the direct system prompt text.

func WithTracer

func WithTracer(tracer observability.Tracer) Option

WithTracer sets the observability tracer.

func WithoutSelfCorrection

func WithoutSelfCorrection() Option

WithoutSelfCorrection explicitly disables self-correction (guardrails + circuit breakers). By default, agents have self-correction enabled; use this option to disable it. Note: this installs marker guardrails and circuit breakers that prevent default initialization.

type PatternConfig

type PatternConfig struct {
	// Enabled controls whether pattern injection is active
	Enabled bool

	// MinConfidence is the minimum confidence threshold (0.0-1.0)
	MinConfidence float64

	// MaxPatternsPerTurn limits patterns injected per conversation turn
	MaxPatternsPerTurn int

	// EnableTracking enables pattern effectiveness metrics
	EnableTracking bool

	// UseLLMClassifier enables LLM-based intent classification (default: false, uses keyword-based)
	UseLLMClassifier bool
}

PatternConfig holds pattern injection configuration

func DefaultPatternConfig

func DefaultPatternConfig() *PatternConfig

DefaultPatternConfig returns defaults for pattern injection (enabled by default)
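
For example, a sketch that tightens the defaults before handing them to WithPatternConfig (values are illustrative):

	cfg := DefaultPatternConfig()
	cfg.MinConfidence = 0.8 // only inject high-confidence patterns
	cfg.MaxPatternsPerTurn = 1
	opt := WithPatternConfig(cfg)
	_ = opt // pass opt along with the other agent options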

type PatternConfigYAML

type PatternConfigYAML struct {
	Enabled            *bool    `yaml:"enabled"`
	MinConfidence      *float64 `yaml:"min_confidence"`
	MaxPatternsPerTurn *int     `yaml:"max_patterns_per_turn"`
	EnableTracking     *bool    `yaml:"enable_tracking"`
	UseLLMClassifier   *bool    `yaml:"use_llm_classifier"`
}

PatternConfigYAML represents pattern configuration in YAML

type ProgressCallback

type ProgressCallback = types.ProgressCallback

func ProgressCallbackFromContext

func ProgressCallbackFromContext(ctx context.Context) ProgressCallback

ProgressCallbackFromContext retrieves the progress callback from context. Returns nil if no callback is stored in the context.

type ProgressEvent

type ProgressEvent = types.ProgressEvent

type QueryToolResultTool

type QueryToolResultTool struct {
	// contains filtered or unexported fields
}

QueryToolResultTool queries large results with filtering/pagination. Enhanced in v1.0.1: Now supports non-SQL data (JSON arrays) via pagination.

  • For SQL results: use SQL queries to filter/aggregate
  • For JSON arrays: use offset/limit for pagination (SQL support coming in Phase 4.5)
  • For CSV data: SQL queries coming in Phase 4.5

func NewQueryToolResultTool

func NewQueryToolResultTool(sqlStore *storage.SQLResultStore, memoryStore *storage.SharedMemoryStore) *QueryToolResultTool

NewQueryToolResultTool creates a new QueryToolResultTool.

func (*QueryToolResultTool) Backend

func (t *QueryToolResultTool) Backend() string

Backend returns the backend type this tool requires. Empty string means backend-agnostic (works with any agent).

func (*QueryToolResultTool) Description

func (t *QueryToolResultTool) Description() string

Description returns the tool description for the LLM.

func (*QueryToolResultTool) Execute

func (t *QueryToolResultTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute queries stored data with routing based on storage location.

func (*QueryToolResultTool) InputSchema

func (t *QueryToolResultTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for the tool input.

func (*QueryToolResultTool) Name

func (t *QueryToolResultTool) Name() string

Name returns the tool name.

type RecallConversationTool

type RecallConversationTool struct {
	// contains filtered or unexported fields
}

RecallConversationTool retrieves old messages from swap storage. Allows agents to access conversation history beyond L1/L2 capacity.

func NewRecallConversationTool

func NewRecallConversationTool(memory *Memory) *RecallConversationTool

NewRecallConversationTool creates a new recall conversation tool.

func (*RecallConversationTool) Backend

func (t *RecallConversationTool) Backend() string

Backend returns the backend type this tool requires (empty = backend-agnostic).

func (*RecallConversationTool) Description

func (t *RecallConversationTool) Description() string

Description returns the tool description.

func (*RecallConversationTool) Execute

func (t *RecallConversationTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute retrieves messages from swap and promotes them to context.

func (*RecallConversationTool) InputSchema

func (t *RecallConversationTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for tool parameters.

func (*RecallConversationTool) Name

func (t *RecallConversationTool) Name() string

Name returns the tool name.

type RecordFindingTool

type RecordFindingTool struct {
	// contains filtered or unexported fields
}

RecordFindingTool allows agents to record verified findings in working memory. This prevents hallucination by maintaining structured facts discovered during analysis.

Findings are stored in the SegmentedMemory Kernel layer and automatically injected into LLM context as a "Verified Findings" summary, providing working memory across tool executions.

func NewRecordFindingTool

func NewRecordFindingTool(memory *Memory) *RecordFindingTool

NewRecordFindingTool creates a new RecordFindingTool.

func (*RecordFindingTool) Backend

func (t *RecordFindingTool) Backend() string

Backend returns the backend type this tool requires. Empty string means backend-agnostic (works with any agent).

func (*RecordFindingTool) Description

func (t *RecordFindingTool) Description() string

Description returns the tool description for the LLM.

func (*RecordFindingTool) Execute

func (t *RecordFindingTool) Execute(ctx context.Context, input map[string]interface{}) (*shuttle.Result, error)

Execute records the finding in working memory.

func (*RecordFindingTool) InputSchema

func (t *RecordFindingTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for the tool input.

func (*RecordFindingTool) Name

func (t *RecordFindingTool) Name() string

Name returns the tool name.

type Registry

type Registry struct {
	// contains filtered or unexported fields
}

Registry manages agent configurations and instances. It provides centralized agent lifecycle management, hot-reloading, and persistence.

func NewRegistry

func NewRegistry(config RegistryConfig) (*Registry, error)

NewRegistry creates a new agent registry

func (*Registry) Close

func (r *Registry) Close() error

Close closes the registry and cleans up resources

func (*Registry) CreateAgent

func (r *Registry) CreateAgent(ctx context.Context, name string) (*Agent, error)

CreateAgent instantiates an agent from its configuration

func (*Registry) CreateEphemeralAgent

func (r *Registry) CreateEphemeralAgent(ctx context.Context, role string) (*Agent, error)

CreateEphemeralAgent creates a temporary agent based on a role. This implements the collaboration.AgentFactory interface. The agent is NOT registered, and the caller must manage its lifecycle.

func (*Registry) DeleteAgent

func (r *Registry) DeleteAgent(ctx context.Context, name string, force bool) error

DeleteAgent removes an agent

func (*Registry) ForceReload

func (r *Registry) ForceReload(ctx context.Context, name string) error

ForceReload manually triggers a reload for the specified agent. This bypasses the file watcher and directly calls the reload callback. Useful for programmatic reloads (e.g., after the meta-agent creates an agent) or when file watchers are unreliable (e.g., macOS fsnotify issues).

func (*Registry) GetAgent

func (r *Registry) GetAgent(ctx context.Context, name string) (*Agent, error)

GetAgent returns a running agent instance

func (*Registry) GetAgentInfo

func (r *Registry) GetAgentInfo(name string) (*AgentInstanceInfo, error)

GetAgentInfo returns information about an agent

func (*Registry) GetConfig

func (r *Registry) GetConfig(name string) *loomv1.AgentConfig

GetConfig returns the config for a specific agent by name

func (*Registry) ListAgents

func (r *Registry) ListAgents() []*AgentInstanceInfo

ListAgents returns all registered agents

func (*Registry) ListConfigs

func (r *Registry) ListConfigs() []*loomv1.AgentConfig

ListConfigs returns all loaded agent configurations (including non-instantiated agents)

func (*Registry) LoadAgents

func (r *Registry) LoadAgents(ctx context.Context) error

LoadAgents loads all agent configurations from the agents directory and workflows

func (*Registry) LoadWorkflows

func (r *Registry) LoadWorkflows(ctx context.Context) error

LoadWorkflows loads workflow files and registers their coordinator agents

func (*Registry) RegisterConfig

func (r *Registry) RegisterConfig(config *loomv1.AgentConfig)

RegisterConfig registers an agent configuration in the registry. This is used by the meta-agent factory to add dynamically generated configs.

func (*Registry) ReloadAgent

func (r *Registry) ReloadAgent(ctx context.Context, name string) error

ReloadAgent hot-reloads an agent's configuration

func (*Registry) SetReloadCallback

func (r *Registry) SetReloadCallback(cb ReloadCallback)

SetReloadCallback sets the callback function to be called when an agent config changes. The callback receives the agent name and new configuration, and should update the running agent.

func (*Registry) SetSharedMemory

func (r *Registry) SetSharedMemory(sharedMemory interface{})

SetSharedMemory sets the SharedMemoryStore for agents created by this registry. This must be called after registry creation if agents need access to shared memory for large tool result storage.

func (*Registry) StartAgent

func (r *Registry) StartAgent(ctx context.Context, name string) error

StartAgent starts a stopped agent

func (*Registry) StopAgent

func (r *Registry) StopAgent(ctx context.Context, name string) error

StopAgent stops a running agent

func (*Registry) WatchConfigs

func (r *Registry) WatchConfigs(ctx context.Context) error

WatchConfigs watches for config file changes and auto-reloads agents.

Note: fsnotify behavior varies by platform. On Darwin (macOS), the underlying FSEvents/kqueue implementation may not reliably detect file modifications in all cases. The callback mechanism is solid, but file change detection may require investigation or alternative approaches (e.g., polling, explicit reload triggers) on macOS.

type RegistryConfig

type RegistryConfig struct {
	ConfigDir    string
	DBPath       string
	MCPManager   *manager.Manager
	LLMProvider  LLMProvider
	Logger       *zap.Logger
	Tracer       observability.Tracer
	SessionStore *SessionStore          // For persistent agent session traces
	ToolRegistry *toolregistry.Registry // Tool search registry for dynamic tool discovery

	// Database encryption (opt-in for enterprise deployments)
	EncryptDatabase bool   // Enable SQLCipher encryption
	EncryptionKey   string // Encryption key (or use LOOM_DB_KEY env var)
}

RegistryConfig configures the agent registry
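
A minimal setup sketch (paths, the agent name, and the no-op logger are illustrative; depending on the agents being loaded, fields such as LLMProvider and MCPManager may also be required):

	reg, err := NewRegistry(RegistryConfig{
		ConfigDir: "./agents",
		DBPath:    "./loom.db",
		Logger:    zap.NewNop(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer reg.Close()

	ctx := context.Background()
	if err := reg.LoadAgents(ctx); err != nil {
		log.Fatal(err)
	}
	a, err := reg.GetAgent(ctx, "analyst")
	if err != nil {
		log.Fatal(err)
	}
	_ = a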

type ReloadCallback

type ReloadCallback func(name string, config *loomv1.AgentConfig) error

ReloadCallback is called when an agent config changes. It receives the agent name and new configuration.
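
Continuing the registry sketch above, a callback can be registered as follows (the body is a placeholder; a real callback would update or rebuild the running agent):

	reg.SetReloadCallback(func(name string, cfg *loomv1.AgentConfig) error {
		// A real callback would apply cfg to the running agent here.
		log.Printf("agent %q configuration changed", name)
		return nil
	})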

type Response

type Response struct {
	// Content is the text response
	Content string

	// Usage tracks token usage and cost
	Usage Usage

	// ToolExecutions contains tools that were executed
	ToolExecutions []ToolExecution

	// Metadata contains additional response information
	Metadata map[string]interface{}

	// Thinking contains the agent's internal reasoning process
	// (for models that support extended thinking)
	Thinking string
}

Response represents the agent's response to a user message.

type RetryConfig

type RetryConfig struct {
	// MaxRetries is the maximum number of retry attempts (0 = no retries)
	MaxRetries int

	// InitialDelay is the initial delay before the first retry
	InitialDelay time.Duration

	// MaxDelay is the maximum delay between retries
	MaxDelay time.Duration

	// Multiplier is the exponential backoff multiplier (e.g., 2.0 for doubling)
	Multiplier float64

	// Enabled enables retry logic
	Enabled bool
}

RetryConfig configures exponential backoff retry logic for LLM calls
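
For example, a conservative retry policy (values are illustrative):

	retry := RetryConfig{
		Enabled:      true,
		MaxRetries:   3,
		InitialDelay: 500 * time.Millisecond,
		MaxDelay:     10 * time.Second,
		Multiplier:   2.0, // double the delay after each attempt
	}
	_ = retry // supply this wherever the agent configuration accepts a RetryConfig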

type SQLiteErrorStore

type SQLiteErrorStore struct {
	// contains filtered or unexported fields
}

SQLiteErrorStore implements ErrorStore with SQLite persistence.

func NewSQLiteErrorStore

func NewSQLiteErrorStore(dbPath string, tracer observability.Tracer) (*SQLiteErrorStore, error)

NewSQLiteErrorStore creates a new SQLiteErrorStore. It opens the same database as SessionStore for error persistence.

func (*SQLiteErrorStore) Get

func (s *SQLiteErrorStore) Get(ctx context.Context, errorID string) (*StoredError, error)

Get retrieves a specific error by ID.

func (*SQLiteErrorStore) List

func (s *SQLiteErrorStore) List(ctx context.Context, filters ErrorFilters) ([]*StoredError, error)

List returns errors matching filters.

func (*SQLiteErrorStore) Store

func (s *SQLiteErrorStore) Store(ctx context.Context, err *StoredError) (string, error)

Store saves an error and returns a unique ID.

type SearchConversationTool

type SearchConversationTool struct {
	// contains filtered or unexported fields
}

SearchConversationTool searches conversation history using semantic search. Uses BM25 + LLM reranking to find relevant messages based on natural language queries.

func NewSearchConversationTool

func NewSearchConversationTool(memory *Memory) *SearchConversationTool

NewSearchConversationTool creates a new semantic search tool.

func (*SearchConversationTool) Backend

func (t *SearchConversationTool) Backend() string

Backend returns the backend type (empty = backend-agnostic).

func (*SearchConversationTool) Description

func (t *SearchConversationTool) Description() string

Description returns the tool description.

func (*SearchConversationTool) Execute

func (t *SearchConversationTool) Execute(
	ctx context.Context,
	input map[string]interface{},
) (*shuttle.Result, error)

Execute performs semantic search and optionally promotes results.

func (*SearchConversationTool) InputSchema

func (t *SearchConversationTool) InputSchema() *shuttle.JSONSchema

InputSchema returns the JSON schema for tool parameters.

func (*SearchConversationTool) Name

func (t *SearchConversationTool) Name() string

Name returns the tool name.

type SegmentedMemory

type SegmentedMemory struct {
	// contains filtered or unexported fields
}

SegmentedMemory manages context using a tiered memory hierarchy.

Architecture:

  • ROM Layer: static documentation/prompts (never changes during the session)
  • Kernel Layer: tool definitions, recent results, schema cache (per conversation)
  • L1 Cache (hot): recent messages (last 10 messages / 5 exchanges)
  • L2 Cache (warm): compressed history summaries
  • Swap (cold): database-backed long-term storage

Features:

  • Adaptive compression: triggers at 70% token budget usage
  • LRU schema caching: max 10 schemas with least-recently-used eviction
  • Database-backed tool results: keeps only the immediate previous result in memory
  • Token budget enforcement: 200K context, 20K output reserve = 180K available

func NewSegmentedMemory

func NewSegmentedMemory(romContent string, maxContextTokens, reservedOutputTokens int) *SegmentedMemory

NewSegmentedMemory creates a new segmented memory instance with ROM content. The ROM content is static documentation/prompts that never change during the session.

Configuration:

  • Token Budget: configurable context window and output reserve
  • L1 Cache: last 10 messages (5 exchanges) for focused context
  • Kernel: max 1 tool result (database-backed optimization)
  • Schema Cache: max 10 schemas with LRU eviction

If maxContextTokens or reservedOutputTokens is 0, the Claude Sonnet 4.5 defaults (200K/20K) are used.
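
A minimal usage sketch with the defaults:

	sm := NewSegmentedMemory("Static system prompt and reference docs.", 0, 0) // 0, 0 → 200K/20K defaults
	used, available, total := sm.GetTokenBudgetUsage()
	fmt.Printf("tokens: %d used, %d available of %d\n", used, available, total)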

func NewSegmentedMemoryWithCompression

func NewSegmentedMemoryWithCompression(romContent string, maxContextTokens, reservedOutputTokens int, profile CompressionProfile) *SegmentedMemory

NewSegmentedMemoryWithCompression creates a new segmented memory instance with custom compression profile. This allows fine-grained control over compression behavior for different workload types.

Configuration:

  • Token Budget: configurable context window and output reserve
  • L1 Cache: configurable based on profile (data_intensive=5, balanced=8, conversational=12)
  • Kernel: max 1 tool result (database-backed optimization)
  • Schema Cache: max 10 schemas with LRU eviction

If maxContextTokens or reservedOutputTokens is 0, the Claude Sonnet 4.5 defaults (200K/20K) are used.

func (*SegmentedMemory) AddMessage

func (sm *SegmentedMemory) AddMessage(msg Message)

AddMessage adds a message to the L1 cache with adaptive compression. Compression triggers on either of two criteria:

  1. L1 is at max capacity (hard limit)
  2. Token budget exceeds the profile's warning threshold (soft limit)

The compression strategy is profile-dependent:

  • data_intensive: warning=50%, critical=70%, batches=2/4/6
  • balanced: warning=60%, critical=75%, batches=3/5/7
  • conversational: warning=70%, critical=85%, batches=4/6/8

func (*SegmentedMemory) AddToolResult

func (sm *SegmentedMemory) AddToolResult(result CachedToolResult)

AddToolResult adds a tool execution result to the kernel layer. Database-backed optimization: only the immediate previous result is kept in memory. All historical results should be persisted to the database and retrievable via tools.

func (*SegmentedMemory) CacheSchema

func (sm *SegmentedMemory) CacheSchema(key, schema string)

CacheSchema stores a discovered schema in kernel layer with LRU eviction. When cache exceeds maxSchemas (default: 10), the least recently used schema is evicted.

func (*SegmentedMemory) ClearFindings

func (sm *SegmentedMemory) ClearFindings()

ClearFindings removes all findings from working memory. Useful for starting fresh analysis or cleaning up between tasks.

func (*SegmentedMemory) ClearL2

func (sm *SegmentedMemory) ClearL2()

ClearL2 clears the L2 summary cache.

func (*SegmentedMemory) ClearPromotedContext

func (sm *SegmentedMemory) ClearPromotedContext()

ClearPromotedContext removes all promoted messages from context. This allows reclaiming token budget used by retrieved old messages.

func (*SegmentedMemory) CompactMemory

func (sm *SegmentedMemory) CompactMemory() (int, int)

CompactMemory forces compression of all L1 to L2. Returns number of messages compressed and tokens saved.

func (*SegmentedMemory) GetAllFindings

func (sm *SegmentedMemory) GetAllFindings() map[string]Finding

GetAllFindings returns all recorded findings.

func (*SegmentedMemory) GetCachedToolResults

func (sm *SegmentedMemory) GetCachedToolResults() []CachedToolResult

GetCachedToolResults returns a copy of all cached tool results.

func (*SegmentedMemory) GetContextWindow

func (sm *SegmentedMemory) GetContextWindow() string

GetContextWindow builds the full context for LLM with proper layering. Returns formatted context string with ROM, Kernel, L2, and L1 layers.

func (*SegmentedMemory) GetFinding

func (sm *SegmentedMemory) GetFinding(path string) (Finding, bool)

GetFinding retrieves a specific finding by path.

func (*SegmentedMemory) GetFindingsSummary

func (sm *SegmentedMemory) GetFindingsSummary() string

GetFindingsSummary generates a formatted markdown summary of all findings (thread-safe). This summary is injected into the LLM context to provide verified working memory.

func (*SegmentedMemory) GetL1MessageCount

func (sm *SegmentedMemory) GetL1MessageCount() int

GetL1MessageCount returns number of messages in L1 cache.

func (*SegmentedMemory) GetL2Summary

func (sm *SegmentedMemory) GetL2Summary() string

GetL2Summary returns the L2 summary content for inspection. Returns empty string if no compression has occurred yet.

func (*SegmentedMemory) GetMemoryStats

func (sm *SegmentedMemory) GetMemoryStats() map[string]interface{}

GetMemoryStats returns comprehensive memory statistics.

func (*SegmentedMemory) GetMessages

func (sm *SegmentedMemory) GetMessages() []Message

GetMessages returns all L1 messages for building conversation context.

func (*SegmentedMemory) GetMessagesForLLM

func (sm *SegmentedMemory) GetMessagesForLLM() []Message

GetMessagesForLLM builds the full message list for the LLM call. Returns: ROM message + L2 summary message (if exists) + pattern (if injected) + promoted context (if exists) + L1 messages. This is what gets sent to the LLM in Message format.

func (*SegmentedMemory) GetPromotedContext

func (sm *SegmentedMemory) GetPromotedContext() []Message

GetPromotedContext returns a copy of promoted messages.

func (*SegmentedMemory) GetSchema

func (sm *SegmentedMemory) GetSchema(key string) (string, bool)

GetSchema retrieves a cached schema and updates access time for LRU tracking.

func (*SegmentedMemory) GetSwapStats

func (sm *SegmentedMemory) GetSwapStats() (evictions, retrievals int)

GetSwapStats returns swap layer statistics.

func (*SegmentedMemory) GetTokenBudgetUsage

func (sm *SegmentedMemory) GetTokenBudgetUsage() (int, int, int)

GetTokenBudgetUsage returns current token budget usage information. Returns: (used, available, total)

func (*SegmentedMemory) GetTokenCount

func (sm *SegmentedMemory) GetTokenCount() int

GetTokenCount returns current token count across all memory layers.

func (*SegmentedMemory) HasL2Content

func (sm *SegmentedMemory) HasL2Content() bool

HasL2Content returns true if L2 summary has content (compression occurred).

func (*SegmentedMemory) InjectPattern

func (sm *SegmentedMemory) InjectPattern(patternContent string, patternName string)

InjectPattern injects a formatted pattern into the message stream. Pattern is added as system message after L2 summary, before promoted context. This placement ensures pattern knowledge is available but doesn't override ROM or conversation history.

func (*SegmentedMemory) IsSwapEnabled

func (sm *SegmentedMemory) IsSwapEnabled() bool

IsSwapEnabled returns true if the swap layer is configured and operational.

func (*SegmentedMemory) PromoteMessagesToContext

func (sm *SegmentedMemory) PromoteMessagesToContext(messages []Message) error

PromoteMessagesToContext adds retrieved messages from swap to active context. The messages are added as "promoted context" separate from L1 (which is for recent conversation). This allows old context to be available to the LLM without polluting L1. Checks token budget before promotion - returns error if budget would be exceeded.

func (*SegmentedMemory) RecordFinding

func (sm *SegmentedMemory) RecordFinding(path string, value interface{}, category, note, source string)

RecordFinding stores a verified finding in the kernel layer for working memory. This prevents hallucination by maintaining structured facts discovered during analysis. If maxFindings is exceeded, this is a no-op (findings are transient, not critical).

func (*SegmentedMemory) RetrieveL2Snapshots

func (sm *SegmentedMemory) RetrieveL2Snapshots(ctx context.Context, limit int) ([]string, error)

RetrieveL2Snapshots retrieves old L2 summary snapshots from swap. Returns snapshots in chronological order (oldest first). Limit controls maximum snapshots to return (0 = all).

func (*SegmentedMemory) RetrieveMessagesFromSwap

func (sm *SegmentedMemory) RetrieveMessagesFromSwap(ctx context.Context, offset, limit int) ([]Message, error)

RetrieveMessagesFromSwap retrieves old messages from database. Returns messages in chronological order (oldest first). Offset and limit control pagination (offset=0, limit=10 gets first 10 messages).

func (*SegmentedMemory) SearchMessages

func (sm *SegmentedMemory) SearchMessages(
	ctx context.Context,
	query string,
	limit int,
) ([]Message, error)

SearchMessages performs semantic search over conversation history using BM25 + LLM reranking.

Algorithm:

  1. BM25 full-text search via FTS5 (top-50 candidates)
  2. LLM-based reranking for semantic relevance (top-N results)

Returns top-N most relevant messages ordered by relevance.

func (*SegmentedMemory) SetCompressor

func (sm *SegmentedMemory) SetCompressor(compressor MemoryCompressor)

SetCompressor sets the memory compressor for intelligent history compression. Should be called after agent initialization to avoid dependency cycles.

func (*SegmentedMemory) SetLLMProvider

func (sm *SegmentedMemory) SetLLMProvider(llm LLMProvider)

SetLLMProvider injects an LLM provider for semantic search reranking. If not set, semantic search will fall back to BM25-only ranking.

func (*SegmentedMemory) SetMaxL2Tokens

func (sm *SegmentedMemory) SetMaxL2Tokens(maxTokens int)

SetMaxL2Tokens configures the maximum token count for L2 before eviction to swap. Default is 5000 tokens. When L2 exceeds this limit, the entire L2 summary is moved to swap storage and L2 is cleared to start fresh.

func (*SegmentedMemory) SetSessionStore

func (sm *SegmentedMemory) SetSessionStore(store *SessionStore, sessionID string)

SetSessionStore enables the swap layer with database-backed long-term storage. When set, L2 summaries will be automatically evicted to swap when exceeding maxL2Tokens. This enables "forever conversations" by preventing unbounded L2 growth.

func (*SegmentedMemory) SetSharedMemory

func (sm *SegmentedMemory) SetSharedMemory(sharedMemory *storage.SharedMemoryStore)

SetSharedMemory sets the shared memory store for large data handling. When set, large tool results can be stored in shared memory to save context tokens.

func (*SegmentedMemory) SetTracer

func (sm *SegmentedMemory) SetTracer(tracer observability.Tracer)

SetTracer sets the observability tracer for error logging and metrics. Should be called after agent initialization to enable proper error reporting.

type Session

type Session = types.Session

type SessionCleanupHook

type SessionCleanupHook func(ctx context.Context, sessionID string)

SessionCleanupHook is called when a session is deleted. Used for cleanup tasks like releasing shared memory references. The hook receives the session ID being deleted.

type SessionStore

type SessionStore struct {
	// contains filtered or unexported fields
}

SessionStore provides persistent storage for sessions, messages, and tool executions. All database operations are traced to hawk for observability.

func NewSessionStore

func NewSessionStore(dbPath string, tracer observability.Tracer) (*SessionStore, error)

NewSessionStore creates a new SessionStore with SQLite persistence. For backward compatibility, encryption is disabled by default. Use NewSessionStoreWithConfig for encryption support.

func NewSessionStoreWithConfig

func NewSessionStoreWithConfig(config DBConfig, tracer observability.Tracer) (*SessionStore, error)

NewSessionStoreWithConfig creates a new SessionStore with optional encryption.

func (*SessionStore) Close

func (s *SessionStore) Close() error

Close closes the database connection.

func (*SessionStore) DeleteSession

func (s *SessionStore) DeleteSession(ctx context.Context, sessionID string) error

DeleteSession removes a session and all its associated data.

func (*SessionStore) GetStats

func (s *SessionStore) GetStats(ctx context.Context) (*Stats, error)

GetStats returns database statistics for monitoring.

func (*SessionStore) ListSessions

func (s *SessionStore) ListSessions(ctx context.Context) ([]string, error)

ListSessions returns all session IDs.

func (*SessionStore) LoadAgentSessions

func (s *SessionStore) LoadAgentSessions(ctx context.Context, agentID string) ([]string, error)

LoadAgentSessions loads all sessions for a given agent. Returns sessions where agent_id matches the provided agentID.

func (*SessionStore) LoadMemorySnapshots

func (s *SessionStore) LoadMemorySnapshots(ctx context.Context, sessionID string, snapshotType string, limit int) ([]MemorySnapshot, error)

LoadMemorySnapshots retrieves memory snapshots for a session. Returns snapshots in chronological order (oldest first). Limit controls the maximum number of snapshots to return (0 = all).

func (*SessionStore) LoadMessages

func (s *SessionStore) LoadMessages(ctx context.Context, sessionID string) ([]Message, error)

LoadMessages loads all messages for a session.

func (*SessionStore) LoadMessagesForAgent

func (s *SessionStore) LoadMessagesForAgent(ctx context.Context, agentID string) ([]Message, error)

LoadMessagesForAgent loads all messages for an agent across all its sessions. This includes messages from:

  • All sessions owned by this agent (agent_id = agentID)
  • Parent sessions (if the agent has a coordinator parent)

Results are filtered by session_context to include only relevant messages (coordinator, shared).

func (*SessionStore) LoadMessagesFromParentSession

func (s *SessionStore) LoadMessagesFromParentSession(ctx context.Context, sessionID string) ([]Message, error)

LoadMessagesFromParentSession loads messages from the parent session of a given session. This is used by sub-agents to see coordinator instructions. Returns empty slice if session has no parent.

func (*SessionStore) LoadSession

func (s *SessionStore) LoadSession(ctx context.Context, sessionID string) (*Session, error)

LoadSession loads a session from the database.

func (*SessionStore) RegisterCleanupHook

func (s *SessionStore) RegisterCleanupHook(hook SessionCleanupHook)

RegisterCleanupHook registers a callback to be invoked when sessions are deleted. This enables decoupled cleanup operations (e.g., releasing shared memory references) without tight coupling between SessionStore and other components. Thread-safe: Can be called from multiple goroutines.
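
For example, a hook that performs session-scoped cleanup (a minimal sketch; store is an existing *SessionStore):

	store.RegisterCleanupHook(func(ctx context.Context, sessionID string) {
		// Release anything keyed by this session, e.g. shared-memory references.
		log.Printf("session %s deleted; cleaning up", sessionID)
	})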

func (*SessionStore) SaveMemorySnapshot

func (s *SessionStore) SaveMemorySnapshot(ctx context.Context, sessionID, snapshotType, content string, tokenCount int) error

SaveMemorySnapshot persists a memory snapshot (L2 summary) to the database. This is used by the swap layer to archive L2 summaries when they exceed the token limit.

func (*SessionStore) SaveMessage

func (s *SessionStore) SaveMessage(ctx context.Context, sessionID string, msg Message) error

SaveMessage persists a message to the database.

func (*SessionStore) SaveSession

func (s *SessionStore) SaveSession(ctx context.Context, session *Session) error

SaveSession persists a session to the database.

func (*SessionStore) SaveToolExecution

func (s *SessionStore) SaveToolExecution(ctx context.Context, sessionID string, exec ToolExecution) error

SaveToolExecution persists a tool execution to the database.

func (*SessionStore) SearchFTS5

func (s *SessionStore) SearchFTS5(ctx context.Context, sessionID, query string, limit int) ([]Message, error)

SearchFTS5 searches message content using FTS5 full-text search with BM25 ranking. Returns messages sorted by relevance (highest BM25 score first).

Parameters:

  • sessionID: Filter results to specific session
  • query: Natural language search query (FTS5 MATCH syntax)
  • limit: Maximum number of results to return

Returns messages ordered by BM25 relevance score.
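
For example (store is an existing *SessionStore; the session ID and query are illustrative):

	msgs, err := store.SearchFTS5(ctx, "sess-123", "failed ETL load", 5)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d relevant messages\n", len(msgs))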

type SimpleCompressor

type SimpleCompressor struct{}

SimpleCompressor is a basic compressor that doesn't use an LLM. Useful for testing or when LLM integration isn't available.

func NewSimpleCompressor

func NewSimpleCompressor() *SimpleCompressor

NewSimpleCompressor creates a compressor that only does keyword extraction.

func (*SimpleCompressor) CompressMessages

func (c *SimpleCompressor) CompressMessages(ctx context.Context, messages []Message) (string, error)

CompressMessages performs simple keyword extraction.

func (*SimpleCompressor) IsEnabled

func (c *SimpleCompressor) IsEnabled() bool

IsEnabled always returns false for the simple compressor.

type SoftReminderConfig

type SoftReminderConfig struct {
	ToolExecutionThreshold int  // Threshold to start reminders (default: 10)
	StopThreshold          int  // Threshold to stop reminders (default: 20)
	Enabled                bool // Whether soft reminders are enabled (default: true)
}

SoftReminderConfig holds configuration for soft reminders.

func DefaultSoftReminderConfig

func DefaultSoftReminderConfig() SoftReminderConfig

DefaultSoftReminderConfig returns default soft reminder configuration.

type Stats

type Stats struct {
	SessionCount       int
	MessageCount       int
	ToolExecutionCount int
	TotalCostUSD       float64
	TotalTokens        int
}

Stats holds database statistics.

type StoredError

type StoredError struct {
	ID           string          // err_YYYYMMDD_HHMMSS_<random>
	Timestamp    time.Time       // When the error occurred
	SessionID    string          // Session that encountered this error
	ToolName     string          // Name of the tool that failed
	RawError     json.RawMessage // Original error in any format (no assumptions about structure)
	ShortSummary string          // First line or 100 chars for quick reference
}

StoredError represents a tool execution error in storage.

type SystemPromptFunc

type SystemPromptFunc func() string

SystemPromptFunc is a function that returns the system prompt for a new session. It can be used to dynamically load prompts from a PromptRegistry or other source.

type TokenBudget

type TokenBudget struct {
	MaxTokens      int
	UsedTokens     int
	ReservedTokens int // Reserved for output (e.g., 20000)
	// contains filtered or unexported fields
}

TokenBudget represents a token budget with usage tracking.

func NewTokenBudget

func NewTokenBudget(maxTokens, reservedForOutput int) *TokenBudget

NewTokenBudget creates a new token budget. For Claude Sonnet 4.5: 200K total, reserve 20K for output = 180K available for input.
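
A short usage sketch:

	tb := NewTokenBudget(200000, 20000) // 180K usable for input
	if tb.CanFit(1500) {
		tb.Use(1500)
	}
	used, available, total := tb.GetUsage()
	fmt.Printf("%d/%d tokens used, %d available (%.1f%%)\n", used, total, available, tb.UsagePercentage())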

func (*TokenBudget) AvailableTokens

func (tb *TokenBudget) AvailableTokens() int

AvailableTokens returns the number of tokens available for new content.

func (*TokenBudget) CanFit

func (tb *TokenBudget) CanFit(tokens int) bool

CanFit checks if a given number of tokens can fit in the budget.

func (*TokenBudget) Free

func (tb *TokenBudget) Free(tokens int)

Free returns tokens to the budget.

func (*TokenBudget) GetUsage

func (tb *TokenBudget) GetUsage() (used, available, total int)

GetUsage returns current usage statistics.

func (*TokenBudget) IsCritical

func (tb *TokenBudget) IsCritical() bool

IsCritical checks if usage is at critical levels (>85%).

func (*TokenBudget) IsNearLimit

func (tb *TokenBudget) IsNearLimit(thresholdPct float64) bool

IsNearLimit checks if usage is approaching budget limits. Returns true if usage is above the given percentage threshold.

func (*TokenBudget) NeedsWarning

func (tb *TokenBudget) NeedsWarning() bool

NeedsWarning checks if usage warrants a warning (>70%).

func (*TokenBudget) Reset

func (tb *TokenBudget) Reset()

Reset resets the used token count.

func (*TokenBudget) UsagePercentage

func (tb *TokenBudget) UsagePercentage() float64

UsagePercentage returns the percentage of budget used.

func (*TokenBudget) Use

func (tb *TokenBudget) Use(tokens int) bool

Use marks tokens as used. Returns false if budget exceeded.

type TokenBudgetConfig

type TokenBudgetConfig struct {
	MaxContextTokens     int     // Total context window (default: 200000)
	ReservedOutputTokens int     // Reserved for output (default: 20000)
	WarningThresholdPct  float64 // Warning threshold (default: 70.0)
	CriticalThresholdPct float64 // Critical threshold (default: 85.0)
	MaxOutputTokens      int     // Maximum output tokens (default: 8192)
	MinOutputTokens      int     // Minimum output tokens (default: 2048)
	OutputBudgetFraction float64 // Fraction of available for output (default: 0.5)
}

TokenBudgetConfig holds configuration for token budget management.

func DefaultTokenBudgetConfig

func DefaultTokenBudgetConfig() TokenBudgetConfig

DefaultTokenBudgetConfig returns default token budget configuration for Claude Sonnet 4.5.

type TokenCounter

type TokenCounter struct {
	// contains filtered or unexported fields
}

TokenCounter provides accurate token counting for LLM context management. Uses tiktoken with cl100k_base encoding (Claude-compatible approximation).

func GetTokenCounter

func GetTokenCounter() *TokenCounter

GetTokenCounter returns a singleton token counter instance.
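
For example:

	tc := GetTokenCounter()
	n := tc.CountTokens("SELECT region, SUM(amount) FROM sales GROUP BY region")
	fmt.Println("estimated tokens:", n)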

func (*TokenCounter) CountTokens

func (tc *TokenCounter) CountTokens(text string) int

CountTokens returns the accurate token count for a given text.

func (*TokenCounter) CountTokensMultiple

func (tc *TokenCounter) CountTokensMultiple(texts ...string) int

CountTokensMultiple counts tokens across multiple text segments.

func (*TokenCounter) EstimateMessagesTokens

func (tc *TokenCounter) EstimateMessagesTokens(messages []Message) int

EstimateMessagesTokens estimates token count for a slice of messages. Includes formatting overhead for message structure.

func (*TokenCounter) EstimateToolResultTokens

func (tc *TokenCounter) EstimateToolResultTokens(results []CachedToolResult) int

EstimateToolResultTokens estimates token count for cached tool results.

type ToolCall

type ToolCall = types.ToolCall

type ToolExecution

type ToolExecution struct {
	ToolName string
	Input    map[string]interface{}
	Result   *shuttle.Result
	Error    error
}

ToolExecution records a tool execution.

type ToolsConfigYAML

type ToolsConfigYAML struct {
	MCP     []MCPToolConfigYAML    `yaml:"mcp"`
	Custom  []CustomToolConfigYAML `yaml:"custom"`
	Builtin []string               `yaml:"builtin"`
}

ToolsConfigYAML represents tools configuration in YAML

type Usage

type Usage = types.Usage
