types

package
v1.0.2
Published: Jan 15, 2026 License: Apache-2.0 Imports: 5 Imported by: 0

Documentation

Overview

Package types contains shared types used across the loom framework. This package breaks import cycles by providing common types that both pkg/agent and pkg/llm packages depend on.

Constants

This section is empty.

Variables

This section is empty.

Functions

func SupportsStreaming

func SupportsStreaming(provider LLMProvider) bool

SupportsStreaming reports whether a provider supports token streaming, i.e., whether it implements StreamingLLMProvider.
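
A minimal usage sketch, choosing between Chat and ChatStream (see StreamingLLMProvider below) at runtime. The import path and the respond helper are illustrative, not part of this package:

package client

import (
	"context"
	"fmt"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// respond is a hypothetical helper: it streams when the provider
// supports it and falls back to a blocking Chat call otherwise.
func respond(ctx context.Context, p types.LLMProvider, msgs []types.Message) (*types.LLMResponse, error) {
	if types.SupportsStreaming(p) {
		// The assertion is safe: SupportsStreaming just reported
		// that p implements StreamingLLMProvider.
		sp := p.(types.StreamingLLMProvider)
		return sp.ChatStream(ctx, msgs, nil, func(token string) {
			fmt.Print(token) // stream tokens to stdout
		})
	}
	return p.Chat(ctx, msgs, nil)
}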

Types

type ContentBlock

type ContentBlock struct {
	// Type is the content type ("text" or "image")
	Type string

	// Text contains text content (when Type is "text")
	Text string

	// Image contains image content (when Type is "image")
	Image *ImageContent
}

ContentBlock represents a piece of content in a multi-modal message. A block holds either text or image content.

type Context

type Context interface {
	// Embed standard context
	context.Context

	// Session returns the current session
	Session() *Session

	// Tracer returns the observability tracer
	Tracer() observability.Tracer

	// ProgressCallback returns the progress callback (may be nil)
	ProgressCallback() ProgressCallback
}

Context is an enhanced context.Context with agent-specific methods. It wraps the standard context to expose the current session, the observability tracer, and the progress callback.
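
As a sketch, a caller might use the agent Context to emit progress while honoring the nil-callback caveat. The package and helper names are illustrative:

package tools

import (
	"time"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// reportStage is a hypothetical helper that emits a progress event
// through the agent context, guarding the possibly-nil callback.
func reportStage(ctx types.Context, stage types.ExecutionStage, msg string) {
	if cb := ctx.ProgressCallback(); cb != nil {
		cb(types.ProgressEvent{
			Stage:     stage,
			Message:   msg,
			Timestamp: time.Now(),
		})
	}
}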

type ExecutionStage

type ExecutionStage string

ExecutionStage represents the current stage of agent execution.

const (
	StagePatternSelection ExecutionStage = "pattern_selection"
	StageSchemaDiscovery  ExecutionStage = "schema_discovery"
	StageLLMGeneration    ExecutionStage = "llm_generation"
	StageToolExecution    ExecutionStage = "tool_execution"
	StageSynthesis        ExecutionStage = "synthesis"         // Synthesizing tool execution results
	StageHumanInTheLoop   ExecutionStage = "human_in_the_loop" // Waiting for human response
	StageGuardrailCheck   ExecutionStage = "guardrail_check"
	StageSelfCorrection   ExecutionStage = "self_correction"
	StageCompleted        ExecutionStage = "completed"
	StageFailed           ExecutionStage = "failed"
)

type HITLRequestInfo

type HITLRequestInfo struct {
	// RequestID is the unique HITL request identifier
	RequestID string

	// Question is the question being asked to the human
	Question string

	// RequestType is the type of request (approval, decision, input, review)
	RequestType string

	// Priority is the request priority (low, normal, high, critical)
	Priority string

	// Timeout is how long to wait for human response
	Timeout time.Duration

	// Context provides additional context for the request
	Context map[string]interface{}
}

HITLRequestInfo carries information about a human-in-the-loop (HITL) request during streaming.
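
For illustration, an agent pausing for approval might publish an event like the following; all field values here are made up:

package agentdemo

import (
	"time"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// approvalEvent builds a progress event announcing that execution is
// blocked on a human decision.
func approvalEvent(question string) types.ProgressEvent {
	return types.ProgressEvent{
		Stage:     types.StageHumanInTheLoop,
		Message:   "waiting for human approval",
		Timestamp: time.Now(),
		HITLRequest: &types.HITLRequestInfo{
			RequestID:   "hitl-123", // hypothetical ID
			Question:    question,
			RequestType: "approval",
			Priority:    "high",
			Timeout:     5 * time.Minute,
		},
	}
}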

type ImageContent

type ImageContent struct {
	// Type is always "image"
	Type string

	// Source contains the image data
	Source ImageSource
}

ImageContent represents an image in a message.

type ImageSource

type ImageSource struct {
	// Type is the source type ("base64" or "url")
	Type string

	// MediaType is the MIME type ("image/jpeg", "image/png", "image/gif", "image/webp")
	MediaType string

	// Data contains base64-encoded image data (when Type is "base64")
	Data string

	// URL contains the image URL (when Type is "url")
	URL string
}

ImageSource contains the actual image data.
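
Putting ContentBlock, ImageContent, and ImageSource together, a multi-modal user message might be built as follows. This is a sketch: the file name and import path are placeholders:

package main

import (
	"encoding/base64"
	"fmt"
	"os"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

func main() {
	raw, err := os.ReadFile("chart.png") // placeholder file
	if err != nil {
		panic(err)
	}
	msg := types.Message{
		Role: "user",
		ContentBlocks: []types.ContentBlock{
			{Type: "text", Text: "What does this chart show?"},
			{
				Type: "image",
				Image: &types.ImageContent{
					Type: "image",
					Source: types.ImageSource{
						Type:      "base64",
						MediaType: "image/png",
						Data:      base64.StdEncoding.EncodeToString(raw),
					},
				},
			},
		},
	}
	fmt.Printf("message with %d blocks\n", len(msg.ContentBlocks))
}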

type LLMProvider

type LLMProvider interface {
	// Chat sends a conversation to the LLM and returns the response
	Chat(ctx context.Context, messages []Message, tools []shuttle.Tool) (*LLMResponse, error)

	// Name returns the provider name
	Name() string

	// Model returns the model identifier
	Model() string
}

LLMProvider defines the interface for LLM providers. This allows pluggable LLM backends (Anthropic, Bedrock, Ollama, Azure, etc.).

Note: The Chat method accepts context.Context (not agent.Context) to avoid import cycles. Agent-specific context (session, tracer, progress) should be handled at the agent layer, not in LLM providers.
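
A stub implementation can be useful in tests. This sketch assumes illustrative import paths for both the types and shuttle packages:

package llmtest

import (
	"context"

	"github.com/example/loom/pkg/shuttle" // hypothetical import path
	"github.com/example/loom/pkg/types"   // hypothetical import path
)

// echoProvider is a toy LLMProvider that echoes the last message back.
type echoProvider struct{}

func (echoProvider) Chat(ctx context.Context, messages []types.Message, tools []shuttle.Tool) (*types.LLMResponse, error) {
	reply := ""
	if n := len(messages); n > 0 {
		reply = "echo: " + messages[n-1].Content
	}
	return &types.LLMResponse{
		Content:    reply,
		StopReason: "end_turn",
	}, nil
}

func (echoProvider) Name() string  { return "echo" }
func (echoProvider) Model() string { return "echo-1" }

// Compile-time check that echoProvider satisfies LLMProvider.
var _ types.LLMProvider = echoProvider{}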

type LLMResponse

type LLMResponse struct {
	// Content is the text response (if no tool calls)
	Content string

	// ToolCalls contains requested tool executions
	ToolCalls []ToolCall

	// StopReason indicates why the LLM stopped
	StopReason string

	// Usage tracks token usage
	Usage Usage

	// Metadata contains provider-specific metadata
	Metadata map[string]interface{}

	// Thinking contains the agent's internal reasoning process
	// (for models that support extended thinking like Claude with thinking blocks)
	Thinking string
}

LLMResponse represents a response from the LLM.
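
Callers typically branch on whether the model requested tools or produced a final answer. A sketch, with illustrative names:

package dispatch

import (
	"fmt"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// handle is a hypothetical dispatcher for one LLM turn.
func handle(resp *types.LLMResponse) {
	if len(resp.ToolCalls) > 0 {
		for _, call := range resp.ToolCalls {
			// Tool execution itself lives outside this package;
			// here we only show the dispatch shape.
			fmt.Printf("tool requested: %s(%v)\n", call.Name, call.Input)
		}
		return
	}
	fmt.Println(resp.Content)
}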

type Message

type Message struct {
	// ID is the unique message identifier (from database)
	ID string

	// Role is the message sender (user, assistant, tool)
	Role string

	// Content is the message text (for text-only messages, backward compatible)
	Content string

	// ContentBlocks contains multi-modal content (text and/or images)
	// If present, this takes precedence over Content field
	ContentBlocks []ContentBlock

	// ToolCalls contains tool invocations (if role is assistant)
	ToolCalls []ToolCall

	// ToolUseID is the ID of the tool_use block this result corresponds to (if role is tool)
	// This is used by LLM providers like Bedrock/Anthropic to match tool results to tool requests
	ToolUseID string

	// ToolResult contains tool execution result (if role is tool)
	ToolResult *shuttle.Result

	// SessionContext identifies the context (direct, coordinator, shared)
	// Used for cross-session memory filtering
	SessionContext SessionContext

	// Timestamp when the message was created
	Timestamp time.Time

	// TokenCount for cost tracking
	TokenCount int

	// CostUSD for cost tracking
	CostUSD float64
}

Message represents a single message in the conversation.
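
As a sketch, a tool-role reply that links back to the assistant's tool_use block via ToolUseID might be built like this; import paths are hypothetical:

package history

import (
	"time"

	"github.com/example/loom/pkg/shuttle" // hypothetical import path
	"github.com/example/loom/pkg/types"   // hypothetical import path
)

// toolResultMessage wraps a tool execution result as a conversation
// message, linking it to the originating ToolCall by ID.
func toolResultMessage(call types.ToolCall, result *shuttle.Result) types.Message {
	return types.Message{
		Role:       "tool",
		ToolUseID:  call.ID,
		ToolResult: result,
		Timestamp:  time.Now(),
	}
}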

type ProgressCallback

type ProgressCallback func(event ProgressEvent)

ProgressCallback is called whenever agent execution makes progress, allowing progress updates to be streamed to clients.

type ProgressEvent

type ProgressEvent struct {
	// Stage is the current execution stage
	Stage ExecutionStage

	// Progress is the completion percentage (0-100)
	Progress int32

	// Message is a human-readable description of current activity
	Message string

	// ToolName is the tool being executed (if applicable)
	ToolName string

	// Timestamp when this event occurred
	Timestamp time.Time

	// HITLRequest contains HITL request details (only when Stage == StageHumanInTheLoop)
	HITLRequest *HITLRequestInfo

	// PartialContent is the accumulated content so far during streaming
	PartialContent string

	// IsTokenStream indicates if this event is a token streaming update
	IsTokenStream bool

	// TokenCount is the running count of tokens received
	TokenCount int32

	// TTFT is the time to first token in milliseconds (0 if not applicable)
	TTFT int64
}

ProgressEvent represents a progress update during agent execution.
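
A client-side callback might separate token-stream updates from stage transitions. A sketch, with illustrative output formatting:

package clientdemo

import (
	"fmt"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// newPrinter returns a ProgressCallback that prints token counts on
// one line and stage changes on their own lines.
func newPrinter() types.ProgressCallback {
	return func(ev types.ProgressEvent) {
		switch {
		case ev.IsTokenStream:
			fmt.Printf("\r%d tokens", ev.TokenCount)
		case ev.Stage == types.StageHumanInTheLoop && ev.HITLRequest != nil:
			fmt.Printf("\nHITL (%s): %s\n", ev.HITLRequest.Priority, ev.HITLRequest.Question)
		default:
			fmt.Printf("\n[%s] %d%% %s\n", ev.Stage, ev.Progress, ev.Message)
		}
	}
}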

type SegmentedMemoryInterface

type SegmentedMemoryInterface interface {
	AddMessage(msg Message)
	GetMessagesForLLM() []Message
}

SegmentedMemoryInterface defines the interface for segmented memory. This allows Session to work with segmented memory without importing pkg/agent.
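
The real tiered (ROM/Kernel/L1/L2) implementation lives in pkg/agent. As a toy illustration of the contract only, a fixed-size sliding window also satisfies the interface:

package memdemo

import "github.com/example/loom/pkg/types" // hypothetical import path

// windowMemory keeps only the most recent max messages.
type windowMemory struct {
	max  int
	msgs []types.Message
}

func (w *windowMemory) AddMessage(msg types.Message) {
	w.msgs = append(w.msgs, msg)
	if len(w.msgs) > w.max {
		w.msgs = w.msgs[len(w.msgs)-w.max:]
	}
}

func (w *windowMemory) GetMessagesForLLM() []types.Message {
	// Return a copy so callers cannot mutate internal state.
	out := make([]types.Message, len(w.msgs))
	copy(out, w.msgs)
	return out
}

// Compile-time check that windowMemory satisfies the interface.
var _ types.SegmentedMemoryInterface = (*windowMemory)(nil)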

type Session

type Session struct {

	// ID is the unique session identifier
	ID string

	// AgentID identifies which agent owns this session (for cross-session memory)
	// Example: "coordinator-agent", "analyzer-sub-agent"
	// This enables agent-scoped memory where a sub-agent can access messages
	// from multiple sessions (coordinator parent + direct user sessions)
	AgentID string

	// ParentSessionID links sub-agent sessions to their coordinator session
	// NULL for top-level sessions (direct user interaction)
	// Set for sub-agent sessions created by workflow coordinators
	ParentSessionID string

	// Messages is the conversation history (flat, for backward compatibility)
	Messages []Message

	// SegmentedMem is the tiered memory manager (ROM/Kernel/L1/L2)
	// If nil, falls back to flat Messages list
	// Note: SegmentedMemory type will remain in pkg/agent
	SegmentedMem interface{} // Keep as interface{} to avoid circular dependency

	// FailureTracker tracks consecutive tool failures for escalation
	// Note: consecutiveFailureTracker type will remain in pkg/agent
	FailureTracker interface{} // Keep as interface{} to avoid circular dependency

	// Context holds session-specific context (database, table, etc.)
	Context map[string]interface{}

	// CreatedAt is when the session was created
	CreatedAt time.Time

	// UpdatedAt is when the session was last updated
	UpdatedAt time.Time

	// TotalCostUSD is the accumulated cost for this session
	TotalCostUSD float64

	// TotalTokens is the accumulated token usage
	TotalTokens int
	// contains filtered or unexported fields
}

Session represents a conversation session with history and context. Thread-safe: All methods can be called concurrently.

func (*Session) AddMessage

func (s *Session) AddMessage(msg Message)

AddMessage adds a message to the session history. If SegmentedMem is configured, it uses tiered memory management; otherwise it falls back to the flat message list.

func (*Session) GetMessages

func (s *Session) GetMessages() []Message

GetMessages returns a copy of the conversation history. If SegmentedMem is configured, it returns the optimized context window (ROM + Kernel + L1 + L2); otherwise it returns the flat message list.
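
A basic round-trip through a session, assuming zero-value construction is valid (the package may provide a constructor; field values are illustrative):

package main

import (
	"fmt"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

func main() {
	sess := &types.Session{
		ID:      "sess-1",
		AgentID: "coordinator-agent",
	}
	sess.AddMessage(types.Message{Role: "user", Content: "hello"})
	for _, m := range sess.GetMessages() {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}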

type SessionContext

type SessionContext string

SessionContext identifies the context in which a message was created. Used for cross-session memory filtering.

const (
	// SessionContextDirect indicates message is in direct session (user <-> agent)
	SessionContextDirect SessionContext = "direct"

	// SessionContextCoordinator indicates message is from coordinator to sub-agent
	SessionContextCoordinator SessionContext = "coordinator"

	// SessionContextShared indicates message is visible across sessions
	// (e.g., sub-agent response visible in both coordinator and direct sessions)
	SessionContextShared SessionContext = "shared"
)

type StreamingLLMProvider

type StreamingLLMProvider interface {
	LLMProvider

	// ChatStream streams tokens as they're generated from the LLM.
	// Returns the complete LLMResponse after the stream finishes.
	// Calls tokenCallback for each token/chunk received from the LLM.
	// The callback is called synchronously and should not block.
	ChatStream(ctx context.Context, messages []Message, tools []shuttle.Tool,
		tokenCallback TokenCallback) (*LLMResponse, error)
}

StreamingLLMProvider extends LLMProvider with token streaming support. Providers implement this interface if they support real-time token streaming. Use the SupportsStreaming helper to check if a provider implements this interface.
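
Because the callback runs synchronously on the provider's stream loop, heavy consumers can hand tokens to a buffered channel instead of doing work inline. A sketch; dropping tokens on overflow is a deliberate trade-off here, and the helper name is illustrative:

package streamdemo

import (
	"context"
	"fmt"

	"github.com/example/loom/pkg/types" // hypothetical import path
)

// streamAndPrint drains tokens on a separate goroutine so the
// provider's callback never blocks.
func streamAndPrint(ctx context.Context, sp types.StreamingLLMProvider, msgs []types.Message) (*types.LLMResponse, error) {
	tokens := make(chan string, 256)
	done := make(chan struct{})
	go func() {
		defer close(done)
		for t := range tokens {
			fmt.Print(t)
		}
	}()
	resp, err := sp.ChatStream(ctx, msgs, nil, func(token string) {
		select {
		case tokens <- token:
		default: // drop rather than block the stream loop
		}
	})
	close(tokens)
	<-done
	return resp, err
}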

type TokenCallback

type TokenCallback func(token string)

TokenCallback is called for each token/chunk during streaming. Implementations should be lightweight and non-blocking.

type ToolCall

type ToolCall struct {
	// ID is a unique identifier for this tool call
	ID string

	// Name is the tool name
	Name string

	// Input contains the tool parameters as JSON
	Input map[string]interface{}
}

ToolCall represents a tool invocation by the LLM.

type Usage

type Usage struct {
	InputTokens  int
	OutputTokens int
	TotalTokens  int
	CostUSD      float64
}

Usage tracks LLM token usage and costs.
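
Summing usage across turns is straightforward; a sketch:

package costdemo

import "github.com/example/loom/pkg/types" // hypothetical import path

// sumUsage folds per-response usage into a single total.
func sumUsage(usages []types.Usage) types.Usage {
	var total types.Usage
	for _, u := range usages {
		total.InputTokens += u.InputTokens
		total.OutputTokens += u.OutputTokens
		total.TotalTokens += u.TotalTokens
		total.CostUSD += u.CostUSD
	}
	return total
}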
