Documentation
Overview ¶
Package provider implements an abstraction layer for interacting with AI model providers (like OpenAI, Anthropic, etc.) in a consistent way. It defines interfaces and types for streaming AI completions while handling provider-specific implementation details.
Design decisions:
- Provider abstraction: Single interface that different AI providers can implement
- Streaming first: Built around streaming responses for real-time interaction
- Type-safe events: Generic types ensure compile-time correctness of response handling
- Structured metadata: Each event includes run/turn IDs and timestamps for tracking
- Error handling: Dedicated error type that preserves context and metadata
- Memory management: Integration with short-term memory for context preservation
Key concepts:
- Provider: Interface defining the contract for AI model providers
- StreamEvent: Base interface for all streaming events (chunks, responses, errors)
- CompletionParams: Configuration for chat completion requests
- Checkpoint: Captures conversation state for context management
The streaming architecture uses four main event types:
- Delim: Delimiter events marking stream boundaries
- Chunk: Incremental response fragments
- Response: Complete responses with checkpoints
- Error: Error events with preserved context
Example usage:
provider := openai.NewProvider(config)
params := CompletionParams{
    RunID:        uuid.New(),
    Instructions: "You are a helpful assistant",
    Stream:       true,
    Tools:        []tool.Definition{...},
}
events, err := provider.ChatCompletion(ctx, params)
if err != nil {
    return err
}
for event := range events {
    switch e := event.(type) {
    case Delim:
        // Handle stream boundary marker
    case Chunk[messages.AssistantMessage]:
        // Handle incremental response fragment
    case Response[messages.AssistantMessage]:
        // Handle complete response and its checkpoint
    case Error:
        // Handle error with preserved context
    }
}
The package is designed to be extensible, allowing new providers to be added by implementing the Provider interface while maintaining consistent behavior and error handling across different AI model providers.
Index ¶
- func ChunkToMessage[T messages.Response, M messages.ModelMessage](dst *messages.Message[M], src Chunk[T])
- func ResponseToMessage[T messages.Response, M messages.ModelMessage](dst *messages.Message[M], src Response[T])
- type Chunk
- type CompletionParams
- type Delim
- type Error
- type Provider
- type Response
- type StreamEvent
- type StructuredOutput
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ChunkToMessage ¶ added in v0.0.10
func ChunkToMessage[T messages.Response, M messages.ModelMessage](dst *messages.Message[M], src Chunk[T])
func ResponseToMessage ¶ added in v0.0.10
func ResponseToMessage[T messages.Response, M messages.ModelMessage](dst *messages.Message[M], src Response[T])
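The two helpers above follow a common accumulation pattern: a chunk appends an incremental fragment to a message under construction, while a response replaces it with the final payload. A minimal stdlib-only sketch of that pattern (the chunk and message types here are hypothetical stand-ins, not the package's own generics):

```go
package main

import "fmt"

// chunk is a hypothetical stand-in for Chunk[T]: one incremental fragment.
type chunk struct{ Content string }

// message is a hypothetical stand-in for messages.Message[M].
type message struct{ Content string }

// chunkToMessage mirrors the shape of ChunkToMessage: append the fragment to dst.
func chunkToMessage(dst *message, src chunk) {
	dst.Content += src.Content
}

// responseToMessage mirrors the shape of ResponseToMessage: replace dst with the final text.
func responseToMessage(dst *message, final string) {
	dst.Content = final
}

func main() {
	var msg message
	for _, c := range []chunk{{"Hel"}, {"lo, "}, {"world"}} {
		chunkToMessage(&msg, c)
	}
	fmt.Println(msg.Content) // accumulated from chunks: Hello, world
	responseToMessage(&msg, "Hello, world!")
	fmt.Println(msg.Content) // replaced by the final response: Hello, world!
}
```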
Types ¶
type Chunk ¶ added in v0.0.10
type Chunk[T messages.Response] struct {
	RunID     uuid.UUID       `json:"run_id"`
	TurnID    uuid.UUID       `json:"turn_id"`
	Chunk     T               `json:"chunk"`
	Timestamp strfmt.DateTime `json:"timestamp,omitempty"`
	Meta      gjson.Result    `json:"meta,omitempty"`
}
func (Chunk[T]) MarshalJSON ¶ added in v0.0.10
MarshalJSON implements custom JSON marshaling for Chunk[T]
func (*Chunk[T]) UnmarshalJSON ¶ added in v0.0.10
UnmarshalJSON implements custom JSON unmarshaling for Chunk[T]
type CompletionParams ¶ added in v0.0.10
type CompletionParams struct {
	// RunID uniquely identifies this completion request for tracking and debugging
	RunID uuid.UUID
	// Instructions provide the system prompt or role instructions for the AI
	Instructions string
	// Thread contains the conversation history and context
	Thread *shorttermmemory.Aggregator
	// Stream indicates whether to receive responses as a stream of chunks
	// When true, responses come incrementally. When false, wait for complete response.
	Stream bool
	// ResponseSchema defines the structure for formatted output
	// When provided, the AI will attempt to format its response according to this schema
	ResponseSchema *StructuredOutput
	// Model specifies which AI model to use for this completion
	// It must provide its name and associated provider
	Model interface {
		Name() string
		Provider() Provider
	}
	// Tools defines the available functions/capabilities the AI can use
	Tools []tool.Definition
	// contains filtered or unexported fields
}
    CompletionParams encapsulates all parameters needed for a chat completion request. It provides configuration for how the AI model should process the request and structure its response.
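The Model field is an anonymous interface, so any type with matching Name and Provider methods satisfies it without importing a named interface type. A stdlib-only sketch of that structural-typing pattern (the params and gpt4 types are hypothetical stand-ins, and the string return is a simplification of the real Provider return type):

```go
package main

import "fmt"

// params mimics CompletionParams' anonymous-interface Model field.
type params struct {
	Model interface {
		Name() string
		Provider() string // stand-in: the real field returns a Provider value
	}
}

// gpt4 is a hypothetical model type; its methods satisfy the
// anonymous interface implicitly, with no explicit declaration.
type gpt4 struct{}

func (gpt4) Name() string     { return "gpt-4o" }
func (gpt4) Provider() string { return "openai" }

func main() {
	p := params{Model: gpt4{}}
	fmt.Println(p.Model.Name(), p.Model.Provider())
}
```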
type Delim ¶ added in v0.0.10
type Delim struct {
	RunID  uuid.UUID `json:"run_id"`
	TurnID uuid.UUID `json:"turn_id"`
	Delim  string    `json:"delim"`
}
func (Delim) MarshalJSON ¶ added in v0.0.10
MarshalJSON implements custom JSON marshaling for Delim
func (*Delim) UnmarshalJSON ¶ added in v0.0.10
UnmarshalJSON implements custom JSON unmarshaling for Delim
type Error ¶ added in v0.0.10
type Error struct {
	RunID     uuid.UUID       `json:"run_id"`
	TurnID    uuid.UUID       `json:"turn_id"`
	Err       error           `json:"error"`
	Timestamp strfmt.DateTime `json:"timestamp,omitempty"`
	Meta      gjson.Result    `json:"meta,omitempty"`
}
func (Error) MarshalJSON ¶ added in v0.0.10
MarshalJSON implements custom JSON marshaling for Error
func (*Error) UnmarshalJSON ¶ added in v0.0.10
UnmarshalJSON implements custom JSON unmarshaling for Error
type Provider ¶
type Provider interface {
	ChatCompletion(context.Context, CompletionParams) (<-chan StreamEvent, error)
}
    Provider defines the interface for AI model providers (e.g., OpenAI, Anthropic). Implementations of this interface handle the specifics of communicating with different AI services while maintaining a consistent interface for the rest of the application.
type Response ¶ added in v0.0.10
type Response[T messages.Response] struct {
	RunID      uuid.UUID                  `json:"run_id"`
	TurnID     uuid.UUID                  `json:"turn_id"`
	Checkpoint shorttermmemory.Checkpoint `json:"checkpoint"`
	Response   T                          `json:"response"`
	Timestamp  strfmt.DateTime            `json:"timestamp,omitempty"`
	Meta       gjson.Result               `json:"meta,omitempty"`
}
func (Response[T]) MarshalJSON ¶ added in v0.0.10
MarshalJSON implements custom JSON marshaling for Response[T]
func (*Response[T]) UnmarshalJSON ¶ added in v0.0.10
UnmarshalJSON implements custom JSON unmarshaling for Response[T]
type StreamEvent ¶ added in v0.0.10
type StreamEvent interface {
	// contains filtered or unexported methods
}
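The unexported method makes StreamEvent a sealed interface: only types declared inside this package can implement it, so a type switch over events covers every possible case. A stdlib-only sketch of the pattern (the names here are illustrative, not the package's):

```go
package main

import "fmt"

// streamEvent is sealed: the unexported marker method means only
// types declared in this package can satisfy the interface.
type streamEvent interface{ streamEvent() }

type delim struct{ Delim string }
type errEvent struct{ Msg string }

func (delim) streamEvent()    {}
func (errEvent) streamEvent() {}

// describe can switch over the known cases because no external
// type can implement streamEvent.
func describe(ev streamEvent) string {
	switch e := ev.(type) {
	case delim:
		return "delim:" + e.Delim
	case errEvent:
		return "error:" + e.Msg
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(delim{Delim: "start"}))   // delim:start
	fmt.Println(describe(errEvent{Msg: "boom"}))   // error:boom
}
```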
type StructuredOutput ¶ added in v0.1.2
type StructuredOutput struct {
	// Name identifies this output format
	Name string
	// Description explains the purpose and usage of this format
	Description string
	// Schema defines the JSON structure that responses should follow
	Schema *jsonschema.Schema
}
    StructuredOutput defines a schema for formatted AI responses. This allows requesting responses in specific formats for easier parsing and validation.