Documentation ¶
Index ¶
- Constants
- type AnthropicConfig
- type AnthropicProvider
- type FallbackOptions
- type FallbackSwitchEvent
- type Message
- type MessageAttachment
- type MiddlewareState
- type Model
- func MustProvider(p Provider) Model
- func NewAnthropic(cfg AnthropicConfig) (Model, error)
- func NewOpenAI(cfg OpenAIConfig) (Model, error)
- func NewOpenAIResponses(cfg OpenAIConfig) (Model, error)
- func WrapWithFallback(base Model, fallbacks []string) Model
- func WrapWithFallbackWithOptions(base Model, fallbacks []string, opts FallbackOptions) Model
- type OpenAIConfig
- type OpenAIProvider
- type Provider
- type ProviderFunc
- type Request
- type Response
- type StreamHandler
- type StreamOnlyModel
- type StreamOnlyProvider
- type StreamResult
- type ToolCall
- type ToolDefinition
- type Usage
Constants ¶
const (
// MiddlewareStateKey exposes the context key so other packages can attach middleware state.
MiddlewareStateKey = middlewareStateKey
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AnthropicConfig ¶
type AnthropicConfig struct {
APIKey string
BaseURL string
Model string
MaxTokens int
MaxRetries int
System string
Temperature *float64
HTTPClient *http.Client
}
AnthropicConfig wires a plain anthropic-sdk-go client into the Model interface.
type AnthropicProvider ¶
type AnthropicProvider struct {
APIKey string
BaseURL string
ModelName string
MaxTokens int
MaxRetries int
System string
Temperature *float64
CacheTTL time.Duration
// contains filtered or unexported fields
}
AnthropicProvider caches anthropic clients with optional TTL.
type FallbackOptions ¶ added in v0.1.7
type FallbackOptions struct {
// PrimaryModel is used as display/source model when Request.Model is empty.
PrimaryModel string
// OnSwitch is called before each fallback switch attempt.
OnSwitch func(FallbackSwitchEvent)
}
FallbackOptions controls optional fallback wrapper behavior.
type FallbackSwitchEvent ¶ added in v0.1.7
FallbackSwitchEvent records a model switch during fallback retry.
type Message ¶
type Message struct {
Role string
Content string
ToolCalls []ToolCall
Attachments []MessageAttachment // Optional: image/audio/file attachments
}
Message represents a single conversational turn. Tool calls emitted by the assistant are kept on ToolCalls.
type MessageAttachment ¶
type MessageAttachment struct {
Type string // "image", "audio", or "file"
Data string // base64-encoded data
MimeType string // e.g., "image/jpeg", "audio/wav"
SourceType string // "base64" or "url"
}
MessageAttachment represents a multimodal attachment in a message.
type MiddlewareState ¶
MiddlewareState is the minimal contract required for model providers to surface request/response data to middleware consumers without depending on the middleware package (which would cause an import cycle).
type Model ¶
type Model interface {
Complete(ctx context.Context, req Request) (*Response, error)
CompleteStream(ctx context.Context, req Request, cb StreamHandler) error
}
Model is the provider-agnostic interface used by the agent runtime.
func MustProvider ¶
MustProvider materialises a model immediately and panics on failure.
func NewAnthropic ¶
func NewAnthropic(cfg AnthropicConfig) (Model, error)
NewAnthropic constructs a production-ready Anthropic-backed Model.
func NewOpenAI ¶
func NewOpenAI(cfg OpenAIConfig) (Model, error)
NewOpenAI constructs a production-ready OpenAI-backed Model.
func NewOpenAIResponses ¶
func NewOpenAIResponses(cfg OpenAIConfig) (Model, error)
NewOpenAIResponses constructs an OpenAI model using the Responses API.
func WrapWithFallback ¶ added in v0.1.7
WrapWithFallback returns a model wrapper that retries with fallback model IDs. The primary call uses the incoming request model unchanged; fallbacks override Request.Model in order. Empty/duplicate fallback entries are ignored.
func WrapWithFallbackWithOptions ¶ added in v0.1.7
func WrapWithFallbackWithOptions(base Model, fallbacks []string, opts FallbackOptions) Model
WrapWithFallbackWithOptions behaves like WrapWithFallback and accepts observer hooks for switch notifications.
type OpenAIConfig ¶
type OpenAIConfig struct {
APIKey string
BaseURL string // Optional: for Azure or proxies
Model string // e.g., "gpt-4o", "gpt-4-turbo"
MaxTokens int
MaxRetries int
System string
Temperature *float64
HTTPClient *http.Client
UseResponses bool // true = /responses API, false = /chat/completions
}
OpenAIConfig configures the OpenAI-backed Model.
type OpenAIProvider ¶
type OpenAIProvider struct {
APIKey string
BaseURL string // Optional: for Azure or proxies
ModelName string
MaxTokens int
MaxRetries int
System string
Temperature *float64
CacheTTL time.Duration
// contains filtered or unexported fields
}
OpenAIProvider caches OpenAI clients with optional TTL.
type ProviderFunc ¶
ProviderFunc is an adapter to allow use of ordinary functions as providers.
type Request ¶
type Request struct {
Messages []Message
Tools []ToolDefinition
System string
Model string
SessionID string
MaxTokens int
Temperature *float64
EnablePromptCache bool // Enable prompt caching for system and recent messages
}
Request drives a single model completion.
type StreamHandler ¶
type StreamHandler func(StreamResult) error
StreamHandler consumes streaming updates in order.
type StreamOnlyModel ¶
type StreamOnlyModel struct {
Inner Model
}
StreamOnlyModel wraps a Model so that Complete() internally uses CompleteStream() to collect the response. This works around API proxies that return empty tool_use.input in non-streaming mode but work correctly in streaming mode.
func NewStreamOnlyModel ¶
func NewStreamOnlyModel(inner Model) *StreamOnlyModel
NewStreamOnlyModel returns a wrapper that forces all completions through the streaming path.
func (*StreamOnlyModel) Complete ¶
Complete calls CompleteStream internally and assembles the final Response.
func (*StreamOnlyModel) CompleteStream ¶
func (s *StreamOnlyModel) CompleteStream(ctx context.Context, req Request, cb StreamHandler) error
CompleteStream delegates directly to the inner model.
type StreamOnlyProvider ¶
type StreamOnlyProvider struct {
Inner Provider
}
StreamOnlyProvider wraps a Provider so that the Model it returns always routes Complete() through CompleteStream(). Use this when the upstream API proxy only returns correct tool_use.input in streaming mode.
type StreamResult ¶
StreamResult delivers incremental updates during streaming calls.
type ToolCall ¶
type ToolCall struct {
ID string
Name string
Arguments map[string]any
Result string // Result stores the execution result for this specific tool call
}
ToolCall captures a function-style invocation generated by the model.
type ToolDefinition ¶
ToolDefinition describes a callable function exposed to the model.