ai

package
v0.2.2
Published: Apr 3, 2026 License: MIT Imports: 20 Imported by: 0

Documentation

Index

Constants

View Source
const LargeArgumentThreshold = 64 * 1024

LargeArgumentThreshold is the byte size above which provider implementations should prefer UnmarshalArgumentsFromReader (streaming from io.Reader) over UnmarshalArguments (from []byte). §17.3: 64KB threshold.

Variables

View Source
var ErrStreamClosedWithoutTerminalEvent = errors.New("event stream closed without terminal done/error event")

Functions

func AdjustMaxTokensForThinking

func AdjustMaxTokensForThinking(baseMaxTokens, modelMaxTokens int, level ThinkingLevel, budgets *ThinkingBudgets) (maxTokens int, thinkingBudget int)

AdjustMaxTokensForThinking computes output max tokens and thinking budget.

func CalculateCost

func CalculateCost(model Model, usage *Usage)

CalculateCost computes usage cost in dollars using $/million token pricing.

func ClearAPIClients

func ClearAPIClients()

ClearAPIClients removes all registered API clients. For testing only. Do not call in production code.

func ClearEmbeddingAPIClients

func ClearEmbeddingAPIClients()

ClearEmbeddingAPIClients removes all registered embedding API clients. For testing only. Do not call in production code.

func ClearEmbeddingModels

func ClearEmbeddingModels()

ClearEmbeddingModels removes all registered embedding models.

func ClearModels

func ClearModels()

ClearModels removes all registered models.

func ClearProviderConfigs

func ClearProviderConfigs()

ClearProviderConfigs removes all provider configurations and direct API keys. For testing only. Do not call in production code. Tests should use t.Cleanup(ai.ClearProviderConfigs) for full cleanup, or defer ai.UnregisterProviderConfig(name) for individual cleanup.

func ClearSchemaCache

func ClearSchemaCache()

ClearSchemaCache removes all entries from the schema compilation cache. Intended for benchmarks and tests.

func CoerceTypes

func CoerceTypes(schema json.RawMessage, args map[string]any) map[string]any

CoerceTypes attempts to coerce argument values to match their JSON Schema types. This handles LLM output that sends "123" instead of 123 for integer fields. Modifies args in-place. Returns the (possibly modified) args map.

Coercion rules:

  • string → integer: strconv.Atoi
  • string → number: strconv.ParseFloat
  • string → boolean: strconv.ParseBool
  • string → null: "null" literal → nil
  • object: recurse into sub-schema
  • array: coerce each element according to items schema

On coercion failure, the original value is left unchanged (validation will catch it).
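The scalar rules above can be sketched directly with strconv. This is a simplified, flat-value illustration of the documented rules (no object/array recursion), not the package's CoerceTypes implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// coerceScalar applies the documented string-coercion rules to one value.
// On failure the original value is returned unchanged, mirroring the
// "validation will catch it" behavior.
func coerceScalar(schemaType string, v any) any {
	s, ok := v.(string)
	if !ok {
		return v // only strings are coerced
	}
	switch schemaType {
	case "integer":
		if n, err := strconv.Atoi(s); err == nil {
			return n
		}
	case "number":
		if f, err := strconv.ParseFloat(s, 64); err == nil {
			return f
		}
	case "boolean":
		if b, err := strconv.ParseBool(s); err == nil {
			return b
		}
	case "null":
		if s == "null" {
			return nil
		}
	}
	return v // leave for schema validation to reject
}

func main() {
	fmt.Println(coerceScalar("integer", "123"))  // 123
	fmt.Println(coerceScalar("boolean", "true")) // true
	fmt.Println(coerceScalar("integer", "abc"))  // abc (unchanged)
}
```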

func GetModelProviders

func GetModelProviders() []string

GetModelProviders returns all providers that have models.

func IsContextOverflow

func IsContextOverflow(msg *AssistantMessage, contextWindow int) bool

IsContextOverflow detects overflow errors and silent overflow conditions.

func ListProviderConfigs

func ListProviderConfigs() []string

ListProviderConfigs returns all registered provider names.

func MillisToTime

func MillisToTime(ms int64) time.Time

MillisToTime converts Unix milliseconds to time.

func ModelsEqual

func ModelsEqual(a, b *Model) bool

ModelsEqual checks equality by ID and Provider.

func RegisterAPIClient

func RegisterAPIClient(c APIClient)

RegisterAPIClient registers an API client type. Typically called once per protocol implementation at init time.

func RegisterCustomModel

func RegisterCustomModel(providerName string, modelID string, opts CustomModelOpts) error

RegisterCustomModel registers a model for a custom provider. The model's Provider and API fields are set from the provider config. PricingKnown defaults to false unless explicitly set in opts.

func RegisterCustomProvider

func RegisterCustomProvider(cfg CustomProviderConfig) error

RegisterCustomProvider registers a custom provider from runtime config. It registers the ProviderConfig in the provider registry. Returns an error if Name, APIClientType, or BaseURL is empty.

Note: API client type validation is NOT performed at registration time. This is intentional -- API clients may not be registered yet if RegisterCustomProvider is called during init(). Validation happens at Stream()/Embed() time, which already checks via GetAPIClient/GetEmbeddingAPIClient.

func RegisterEmbeddingAPIClient

func RegisterEmbeddingAPIClient(c EmbeddingAPIClient)

RegisterEmbeddingAPIClient registers an embedding API client type. Typically called once per embedding protocol implementation at init time.

func RegisterEmbeddingModel

func RegisterEmbeddingModel(m EmbeddingModel)

RegisterEmbeddingModel registers an embedding model by ID.

func RegisterModel

func RegisterModel(m Model)

RegisterModel registers a model under its provider.

func RegisterProviderConfig

func RegisterProviderConfig(cfg ProviderConfig)

RegisterProviderConfig registers a named provider configuration.

func SchemaCacheStats

func SchemaCacheStats() (hits, misses int64)

SchemaCacheStats returns current cache hit/miss counts.

func SupportsXHigh

func SupportsXHigh(model Model) bool

SupportsXHigh reports whether model supports xhigh reasoning.

func TimeToMillis

func TimeToMillis(t time.Time) int64

TimeToMillis converts time to Unix milliseconds.

func UnmarshalArguments

func UnmarshalArguments(data []byte) (map[string]any, error)

UnmarshalArguments parses JSON tool call arguments from a byte slice. Always uses json.Unmarshal since the data is already in memory. For payloads exceeding LargeArgumentThreshold, provider implementations should prefer UnmarshalArgumentsFromReader to stream directly from the response body and avoid materializing the full JSON in a []byte first.

func UnmarshalArgumentsFromReader

func UnmarshalArgumentsFromReader(r io.Reader) (map[string]any, error)

UnmarshalArgumentsFromReader parses tool call arguments from a reader using json.NewDecoder for streaming. This is the preferred path for provider implementations when ToolCall.Arguments JSON exceeds LargeArgumentThreshold (64KB), as it reads in 4KB chunks without materializing the entire JSON payload in a contiguous byte slice.

func UnregisterProviderConfig

func UnregisterProviderConfig(name string)

UnregisterProviderConfig removes a provider by name. Also removes any direct API key associated with this provider.

func ValidateToolArguments

func ValidateToolArguments(tool Tool, args map[string]any) error

ValidateToolArguments validates tool call arguments against the tool's JSON Schema. Arguments are coerced (e.g., string "123" → int 123) before validation. Returns nil if valid, or a descriptive error with paths.

func ValidateToolCall

func ValidateToolCall(tools []Tool, tc ToolCall) error

ValidateToolCall finds the tool by name and validates the tool call's arguments.

Types

type APIClient

type APIClient interface {
	// ClientType returns the API client type identifier.
	// Examples: "openai-completions", "anthropic-messages", "google-genai".
	ClientType() string

	// Stream starts a streaming LLM call.
	// The endpoint provides base URL and resolved API key.
	// The model provides model-specific config (ID, compat flags, etc.).
	Stream(ctx context.Context, endpoint ProviderEndpoint, model Model,
		llmCtx Context, opts StreamOptions) *EventStream

	// StreamSimple is the high-level API that maps ThinkingLevel to
	// provider-specific params.
	StreamSimple(ctx context.Context, endpoint ProviderEndpoint, model Model,
		llmCtx Context, opts SimpleStreamOptions) *EventStream
}

APIClient is a protocol-level implementation for a specific API format. It knows HOW to talk to an API (request format, SSE parsing, response mapping) but not WHERE or with what credentials. Those come from ProviderEndpoint, passed per-call.

Implementations are stateless and safe for concurrent use. Examples: openai-completions, anthropic-messages, google-genai.

func GetAPIClient

func GetAPIClient(clientType string) (APIClient, error)

GetAPIClient returns the API client for the given type.

type AssistantMessage

type AssistantMessage struct {
	Content      []ContentBlock // TextContent | ThinkingContent | ToolCall
	API          string
	Provider     string
	Model        string
	Usage        Usage
	StopReason   StopReason
	ErrorMessage string
	Timestamp    int64
}

AssistantMessage represents an LLM response.

func Complete

func Complete(ctx context.Context, model Model, llmCtx Context, opts StreamOptions) (AssistantMessage, error)

Complete makes a blocking LLM call.

func CompleteSimple

func CompleteSimple(ctx context.Context, model Model, llmCtx Context, opts SimpleStreamOptions) (AssistantMessage, error)

CompleteSimple makes a blocking LLM call with simplified options.

func (*AssistantMessage) GetTimestamp

func (m *AssistantMessage) GetTimestamp() int64

type AssistantMessageEvent

type AssistantMessageEvent struct {
	Type         EventType
	ContentIndex int
	Delta        string
	Content      string
	ToolCall     *ToolCall
	Partial      *AssistantMessage
	Message      *AssistantMessage
	Error        *AssistantMessage
	Reason       StopReason
}

AssistantMessageEvent is the tagged union of streaming events.

type CacheRetention

type CacheRetention string
const (
	CacheRetentionNone  CacheRetention = "none"
	CacheRetentionShort CacheRetention = "short"
	CacheRetentionLong  CacheRetention = "long"
)

type ContentBlock

type ContentBlock interface {
	// contains filtered or unexported methods
}

ContentBlock is the sealed interface for message content.

type ContentType

type ContentType string
const (
	ContentTypeText     ContentType = "text"
	ContentTypeThinking ContentType = "thinking"
	ContentTypeImage    ContentType = "image"
	ContentTypeToolCall ContentType = "toolCall"
)

type Context

type Context struct {
	SystemPrompt string
	Messages     []Message
	Tools        []Tool
}

Context is conversation state for provider calls.

type CustomModelOpts

type CustomModelOpts struct {
	Name          string
	Reasoning     bool
	Input         []string
	Cost          ModelCost
	PricingKnown  bool // false by default; set to true if pricing is known
	ContextWindow int
	MaxTokens     int
	Compat        *ModelCompat
}

CustomModelOpts provides optional fields when registering a custom model.

type CustomProviderConfig

type CustomProviderConfig struct {
	ProviderConfig

	// APIKey is a directly-provided API key. If set, takes precedence
	// over KeyEnvVars during resolution.
	APIKey string
}

CustomProviderConfig is a convenience for registering a provider at runtime for an arbitrary endpoint. Unlike catalog providers, custom providers:

  • Accept any model name (no catalog validation)
  • Have unknown pricing by default (PricingKnown=false, Cost fields zero-valued)
  • Can accept API keys directly (not just via env vars)

type EmbedFunc

type EmbedFunc func(ctx context.Context, endpoint ProviderEndpoint, model EmbeddingModel, req EmbeddingRequest) (*EmbeddingResponse, error)

EmbedFunc performs one provider-specific single-batch embedding call.

type Embedding

type Embedding struct {
	Index  int
	Values []float32
	Raw    any
}

Embedding is one embedding vector for an input index.

type EmbeddingAPIClient

type EmbeddingAPIClient interface {
	// ClientType returns the embedding API client type identifier.
	// Examples: "openai-embeddings", "google-embeddings", "cohere-embeddings".
	ClientType() string

	// Embed generates embeddings for the given texts.
	Embed(ctx context.Context, endpoint ProviderEndpoint, model EmbeddingModel,
		req EmbeddingRequest) (*EmbeddingResponse, error)
}

EmbeddingAPIClient is a protocol-level implementation for embedding APIs. Like APIClient, it is stateless — base URL and API key come from ProviderEndpoint, passed per-call.

func GetEmbeddingAPIClient

func GetEmbeddingAPIClient(clientType string) (EmbeddingAPIClient, error)

GetEmbeddingAPIClient returns the embedding API client for the given type.

type EmbeddingCost

type EmbeddingCost struct {
	PerMTok float64 `json:"perMTok"`
}

type EmbeddingEncoding

type EmbeddingEncoding string

EmbeddingEncoding specifies embedding output representation.

const (
	EmbeddingEncodingFloat   EmbeddingEncoding = "float"
	EmbeddingEncodingBase64  EmbeddingEncoding = "base64"
	EmbeddingEncodingInt8    EmbeddingEncoding = "int8"
	EmbeddingEncodingUint8   EmbeddingEncoding = "uint8"
	EmbeddingEncodingBinary  EmbeddingEncoding = "binary"
	EmbeddingEncodingUBinary EmbeddingEncoding = "ubinary"
)

type EmbeddingModel

type EmbeddingModel struct {
	ID               string              `json:"id"`
	Name             string              `json:"name"`
	API              string              `json:"api"`
	Provider         string              `json:"provider"`
	MaxInputTokens   int                 `json:"maxInputTokens"`
	DefaultDims      int                 `json:"defaultDims"`
	MaxDims          int                 `json:"maxDims,omitempty"`
	MinDims          int                 `json:"minDims,omitempty"`
	MaxBatchSize     int                 `json:"maxBatchSize"`
	SupportsDimCtrl  bool                `json:"supportsDimCtrl"`
	SupportsTaskType bool                `json:"supportsTaskType"`
	Headers          map[string][]string `json:"headers,omitempty"`
	Cost             EmbeddingCost       `json:"cost"`
	PricingKnown     bool                `json:"pricingKnown"`
}

EmbeddingModel describes one embedding model entry in catalog/registry.

func GetEmbeddingModel

func GetEmbeddingModel(id string) (EmbeddingModel, bool)

GetEmbeddingModel returns a registered embedding model by ID.

func ListEmbeddingModels

func ListEmbeddingModels() []EmbeddingModel

ListEmbeddingModels returns all registered embedding models.

func ListEmbeddingModelsByProvider

func ListEmbeddingModelsByProvider(provider string) []EmbeddingModel

ListEmbeddingModelsByProvider returns registered embedding models for provider.

type EmbeddingRequest

type EmbeddingRequest struct {
	Texts      []string
	TaskType   EmbeddingTaskType
	Dimensions int
	Encoding   EmbeddingEncoding
	OnProgress func(completed, total int)
}

EmbeddingRequest is the provider-agnostic embedding request.

type EmbeddingResponse

type EmbeddingResponse struct {
	Embeddings []Embedding
	Model      string
	Usage      EmbeddingUsage
}

EmbeddingResponse is the provider-agnostic embedding response.

func BatchEmbed

func BatchEmbed(ctx context.Context, embedFn EmbedFunc, endpoint ProviderEndpoint, model EmbeddingModel, req EmbeddingRequest) (*EmbeddingResponse, error)

BatchEmbed splits large embedding requests across provider max batch size.
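The splitting step can be sketched without the provider machinery. This is an illustrative partitioning of inputs into consecutive batches of at most the provider's MaxBatchSize; merging per-batch responses back with corrected indices is left out, and none of this is the package's BatchEmbed internals:

```go
package main

import "fmt"

// chunkTexts partitions inputs into consecutive batches of at most
// maxBatch elements, preserving order.
func chunkTexts(texts []string, maxBatch int) [][]string {
	var batches [][]string
	for len(texts) > maxBatch {
		batches = append(batches, texts[:maxBatch])
		texts = texts[maxBatch:]
	}
	if len(texts) > 0 {
		batches = append(batches, texts)
	}
	return batches
}

func main() {
	for i, b := range chunkTexts([]string{"a", "b", "c", "d", "e"}, 2) {
		fmt.Println(i, b)
	}
	// 0 [a b]
	// 1 [c d]
	// 2 [e]
}
```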

func Embed

func Embed(ctx context.Context, modelID string, req EmbeddingRequest) (*EmbeddingResponse, error)

Embed is the top-level embedding entry point. Resolution: model.Provider → ProviderConfig → EmbeddingAPIClient + ResolveEndpoint

type EmbeddingTaskType

type EmbeddingTaskType string

EmbeddingTaskType hints how embedding vectors will be used.

const (
	EmbeddingTaskQuery          EmbeddingTaskType = "query"
	EmbeddingTaskDocument       EmbeddingTaskType = "document"
	EmbeddingTaskClassification EmbeddingTaskType = "classification"
	EmbeddingTaskClustering     EmbeddingTaskType = "clustering"
	EmbeddingTaskSimilarity     EmbeddingTaskType = "similarity"
	EmbeddingTaskUnspecified    EmbeddingTaskType = ""
)

type EmbeddingUsage

type EmbeddingUsage struct {
	Tokens int
	Cost   float64
}

EmbeddingUsage is embedding usage/cost telemetry.

type EventStream

type EventStream struct {
	C <-chan AssistantMessageEvent
	// contains filtered or unexported fields
}

EventStream wraps a buffered channel for streaming AssistantMessageEvents.

func NewEventStream

func NewEventStream() *EventStream

NewEventStream creates a new EventStream with a buffered channel. In test builds, a finalizer detects leaked (unclosed) streams.

func Stream

func Stream(ctx context.Context, model Model, llmCtx Context, opts StreamOptions) *EventStream

Stream starts a streaming LLM call. Resolution: model.Provider → ProviderConfig → APIClient + ResolveEndpoint

func StreamSimple

func StreamSimple(ctx context.Context, model Model, llmCtx Context, opts SimpleStreamOptions) *EventStream

StreamSimple starts a streaming LLM call with simplified options.

func (*EventStream) Close

func (s *EventStream) Close()

Close closes the stream and injects a terminal error if none was sent.

func (*EventStream) Drain

func (s *EventStream) Drain() (AssistantMessage, error)

Drain consumes all events and returns the final result.

func (*EventStream) Result

func (s *EventStream) Result() (AssistantMessage, error)

Result blocks until the stream terminal result is available.

func (*EventStream) Send

func (s *EventStream) Send(event AssistantMessageEvent)

Send pushes an event into the stream and publishes terminal result on done/error.
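The producer/consumer shape can be sketched as a buffered channel where a terminal event carries the final result. The types and field names below are illustrative stand-ins, not the package's internals:

```go
package main

import "fmt"

// event is a hypothetical miniature of AssistantMessageEvent: deltas
// accumulate, and a terminal "done" event carries the final text.
type event struct {
	typ   string // "text_delta" or "done"
	delta string
	final string
}

// stream is a hypothetical miniature of EventStream's exported channel.
type stream struct{ C chan event }

func main() {
	s := stream{C: make(chan event, 8)}
	go func() {
		defer close(s.C) // producer closes after the terminal event
		s.C <- event{typ: "text_delta", delta: "Hel"}
		s.C <- event{typ: "text_delta", delta: "lo"}
		s.C <- event{typ: "done", final: "Hello"}
	}()

	// Drain-style consumption: accumulate deltas until the terminal event.
	var text string
	for ev := range s.C {
		switch ev.typ {
		case "text_delta":
			text += ev.delta
		case "done":
			fmt.Println(text) // prints Hello
		}
	}
}
```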

type EventType

type EventType string

EventType enumerates streaming event kinds.

const (
	EventStart         EventType = "start"
	EventTextStart     EventType = "text_start"
	EventTextDelta     EventType = "text_delta"
	EventTextEnd       EventType = "text_end"
	EventThinkingStart EventType = "thinking_start"
	EventThinkingDelta EventType = "thinking_delta"
	EventThinkingEnd   EventType = "thinking_end"
	EventToolCallStart EventType = "toolcall_start"
	EventToolCallDelta EventType = "toolcall_delta"
	EventToolCallEnd   EventType = "toolcall_end"
	EventDone          EventType = "done"
	EventError         EventType = "error"
)

type ImageContent

type ImageContent struct {
	Data     string // base64 encoded
	MimeType string // e.g. image/jpeg
}

type Message

type Message interface {
	GetTimestamp() int64
	// contains filtered or unexported methods
}

Message is the sealed interface for conversation messages.

func TransformMessages

func TransformMessages(messages []Message, targetModel Model, normalizeToolCallID ToolCallIDNormalizer) []Message

TransformMessages transforms a conversation for replay on a (possibly different) model.

type Model

type Model struct {
	ID            string              `json:"id"`
	Name          string              `json:"name"`
	API           string              `json:"api"`
	Provider      string              `json:"provider"`
	Reasoning     bool                `json:"reasoning"`
	Input         []string            `json:"input"`
	Cost          ModelCost           `json:"cost"`
	PricingKnown  bool                `json:"pricingKnown"`
	ContextWindow int                 `json:"contextWindow"`
	MaxTokens     int                 `json:"maxTokens"`
	Headers       map[string][]string `json:"headers,omitempty"`
	Compat        *ModelCompat        `json:"compat,omitempty"`
}

Model defines a provider model configuration.

func GetModel

func GetModel(provider, modelID string) (Model, error)

GetModel returns a deep copy of a registered model.

func GetModels

func GetModels(provider string) []Model

GetModels returns deep copies of all models for a provider.

type ModelCompat

type ModelCompat struct {
	SupportsStore                    *bool             `json:"supportsStore,omitempty"`
	SupportsDeveloperRole            *bool             `json:"supportsDeveloperRole,omitempty"`
	SupportsReasoningEffort          *bool             `json:"supportsReasoningEffort,omitempty"`
	ReasoningEffortMap               map[string]string `json:"reasoningEffortMap,omitempty"`
	SupportsUsageInStreaming         *bool             `json:"supportsUsageInStreaming,omitempty"`
	MaxTokensField                   string            `json:"maxTokensField,omitempty"`
	RequiresToolResultName           *bool             `json:"requiresToolResultName,omitempty"`
	RequiresAssistantAfterToolResult *bool             `json:"requiresAssistantAfterToolResult,omitempty"`
	RequiresThinkingAsText           *bool             `json:"requiresThinkingAsText,omitempty"`
	RequiresMistralToolIDs           *bool             `json:"requiresMistralToolIds,omitempty"`
	ThinkingFormat                   string            `json:"thinkingFormat,omitempty"`
	SupportsStrictMode               *bool             `json:"supportsStrictMode,omitempty"`
}

type ModelCost

type ModelCost struct {
	Input      float64 `json:"input"`
	Output     float64 `json:"output"`
	CacheRead  float64 `json:"cacheRead"`
	CacheWrite float64 `json:"cacheWrite"`
}

type ProviderConfig

type ProviderConfig struct {
	// Name is the unique provider identifier.
	// Examples: "openai", "anthropic", "openrouter".
	Name string `json:"name"`

	// APIClientType is the protocol to use for chat completions with this provider.
	// Must match a registered APIClient's ClientType() return value.
	// Examples: "openai-completions", "anthropic-messages", "google-genai".
	// Note: embedding models use a separate client type -- see EmbeddingAPIClientType.
	APIClientType string `json:"apiClientType"`

	// EmbeddingAPIClientType is the protocol to use for embeddings with this provider.
	// Must match a registered EmbeddingAPIClient's ClientType() return value.
	// Optional: only needed if this provider offers embedding models.
	EmbeddingAPIClientType string `json:"embeddingApiClientType,omitempty"`

	// BaseURL is the base URL for the API endpoint.
	// Examples: "https://api.openai.com/v1", "https://openrouter.ai/api/v1".
	BaseURL string `json:"baseUrl"`

	// KeyEnvVars is the ordered list of environment variable names to check
	// for the API key. The first non-empty value wins.
	KeyEnvVars []string `json:"keyEnvVars,omitempty"`

	// Headers are extra HTTP headers sent with every request to this provider.
	// Multi-valued headers use multiple values in the slice.
	Headers map[string][]string `json:"headers,omitempty"`

	// ProviderSpecific holds provider-level config that API client
	// implementations may need. Examples: Anthropic API version,
	// Google API version path segment.
	ProviderSpecific map[string]string `json:"providerSpecific,omitempty"`
}

ProviderConfig is the declarative configuration for a named service endpoint. It specifies which API client type to use, where to send requests, and how to authenticate. Stored in the provider registry.

func GetProviderConfig

func GetProviderConfig(name string) (ProviderConfig, error)

GetProviderConfig returns the provider configuration for the given name.

type ProviderEndpoint

type ProviderEndpoint struct {
	// ProviderName is the provider identifier (for error messages, telemetry).
	ProviderName string

	// BaseURL is the resolved base URL.
	BaseURL string

	// APIKey is the resolved API key (from env var or StreamOptions override).
	// May be empty if the provider doesn't require authentication (e.g., local).
	APIKey string

	// Headers are merged provider-level + call-level headers.
	Headers map[string][]string

	// ProviderSpecific passes through from ProviderConfig.
	ProviderSpecific map[string]string
}

ProviderEndpoint is the resolved endpoint info passed to APIClient methods. It is computed from ProviderConfig at call time, with API key resolved from environment variables or StreamOptions overrides.

func ResolveEmbeddingEndpoint added in v0.2.1

func ResolveEmbeddingEndpoint(endpoint ProviderEndpoint) ProviderEndpoint

ResolveEmbeddingEndpoint enriches an existing ProviderEndpoint with API key resolution when the endpoint's APIKey is empty. Uses the same resolution order as ResolveEndpoint: directAPIKeys → env vars. This allows embedding clients to receive a pre-built endpoint from callers (e.g., with credentials from an external secret store) while still falling back to env vars when no explicit key is provided.

func ResolveEndpoint

func ResolveEndpoint(cfg ProviderConfig, opts StreamOptions) ProviderEndpoint

ResolveEndpoint creates a ProviderEndpoint from a ProviderConfig and call-level options. API key resolution order:

  1. opts.APIKey (explicit per-call override)
  2. directAPIKeys[cfg.Name] (set by RegisterCustomProvider)
  3. First non-empty env var from cfg.KeyEnvVars
  4. Empty string (provider may not require auth)

type ProviderError

type ProviderError struct {
	Code       ProviderErrorCode
	Message    string
	StatusCode int
	Provider   string
	RetryAfter time.Duration
}

ProviderError is a typed provider error with optional transport metadata.

func (*ProviderError) Error

func (e *ProviderError) Error() string

type ProviderErrorCode

type ProviderErrorCode string
const (
	ErrContextOverflow ProviderErrorCode = "context_overflow"
	ErrRateLimit       ProviderErrorCode = "rate_limit"
	ErrAuth            ProviderErrorCode = "auth"
	ErrServerError     ProviderErrorCode = "server_error"
	ErrUnknown         ProviderErrorCode = "unknown"
)

type Role

type Role string
const (
	RoleUser       Role = "user"
	RoleAssistant  Role = "assistant"
	RoleToolResult Role = "toolResult"
)

type SimpleStreamOptions

type SimpleStreamOptions struct {
	StreamOptions
	Reasoning       ThinkingLevel
	ThinkingBudgets *ThinkingBudgets
}

SimpleStreamOptions layers high-level reasoning controls over StreamOptions.

type StopReason

type StopReason string
const (
	StopReasonStop    StopReason = "stop"
	StopReasonLength  StopReason = "length"
	StopReasonToolUse StopReason = "toolUse"
	StopReasonError   StopReason = "error"
	StopReasonAborted StopReason = "aborted"
)

type StreamOptions

type StreamOptions struct {
	Temperature     *float64
	TopP            *float64
	TopK            *int
	MaxTokens       *int
	APIKey          string
	SessionID       string
	CacheRetention  CacheRetention
	Headers         map[string][]string
	MaxRetryDelayMs int
	Metadata        map[string]any
	OnPayload       func(payload any)
}

StreamOptions controls provider streaming behavior.

func BuildBaseOptions

func BuildBaseOptions(model Model, opts *SimpleStreamOptions) StreamOptions

BuildBaseOptions converts SimpleStreamOptions into StreamOptions.

type TextContent

type TextContent struct {
	Text          string
	TextSignature string // optional, provider-specific
}

type ThinkingBudgets

type ThinkingBudgets struct {
	Minimal *int
	Low     *int
	Medium  *int
	High    *int
}

ThinkingBudgets stores per-thinking-level token budgets.

type ThinkingContent

type ThinkingContent struct {
	Thinking          string
	ThinkingSignature string // optional, provider-specific
	Redacted          bool   // true if redacted by safety filters
}

type ThinkingLevel

type ThinkingLevel string
const (
	ThinkingMinimal ThinkingLevel = "minimal"
	ThinkingLow     ThinkingLevel = "low"
	ThinkingMedium  ThinkingLevel = "medium"
	ThinkingHigh    ThinkingLevel = "high"
	ThinkingXHigh   ThinkingLevel = "xhigh"
)

func ClampReasoning

func ClampReasoning(level ThinkingLevel) ThinkingLevel

ClampReasoning reduces xhigh to high for providers that do not support xhigh.

type Tool

type Tool struct {
	Name        string
	Description string
	Parameters  json.RawMessage // JSON Schema object
}

Tool defines a callable tool schema.

type ToolCall

type ToolCall struct {
	ID               string
	Name             string
	Arguments        map[string]any
	ThoughtSignature string // optional, Google-specific
}

type ToolCallIDNormalizer

type ToolCallIDNormalizer func(id string, model Model, source *AssistantMessage) string

ToolCallIDNormalizer maps tool-call IDs when crossing provider/model boundaries.

type ToolResultMessage

type ToolResultMessage struct {
	ToolCallID string
	ToolName   string
	Content    []ContentBlock // TextContent | ImageContent
	IsError    bool
	Timestamp  int64
}

ToolResultMessage represents the result of a tool execution.

func (*ToolResultMessage) GetTimestamp

func (m *ToolResultMessage) GetTimestamp() int64

type Usage

type Usage struct {
	Input       int
	Output      int
	CacheRead   int
	CacheWrite  int
	TotalTokens int
	Cost        UsageCost
}

Usage tracks token accounting and computed cost.

type UsageCost

type UsageCost struct {
	Input      float64
	Output     float64
	CacheRead  float64
	CacheWrite float64
	Total      float64
}

UsageCost stores monetary cost components.

type UserMessage

type UserMessage struct {
	Content   []ContentBlock // TextContent | ImageContent
	Timestamp int64          // Unix milliseconds
}

UserMessage represents a user turn in the conversation.

func (*UserMessage) GetTimestamp

func (m *UserMessage) GetTimestamp() int64

Directories

Path Synopsis
provider
testutil
stubserver
Package stubserver provides a configurable HTTP test server for provider testing.
