Documentation ¶
Overview ¶
Package ai provides a unified interface for AI provider implementations.
This package is designed to be used by:
- Eval harness (internal/eval_harness/)
- CLI --ai flag (cmd/ailang/)
- std/ai effect (internal/effects/)
Each provider (openai, gemini, anthropic) implements the Provider interface and can be wrapped with Handler for use with the effects system.
AIRoutingPolicy is an optional, additive IR attached to ai.Request that expresses constraints and preferences for dynamic model/provider selection. In v0.16.0 it is consumed only by the OpenRouter provider — all other providers must reject a non-zero policy with ErrRoutingNotSupported so callers cannot accidentally mask their intent (no silent fallbacks).
Index ¶
- Constants
- Variables
- func EnvVarForProvider(provider ProviderType) string
- func GetAPIKey(provider ProviderType) (string, error)
- func IsBuiltinName(name string) bool
- func IsRetryable(code string) bool
- func RequestsImage(req *Request) bool
- type AICapability
- type AIError
- type AIRoutingPolicy
- type Handler
- func (h *Handler) Call(input string) (string, error)
- func (h *Handler) CallImage(prompt, outputPath, options string) (string, error)
- func (h *Handler) CallImageBase64(prompt, options string) (string, error)
- func (h *Handler) CallJson(input string, schema string) (string, error)
- func (h *Handler) CallWithContext(ctx context.Context, input string) (string, error)
- func (h *Handler) GenerateWithDetails(ctx context.Context, input string) (*Response, error)
- func (h *Handler) LastRoutingMetadata() *trace.ResolvedRoute
- func (h *Handler) Model() string
- func (h *Handler) Provider() Provider
- func (h *Handler) Step(model string, messages []Message, tools []ToolSchema) (*Response, error)
- type HandlerOption
- type ImageOptions
- type Message
- type ModelConfig
- type Provider
- type ProviderError
- type ProviderRegistry
- func (r *ProviderRegistry) Diagnostics() []string
- func (r *ProviderRegistry) Lookup(name string) (Provider, bool)
- func (r *ProviderRegistry) Names() []string
- func (r *ProviderRegistry) Register(name string, p Provider, source string) error
- func (r *ProviderRegistry) Reset()
- func (r *ProviderRegistry) SourceOf(name string) string
- type ProviderType
- type Request
- type Response
- type RoutePreference
- type ToolCall
- type ToolSchema
Constants ¶
const (
	CodeAuthFailed             = "AuthFailed"
	CodeTimeout                = "Timeout"
	CodeConnectionFailed       = "ConnectionFailed"
	CodeBudgetExhausted        = "BudgetExhausted"
	CodeProviderNotFound       = "ProviderNotFound"
	CodeCapabilityNotSupported = "CapabilityNotSupported"
	CodeProtocolError          = "ProtocolError"
	CodeModelNotAllowed        = "ModelNotAllowed"

	// New for M-AI-TOOL-LOOP (non-streaming + tool-loop).
	CodeRateLimit         = "RateLimit"
	CodeContextLength     = "ContextLength"
	CodeSchemaValidation  = "SchemaValidation"
	CodeToolsNotSupported = "ToolsNotSupported"
	CodeModelNotFound     = "ModelNotFound"
	CodeInternal          = "Internal"
)
Error codes used in AIError.Code. The vocabulary is a superset of the codes already in use by the streaming surface (cmd/ailang/configdriven_streaming.go's wrapErrAsAIError) — the new entries (RateLimit, ContextLength, SchemaValidation, ToolsNotSupported, ModelNotFound, Internal) are added by M-AI-TOOL-LOOP for the non-streaming Step / callResult / callJsonResult paths.
Pre-existing codes (M-AI-CALL-STREAM-HELPER, v0.15.1):
AuthFailed, Timeout, ConnectionFailed, BudgetExhausted, ProviderNotFound, CapabilityNotSupported, ProtocolError, ModelNotAllowed
Variables ¶
var ErrRoutingNotSupported = errors.New("routing policy not supported by this provider")
ErrRoutingNotSupported is returned by providers that do not support routing when a non-zero AIRoutingPolicy is present. Callers can use errors.Is to detect this case and either remove the policy, switch to OpenRouter, or surface the error to the user.
var ErrRoutingRequiresRouteableMode = errors.New("routing requires !{AI[mode=routeable]} on the function signature")
ErrRoutingRequiresRouteableMode is returned when a routing policy is supplied for a function declared !{AI[mode=fixed]} (or bare !{AI} which desugars to fixed under M-AI-EFFECT-MODES). The fix is either to declare !{AI[mode=routeable]} on the function signature (preferred — type-level intent), or to remove the routing flags.
This is the runtime sibling to the type-level invariant unification check that rejects !{AI[mode=fixed]} unifying with !{AI[mode=routeable]}. The CLI safety-gate (cmd/ailang/routing_flags.go) is the load-bearing protection; providers may also raise this error as defence-in-depth on any path that bypasses the typechecker (manual handler construction, embedded use, etc.).
TODO(M-AI-EFFECT-MODES M3+): wire the declared mode through Request so providers can enforce this at handler.Generate time. The current AIHandler interface (string -> string) doesn't carry effect-row info; extending it requires plumbing a DeclaredAIMode field on Request and populating it from the calling function's elaborated effect row at AI op invocation. The CLI-side gate is sufficient protection for v0.15.0.
var GlobalProviderRegistry = NewProviderRegistry()
GlobalProviderRegistry is populated at process startup from installed packages' ailang.toml manifests. Tests should call Reset() in setup or construct a fresh ProviderRegistry where possible.
Functions ¶
func EnvVarForProvider ¶ added in v0.15.0
func EnvVarForProvider(provider ProviderType) string
EnvVarForProvider returns the environment variable name that holds the API key for the given provider. Returns empty string for providers that don't need an API key (Google ADC, Ollama local).
func GetAPIKey ¶
func GetAPIKey(provider ProviderType) (string, error)
GetAPIKey returns the API key for a provider from environment variables.
func IsBuiltinName ¶ added in v0.15.0
func IsBuiltinName(name string) bool
IsBuiltinName reports whether the given name is one of the hardcoded built-in provider names. Useful for cmd/ailang dispatch logic that short-circuits to built-ins before consulting the registry.
func IsRetryable ¶ added in v0.15.2
func IsRetryable(code string) bool
IsRetryable returns the canonical retryable hint for an AIError code. Single source of truth — adapters and the AILANG-side wrapErrAsAIError (cmd/ailang/configdriven_streaming.go) both call this.
The convention: transient/network/load conditions are retryable; configuration/auth/schema mismatches are not. Unknown codes default to retryable=true (conservative) so adapters that emit a new code without updating this table still surface as recoverable.
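The convention can be sketched as a switch over codes. The non-retryable membership below is illustrative — the real table in IsRetryable is the single source of truth:

```go
package main

import "fmt"

// retryable sketches the documented convention: transient/network/load
// conditions retry; configuration/auth/schema mismatches do not; unknown
// codes default to true (conservative). The exact membership here is
// illustrative, not the package's actual table.
func retryable(code string) bool {
	switch code {
	case "AuthFailed", "SchemaValidation", "ModelNotAllowed",
		"ToolsNotSupported", "CapabilityNotSupported":
		return false
	default:
		return true
	}
}

func main() {
	fmt.Println(retryable("Timeout"))        // true: transient
	fmt.Println(retryable("AuthFailed"))     // false: configuration
	fmt.Println(retryable("SomeFutureCode")) // true: unknown defaults to retryable
}
```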
func RequestsImage ¶
func RequestsImage(req *Request) bool
RequestsImage returns true if the request asks for image generation.
Types ¶
type AICapability ¶ added in v0.15.0
type AICapability string
AICapability is a required model capability.
Used in AIRoutingPolicy.Require to constrain which models the router may pick. Models that do not advertise a required capability are excluded. The string values are stable wire identifiers — keep them aligned with OpenRouter's `provider.require_parameters` vocabulary where possible.
const (
	CapStructuredOutputs AICapability = "structured_outputs"
	CapToolCalling       AICapability = "tool_calling"
	CapVision            AICapability = "vision"
	CapJSONMode          AICapability = "json_mode"
	CapStreaming         AICapability = "streaming"
)
type AIError ¶ added in v0.15.2
type AIError struct {
Code string // one of the Code* constants above
Message string // human-readable, may include provider name verbatim
Retryable bool // caller's retry hint
}
AIError is the canonical typed error returned by Provider.Step (and consumed by the new _ai_call_result / _ai_call_json_result / _ai_step builtins).
Shape mirrors std/ai/streaming.AIError exactly: { code, message, retryable }. Provider and statusCode are intentionally NOT included — they were considered and deferred when the AIError record shape was locked in v0.15.0. Add them here only if a downstream consumer (motoko_agent, eval harness) reports a concrete need. Record-shape extension is additive and safe.
func ClassifyError ¶ added in v0.15.2
ClassifyError maps a non-HTTP Go error into an AIError. Useful for transport/timeout/cancel errors that surface before any HTTP response is received.
func ClassifyHTTPError ¶ added in v0.15.2
ClassifyHTTPError maps an HTTP status + provider response body into an AIError with a normalized code. Adapters call this from their Step implementations after parsing a non-2xx response.
The body is scanned for substrings that disambiguate codes within a status class — e.g. HTTP 400 may be a context-length overflow ("context length exceeded") OR a schema validation failure ("does not match schema") OR plain bad request. The match is case-insensitive and only looks for high-confidence signals; ambiguous bodies fall through to the status-code default.
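The disambiguation can be sketched for the HTTP 400 class. The substrings and the status-class default below are illustrative signals, not the adapter's exact table:

```go
package main

import (
	"fmt"
	"strings"
)

// classify400 sketches the documented body scan for the HTTP 400 class:
// case-insensitive, high-confidence substrings only; ambiguous bodies fall
// through to an illustrative status-class default.
func classify400(body string) string {
	b := strings.ToLower(body)
	switch {
	case strings.Contains(b, "context length"):
		return "ContextLength"
	case strings.Contains(b, "does not match schema"):
		return "SchemaValidation"
	default:
		return "ProtocolError" // illustrative default for plain bad request
	}
}

func main() {
	fmt.Println(classify400(`{"error":"This model's maximum context length is 128000 tokens"}`)) // ContextLength
	fmt.Println(classify400(`{"error":"response does not match schema"}`))                       // SchemaValidation
	fmt.Println(classify400(`{"error":"missing required field"}`))                               // ProtocolError
}
```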
func NewAIError ¶ added in v0.15.2
NewAIError constructs an AIError; convenience to avoid &ai.AIError{...} at every call site.
type AIRoutingPolicy ¶ added in v0.15.0
type AIRoutingPolicy struct {
// Order is the preferred provider sequence (e.g., ["anthropic", "openai", "google"]).
// OpenRouter routes through this list in order.
Order []string
// AllowFallback enables falling through Order on failure.
AllowFallback bool
// Require lists hard-required model capabilities.
// Models that don't advertise these capabilities are excluded.
Require []AICapability
// MaxPricePerMTok caps the price per million tokens in USD.
// Empty string = no cap. Stored as string for precision.
//
// NOTE: In M2 this field is currently NOT forwarded to OpenRouter — their
// per-call max-price filter lives under `transforms` which is explicitly
// deferred per the v0.16.0 design doc. The field is preserved on the IR
// so callers can express the intent today and a follow-up milestone can
// wire it through without breaking the API.
MaxPricePerMTok string
// Prefer is the optimization target.
Prefer RoutePreference
}
AIRoutingPolicy expresses constraints and preferences for dynamic model/provider selection. Currently consumed only by OpenRouter; other providers reject any policy for which HasRouting() reports true with ErrRoutingNotSupported.
Zero value (all empty) is meaningful: "use the requested model with no fallback". A non-nil policy whose HasRouting() returns false is equivalent to no policy from the upstream's perspective.
func (*AIRoutingPolicy) HasRouting ¶ added in v0.15.0
func (p *AIRoutingPolicy) HasRouting() bool
HasRouting returns true if the policy actually requests provider routing (non-empty Order or AllowFallback). Used by non-OpenRouter providers to decide whether to reject the request.
A policy that only sets capability requirements or a sort preference but does not request a provider order or fallback is not considered "routing" for rejection purposes — non-OpenRouter providers may still satisfy those constraints natively (or ignore them) without contradicting caller intent.
func (*AIRoutingPolicy) IsZero ¶ added in v0.15.0
func (p *AIRoutingPolicy) IsZero() bool
IsZero returns true if the policy expresses no constraints. A nil pointer to AIRoutingPolicy and an IsZero policy are semantically equivalent — both mean "no routing policy".
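The HasRouting/IsZero distinction can be sketched with a stand-in policy type: a capability-only policy is constrained but is not "routing", and so is not grounds for rejection:

```go
package main

import "fmt"

// policy is a stand-in for ai.AIRoutingPolicy (illustrative subset of fields).
type policy struct {
	Order         []string
	AllowFallback bool
	Require       []string
}

// hasRouting mirrors the documented rule: only a provider Order or
// AllowFallback counts as "routing" for rejection purposes.
func (p *policy) hasRouting() bool {
	return p != nil && (len(p.Order) > 0 || p.AllowFallback)
}

// isZero mirrors IsZero: no constraints at all; nil and zero are equivalent.
func (p *policy) isZero() bool {
	return p == nil || (len(p.Order) == 0 && !p.AllowFallback && len(p.Require) == 0)
}

func main() {
	capsOnly := &policy{Require: []string{"tool_calling"}}
	fmt.Println(capsOnly.hasRouting(), capsOnly.isZero()) // false false: constrained, but not routing

	ordered := &policy{Order: []string{"anthropic", "openai"}, AllowFallback: true}
	fmt.Println(ordered.hasRouting()) // true: non-OpenRouter providers reject this
}
```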
type Handler ¶
type Handler struct {
// contains filtered or unexported fields
}
Handler wraps a Provider for use with the effects.AIHandler interface. This bridges the unified AI package with AILANG's effect system.
Thread-safety: single-threaded use within one evaluation. lastRoute is updated after every call and read immediately afterwards by the AI effect ops via LastRoutingMetadata().
func NewHandler ¶
func NewHandler(provider Provider, model string, opts ...HandlerOption) *Handler
NewHandler creates a new Handler that wraps a Provider.
The Handler implements effects.AIHandler, allowing any Provider to be used with AILANG's AI effect system.
Example:
client := anthropic.NewClient(apiKey)
handler := ai.NewHandler(client, "claude-sonnet-4-5",
ai.WithSystemPrompt("You are a helpful assistant."),
ai.WithMaxTokens(4096),
)
effCtx.AI = effects.NewAIContext(handler)
func (*Handler) Call ¶
func (h *Handler) Call(input string) (string, error)
Call implements effects.AIHandler. It sends the input to the provider and returns the generated text.
func (*Handler) CallImage ¶
func (h *Handler) CallImage(prompt, outputPath, options string) (string, error)
CallImage generates an image and writes it to outputPath. Options is a JSON string: {"aspect_ratio": "16:9", "mime_type": "image/png"}.
func (*Handler) CallImageBase64 ¶
func (h *Handler) CallImageBase64(prompt, options string) (string, error)
CallImageBase64 generates an image and returns JSON with base64 data. Returns: {"base64": "...", "mime_type": "image/png"}
func (*Handler) CallJson ¶
func (h *Handler) CallJson(input string, schema string) (string, error)
CallJson sends a request configured for JSON structured output. If schema is non-empty, providers enforce the schema on the response. If schema is empty, providers return valid JSON without schema enforcement.
Uses at least 8192 max tokens (JSON responses need more room than freeform text). Trims whitespace from the response (some providers pad output to token boundary).
func (*Handler) CallWithContext ¶
func (h *Handler) CallWithContext(ctx context.Context, input string) (string, error)
CallWithContext is like Call but accepts a context for cancellation/timeout.
func (*Handler) GenerateWithDetails ¶
func (h *Handler) GenerateWithDetails(ctx context.Context, input string) (*Response, error)
GenerateWithDetails returns the full response including token counts. This is useful for eval harness and cost tracking.
func (*Handler) LastRoutingMetadata ¶ added in v0.15.0
func (h *Handler) LastRoutingMetadata() *trace.ResolvedRoute
LastRoutingMetadata returns routing info for the most recent Call, CallJson, or related operation, or nil if the call did not engage routing.
Thread-safety: caller must invoke immediately after the matching Call/CallJson returns, before any other handler operation. Single-threaded use only — matches the AI handler contract.
func (*Handler) Step ¶ added in v0.15.2
func (h *Handler) Step(model string, messages []Message, tools []ToolSchema) (*Response, error)
Step implements effects.AIHandler.Step (M-AI-TOOL-LOOP, v0.17.0). It dispatches to the bound provider's Step method, wiring the per-call model + handler-bound system prompt + max-tokens + routing policy onto the request, and forwards Messages + Tools verbatim. Errors flow back unchanged — the effect-op layer wraps them into *AIError before returning Err(AIError record) to AILANG.
Note: the model parameter is per-call routable; the handler-bound model (h.model) is used only as a default when model == "".
type HandlerOption ¶
type HandlerOption func(*Handler)
HandlerOption configures a Handler.
func WithMaxTokens ¶
func WithMaxTokens(tokens int) HandlerOption
WithMaxTokens sets the maximum response tokens.
func WithRoutingPolicy ¶ added in v0.15.0
func WithRoutingPolicy(p *AIRoutingPolicy) HandlerOption
WithRoutingPolicy attaches an AIRoutingPolicy to every outgoing Request from this handler. The policy is consumed only by the OpenRouter provider; other providers will reject a non-zero policy with ErrRoutingNotSupported.
Pass nil (or omit the option) for no policy.
func WithSystemPrompt ¶
func WithSystemPrompt(prompt string) HandlerOption
WithSystemPrompt sets the system prompt for all requests.
type ImageOptions ¶
type ImageOptions struct {
// AspectRatio controls the image aspect ratio (e.g., "1:1", "16:9", "9:16").
AspectRatio string
// MIMEType controls the output format (e.g., "image/png", "image/jpeg").
MIMEType string
}
ImageOptions configures image generation parameters.
type Message ¶ added in v0.15.2
Message is one entry in a multi-turn AI conversation.
type Message struct {
	Role       string     // "user" | "assistant" | "tool" | "system"
	Content    string     // prose content (may be empty for assistant messages whose ToolCalls is non-empty)
	ToolCalls  []ToolCall // assistant only — tool invocations the model emitted
	ToolCallID string     // "tool" role only — references a prior ToolCall.ID
}
type ModelConfig ¶
type ModelConfig struct {
APIName string // API name to send to provider
Provider ProviderType // Provider type
EnvVar string // Environment variable for API key
}
ModelConfig contains provider-specific model configuration. This is a minimal struct for the ai package; full config is in eval_harness.
type Provider ¶
type Provider interface {
// Generate makes a single completion request.
// The context can be used for cancellation and timeouts.
Generate(ctx context.Context, req *Request) (*Response, error)
// Step is the multi-turn / tool-aware variant of Generate, added by
// M-AI-TOOL-LOOP (v0.17.0). It uses req.Messages and req.Tools, and
// populates resp.ToolCalls + resp.FinishReason. Adapters that have
// not yet implemented Step return an *AIError with Code = CodeInternal
// and a "not yet implemented" message; adapters that fundamentally
// cannot support tools (e.g. ollama) return CodeToolsNotSupported when
// len(req.Tools) > 0 and otherwise fall through to Generate.
Step(ctx context.Context, req *Request) (*Response, error)
// Name returns the provider name (e.g., "openai", "gemini", "anthropic")
Name() string
}
Provider interface for AI providers. Each provider (openai, gemini, anthropic) implements this interface.
type ProviderError ¶
type ProviderError struct {
Provider string // Provider name
StatusCode int // HTTP status code (0 if not applicable)
Message string // Error message
Err error // Underlying error (may be nil)
}
ProviderError represents an error from an AI provider.
func NewProviderError ¶
func NewProviderError(provider string, statusCode int, message string, err error) *ProviderError
NewProviderError creates a new ProviderError.
func (*ProviderError) Error ¶
func (e *ProviderError) Error() string
func (*ProviderError) Unwrap ¶
func (e *ProviderError) Unwrap() error
type ProviderRegistry ¶ added in v0.15.0
type ProviderRegistry struct {
// contains filtered or unexported fields
}
ProviderRegistry holds config-driven AI providers registered at runtime from [[ai_provider]] blocks in installed packages' ailang.toml manifests. See design_docs/planned/v0_16_0/m-ai-provider-config.md.
Built-in providers (openai, anthropic, gemini, ollama, openrouter) are NOT stored here — they remain hardcoded in cmd/ailang/{exec,ai_handlers}.go for the features they need (tool use, image input, OpenRouter routing) that the v1 [[ai_provider]] schema doesn't yet cover.
Resolution order at dispatch time: built-in providers first, registry second. On name collision the built-in wins (with warning) — see D4 in the master sequence doc.
func NewProviderRegistry ¶ added in v0.15.0
func NewProviderRegistry() *ProviderRegistry
NewProviderRegistry returns an empty registry. Most callers want GlobalProviderRegistry instead — created at process start, populated once after package resolution.
func (*ProviderRegistry) Diagnostics ¶ added in v0.15.0
func (r *ProviderRegistry) Diagnostics() []string
Diagnostics returns informational warnings about the current registry state — currently: registered names that shadow built-in provider names. Returns nil if no diagnostics. Caller decides whether to log/print/error.
func (*ProviderRegistry) Lookup ¶ added in v0.15.0
func (r *ProviderRegistry) Lookup(name string) (Provider, bool)
Lookup returns the registered provider for the given name, or (nil, false) if not registered. Built-in providers are NOT stored here; dispatch must consult built-ins first.
func (*ProviderRegistry) Names ¶ added in v0.15.0
func (r *ProviderRegistry) Names() []string
Names returns the registered provider names in deterministic (alphabetical) order. Used for diagnostics, dispatch fallback ordering, and CLI listing.
func (*ProviderRegistry) Register ¶ added in v0.15.0
func (r *ProviderRegistry) Register(name string, p Provider, source string) error
Register adds a provider to the registry. It returns a structured error if the same name is already registered (a cross-package conflict per D11 in the master sequence doc). The error message names both source manifests so the user can resolve the conflict by removing one package or aliasing.
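The conflict semantics can be sketched with a stand-in registry; per the doc above, the duplicate-name error names both manifests:

```go
package main

import "fmt"

// registry is a stand-in for ProviderRegistry, tracking name -> source manifest.
type registry struct{ sources map[string]string }

// register sketches the documented Register semantics: first registration
// wins; a duplicate name errors, naming both source manifests.
func (r *registry) register(name, source string) error {
	if prev, ok := r.sources[name]; ok {
		return fmt.Errorf("ai provider %q declared by both %s and %s", name, prev, source)
	}
	r.sources[name] = source
	return nil
}

func main() {
	r := &registry{sources: map[string]string{}}
	fmt.Println(r.register("mistral", "pkg_a/ailang.toml")) // <nil>
	fmt.Println(r.register("mistral", "pkg_b/ailang.toml")) // error naming both manifests
}
```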
func (*ProviderRegistry) Reset ¶ added in v0.15.0
func (r *ProviderRegistry) Reset()
Reset clears the registry. Tests should call this in setup to avoid pollution between tests that exercise the global instance.
func (*ProviderRegistry) SourceOf ¶ added in v0.15.0
func (r *ProviderRegistry) SourceOf(name string) string
SourceOf returns the manifest path that declared the given provider, or "" if not registered. Used for error messages.
type ProviderType ¶
type ProviderType string
ProviderType represents an AI provider.
const (
	ProviderOpenAI     ProviderType = "openai"
	ProviderAnthropic  ProviderType = "anthropic"
	ProviderGoogle     ProviderType = "google"
	ProviderOllama     ProviderType = "ollama"
	ProviderOpenRouter ProviderType = "openrouter"
)
func GuessProvider ¶
func GuessProvider(modelName string) ProviderType
GuessProvider attempts to determine the provider from a model name. This is used when models.yml is not available.
OpenRouter detection precedence: checked BEFORE the bare-prefix checks so that vendor/model strings like "anthropic/claude-sonnet-4.5" route to OpenRouter rather than the direct Anthropic API. An explicit "openrouter:" prefix is also accepted (and stripped at handler-construction time).
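The precedence can be sketched as follows. The prefixes and the fall-through default below are illustrative; the real mapping lives in GuessProvider:

```go
package main

import (
	"fmt"
	"strings"
)

// guess sketches the documented precedence: an explicit "openrouter:" prefix
// or a vendor/model slash form routes to OpenRouter BEFORE the bare-prefix
// checks. The prefix table and default are illustrative.
func guess(model string) string {
	if strings.HasPrefix(model, "openrouter:") || strings.Contains(model, "/") {
		return "openrouter"
	}
	switch {
	case strings.HasPrefix(model, "claude"):
		return "anthropic"
	case strings.HasPrefix(model, "gpt"):
		return "openai"
	case strings.HasPrefix(model, "gemini"):
		return "google"
	}
	return "openai" // illustrative fall-through default
}

func main() {
	fmt.Println(guess("anthropic/claude-sonnet-4.5")) // openrouter, not direct Anthropic
	fmt.Println(guess("claude-sonnet-4-5"))           // anthropic
	fmt.Println(guess("openrouter:gpt-5"))            // openrouter (explicit prefix)
}
```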
func ProviderFromString ¶
func ProviderFromString(s string) ProviderType
ProviderFromString converts a string to ProviderType.
func (ProviderType) String ¶
func (p ProviderType) String() string
String returns the string representation of a ProviderType.
type Request ¶
type Request struct {
// Model is the model name (e.g., "gemini-2.5-flash", "gpt-5", "claude-sonnet-4-5")
Model string
// SystemPrompt contains system/developer instructions
SystemPrompt string
// UserPrompt contains the user message
UserPrompt string
// MaxTokens is the maximum number of response tokens (0 = provider default)
MaxTokens int
// Temperature controls randomness (0.0-2.0, 0 = provider default)
Temperature float64
// ResponseFormat controls structured output. Values: "json" or "" (text).
// When set to "json", providers configure their native structured output.
ResponseFormat string
// ResponseSchema is an optional JSON Schema string for structured output.
// When provided with ResponseFormat="json", providers enforce this schema.
// When empty with ResponseFormat="json", providers return valid JSON without schema.
ResponseSchema string
// ResponseModalities controls output types. Values: ["TEXT"], ["IMAGE"], ["TEXT", "IMAGE"].
// When set to ["IMAGE"], providers that support image generation will return image data.
ResponseModalities []string
// ImageOptions configures image generation parameters (used with ResponseModalities containing "IMAGE").
ImageOptions *ImageOptions
// Options contains provider-specific options
Options map[string]any
// Routing is an optional dynamic-routing policy. When set with HasRouting()
// returning true, only providers that support routing (currently: openrouter)
// will accept the request. Other providers return ErrRoutingNotSupported.
Routing *AIRoutingPolicy
// Messages, when non-empty, supersedes SystemPrompt + UserPrompt.
// Used for multi-turn conversations and Provider.Step tool dispatch.
// Adapters that see a non-empty Messages MUST use it; legacy single-shot
// callers that only set SystemPrompt + UserPrompt continue to work
// unchanged on the Generate path.
Messages []Message
// Tools advertises tool schemas the model may call. Empty = no tools.
// Only consulted by Provider.Step. Providers that do not support tools
// (currently: ollama) return AIError{Code: CodeToolsNotSupported,
// Retryable: false} when len(Tools) > 0.
Tools []ToolSchema
}
Request represents a generic AI request.
Most fields map directly onto provider-native parameters. The optional Routing field carries an AIRoutingPolicy for dynamic provider selection — currently consumed only by OpenRouter; other providers reject a non-zero policy with ErrRoutingNotSupported.
type Response ¶
type Response struct {
// Text is the generated text content
Text string
// ImageData contains raw image bytes (PNG/JPEG) when the response includes an image.
// Nil for text-only responses.
ImageData []byte
// ImageMIME is the MIME type of ImageData (e.g., "image/png").
// Empty for text-only responses.
ImageMIME string
// InputTokens is the number of prompt/input tokens
InputTokens int
// OutputTokens is the number of completion/output tokens
OutputTokens int
// TotalTokens is InputTokens + OutputTokens
TotalTokens int
// ReasonTokens is the number of reasoning tokens (for o1/codex models)
ReasonTokens int
// CachedTokens is the number of cached input tokens (provider-specific).
// OpenRouter reports this in usage.prompt_tokens_details.cached_tokens.
// Other providers may leave this as 0.
CachedTokens int
// CostUSD is the inference cost in USD as reported by the provider.
// Stored as a string to preserve provider-reported precision.
// OpenRouter reports this in usage.cost. Empty string if not reported.
CostUSD string
// Model is the model that was actually used (may differ from request)
Model string
// RequestedModel is the model the caller asked for. May differ from
// Model (which is what the provider actually used) when routing is in
// effect. OpenRouter is the only provider where these can differ today;
// direct providers leave this empty (in which case Model is the
// requested-and-used model).
RequestedModel string
// ResolvedProvider is the underlying provider that ultimately served
// the request (e.g., "anthropic" when OpenRouter routed to
// claude-sonnet-4.5). Empty when not applicable or not reported.
ResolvedProvider string
// FallbackChain lists the models tried in order before settling on
// Model. Empty for direct providers; usually [Model] for successful
// first-try OpenRouter calls. Reserved for future use when richer
// fallback signals become available.
FallbackChain []string
// ToolCalls, when non-empty, indicates the model wants the host to
// dispatch these tools and feed results back via a follow-up Step call.
// Only populated by Provider.Step responses; Generate leaves this nil.
ToolCalls []ToolCall
// FinishReason is the normalized stop reason. One of:
// "stop" — natural end of model turn (no tool calls)
// "tool_calls" — model emitted tool calls; loop driver should dispatch
// "length" — truncated by max_tokens
// "error" — provider-side error after partial response
// Empty on legacy Generate responses (back-compat).
FinishReason string
}
Response represents a generic AI response.
type RoutePreference ¶ added in v0.15.0
type RoutePreference string
RoutePreference is the optimization target for routing.
PreferUnspecified means "no opinion" — let the provider use its default routing behaviour. The other constants map onto OpenRouter's `provider.sort` field (see openrouter/routing.go for the translation).
const (
	PreferUnspecified  RoutePreference = ""
	PreferCheapest     RoutePreference = "cheapest"
	PreferFastest      RoutePreference = "fastest"
	PreferMostReliable RoutePreference = "most_reliable"
)
type ToolCall ¶ added in v0.15.2
type ToolCall struct {
ID string // provider-assigned (Anthropic, OpenAI) or adapter-generated (Gemini)
Name string // tool name from the advertised ToolSchema
Arguments string // JSON-encoded; caller decodes per the advertised parameters schema
}
ToolCall is a single tool invocation emitted by the model in a Step response. The host dispatches it and feeds the result back as a Message with Role="tool" and ToolCallID matching this ID.
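The dispatch-and-feed-back loop the host runs around Provider.Step can be sketched with stand-in types and a fake two-turn provider:

```go
package main

import "fmt"

// Stand-ins for ai.ToolCall / ai.Message / ai.Response (illustrative shapes).
type toolCall struct{ ID, Name, Arguments string }
type message struct {
	Role, Content, ToolCallID string
	ToolCalls                 []toolCall
}
type response struct {
	Text         string
	ToolCalls    []toolCall
	FinishReason string
}

// step is a fake provider: it asks for one tool on the first turn, then
// stops once it sees a tool result in the conversation.
func step(msgs []message) response {
	for _, m := range msgs {
		if m.Role == "tool" {
			return response{Text: "done", FinishReason: "stop"}
		}
	}
	return response{
		ToolCalls:    []toolCall{{ID: "tc_1", Name: "get_time", Arguments: "{}"}},
		FinishReason: "tool_calls",
	}
}

func main() {
	msgs := []message{{Role: "user", Content: "what time is it?"}}
	for {
		resp := step(msgs)
		if resp.FinishReason != "tool_calls" {
			fmt.Println(resp.Text) // done
			return
		}
		// Dispatch each tool call and feed the result back as a Role="tool"
		// message whose ToolCallID matches the emitted ToolCall.ID.
		for _, tc := range resp.ToolCalls {
			msgs = append(msgs, message{Role: "tool", ToolCallID: tc.ID, Content: "12:00"})
		}
	}
}
```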
type ToolSchema ¶ added in v0.15.2
type ToolSchema struct {
Name string
Description string
Parameters string // JSON Schema as a string
}
ToolSchema is a JSON-Schema-described tool the model may call. Parameters is the raw JSON Schema string (same shape as Request.ResponseSchema).
Directories ¶
| Path | Synopsis |
|---|---|
| anthropic | Package anthropic provides an Anthropic Claude API client implementing the ai.Provider interface. |
| configdriven | Package configdriven implements a generic AI provider whose behaviour is driven by an [[ai_provider]] block in a package's ailang.toml manifest. |
| gemini | Package gemini provides a Google Gemini API client implementing the ai.Provider interface. |
| ollama | Package ollama provides an Ollama API client implementing the ai.Provider interface. |
| openai | Package openai provides an OpenAI API client implementing the ai.Provider interface. |
| openrouter | Package openrouter provides an OpenRouter API client implementing the ai.Provider interface. |