configdriven

package
v0.16.1
Published: May 6, 2026 License: Apache-2.0 Imports: 18 Imported by: 0

Documentation

Overview

Package configdriven implements a generic AI provider whose behaviour is driven by an [[ai_provider]] block in a package's ailang.toml manifest. See design_docs/planned/v0_15_0/m-ai-provider-config.md.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type HeaderPair

type HeaderPair struct {
	Name  string
	Value string
}

HeaderPair is one (name, value) entry for the StreamSSEPost headers list. Mirrors the AILANG-side `{name, value}` record shape that StreamSSEPost expects in its config arg.
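A minimal sketch of assembling such a headers list. The `HeaderPair` type here is a local copy of the struct documented above, and the specific header names and token are illustrative, not taken from the package:

```go
package main

import "fmt"

// HeaderPair mirrors the documented struct: one (name, value) header entry.
type HeaderPair struct {
	Name  string
	Value string
}

// defaultHeaders builds a typical resolved header list for an SSE POST:
// an auth header plus the JSON content type. Hypothetical helper.
func defaultHeaders(token string) []HeaderPair {
	return []HeaderPair{
		{Name: "Authorization", Value: "Bearer " + token},
		{Name: "Content-Type", Value: "application/json"},
	}
}

func main() {
	for _, h := range defaultHeaders("sk-example") {
		fmt.Printf("%s: %s\n", h.Name, h.Value)
	}
}
```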

type Option

type Option func(*Provider)

Option configures a Provider at construction time.

func WithHTTPClient

func WithHTTPClient(c *http.Client) Option

WithHTTPClient overrides the default HTTP client. Used by tests with httptest.Server-backed clients.

type Provider

type Provider struct {
	// contains filtered or unexported fields
}

Provider is the generic ai.Provider implementation driven by an [[ai_provider]] block in a package's ailang.toml manifest.

One Provider instance corresponds to one [[ai_provider]] block. The runtime registers many of these — one per declared block across all installed packages — alongside the hardcoded built-in providers (openai, anthropic, gemini, ollama, openrouter).

Per the [[ai_provider]] design (see design_docs/planned/v0_15_0/m-ai-provider-config.md):

  • AIRoutingPolicy on the request is rejected (D11; non-OpenRouter providers must reject non-zero policies — silent routing fallback is forbidden)
  • Image generation requests are rejected unless capabilities.vision is true
  • Capability-mismatched requests fail fast with CapabilityNotSupported
  • Trace spans match the structure of built-in provider spans (so replay/observability tooling treats config-driven and built-in providers identically)

func New

func New(spec *pkg.AIProviderSpec, opts ...Option) *Provider

New creates a Provider for the given [[ai_provider]] spec. The spec is retained by reference; callers must not mutate it after construction.

func (*Provider) Generate

func (p *Provider) Generate(ctx context.Context, req *ai.Request) (*ai.Response, error)

Generate implements ai.Provider.

func (*Provider) Name

func (p *Provider) Name() string

Name implements ai.Provider — returns the provider name from the config (e.g. "vllm" for an [[ai_provider]] with name = "vllm").

func (*Provider) Spec

func (p *Provider) Spec() *pkg.AIProviderSpec

Spec returns the provider config that drives this Provider's behaviour. Consumed by the streaming dispatch layer (internal/effects/ai_streaming.go) which needs the streaming sub-block, request shape, and auth fields to construct an SSE-POST request. Returned by reference; callers must NOT mutate it.

func (*Provider) Step added in v0.15.2

func (p *Provider) Step(ctx context.Context, req *ai.Request) (*ai.Response, error)

Step is the multi-turn / tool-aware completion entry point introduced by M-AI-TOOL-LOOP (v0.17.0). Config-driven providers describe their HTTP shape via [[ai_provider]] TOML blocks (M-AI-PROVIDER-CONFIG, v0.15.0); adding tool-loop support here means extending the spec schema with a `supports_tools` bit and an optional translation block. That is out of scope for the initial M-AI-TOOL-LOOP sprint — config-driven providers stay stubbed until a real consumer needs them.

Until then, any caller invoking Step on a config-driven provider gets a typed error pointing at the design doc. Callers can fall back to Generate for non-tool single-shot calls.
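The fallback pattern described above, sketched with stand-in types (the method signatures and the sentinel error name are assumptions; the real types live in the `ai` package):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for ai.Response; hypothetical shape.
type Response struct{ Text string }

// Hypothetical sentinel mirroring the typed "not supported" error.
var ErrStepNotSupported = errors.New("Step not supported for config-driven providers")

type provider struct{}

func (p *provider) Step() (*Response, error) { return nil, ErrStepNotSupported }

func (p *provider) Generate() (*Response, error) { return &Response{Text: "single-shot"}, nil }

// complete tries the tool-aware Step first, then falls back to Generate
// when the provider returns the typed "not supported" error.
func complete(p *provider) (*Response, error) {
	resp, err := p.Step()
	if errors.Is(err, ErrStepNotSupported) {
		return p.Generate()
	}
	return resp, err
}

func main() {
	resp, err := complete(&provider{})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Text) // single-shot
}
```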

type StreamRequest

type StreamRequest struct {
	URL     string
	Body    string
	Headers []HeaderPair
}

StreamRequest is the prepared input to effects.StreamSSEPost: the URL to POST to, the JSON body, and the resolved auth + content headers.

func BuildStreamRequest

func BuildStreamRequest(spec *pkg.AIProviderSpec, model string, messagesJSON string) (*StreamRequest, *ai.ProviderError)

BuildStreamRequest prepares an SSE-POST request from an [[ai_provider]] spec plus a model + serialised messages. Resolves env-var references at call time so credentials don't leak into start-time globals. Returns an AI-shaped error on capability mismatch or auth failure so the caller can short-circuit without hitting the network.

Callers (effects/ai_streaming.go): do NOT bypass this — Stream-effect-only code paths break the AI cap (D1) and budget tracking (D11). All AI streaming dispatch goes through here so the budget/cap/span machinery applies uniformly.
