Documentation ¶
Overview ¶
Package providers defines the LLMProvider interface and a self-registration registry. Each provider implementation registers itself via its init() function, so adding a new provider requires only:
- Create internal/providers/<name>/provider.go implementing LLMProvider.
- Call providers.Register("<name>", ...) in its init().
- Add a blank import in main.go: _ "…/providers/<name>"
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Complete ¶
func Complete(ctx context.Context, model, systemPrompt, userMsg string, maxTokens int) (string, error)
Complete performs a simple single-turn text completion using the provider registered for the given model. It requires the provider to implement Completer. Typical use: optimizer meta-agent calls, semantic health checks.
func Detect ¶
func Detect(model string) string
Detect returns the provider name for a given model ID. It inspects the model string's prefix to infer the backend:
- claude-* → anthropic
- gpt-*, o1*, o3*, o4* → openai
- gemini-* → gemini
Falls back to "openai" for unrecognised models so that any OpenAI-compatible endpoint (e.g. Ollama) works out of the box.
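The prefix rules above can be sketched as a plain switch; the function name `detect` here is illustrative (the real Detect lives in the providers package):

```go
package main

import (
	"fmt"
	"strings"
)

// detect mirrors the documented prefix rules: claude-* → anthropic,
// gpt-*/o1*/o3*/o4* → openai, gemini-* → gemini, anything else → openai.
func detect(model string) string {
	switch {
	case strings.HasPrefix(model, "claude-"):
		return "anthropic"
	case strings.HasPrefix(model, "gpt-"),
		strings.HasPrefix(model, "o1"),
		strings.HasPrefix(model, "o3"),
		strings.HasPrefix(model, "o4"):
		return "openai"
	case strings.HasPrefix(model, "gemini-"):
		return "gemini"
	default:
		// OpenAI-compatible fallback, e.g. models served via Ollama.
		return "openai"
	}
}

func main() {
	for _, m := range []string{"claude-sonnet-4-20250514", "o3-mini", "gemini-1.5-pro", "llama3"} {
		fmt.Printf("%s -> %s\n", m, detect(m))
	}
}
```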
func Register ¶
func Register(name string, factory func() LLMProvider)
Register adds a provider factory to the registry under the given name. Call this from an init() function in each provider package.
Types ¶
type Completer ¶
type Completer interface {
	// Complete performs one non-streaming Messages call.
	// model is the model ID (e.g. "claude-sonnet-4-20250514").
	// Returns the assistant's text response.
	Complete(ctx context.Context, model, systemPrompt, userMsg string, maxTokens int) (string, error)
}
Completer is an optional interface for providers that support simple single-turn completions without the full agentic tool-use loop. Use Complete() (the package-level helper) rather than calling this directly.
type LLMProvider ¶
type LLMProvider interface {
	// RunTask executes a task through the provider's agentic tool-use loop.
	// tools is the merged list of available tools (MCP + webhook + built-ins).
	// callTool dispatches a named tool invocation and returns the result.
	// chunkFn, if non-nil, is called with each text token as it is generated (streaming mode).
	// Returns the text result, token usage accumulated across all LLM calls, and any error.
	RunTask(
		ctx context.Context,
		cfg *config.Config,
		task queue.Task,
		tools []mcp.Tool,
		callTool func(context.Context, string, json.RawMessage) (string, error),
		chunkFn func(string),
	) (string, queue.TokenUsage, error)
}
LLMProvider is the interface every LLM backend must implement.
func New ¶
func New(name string) (LLMProvider, error)
New returns the LLMProvider for the given name. An empty name defaults to "anthropic".
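The Register/New pair and the init()-based self-registration from the Overview can be sketched in one self-contained file. The registry internals are not shown on this page, so the map-plus-mutex layout and the `anthropicProvider` type are assumptions; only the signatures and the empty-name default come from the documentation:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-in: the real LLMProvider also has RunTask.
type LLMProvider interface{}

var (
	mu        sync.RWMutex
	factories = map[string]func() LLMProvider{}
)

// Register adds a provider factory under the given name.
func Register(name string, factory func() LLMProvider) {
	mu.Lock()
	defer mu.Unlock()
	factories[name] = factory
}

// New returns the provider for the given name; an empty name
// defaults to "anthropic", as documented.
func New(name string) (LLMProvider, error) {
	if name == "" {
		name = "anthropic"
	}
	mu.RLock()
	factory, ok := factories[name]
	mu.RUnlock()
	if !ok {
		return nil, fmt.Errorf("unknown provider %q", name)
	}
	return factory(), nil
}

// A provider package registers itself from init(), which is why a
// blank import in main.go is all that is needed to enable it.
type anthropicProvider struct{}

func init() {
	Register("anthropic", func() LLMProvider { return anthropicProvider{} })
}

func main() {
	p, err := New("") // empty name → "anthropic"
	fmt.Printf("%T %v\n", p, err)
}
```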
Directories ¶

| Path | Synopsis |
|---|---|
| anthropic | Package anthropic implements LLMProvider for the Anthropic Claude API. |
| gemini | Package gemini implements LLMProvider for Google Gemini models. |
| mock | Package mock provides a deterministic LLMProvider for local testing. |
| openai | Package openai implements LLMProvider for OpenAI-compatible APIs. |