Documentation
Overview ¶
Package provider defines the unified interface for LLM CLI providers.
This package enables seamless switching between different AI coding CLI tools (Claude Code, Gemini CLI, Codex CLI, OpenCode, local models) while maintaining a consistent API. All providers support MCP (Model Context Protocol) as the universal extension mechanism for custom tools.
Usage ¶
Create a client using the registry:
    client, err := provider.New("claude", provider.Config{
        Model:   "claude-sonnet-4-20250514",
        WorkDir: "/path/to/project",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Check capabilities before using provider-specific features
    caps := client.Capabilities()
    if caps.HasTool("Glob") {
        // Use native glob search
    } else if caps.MCP {
        // Fall back to MCP filesystem server
    }
Available Providers ¶
- "claude": Claude Code CLI (Anthropic)
- "gemini": Gemini CLI (Google)
- "codex": Codex CLI (OpenAI)
- "opencode": OpenCode CLI (Open Source)
- "local": Local model sidecar (Ollama, llama.cpp, vLLM)
Tool Support ¶
Each provider has different native tools. Use Capabilities() to discover what's available, and MCP for cross-provider custom tools. See the Capabilities struct documentation for details on each provider's native capabilities.
Index ¶
- Variables
- func Available() []string
- func ClearRegistry()
- func IsAuthError(err error) bool
- func IsCapabilityError(err error) bool
- func IsRegistered(name string) bool
- func IsRetryable(err error) bool
- func Register(name string, factory Factory)
- func Unregister(name string)
- type Capabilities
- type Client
- type Config
- func (c Config) GetBoolOption(key string, defaultVal bool) bool
- func (c Config) GetIntOption(key string, defaultVal int) int
- func (c Config) GetOption(key string) any
- func (c Config) GetStringOption(key, defaultVal string) string
- func (c Config) GetStringSliceOption(key string) []string
- func (c *Config) LoadFromEnv()
- func (c *Config) Validate() error
- func (c Config) WithMCP(mcp *claudeconfig.MCPConfig) Config
- func (c Config) WithModel(model string) Config
- func (c Config) WithOption(key string, value any) Config
- func (c Config) WithProvider(provider string) Config
- func (c Config) WithWorkDir(dir string) Config
- type ContentPart
- type Error
- type Factory
- type Message
- type Request
- type Response
- type Role
- type StreamChunk
- type TokenUsage
- type Tool
- type ToolCall
Constants ¶
This section is empty.
Variables ¶
var (
    // ClaudeCapabilities describes Claude Code CLI's native capabilities.
    ClaudeCapabilities = Capabilities{
        Streaming:   true,
        Tools:       true,
        MCP:         true,
        Sessions:    true,
        Images:      true,
        NativeTools: []string{"Read", "Write", "Edit", "Glob", "Grep", "Bash", "Task", "TodoWrite", "WebFetch", "WebSearch", "AskUserQuestion", "NotebookEdit", "LSP", "Skill", "EnterPlanMode", "ExitPlanMode", "KillShell", "TaskOutput"},
        ContextFile: "CLAUDE.md",
    }

    // GeminiCapabilities describes Gemini CLI's native capabilities.
    GeminiCapabilities = Capabilities{
        Streaming:   true,
        Tools:       true,
        MCP:         true,
        Sessions:    false,
        Images:      true,
        NativeTools: []string{"read_file", "write_file", "run_shell_command", "web_fetch", "google_web_search", "save_memory", "write_todos"},
        ContextFile: "GEMINI.md",
    }

    // CodexCapabilities describes Codex CLI's native capabilities.
    CodexCapabilities = Capabilities{
        Streaming:   true,
        Tools:       true,
        MCP:         true,
        Sessions:    true,
        Images:      true,
        NativeTools: []string{"file_read", "file_write", "shell", "web_search"},
        ContextFile: "",
    }

    // OpenCodeCapabilities describes OpenCode CLI's native capabilities.
    OpenCodeCapabilities = Capabilities{
        Streaming:   true,
        Tools:       true,
        MCP:         true,
        Sessions:    false,
        Images:      false,
        NativeTools: []string{"write", "edit", "bash", "WebFetch", "Task"},
        ContextFile: "",
    }

    // LocalCapabilities describes local model sidecar capabilities.
    // Local models have no native tools; all tools must be provided via MCP.
    LocalCapabilities = Capabilities{
        Streaming:   true,
        Tools:       false,
        MCP:         true,
        Sessions:    false,
        Images:      false,
        NativeTools: nil,
        ContextFile: "",
    }
)
Pre-defined capability sets for known providers. These are exported for documentation purposes; actual capabilities are returned by each provider's Capabilities() method.
var (
    // ErrUnknownProvider indicates the requested provider is not registered.
    ErrUnknownProvider = errors.New("unknown provider")

    // ErrUnavailable indicates the LLM service is unavailable.
    ErrUnavailable = errors.New("LLM service unavailable")

    // ErrContextTooLong indicates the input exceeds the context window.
    ErrContextTooLong = errors.New("context exceeds maximum length")

    // ErrRateLimited indicates the request was rate limited.
    ErrRateLimited = errors.New("rate limited")

    // ErrInvalidRequest indicates the request is malformed.
    ErrInvalidRequest = errors.New("invalid request")

    // ErrTimeout indicates the request timed out.
    ErrTimeout = errors.New("request timed out")

    // ErrCLINotFound indicates the CLI binary was not found in PATH.
    ErrCLINotFound = errors.New("CLI binary not found")

    // ErrCredentialsNotFound indicates credentials are missing.
    ErrCredentialsNotFound = errors.New("credentials not found")

    // ErrCredentialsExpired indicates credentials have expired.
    ErrCredentialsExpired = errors.New("credentials expired")

    // ErrCapabilityNotSupported indicates the provider doesn't support the requested capability.
    ErrCapabilityNotSupported = errors.New("capability not supported by provider")
)
Sentinel errors for provider operations.
Functions ¶
func Available ¶
func Available() []string
Available returns the names of all registered providers. The list is sorted alphabetically for consistent ordering.
func ClearRegistry ¶
func ClearRegistry()
ClearRegistry removes all registered providers. This is primarily useful for testing.
func IsAuthError ¶
func IsAuthError(err error) bool
IsAuthError checks if an error is authentication-related.
func IsCapabilityError ¶
func IsCapabilityError(err error) bool
IsCapabilityError checks if an error is due to missing provider capability.
func IsRegistered ¶
func IsRegistered(name string) bool
IsRegistered checks if a provider is registered.
func IsRetryable ¶
func IsRetryable(err error) bool
IsRetryable checks if an error is likely transient and worth retrying.
func Register ¶
func Register(name string, factory Factory)
Register adds a provider factory to the registry. Providers should call this in their init() function. Panics if a provider with the same name is already registered.
Example:
    func init() {
        provider.Register("claude", func(cfg provider.Config) (provider.Client, error) {
            return NewClaudeCLI(cfg)
        })
    }
func Unregister ¶
func Unregister(name string)
Unregister removes a provider from the registry. This is primarily useful for testing.
Types ¶
type Capabilities ¶
type Capabilities struct {
    // Streaming indicates if the provider supports streaming responses.
    Streaming bool `json:"streaming"`

    // Tools indicates if the provider supports tool/function calling.
    Tools bool `json:"tools"`

    // MCP indicates if the provider supports MCP (Model Context Protocol) servers.
    // If true, custom tools can be added via MCP regardless of native tool support.
    MCP bool `json:"mcp"`

    // Sessions indicates if the provider supports multi-turn conversation sessions.
    Sessions bool `json:"sessions"`

    // Images indicates if the provider supports image inputs.
    Images bool `json:"images"`

    // NativeTools lists the provider's built-in tools by name.
    // Tool names are provider-specific (e.g., "Read" for Claude, "read_file" for Gemini).
    NativeTools []string `json:"native_tools"`

    // ContextFile is the filename for project-specific context (e.g., "CLAUDE.md", "GEMINI.md").
    // Empty string if the provider doesn't support context files.
    ContextFile string `json:"context_file,omitempty"`
}
Capabilities describes what a provider natively supports. Use this to make informed decisions about feature availability before attempting operations that may not be supported.
func (Capabilities) HasTool ¶
func (c Capabilities) HasTool(name string) bool
HasTool checks if a native tool is available by name. Tool names are case-sensitive and provider-specific.
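Since HasTool is a case-sensitive membership test over NativeTools, its behavior can be sketched with a trimmed-down local Capabilities type:

```go
package main

import "fmt"

// Capabilities is trimmed to the field HasTool consults.
type Capabilities struct {
    NativeTools []string
}

// HasTool reports whether name appears in NativeTools.
// Matching is exact: "Glob" and "glob" are different names.
func (c Capabilities) HasTool(name string) bool {
    for _, t := range c.NativeTools {
        if t == name {
            return true
        }
    }
    return false
}

func main() {
    caps := Capabilities{NativeTools: []string{"Read", "Write", "Glob"}}
    fmt.Println(caps.HasTool("Glob")) // true
    fmt.Println(caps.HasTool("glob")) // false: case-sensitive
}
```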
type Client ¶
type Client interface {
    // Complete sends a request and returns the full response.
    // The context controls cancellation and timeouts.
    Complete(ctx context.Context, req Request) (*Response, error)

    // Stream sends a request and returns a channel of response chunks.
    // The channel is closed when streaming completes (check chunk.Done).
    // Errors during streaming are returned via chunk.Error.
    Stream(ctx context.Context, req Request) (<-chan StreamChunk, error)

    // Provider returns the provider name (e.g., "claude", "gemini", "codex").
    Provider() string

    // Capabilities returns what this provider natively supports.
    // Use this to check for specific tools before attempting to use them.
    Capabilities() Capabilities

    // Close releases any resources held by the client.
    // For CLI-based providers, this may terminate any running sessions.
    Close() error
}
Client is the unified interface for LLM CLI providers. Implementations must be safe for concurrent use.
func MustNew ¶
MustNew creates a new Client, panicking on error. Use only when provider availability is guaranteed (e.g., in tests).
type Config ¶
type Config struct {
    // Provider is the name of the provider to use.
    // Required. Values: "claude", "gemini", "codex", "opencode", "local"
    Provider string `json:"provider" yaml:"provider" mapstructure:"provider"`

    // Model is the model to use (provider-specific name).
    // Examples: "claude-sonnet-4-20250514", "gemini-2.5-pro", "gpt-5-codex"
    Model string `json:"model" yaml:"model" mapstructure:"model"`

    // FallbackModel is used when the primary model is unavailable.
    // Optional.
    FallbackModel string `json:"fallback_model" yaml:"fallback_model" mapstructure:"fallback_model"`

    // SystemPrompt is the system message prepended to all requests.
    // Optional.
    SystemPrompt string `json:"system_prompt" yaml:"system_prompt" mapstructure:"system_prompt"`

    // MaxTurns limits conversation turns (tool calls + responses).
    // 0 means no limit. Default varies by provider.
    MaxTurns int `json:"max_turns" yaml:"max_turns" mapstructure:"max_turns"`

    // Timeout is the maximum duration for a completion request.
    // 0 uses the provider default.
    Timeout time.Duration `json:"timeout" yaml:"timeout" mapstructure:"timeout"`

    // MaxBudgetUSD limits spending per request.
    // 0 means no limit. Not all providers support this.
    MaxBudgetUSD float64 `json:"max_budget_usd" yaml:"max_budget_usd" mapstructure:"max_budget_usd"`

    // WorkDir is the working directory for file operations.
    // Default: current directory.
    WorkDir string `json:"work_dir" yaml:"work_dir" mapstructure:"work_dir"`

    // AllowedTools limits which tools the model can use.
    // Empty means all tools allowed. Tool names are provider-specific.
    AllowedTools []string `json:"allowed_tools" yaml:"allowed_tools" mapstructure:"allowed_tools"`

    // DisallowedTools explicitly blocks certain tools.
    // Takes precedence over AllowedTools.
    DisallowedTools []string `json:"disallowed_tools" yaml:"disallowed_tools" mapstructure:"disallowed_tools"`

    // MCP configures MCP servers to enable.
    // MCP is the universal tool extension mechanism supported by all providers.
    MCP *claudeconfig.MCPConfig `json:"mcp" yaml:"mcp" mapstructure:"mcp"`

    // Env provides additional environment variables for CLI execution.
    Env map[string]string `json:"env" yaml:"env" mapstructure:"env"`

    // Options holds provider-specific configuration.
    // See each provider's documentation for available options.
    //
    // Common options by provider:
    //
    // Claude:
    //   - "permission_mode": "acceptEdits" | "bypassPermissions"
    //   - "skip_permissions": bool (dangerously skip all permission prompts)
    //   - "session_id": string
    //   - "home_dir": string (for containers)
    //   - "config_dir": string
    //   - "output_format": "json" | "stream-json" | "text"
    //   - "claude_path": string (path to claude binary)
    //   - "continue": bool (continue most recent session)
    //   - "resume": string (resume specific session ID)
    //   - "no_session_persistence": bool
    //   - "json_schema": string (JSON schema for structured output)
    //   - "tools": []string (exact tool set, different from allowed_tools)
    //   - "setting_sources": []string (e.g., ["project", "local", "user"])
    //   - "add_dirs": []string (additional directories for file access)
    //   - "append_system_prompt": string (append to system prompt)
    //   - "mcp_config": string (MCP config file path or JSON)
    //   - "mcp_config_paths": []string (multiple MCP config paths)
    //   - "strict_mcp_config": bool
    //
    // Gemini:
    //   - "include_directories": []string
    //   - "yolo": bool (auto-approve all actions)
    //   - "sandbox": string (docker/podman/custom, via GEMINI_SANDBOX env)
    //
    // Codex:
    //   - "sandbox": "read-only" | "workspace-write" | "danger-full-access"
    //   - "ask_for_approval": "untrusted" | "on-failure" | "on-request" | "never"
    //   - "search": bool
    //   - "web_search": "cached" | "live" | "disabled"
    //   - "full_auto": bool
    //   - "yolo": bool (dangerously bypass approvals and sandbox)
    //   - "resume_all": bool (use --all when resuming last session)
    //   - "profile": string
    //   - "local_provider": "lmstudio" | "ollama"
    //   - "config_overrides": map[string]any (repeated -c key=value)
    //   - "skip_git_repo_check": bool
    //   - "output_schema": string
    //   - "output_last_message": string
    //   - "model_reasoning_effort": "minimal" | "low" | "medium" | "high"
    //   - "hide_agent_reasoning": bool
    //   - "oss": bool (local OSS backend mode)
    //   - "enable_features": []string
    //   - "disable_features": []string
    //   - "color": "auto" | "always" | "never"
    //
    // OpenCode:
    //   - "quiet": bool
    //   - "agent": "build" | "plan"
    //   - "debug": bool
    //
    // Local:
    //   - "backend": "ollama" | "llama.cpp" | "vllm"
    //   - "sidecar_path": string
    //
    // Continue (Continue.dev CLI):
    //   - "path": string (path to cn binary)
    //   - "config_path": string (path to config.yaml)
    //   - "api_key": string (CONTINUE_API_KEY)
    //   - "rule": string (rule from Mission Control)
    //   - "verbose": bool (enable logging)
    //   - "resume": bool (resume previous session)
    //   - "allowed_tools": []string (--allow patterns)
    //   - "ask_tools": []string (--ask patterns)
    //   - "excluded_tools": []string (--exclude tools)
    //
    // Aider:
    //   - "path": string (path to aider binary)
    //   - "ollama_api_base": string (OLLAMA_API_BASE)
    //   - "edit_format": string (diff, whole, etc.)
    //   - "no_git": bool (disable git)
    //   - "no_auto_commits": bool (disable auto-commits)
    //   - "no_stream": bool (disable streaming)
    //   - "dry_run": bool (preview without modifying)
    //   - "yes_always": bool (auto-confirm all prompts)
    //   - "editable_files": []string (--file flags)
    //   - "read_only_files": []string (--read flags)
    Options map[string]any `json:"options" yaml:"options" mapstructure:"options"`
}
Config holds configuration for creating an LLM provider client. Common fields apply to all providers; use Options for provider-specific settings.
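The With* builders take value receivers and return modified copies, so they chain without mutating the original Config. A sketch with a trimmed Config; note the assumed detail that WithOption clones the Options map first, which is required for the copy semantics to hold:

```go
package main

import "fmt"

// Config trimmed to the fields needed to show the builder pattern.
type Config struct {
    Provider string
    Model    string
    Options  map[string]any
}

// WithModel returns a copy with Model set; the receiver is unchanged.
func (c Config) WithModel(model string) Config {
    c.Model = model
    return c
}

// WithOption returns a copy with the option set. The Options map is
// cloned first so the original config never observes the write
// (an assumed detail, but needed for value semantics to hold).
func (c Config) WithOption(key string, value any) Config {
    opts := make(map[string]any, len(c.Options)+1)
    for k, v := range c.Options {
        opts[k] = v
    }
    opts[key] = value
    c.Options = opts
    return c
}

func main() {
    base := Config{Provider: "claude"}
    tuned := base.WithModel("claude-sonnet-4-20250514").WithOption("continue", true)
    fmt.Println(base.Model == "", base.Options == nil) // true true: base untouched
    fmt.Println(tuned.Model, tuned.Options["continue"])
}
```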
func DefaultConfig ¶
func DefaultConfig() Config
DefaultConfig returns a Config with sensible defaults. Provider must still be set before use.
func FromEnv ¶
func FromEnv() Config
FromEnv creates a Config from environment variables with defaults.
func (Config) GetBoolOption ¶
func (c Config) GetBoolOption(key string, defaultVal bool) bool
GetBoolOption retrieves a bool option, returning defaultVal if not set.
func (Config) GetIntOption ¶
func (c Config) GetIntOption(key string, defaultVal int) int
GetIntOption retrieves an int option, returning defaultVal if not set.
func (Config) GetOption ¶
func (c Config) GetOption(key string) any
GetOption retrieves a provider-specific option by key. Returns the zero value if the option is not set or has the wrong type.
func (Config) GetStringOption ¶
func (c Config) GetStringOption(key, defaultVal string) string
GetStringOption retrieves a string option, returning defaultVal if not set.
func (Config) GetStringSliceOption ¶
func (c Config) GetStringSliceOption(key string) []string
GetStringSliceOption retrieves a string slice option, returning nil if not set. Handles both []string and []any (from JSON unmarshaling).
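The []any case matters because encoding/json decodes arrays inside a map[string]any as []any, never []string. A local sketch of the dual handling (skipping non-string elements is an assumption):

```go
package main

import "fmt"

// Config trimmed to the Options map the getters read.
type Config struct {
    Options map[string]any
}

// GetStringSliceOption handles both []string (set directly in Go) and
// []any (the shape encoding/json produces for arrays in a map[string]any).
// Non-string elements are assumed to be skipped.
func (c Config) GetStringSliceOption(key string) []string {
    switch v := c.Options[key].(type) {
    case []string:
        return v
    case []any:
        out := make([]string, 0, len(v))
        for _, item := range v {
            if s, ok := item.(string); ok {
                out = append(out, s)
            }
        }
        return out
    }
    return nil
}

func main() {
    cfg := Config{Options: map[string]any{
        "tools": []any{"Read", "Write"}, // as decoded from JSON
    }}
    fmt.Println(cfg.GetStringSliceOption("tools"))   // [Read Write]
    fmt.Println(cfg.GetStringSliceOption("missing")) // []
}
```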
func (*Config) LoadFromEnv ¶
func (c *Config) LoadFromEnv()
LoadFromEnv populates config fields from environment variables. Environment variables use LLMKIT_ prefix and take precedence over existing values.
Supported variables:
- LLMKIT_PROVIDER: Provider name
- LLMKIT_MODEL: Model name
- LLMKIT_FALLBACK_MODEL: Fallback model name
- LLMKIT_SYSTEM_PROMPT: System prompt
- LLMKIT_MAX_TURNS: Maximum turns
- LLMKIT_TIMEOUT: Timeout duration (e.g., "5m")
- LLMKIT_MAX_BUDGET_USD: Maximum budget
- LLMKIT_WORK_DIR: Working directory
func (Config) WithMCP ¶
func (c Config) WithMCP(mcp *claudeconfig.MCPConfig) Config
WithMCP returns a copy of the config with the specified MCP configuration.
func (Config) WithOption ¶
func (c Config) WithOption(key string, value any) Config
WithOption returns a copy of the config with the specified option set.
func (Config) WithProvider ¶
func (c Config) WithProvider(provider string) Config
WithProvider returns a copy of the config with the specified provider.
func (Config) WithWorkDir ¶
func (c Config) WithWorkDir(dir string) Config
WithWorkDir returns a copy of the config with the specified working directory.
type ContentPart ¶
type ContentPart struct {
    // Type indicates the content type: "text", "image", "file"
    Type string `json:"type"`

    // Text content (when Type == "text")
    Text string `json:"text,omitempty"`

    // ImageURL for remote images (when Type == "image")
    ImageURL string `json:"image_url,omitempty"`

    // ImageBase64 for inline images (when Type == "image")
    // Format: base64-encoded image data
    ImageBase64 string `json:"image_base64,omitempty"`

    // MediaType specifies the MIME type (e.g., "image/png", "image/jpeg")
    MediaType string `json:"media_type,omitempty"`

    // FilePath for local file references (when Type == "file")
    FilePath string `json:"file_path,omitempty"`
}
ContentPart represents a piece of multimodal content.
type Error ¶
type Error struct {
    Provider  string // Provider name ("claude", "gemini", etc.)
    Op        string // Operation that failed ("complete", "stream")
    Err       error  // Underlying error
    Retryable bool   // Whether the error is likely transient
}
Error wraps provider errors with context.
type Factory ¶
type Factory func(cfg Config) (Client, error)
Factory creates a new Client from the given configuration. Each provider registers its own factory function.
type Message ¶
type Message struct {
    Role    Role   `json:"role"`
    Content string `json:"content"`
    Name    string `json:"name,omitempty"` // For tool results

    // ContentParts enables multimodal content (text + images).
    // If set, takes precedence over Content for providers that support images.
    // Each part can be text, image (base64 or URL), or other media types.
    ContentParts []ContentPart `json:"content_parts,omitempty"`
}
Message is a conversation turn. For simple text messages, use Content. For multimodal messages (images, files), use ContentParts instead.
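A multimodal message and its text extraction can be sketched with local stand-in types; getText stands in for GetText, and the newline join between text parts is an assumption:

```go
package main

import (
    "fmt"
    "strings"
)

// Local stand-ins trimmed to the fields getText reads.
type ContentPart struct {
    Type string
    Text string
}

type Message struct {
    Role         string
    Content      string
    ContentParts []ContentPart
}

// getText stands in for GetText: plain messages return Content;
// multimodal messages concatenate their "text" parts.
func (m Message) getText() string {
    if len(m.ContentParts) == 0 {
        return m.Content
    }
    var texts []string
    for _, p := range m.ContentParts {
        if p.Type == "text" {
            texts = append(texts, p.Text)
        }
    }
    return strings.Join(texts, "\n")
}

func main() {
    msg := Message{
        Role: "user",
        ContentParts: []ContentPart{
            {Type: "text", Text: "What is in this image?"},
            {Type: "image"}, // image parts contribute no text
            {Type: "text", Text: "Describe the layout."},
        },
    }
    fmt.Println(msg.getText())
}
```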
func NewImageBase64Message ¶
NewImageBase64Message creates a message with an inline base64 image.
func NewImageMessage ¶
NewImageMessage creates a message with an image from a URL.
func NewTextMessage ¶
NewTextMessage creates a simple text message.
func (Message) GetText ¶
GetText returns the text content of the message. For multimodal messages, concatenates all text parts.
func (Message) IsMultimodal ¶
IsMultimodal returns true if the message has multimodal content.
type Request ¶
type Request struct {
    // SystemPrompt sets the system message that guides the model's behavior.
    SystemPrompt string `json:"system_prompt,omitempty"`

    // Messages is the conversation history to send to the model.
    Messages []Message `json:"messages"`

    // Model specifies which model to use (provider-specific name).
    // Examples: "claude-sonnet-4-20250514", "gemini-2.5-pro", "gpt-5-codex"
    Model string `json:"model,omitempty"`

    // MaxTokens limits the response length.
    MaxTokens int `json:"max_tokens,omitempty"`

    // Temperature controls response randomness (0.0 = deterministic, 1.0 = creative).
    Temperature float64 `json:"temperature,omitempty"`

    // Tools lists available tools the model can invoke.
    Tools []Tool `json:"tools,omitempty"`

    // MCP configures MCP servers to enable for this request.
    // MCP is the universal tool extension mechanism supported by all providers.
    MCP *claudeconfig.MCPConfig `json:"mcp,omitempty"`

    // Options holds provider-specific configuration not covered by standard fields.
    // See each provider's documentation for available options.
    Options map[string]any `json:"options,omitempty"`
}
Request configures an LLM completion call. This is the provider-agnostic request format used across all CLI providers.
type Response ¶
type Response struct {
    // Content is the text response from the model.
    Content string `json:"content"`

    // ToolCalls contains any tool invocations requested by the model.
    ToolCalls []ToolCall `json:"tool_calls,omitempty"`

    // Usage tracks token consumption for this request.
    Usage TokenUsage `json:"usage"`

    // Model is the actual model used (may differ from requested).
    Model string `json:"model"`

    // FinishReason indicates why the model stopped generating.
    // Common values: "stop", "length", "tool_calls"
    FinishReason string `json:"finish_reason"`

    // Duration is the time taken for the completion.
    Duration time.Duration `json:"duration"`

    // SessionID is the session identifier (for providers that support sessions).
    // Empty if the provider doesn't support sessions.
    SessionID string `json:"session_id,omitempty"`

    // CostUSD is the estimated cost in USD (for providers that track cost).
    // Zero if cost tracking is not available.
    CostUSD float64 `json:"cost_usd,omitempty"`

    // NumTurns is the number of conversation turns (for multi-turn sessions).
    NumTurns int `json:"num_turns,omitempty"`

    // Metadata holds provider-specific response data.
    // See each provider's documentation for available fields.
    Metadata map[string]any `json:"metadata,omitempty"`
}
Response is the output of a completion call.
type StreamChunk ¶
type StreamChunk struct {
    // Content is the text content in this chunk.
    Content string `json:"content,omitempty"`

    // ToolCalls contains tool invocations (usually only in final chunks).
    ToolCalls []ToolCall `json:"tool_calls,omitempty"`

    // Usage is the token usage (only set in final chunk).
    Usage *TokenUsage `json:"usage,omitempty"`

    // Done indicates this is the final chunk.
    Done bool `json:"done"`

    // Error is non-nil if streaming failed.
    Error error `json:"-"`
}
StreamChunk is a piece of a streaming response.
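A typical consumer drains the channel, accumulating Content until the Done chunk arrives (the only one carrying Usage) and stopping early on a chunk Error. Sketched with a trimmed local StreamChunk and a simulated producer:

```go
package main

import "fmt"

// Local stand-ins for the streaming types.
type TokenUsage struct {
    TotalTokens int
}

type StreamChunk struct {
    Content string
    Usage   *TokenUsage
    Done    bool
    Error   error
}

// collect drains a stream the way a Stream caller would: accumulate
// Content, surface chunk errors, and stop at the Done chunk, which is
// the only one carrying Usage.
func collect(ch <-chan StreamChunk) (string, *TokenUsage, error) {
    var text string
    for chunk := range ch {
        if chunk.Error != nil {
            return text, nil, chunk.Error
        }
        text += chunk.Content
        if chunk.Done {
            return text, chunk.Usage, nil
        }
    }
    return text, nil, nil
}

func main() {
    // Simulate a provider's stream with a buffered channel.
    ch := make(chan StreamChunk, 3)
    ch <- StreamChunk{Content: "Hello, "}
    ch <- StreamChunk{Content: "world."}
    ch <- StreamChunk{Done: true, Usage: &TokenUsage{TotalTokens: 12}}
    close(ch)

    text, usage, err := collect(ch)
    fmt.Println(text, usage.TotalTokens, err == nil) // Hello, world. 12 true
}
```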
type TokenUsage ¶
type TokenUsage struct {
    InputTokens  int `json:"input_tokens"`
    OutputTokens int `json:"output_tokens"`
    TotalTokens  int `json:"total_tokens"`

    // Cache-related tokens (provider-specific, may be zero)
    CacheCreationInputTokens int `json:"cache_creation_input_tokens,omitempty"`
    CacheReadInputTokens     int `json:"cache_read_input_tokens,omitempty"`
}
TokenUsage tracks token consumption.
func (*TokenUsage) Add ¶
func (u *TokenUsage) Add(other TokenUsage)
Add combines token usage from another TokenUsage.
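Add is a field-by-field accumulation, which makes it natural for totaling usage across the requests of a session. A local sketch with the struct redeclared:

```go
package main

import "fmt"

// TokenUsage mirrors the package struct.
type TokenUsage struct {
    InputTokens              int
    OutputTokens             int
    TotalTokens              int
    CacheCreationInputTokens int
    CacheReadInputTokens     int
}

// Add accumulates another usage into the receiver, field by field.
func (u *TokenUsage) Add(other TokenUsage) {
    u.InputTokens += other.InputTokens
    u.OutputTokens += other.OutputTokens
    u.TotalTokens += other.TotalTokens
    u.CacheCreationInputTokens += other.CacheCreationInputTokens
    u.CacheReadInputTokens += other.CacheReadInputTokens
}

func main() {
    // Total the usage of two requests in one session.
    var session TokenUsage
    session.Add(TokenUsage{InputTokens: 100, OutputTokens: 40, TotalTokens: 140})
    session.Add(TokenUsage{InputTokens: 250, OutputTokens: 60, TotalTokens: 310})
    fmt.Println(session.InputTokens, session.OutputTokens, session.TotalTokens) // 350 100 450
}
```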