Documentation ¶
Overview ¶
Package model provides LLM provider configuration and client implementations.
Configuration Reference ¶
Model configuration can be specified via CLI flags, environment variables, config file, or native keystore (for API keys only).
## Resolution Order
For each field, the first source with a value wins:
- CLI flags (highest priority)
- Environment variables
- Config file (~/.config/devlore/config.yaml)
- Native keystore (API key only, lowest priority)
## Configuration Fields
| Field    | CLI Flag         | Environment Variable   | Config Key     | Keystore         |
|----------|------------------|------------------------|----------------|------------------|
| name     | --model          | DEVLORE_MODEL          | model.name     | —                |
| api_key  | --model-api-key  | DEVLORE_MODEL_API_KEY  | model.api_key  | Account=provider |
| endpoint | --model-endpoint | DEVLORE_MODEL_ENDPOINT | model.endpoint | —                |
| provider | --model-provider | DEVLORE_MODEL_PROVIDER | model.provider | —                |
## Provider Field
The provider field determines:
- API protocol: Anthropic API vs OpenAI-compatible API (different request/response formats)
- Authentication method: "Authorization: Bearer" header vs "api-key" header (Azure)
- Default endpoint: Each provider has a different default API URL
- Keystore account: API keys are stored under Service=com.noblefactor.DevLore, Account=<provider>
- Client implementation: Which Provider implementation to instantiate
Supported providers: anthropic, gemini, github, groq, ollama, openai
## Native Keystore
API keys are stored in the native OS keystore:
- macOS: Keychain (security command)
- Linux: libsecret (secret-tool command)
- Windows: Credential Manager (PowerShell)
Keystore entry format:
Service: com.noblefactor.DevLore
Account: <provider> (e.g., "anthropic", "openai")
Key: <api-key>
## Config File Example
# ~/.config/devlore/config.yaml
model:
  name: claude-sonnet-4-20250514
  provider: anthropic
API keys should be stored in the native keystore, not the config file.
## CLI Usage Examples
# Using environment variables
DEVLORE_MODEL_PROVIDER=github DEVLORE_MODEL_API_KEY=$(gh auth token) writ migrate --dry-run ~/dotfiles

# Using CLI flags
writ --model-provider=github --model-api-key=$(gh auth token) migrate --dry-run ~/dotfiles
Package model provides an opaque LLM provider interface and configuration.
Provider Interface ¶
The Provider interface is the single abstraction for all AI/LLM backends. Callers interact only through this interface—they don't need to know whether the underlying provider is Anthropic, OpenAI, Ollama, etc.
type Provider interface {
Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Name() string // Provider name (also keystore account)
Model() string // Model identifier
Endpoint() string // API endpoint (empty = default)
Available(ctx context.Context) bool
}
The interface exposes queryable attributes (provider, model, endpoint) but never exposes the API key for security.
Supported Providers ¶
| Provider  | Default Endpoint                                 | Auth Header           |
|-----------|--------------------------------------------------|-----------------------|
| anthropic | https://api.anthropic.com/v1/messages            | x-api-key             |
| gemini    | https://generativelanguage.googleapis.com/v1beta | ?key= query param     |
| github    | https://models.inference.ai.azure.com            | Authorization: Bearer |
| groq      | https://api.groq.com/openai/v1                   | Authorization: Bearer |
| ollama    | http://localhost:11434                           | (none)                |
| openai    | https://api.openai.com/v1                        | Authorization: Bearer |
See config.go for full configuration reference including CLI flags, environment variables, config file format, and resolution order.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AnthropicProvider ¶
type AnthropicProvider struct {
// contains filtered or unexported fields
}
AnthropicProvider implements Provider for Anthropic Claude.
func NewAnthropicProvider ¶
func NewAnthropicProvider(apiKey, model string) *AnthropicProvider
NewAnthropicProvider creates an Anthropic provider.
func (*AnthropicProvider) Available ¶
func (a *AnthropicProvider) Available(ctx context.Context) bool
Available checks if the API key is valid.
func (*AnthropicProvider) Chat ¶
func (a *AnthropicProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Anthropic.
func (*AnthropicProvider) Endpoint ¶
func (a *AnthropicProvider) Endpoint() string
Endpoint returns empty string (Anthropic uses fixed endpoint).
func (*AnthropicProvider) Model ¶
func (a *AnthropicProvider) Model() string
Model returns the model identifier (e.g., "claude-sonnet-4-20250514").
func (*AnthropicProvider) Name ¶
func (a *AnthropicProvider) Name() string
Name returns "anthropic".
type CLIFlags ¶
type CLIFlags struct {
APIKey string //nolint:gosec // G101: field name is intentional, not a credential
Endpoint string // --model-endpoint
Model string // --model
Provider string // --model-provider
}
CLIFlags holds model configuration from command-line flags. Fields are sorted alphabetically to match documentation.
func (CLIFlags) ApplyTo ¶
func (f CLIFlags) ApplyTo(cfg *config.ModelConfig)
ApplyTo applies CLI flags to a config. CLI flags take highest priority.
type ChatRequest ¶
type ChatRequest struct {
SystemPrompt string // System prompt (instructions)
Messages []Message // Conversation messages
Temperature float64 // 0.0 for deterministic, higher for creativity
MaxTokens int // Maximum response tokens (0 = provider default)
JSONMode bool // Request JSON output if supported
}
ChatRequest represents a chat completion request.
type ChatResponse ¶
type ChatResponse struct {
Content string // Response text
FinishReason string // "stop", "length", etc.
TokensUsed int // Total tokens consumed
}
ChatResponse contains the AI response.
type GeminiProvider ¶
type GeminiProvider struct {
// contains filtered or unexported fields
}
GeminiProvider implements Provider for Google's Gemini API.
func NewGeminiProvider ¶
func NewGeminiProvider(apiKey, model string) *GeminiProvider
NewGeminiProvider creates a Gemini provider.
func (*GeminiProvider) Available ¶
func (g *GeminiProvider) Available(ctx context.Context) bool
Available checks if the API key is set.
func (*GeminiProvider) Chat ¶
func (g *GeminiProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Gemini.
func (*GeminiProvider) Endpoint ¶
func (g *GeminiProvider) Endpoint() string
Endpoint returns the API endpoint URL.
func (*GeminiProvider) Model ¶
func (g *GeminiProvider) Model() string
Model returns the model identifier.
func (*GeminiProvider) Name ¶
func (g *GeminiProvider) Name() string
Name returns "gemini" for keystore lookup.
type GroqProvider ¶
type GroqProvider struct {
*OpenAIProvider
// contains filtered or unexported fields
}
GroqProvider implements Provider for Groq's OpenAI-compatible API. Groq provides extremely fast inference using custom LPU hardware.
func NewGroqProvider ¶
func NewGroqProvider(apiKey, model string) *GroqProvider
NewGroqProvider creates a Groq provider. Groq uses an OpenAI-compatible API at api.groq.com.
func (*GroqProvider) Available ¶
func (g *GroqProvider) Available(ctx context.Context) bool
Available checks if the API key is set.
func (*GroqProvider) Name ¶
func (g *GroqProvider) Name() string
Name returns "groq" for keystore lookup.
type OllamaProvider ¶
type OllamaProvider struct {
// contains filtered or unexported fields
}
OllamaProvider implements Provider for local Ollama.
func NewOllamaProvider ¶
func NewOllamaProvider(endpoint, model string) *OllamaProvider
NewOllamaProvider creates an Ollama provider.
func (*OllamaProvider) Available ¶
func (o *OllamaProvider) Available(ctx context.Context) bool
Available checks if Ollama is running and the model is available.
func (*OllamaProvider) Chat ¶
func (o *OllamaProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Ollama.
func (*OllamaProvider) Endpoint ¶
func (o *OllamaProvider) Endpoint() string
Endpoint returns the Ollama API endpoint URL.
func (*OllamaProvider) Model ¶
func (o *OllamaProvider) Model() string
Model returns the model identifier (e.g., "llama3.1:8b").
type OpenAIProvider ¶
type OpenAIProvider struct {
// contains filtered or unexported fields
}
OpenAIProvider implements Provider for OpenAI and compatible APIs.
func NewOpenAIProvider ¶
func NewOpenAIProvider(apiKey, model, endpoint string) *OpenAIProvider
NewOpenAIProvider creates an OpenAI provider. endpoint can be empty for api.openai.com, or set for Azure/compatible APIs.
func (*OpenAIProvider) Available ¶
func (o *OpenAIProvider) Available(ctx context.Context) bool
Available checks if the API key is set.
func (*OpenAIProvider) Chat ¶
func (o *OpenAIProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to OpenAI.
func (*OpenAIProvider) Endpoint ¶
func (o *OpenAIProvider) Endpoint() string
Endpoint returns the API endpoint URL.
func (*OpenAIProvider) Model ¶
func (o *OpenAIProvider) Model() string
Model returns the model identifier (e.g., "gpt-4-turbo").
type Provider ¶
type Provider interface {
// Chat sends messages and returns a response.
Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
// Name returns the provider name (e.g., "ollama", "anthropic", "openai").
// This is the same value used for keystore account lookup.
Name() string
// Model returns the model identifier (e.g., "llama3.1:8b", "claude-sonnet-4-20250514").
Model() string
// Endpoint returns the API endpoint URL.
// Returns empty string if using provider's default endpoint.
Endpoint() string
// Available returns true if the provider is ready to use.
// For cloud providers, this typically means the API key is configured.
// For local providers (Ollama), this checks if the service is running.
Available(ctx context.Context) bool
}
Provider is the opaque interface for AI/LLM backends.
Callers interact only through this interface—they don't need to know whether the underlying provider is Anthropic, OpenAI, Ollama, etc. The interface exposes queryable attributes (provider, model, endpoint) but never exposes the API key for security.
func EnsureProvider ¶
EnsureProvider returns a configured AI provider, prompting the user if needed. If interactive is false, it fails immediately when no provider is configured.
Lookup priority:
- CLI flags (passed via cliFlags parameter)
- Environment: DEVLORE_MODEL_PROVIDER, DEVLORE_MODEL_API_KEY, etc.
- Config file: ~/.config/devlore/config.yaml
- API key from native keystore (for configured provider)
- Auto-detect from common API key env vars (GROQ_API_KEY, GEMINI_API_KEY, etc.)
- Auto-detect from native keystore entries
- Fallback to Ollama if available locally
- Interactive prompt (only if interactive=true)
func NewProvider ¶
func NewProvider(cfg config.ModelConfig) (Provider, error)
NewProvider creates a provider from configuration.