Documentation ¶
Overview ¶
Package model provides LLM provider configuration and client implementations.
Configuration Reference ¶
Model configuration can be specified via CLI flags, environment variables, config file, or native keystore (for API keys only).
## Resolution Order
For each field, the first source with a value wins:
- CLI flags (highest priority)
- Environment variables
- Config file (~/.config/devlore/config.yaml)
- Native keystore (API key only, lowest priority)
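In code, the resolution rule reduces to "first non-empty value wins". A minimal sketch (firstNonEmpty is a hypothetical helper for illustration, not part of the package):

```go
package main

import (
	"fmt"
	"os"
)

// firstNonEmpty returns the first non-empty value, mirroring the
// "first source with a value wins" rule.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	// Hypothetical sources for the "model" field, highest priority first.
	cliFlag := ""                     // --model not passed
	env := os.Getenv("DEVLORE_MODEL") // typically unset
	configFile := "claude-sonnet-4-20250514"
	fmt.Println(firstNonEmpty(cliFlag, env, configFile))
}
```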
## Configuration Fields
| Field    | CLI Flag         | Environment Variable   | Config Key     | Keystore         |
|----------|------------------|------------------------|----------------|------------------|
| model    | --model          | DEVLORE_MODEL          | model.model    | —                |
| api-key  | --model-api-key  | DEVLORE_MODEL_API_KEY  | model.api-key  | Account=provider |
| endpoint | --model-endpoint | DEVLORE_MODEL_ENDPOINT | model.endpoint | —                |
| provider | --model-provider | DEVLORE_MODEL_PROVIDER | model.provider | —                |
## Provider Field
The provider field determines:
- API protocol: Anthropic API vs OpenAI-compatible API (different request/response formats)
- Authentication method: "Authorization: Bearer" header, "x-api-key" header (Anthropic), or "api-key" header (Azure)
- Default endpoint: Each provider has a different default API URL
- Keystore account: API keys are stored under Service=com.noblefactor.DevLore, Account=<provider>
- Client implementation: Which Provider implementation to instantiate
Supported providers: anthropic, azure-openai, github, ollama, openai
## Native Keystore
API keys are stored in the native OS keystore:
- macOS: Keychain (security command)
- Linux: libsecret (secret-tool command)
- Windows: Credential Manager (PowerShell)
Keystore entry format:
Service: com.noblefactor.DevLore
Account: <provider>  (e.g., "anthropic", "openai")
Key:     <api-key>
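Entries in this format can be created and inspected manually with the platform tools named above. The commands below are a sketch using a placeholder key; the exact attribute labels used by the Linux entries are an assumption about how the package stores them:

```shell
# macOS: store and read back an API key in the Keychain
security add-generic-password -s com.noblefactor.DevLore -a anthropic -w "sk-example"
security find-generic-password -s com.noblefactor.DevLore -a anthropic -w

# Linux: the same entry via libsecret
printf '%s' "sk-example" | secret-tool store --label="DevLore" \
  service com.noblefactor.DevLore account anthropic
secret-tool lookup service com.noblefactor.DevLore account anthropic
```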
## Config File Example
# ~/.config/devlore/config.yaml
model:
  model: claude-sonnet-4-20250514
  provider: anthropic
API keys should be stored in the native keystore, not the config file.
## CLI Usage Examples
# Using environment variables
DEVLORE_MODEL_PROVIDER=github DEVLORE_MODEL_API_KEY=$(gh auth token) writ migrate --dry-run ~/dotfiles

# Using CLI flags
writ --model-provider=github --model-api-key=$(gh auth token) migrate --dry-run ~/dotfiles
Package model provides an opaque LLM provider interface and configuration.
Provider Interface ¶
The Provider interface is the single abstraction for all AI/LLM backends. Callers interact only through this interface—they don't need to know whether the underlying provider is Anthropic, OpenAI, Ollama, etc.
type Provider interface {
Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Name() string // Provider name (also keystore account)
Model() string // Model identifier
Endpoint() string // API endpoint (empty = default)
Available(ctx context.Context) bool
}
The interface exposes queryable attributes (provider, model, endpoint) but, for security, never exposes the API key.
Provider Field Uses ¶
The provider field in configuration determines:
- API Protocol: Anthropic API vs OpenAI-compatible API (different request/response formats)
- Authentication Method: "Authorization: Bearer" header, "x-api-key" header (Anthropic), or "api-key" header (Azure)
- Default Endpoint: Each provider has a different default API URL
- Keystore Account: API keys stored under Service=com.noblefactor.DevLore, Account=<provider>
- Client Implementation: Which Provider implementation to instantiate
Supported Providers ¶
| Provider     | Default Endpoint                       | Auth Header           |
|--------------|----------------------------------------|-----------------------|
| anthropic    | https://api.anthropic.com/v1/messages  | x-api-key             |
| azure-openai | (requires endpoint)                    | api-key               |
| github       | https://models.inference.ai.azure.com  | Authorization: Bearer |
| ollama       | http://localhost:11434                 | (none)                |
| openai       | https://api.openai.com/v1              | Authorization: Bearer |
See config.go for full configuration reference including CLI flags, environment variables, config file format, and resolution order.
Index ¶
Constants ¶
const (
	EnvModel         = "DEVLORE_MODEL"
	EnvModelAPIKey   = "DEVLORE_MODEL_API_KEY"
	EnvModelEndpoint = "DEVLORE_MODEL_ENDPOINT"
	EnvModelProvider = "DEVLORE_MODEL_PROVIDER"
)
Environment variables for model configuration (sorted alphabetically).
Variables ¶
This section is empty.
Functions ¶
func ConfigPath ¶
ConfigPath returns the path to the devlore config file.
func SaveConfig ¶
SaveConfig saves model configuration to the config file. API keys are stored in the native keystore, not the config file.
Types ¶
type AnthropicProvider ¶
type AnthropicProvider struct {
// contains filtered or unexported fields
}
AnthropicProvider implements Provider for Anthropic Claude.
func NewAnthropicProvider ¶
func NewAnthropicProvider(apiKey, model string) *AnthropicProvider
NewAnthropicProvider creates an Anthropic provider.
func (*AnthropicProvider) Available ¶
func (a *AnthropicProvider) Available(ctx context.Context) bool
Available checks if the API key is valid.
func (*AnthropicProvider) Chat ¶
func (a *AnthropicProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Anthropic.
func (*AnthropicProvider) Endpoint ¶
func (a *AnthropicProvider) Endpoint() string
Endpoint returns empty string (Anthropic uses fixed endpoint).
func (*AnthropicProvider) Model ¶
func (a *AnthropicProvider) Model() string
Model returns the model identifier (e.g., "claude-sonnet-4-20250514").
func (*AnthropicProvider) Name ¶
func (a *AnthropicProvider) Name() string
Name returns "anthropic".
type AzureOpenAIProvider ¶
type AzureOpenAIProvider struct {
// contains filtered or unexported fields
}
AzureOpenAIProvider implements Provider for Azure OpenAI Service.
func NewAzureOpenAIProvider ¶
func NewAzureOpenAIProvider(apiKey, endpoint, deployment string) *AzureOpenAIProvider
NewAzureOpenAIProvider creates an Azure OpenAI provider. endpoint should be the base URL (e.g., https://myresource.openai.azure.com); deployment is the name of your model deployment (e.g., "gpt-4").
func (*AzureOpenAIProvider) Available ¶
func (a *AzureOpenAIProvider) Available(ctx context.Context) bool
Available checks if the required configuration is present.
func (*AzureOpenAIProvider) Chat ¶
func (a *AzureOpenAIProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Azure OpenAI.
func (*AzureOpenAIProvider) Endpoint ¶
func (a *AzureOpenAIProvider) Endpoint() string
Endpoint returns the Azure OpenAI endpoint URL.
func (*AzureOpenAIProvider) Model ¶
func (a *AzureOpenAIProvider) Model() string
Model returns the deployment name (Azure's equivalent of model).
func (*AzureOpenAIProvider) Name ¶
func (a *AzureOpenAIProvider) Name() string
Name returns "azure-openai".
type CLIFlags ¶
type CLIFlags struct {
APIKey string // --model-api-key
Endpoint string // --model-endpoint
Model string // --model
Provider string // --model-provider
}
CLIFlags holds model configuration from command-line flags. Fields are sorted alphabetically to match documentation.
type ChatRequest ¶
type ChatRequest struct {
SystemPrompt string // System prompt (instructions)
Messages []Message // Conversation messages
Temperature float64 // 0.0 for deterministic, higher for creativity
MaxTokens int // Maximum response tokens (0 = provider default)
JSONMode bool // Request JSON output if supported
}
ChatRequest represents a chat completion request.
type ChatResponse ¶
type ChatResponse struct {
Content string // Response text
FinishReason string // "stop", "length", etc.
TokensUsed int // Total tokens consumed
}
ChatResponse contains the AI response.
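A typical request, sketched with a local stand-in type (Message's fields are assumed to be Role and Content):

```go
package main

import "fmt"

// Local stand-ins for the package's types, for illustration only.
type Message struct {
	Role, Content string
}

type ChatRequest struct {
	SystemPrompt string
	Messages     []Message
	Temperature  float64
	MaxTokens    int
	JSONMode     bool
}

func main() {
	// A deterministic, JSON-mode request with a capped response size.
	req := ChatRequest{
		SystemPrompt: "Reply with a JSON object.",
		Messages:     []Message{{Role: "user", Content: "List two colors."}},
		Temperature:  0,    // deterministic
		MaxTokens:    256,  // cap the response; 0 would mean provider default
		JSONMode:     true, // request JSON output where supported
	}
	fmt.Println(len(req.Messages), req.JSONMode) // 1 true
}
```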
type Config ¶
type Config struct {
Provider string `yaml:"provider"` // anthropic, azure-openai, github, ollama, openai
Model string `yaml:"model"` // Model name (e.g., "llama3.1:8b", "claude-sonnet-4-20250514")
Endpoint string `yaml:"endpoint"` // API endpoint (optional, for custom/azure)
APIKey string `yaml:"api_key"` // API key (for cloud providers)
}
Config holds AI provider configuration per ADR-017.
func DefaultConfig ¶
func DefaultConfig() Config
DefaultConfig returns the default configuration (Ollama).
func LoadConfig ¶
LoadConfig loads model configuration with consistent fallback:
CLI (handled by caller) → Environment → Config → Keystore
First source with a value wins. If config has api-key, keystore is not checked.
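The api-key short-circuit can be sketched as follows (resolveAPIKey is a hypothetical helper for illustration, not the package's actual code):

```go
package main

import "fmt"

// resolveAPIKey sketches the documented short-circuit: if the environment
// or config file already supplies an api-key, the keystore is never queried.
func resolveAPIKey(env, config string, keystore func() string) string {
	if env != "" {
		return env
	}
	if config != "" {
		return config // keystore is not checked
	}
	return keystore()
}

func main() {
	calls := 0
	ks := func() string { calls++; return "from-keystore" }

	fmt.Println(resolveAPIKey("", "from-config", ks)) // from-config
	fmt.Println(resolveAPIKey("", "", ks))            // from-keystore
	fmt.Println(calls)                                // 1 (keystore queried only once)
}
```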
type OllamaProvider ¶
type OllamaProvider struct {
// contains filtered or unexported fields
}
OllamaProvider implements Provider for local Ollama.
func NewOllamaProvider ¶
func NewOllamaProvider(endpoint, model string) *OllamaProvider
NewOllamaProvider creates an Ollama provider.
func (*OllamaProvider) Available ¶
func (o *OllamaProvider) Available(ctx context.Context) bool
Available checks if Ollama is running and the model is available.
func (*OllamaProvider) Chat ¶
func (o *OllamaProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to Ollama.
func (*OllamaProvider) Endpoint ¶
func (o *OllamaProvider) Endpoint() string
Endpoint returns the Ollama API endpoint URL.
func (*OllamaProvider) Model ¶
func (o *OllamaProvider) Model() string
Model returns the model identifier (e.g., "llama3.1:8b").
type OpenAIProvider ¶
type OpenAIProvider struct {
// contains filtered or unexported fields
}
OpenAIProvider implements Provider for OpenAI and compatible APIs.
func NewOpenAIProvider ¶
func NewOpenAIProvider(apiKey, model, endpoint string) *OpenAIProvider
NewOpenAIProvider creates an OpenAI provider. endpoint can be empty for api.openai.com, or set for Azure/compatible APIs.
func (*OpenAIProvider) Available ¶
func (o *OpenAIProvider) Available(ctx context.Context) bool
Available checks if the API key is set.
func (*OpenAIProvider) Chat ¶
func (o *OpenAIProvider) Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
Chat sends a chat completion request to OpenAI.
func (*OpenAIProvider) Endpoint ¶
func (o *OpenAIProvider) Endpoint() string
Endpoint returns the API endpoint URL.
func (*OpenAIProvider) Model ¶
func (o *OpenAIProvider) Model() string
Model returns the model identifier (e.g., "gpt-4-turbo").
type Provider ¶
type Provider interface {
// Chat sends messages and returns a response.
Chat(ctx context.Context, req ChatRequest) (*ChatResponse, error)
// Name returns the provider name (e.g., "ollama", "anthropic", "openai").
// This is the same value used for keystore account lookup.
Name() string
// Model returns the model identifier (e.g., "llama3.1:8b", "claude-sonnet-4-20250514").
Model() string
// Endpoint returns the API endpoint URL.
// Returns empty string if using provider's default endpoint.
Endpoint() string
// Available returns true if the provider is ready to use.
// For cloud providers, this typically means the API key is configured.
// For local providers (Ollama), this checks if the service is running.
Available(ctx context.Context) bool
}
Provider is the opaque interface for AI/LLM backends.
Callers interact only through this interface—they don't need to know whether the underlying provider is Anthropic, OpenAI, Ollama, etc. The interface exposes queryable attributes (provider, model, endpoint) but, for security, never exposes the API key.
func EnsureProvider ¶
EnsureProvider returns a configured AI provider, prompting the user if needed. If interactive is false, fails immediately if no provider is configured.
Lookup priority:
- CLI flags (passed via cliFlags parameter)
- Environment: DEVLORE_MODEL_PROVIDER, DEVLORE_MODEL_API_KEY, etc.
- Config file: ~/.config/devlore/config.yaml
- API key from native keystore
- Fallback to Ollama if available locally
- Interactive prompt (only if interactive=true)
func NewProvider ¶
NewProvider creates a provider from configuration.
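The dispatch NewProvider performs can be sketched as a switch on the provider field. This is an illustration, not the package's actual code; the mapping of github onto the OpenAI-compatible client is inferred from the providers table above (it uses a Bearer token against an OpenAI-style endpoint):

```go
package main

import "fmt"

// Config is a local stand-in for the package's Config type.
type Config struct {
	Provider, Model, Endpoint, APIKey string
}

// newProviderName returns which implementation the provider field selects.
// The real NewProvider would instead call the documented constructors,
// noted in the comments.
func newProviderName(cfg Config) (string, error) {
	switch cfg.Provider {
	case "anthropic":
		return "AnthropicProvider", nil // NewAnthropicProvider(cfg.APIKey, cfg.Model)
	case "azure-openai":
		return "AzureOpenAIProvider", nil // NewAzureOpenAIProvider(cfg.APIKey, cfg.Endpoint, cfg.Model)
	case "github", "openai":
		return "OpenAIProvider", nil // NewOpenAIProvider(cfg.APIKey, cfg.Model, cfg.Endpoint)
	case "ollama":
		return "OllamaProvider", nil // NewOllamaProvider(cfg.Endpoint, cfg.Model)
	default:
		return "", fmt.Errorf("unknown provider %q", cfg.Provider)
	}
}

func main() {
	name, _ := newProviderName(Config{Provider: "ollama", Model: "llama3.1:8b"})
	fmt.Println(name) // OllamaProvider
}
```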