Documentation
Constants
const (
	DefaultBaseURL = "http://localhost:11434/v1"
	DefaultModel   = "qwen3"
)
Variables
This section is empty.
Functions
func ExampleTOML
func ExampleTOML() string
ExampleTOML returns a commented config file suitable for writing as a starter config. The file is not written automatically; it is offered to the user on demand.
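A minimal sketch of offering the starter config on demand. It assumes the code lives in (or imports) this package; the helper name and destination path are illustrative, not part of this package:

import "os"

// writeStarterConfig is a hypothetical helper: it writes the commented
// starter config returned by ExampleTOML to a caller-chosen path.
func writeStarterConfig(path string) error {
	return os.WriteFile(path, []byte(ExampleTOML()), 0o644)
}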
Types ¶
type Config
type Config struct {
	LLM LLM `toml:"llm"`
}
Config is the top-level application configuration, loaded from a TOML file.
func Load
Load reads the TOML config file from the default path if it exists, falls back to defaults for any unset fields, and applies environment variable overrides last.
func LoadFromPath
LoadFromPath reads the TOML config file at the given path if it exists, falls back to defaults for any unset fields, and applies environment variable overrides last.
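A load-and-use sketch. This page does not show the signatures of Load or LoadFromPath; the fragment below assumes the conventional (*Config, error) shape, and assumes the package is already in scope. The field accesses come from the documented Config and LLM types:

// Assumed signature: Load() (*Config, error). Not confirmed by this page.
cfg, err := Load()
if err != nil {
	log.Fatal(err) // handle as appropriate for your application
}
fmt.Println(cfg.LLM.BaseURL) // default: http://localhost:11434/v1
fmt.Println(cfg.LLM.Model)   // default: qwen3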
type LLM
type LLM struct {
	// BaseURL is the root of an OpenAI-compatible API.
	// The client appends /chat/completions, /models, etc.
	// Default: http://localhost:11434/v1 (Ollama)
	BaseURL string `toml:"base_url"`

	// Model is the model identifier passed in chat requests.
	// Default: qwen3
	Model string `toml:"model"`

	// ExtraContext is custom text appended to all system prompts.
	// Useful for domain-specific details: house style, currency, location, etc.
	// Optional; defaults to empty.
	ExtraContext string `toml:"extra_context"`
}
LLM holds settings for the local LLM inference backend.
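For reference, a config file populating these fields might look like the following. The base_url and model values are the documented defaults; the extra_context value is purely illustrative:

[llm]
base_url = "http://localhost:11434/v1"
model = "qwen3"
# Optional free-form text appended to all system prompts.
extra_context = "Prices are in EUR. Prefer metric units."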