Documentation ¶
Index ¶
Constants ¶
const (
	DefaultBaseURL      = "http://localhost:11434/v1"
	DefaultModel        = "qwen3"
	DefaultLLMTimeout   = 5 * time.Second
	DefaultCacheTTLDays = 30
)
Variables ¶
This section is empty.
Functions ¶
func ExampleTOML ¶
func ExampleTOML() string
ExampleTOML returns a commented config file suitable for writing as a starter config. It is not written automatically; it is offered to the user on demand.
Types ¶
type Config ¶
Config is the top-level application configuration, loaded from a TOML file.
func Load ¶
Load reads the TOML config file from the default path if it exists, falls back to defaults for any unset fields, and applies environment variable overrides last.
func LoadFromPath ¶
LoadFromPath reads the TOML config file at the given path if it exists, falls back to defaults for any unset fields, and applies environment variable overrides last.
type Documents ¶ added in v1.27.0
type Documents struct {
// MaxFileSize is the largest file (in bytes) that can be imported as a
// document attachment. Default: 50 MiB.
MaxFileSize int64 `toml:"max_file_size"`
// CacheTTLDays is the number of days an extracted document cache entry
// is kept before being evicted on the next startup. Set to 0 to disable
// eviction. Default: 30.
CacheTTLDays int `toml:"cache_ttl_days"`
}
Documents holds settings for document attachments.
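Given the field tags above, a document-attachment section in the config file might look like this (a sketch; the `[documents]` table name follows the usual TOML-to-struct mapping and is an assumption):

```toml
[documents]
# Largest importable attachment, in bytes (50 MiB shown, the default).
max_file_size = 52428800
# Evict extracted-document cache entries after 30 days; 0 disables eviction.
cache_ttl_days = 30
```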
type LLM ¶
type LLM struct {
// BaseURL is the root of an OpenAI-compatible API.
// The client appends /chat/completions, /models, etc.
// Default: http://localhost:11434/v1 (Ollama)
BaseURL string `toml:"base_url"`
// Model is the model identifier passed in chat requests.
// Default: qwen3
Model string `toml:"model"`
// ExtraContext is custom text appended to all system prompts.
// Useful for domain-specific details: house style, currency, location, etc.
// Optional; defaults to empty.
ExtraContext string `toml:"extra_context"`
// Timeout is the maximum time to wait for quick LLM server operations
// (ping, model listing, auto-detect). Go duration string, e.g. "5s",
// "10s", "500ms". Default: "5s".
Timeout string `toml:"timeout"`
}
LLM holds settings for the local LLM inference backend.
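A corresponding config-file section using these fields might look like this (a sketch; the `[llm]` table name is an assumption from the struct name, and the values other than the documented defaults are examples):

```toml
[llm]
base_url = "http://localhost:11434/v1"  # OpenAI-compatible API root (Ollama default)
model = "qwen3"
timeout = "10s"                         # Go duration string, e.g. "5s", "500ms"
extra_context = "Prices are in EUR; use ISO dates."
```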
func (LLM) TimeoutDuration ¶ added in v1.32.0
TimeoutDuration returns the parsed LLM timeout, falling back to DefaultLLMTimeout if the value is empty or unparseable.