Documentation
Index ¶
- Constants
- Variables
- func AddCustomEndpoint(cfg *config.Config, u *ui.UI, name, endpoint, modelName, apiKey string) error
- func ConfigureLiteLLM(cfg *config.Config, u *ui.UI, provider, apiKey string, models []string) error
- func FormatBytes(b int64) string
- func GetConfiguredModels(cfg *config.Config) ([]string, error)
- func GetMasterKey(cfg *config.Config) (string, error)
- func GetProviderStatus(cfg *config.Config) (map[string]ProviderStatus, error)
- func HasConfiguredModels(cfg *config.Config) bool
- func HasProviderConfigured(cfg *config.Config, provider string) bool
- func LoadDotEnv(path string) map[string]string
- func PatchLiteLLMProvider(cfg *config.Config, u *ui.UI, provider, apiKey string, models []string) error
- func ProviderEnvVar(provider string) string
- func ProviderFromModelName(name string) string
- func PullOllamaModel(name string) error
- func RemoveModel(cfg *config.Config, u *ui.UI, modelName string) error
- func ResolveAPIKey(provider string) (key, envVarUsed string)
- func RestartLiteLLM(cfg *config.Config, u *ui.UI, provider string) error
- func ValidateCustomEndpoint(endpoint, modelName, apiKey string) error
- func WarnAndStripV1Suffix(endpoint string) string
- type LiteLLMConfig
- type LiteLLMParams
- type ModelEntry
- type OllamaModel
- type ProviderInfo
- type ProviderStatus
Constants ¶
const (
	// Provider name constants used in model routing and configuration.
	ProviderOllama    = "ollama"
	ProviderAnthropic = "anthropic"
	ProviderOpenAI    = "openai"
)
Variables ¶
var WellKnownModels = map[string][]string{
	ProviderAnthropic: {
		"claude-opus-4-6",
		"claude-sonnet-4-6",
		"claude-haiku-4-5-20251001",
		"claude-sonnet-4-5-20250929",
	},
	ProviderOpenAI: {
		"gpt-5.4",
		"gpt-4.1",
		"gpt-4.1-mini",
		"o4-mini",
		"o3",
	},
}
WellKnownModels maps provider names to their commonly used model IDs. Used to populate OpenClaw's model allowlist when a wildcard is configured and the LiteLLM pod is not reachable for a live /v1/models query.
Functions ¶
func AddCustomEndpoint ¶ added in v0.8.1
func AddCustomEndpoint(cfg *config.Config, u *ui.UI, name, endpoint, modelName, apiKey string) error
AddCustomEndpoint adds a custom OpenAI-compatible endpoint to LiteLLM after validating it works.
func ConfigureLiteLLM ¶ added in v0.8.1
func ConfigureLiteLLM(cfg *config.Config, u *ui.UI, provider, apiKey string, models []string) error
ConfigureLiteLLM adds a provider to the LiteLLM gateway. For cloud providers, it patches the Secret with the API key and adds the model to config.yaml. For Ollama, it discovers local models and adds them.
When only models change (no API key), models are hot-added via the /model/new API — no restart required. When API keys change, a rolling restart is triggered so the new Secret values are picked up.
func FormatBytes ¶ added in v0.8.1
func FormatBytes(b int64) string
FormatBytes formats a byte count as a human-readable string.
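A minimal sketch of such a formatter, assuming binary (1024-based) units; the real FormatBytes may differ in unit style and precision:

```go
package main

import "fmt"

// formatBytes renders a byte count with IEC units (KiB, MiB, ...),
// keeping one decimal place above 1 KiB.
func formatBytes(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB", float64(b)/float64(div), "KMGTPE"[exp])
}
```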
func GetConfiguredModels ¶ added in v0.8.1
func GetConfiguredModels(cfg *config.Config) ([]string, error)
GetConfiguredModels returns the model names available in LiteLLM. Wildcard entries (e.g. anthropic/*) are expanded by querying the running LiteLLM pod's /v1/models endpoint, falling back to the baked-in WellKnownModels list if the cluster is unreachable.
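The fallback branch of that expansion can be sketched as follows; the local wellKnownModels table stands in for the package's WellKnownModels variable, and the expansion details are assumptions about the real implementation:

```go
package main

import "strings"

// wellKnownModels is a stand-in for the package's WellKnownModels table.
var wellKnownModels = map[string][]string{
	"anthropic": {"claude-opus-4-6", "claude-sonnet-4-6"},
	"openai":    {"gpt-4.1", "o3"},
}

// expandWildcard expands a "provider/*" entry from the baked-in list,
// as happens when the LiteLLM pod cannot be queried live.
func expandWildcard(entry string) []string {
	provider, ok := strings.CutSuffix(entry, "/*")
	if !ok {
		return []string{entry} // not a wildcard; keep as-is
	}
	var out []string
	for _, m := range wellKnownModels[provider] {
		out = append(out, provider+"/"+m)
	}
	return out
}
```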
func GetMasterKey ¶ added in v0.8.1
func GetMasterKey(cfg *config.Config) (string, error)
GetMasterKey reads the LiteLLM master key from the cluster Secret.
func GetProviderStatus ¶
func GetProviderStatus(cfg *config.Config) (map[string]ProviderStatus, error)
GetProviderStatus reads LiteLLM config and returns provider status.
func HasConfiguredModels ¶ added in v0.8.1
func HasConfiguredModels(cfg *config.Config) bool
HasConfiguredModels returns true if LiteLLM has at least one non-catch-all model configured (i.e., something other than the "paid/*" route).
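The check itself is simple; a sketch over stand-in types (the real function reads the cluster ConfigMap via cfg):

```go
package main

// Minimal stand-ins for the package's config types.
type LiteLLMParams struct{ Model string }
type ModelEntry struct {
	ModelName     string
	LiteLLMParams LiteLLMParams
}

// hasConfiguredModels reports whether at least one entry is something
// other than the "paid/*" catch-all route.
func hasConfiguredModels(entries []ModelEntry) bool {
	for _, e := range entries {
		if e.ModelName != "paid/*" {
			return true
		}
	}
	return false
}
```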
func HasProviderConfigured ¶ added in v0.8.1
func HasProviderConfigured(cfg *config.Config, provider string) bool
HasProviderConfigured returns true if LiteLLM already has at least one model entry for the given provider (e.g., "anthropic", "openai").
func LoadDotEnv ¶ added in v0.8.1
func LoadDotEnv(path string) map[string]string
LoadDotEnv reads KEY=value pairs from a .env file. Returns an empty map if the file doesn't exist or is unreadable. Skips comments (#) and blank lines. Does not call os.Setenv.
func PatchLiteLLMProvider ¶ added in v0.8.1
func PatchLiteLLMProvider(cfg *config.Config, u *ui.UI, provider, apiKey string, models []string) error
PatchLiteLLMProvider patches the LiteLLM Secret (API key) and ConfigMap (model_list) for a provider without restarting the deployment. Call RestartLiteLLM afterwards (once, after batching multiple providers).
func ProviderEnvVar ¶ added in v0.8.1
func ProviderEnvVar(provider string) string
ProviderEnvVar returns the env var name for a provider's API key.
func ProviderFromModelName ¶ added in v0.8.1
func ProviderFromModelName(name string) string
ProviderFromModelName infers the provider from a model name string.
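One plausible shape for this inference, shown as a sketch; the specific heuristics (explicit "provider/" prefix first, then well-known name patterns, with "ollama" as the default) are assumptions about the real function:

```go
package main

import "strings"

// providerFromModelName infers a provider from a model name: an
// explicit "provider/model" prefix wins, otherwise name patterns
// are matched, otherwise the model is assumed to be local (ollama).
func providerFromModelName(name string) string {
	if prov, _, ok := strings.Cut(name, "/"); ok {
		return prov
	}
	switch {
	case strings.HasPrefix(name, "claude"):
		return "anthropic"
	case strings.HasPrefix(name, "gpt"), strings.HasPrefix(name, "o3"), strings.HasPrefix(name, "o4"):
		return "openai"
	default:
		return "ollama"
	}
}
```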
func PullOllamaModel ¶ added in v0.8.1
func PullOllamaModel(name string) error
PullOllamaModel pulls a model from the Ollama registry. It streams progress to stdout, matching the UX of `ollama pull`.
func RemoveModel ¶ added in v0.8.1
func RemoveModel(cfg *config.Config, u *ui.UI, modelName string) error
RemoveModel removes a model entry from the LiteLLM ConfigMap (persistence) and hot-deletes it from the running router via the API (immediate effect). No pod restart is required.
func ResolveAPIKey ¶ added in v0.8.1
func ResolveAPIKey(provider string) (key, envVarUsed string)
ResolveAPIKey checks the primary env var and each AltEnvVar in order for the given provider. Returns the key value and the env var it was found in. Both are empty if no key is available.
func RestartLiteLLM ¶ added in v0.8.1
func RestartLiteLLM(cfg *config.Config, u *ui.UI, provider string) error
RestartLiteLLM restarts the LiteLLM deployment and waits for rollout.
func ValidateCustomEndpoint ¶ added in v0.8.1
func ValidateCustomEndpoint(endpoint, modelName, apiKey string) error
ValidateCustomEndpoint validates that a custom OpenAI-compatible endpoint works. It runs a 2-step validation: reachability check, then inference probe. The inference probe is the definitive test — some servers (e.g., mlx-lm) don't list the loaded model in /models but accept it for inference.
func WarnAndStripV1Suffix ¶ added in v0.8.1
func WarnAndStripV1Suffix(endpoint string) string
WarnAndStripV1Suffix checks whether an endpoint URL has a trailing /v1 suffix, warns the user, and returns the stripped URL. For OpenAI-compatible providers, LiteLLM auto-appends /v1, so a user-supplied /v1 would produce a doubled /v1/v1 path.
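The normalization itself is a one-liner; a sketch without the UI warning (handling of a trailing slash before /v1 is an assumption):

```go
package main

import "strings"

// stripV1Suffix drops a trailing "/v1" (with or without a final slash)
// so LiteLLM's automatic /v1 append can't produce a /v1/v1 path.
func stripV1Suffix(endpoint string) string {
	e := strings.TrimRight(endpoint, "/")
	return strings.TrimSuffix(e, "/v1")
}
```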
Types ¶
type LiteLLMConfig ¶ added in v0.8.1
type LiteLLMConfig struct {
ModelList []ModelEntry `yaml:"model_list"`
GeneralSettings map[string]any `yaml:"general_settings,omitempty"`
LiteLLMSettings map[string]any `yaml:"litellm_settings,omitempty"`
}
LiteLLMConfig represents the LiteLLM proxy config.yaml structure.
type LiteLLMParams ¶ added in v0.8.1
type LiteLLMParams struct {
Model string `yaml:"model"`
APIBase string `yaml:"api_base,omitempty"`
APIKey string `yaml:"api_key,omitempty"`
}
LiteLLMParams holds the routing parameters for a model.
type ModelEntry ¶ added in v0.8.1
type ModelEntry struct {
ModelName string `yaml:"model_name"`
LiteLLMParams LiteLLMParams `yaml:"litellm_params"`
}
ModelEntry is a single entry in model_list.
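Put together, a config.yaml built from these structs might look like the fragment below. The model names, the api_base value, and the os.environ/ key reference are illustrative, not taken from this package:

```yaml
model_list:
  - model_name: claude-sonnet-4-6
    litellm_params:
      model: anthropic/claude-sonnet-4-6
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://ollama:11434
```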
type OllamaModel ¶ added in v0.8.1
type OllamaModel struct {
Name string `json:"name"`
Size int64 `json:"size"`
ModifiedAt string `json:"modified_at"`
}
OllamaModel describes a model pulled in the local Ollama instance.
func ListOllamaModels ¶ added in v0.8.1
func ListOllamaModels() ([]OllamaModel, error)
ListOllamaModels queries the local Ollama server for pulled models. Returns nil and an error if Ollama is not reachable.
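Ollama's GET /api/tags endpoint, which this function presumably calls, wraps the models in a top-level "models" field; decoding it into the struct above can be sketched as:

```go
package main

import "encoding/json"

// OllamaModel mirrors the package type shown above.
type OllamaModel struct {
	Name       string `json:"name"`
	Size       int64  `json:"size"`
	ModifiedAt string `json:"modified_at"`
}

// parseTags decodes the body of an Ollama /api/tags response.
func parseTags(body []byte) ([]OllamaModel, error) {
	var resp struct {
		Models []OllamaModel `json:"models"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	return resp.Models, nil
}
```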
type ProviderInfo ¶
type ProviderInfo struct {
ID string // provider id (e.g. "anthropic", "openai", "ollama")
Name string // display name
EnvVar string // primary env var for API key (empty for Ollama)
AltEnvVars []string // fallback env vars checked in order (e.g. CLAUDE_CODE_OAUTH_TOKEN)
}
ProviderInfo describes an LLM provider.
func GetAvailableProviders ¶
func GetAvailableProviders(_ *config.Config) ([]ProviderInfo, error)
GetAvailableProviders returns the known provider list (static, no pod query needed).