Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GenerateCompletion ¶ added in v0.39.0
func GenerateCompletion(provider *AIProvider, model, systemPrompt, userPrompt string) (string, error)
GenerateCompletion sends a chat completion request to an OpenAI-compatible API. It buffers the full response (non-streaming) and returns the completion text. Timeout is set to 120 seconds to accommodate large generation requests.
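The non-streaming request flow can be sketched as below. The struct names, the `/chat/completions` path, and the header handling follow the common OpenAI wire format; they are illustrative assumptions, not this package's actual internals.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// chatMessage and chatRequest mirror the OpenAI chat-completions wire
// format; the names are illustrative, not the package's internal types.
type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
	Stream   bool          `json:"stream"`
}

// buildCompletionRequest assembles the kind of POST request a function
// like GenerateCompletion would send. baseURL is an OpenAI-compatible
// base such as "http://127.0.0.1:11434/v1".
func buildCompletionRequest(baseURL, apiKey, model, systemPrompt, userPrompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model: model,
		Messages: []chatMessage{
			{Role: "system", Content: systemPrompt},
			{Role: "user", Content: userPrompt},
		},
		Stream: false, // buffer the full response rather than streaming
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	if apiKey != "" {
		req.Header.Set("Authorization", "Bearer "+apiKey)
	}
	return req, nil
}

func main() {
	req, _ := buildCompletionRequest("http://127.0.0.1:11434/v1", "", "llama3", "You are terse.", "Say hi.")
	// A generous timeout accommodates large generation requests,
	// matching the 120-second timeout noted above.
	client := &http.Client{Timeout: 120 * time.Second}
	_ = client // client.Do(req) would actually send the request
	fmt.Println(req.Method, req.URL.Path)
}
```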
Types ¶
type AIProvider ¶ added in v0.39.0
type AIProvider struct {
	Name         string // Human-readable name (e.g., "Ollama", "LM Studio", "OpenAI")
	BaseURL      string // OpenAI-compatible API base (e.g., "http://127.0.0.1:11434/v1")
	APIKey       string // API key for authentication
	DefaultModel string // Provider-specific default model (empty = use server default)
	Source       string // "local" or "cloud"
}
AIProvider represents a discovered AI backend capable of chat completions.
func DiscoverAIProvider ¶ added in v0.39.0
func DiscoverAIProvider() *AIProvider
DiscoverAIProvider probes for an available AI backend using a priority cascade:
- Local Ollama on port 11434
- Local LM Studio on port 1234
- OPENAI_API_KEY environment variable (cloud fallback)
Returns nil if no AI provider is found.
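The priority cascade can be sketched with an injectable probe so the logic is testable offline. The port numbers and the nil-on-failure behavior come from the documentation above; the probe mechanics (a short TCP dial) and the cloud base URL are assumptions for illustration.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

type AIProvider struct {
	Name    string
	BaseURL string
	APIKey  string
	Source  string
}

// discoverWith walks the priority cascade described above. The reachable
// and getenv funcs are injected so the cascade can be exercised without
// live servers or real environment variables.
func discoverWith(reachable func(addr string) bool, getenv func(string) string) *AIProvider {
	if reachable("127.0.0.1:11434") {
		return &AIProvider{Name: "Ollama", BaseURL: "http://127.0.0.1:11434/v1", Source: "local"}
	}
	if reachable("127.0.0.1:1234") {
		return &AIProvider{Name: "LM Studio", BaseURL: "http://127.0.0.1:1234/v1", Source: "local"}
	}
	if key := getenv("OPENAI_API_KEY"); key != "" {
		return &AIProvider{Name: "OpenAI", BaseURL: "https://api.openai.com/v1", APIKey: key, Source: "cloud"}
	}
	return nil // no AI provider found
}

// tcpReachable is one plausible probe: a quick dial with a short timeout.
func tcpReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if p := discoverWith(tcpReachable, os.Getenv); p != nil {
		fmt.Println("found:", p.Name, p.BaseURL)
	} else {
		fmt.Println("no AI provider found")
	}
}
```

Injecting the probe keeps the cascade a pure function of its inputs, which is why the local-before-cloud ordering is easy to verify in isolation.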
type BridgeEnv ¶
BridgeEnv wraps the auto-discovered environment variables and a boolean indicating if an engine was found.
func DiscoverHostLLMs ¶
DiscoverHostLLMs probes the native host (where devx is running) for common local AI engines (Ollama, LM Studio). If an engine is found, it computes the environment variables to inject into the container so the container can reach the host engine.
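The container-to-host rewrite might look like the sketch below: a host-local base URL such as http://127.0.0.1:11434/v1 is rewritten to a container-reachable host alias before being handed out as environment variables. The `host.docker.internal` alias and the `OPENAI_BASE_URL` variable name are assumptions for illustration, not necessarily what devx injects.

```go
package main

import (
	"fmt"
	"strings"
)

// hostAlias is how a container typically reaches the native host; the
// exact alias depends on the container runtime, so this is an assumption.
const hostAlias = "host.docker.internal"

// bridgeEnv rewrites a host-local engine URL so it resolves from inside
// a container, returning the variables to inject. OPENAI_BASE_URL is an
// illustrative name, not necessarily the variable devx uses.
func bridgeEnv(engineBaseURL string) map[string]string {
	rewritten := engineBaseURL
	for _, local := range []string{"127.0.0.1", "localhost"} {
		rewritten = strings.ReplaceAll(rewritten, local, hostAlias)
	}
	return map[string]string{
		"OPENAI_BASE_URL": rewritten,
	}
}

func main() {
	env := bridgeEnv("http://127.0.0.1:11434/v1")
	fmt.Println(env["OPENAI_BASE_URL"])
	// → http://host.docker.internal:11434/v1
}
```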