Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func DiagnoseFailure ¶ added in v0.42.0
DiagnoseFailure attempts to explain a devx command failure using a two-tier approach: (1) rule-based pattern matching against known failure modes, then (2) AI-enhanced diagnosis if a local LLM or cloud API is available.
Returns a styled, printable string, or "" if no diagnosis is available. This function never returns an error — it degrades gracefully.
func GenerateCompletion ¶ added in v0.39.0
func GenerateCompletion(provider *AIProvider, model, systemPrompt, userPrompt string) (string, error)
GenerateCompletion sends a chat completion request to an OpenAI-compatible API. It buffers the full response (non-streaming) and returns the completion text. Timeout is set to 120 seconds to accommodate large generation requests.
Types ¶
type AIProvider ¶ added in v0.39.0
type AIProvider struct {
Name string // Human-readable name (e.g., "Ollama", "LM Studio", "OpenAI")
BaseURL string // OpenAI-compatible API base (e.g., "http://127.0.0.1:11434/v1")
APIKey string // API key for authentication
DefaultModel string // Provider-specific default model (empty = use server default)
Source string // "local" or "cloud"
}
AIProvider represents a discovered AI backend capable of chat completions.
func DiscoverAIProvider ¶ added in v0.39.0
func DiscoverAIProvider() *AIProvider
DiscoverAIProvider probes for an available AI backend using a priority cascade:
- Local Ollama on port 11434
- Local LM Studio on port 1234
- OPENAI_API_KEY environment variable (cloud fallback)
Returns nil if no AI provider is found.
type AgentMode ¶ added in v0.41.0
type AgentMode string
AgentMode describes how an AI prompt was executed.
type AgentResult ¶ added in v0.41.0
type AgentResult struct {
Output string // The AI-generated text
Mode AgentMode // Which execution path was used
}
AgentResult holds the output of an AI-assisted operation.
func RunAgentPrompt ¶ added in v0.41.0
func RunAgentPrompt(prompt string) (*AgentResult, error)
RunAgentPrompt executes a prompt using the best available AI backend.
Priority:
- ollama launch <agent> -- -p "prompt" --permission-mode plan --print (full agentic, can read files and understand codebase context)
- internal/ai chat completion via DiscoverAIProvider (simple prompt/response, no file access)
- Returns AgentModeNone if no AI is available
The prompt should be self-contained — the ollama launch path has file access but the chat API fallback does not, so include any necessary context in the prompt.
type BridgeEnv ¶
BridgeEnv wraps the auto-discovered environment variables and a boolean indicating whether a host engine was found.
func DiscoverHostLLMs ¶
DiscoverHostLLMs probes the native host (where devx is running) for common local AI engines (Ollama, LM Studio). If one is found, it computes the environment variables to inject so containers can reach the host engine.