ai

package
v0.41.0
Published: May 4, 2026 License: MIT Imports: 10 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func GenerateCompletion added in v0.39.0

func GenerateCompletion(provider *AIProvider, model, systemPrompt, userPrompt string) (string, error)

GenerateCompletion sends a chat completion request to an OpenAI-compatible API. It buffers the full response (non-streaming) and returns the completion text. Timeout is set to 120 seconds to accommodate large generation requests.

Types

type AIProvider added in v0.39.0

type AIProvider struct {
	Name         string // Human-readable name (e.g., "Ollama", "LM Studio", "OpenAI")
	BaseURL      string // OpenAI-compatible API base (e.g., "http://127.0.0.1:11434/v1")
	APIKey       string // API key for authentication
	DefaultModel string // Provider-specific default model (empty = use server default)
	Source       string // "local" or "cloud"
}

AIProvider represents a discovered AI backend capable of chat completions.
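Because all fields are exported, a provider can presumably also be constructed by hand for an endpoint the discovery cascade does not probe. A sketch (the vLLM endpoint here is a hypothetical example, not something this package discovers):

```go
provider := &ai.AIProvider{
	Name:    "Custom vLLM", // hypothetical backend, not part of the cascade
	BaseURL: "http://127.0.0.1:8000/v1",
	APIKey:  "", // local servers typically ignore the key
	Source:  "local",
	// DefaultModel left empty: use the server's default
}
```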

func DiscoverAIProvider added in v0.39.0

func DiscoverAIProvider() *AIProvider

DiscoverAIProvider probes for an available AI backend using a priority cascade:

  1. Local Ollama on port 11434
  2. Local LM Studio on port 1234
  3. OPENAI_API_KEY environment variable (cloud fallback)

Returns nil if no AI provider is found.

type AgentMode added in v0.41.0

type AgentMode string

AgentMode describes how an AI prompt was executed.

const (
	AgentModeOllamaLaunch AgentMode = "ollama_launch" // Full agentic via ollama launch
	AgentModeChatAPI      AgentMode = "chat_api"      // Simple chat completion via internal/ai
	AgentModeNone         AgentMode = "none"          // No AI available
)

type AgentResult added in v0.41.0

type AgentResult struct {
	Output string    // The AI-generated text
	Mode   AgentMode // Which execution path was used
}

AgentResult holds the output of an AI-assisted operation.

func RunAgentPrompt added in v0.41.0

func RunAgentPrompt(prompt string) (*AgentResult, error)

RunAgentPrompt executes a prompt using the best available AI backend.

Priority:

  1. ollama launch <agent> -- -p "prompt" --permission-mode plan --print (full agentic, can read files and understand codebase context)
  2. internal/ai chat completion via DiscoverAIProvider (simple prompt/response, no file access)
  3. Returns AgentModeNone if no AI is available

The prompt should be self-contained — the ollama launch path has file access but the chat API fallback does not, so include any necessary context in the prompt.
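A typical call site might look like this (a sketch assuming the package is imported as `ai`; the prompt text is illustrative):

```go
result, err := ai.RunAgentPrompt("Summarize what this repository does.")
if err != nil {
	log.Fatal(err)
}
switch result.Mode {
case ai.AgentModeOllamaLaunch:
	fmt.Println("agentic run (had file access):", result.Output)
case ai.AgentModeChatAPI:
	fmt.Println("chat-only run (no file access):", result.Output)
case ai.AgentModeNone:
	fmt.Println("no AI backend available")
}
```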

type BridgeEnv

type BridgeEnv struct {
	EngineName string
	Active     bool
	EnvVars    map[string]string
}

BridgeEnv wraps the auto-discovered environment variables and reports whether an engine was found.

func DiscoverHostLLMs

func DiscoverHostLLMs(runtime string) BridgeEnv

DiscoverHostLLMs probes the native host (where devx is running) for common local AI engines (Ollama, LM Studio). If one is found, it computes the environment variables to inject into containers so they can reach the host engine.
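A sketch of a call site, assuming the package is imported as `ai` (the `"docker"` runtime string is a guess; pass whatever container runtime devx is actually using):

```go
env := ai.DiscoverHostLLMs("docker")
if env.Active {
	fmt.Printf("found %s; injecting %d env vars\n", env.EngineName, len(env.EnvVars))
	for k, v := range env.EnvVars {
		fmt.Printf("  %s=%s\n", k, v)
	}
}
```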
