Documentation
Index
- type AnalysisPrompt
- type OllamaAgent
- func (a *OllamaAgent) Analyze(ctx context.Context, data []byte, analyzers []analyzer.AnalyzerSpec) (*analyzer.AgentResult, error)
- func (a *OllamaAgent) Capabilities() []string
- func (a *OllamaAgent) GetEndpoint() string
- func (a *OllamaAgent) GetModel() string
- func (a *OllamaAgent) HealthCheck(ctx context.Context) error
- func (a *OllamaAgent) IsAvailable() bool
- func (a *OllamaAgent) Name() string
- func (a *OllamaAgent) SetEnabled(enabled bool)
- func (a *OllamaAgent) UpdateModel(model string) error
- type OllamaAgentOptions
- type OllamaModelInfo
- type OllamaModelsResponse
- type OllamaRequest
- type OllamaResponse
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type AnalysisPrompt
AnalysisPrompt represents different types of analysis prompts
type OllamaAgent
type OllamaAgent struct {
	// contains filtered or unexported fields
}
OllamaAgent implements the Agent interface for self-hosted LLM analysis via Ollama
func NewOllamaAgent
func NewOllamaAgent(opts *OllamaAgentOptions) (*OllamaAgent, error)
NewOllamaAgent creates a new Ollama-powered analysis agent
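A minimal construction sketch; the endpoint, model, and tuning values below are illustrative examples taken from the OllamaAgentOptions field comments, not required settings:

	import "time"

	func newAgent() (*OllamaAgent, error) {
		// All values shown are illustrative; adjust for your deployment.
		return NewOllamaAgent(&OllamaAgentOptions{
			Endpoint:    "http://localhost:11434",
			Model:       "codellama:13b",
			Timeout:     60 * time.Second,
			MaxTokens:   2048,
			Temperature: 0.2,
		})
	}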
func (*OllamaAgent) Analyze
func (a *OllamaAgent) Analyze(ctx context.Context, data []byte, analyzers []analyzer.AnalyzerSpec) (*analyzer.AgentResult, error)
Analyze performs AI-powered analysis using Ollama
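A usage sketch, assuming the caller already has analyzer specs in hand; the analyzer import path and the helper name runAnalysis are placeholders, not part of this package:

	import (
		"context"
		"log"

		// Placeholder import path; substitute the real analyzer package.
		"example.com/yourmodule/pkg/analyzer"
	)

	func runAnalysis(ctx context.Context, agent *OllamaAgent, bundle []byte, specs []analyzer.AnalyzerSpec) (*analyzer.AgentResult, error) {
		result, err := agent.Analyze(ctx, bundle, specs)
		if err != nil {
			return nil, err
		}
		log.Printf("agent %q finished analysis", agent.Name())
		return result, nil
	}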
func (*OllamaAgent) Capabilities
func (a *OllamaAgent) Capabilities() []string
Capabilities returns the agent's capabilities
func (*OllamaAgent) GetEndpoint
func (a *OllamaAgent) GetEndpoint() string
GetEndpoint returns the current Ollama endpoint
func (*OllamaAgent) GetModel
func (a *OllamaAgent) GetModel() string
GetModel returns the current model name
func (*OllamaAgent) HealthCheck
func (a *OllamaAgent) HealthCheck(ctx context.Context) error
HealthCheck verifies Ollama is accessible and the model is available
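A sketch of gating work on a short probe before submitting an analysis; the five-second budget is an arbitrary example:

	import (
		"context"
		"time"
	)

	func ensureReady(agent *OllamaAgent) error {
		// Bound the probe so a hung server fails fast; 5s is arbitrary.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return agent.HealthCheck(ctx)
	}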
func (*OllamaAgent) IsAvailable
func (a *OllamaAgent) IsAvailable() bool
IsAvailable checks if Ollama is available and the model is loaded
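func (*OllamaAgent) Name
func (a *OllamaAgent) Name() string
Name returns the agent's name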
func (*OllamaAgent) SetEnabled
func (a *OllamaAgent) SetEnabled(enabled bool)
SetEnabled enables or disables the Ollama agent
func (*OllamaAgent) UpdateModel
func (a *OllamaAgent) UpdateModel(model string) error
UpdateModel changes the model used for analysis
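A short sketch of swapping models at runtime; the model tag is one of the examples given in the OllamaAgentOptions field comments:

	import "log"

	func switchModel(agent *OllamaAgent) {
		// Move to a smaller model; on failure, GetModel still reports
		// whichever model the agent was using before.
		if err := agent.UpdateModel("llama2:7b"); err != nil {
			log.Printf("model switch failed, staying on %s: %v", agent.GetModel(), err)
		}
	}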
type OllamaAgentOptions
type OllamaAgentOptions struct {
	Endpoint    string        // Ollama server endpoint (default: http://localhost:11434)
	Model       string        // Model name (e.g., "codellama:13b", "llama2:7b")
	Timeout     time.Duration // Request timeout
	MaxTokens   int           // Maximum tokens in response
	Temperature float32       // Response creativity (0.0 to 1.0)
}
OllamaAgentOptions configures the Ollama agent
type OllamaModelInfo
type OllamaModelInfo struct {
	Name       string    `json:"name"`
	Size       int64     `json:"size"`
	Digest     string    `json:"digest"`
	ModifiedAt time.Time `json:"modified_at"`
}
OllamaModelInfo represents model information from Ollama
type OllamaModelsResponse
type OllamaModelsResponse struct {
	Models []OllamaModelInfo `json:"models"`
}
OllamaModelsResponse represents the response from the models endpoint
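A sketch of fetching this structure directly; Ollama lists locally pulled models at GET /api/tags, though you should verify the path against your server version:

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func listModels(endpoint string) ([]OllamaModelInfo, error) {
		// GET /api/tags is Ollama's model-listing endpoint (assumed here).
		resp, err := http.Get(endpoint + "/api/tags")
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("ollama returned %s", resp.Status)
		}
		var out OllamaModelsResponse
		if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
			return nil, err
		}
		return out.Models, nil
	}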
type OllamaRequest
type OllamaRequest struct {
	Model   string                 `json:"model"`
	Prompt  string                 `json:"prompt"`
	Stream  bool                   `json:"stream"`
	Options map[string]interface{} `json:"options,omitempty"`
	Context []int                  `json:"context,omitempty"`
}
OllamaRequest represents a request to the Ollama API
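A sketch of building a non-streaming request; the option keys ("temperature", "num_predict") follow Ollama's generate API conventions and should be checked against your server version:

	// Illustrative request; with Stream false, Ollama replies with a
	// single JSON object rather than newline-delimited chunks.
	req := OllamaRequest{
		Model:  "codellama:13b",
		Prompt: "Explain why this pod is crash looping.",
		Stream: false,
		Options: map[string]interface{}{
			"temperature": 0.2,  // mirrors OllamaAgentOptions.Temperature
			"num_predict": 2048, // mirrors OllamaAgentOptions.MaxTokens
		},
	}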
type OllamaResponse
type OllamaResponse struct {
	Model              string `json:"model"`
	CreatedAt          string `json:"created_at"`
	Response           string `json:"response"`
	Done               bool   `json:"done"`
	Context            []int  `json:"context,omitempty"`
	TotalDuration      int64  `json:"total_duration,omitempty"`
	LoadDuration       int64  `json:"load_duration,omitempty"`
	PromptEvalCount    int    `json:"prompt_eval_count,omitempty"`
	PromptEvalDuration int64  `json:"prompt_eval_duration,omitempty"`
	EvalCount          int    `json:"eval_count,omitempty"`
	EvalDuration       int64  `json:"eval_duration,omitempty"`
}
OllamaResponse represents a response from the Ollama API
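A round-trip sketch against POST /api/generate (Ollama's generate endpoint, assumed here). Per Ollama's API conventions the duration fields are nanoseconds, so throughput is roughly EvalCount divided by EvalDuration in seconds:

	import (
		"bytes"
		"encoding/json"
		"net/http"
	)

	func generate(client *http.Client, endpoint string, req OllamaRequest) (*OllamaResponse, error) {
		payload, err := json.Marshal(req)
		if err != nil {
			return nil, err
		}
		resp, err := client.Post(endpoint+"/api/generate", "application/json", bytes.NewReader(payload))
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		// Assumes Stream was false; a streaming response would need a
		// json.Decoder loop reading objects until Done is true.
		var out OllamaResponse
		if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
			return nil, err
		}
		return &out, nil
	}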