ai

package
v0.94.0 Latest
Published: Feb 10, 2026 License: MIT Imports: 14 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BlockContent added in v0.94.0

type BlockContent struct {
	UserInput        string
	AssistantContent string
}

BlockContent represents a simplified block for title generation.

type ChatResponse

type ChatResponse struct {
	Content   string
	ToolCalls []ToolCall
}

ChatResponse represents the LLM response including potential tool calls.

type Config

type Config struct {
	Embedding        EmbeddingConfig
	Reranker         RerankerConfig
	IntentClassifier IntentClassifierConfig
	LLM              LLMConfig
	UniversalParrot  UniversalParrotConfig // Phase 2: Configuration-driven parrots
	Enabled          bool
}

Config represents AI configuration.

func NewConfigFromProfile

func NewConfigFromProfile(p *profile.Profile) *Config

NewConfigFromProfile creates AI config from profile.

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration.
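
A minimal sketch of assembling a Config by hand and validating it. The import path, environment variables, and concrete field values below are assumptions for illustration only; the Provider and Model values follow the hints in the LLMConfig comments. When a profile is available, NewConfigFromProfile is the intended entry point.

package main

import (
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path; use this module's real path
)

func main() {
	cfg := &ai.Config{
		Enabled: true,
		LLM: ai.LLMConfig{
			Provider:    "deepseek", // providers hinted at in LLMConfig: deepseek, openai, ollama
			Model:       "deepseek-chat",
			APIKey:      os.Getenv("DEEPSEEK_API_KEY"), // key source is an assumption
			MaxTokens:   2048,
			Temperature: 0.7,
		},
		Embedding: ai.EmbeddingConfig{
			Provider:   "openai", // illustrative; embedding providers are not listed in this doc
			Model:      "text-embedding-3-small",
			APIKey:     os.Getenv("OPENAI_API_KEY"),
			Dimensions: 1536,
		},
	}

	if err := cfg.Validate(); err != nil {
		log.Fatalf("invalid AI config: %v", err)
	}
}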

type EmbeddingConfig

type EmbeddingConfig struct {
	Provider   string
	Model      string
	APIKey     string
	BaseURL    string
	Dimensions int
}

EmbeddingConfig represents vector embedding configuration.

type EmbeddingService

type EmbeddingService interface {
	// Embed generates vector for a single text.
	Embed(ctx context.Context, text string) ([]float32, error)

	// EmbedBatch generates vectors for multiple texts.
	EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

	// Dimensions returns the vector dimension.
	Dimensions() int
}

EmbeddingService is the vector embedding service interface.

func NewEmbeddingService

func NewEmbeddingService(cfg *EmbeddingConfig) (EmbeddingService, error)

NewEmbeddingService creates a new EmbeddingService.
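
A usage sketch for the embedding service. The provider, model, and import path are assumptions; only the EmbeddingService method signatures above are taken from this package.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	svc, err := ai.NewEmbeddingService(&ai.EmbeddingConfig{
		Provider:   "openai", // illustrative provider and model
		Model:      "text-embedding-3-small",
		APIKey:     os.Getenv("OPENAI_API_KEY"),
		Dimensions: 1536,
	})
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Embed a single text.
	vec, err := svc.Embed(ctx, "remember to water the plants")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(vec), svc.Dimensions()) // vector length should match Dimensions()

	// Embed several texts in one call.
	vecs, err := svc.EmbedBatch(ctx, []string{"first memo", "second memo"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(vecs)) // one vector per input text
}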

type FunctionCall

type FunctionCall struct {
	Name      string
	Arguments string
}

FunctionCall represents the function details.

type IntentClassifierConfig

type IntentClassifierConfig struct {
	Model   string
	APIKey  string
	BaseURL string
	Enabled bool
}

IntentClassifierConfig represents the intent classification LLM configuration. It uses a lightweight model for fast, cost-effective classification.

type LLMCallStats added in v0.94.0

type LLMCallStats struct {
	// PromptTokens is the number of tokens in the input prompt.
	PromptTokens int `json:"prompt_tokens"`

	// CompletionTokens is the number of tokens in the generated response.
	CompletionTokens int `json:"completion_tokens"`

	// TotalTokens is the sum of prompt and completion tokens.
	TotalTokens int `json:"total_tokens"`

	// CacheReadTokens is the number of tokens read from cache (for providers that support it).
	CacheReadTokens int `json:"cache_read_tokens,omitempty"`

	// CacheWriteTokens is the number of tokens written to cache.
	CacheWriteTokens int `json:"cache_write_tokens,omitempty"`

	// ThinkingDurationMs is the time from request start to first chunk (TTFT - Time To First Token).
	// For non-streaming requests, this is the total request duration.
	ThinkingDurationMs int64 `json:"thinking_duration_ms"`

	// GenerationDurationMs is the time spent generating the response content.
	// For streaming, this is from first chunk to last chunk. For non-streaming, this is 0.
	GenerationDurationMs int64 `json:"generation_duration_ms,omitempty"`

	// TotalDurationMs is the total wall-clock time for the request.
	TotalDurationMs int64 `json:"total_duration_ms"`
}

LLMCallStats represents statistics for a single LLM call. This provides token usage and timing metrics for session summary and cost tracking.
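
Since each LLM call returns an *LLMCallStats, session-level cost tracking reduces to folding these structs together. The helper below is hypothetical and not part of this package; it only shows how the documented fields combine.

package main

import (
	"fmt"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

// sessionTotals is a hypothetical helper that folds per-call stats into a
// session summary, the use case the LLMCallStats documentation mentions.
func sessionTotals(calls []*ai.LLMCallStats) (prompt, completion int, wallMs int64) {
	for _, s := range calls {
		prompt += s.PromptTokens
		completion += s.CompletionTokens
		wallMs += s.TotalDurationMs
	}
	return prompt, completion, wallMs
}

func main() {
	calls := []*ai.LLMCallStats{
		{PromptTokens: 120, CompletionTokens: 80, TotalTokens: 200, TotalDurationMs: 950},
		{PromptTokens: 340, CompletionTokens: 60, TotalTokens: 400, TotalDurationMs: 1210},
	}
	p, c, ms := sessionTotals(calls)
	fmt.Printf("prompt=%d completion=%d wall=%dms\n", p, c, ms)
}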

type LLMConfig

type LLMConfig struct {
	Provider    string // deepseek, openai, ollama
	Model       string // deepseek-chat
	APIKey      string
	BaseURL     string
	MaxTokens   int     // default: 2048
	Temperature float32 // default: 0.7
}

LLMConfig represents LLM configuration.

type LLMService

type LLMService interface {
	// Chat performs synchronous chat. Returns content, statistics, and error.
	Chat(ctx context.Context, messages []Message) (string, *LLMCallStats, error)

	// ChatStream performs streaming chat. Returns content channel, stats channel, and error channel.
	// The stats channel is closed after sending the final stats when stream completes.
	ChatStream(ctx context.Context, messages []Message) (<-chan string, <-chan *LLMCallStats, <-chan error)

	// ChatWithTools performs chat with function calling support. Returns response, statistics, and error.
	ChatWithTools(ctx context.Context, messages []Message, tools []ToolDescriptor) (*ChatResponse, *LLMCallStats, error)
}

LLMService is the LLM service interface.

func NewLLMService

func NewLLMService(cfg *LLMConfig) (LLMService, error)

NewLLMService creates a new LLMService.
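
A sketch of both chat entry points. The provider, model, and key source are assumptions (the provider list and default model come from the LLMConfig comments), and the channel handling follows the ChatStream comment above: read chunks until the content channel closes, then collect the final stats and any error.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	llm, err := ai.NewLLMService(&ai.LLMConfig{
		Provider:    "deepseek",
		Model:       "deepseek-chat",
		APIKey:      os.Getenv("DEEPSEEK_API_KEY"), // key source is an assumption
		MaxTokens:   2048,
		Temperature: 0.7,
	})
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	messages := []ai.Message{
		ai.SystemPrompt("You are a concise note-taking assistant."),
		ai.UserMessage("Summarize: bought groceries, fixed the bike, called mom."),
	}

	// Synchronous chat.
	content, stats, err := llm.Chat(ctx, messages)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(content)
	fmt.Println("tokens used:", stats.TotalTokens)

	// Streaming chat: drain the content channel, then read final stats and error.
	chunks, statsCh, errCh := llm.ChatStream(ctx, messages)
	for chunk := range chunks {
		fmt.Print(chunk)
	}
	if s := <-statsCh; s != nil {
		fmt.Printf("\nstream used %d tokens\n", s.TotalTokens)
	}
	if err := <-errCh; err != nil { // assumes the error channel is closed or sends nil on success
		log.Fatal(err)
	}
}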

type Message

type Message struct {
	Role    string // system, user, assistant
	Content string
}

Message represents a chat message.

func AssistantMessage

func AssistantMessage(content string) Message

AssistantMessage is a helper for creating an assistant-role message.

func FormatMessages

func FormatMessages(systemPrompt string, userContent string, history []Message) []Message

FormatMessages formats messages for prompt templates.

func SystemPrompt

func SystemPrompt(content string) Message

SystemPrompt is a helper for creating a system-role message.

func UserMessage

func UserMessage(content string) Message

UserMessage is a helper for creating a user-role message.
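
A sketch combining the message helpers with FormatMessages. The exact ordering FormatMessages produces (system prompt, then history, then the new user content) is an assumption based on its signature; verify against the implementation.

package main

import (
	"fmt"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	// Prior turns of the conversation.
	history := []ai.Message{
		ai.UserMessage("What did I note yesterday?"),
		ai.AssistantMessage("You noted a dentist appointment on Friday."),
	}

	// Assumed to weave the system prompt, history, and new user content
	// into a single message slice ready for LLMService.Chat.
	msgs := ai.FormatMessages(
		"You are a helpful memo assistant.",
		"Remind me what time the appointment is.",
		history,
	)

	for _, m := range msgs {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}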

type RerankResult

type RerankResult struct {
	Index int     // Original index
	Score float32 // Relevance score
}

RerankResult represents a reranking result.

type RerankerConfig

type RerankerConfig struct {
	Provider string
	Model    string
	APIKey   string
	BaseURL  string
	Enabled  bool
}

RerankerConfig represents reranker configuration.

type RerankerService

type RerankerService interface {
	// Rerank reorders documents by relevance.
	Rerank(ctx context.Context, query string, documents []string, topN int) ([]RerankResult, error)

	// IsEnabled returns whether the service is enabled.
	IsEnabled() bool
}

RerankerService is the reranking service interface.

func NewRerankerService

func NewRerankerService(cfg *RerankerConfig) RerankerService

NewRerankerService creates a new RerankerService.
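
A usage sketch for the reranker. The provider, model, and key source are illustrative assumptions; only the RerankerService methods above come from this package. Since NewRerankerService returns the interface directly, IsEnabled can be checked before calling Rerank.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	rr := ai.NewRerankerService(&ai.RerankerConfig{
		Provider: "cohere", // illustrative; this doc does not list supported providers
		Model:    "rerank-v3.5",
		APIKey:   os.Getenv("RERANK_API_KEY"),
		Enabled:  true,
	})
	if !rr.IsEnabled() {
		log.Fatal("reranker is not enabled")
	}

	docs := []string{
		"Grocery list for the weekend",
		"Notes from the architecture review meeting",
		"Bike repair checklist",
	}
	results, err := rr.Rerank(context.Background(), "meeting notes", docs, 2)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range results {
		// Index points back into the original documents slice.
		fmt.Printf("doc %d score %.3f: %s\n", r.Index, r.Score, docs[r.Index])
	}
}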

type TitleGenerator added in v0.94.0

type TitleGenerator struct {
	// contains filtered or unexported fields
}

TitleGenerator generates meaningful titles for AI conversations.

func NewTitleGenerator added in v0.94.0

func NewTitleGenerator(cfg TitleGeneratorConfig) *TitleGenerator

NewTitleGenerator creates a new title generator instance.

func (*TitleGenerator) Generate added in v0.94.0

func (tg *TitleGenerator) Generate(ctx context.Context, userMessage, aiResponse string) (string, error)

Generate generates a title based on the conversation content.

func (*TitleGenerator) GenerateTitleFromBlocks added in v0.94.0

func (tg *TitleGenerator) GenerateTitleFromBlocks(ctx context.Context, blocks []BlockContent) (string, error)

GenerateTitleFromBlocks generates a title from a slice of blocks.

type TitleGeneratorConfig added in v0.94.0

type TitleGeneratorConfig struct {
	APIKey  string
	BaseURL string
	Model   string
}

TitleGeneratorConfig holds configuration for the title generator.
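
A sketch of generating conversation titles. The model, key source, and import path are assumptions; leaving BaseURL empty to use the provider's default endpoint is also an assumption.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	tg := ai.NewTitleGenerator(ai.TitleGeneratorConfig{
		APIKey: os.Getenv("DEEPSEEK_API_KEY"), // key source and model are assumptions
		Model:  "deepseek-chat",
	})

	ctx := context.Background()

	// Title from a single exchange.
	title, err := tg.Generate(ctx,
		"Help me plan a three-day trip to Kyoto",
		"Day one could cover Fushimi Inari and Gion; day two Arashiyama.",
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(title)

	// Title from several conversation blocks.
	blocks := []ai.BlockContent{
		{UserInput: "Help me plan a trip to Kyoto", AssistantContent: "Here is a rough itinerary."},
		{UserInput: "Add a day for Nara", AssistantContent: "Added Nara Park and Todai-ji."},
	}
	title, err = tg.GenerateTitleFromBlocks(ctx, blocks)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(title)
}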

type ToolCall

type ToolCall struct {
	ID       string
	Type     string
	Function FunctionCall
}

ToolCall represents a request to call a tool.

type ToolDescriptor

type ToolDescriptor struct {
	Name        string
	Description string
	Parameters  string // JSON Schema string
}

ToolDescriptor represents a function/tool available to the LLM.
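
A sketch of one round of function calling with ChatWithTools. The tool name, its JSON Schema, and the provider settings are hypothetical; the flow (answer directly or return ToolCalls to execute and feed back) follows the ChatResponse and ToolCall types above.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	ai "example.com/yourmodule/ai" // hypothetical import path
)

func main() {
	llm, err := ai.NewLLMService(&ai.LLMConfig{
		Provider: "deepseek",
		Model:    "deepseek-chat",
		APIKey:   os.Getenv("DEEPSEEK_API_KEY"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// A hypothetical tool; Parameters is a JSON Schema string per the ToolDescriptor docs.
	tools := []ai.ToolDescriptor{{
		Name:        "search_memos",
		Description: "Search the user's memos by keyword.",
		Parameters:  `{"type":"object","properties":{"query":{"type":"string"}},"required":["query"]}`,
	}}

	msgs := []ai.Message{
		ai.SystemPrompt("Use the available tools when they help."),
		ai.UserMessage("Find my memos about the dentist."),
	}

	resp, stats, err := llm.ChatWithTools(context.Background(), msgs, tools)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("tokens used:", stats.TotalTokens)

	// The model either answers directly or requests one or more tool calls.
	if len(resp.ToolCalls) == 0 {
		fmt.Println(resp.Content)
		return
	}
	for _, tc := range resp.ToolCalls {
		fmt.Printf("tool call %s: %s(%s)\n", tc.ID, tc.Function.Name, tc.Function.Arguments)
		// Execute the named tool here and send its result back in a follow-up turn.
	}
}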

type UniversalParrotConfig added in v0.94.0

type UniversalParrotConfig struct {
	Enabled      bool   // Enable UniversalParrot for creating parrots from YAML configs
	ConfigDir    string // Path to parrot YAML configs (default: ./config/parrots)
	FallbackMode string // "legacy" | "error" when config load fails (default: legacy)
}

UniversalParrotConfig represents configuration for UniversalParrot (configuration-driven parrots).

Directories

Path Synopsis
Package agent provides conversation context management for multi-turn dialogues.
registry
Package registry provides metrics collection for UniversalParrot.
tools
Package tools provides tool-level result caching for AI agents.
universal
Package universal provides configuration loading for UniversalParrot.
Package aitime provides the time parsing service interface for AI agents.
Package cache provides the cache service interface for AI agents.
Package context provides context building for LLM prompts.
core
Package duplicate provides memo duplicate detection for P2-C002.
Package filter provides optimized regex patterns for sensitive data detection.
Package genui provides Generative UI components for AI agents.
Package graph - builder implementation for P3-C001.
Package habit provides user habit learning and analysis for AI agents.
Package memory provides the unified memory service interface for AI agents.
Package metrics provides the evaluation metrics service interface for AI agents.
Package prediction provides predictive interaction capabilities for AI agents.
Package preload provides predictive cache preloading for AI operations.
Package rag provides Self-RAG optimization.
Package reminder provides reminder management for schedules and todos.
Package review provides intelligent memo review system based on spaced repetition.
Package router provides routing result caching for performance optimization.
Package schedule provides schedule-related AI agent utilities.
Package session provides the session persistence service interface for AI agents.
Package stats provides cost alerting for agent sessions.
Package tags provides intelligent tag suggestion for memos.
Package timeout defines centralized timeout constants for AI operations.
Package tracing provides end-to-end request tracing for AI operations.
