ai

package
v0.99.0
Published: Feb 12, 2026 License: MIT Imports: 10 Imported by: 0

README

AI Module

The ai package provides the core AI capabilities for DivineSense, including embedding services, language model integrations, intent routing, and agent implementations.

Overview

This module is organized into several key subsystems:

  • core/ - Foundational AI services (embedding, LLM, retrieval, reranking)
  • agents/ - AI agent implementations (Parrots)
  • routing/ - Intent classification and routing system
  • services/ - High-level service abstractions
  • observability/ - Metrics, tracing, and monitoring

Directory Structure

ai/
├── core/               # Core AI services
│   ├── embedding/      # Vector embedding service
│   ├── llm/            # Language Model client
│   ├── reranker/       # Result reranking service
│   └── retrieval/      # Hybrid retrieval (BM25 + vector)
├── agents/             # AI agent implementations
│   ├── universal/      # Configuration-driven parrot system
│   ├── tools/          # Agent tools (memo_search, schedule_add, etc.)
│   ├── registry/       # Tool registration and discovery
│   ├── runner/         # Agent execution runners
│   ├── base_parrot.go  # Base parrot interface
│   ├── chat_router.go  # Chat-to-agent routing
│   ├── geek_parrot.go  # Claude Code CLI integration
│   └── evolution_parrot.go # Self-evolution agent
├── routing/            # Intent classification and routing
│   ├── cache.go        # LRU routing cache
│   ├── rule_matcher.go # Rule-based classification (0ms)
│   ├── history_matcher.go # History-aware matching (~10ms)
│   ├── llm_intent_classifier.go # LLM fallback (~400ms)
│   └── service.go      # Unified routing service
├── services/           # High-level services
│   ├── schedule/       # Schedule AI services
│   └── ...
├── observability/      # Metrics and monitoring
├── cache/              # Semantic caching layer
├── context/            # LLM context construction
├── memory/             # Episodic memory storage
├── session/            # Conversation persistence
├── metrics/            # Performance metrics
│
│   # Root-level files
├── config.go           # AI configuration
├── embedding.go        # Legacy embedding (use core/embedding)
├── llm.go              # Legacy LLM (use core/llm)
└── reranker.go         # Legacy reranker (use core/reranker)

Core Services

EmbeddingService
type EmbeddingService interface {
    Embed(ctx context.Context, text string) ([]float32, error)
    EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)
    Dimensions() int
}
LLMService
type LLMService interface {
    Chat(ctx context.Context, messages []Message) (string, *LLMCallStats, error)
    ChatStream(ctx context.Context, messages []Message) (<-chan string, <-chan *LLMCallStats, <-chan error)
    ChatWithTools(ctx context.Context, messages []Message, tools []ToolDescriptor) (*ChatResponse, *LLMCallStats, error)
    Warmup(ctx context.Context)
}
RerankerService
type RerankerService interface {
    Rerank(ctx context.Context, query string, results []string) ([]int, error)
    RerankWithScores(ctx context.Context, query string, results []string) ([]RerankResult, error)
}

Agents (Parrots)

DivineSense uses a "parrot" metaphor for its AI agents:

Parrot           ID         Description                  Config
MemoParrot       MEMO       Note search and retrieval    config/parrots/memo.yaml
ScheduleParrot   SCHEDULE   Schedule management          config/parrots/schedule.yaml
AmazingParrot    AMAZING    Comprehensive assistant      config/parrots/amazing.yaml
GeekParrot       GEEK       Claude Code CLI integration  Code implementation
EvolutionParrot  EVOLUTION  Self-evolution               Code implementation
Using Agents
import (
    "github.com/hrygo/divinesense/ai/agents"
    "github.com/hrygo/divinesense/ai/agents/universal"
)

// Create a UniversalParrot from config
parrot, err := universal.NewUniversalParrot(config, llm, tools, userID)

// Execute with callback
err = parrot.ExecuteWithCallback(ctx, userInput, history, func(eventType string, data interface{}) error {
    switch eventType {
    case agents.EventTypeThinking:
        // Agent is thinking
    case agents.EventTypeToolUse:
        // Agent is using a tool
    case agents.EventTypeAnswer:
        // Final answer
    }
    return nil
})

Routing System

The routing system classifies user intent through four layers:

Layer 0: Cache (LRU) → ~0ms
Layer 1: RuleMatcher → ~0ms
Layer 2: HistoryMatcher → ~10ms
Layer 3: LLM Classifier → ~400ms
import "github.com/hrygo/divinesense/ai/routing"

router := routing.NewService(store, llm, config)
intent, confidence, err := router.ClassifyIntent(ctx, userInput)

Configuration

AI configuration is loaded from environment variables:

# Enable AI
DIVINESENSE_AI_ENABLED=true

# Unified LLM Configuration (Main Chat)
# Supports: zai, deepseek, openai, siliconflow, dashscope, openrouter, ollama
DIVINESENSE_AI_LLM_PROVIDER=zai
DIVINESENSE_AI_LLM_API_KEY=your_unified_key
DIVINESENSE_AI_LLM_BASE_URL=https://open.bigmodel.cn/api/paas/v4
DIVINESENSE_AI_LLM_MODEL=glm-4.7

# Embedding Service
DIVINESENSE_AI_EMBEDDING_PROVIDER=siliconflow
DIVINESENSE_AI_EMBEDDING_API_KEY=your_embedding_key
DIVINESENSE_AI_EMBEDDING_BASE_URL=https://api.siliconflow.cn/v1
DIVINESENSE_AI_EMBEDDING_MODEL=BAAI/bge-m3

# Reranker Service
DIVINESENSE_AI_RERANK_PROVIDER=siliconflow
DIVINESENSE_AI_RERANK_API_KEY=your_rerank_key
DIVINESENSE_AI_RERANK_BASE_URL=https://api.siliconflow.cn/v1
DIVINESENSE_AI_RERANK_MODEL=BAAI/bge-reranker-v2-m3

# Intent Classification
DIVINESENSE_AI_INTENT_PROVIDER=siliconflow
DIVINESENSE_AI_INTENT_API_KEY=your_intent_key
DIVINESENSE_AI_INTENT_BASE_URL=https://api.siliconflow.cn/v1
DIVINESENSE_AI_INTENT_MODEL=Qwen/Qwen2.5-7B-Instruct

Testing

Run tests for the AI module:

# All AI tests
go test ./ai/... -v

# Specific subsystem
go test ./ai/core/... -v
go test ./ai/agents/... -v
go test ./ai/routing/... -v

# With coverage
go test ./ai/... -cover

Event Types

Agents emit structured events during execution:

Event          Description               Data Type
thinking       Agent is thinking         string
tool_use       Tool invocation           ToolCallData
tool_result    Tool result               ToolResultData
answer         Final answer              string
error          Error occurred            error
phase_change   Processing phase changed  PhaseChangeEvent
progress       Progress update           ProgressEvent
session_stats  Session statistics        SessionStatsData

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BlockContent added in v0.94.0

type BlockContent struct {
	UserInput        string
	AssistantContent string
}

BlockContent represents a simplified block for title generation.

type ChatResponse deprecated

type ChatResponse = llm.ChatResponse

ChatResponse represents the LLM response including potential tool calls.

Deprecated: Use llm.ChatResponse directly.

type Config

type Config struct {
	Embedding        EmbeddingConfig
	Reranker         RerankerConfig
	IntentClassifier IntentClassifierConfig
	LLM              LLMConfig
	UniversalParrot  UniversalParrotConfig // Phase 2: Configuration-driven parrots
	Enabled          bool
}

Config represents AI configuration.

func NewConfigFromProfile

func NewConfigFromProfile(p *profile.Profile) *Config

NewConfigFromProfile creates AI config from profile.

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration.

type EmbeddingConfig

type EmbeddingConfig struct {
	Provider   string
	Model      string
	APIKey     string
	BaseURL    string
	Dimensions int
}

EmbeddingConfig represents vector embedding configuration.

type EmbeddingService

type EmbeddingService interface {
	// Embed generates vector for a single text.
	Embed(ctx context.Context, text string) ([]float32, error)

	// EmbedBatch generates vectors for multiple texts.
	EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

	// Dimensions returns the vector dimension.
	Dimensions() int
}

EmbeddingService is the vector embedding service interface.

func NewEmbeddingService

func NewEmbeddingService(cfg *EmbeddingConfig) (EmbeddingService, error)

NewEmbeddingService creates a new EmbeddingService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual embedding functionality has been moved to ai/core/embedding/provider.go. Future refactoring will deprecate this file in favor of the core package.

type FunctionCall deprecated

type FunctionCall = llm.FunctionCall

FunctionCall represents the function details.

Deprecated: Use llm.FunctionCall directly.

type IntentClassifierConfig

type IntentClassifierConfig struct {
	Model   string
	APIKey  string
	BaseURL string
	Enabled bool
}

IntentClassifierConfig represents intent classification LLM configuration. Uses a lightweight model for fast, cost-effective classification.

type LLMCallStats deprecated added in v0.94.0

type LLMCallStats = llm.LLMCallStats

LLMCallStats represents statistics for a single LLM call.

Deprecated: Use llm.LLMCallStats directly.

type LLMConfig

type LLMConfig struct {
	Provider    string // Provider identifier for logging/future extension: zai, deepseek, openai, ollama
	Model       string // Model name: glm-4.7, deepseek-chat, gpt-4o, etc.
	APIKey      string
	BaseURL     string
	MaxTokens   int     // default: 2048
	Temperature float32 // default: 0.7
}

LLMConfig represents LLM configuration.

type LLMService deprecated

type LLMService = llm.Service

LLMService is the LLM service interface.

Deprecated: Use llm.Service directly.

func NewLLMService

func NewLLMService(cfg *LLMConfig) (LLMService, error)

NewLLMService creates a new LLMService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual LLM functionality has been moved to ai/core/llm/service.go.

type Message deprecated

type Message = llm.Message

Message represents a chat message.

Deprecated: Use llm.Message directly.

func AssistantMessage deprecated

func AssistantMessage(content string) Message

AssistantMessage creates an assistant message.

Deprecated: Use llm.AssistantMessage directly.

func FormatMessages deprecated

func FormatMessages(systemPrompt string, userContent string, history []Message) []Message

FormatMessages formats messages for prompt templates.

Deprecated: Use llm.FormatMessages directly.

func SystemPrompt deprecated

func SystemPrompt(content string) Message

SystemPrompt creates a system message.

Deprecated: Use llm.SystemPrompt directly.

func UserMessage deprecated

func UserMessage(content string) Message

UserMessage creates a user message.

Deprecated: Use llm.UserMessage directly.

type RerankResult

type RerankResult = reranker.Result

RerankResult represents a reranking result.

Deprecated: Use reranker.Result directly.

type RerankerConfig

type RerankerConfig struct {
	Provider string
	Model    string
	APIKey   string
	BaseURL  string
	Enabled  bool
}

RerankerConfig represents reranker configuration.

type RerankerService

type RerankerService = reranker.Service

RerankerService is the reranking service interface.

Deprecated: Use reranker.Service directly.

func NewRerankerService

func NewRerankerService(cfg *RerankerConfig) RerankerService

NewRerankerService creates a new RerankerService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual reranker functionality has been moved to ai/core/reranker/service.go.

type TitleGenerator added in v0.94.0

type TitleGenerator struct {
	// contains filtered or unexported fields
}

TitleGenerator generates meaningful titles for AI conversations.

func NewTitleGenerator added in v0.94.0

func NewTitleGenerator(cfg TitleGeneratorConfig) *TitleGenerator

NewTitleGenerator creates a new title generator instance.

func (*TitleGenerator) Generate added in v0.94.0

func (tg *TitleGenerator) Generate(ctx context.Context, userMessage, aiResponse string) (string, error)

Generate generates a title based on the conversation content.

func (*TitleGenerator) GenerateTitleFromBlocks added in v0.94.0

func (tg *TitleGenerator) GenerateTitleFromBlocks(ctx context.Context, blocks []BlockContent) (string, error)

GenerateTitleFromBlocks generates a title from a slice of blocks.

type TitleGeneratorConfig added in v0.94.0

type TitleGeneratorConfig struct {
	APIKey  string
	BaseURL string
	Model   string
}

TitleGeneratorConfig holds configuration for the title generator.

type ToolCall deprecated

type ToolCall = llm.ToolCall

ToolCall represents a request to call a tool.

Deprecated: Use llm.ToolCall directly.

type ToolDescriptor deprecated

type ToolDescriptor = llm.ToolDescriptor

ToolDescriptor represents a function/tool available to the LLM.

Deprecated: Use llm.ToolDescriptor directly.

type UniversalParrotConfig added in v0.94.0

type UniversalParrotConfig struct {
	Enabled      bool   // Enable UniversalParrot for creating parrots from YAML configs
	ConfigDir    string // Path to parrot YAML configs (default: ./config/parrots)
	FallbackMode string // "legacy" | "error" when config load fails (default: legacy)
}

UniversalParrotConfig represents configuration for UniversalParrot (configuration-driven parrots).

Directories

Path Synopsis
Package agent provides conversation context management for multi-turn dialogues.
orchestrator
Package orchestrator implements the Orchestrator-Workers pattern for multi-agent coordination.
registry
Package registry provides metrics collection for UniversalParrot.
tools
Package tools provides tool-level result caching for AI agents.
universal
Package universal provides configuration loading for UniversalParrot.
Package aitime provides the time parsing service interface for AI agents.
Package cache provides the cache service interface for AI agents.
Package context provides context building for LLM prompts.
core
llm
Package duplicate provides memo duplicate detection for P2-C002.
Package graph - builder implementation for P3-C001.
observability
logging
Package logging provides structured logging utilities for AI modules.
metrics
Package metrics provides the evaluation metrics service interface for AI agents.
tracing
Package tracing provides distributed tracing instrumentation for AI modules.
Package review provides intelligent memo review system based on spaced repetition.
Package routing provides routing result caching for performance optimization.
services
schedule
Package schedule provides schedule-related AI agent utilities.
session
Package session provides the session persistence service interface for AI agents.
stats
Package stats provides cost alerting for agent sessions.
Package tags provides intelligent tag suggestion for memos.
Package timeout defines centralized timeout constants for AI operations.
