ai

package
v0.100.2 Latest
Published: Feb 19, 2026 License: MIT Imports: 18 Imported by: 0

README

DivineSense AI Brain (ai/)

The ai package is DivineSense's cognitive core, encompassing all intelligent capabilities from basic LLM integration to advanced autonomous agents.

System Architecture (Knowledge Graph)

This diagram shows the macro architecture and data flow of the AI module.

graph TD
    User[User] <--> API[API Layer]
    API <--> Router[Routing System]

    subgraph Brain [AI Brain]
        direction TB

        %% Layer 1: Decision & Orchestration
        Router --> |Intent| Agents[Agents / Parrots]

        subgraph Cortex [Cognition Engine]
            Agents --> Orchestrator[Orchestrator]
            Agents --> Universal[UniversalParrot]
            Agents --> Expert[Expert Agents]
        end

        %% Layer 2: Skills & Perception
        subgraph Skills [Skills & Perception]
            Universal --> Time[Time Parsing]
            Universal --> Summary[Summary]
            Universal --> Tags[Tags]
            Universal --> Format[Format]
            Universal --> Services[Business Services]
        end

        %% Layer 3: Memory & Context
        subgraph MemoryLobe [Memory & Context]
            Context[Context] --> Budget[Token Budget]
            Context --> ShortTerm[Short-term Memory]
            Context --> LongTerm[Episodic/Graph]
            Review[Review] --> SM2[SM-2 Algorithm]
        end

        %% Layer 4: Infrastructure
        subgraph Foundation [Core Infrastructure]
            LLM[core/llm]
            Embed[core/embedding]
            Rerank[core/reranker]
            Retrieval[core/retrieval]
            Cache[Cache]
            Config[Config Loader]
        end

        Agents --> Context
        Skills --> Foundation
        MemoryLobe --> Foundation
    end

    %% Cross-dependencies
    Router --> Cache
    Router --> LLM
    Expert --> Retrieval
    LongTerm --> Graph[Knowledge Graph]

    %% Output
    Agents --> Response[Response]

Micro-Architecture & Algorithms

1. Perception & Routing (Prefrontal Cortex)
  • routing: Four-layer intent classification architecture.
    • Algorithm: L0:LRU Cache -> L1:Rule Matching (Weighted Keywords) -> L2:History Matching (Vector Similarity) -> L3:LLM Fallback.
  • duplicate: Hybrid similarity detection.
    • Algorithm: Score = 0.5*Vector Similarity + 0.3*Tag Overlap + 0.2*Time Decay.
  • aitime: Natural language time parsing.
    • Flow: Regex matching -> NLP processing (relative time/Chinese semantics) -> Standardized time.
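The hybrid similarity score used by duplicate is a plain weighted sum, which a standalone sketch makes concrete. The function name and the assumption that all three inputs are pre-normalized to [0, 1] are illustrative, not the package's actual API:

```go
package main

import "fmt"

// hybridScore combines the three duplicate-detection signals with the
// documented weights: Score = 0.5*vector + 0.3*tagOverlap + 0.2*timeDecay.
// All inputs are assumed to be pre-normalized to [0, 1].
func hybridScore(vectorSim, tagOverlap, timeDecay float64) float64 {
	return 0.5*vectorSim + 0.3*tagOverlap + 0.2*timeDecay
}

func main() {
	// Two memos with near-identical embeddings, half-shared tags,
	// and written close together in time.
	fmt.Printf("%.2f\n", hybridScore(0.92, 0.50, 0.80))
}
```

Because the weights sum to 1, the score stays in [0, 1] and a single threshold can decide "duplicate or not".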
2. Agent System (Parrots)
  • agents: Autonomous entity system.
    • Orchestrator: LLM-driven task decomposition and multi-agent coordination. Contains Decomposer and Handoff mechanisms.
    • UniversalParrot: Config-driven general Agent (e.g., Memo, Schedule). Supports Direct, ReAct, Planning, Reflexion strategies.
    • Expert Agents: Domain-specific agents including MemoParrot and ScheduleParrot.
    • GeekParrot: Claude Code CLI integration for code execution.
  • services: Business logic encapsulation (e.g., schedule repeat rule processing).
3. Cognitive Capabilities (Skills)
  • tags: Three-layer tag recommendation system.
    • Algorithm: L1:Statistics -> L2:Rules -> L3:LLM Semantic.
  • summary: High-availability summary generation.
    • Flow: Try LLM -> Fallback to first paragraph extraction -> Fallback to truncation.
  • enrichment: Pipeline processing.
    • Mechanism: Pre-save (blocking) + Post-save (async parallel) enhancement.
4. Memory & Context (Hippocampus)
  • context: Dynamic Token Management.
    • Features: Token budget allocation (STM/LTM/RAG ratio), incremental updates (Context Caching).
  • graph: Personal Knowledge Graph.
    • Algorithm: PageRank (importance), Label Propagation (community detection).
  • review: Spaced repetition review.
    • Algorithm: SM-2 (SuperMemo-2) memory curve algorithm.
  • cache: Two-layer cache architecture.
    • Architecture: L1:LRU (exact SHA256) + L2:Semantic (vector cosine similarity).
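The SM-2 step used by review is well specified; the sketch below implements the standard SuperMemo-2 update (grade in [0, 5], easiness factor clamped at 1.3, intervals 1 and 6 days for the first two successful reviews). This is the textbook algorithm, not the package's exact code:

```go
package main

import (
	"fmt"
	"math"
)

// sm2 applies one review step of the SuperMemo-2 algorithm. quality is
// the recall grade in [0, 5]. It returns the next interval in days,
// the new repetition count, and the updated easiness factor.
func sm2(quality, reps int, interval, ef float64) (float64, int, float64) {
	q := float64(quality)
	// Update the easiness factor; SM-2 clamps it at 1.3.
	ef += 0.1 - (5-q)*(0.08+(5-q)*0.02)
	if ef < 1.3 {
		ef = 1.3
	}
	if quality < 3 {
		// Failed recall: restart the repetition sequence.
		return 1, 0, ef
	}
	reps++
	switch reps {
	case 1:
		interval = 1
	case 2:
		interval = 6
	default:
		interval = math.Round(interval * ef)
	}
	return interval, reps, ef
}

func main() {
	iv, reps, ef := 0.0, 0, 2.5
	for _, q := range []int{5, 4, 5} {
		iv, reps, ef = sm2(q, reps, iv, ef)
		fmt.Printf("interval=%v reps=%d ef=%.2f\n", iv, reps, ef)
	}
}
```

Three good reviews in a row stretch the interval from 1 to 6 to roughly 16 days, which is the spacing effect the memory curve exploits.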
5. Infrastructure (Brainstem)
  • core: Unified LLM, Embedding, Reranker, Retrieval interfaces.
  • retrieval: Adaptive retrieval with RRF fusion and quality assessment.
  • observability: Full-stack logging, metrics (Prometheus), tracing (OTEL).
  • configloader: YAML config loader with fallback mechanism.
  • timeout: Centralized system limits to prevent "cognitive overload".
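The RRF fusion mentioned for retrieval merges ranked lists with the standard Reciprocal Rank Fusion formula, score(d) = Σ 1/(k + rank_i(d)). The k = 60 constant below comes from the original RRF paper; the constant and function names used by ai/core/retrieval may differ:

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges several ranked result lists: each document scores the
// sum of 1/(k+rank) over every list that contains it, then documents
// are sorted by total score descending.
func rrfFuse(rankings [][]string, k float64) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for rank, doc := range ranking {
			scores[doc] += 1.0 / (k + float64(rank+1)) // ranks are 1-based
		}
	}
	docs := make([]string, 0, len(scores))
	for d := range scores {
		docs = append(docs, d)
	}
	sort.Slice(docs, func(i, j int) bool { return scores[docs[i]] > scores[docs[j]] })
	return docs
}

func main() {
	vector := []string{"m1", "m2", "m3"}  // dense (embedding) order
	keyword := []string{"m3", "m1", "m4"} // sparse (keyword) order
	fmt.Println(rrfFuse([][]string{vector, keyword}, 60))
}
```

Documents that appear high in both lists (m1, m3) dominate documents that appear in only one, which is why RRF needs no score normalization across retrievers.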

Core Workflows

W1: User Query Processing
sequenceDiagram
    User->>Router: "Find notes about AI"
    Router->>Router: Classify -> Intent: MEMO_QUERY
    Router->>Agents: Route(MEMO_QUERY) -> MemoParrot

    Agents->>Context: Build context (history + RAG)
    Context-->>Agents: Return Prompt

    Agents->>LLM: Chat Completion
    LLM-->>Agents: Tool Call (memo_search)

    Agents->>Tools: Execute memo_search
    Tools-->>Agents: Return results

    Agents->>LLM: Generate response
    Agents-->>User: Final response
W2: Memo Knowledge Ingestion
flowchart LR
    Input[Raw Memo] --> Enrich[Enrichment Pipeline]

    subgraph Parallel[Parallel Processing]
        Enrich --> Tags[Tag Generation]
        Enrich --> Title[Title Generation]
        Enrich --> Summary[Summary Generation]
    end

    Tags & Title & Summary --> Save[Database Save]

    Save --> Embed[Vector Embedding]
    Save --> Graph[Update Graph]
    Save --> Review[Schedule Review]

Directory Structure

ai/
├── core/               # Layer 0: Foundation (LLM, Embed, Rerank, Retrieval)
│   ├── llm/            # LLM client with multi-provider support
│   ├── embedding/      # Vectorization with chunking
│   ├── reranker/       # Re-ranking for RAG
│   └── retrieval/      # Adaptive retrieval strategies
├── internal/           # Layer 0: Internal tools (strutil)
├── observability/      # Layer 0: Monitoring (Logs, Metrics, Traces)
├── configloader/       # Layer 0: Config loading
├── timeout/            # Layer 0: System limits
├── cache/              # Layer 1: Semantic cache
├── context/            # Layer 1: Context window management
├── services/           # Layer 2: Business logic (Schedule, Session)
├── agents/             # Layer 3: Autonomous Agents (Parrots)
├── routing/            # Layer 3: Intent classification & routing
├── aitime/             # Skill: Time parsing
├── tags/               # Skill: Tag recommendation
├── summary/            # Skill: Summary generation
├── format/             # Skill: Formatting
├── enrichment/         # Skill: Processing pipeline
├── duplicate/          # Skill: Deduplication
├── review/             # Skill: Spaced repetition
└── graph/              # Skill: Knowledge graph

Key Interfaces

LLM Service
type Service interface {
    Chat(ctx context.Context, messages []Message) (string, *LLMCallStats, error)
    ChatStream(ctx context.Context, messages []Message) (<-chan string, <-chan *LLMCallStats, <-chan error)
    ChatWithTools(ctx context.Context, messages []Message, tools []ToolDescriptor) (*ChatResponse, *LLMCallStats, error)
    Warmup(ctx context.Context)
}
Embedding Service
type Service interface {
    Embed(ctx context.Context, text string) ([]float32, error)
    EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)
}
Retrieval Service
type AdaptiveRetriever interface {
    Retrieve(ctx context.Context, opts *RetrievalOptions) ([]*SearchResult, error)
}
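Code that consumes these interfaces stays provider-agnostic. The sketch below ranks candidates by cosine similarity over a stub embedder; the local Service copy, the stub, and the cosine helper are illustrative, not part of the package:

```go
package main

import (
	"context"
	"fmt"
	"math"
)

// Service mirrors the Embedding Service interface shown above.
type Service interface {
	Embed(ctx context.Context, text string) ([]float32, error)
	EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)
}

// stubEmbedder is a toy provider returning fixed vectors, standing in
// for a real embedding backend.
type stubEmbedder struct{ vecs map[string][]float32 }

var _ Service = stubEmbedder{} // compile-time interface check

func (s stubEmbedder) Embed(_ context.Context, text string) ([]float32, error) {
	return s.vecs[text], nil
}

func (s stubEmbedder) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error) {
	out := make([][]float32, len(texts))
	for i, t := range texts {
		v, err := s.Embed(ctx, t)
		if err != nil {
			return nil, err
		}
		out[i] = v
	}
	return out, nil
}

// cosine scores two embeddings; callers rank candidates by it.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	svc := stubEmbedder{vecs: map[string][]float32{
		"query": {1, 0}, "docA": {0.9, 0.1}, "docB": {0, 1},
	}}
	ctx := context.Background()
	q, _ := svc.Embed(ctx, "query")
	for _, doc := range []string{"docA", "docB"} {
		v, _ := svc.Embed(ctx, doc)
		fmt.Printf("%s: %.3f\n", doc, cosine(q, v))
	}
}
```

Swapping the stub for a real provider changes only the constructor call; the ranking logic is untouched.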

Documentation

Constants

const (
	SimpleTaskMaxTokens   = 1024 // Simple tasks don't need many tokens
	SimpleTaskTemperature = 0.3  // Lower temperature for deterministic output
	SimpleTaskTimeout     = 30   // Shorter timeout for simple tasks (seconds)
)

Configuration defaults for simple LLM tasks.

Variables

This section is empty.

Functions

func SetTitleConfigDir added in v0.100.0

func SetTitleConfigDir(dir string)

SetTitleConfigDir overrides the default config directory.

Types

type BlockContent added in v0.94.0

type BlockContent struct {
	UserInput        string
	AssistantContent string
}

BlockContent represents a simplified block for title generation.

type ChatResponse deprecated

type ChatResponse = llm.ChatResponse

ChatResponse represents the LLM response including potential tool calls.

Deprecated: Use llm.ChatResponse directly.

type Config

type Config struct {
	Embedding        EmbeddingConfig
	Reranker         RerankerConfig
	IntentClassifier IntentClassifierConfig
	LLM              LLMConfig
	UniversalParrot  UniversalParrotConfig // Phase 2: Configuration-driven parrots
	Enabled          bool
}

Config represents AI configuration.

func NewConfigFromProfile

func NewConfigFromProfile(p *profile.Profile) *Config

NewConfigFromProfile creates AI config from profile.

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration.

type ConversationPromptData added in v0.100.0

type ConversationPromptData struct {
	UserMessage string
	AIResponse  string
}

ConversationPromptData holds data for conversation title template.

type EmbeddingConfig

type EmbeddingConfig struct {
	Provider   string
	Model      string
	APIKey     string
	BaseURL    string
	Dimensions int
}

EmbeddingConfig represents vector embedding configuration.

type EmbeddingService

type EmbeddingService interface {
	// Embed generates vector for a single text.
	Embed(ctx context.Context, text string) ([]float32, error)

	// EmbedBatch generates vectors for multiple texts.
	EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

	// Dimensions returns the vector dimension.
	Dimensions() int
}

EmbeddingService is the vector embedding service interface.

func NewEmbeddingService

func NewEmbeddingService(cfg *EmbeddingConfig) (EmbeddingService, error)

NewEmbeddingService creates a new EmbeddingService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual embedding functionality has been moved to ai/core/embedding/provider.go. Future refactoring will deprecate this file in favor of the core package.

type FunctionCall deprecated

type FunctionCall = llm.FunctionCall

FunctionCall represents the function details.

Deprecated: Use llm.FunctionCall directly.

type IntentClassifierConfig

type IntentClassifierConfig struct {
	Provider string
	Model    string
	APIKey   string
	BaseURL  string
	Enabled  bool
}

IntentClassifierConfig represents intent classification LLM configuration. Uses a lightweight model for fast, cost-effective classification.

type LLMCallStats deprecated added in v0.94.0

type LLMCallStats = llm.LLMCallStats

LLMCallStats represents statistics for a single LLM call.

Deprecated: Use llm.LLMCallStats directly.

type LLMConfig

type LLMConfig struct {
	Provider    string // Provider identifier for logging/future extension: zai, deepseek, openai, ollama
	Model       string // Model name: glm-4.7, deepseek-chat, gpt-4o, etc.
	APIKey      string
	BaseURL     string
	MaxTokens   int     // default: 2048
	Temperature float32 // default: 0.7
	Timeout     int     // Request timeout in seconds (default: 120)
}

LLMConfig represents LLM configuration.

type LLMService deprecated

type LLMService = llm.Service

LLMService is the LLM service interface.

Deprecated: Use llm.Service directly.

func NewLLMService

func NewLLMService(cfg *LLMConfig) (LLMService, error)

NewLLMService creates a new LLMService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual LLM functionality has been moved to ai/core/llm/service.go.

func NewSimpleTaskLLMService added in v0.100.1

func NewSimpleTaskLLMService(p *profile.Profile, mainLLM LLMService) LLMService

NewSimpleTaskLLMService creates an LLM service for simple tasks. It uses the Intent provider configuration with fallback to main LLM.

Priority:
 1. If AIIntentAPIKey is configured, use the Intent provider (siliconflow by default).
 2. Otherwise, fall back to the main LLM service.

Returns nil if Intent service creation fails and mainLLM is also nil. Callers must check for a nil return value.
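Because the constructor can return nil, callers need an explicit guard before use. A standalone sketch of that pattern; the stand-in types and constructor here only mimic the documented priority and are not the package's code:

```go
package main

import "fmt"

// llmService stands in for the package's LLMService interface.
type llmService interface{ Name() string }

type stubLLM struct{ name string }

func (s stubLLM) Name() string { return s.name }

// newSimpleTaskService mimics the documented priority: use the intent
// provider when its API key is configured, otherwise fall back to the
// main service, which may itself be nil.
func newSimpleTaskService(intentAPIKey string, mainLLM llmService) llmService {
	if intentAPIKey != "" {
		return stubLLM{name: "intent"} // stand-in for the intent-provider client
	}
	return mainLLM // may be nil: callers must check
}

func main() {
	svc := newSimpleTaskService("", nil)
	if svc == nil {
		fmt.Println("simple-task LLM unavailable; skipping optional task")
		return
	}
	fmt.Println("using", svc.Name())
}
```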

type MemoPromptData added in v0.100.0

type MemoPromptData struct {
	Content string
	Title   string
}

MemoPromptData holds data for memo title template.

type Message deprecated

type Message = llm.Message

Message represents a chat message.

Deprecated: Use llm.Message directly.

func AssistantMessage deprecated

func AssistantMessage(content string) Message

AssistantMessage creates an assistant message.

Deprecated: Use llm.AssistantMessage directly.

func FormatMessages deprecated

func FormatMessages(systemPrompt string, userContent string, history []Message) []Message

FormatMessages formats messages for prompt templates.

Deprecated: Use llm.FormatMessages directly.

func SystemPrompt deprecated

func SystemPrompt(content string) Message

SystemPrompt creates a system message.

Deprecated: Use llm.SystemPrompt directly.

func UserMessage deprecated

func UserMessage(content string) Message

UserMessage creates a user message.

Deprecated: Use llm.UserMessage directly.

type RerankResult deprecated

type RerankResult = reranker.Result

RerankResult represents a reranking result.

Deprecated: Use reranker.Result directly.

type RerankerConfig

type RerankerConfig struct {
	Provider string
	Model    string
	APIKey   string
	BaseURL  string
	Enabled  bool
}

RerankerConfig represents reranker configuration.

type RerankerService deprecated

type RerankerService = reranker.Service

RerankerService is the reranking service interface.

Deprecated: Use reranker.Service directly.

func NewRerankerService

func NewRerankerService(cfg *RerankerConfig) RerankerService

NewRerankerService creates a new RerankerService.

Phase 1 Note: This is a bridge compatibility layer that maintains the original API. The actual reranker functionality has been moved to ai/core/reranker/service.go.

type TitleGenerator added in v0.94.0

type TitleGenerator struct {
	// contains filtered or unexported fields
}

TitleGenerator generates meaningful titles for AI conversations. Uses configuration from config/prompts/title.yaml.

func NewTitleGenerator deprecated added in v0.94.0

func NewTitleGenerator(cfg TitleGeneratorConfig) *TitleGenerator

NewTitleGenerator creates a new title generator instance.

Deprecated: Use NewTitleGeneratorWithLLM(llm LLMService) instead. This constructor is kept for backward compatibility.

func NewTitleGeneratorWithLLM added in v0.100.0

func NewTitleGeneratorWithLLM(llmService LLMService) *TitleGenerator

NewTitleGeneratorWithLLM creates a new title generator with an existing LLMService. This is the preferred constructor for dependency injection. Panics if llmService is nil.

func (*TitleGenerator) Generate added in v0.94.0

func (tg *TitleGenerator) Generate(ctx context.Context, userMessage, aiResponse string) (string, error)

Generate generates a title based on the conversation content.

func (*TitleGenerator) GenerateTitleFromBlocks added in v0.94.0

func (tg *TitleGenerator) GenerateTitleFromBlocks(ctx context.Context, blocks []BlockContent) (string, error)

GenerateTitleFromBlocks generates a title from a slice of blocks.

type TitleGeneratorConfig deprecated added in v0.94.0

type TitleGeneratorConfig struct {
	APIKey  string
	BaseURL string
	Model   string
}

TitleGeneratorConfig holds configuration for the title generator.

Deprecated: Use NewTitleGeneratorWithLLM(llm LLMService) directly. This config is kept for backward compatibility.

type TitlePromptConfig added in v0.100.0

type TitlePromptConfig struct {
	Name                 string `yaml:"name"`
	Version              string `yaml:"version"`
	SystemPrompt         string `yaml:"system_prompt"`
	ConversationTemplate string `yaml:"conversation_template"`
	MemoTemplate         string `yaml:"memo_template"`
	Params               struct {
		MaxTokens          int     `yaml:"max_tokens"`
		Temperature        float64 `yaml:"temperature"`
		TimeoutSeconds     int     `yaml:"timeout_seconds"`
		InputTruncateChars int     `yaml:"input_truncate_chars"`
		MaxRunes           int     `yaml:"max_runes"`
	} `yaml:"params"`
}

TitlePromptConfig holds the configuration for title generation.
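The MaxRunes parameter suggests titles are truncated by runes rather than bytes, which matters for CJK text. A minimal rune-safe truncation; this is an assumption about the intended behavior, not the package's code:

```go
package main

import "fmt"

// truncateRunes cuts s to at most max runes, so multi-byte characters
// (e.g. Chinese) are never split mid-codepoint as a byte slice would be.
func truncateRunes(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max])
}

func main() {
	fmt.Println(truncateRunes("DivineSense 认知核心架构", 14))
}
```

A byte-based `s[:n]` on the same input could end mid-character and produce invalid UTF-8.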

func GetTitlePromptConfig added in v0.100.0

func GetTitlePromptConfig() *TitlePromptConfig

GetTitlePromptConfig returns the global title prompt config, loading if necessary. Falls back to defaults if config file fails to load.

func LoadTitlePromptConfig added in v0.100.0

func LoadTitlePromptConfig() (*TitlePromptConfig, error)

LoadTitlePromptConfig loads the title prompt configuration from YAML.

func (*TitlePromptConfig) BuildConversationPrompt added in v0.100.0

func (c *TitlePromptConfig) BuildConversationPrompt(data *ConversationPromptData) (string, error)

BuildConversationPrompt builds the user prompt for conversation title generation.

func (*TitlePromptConfig) BuildMemoPrompt added in v0.100.0

func (c *TitlePromptConfig) BuildMemoPrompt(data *MemoPromptData) (string, error)

BuildMemoPrompt builds the user prompt for memo title generation.

type ToolCall deprecated

type ToolCall = llm.ToolCall

ToolCall represents a request to call a tool.

Deprecated: Use llm.ToolCall directly.

type ToolDescriptor deprecated

type ToolDescriptor = llm.ToolDescriptor

ToolDescriptor represents a function/tool available to the LLM.

Deprecated: Use llm.ToolDescriptor directly.

type UniversalParrotConfig added in v0.94.0

type UniversalParrotConfig struct {
	Enabled      bool   // Enable UniversalParrot for creating parrots from YAML configs
	ConfigDir    string // Path to parrot YAML configs (default: ./config/parrots)
	FallbackMode string // "legacy" | "error" when config load fails (default: legacy)
	BaseURL      string // Frontend base URL for generating links in prompts
}

UniversalParrotConfig represents configuration for UniversalParrot (configuration-driven parrots).

Directories

Path Synopsis
agents
Package agent provides conversation context management for multi-turn dialogues.
events
Package events provides event callback types for the agent system.
orchestrator
Package orchestrator implements the Orchestrator-Workers pattern for multi-agent coordination.
registry
Package registry provides metrics collection for UniversalParrot.
tools
Package tools provides tool-level result caching for AI agents.
tools/schedule
Package schedule provides thin tool adapters for schedule operations.
universal
Package universal provides configuration loading for UniversalParrot.
aitime
Package aitime provides the time parsing service interface for AI agents.
cache
Package cache provides the cache service interface for AI agents.
context
Package context provides context building for LLM prompts.
core
llm
duplicate
Package duplicate provides memo duplicate detection for P2-C002.
format
Package format provides the formatter interface for AI text formatting.
graph
Package graph - builder implementation for P3-C001.
internal
strutil
Package strutil provides string utility functions for the ai package.
memory
Package memory provides the memory extension point for AI agents.
simple
Package simple provides a basic memory generator implementation.
observability
logging
Package logging provides structured logging utilities for AI modules.
metrics
Package metrics provides the evaluation metrics service interface for AI agents.
tracing
Package tracing provides distributed tracing instrumentation for AI modules.
review
Package review provides an intelligent memo review system based on spaced repetition.
routing
Package routing provides routing result caching for performance optimization.
services
session
Package session provides the session persistence service interface for AI agents.
stats
Package stats provides cost alerting for agent sessions.
tags
Package tags provides intelligent tag suggestion for memos.
timeout
Package timeout defines centralized timeout constants for AI operations.
