engine

package
v0.0.0-...-7f6d387
Published: Mar 26, 2026 License: MIT Imports: 29 Imported by: 0

Documentation

Overview

Package engine orchestrates the RAG conversation loop. It retrieves relevant context from the vector store and knowledge graph, assembles a system prompt with auto-detected template variables, and streams the LLM response back to the caller. After each interaction it triggers background tasks for rolling memory summarization, knowledge extraction, session archival, and safety classification. The engine also manages onboarding phases, session timeouts, and a circuit breaker for Ollama connectivity.

Index

Constants

View Source
const MaxTraitDriftPerSession = 0.05

MaxTraitDriftPerSession limits how much any single trait axis can shift in one session (3.2 drift limiter).
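The constant is documented as a per-session cap on how far any single trait axis may move. A minimal sketch of how such a limiter would be applied (the clampDrift helper is hypothetical, not part of this package):

```go
package main

import "fmt"

// MaxTraitDriftPerSession mirrors the package constant (3.2 drift limiter).
const MaxTraitDriftPerSession = 0.05

// clampDrift is a hypothetical helper: the proposed trait value may
// differ from the session-start value by at most
// MaxTraitDriftPerSession in either direction.
func clampDrift(start, proposed float64) float64 {
	if proposed > start+MaxTraitDriftPerSession {
		return start + MaxTraitDriftPerSession
	}
	if proposed < start-MaxTraitDriftPerSession {
		return start - MaxTraitDriftPerSession
	}
	return proposed
}

func main() {
	fmt.Println(clampDrift(0.50, 0.70)) // large upward shift is capped near 0.55
	fmt.Println(clampDrift(0.50, 0.52)) // small shift passes through unchanged
}
```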

Variables

This section is empty.

Functions

func FormatConversationQuality

func FormatConversationQuality(cq *ConversationQuality) string

FormatConversationQuality converts quality metrics into a prompt-ready instruction for the {{ .conversation_quality }} template variable. It returns a non-empty string only when there is a signal worth acting on.

func FormatConversationQualityDebug

func FormatConversationQualityDebug(cq *ConversationQuality) string

FormatConversationQualityDebug returns a compact debug string of all metrics.

func ParsePersonalityTraits

func ParsePersonalityTraits(traits string) map[string]float64

ParsePersonalityTraits parses the comma-separated key_traits config string into initial trait values. Unknown traits get a neutral default of 0.5. Supported traits: curiosity, playfulness, directness.

Types

type ConversationQuality

type ConversationQuality struct {
	// AvgUserMsgLength is the average character length of user messages.
	AvgUserMsgLength float64
	// AvgAIMsgLength is the average character length of AI messages.
	AvgAIMsgLength float64
	// UserMsgCount is the number of user messages in the window.
	UserMsgCount int
	// ShortResponseRatio is the fraction of user messages under 20 chars.
	ShortResponseRatio float64
	// RepetitionScore is 0.0-1.0 indicating how repetitive AI responses are.
	RepetitionScore float64
	// TopicDiversity is a rough count of distinct topic shifts detected.
	TopicDiversity int
}

ConversationQuality captures metrics about the current conversation flow. Used to detect disinterest, repetition, and overall engagement (6.3).

func AnalyzeConversationQuality

func AnalyzeConversationQuality(ctx context.Context, hist *store.RedisHistory, maxMessages int) (*ConversationQuality, error)

AnalyzeConversationQuality examines recent messages to assess conversation health. It operates on the last maxMessages messages from history and is lightweight: no LLM call is made.
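One of the resulting metrics, ShortResponseRatio, is defined above as the fraction of user messages under 20 characters. A self-contained sketch of that computation (not the package's implementation):

```go
package main

import "fmt"

// shortResponseRatio illustrates the ShortResponseRatio metric from
// ConversationQuality: the fraction of user messages under 20
// characters. A run of short replies is one disinterest signal.
func shortResponseRatio(userMsgs []string) float64 {
	if len(userMsgs) == 0 {
		return 0
	}
	short := 0
	for _, m := range userMsgs {
		if len(m) < 20 {
			short++
		}
	}
	return float64(short) / float64(len(userMsgs))
}

func main() {
	msgs := []string{"ok", "sure", "tell me more about how retrieval works"}
	fmt.Println(shortResponseRatio(msgs)) // two of the three messages are short
}
```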

type Engine

type Engine struct {
	// contains filtered or unexported fields
}

func NewEngine

func NewEngine(cfg *config.Config, ext *extractor.Service, sum *summarizer.Summarizer, saf *safety.Service) (*Engine, error)

func (*Engine) CheckAndRollMemoryAsync

func (e *Engine) CheckAndRollMemoryAsync(ctx context.Context, s *store.GORAStore, userID string, sessionID string)

CheckAndRollMemoryAsync triggers rolling memory summarization in the background. Only active when active_learning is enabled.

func (*Engine) DescribeImage

func (e *Engine) DescribeImage(ctx context.Context, imageBase64 string, mimeType string) (string, error)

DescribeImage uses the vision LLM to generate a natural-language description of an image provided as base64-encoded data (2.3). Returns empty string if the vision model is not configured.

func (*Engine) DetectAwayDuration

func (e *Engine) DetectAwayDuration(ctx context.Context, message string) time.Duration

DetectAwayDuration uses the draft LLM to determine if a user message indicates they'll be unavailable, and for how long. Returns zero duration if no away signal is detected or on error.

func (*Engine) GenerateAndStream

func (e *Engine) GenerateAndStream(ctx context.Context, s *store.GORAStore, userID string, sessionID string, query string, tokenChan chan<- string) (string, error)

GenerateAndStream produces an LLM response for the given query. userID is the owner for data isolation (empty string in documentation mode).
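The streaming contract can be sketched with a stand-in generator: tokens arrive on tokenChan while the full response is returned to the caller. Whether the real method closes tokenChan itself is an assumption of this sketch; the mock below emits a fixed reply instead of calling the LLM.

```go
package main

import (
	"fmt"
	"strings"
)

// streamTokens stands in for GenerateAndStream's token-channel
// contract: each token is sent on tokenChan as it is produced, the
// channel is closed when the stream ends (an assumption), and the
// complete response is returned.
func streamTokens(reply string, tokenChan chan<- string) string {
	defer close(tokenChan)
	for _, tok := range strings.SplitAfter(reply, " ") {
		tokenChan <- tok
	}
	return reply
}

func main() {
	tokens := make(chan string)
	done := make(chan string)
	go func() { done <- streamTokens("hello from the engine", tokens) }()

	var b strings.Builder
	for tok := range tokens { // consume tokens as they stream in
		b.WriteString(tok)
	}
	full := <-done
	fmt.Println(b.String() == full) // streamed tokens reassemble the reply
}
```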

func (*Engine) GenerateOutreachMessage

func (e *Engine) GenerateOutreachMessage(
	ctx context.Context,
	s *store.GORAStore,
	userID string,
	level graph.EscalationLevel,
	candidate graph.OutreachCandidate,
) (string, error)

GenerateOutreachMessage produces an LLM-generated outreach message for a user who has been inactive. The message's emotional tone depends on the escalation level. Returns empty string if no outreach prompt is configured.

func (*Engine) GenerateStarter

func (e *Engine) GenerateStarter(ctx context.Context, s *store.GORAStore, userID string, sessionID string) (string, error)

GenerateStarter produces a conversation starter message if the session has been idle long enough. Returns empty string if no starter should be shown.

func (*Engine) HasVisionSupport

func (e *Engine) HasVisionSupport() bool

HasVisionSupport returns true if the engine has a vision model configured.

func (*Engine) LearnFromInteraction

func (e *Engine) LearnFromInteraction(store *store.GORAStore, userID string, userInput string, aiResponse string)

LearnFromInteraction extracts knowledge from user input and stores it in the graph. Only active when active_learning is enabled. userID scopes the data.

func (*Engine) SetMetrics

func (e *Engine) SetMetrics(mc *metrics.Collector)

SetMetrics registers a metrics collector for tracking session events (4.1).

func (*Engine) SetTaskQueue

func (e *Engine) SetTaskQueue(tq *resilience.TaskQueue)

SetTaskQueue registers a Redis-backed retry queue for failed background tasks (R.2). When set, tasks that fail after retries are enqueued for later processing instead of being silently dropped.

func (*Engine) WaitForBackgroundTasks

func (e *Engine) WaitForBackgroundTasks()

WaitForBackgroundTasks blocks until all in-flight background tasks have completed.
