store

package
v0.8.0 Latest
Published: Mar 29, 2026 License: MIT Imports: 27 Imported by: 0

Documentation

Overview

Package store provides SQLite-backed persistence for the code graph. The graph is parsed once from source, saved here, and loaded in <1s on subsequent starts. Only files that changed (by mtime) are re-parsed.

The package also generates deterministic natural-language (NL) descriptions for code nodes: it converts code identifiers and signatures into natural-language text so that embedding similarity between code nodes and documentation nodes operates in the same NL↔NL domain instead of the weaker code-syntax↔NL domain.

Index

Constants

View Source
const (
	TierSessionLog = "session_log" // What happened — auto-captured session summaries.
	TierEntity     = "entity"      // Facts about code nodes — travels with the entity.
	TierProject    = "project"     // Conventions, decisions, gotchas — project-wide.
)

MemoryTier classifies the scope and lifespan of a memory.

View Source
const (
	SourceManual    = "manual"    // Agent explicitly called remember() or annotate_node().
	SourceAuto      = "auto"      // Auto-captured by end_session structured extraction.
	SourceExtracted = "extracted" // LLM-synthesized from session data by brain sidecar.
)

MemorySource indicates how the memory was created.

View Source
const DecayVisibilityThreshold = 0.05

DecayVisibilityThreshold is the minimum DecayedImportanceScore for a memory to be included in recall() results. Memories scoring below this threshold are demoted (excluded from results) but never deleted — they remain in the DB for audit queries (include_stale=true) and as_of temporal lookups.

At 0.05 with default importance (weight=1.0), visibility windows by tier:

  • session_log (72h): ~8 weeks without access
  • entity+auto (168h): ~19 weeks without access
  • project (336h): ~38 weeks (but TTL expires at 60 days first)
  • entity+manual (504h): ~57 weeks without access

Pinned memories always score 1.0 and are never demoted.

View Source
const DefaultMaxEpisodeRows = 10000

DefaultMaxEpisodeRows is the per-project episode cap.

View Source
const DefaultMaxMemoryRows = 10000

DefaultMaxMemoryRows is the per-project memory cap. Prevents unbounded disk growth from agents calling remember() in a loop. Configurable via Store.MaxMemoryRows.

View Source
const ImportancePinned = "pinned"

ImportancePinned is a special importance value that exempts a memory from decay scoring. Pinned memories always score 1.0 regardless of age, making them permanently visible in recall results. Use for security configs, compliance decisions, architectural invariants — facts that must never be silently demoted by time-based decay.

Variables

View Source
var DefaultConvexWeights = ConvexWeights{
	Alpha:         0.5,
	GraphBonus:    0.3,
	TemporalBonus: 0.2,
}

DefaultConvexWeights provides balanced defaults for score-aware fusion. Alpha=0.5 weights BM25 and semantic equally; the graph and temporal channels contribute 30% and 20% bonuses respectively when present.

View Source
var DefaultRRFWeights = map[string]float64{
	"bm25":     1.0,
	"semantic": 1.0,
	"graph":    1.0,
	"temporal": 0.5,
}

DefaultRRFWeights is the default set of per-channel RRF weight multipliers.

Functions

func CacheDir

func CacheDir() (string, error)

CacheDir returns the canonical directory where synapses stores all project index databases: ~/.synapses/cache/

Using the home directory (rather than os.UserCacheDir, which resolves to ~/Library/Caches on macOS, ~/.cache on Linux, and %LocalAppData% on Windows) gives a single, discoverable, cross-platform path that is not subject to OS- or tool-driven cache eviction.

func DecayedImportanceScore

func DecayedImportanceScore(m Memory, halfLifeHours float64) float64

DecayedImportanceScore combines memory importance weight with recency decay.

Rules:

  • ImportancePinned ("pinned"): returns 1.0. Pinned memories are exempt from decay and always visible in recall results. Use for security configs, compliance decisions, architectural invariants.
  • Numeric string (e.g. "0.8"): parsed as the importance weight, then multiplied by RecencyDecayScore(lastAccessedAt, halfLifeHours).
  • Invalid or empty string: treated as weight 1.0 (pure recency decay).

halfLifeHours controls how fast scores decay; 0 means use the tier-specific defaults (session_log 72h, project 336h, entity+auto 168h, entity+manual 504h). The result is in (0, 1] for non-pinned memories and exactly 1.0 for pinned ones.
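The rules above can be sketched as a small standalone function (illustrative only, not the package source) that combines the importance string with a precomputed recency score:

```go
package main

import (
	"fmt"
	"strconv"
)

// decayedImportance sketches the documented rules: "pinned" short-circuits to
// 1.0; a numeric importance string scales the recency score; anything else
// falls back to weight 1.0 (pure recency decay).
func decayedImportance(importance string, recencyScore float64) float64 {
	if importance == "pinned" {
		return 1.0 // exempt from decay, always visible
	}
	w, err := strconv.ParseFloat(importance, 64)
	if err != nil {
		w = 1.0 // invalid or empty string → weight 1.0
	}
	return w * recencyScore
}

func main() {
	fmt.Println(decayedImportance("pinned", 0.01)) // pinned always scores 1
	fmt.Println(decayedImportance("0.8", 0.5))     // 0.8 * 0.5
	fmt.Println(decayedImportance("", 0.7))        // empty → pure recency
}
```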

func DefaultPath

func DefaultPath(repoRoot string) (string, error)

DefaultPath returns the canonical DB path for a repository root. The file lives at ~/.synapses/cache/<reponame>_<hash>.db

func GenerateNLDescription

func GenerateNLDescription(name string, nodeType graph.NodeType, sig, doc string, callees, callers []string) string

GenerateNLDescription produces a deterministic natural-language description of a code node from its name, type, signature, docstring, and call edges. Zero LLM dependency — pure string manipulation.

Hard cap: 400 characters. Returns "" for non-code node types.

func IsCodeNodeType

func IsCodeNodeType(t graph.NodeType) bool

IsCodeNodeType returns true for node types that represent code entities (as opposed to documentation, knowledge, or structural nodes).

func KnowledgePath

func KnowledgePath(graphPath string) string

KnowledgePath derives the knowledge DB path from the graph DB path. E.g. "/path/to/repo_hash.db" → "/path/to/repo_hash_knowledge.db"
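The derivation is a simple suffix rewrite; a one-line sketch of the documented mapping (illustrative, not the package source):

```go
package main

import (
	"fmt"
	"strings"
)

// knowledgePath mirrors the documented transformation:
// "<base>.db" → "<base>_knowledge.db".
func knowledgePath(graphPath string) string {
	return strings.TrimSuffix(graphPath, ".db") + "_knowledge.db"
}

func main() {
	fmt.Println(knowledgePath("/path/to/repo_hash.db"))
	// /path/to/repo_hash_knowledge.db
}
```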

func RRFMerge

func RRFMerge(channels map[string][]string, limit int, k int) ([]string, map[string][]string)

RRFMerge applies Reciprocal Rank Fusion across multiple ranked result lists.
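RRF scores each ID by summed reciprocal ranks across channels. A minimal sketch of the standard formula, score(d) = Σ 1/(k + rank), under the assumption of 1-based ranks; the package's exact tie-breaking and attribution handling are not shown:

```go
package main

import "fmt"

// rrfScores applies plain Reciprocal Rank Fusion: each appearance of an ID at
// rank r (0-based) in a channel adds 1/(k + r + 1) to its fused score.
// Illustrative sketch; the package's RRFMerge also returns per-channel attribution.
func rrfScores(channels map[string][]string, k int) map[string]float64 {
	scores := make(map[string]float64)
	for _, ids := range channels {
		for rank, id := range ids {
			scores[id] += 1.0 / float64(k+rank+1)
		}
	}
	return scores
}

func main() {
	channels := map[string][]string{
		"bm25":     {"a", "b"},
		"semantic": {"a"},
	}
	s := rrfScores(channels, 60)
	// "a" ranks first in both channels, so it outscores "b".
	fmt.Println(s["a"] > s["b"])
}
```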

func RecencyDecayScore

func RecencyDecayScore(createdAt time.Time, halfLifeHours float64, accessCount int) float64

RecencyDecayScore computes a decay score using ACT-R frequency-weighted power-law decay. Returns a value in (0, 1] where 1.0 = just created, 0.5 = one effective half-life old. The effective half-life grows logarithmically with access_count: a memory accessed 20 times decays ~4.4x slower than one accessed once.

Formula: score = 1 / (1 + ageHours / effectiveHalfLife) where effectiveHalfLife = halfLifeHours × log2(max(accessCount, 1) + 1).

Based on ACT-R base-level activation (Anderson & Lebiere, 1998). The log-frequency scaling is the core ACT-R insight: each additional access has diminishing returns on memory strength.

func TierHalfLife

func TierHalfLife(tier, source string) float64

TierHalfLife returns the tier-specific decay half-life in hours. Different memory tiers decay at different rates:

  • session_log: 72h (3 days) — ephemeral session summaries fade quickly
  • project: 336h (2 weeks) — conventions and decisions persist longer
  • entity + auto: 168h (1 week) — auto-captured code facts
  • entity + manual: 504h (3 weeks) — manually annotated code facts persist longest

Based on A-MAC (arXiv:2603.04549) differential decay rates.
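The tier table reads as a simple lookup; a sketch under the assumption that combinations not listed above fall back to the entity+auto value (the actual fallback is not documented here):

```go
package main

import "fmt"

// tierHalfLife reproduces the documented table. The fallback branch for
// unlisted tier/source combinations is an assumption for illustration.
func tierHalfLife(tier, source string) float64 {
	switch {
	case tier == "session_log":
		return 72 // 3 days
	case tier == "project":
		return 336 // 2 weeks
	case tier == "entity" && source == "manual":
		return 504 // 3 weeks
	default:
		return 168 // 1 week (entity + auto)
	}
}

func main() {
	fmt.Println(tierHalfLife("entity", "manual")) // 504
	fmt.Println(tierHalfLife("entity", "auto"))   // 168
}
```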

func ViolationID

func ViolationID(ruleID, fromNode, toNode, edgeType string) string

ViolationID returns a stable SHA-256-derived ID for a violation so that re-detecting the same violation updates the existing row rather than inserting a duplicate. Exported so callers (e.g. the watcher) can compute IDs to compare against ViolationIDsForFile results.

Types

type Agent

type Agent struct {
	ID               string `json:"id"`
	LastSeen         string `json:"last_seen"`
	Metadata         string `json:"metadata"`
	CurrentTaskID    string `json:"current_task_id,omitempty"`
	CurrentTaskTitle string `json:"current_task_title,omitempty"`
	CurrentFocus     string `json:"current_focus,omitempty"`
	// B29: richer focus fields.
	CurrentFocusFile  string `json:"current_focus_file,omitempty"`
	CurrentFocusSince string `json:"current_focus_since,omitempty"`
	Intent            string `json:"intent,omitempty"`
	// ProjectID is non-empty only for remote agents synced from federated peers.
	// Local agents always have ProjectID = "".
	ProjectID string `json:"project_id,omitempty"`
	// Presence is computed from LastSeen: active (≤5min), idle (5–15min), inactive (>15min).
	Presence string `json:"presence,omitempty"`
}

Agent is a registered agent that has interacted with Synapses.

type AgentActivity

type AgentActivity struct {
	TaskID    string
	TaskTitle string
	Focus     string
	// B29: richer focus fields.
	FocusFile  string
	FocusSince string // RFC3339; the DB applies this only when Focus changes to a different entity.
	Intent     string
}

AgentActivity carries optional activity fields for UpsertAgent. Only non-empty fields overwrite existing values (partial update semantics).

type AgentContext

type AgentContext struct {
	AgentID      string `json:"agent_id"`
	LastEventSeq int64  `json:"last_event_seq"` // last event seq the agent received
	IdentityHash string `json:"identity_hash"`  // SHA-1 of last ProjectIdentity sent
	LastSession  string `json:"last_session"`   // RFC3339 timestamp of last session_init
	TaskSeq      int64  `json:"task_seq"`       // sequence marker for task change detection
}

AgentContext tracks what an agent already knows so session_init can deliver incremental updates instead of repeating the full project identity every time.

type Annotation

type Annotation struct {
	ID        string `json:"id"`
	NodeID    string `json:"node_id"`
	AgentID   string `json:"agent_id,omitempty"`
	Note      string `json:"note"`
	CreatedAt string `json:"created_at"`
	// Source distinguishes manually-added agent notes ("agent") from
	// system-generated retrospective notes ("system"). Defaults to "agent".
	Source string `json:"source,omitempty"`
	// Stale is true when the node's call-graph changed significantly (fan-in delta
	// >20% or node removed) since the annotation was written. Treat stale
	// annotations as hints, not facts — they may describe outdated structure.
	Stale bool `json:"stale,omitempty"`
}

Annotation is a note attached to a graph node by an agent.

type Attribution

type Attribution map[string][]string

Attribution is the output of a recall merge operation (RRF or ConvexMerge).

Key direction: memID → []channelNames (opposite of the input channels map which is channelName → []memIDs). Attribution channels are sorted by contribution score descending, so [0] is always the highest-contributing channel for that result.

Use TopChannel to safely extract the best real channel for a result.
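The attribution shape and the TopChannel filtering rule can be sketched as follows (standalone reimplementation of the documented semantics, not the package source):

```go
package main

import (
	"fmt"
	"strings"
)

// Attribution maps memID → channel names, highest-contributing first.
type Attribution map[string][]string

// TopChannel returns the first channel not prefixed with "_" (metadata
// pseudo-channels are skipped), or "" when the ID has no real channels.
func (a Attribution) TopChannel(memID string) string {
	for _, ch := range a[memID] {
		if !strings.HasPrefix(ch, "_") {
			return ch
		}
	}
	return ""
}

func main() {
	attr := Attribution{"mem-1": {"_score", "semantic", "bm25"}}
	fmt.Println(attr.TopChannel("mem-1"))       // semantic
	fmt.Println(attr.TopChannel("mem-2") == "") // unknown ID → true
}
```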

func ConvexMerge

func ConvexMerge(
	channels map[string]*ChannelScores,
	limit int,
	weights ConvexWeights,
) ([]string, Attribution)

ConvexMerge fuses multiple retrieval channels using score-magnitude-aware linear combination. Unlike RRF (which uses only rank positions), ConvexMerge preserves the information in how confident each channel is about a result.

Per-channel min-max normalization maps raw scores to [0, 1]:

norm(s) = (s - min) / (max - min)       if max > min
norm(s) = 1.0                           if max == min (single result or all tied)

Final score for each document:

score = α × norm_bm25 + (1-α) × norm_cosine + graph_bonus × norm_graph + temporal_bonus × norm_temporal

Returns top-N memory IDs sorted by fused score, plus per-memory channel attribution (same shape as RRFMergeWeighted for drop-in compatibility).

func RRFMergeWeighted

func RRFMergeWeighted(channels map[string][]string, limit int, k int, weights map[string]float64) ([]string, Attribution)

RRFMergeWeighted applies Reciprocal Rank Fusion with per-channel weights. weights maps channel name → weight multiplier (nil = DefaultRRFWeights). Channels not in the weights map get weight 1.0.

func (Attribution) TopChannel

func (a Attribution) TopChannel(memID string) string

TopChannel returns the highest-contributing real channel for memID. "Real" means not prefixed with "_" (metadata pseudo-channels are excluded). Returns "" if memID has no attribution or only metadata entries.

type ChannelScores

type ChannelScores struct {
	IDs    []string
	Scores []float64
}

ChannelScores carries per-ID raw scores from a single retrieval channel. IDs and Scores are parallel slices: Scores[i] is the raw score for IDs[i]. Higher scores = more relevant.

type ContextDelivery

type ContextDelivery struct {
	SessionID   string // Synapses session UUID (from sessions table); may be empty
	AgentID     string // agent_id from request; may be empty
	ToolName    string // "get_context" or "prepare_context"
	Entity      string // entity/target queried
	Refetched   bool   // true when this is a repeat request for the same entity in the same session
	TaskOutcome string // "", "success", "unknown" — populated at end_session via CorrelateSessionOutcome
}

ContextDelivery is a single recorded get_context or prepare_context call. Used by the Sprint 11 feedback loop to measure context quality outcomes.

type ConvexWeights

type ConvexWeights struct {
	Alpha         float64 // BM25 vs semantic balance: 0.0 = all semantic, 1.0 = all BM25
	GraphBonus    float64 // additive weight for graph channel
	TemporalBonus float64 // additive weight for temporal channel
}

ConvexWeights configures the linear combination coefficients for ConvexMerge. Alpha controls the BM25 vs semantic balance (α * bm25 + (1-α) * semantic). GraphBonus and TemporalBonus are additive weights for those channels. All values should be in [0, 1]. The sum need not equal 1 — scores are normalized per-channel before weighting.

type CrossProjectDep

type CrossProjectDep struct {
	FromEntity        string `json:"from_entity"`
	ToProject         string `json:"to_project"`
	ToEntity          string `json:"to_entity"`
	ToFile            string `json:"to_file"`
	VerifiedCommit    string `json:"verified_commit"`
	VerifiedAt        string `json:"verified_at"`
	DetectionTier     string `json:"detection_tier"`
	VerifiedSignature string `json:"verified_signature"` // entity signature at verification time; used for fallback comparison
}

CrossProjectDep represents a stored dependency on an entity in a sibling project.

type Episode

type Episode struct {
	ID            string  `json:"id"`
	AgentID       string  `json:"agent_id"`
	ProjectID     string  `json:"project_id,omitempty"`
	CreatedAt     int64   `json:"created_at"` // Unix seconds
	EpisodeType   string  `json:"episode_type"`
	Outcome       string  `json:"outcome"`
	Trigger       string  `json:"trigger,omitempty"`
	Decision      string  `json:"decision"`
	Rationale     string  `json:"rationale,omitempty"`
	AffectedFiles string  `json:"affected_files"` // JSON array string
	AffectedNodes string  `json:"affected_nodes"` // JSON array string
	Tags          string  `json:"tags"`           // JSON array string
	Importance    float64 `json:"importance"`
	PromotedRule  string  `json:"promoted_rule,omitempty"`
}

Episode records a decision or failure made by an agent so future sessions can recall it. episode_type distinguishes decisions from failures; outcome tracks whether the approach worked. promoted_rule links a failure to the dynamic_rule it eventually spawned.

type Event

type Event struct {
	Seq       int64  `json:"seq"`
	Type      string `json:"type"`
	AgentID   string `json:"agent_id,omitempty"`
	Payload   string `json:"payload"`
	CreatedAt string `json:"created_at"`
}

Event is a single entry in the pull-based event log.

type ExportSummary

type ExportSummary struct {
	MemoryCount          int `json:"memory_count"`
	MemoryVersionCount   int `json:"memory_version_count"`
	MemoryAnchorCount    int `json:"memory_anchor_count"`
	MemoryEmbeddingCount int `json:"memory_embedding_count"`
	EpisodeCount         int `json:"episode_count"`
	DynamicRuleCount     int `json:"dynamic_rule_count"`
	AnnotationCount      int `json:"annotation_count"`
	QualityGapCount      int `json:"quality_gap_count"`
}

ExportSummary provides quick stats without parsing the full arrays.

type ExportedMemAnchor

type ExportedMemAnchor struct {
	MemoryID  string `json:"memory_id"`
	NodeID    string `json:"node_id"`
	CreatedAt string `json:"created_at"`
}

ExportedMemAnchor represents a memory→node staleness anchor.

type ExportedMemEmbed

type ExportedMemEmbed struct {
	MemoryID     string `json:"memory_id"`
	Model        string `json:"model"`
	EmbeddingB64 string `json:"embedding_b64"` // base64-encoded normalized float32 BLOB
	ContentHash  string `json:"content_hash"`
	EmbeddedAt   int64  `json:"embedded_at"` // Unix seconds
	Stale        bool   `json:"stale,omitempty"`
}

ExportedMemEmbed holds a memory's embedding vector encoded as base64 for portability. The BLOB (normalized little-endian float32 array) is preserved verbatim so embeddings can be re-imported without re-computation. Stale embeddings (content changed since embedding was computed) are flagged — importers may choose to re-embed these rather than re-import them.

type ExportedMemVer

type ExportedMemVer struct {
	ID           string `json:"id"`
	MemoryID     string `json:"memory_id"`
	Version      int    `json:"version"`
	Content      string `json:"content"`
	SupersededBy string `json:"superseded_by,omitempty"`
	CreatedAt    string `json:"created_at"`
	SupersededAt string `json:"superseded_at"`
}

ExportedMemVer is a historical snapshot preserved when remember() deduplicates. Mirrors the memory_versions table fields.

type ExportedRule

type ExportedRule struct {
	ID              string `json:"id"`
	Description     string `json:"description"`
	Severity        string `json:"severity"`
	FromFilePattern string `json:"from_file_pattern,omitempty"`
	ToFilePattern   string `json:"to_file_pattern,omitempty"`
	FromType        string `json:"from_type,omitempty"`
	ToType          string `json:"to_type,omitempty"`
	EdgeType        string `json:"edge_type,omitempty"`
	ToNamePattern   string `json:"to_name_pattern,omitempty"`
	RuleType        string `json:"rule_type,omitempty"`
	PathPattern     string `json:"path_pattern,omitempty"` // comma-separated EdgeType list
	CreatedAt       string `json:"created_at"`
	UpdatedAt       string `json:"updated_at"`
}

ExportedRule captures a dynamic architectural rule in a portable form. Mirrors the dynamic_rules table without internal DB columns.

type GapFilter

type GapFilter struct {
	NodeID   string // filter by exact node ID
	File     string // filter by source file (matches any node in that file)
	Severity string // filter by severity ("low" | "medium" | "high" | "critical")
	Status   string // filter by status; default "open" when empty
}

GapFilter controls which quality gaps are returned by GetGaps.

type HibernateResumeContext

type HibernateResumeContext struct {
	// PriorIntent is the intent declared in the hibernated session (may be empty).
	PriorIntent string
	// PriorSummary is from end_session or an auto-log (may be empty).
	PriorSummary string
	// PriorToolCalls is the total tool calls made in the prior session segment.
	PriorToolCalls int
	// GapSeconds is how long the session was dormant before this resume.
	GapSeconds int64
	// StartedAt is the original session.started_at (Unix epoch), so the MCP
	// handler can display total session age across all resume cycles.
	StartedAt int64
	// ParentID is the sessions.id row that was resumed (now the current session).
	ParentID string
}

HibernateResumeContext carries prior session information surfaced when a cross-connection resume occurs (agent restarts editor and calls session_init). Non-nil only when GetOrResumeSession performs a Phase 2 hibernate resume.

type InvalidatedMemory

type InvalidatedMemory struct {
	ID            string `json:"id"`
	Content       string `json:"content"`
	Tier          string `json:"tier"`
	StaleReason   string `json:"stale_reason"`
	InvalidatedAt string `json:"invalidated_at"` // when the memory was invalidated (staled_at)
}

InvalidatedMemory is a stale memory surfaced once per agent at session start (AM-3). Per-agent tracking via memory_surfaced table ensures every agent sees each invalidation independently.

type KnowledgeExport

type KnowledgeExport struct {
	// Schema version — bump when fields are added to enable future importers.
	Version    string `json:"version"`
	ExportedAt string `json:"exported_at"`
	// ProjectID is the FNV hash of the project root path used by this daemon.
	// It is machine-specific (depends on the absolute path). Include it for
	// identification/correlation, not as a stable cross-machine identifier.
	ProjectID string `json:"project_id"`

	// TTL note: expires_at values are preserved as-is from the DB for audit
	// fidelity. Importers should reset expires_at on re-import if they want
	// memories to remain active — past-dated entries will be pruned on the
	// next PruneStaleData run.
	TTLNote string `json:"ttl_note"`

	// Core knowledge (all slices are always non-null, even when empty).
	Memories         []Memory            `json:"memories"`
	MemoryVersions   []ExportedMemVer    `json:"memory_versions"`
	MemoryAnchors    []ExportedMemAnchor `json:"memory_anchors"`
	MemoryEmbeddings []ExportedMemEmbed  `json:"memory_embeddings"`

	Episodes     []Episode      `json:"episodes"`
	DynamicRules []ExportedRule `json:"dynamic_rules"`
	Annotations  []Annotation   `json:"annotations"`
	QualityGaps  []QualityGap   `json:"quality_gaps"`

	// Summary counts for quick inspection.
	Summary ExportSummary `json:"summary"`
}

KnowledgeExport is the top-level envelope for a portable knowledge snapshot. It captures all durable, agent-generated knowledge for a project — ready for backup or migration. Graph nodes/edges are intentionally excluded (regenerable from source); transient tables (tool_calls, web_cache, sessions) are excluded.

type LedgerEntry

type LedgerEntry struct {
	SessionID string   `json:"session_id"`
	ProjectID string   `json:"project_id"`
	ToolName  string   `json:"tool_name"`
	EntityIDs []string `json:"entity_ids,omitempty"`
	FilePaths []string `json:"file_paths,omitempty"`
	CreatedAt string   `json:"created_at,omitempty"`
}

LedgerEntry records a single tool call's entity/file signals for cross-session awareness.

type ManualEdge

type ManualEdge struct {
	FromID     graph.NodeID
	ToID       graph.NodeID
	Relation   string
	Domain     string
	CreatedBy  string
	CreatedAt  int64
	Confidence float64
	Confirmed  bool
	Suppressed bool
}

ManualEdge represents a user-defined cross-domain edge created via link_entities.

type Memory

type Memory struct {
	ID             string `json:"id"`
	Tier           string `json:"tier"`
	Content        string `json:"content"`
	EntityID       string `json:"entity_id,omitempty"`
	AgentID        string `json:"agent_id,omitempty"`
	TaskID         string `json:"task_id,omitempty"`
	Tags           string `json:"tags,omitempty"` // JSON array string
	CreatedAt      string `json:"created_at"`
	ExpiresAt      string `json:"expires_at,omitempty"`
	LastAccessedAt string `json:"last_accessed_at,omitempty"`
	Source         string `json:"source"`
	Version        int    `json:"version,omitempty"` // Sprint 10.1: current version number (1-indexed)
	// Sprint 10.2: importance weight for decay scoring.
	// "pinned" = never decays. Numeric string (e.g. "0.8") = weight multiplier
	// applied to RecencyDecayScore. Default "1.0" = full recency decay.
	Importance  string `json:"importance,omitempty"`
	AccessCount int    `json:"access_count,omitempty"` // Sprint 11.5: ACT-R frequency counter
}

Memory represents a single memory entry in the unified memories table.

type MemorySearchResult

type MemorySearchResult struct {
	MemoryID       string  `json:"memory_id"`
	Content        string  `json:"content"`
	Tier           string  `json:"tier"`
	EntityID       string  `json:"entity_id,omitempty"`
	Score          float64 `json:"score"`                     // cosine similarity, higher = more relevant
	StaleEmbedding bool    `json:"stale_embedding,omitempty"` // true when anchored entity changed since embedding was computed
}

MemorySearchResult represents a memory matched by vector similarity search.

type MemoryVersion

type MemoryVersion struct {
	ID           string `json:"id"`
	MemoryID     string `json:"memory_id"`
	Version      int    `json:"version"`
	Content      string `json:"content"`
	SupersededBy string `json:"superseded_by"`
	CreatedAt    string `json:"created_at"`    // when this version was originally written
	SupersededAt string `json:"superseded_at"` // when it was replaced
}

MemoryVersion is a historical snapshot preserved when remember() deduplicates. The chain: version N → superseded_by → version N+1 (or current memory ID).

type Message

type Message struct {
	Seq       int64  `json:"seq"`
	ID        string `json:"id"`
	FromAgent string `json:"from_agent"`
	ToAgent   string `json:"to_agent,omitempty"` // empty = broadcast
	Topic     string `json:"topic"`
	Payload   string `json:"payload"` // arbitrary JSON
	ProjectID string `json:"project_id,omitempty"`
	CreatedAt int64  `json:"created_at"`        // Unix seconds
	ReadAt    *int64 `json:"read_at,omitempty"` // nil = unread
}

Message is a single entry in the agent message bus. Agents send messages to specific peers (to_agent non-empty) or broadcast to all agents (to_agent empty). The seq field acts as a cursor for polling.

type OrphanedTask

type OrphanedTask struct {
	TaskID       string `json:"task_id"`
	Title        string `json:"title"`
	Status       string `json:"status"`
	Action       string `json:"action"`        // what the stale session did: "created" | "claimed"
	LikelyStatus string `json:"likely_status"` // "likely_done" | "unclear" | "likely_abandoned"
	Evidence     string `json:"evidence,omitempty"`
}

OrphanedTask is a task started or created by a stale session that was never completed. Surfaced in session_init responses for human-confirmed resolution.

type Plan

type Plan struct {
	ID          string `json:"id"`
	Title       string `json:"title"`
	Description string `json:"description"`
	CreatedBy   string `json:"created_by,omitempty"`
	CreatedAt   string `json:"created_at"`
	UpdatedAt   string `json:"updated_at"`
	// CompletedAt is set (unix seconds) when all tasks in the plan reach done/cancelled.
	// Zero means still active.
	CompletedAt int64 `json:"completed_at,omitempty"`
}

Plan is a named collection of related tasks created during an LLM session. It persists in SQLite so future sessions can resume the agreed work.

type PlanSummary

type PlanSummary struct {
	Plan
	TotalTasks   int  `json:"total_tasks"`
	PendingTasks int  `json:"pending_tasks"`
	DoneTasks    int  `json:"done_tasks"`
	IsCompleted  bool `json:"is_completed"` // true when all tasks are done/cancelled
}

PlanSummary is a plan with task completion counts, used by GetPlans.

type ProjectStat

type ProjectStat struct {
	RepoID    string
	RepoRoot  string
	SavedAt   time.Time
	NodeCount int
	EdgeCount int
	FileCount int
	DBPath    string
}

ProjectStat holds the lightweight per-project metadata that can be read without loading the full graph. It is populated from the meta key-value table.

func ScanAll

func ScanAll() ([]ProjectStat, error)

ScanAll discovers every project that has been indexed by scanning the synapses cache directory for *.db files and reading their meta tables. Results are sorted by SavedAt descending (most recent first).

type QualityGap

type QualityGap struct {
	ID          string `json:"id"` // "{node_id}:{gap_id}"
	NodeID      string `json:"node_id"`
	GapID       string `json:"gap_id"` // slug: "dist-relative-path"
	Description string `json:"description"`
	Severity    string `json:"severity"` // low | medium | high | critical
	Status      string `json:"status"`   // open | fixed | wontfix
	FoundBy     string `json:"found_by,omitempty"`
	FoundAt     string `json:"found_at"`
	UpdatedAt   string `json:"updated_at"`
	FixNotes    string `json:"fix_notes,omitempty"`
}

QualityGap is an agent-discovered quality finding on a specific code entity. Unlike architecture violations (deterministic rule checks), quality gaps are asserted through reasoning — "I examined this function and found this edge case."

type QueryStats

type QueryStats struct {
	IndexHits int
	FullScans int
}

QueryStats reports index coverage for a set of representative hot-path queries: each query is run through EXPLAIN QUERY PLAN and each step is classified as either an index hit ("SEARCH USING INDEX") or a full scan ("SCAN").

Only active when the environment variable SYNAPSES_QUERY_STATS=1 is set. Has no effect on normal query execution — purely observational.

Example output logged to stderr:

synapses: query_stats: edges(to_id,type) SEARCH USING INDEX idx_edges_to_type [hit]
synapses: query_stats: edges(type,to_id) SEARCH USING INDEX idx_edges_type_to [hit]
synapses: query_stats: nodes(type,package) SEARCH USING INDEX idx_nodes_type_pkg [hit]
synapses: query_stats: nodes(package) SEARCH USING INDEX idx_nodes_pkg [hit]
synapses: query_stats: summary — 4 index hits, 0 full scans

type RuleCandidate

type RuleCandidate struct {
	Decision    string `json:"decision"`
	Trigger     string `json:"trigger,omitempty"`
	Occurrences int    `json:"occurrences"`
	EpisodeIDs  string `json:"episode_ids"` // JSON array of ids
}

RuleCandidate is a failure pattern that has appeared enough times to be promoted to a dynamic architectural rule.

type ScoredMemory

type ScoredMemory struct {
	Memory Memory
	Score  float64 // raw BM25 score (higher = better match)
}

ScoredMemory pairs a Memory with a raw channel score for ConvexMerge fusion.

type SearchResult

type SearchResult struct {
	ID        string  `json:"id"`
	Name      string  `json:"name"`
	Signature string  `json:"signature,omitempty"`
	Doc       string  `json:"doc,omitempty"`
	File      string  `json:"file,omitempty"`
	Score     float64 `json:"score"` // higher = more relevant (normalised from BM25)
}

SearchResult is a node that matched a semantic_search query, annotated with its BM25-derived relevance score.

type SessionState

type SessionState struct {
	ID              string   `json:"id"`
	TaskID          string   `json:"task_id"`
	AgentID         string   `json:"agent_id,omitempty"`
	Approach        string   `json:"approach,omitempty"`         // current strategy being taken
	FilesModified   []string `json:"files_modified,omitempty"`   // files being edited
	CompletedSteps  []string `json:"completed_steps,omitempty"`  // what's already done
	RemainingSteps  []string `json:"remaining_steps,omitempty"`  // what still needs doing
	Blockers        []string `json:"blockers,omitempty"`         // any known blockers
	Decisions       []string `json:"decisions,omitempty"`        // key decisions made
	ContextSnapshot string   `json:"context_snapshot,omitempty"` // free-form context dump
	CreatedAt       string   `json:"created_at"`
	UpdatedAt       string   `json:"updated_at"`
}

SessionState captures the precise working state of a task so that a future LLM session can resume from exactly where the previous session stopped. Unlike Task.Notes (append-only audit trail), this is a single mutable snapshot.

type SessionTaskAction

type SessionTaskAction string

SessionTaskAction is the relationship between a session and a task.

const (
	SessionTaskCreated   SessionTaskAction = "created"
	SessionTaskClaimed   SessionTaskAction = "claimed"
	SessionTaskCompleted SessionTaskAction = "completed"
	SessionTaskAbandoned SessionTaskAction = "abandoned"
)

Session task lifecycle actions.

type SessionWorkSummary

type SessionWorkSummary struct {
	SessionID  string   `json:"session_id"`
	AgentID    string   `json:"agent_id"`
	Intent     string   `json:"intent"`
	EntityIDs  []string `json:"entity_ids"`
	FilePaths  []string `json:"file_paths"`
	LastActive string   `json:"last_active"`
}

SessionWorkSummary aggregates a session's entity/file touchpoints for overlap detection.

type SignatureChange

type SignatureChange struct {
	NodeID   string // graph node ID
	Name     string
	NodeType string
	File     string
	Line     int
	OldSig   string // signature before the last SaveGraph
	NewSig   string // current signature
}

SignatureChange records an exported entity whose signature changed in the last SaveGraph.

type StaleSession

type StaleSession struct {
	SessionID     string         `json:"session_id"`
	AgentID       string         `json:"agent_id"`
	StartedAt     string         `json:"started_at"`   // RFC3339 for JSON consumers
	LastSeenAt    string         `json:"last_seen_at"` // RFC3339 for JSON consumers
	Intent        string         `json:"intent,omitempty"`
	ToolCalls     int            `json:"tool_calls"`
	OrphanedTasks []OrphanedTask `json:"orphaned_tasks,omitempty"`
}

StaleSession is a session that timed out without a clean end_session. Surfaced in session_init so the incoming agent can reconcile orphaned tasks.

type Store

type Store struct {

	// MaxMemoryRows caps the total number of memories per project.
	// An agent calling remember() in a loop can fill disk without this cap.
	// 0 means use DefaultMaxMemoryRows. Set via SetMaxMemoryRows().
	MaxMemoryRows int
	// MaxEpisodeRows caps the total number of episodes per project.
	// 0 means use DefaultMaxEpisodeRows. Set via SetMaxEpisodeRows().
	MaxEpisodeRows int
	// contains filtered or unexported fields
}

Store wraps two SQLite databases — one for the code graph (nodes, edges, call sites, file hashes) and one for universal knowledge (memories, episodes, sessions, events, messages, tasks, annotations, rules, gaps). Knowledge-mode projects open only the knowledgeDB; code-mode projects open both.

Cross-DB Consistency Model

graph.db and knowledge.db have NO cross-database transactional guarantee. SQLite does not support transactions spanning two separate database files, and Synapses deliberately avoids ATTACH DATABASE to prevent WAL-locking interactions between the two journals.

Cross-references that span the two databases:

Hard references (node_id is the primary lookup key — orphans cause stale results):

  • annotations.node_id → graphDB nodes.id
  • memory_anchors.node_id → graphDB nodes.id (triggers staleness tracking on node change)
  • quality_gaps.node_id → graphDB nodes.id (gaps for a node persist after the node is renamed/deleted)

Soft references (node IDs stored as JSON in TEXT columns — informational only, not cleaned up):

  • episodes.affected_nodes — historical record; stale IDs do not affect episode retrieval

These hard references may briefly point to non-existent node IDs during a reindex: the file watcher deletes stale graph nodes and re-inserts them as parsing completes, while knowledge records referencing those IDs persist in knowledgeDB. This is an intentional eventual-consistency window, not a data-loss event.

The fail-open design makes this safe:

  • Dangling anchor or annotation references are silently skipped when the referenced node is absent — callers receive fewer results, not an error.
  • PruneStaleData (runs daily) reconciles orphaned annotations AND quality gaps by cross-checking knowledgeDB against the current graphDB node set.
  • The staleness-tracking pipeline re-links anchors when the underlying node is re-added by the watcher after reindex completes.

Future improvement: a startup reconciliation pass in Open() that checks all hard-reference node_ids against graphDB and flags orphans immediately, rather than waiting for the next daily prune cycle.
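
The fail-open behaviour for hard references can be sketched as a filter that silently drops records whose node is absent from the current graph. The types and helper below are illustrative, not part of this package:

```go
package main

import "fmt"

type annotation struct {
	nodeID string
	note   string
}

// liveAnnotations drops annotations whose node_id no longer exists in
// graphDB. Callers get fewer results, never an error; dangling
// references are an expected eventual-consistency state during reindex.
func liveAnnotations(anns []annotation, graphNodes map[string]bool) []annotation {
	var out []annotation
	for _, a := range anns {
		if graphNodes[a.nodeID] {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	anns := []annotation{
		{nodeID: "fn:ParseFile", note: "hot path"},
		{nodeID: "fn:Deleted", note: "stale"},
	}
	graph := map[string]bool{"fn:ParseFile": true}
	fmt.Println(len(liveAnnotations(anns, graph))) // 1
}
```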

func Open

func Open(path string) (*Store, error)

Open opens (or creates) both the graph and knowledge SQLite databases at the given path and applies schema migrations. The graph database lives at path; the knowledge database lives at KnowledgePath(path).

func OpenReadOnly

func OpenReadOnly(path string) (*Store, error)

OpenReadOnly opens an existing SQLite store at path in query-only mode. It does NOT run schema migrations or FTS rebuilds, making it safe to call concurrently with a running MCP server.

func (*Store) ActiveSessionWork

func (s *Store) ActiveSessionWork(projectID, excludeSessionID string, windowMinutes int) ([]SessionWorkSummary, error)

ActiveSessionWork returns aggregated entity/file sets for all sessions in projectID that are NOT excludeSessionID and have been active in the last windowMinutes. Returns nil if no overlapping sessions exist.

func (*Store) AddAnnotation

func (s *Store) AddAnnotation(nodeID, agentID, note string) (string, error)

AddAnnotation attaches a note to a graph node. Source is set to "agent".

func (*Store) AddAnnotationIfNew

func (s *Store) AddAnnotationIfNew(nodeID, agentID, note string, dedupeWindow time.Duration) (string, bool, error)

AddAnnotationIfNew attaches a note to a graph node only when no annotation from the same agentID with the same note content already exists within the last dedupeWindow. Returns the new annotation ID and true on insert, or ("", false, nil) when deduplication suppresses the write.

func (*Store) AddSystemAnnotation

func (s *Store) AddSystemAnnotation(nodeID, note string) (string, error)

AddSystemAnnotation attaches a system-generated retrospective note to a graph node. Unlike AddAnnotation it sets source='system' so callers can distinguish automated notes from agent-authored ones. Used by the Reflective Synthesis auditor that runs when a task is marked done.

func (*Store) AppendEvent

func (s *Store) AppendEvent(typ, agentID, payload string) error

AppendEvent writes a new event to the log and prunes entries older than 24h. Non-fatal: errors are silently ignored by callers to avoid disrupting hot paths.

func (*Store) AppendLedger

func (s *Store) AppendLedger(e LedgerEntry) error

AppendLedger writes a single work ledger entry. Safe to call from bgQueue.

func (*Store) BatchGetNodeEmbeddings

func (s *Store) BatchGetNodeEmbeddings(ids []string) map[string][]float32

BatchGetNodeEmbeddings fetches pre-normalized float32 embedding vectors for a batch of node IDs. IDs with no stored embedding are silently omitted. The returned vectors are already unit-normalized (guaranteed by UpsertEmbedding) so callers can use dot product as cosine similarity.

Queries are chunked at 500 IDs per IN(...) clause to stay under SQLITE_MAX_VARIABLE_NUMBER (999). Returns nil when ids is empty or no rows match.
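
Because stored vectors are unit-normalized, cosine similarity reduces to a plain dot product. A minimal self-contained sketch (the normalize and dot helpers are illustrative, not exported by this package):

```go
package main

import (
	"fmt"
	"math"
)

// normalize scales v to unit length, mirroring the invariant that
// UpsertEmbedding guarantees for stored vectors.
func normalize(v []float32) []float32 {
	var sum float64
	for _, x := range v {
		sum += float64(x) * float64(x)
	}
	n := float32(math.Sqrt(sum))
	out := make([]float32, len(v))
	for i, x := range v {
		out[i] = x / n
	}
	return out
}

// dot equals cosine similarity when both inputs are unit vectors.
func dot(a, b []float32) float32 {
	var s float32
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

func main() {
	a := normalize([]float32{1, 2, 2}) // original length 3
	fmt.Printf("%.3f\n", dot(a, a))    // a unit vector dotted with itself is 1
}
```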

func (*Store) CheckPlanSafety

func (s *Store) CheckPlanSafety(planDesc, projectID string) (*Episode, error)

CheckPlanSafety searches failure episodes for the closest match to planDesc. Returns the top-1 matching failure episode (caller decides relevance). Returns nil, nil when no failure episodes exist yet (cold-start safe).

Uses OR matching (any key term matches) rather than AND matching, because plan descriptions use different words than stored episodes — "change auth handler" should match "modified auth token validation" via shared key terms.
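
The OR matching can be sketched as joining key terms from the plan description into an FTS5 MATCH expression. The term extraction below is a simplified stand-in for whatever the package actually does:

```go
package main

import (
	"fmt"
	"strings"
)

// orQuery joins key terms with OR so any single shared term can surface
// a stored failure episode. Quoting each term keeps FTS5 from treating
// it as query syntax.
func orQuery(planDesc string) string {
	var terms []string
	for _, w := range strings.Fields(strings.ToLower(planDesc)) {
		if len(w) >= 4 { // crude filter: skip very short words
			terms = append(terms, `"`+w+`"`)
		}
	}
	return strings.Join(terms, " OR ")
}

func main() {
	fmt.Println(orQuery("change auth handler"))
	// "change" OR "auth" OR "handler"
}
```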

func (*Store) CheckPlanSafetyCtx

func (s *Store) CheckPlanSafetyCtx(ctx context.Context, planDesc, projectID string) (*Episode, error)

CheckPlanSafetyCtx is the context-aware variant of CheckPlanSafety. The context is threaded into the SQL query — if it expires, the query cancels.

func (*Store) ClearAgentTask

func (s *Store) ClearAgentTask(agentID string) error

ClearAgentTask zeroes the current task fields for the given agent. Call when a task transitions to done/cancelled.

func (*Store) Close

func (s *Store) Close() error

Close releases all database connections (both reader and writer pools).

func (*Store) CollectQueryStats

func (s *Store) CollectQueryStats(w io.Writer) QueryStats

CollectQueryStats runs EXPLAIN QUERY PLAN on the four R32 hot queries and returns a QueryStats summarising how many use an index vs full scan. Call once at startup for observability; does not affect query execution. Pass os.Stderr for production output or io.Discard to suppress output in tests.

func (*Store) ConfirmEdge

func (s *Store) ConfirmEdge(fromID, toID graph.NodeID, relation string, confirmed bool) error

ConfirmEdge updates the human-review status of a persisted edge. confirmed=true → sets confirmed=1 and raises confidence to 1.0 (human-verified, never re-scored). confirmed=false → sets suppressed=1 (human-rejected; reinjectManualEdges and NameMatcher skip it). Returns an error if no matching edge exists in the store.

func (*Store) CorrelateSessionOutcome

func (s *Store) CorrelateSessionOutcome(sessionID, outcome string) (int64, error)

CorrelateSessionOutcome updates all context_deliveries rows for the given session with the resolved task outcome ("success" or "unknown"). Called synchronously from handleEndSession — outcome must be persisted before the session record is cleared so Sprint 11 queries see consistent state. sessionID must be the Synapses session UUID (not the MCP protocol session ID). Only rows with task_outcome='' are updated, making this safe to call multiple times (idempotent: already-correlated rows are never overwritten). Returns the number of rows updated and any database error.

func (*Store) CountActiveAgents

func (s *Store) CountActiveAgents(excludeAgentID string) (int, error)

CountActiveAgents returns the number of agents (excluding agentID) seen within the last 15 minutes.

func (*Store) CountEmbeddableMemories

func (s *Store) CountEmbeddableMemories() int

CountEmbeddableMemories returns the total number of non-expired, non-stale memories (P8-10). This is the denominator for embedding coverage percentage.

func (*Store) CountIndexedFiles

func (s *Store) CountIndexedFiles() (int, error)

CountIndexedFiles returns the number of files currently tracked in the index.

func (*Store) CountProjectSessions

func (s *Store) CountProjectSessions(projectID string) (int, error)

CountProjectSessions returns the total number of sessions (including closed) ever recorded for the given project. Returns 1 on the very first session_init call for a project, allowing callers to detect first-ever-session.

func (*Store) CountUnreadMessages

func (s *Store) CountUnreadMessages(agentID string) (int, error)

CountUnreadMessages returns the number of unread messages visible to agentID (direct messages to the agent or broadcasts). Fast indexed count query.

func (*Store) CreateMemoryVersion

func (s *Store) CreateMemoryVersion(memoryID, oldContent, activeFrom string) (int, error)

CreateMemoryVersion snapshots the current content of a memory as a historical version before the memory is updated (dedup overwrite). Returns the version number.

Temporal semantics:

  • created_at: when this version's content was originally written (the memory's created_at for v1, or the previous version's superseded_at for v2+).
  • superseded_at: NOW — when this version was replaced by the new content.

The live memory row in `memories` always holds the *current* content. Versions hold *previous* content that was active from created_at to superseded_at.

Caller must provide oldContent (the content being replaced) and activeFrom (the memory's created_at, or the last version's superseded_at, marking when the snapshotted content became active).

Concurrency safety: uses INSERT ... SELECT for atomic version numbering. Enforces maxVersionsPerMemory cap — oldest version is pruned when exceeded.

func (*Store) CreatePlan

func (s *Store) CreatePlan(title, description, agentID string, tasks []TaskInput) (planID string, taskIDs []string, err error)

CreatePlan persists a plan and its initial tasks atomically. agentID is optional — if non-empty it records which agent created the plan. Returns the plan ID and the IDs of all created tasks (for session-task linkage).

func (*Store) CrossDomainEdgeStats

func (s *Store) CrossDomainEdgeStats() (auto, confirmed, manual int, err error)

CrossDomainEdgeStats returns aggregate counts of cross-domain edges grouped into three buckets. Only non-suppressed edges are counted.

  • Auto: created by the name-matcher (created_by == "namematcher") and not yet human-reviewed (confirmed == 0).
  • Confirmed: human-approved via confirm_edge (confirmed == 1).
  • Manual: all other non-suppressed edges — created via link_entities or any path other than the name-matcher.

Uses a single aggregating SQL query to avoid loading all edge rows into Go.

func (*Store) DeleteCrossProjectDeps

func (s *Store) DeleteCrossProjectDeps(fromEntity string) error

DeleteCrossProjectDeps removes all cross-project deps for a local entity.

func (*Store) DeleteDynamicRule

func (s *Store) DeleteDynamicRule(ruleID string) (bool, error)

DeleteDynamicRule removes a dynamic rule by ID. Returns (true, nil) when the rule was found and deleted, (false, nil) when no rule with that ID exists.

func (*Store) DeleteManualEdge

func (s *Store) DeleteManualEdge(fromID, toID graph.NodeID, relation string) error

DeleteManualEdge removes a persisted user-defined edge.

func (*Store) DeleteMemoryByID

func (s *Store) DeleteMemoryByID(id string)

DeleteMemoryByID removes a single memory and its satellite rows by ID. Used by benchmarks to clean up test data immediately instead of waiting for TTL.

func (*Store) DeleteMemoryEmbeddings

func (s *Store) DeleteMemoryEmbeddings(memoryIDs []string) error

DeleteMemoryEmbeddings removes embeddings for the given memory IDs. Called during memory expiry cleanup. A no-op when memoryIDs is empty. Processes in batches of 500 to respect SQLite variable limits. Also removes the corresponding entries from the in-memory HNSW index.

func (*Store) DeleteWebCachePrefix

func (s *Store) DeleteWebCachePrefix(prefix string) error

DeleteWebCachePrefix removes all cache entries whose URL starts with prefix. Used to invalidate version-pinned entries when go.mod bumps a package version.

func (*Store) EmbeddingCount

func (s *Store) EmbeddingCount() int

EmbeddingCount returns the total number of stored embeddings.

func (*Store) EndSession

func (s *Store) EndSession(sessionID, reason, outcome, summary string) error

EndSession marks a session as closed with the given reason, outcome, and summary. reason: "clean" (end_session called), "timeout" (manual reconciliation). outcome: "success" | "failure" | "partial" | "unknown".

func (*Store) ExpireMemories

func (s *Store) ExpireMemories() (int64, error)

ExpireMemories deletes memories past their expires_at. Call periodically. Also cleans up orphaned memory_anchors and memory_surfaced rows for deleted memories.

func (*Store) ExportKnowledge

func (s *Store) ExportKnowledge(projectID string) (*KnowledgeExport, error)

ExportKnowledge serializes all durable knowledge to an atomic, consistent snapshot. All 8 queries run inside a single DEFERRED read transaction so concurrent writes do not produce partially-visible state (e.g. an anchor row that references a memory not yet visible in the memories query).

Intentionally excluded: graph nodes/edges (regenerable), file_hashes (ephemeral), tool_calls (analytics), web_cache (transient), sessions, agent_messages, events (operational logs).

All slice fields in the returned KnowledgeExport are non-nil even when empty, so JSON consumers always receive arrays rather than null.

func (*Store) FilterResultsByDomain

func (s *Store) FilterResultsByDomain(results []SearchResult, domain string) []SearchResult

FilterResultsByDomain post-filters search results by node domain. Looks up the domain column for each result and keeps only matching ones.

func (*Store) FindEpisodesByNodeID

func (s *Store) FindEpisodesByNodeID(nodeID string, limit int) ([]Episode, error)

FindEpisodesByNodeID searches episodes where affected_nodes contains the given node ID. Used by federation to find memories anchored to specific entities, which is more precise than text-based FTS search on entity names. Returns up to limit results ordered by recency (newest first).

func (*Store) FindNodesByNameCtx

func (s *Store) FindNodesByNameCtx(ctx context.Context, name string, limit int) ([]SearchResult, error)

FindNodesByNameCtx is the context-aware variant of FindNodesByName.

func (*Store) FindTasksByNodeID

func (s *Store) FindTasksByNodeID(nodeID string, limit int) ([]Task, error)

FindTasksByNodeID searches tasks where linked_nodes contains the given node ID. Returns up to limit results ordered by most recently updated first. Wraps the search term in JSON double quotes so "Auth" does NOT false-match "AuthService" — the closing quote acts as an exact-entry boundary.
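
The exact-entry boundary can be sketched as a substring check against the JSON-quoted node ID; the helper below is illustrative, not the package's actual query:

```go
package main

import (
	"fmt"
	"strings"
)

// containsNodeID reports whether a linked_nodes JSON array contains
// nodeID as an exact entry. The surrounding double quotes act as
// boundaries, so a prefix cannot false-match a longer name.
func containsNodeID(linkedNodesJSON, nodeID string) bool {
	return strings.Contains(linkedNodesJSON, `"`+nodeID+`"`)
}

func main() {
	linked := `["AuthService","TokenCache"]`
	fmt.Println(containsNodeID(linked, "Auth"))        // false: no exact "Auth" entry
	fmt.Println(containsNodeID(linked, "AuthService")) // true
}
```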

func (*Store) GetAgent

func (s *Store) GetAgent(agentID string) (*Agent, error)

GetAgent returns the agent record for the given ID, or nil if not found. Used by context-weighted recall to fetch session state (intent, active task).

func (*Store) GetAgentContext

func (s *Store) GetAgentContext(agentID string) (*AgentContext, error)

GetAgentContext retrieves the context profile for the given agent. Returns nil if no profile exists yet (first session).

func (*Store) GetAgents

func (s *Store) GetAgents() ([]Agent, error)

GetAgents returns all known agents ordered by last_seen descending. Presence is computed from last_seen: active (≤5min), idle (5–15min), inactive (>15min).

func (*Store) GetAllMemoryAnchorNodeIDsInSet

func (s *Store) GetAllMemoryAnchorNodeIDsInSet(memIDs []string, nodeSet map[string]bool) (map[string][]string, error)

GetAllMemoryAnchorNodeIDsInSet returns ALL anchor node IDs in nodeSet for each memory, not just the first. Used by the spreading activation sort step to compute maximum activation across all of a memory's anchors.

Returns map[memoryID → []nodeID]. Memories with no anchors in nodeSet are absent. Batches like GetMemoryAnchorNodeIDsInSet (200 mem IDs per batch).

func (*Store) GetAnchorNodesByFTSQuery

func (s *Store) GetAnchorNodesByFTSQuery(query string, limit int) ([]string, error)

GetAnchorNodesByFTSQuery finds distinct anchor node IDs of memories whose content matches the given FTS5 query. These node IDs seed the graph channel's BFS traversal — structurally-related entities are discovered via graph edges.

Single SQL: memory_anchors JOIN memories JOIN memories_fts. Independent of the BM25 channel (different query path, different purpose). Returns at most limit node IDs. Returns (nil, nil) on empty/invalid query.

func (*Store) GetAnchorNodesByFTSQueryCtx

func (s *Store) GetAnchorNodesByFTSQueryCtx(ctx context.Context, query string, limit int) ([]string, error)

GetAnchorNodesByFTSQueryCtx is the context-aware variant of GetAnchorNodesByFTSQuery.

func (*Store) GetAnnotationsForNodes

func (s *Store) GetAnnotationsForNodes(nodeIDs []string) (map[string][]Annotation, error)

GetAnnotationsForNodes returns all annotations for the given node IDs, keyed by node ID. Returns an empty map if none exist.

func (*Store) GetCrossProjectDeps

func (s *Store) GetCrossProjectDeps(fromEntity string) ([]CrossProjectDep, error)

GetCrossProjectDeps returns all cross-project dependencies for a local entity.

func (*Store) GetCrossProjectDepsByProject

func (s *Store) GetCrossProjectDepsByProject(project string) ([]CrossProjectDep, error)

GetCrossProjectDepsByProject returns all deps targeting a specific sibling project.

func (*Store) GetEpisodes

func (s *Store) GetEpisodes(projectID, agentID, episodeType string, tags []string, limit int, sinceDays int) ([]Episode, error)

GetEpisodes lists episodes with optional filters, ordered by created_at DESC.

func (*Store) GetEvents

func (s *Store) GetEvents(sinceSeq int64, types []string, agentIDFilter string, limit int) ([]Event, int64, error)

GetEvents returns up to limit events with seq > sinceSeq, optionally filtered by event type and/or agent ID. Returns the latest seq seen so the caller can use it as a cursor. Pass agentIDFilter="" to disable agent filtering.

func (*Store) GetGaps

func (s *Store) GetGaps(f GapFilter) ([]QualityGap, error)

GetGaps returns quality gaps matching the filter. When filter.Status is empty it defaults to "open". Pass status="all" to return every status.

func (*Store) GetLastBranch

func (s *Store) GetLastBranch(agentID string) string

GetLastBranch returns the git branch from the most recent ended session for the given agent. Returns "" if no prior session exists or if the previous session never recorded a branch (pre-R22 sessions).

func (*Store) GetLatestWorkSummary

func (s *Store) GetLatestWorkSummary(agentID string) (*Memory, error)

GetLatestWorkSummary returns the most recent session-log work-summary memory for the given agent. Work summaries are stored by handleEndSession with the tag "work_summary" and contain a JSON array of PackageWork entries. Returns nil, nil when no unexpired work summary exists.

func (*Store) GetLearnedEdgeWeights

func (s *Store) GetLearnedEdgeWeights() map[graph.EdgeWeightKey]float64

GetLearnedEdgeWeights returns all per-edge learned weight multipliers. The result is served from an in-memory cache on the hot path (every get_context call) and reloaded from graphDB only when the cache has been invalidated by a write (UpsertLearnedEdgeWeights or MarkDormantEdges). Returns nil when no entries exist yet — neutral for all BFS/PPR multipliers.

func (*Store) GetLearnedEdgeWeightsVersion

func (s *Store) GetLearnedEdgeWeightsVersion() int64

GetLearnedEdgeWeightsVersion returns the current version of the learned weights table. The version increments on every successful write. It is included in CarveConfig and incorporated into the subgraph cache key so that stale subgraphs are automatically evicted after weight updates — without relying on the imprecise len(map) discriminator.

func (*Store) GetMemoriesByAnchorNode

func (s *Store) GetMemoriesByAnchorNode(nodeID string, limit int) ([]Memory, error)

GetMemoriesByAnchorNode returns memories anchored to the given node ID via the memory_anchors junction table. This finds memories linked through anchor_nodes= in remember(), which are NOT discoverable via QueryMemories(entityID=...) alone. Uses the idx_memory_anchors_node index for O(log N) lookup. Only returns non-expired, non-stale memories. Ordered by created_at DESC.

func (*Store) GetMemoriesByAnchorNodes

func (s *Store) GetMemoriesByAnchorNodes(nodeIDs []string, limit int, includeStale bool) ([]Memory, error)

GetMemoriesByAnchorNodes returns memories anchored to ANY of the given node IDs. Uses a batched IN-clause query (batches of 500 for SQLite variable limits). Only returns non-expired, non-stale memories. Ordered by created_at DESC. Deduplicates by memory ID across batches. When includeStale is true, stale memories are also returned.

func (*Store) GetMemoriesByAnchorNodesCtx

func (s *Store) GetMemoriesByAnchorNodesCtx(ctx context.Context, nodeIDs []string, limit int, includeStale bool) ([]Memory, error)

GetMemoriesByAnchorNodesCtx is the context-aware variant of GetMemoriesByAnchorNodes.

func (*Store) GetMemoriesByIDs

func (s *Store) GetMemoriesByIDs(ids []string) ([]Memory, error)

GetMemoriesByIDs returns full Memory structs for the given IDs. Missing IDs are silently skipped. Used by recall() to hydrate vector search results that only contain partial fields.

func (*Store) GetMemoriesByIDsCtx

func (s *Store) GetMemoriesByIDsCtx(ctx context.Context, ids []string) ([]Memory, error)

GetMemoriesByIDsCtx is the context-aware variant of GetMemoriesByIDs.

func (*Store) GetMemoriesWithoutEmbeddings

func (s *Store) GetMemoriesWithoutEmbeddings(limit int) ([]string, error)

GetMemoriesWithoutEmbeddings returns up to limit memory IDs that need (re-)embedding. A memory qualifies if:

  • it has no embedding row at all (new memory)
  • its content_hash no longer matches (content changed since embedding)
  • its embedding is marked stale=1 (model upgrade or anchor invalidation)

Only non-expired, non-stale (memory-level) memories are returned. Pass limit=0 to return all matching IDs (no cap).

func (*Store) GetMemoryAnchorNodeIDsInSet

func (s *Store) GetMemoryAnchorNodeIDsInSet(memIDs []string, nodeSet map[string]bool) (map[string]string, error)

GetMemoryAnchorNodeIDsInSet returns for each memory the first anchor node ID that is present in nodeSet (the BFS-discovered nodes). Returns map[memoryID → nodeID]. Memories with no anchors in nodeSet are absent from the map.

This is the correct method for path reconstruction: a memory may have multiple anchors, and only the one that was actually discovered by BFS is useful for tracing the path back to the query seed. Using GetMemoryAnchorNodeIDs (first by created_at) would silently drop paths when the first anchor is not the BFS-discovered one.

Batches memIDs in groups of 200 (leaves room for nodeSet placeholders within SQLite's 999-variable limit even when nodeSet has up to 500 entries).

func (*Store) GetMemoryAsOf

func (s *Store) GetMemoryAsOf(memoryIDs []string, asOf time.Time) ([]Memory, error)

GetMemoryAsOf returns memories with content as it existed at the given point in time. For each memory in the input set:

  • If memory.created_at > asOf → memory didn't exist yet → excluded
  • If a version existed that was active at asOf (created_at <= asOf < superseded_at) → return that version's content instead of current content
  • If no version covers asOf but the memory existed → current content is returned (the memory was never overwritten, or asOf is after the latest supersession)

All timestamps are UTC RFC3339 (guaranteed by prepareMemory), so string comparison is safe for temporal ordering.
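
Since UTC RFC3339 strings sort lexicographically in temporal order, the as-of rules reduce to plain string comparison. A self-contained sketch of the selection logic (the version type is a hypothetical stand-in for the versions table):

```go
package main

import "fmt"

// version holds content that was active from createdAt (inclusive) to
// supersededAt (exclusive). Both are UTC RFC3339 strings, so string
// comparison is a safe temporal ordering.
type version struct {
	content      string
	createdAt    string
	supersededAt string
}

// contentAsOf returns the content active at asOf, falling back to the
// live content when no version covers that instant.
func contentAsOf(current string, versions []version, asOf string) string {
	for _, v := range versions {
		if v.createdAt <= asOf && asOf < v.supersededAt {
			return v.content
		}
	}
	return current
}

func main() {
	vs := []version{{
		content:      "old note",
		createdAt:    "2026-01-01T00:00:00Z",
		supersededAt: "2026-02-01T00:00:00Z",
	}}
	fmt.Println(contentAsOf("new note", vs, "2026-01-15T00:00:00Z")) // old note
	fmt.Println(contentAsOf("new note", vs, "2026-03-01T00:00:00Z")) // new note
}
```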

func (*Store) GetMemoryContent

func (s *Store) GetMemoryContent(id string) (string, bool)

GetMemoryContent returns the content of a memory by ID. Returns ("", false) if not found.

func (*Store) GetMemoryEmbedding

func (s *Store) GetMemoryEmbedding(memoryID string) []float32

GetMemoryEmbedding returns the stored embedding vector for a memory, or nil if the memory has no embedding or the embedding is stale.

func (*Store) GetMemoryIDsByAnchorNodes

func (s *Store) GetMemoryIDsByAnchorNodes(nodeIDs []string, limit int) ([]string, error)

GetMemoryIDsByAnchorNodes returns the IDs of non-stale, non-expired memories that are anchored to ANY of the given node IDs via the memory_anchors table. Used by the file watcher to cheaply find which memory embeddings to invalidate after a node changes — we only need IDs, not full Memory structs. Processes in batches of 500 to respect SQLite variable limits. Returns (nil, nil) when nodeIDs is empty.

func (*Store) GetMemoryTextForEmbedding

func (s *Store) GetMemoryTextForEmbedding(memoryID string) (string, bool)

GetMemoryTextForEmbedding returns the text content that should be embedded for a memory. Returns ("", false) if the memory does not exist or is expired/stale.

func (*Store) GetMessages

func (s *Store) GetMessages(agentID string, sinceSeq int64, topicFilter string, unreadOnly bool, limit int) ([]Message, int64, error)

GetMessages returns messages visible to agentID with seq > sinceSeq. Visible means: addressed directly to agentID, OR broadcast (to_agent IS NULL). If topicFilter is non-empty, only messages with that exact topic are returned. If unreadOnly is true, only messages where read_at IS NULL are returned. Results are ordered by seq ASC (oldest first) so callers process in order. The returned latestSeq is the highest seq in the result set (use as next sinceSeq).
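
The seq cursor contract can be sketched against an in-memory slice (hypothetical message type; the real method reads from SQLite):

```go
package main

import "fmt"

type message struct {
	seq  int64
	body string
}

// getMessages mimics the cursor contract: return up to limit messages
// with seq > sinceSeq in ascending order, plus the highest seq returned.
func getMessages(all []message, sinceSeq int64, limit int) ([]message, int64) {
	var out []message
	latest := sinceSeq
	for _, m := range all {
		if m.seq > sinceSeq && len(out) < limit {
			out = append(out, m)
			latest = m.seq
		}
	}
	return out, latest
}

func main() {
	all := []message{{1, "a"}, {2, "b"}, {3, "c"}}
	var cursor int64
	for {
		batch, latest := getMessages(all, cursor, 2)
		if len(batch) == 0 {
			break
		}
		for _, m := range batch {
			fmt.Println(m.body)
		}
		cursor = latest // feed latestSeq back in as the next sinceSeq
	}
}
```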

func (*Store) GetNodeTextForEmbedding

func (s *Store) GetNodeTextForEmbedding(nodeID string) (text string, ok bool)

GetNodeTextForEmbedding returns the text that should be embedded for a node. Includes caller/callee context from parser-derived edges (CALLS, IMPLEMENTS). Returns ("", false) if the node does not exist.

func (*Store) GetNodesWithoutEmbeddings

func (s *Store) GetNodesWithoutEmbeddings(limit int) ([]string, error)

GetNodesWithoutEmbeddings returns up to limit node IDs that either have no embedding yet or whose stored content_hash no longer matches the current node text (name+signature+doc). File and package nodes are excluded. Pass limit=0 to return all matching nodes (no cap).

Uses cursor-based pagination (WHERE n.rowid > ? LIMIT ?) to guarantee exactly limit results are returned when enough qualifying rows exist, regardless of the hash-match ratio.

func (*Store) GetOrResumeSession

func (s *Store) GetOrResumeSession(agentID, projectID, mcpSessionID, intent string, reconnectWindow, hibernateWindow int) (sessionID string, resumed bool, hibernateCtx *HibernateResumeContext, err error)

GetOrResumeSession is the single entry point for session creation at session_init time. It handles three scenarios in priority order:

  1. Same-connection resume (Phase 1): if the same MCP connection (identified by mcpSessionID) reconnects within the reconnect window, the existing session is resumed rather than a new row created. Handles MCP transport hiccups and rapid reconnects without creating duplicate rows.

  2. Cross-connection hibernate resume (Phase 2): if no same-connection match is found AND hibernateWindow > 0, Synapses looks for a recent session from the same (agentID, projectID) that went dormant for more than the reconnect window (i.e. not currently live on another connection) but less than the hibernate window (i.e. still within the resumable period). This covers the "user took a break / restarted editor" pattern. On a match, the session's mcp_session_id is updated to the new connection and the row is reactivated. A non-nil HibernateResumeContext is returned so the MCP handler can surface prior intent, summary, and gap duration to the agent.

  3. Fresh session (Phase 3+4): supersede any unclosed sessions for THIS physical connection and create a new row. Concurrent windows on the same project with different mcp_session_ids are never touched.

Parameters:

  • agentID: self-declared agent name (e.g. "claude-code"). May be "anonymous".
  • projectID: stable FNV hash of the project root path.
  • mcpSessionID: MCP transport connection ID ("stdio" for stdio mode).
  • intent: optional declared goal from session_init (may be empty).
  • reconnectWindow: from config.Session.ReconnectWindowSecs; 0 or negative → default (300 s).
  • hibernateWindow: from config.Session.HibernateWindowSecs. 0 → default (14400 s / 4 h). Positive → use that value as the window. Negative (e.g. -1) → disable cross-connection resume entirely.
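
The three phases can be sketched as a decision over the dormancy gap of a candidate session. This is a simplification under stated assumptions: the real method also matches on mcpSessionID, agentID, and projectID, and mutates session rows.

```go
package main

import "fmt"

// resumeDecision classifies a candidate session by its dormancy gap.
// gap, reconnectWindow, and hibernateWindow are in seconds; a negative
// hibernateWindow disables cross-connection resume, mirroring the doc.
func resumeDecision(sameConnection bool, gap, reconnectWindow, hibernateWindow int) string {
	switch {
	case sameConnection && gap <= reconnectWindow:
		return "same-connection resume"
	case hibernateWindow > 0 && gap > reconnectWindow && gap <= hibernateWindow:
		return "hibernate resume"
	default:
		return "fresh session"
	}
}

func main() {
	fmt.Println(resumeDecision(true, 60, 300, 14400))    // same-connection resume
	fmt.Println(resumeDecision(false, 3600, 300, 14400)) // hibernate resume
	fmt.Println(resumeDecision(false, 3600, 300, -1))    // fresh session
}
```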

func (*Store) GetOrphanedTasks

func (s *Store) GetOrphanedTasks(sessionID string) ([]OrphanedTask, error)

GetOrphanedTasks returns tasks that were started or created by the given stale session but never completed, and are still in an active state. These are candidates for human-confirmed reconciliation.

func (*Store) GetPendingTasks

func (s *Store) GetPendingTasks(planID, agentID string) ([]Task, error)

GetPendingTasks returns all tasks with status 'pending' or 'in_progress', ordered by priority (p0 first) then creation time. If planID is non-empty, results are filtered to that plan only. If agentID is non-empty, results are filtered to tasks assigned to that agent. Each task's Blocked/BlockedBy fields are computed from depends_on status.

func (*Store) GetPlans

func (s *Store) GetPlans() ([]PlanSummary, error)

GetPlans returns all plans with task completion summaries, ordered by creation time desc.

func (*Store) GetRuleCandidates

func (s *Store) GetRuleCandidates(minOccurrences int) ([]RuleCandidate, error)

GetRuleCandidates returns failure episodes that have appeared ≥minOccurrences times (matched by decision similarity) and have not yet been promoted to a rule. Uses exact decision-text grouping as a v1 approximation; FTS/vector grouping later.

func (*Store) GetSessionAllDeliveredEntities

func (s *Store) GetSessionAllDeliveredEntities(sessionID string) []string

GetSessionAllDeliveredEntities returns all distinct non-empty entity names that received context delivery in the given session (regardless of task_outcome). Used by Sprint 15 #3 edge-weight refinement at end_session AFTER CorrelateSessionOutcome has set task_outcome — the outcome is already known from the caller's local variable, so no filtering by outcome is needed.

func (*Store) GetSessionContextEntities

func (s *Store) GetSessionContextEntities(sessionID string) []string

GetSessionContextEntities returns the distinct non-empty entity names that received context delivery in the given session and have not yet been assigned a task_outcome. Used by emitAbandonedContextSignals at end_session to emit "task_abandoned" signals before the rows are bulk-updated to "unknown". Returns nil (not an error) when session_id is empty or no rows match.

func (*Store) GetSessionState

func (s *Store) GetSessionState(taskID string) (*SessionState, error)

GetSessionState returns the session state for a task, or nil if none exists.

func (*Store) GetSessionStateForTasks

func (s *Store) GetSessionStateForTasks(taskIDs []string) (map[string]*SessionState, error)

GetSessionStateForTasks returns session states for multiple task IDs, keyed by task_id. Used by GetPendingTasks to inline state into task results.

func (*Store) GetSignatureChanges

func (s *Store) GetSignatureChanges(file string) ([]SignatureChange, error)

GetSignatureChanges returns exported entities in the given file whose signature changed during the last SaveGraph call. The file argument is matched as a suffix (same semantics as Graph.FindByFile) so callers may pass either a relative or an absolute path. Returns an empty slice — not an error — when nothing changed.

func (*Store) GetStaleEmbeddingMemoryIDs

func (s *Store) GetStaleEmbeddingMemoryIDs(limit int) ([]string, error)

GetStaleEmbeddingMemoryIDs returns memory IDs whose embeddings are stale (stale=1 in memory_embeddings) but whose memory records are still valid (non-stale, non-expired). Up to limit IDs are returned. Used by the semantic recall channel to drive lazy re-embedding: stale embeddings are refreshed just before the vector search runs so they participate in scoring instead of being silently excluded.

func (*Store) GetStaleSessions

func (s *Store) GetStaleSessions(projectID, currentSessionID string, staleThreshold time.Duration) ([]StaleSession, error)

GetStaleSessions returns sessions for projectID that have not been seen within staleThreshold and have not been cleanly closed. currentSessionID is excluded so the caller never surfaces its own session. Results are capped at 5.

func (*Store) GetTask

func (s *Store) GetTask(id string) (*Task, error)

GetTask retrieves a single task by ID. Returns an error wrapping sql.ErrNoRows if not found.

func (*Store) GetToolCallSummary

func (s *Store) GetToolCallSummary(sessionID string) (ToolCallSummary, error)

GetToolCallSummary returns aggregated tool call stats for a session. DurationMs is the cumulative sum of all individual call durations — not a wall-clock span — so it correctly reflects actual work time regardless of idle gaps between calls. Returns an empty summary (not an error) when no calls exist for the session.

func (*Store) GetViolationLog

func (s *Store) GetViolationLog(ruleID string, limit int) ([]ViolationLogEntry, error)

GetViolationLog returns up to limit entries from the violation audit log, ordered by last_seen descending (most recent first). If ruleID is non-empty, only violations for that rule are returned.

func (*Store) GetWebCache

func (s *Store) GetWebCache(url string) (*WebCacheEntry, bool)

GetWebCache returns the cached entry for url, or (nil, false) if missing or expired.

func (*Store) HasNoFailureEpisodes

func (s *Store) HasNoFailureEpisodes() bool

HasNoFailureEpisodes reports whether there are zero failure episodes in the store. Uses a fast indexed EXISTS check — avoids the FTS5 scan entirely on cold-start (no episodes recorded yet).

func (*Store) HasTable

func (s *Store) HasTable(name string) bool

HasTable reports whether the given table exists in either the graph or knowledge SQLite database. Used by federation to check if a sibling store has specific tables (e.g., episodes, episodes_fts).

func (*Store) InsertContextDelivery

func (s *Store) InsertContextDelivery(cd ContextDelivery)

InsertContextDelivery records a context delivery row in knowledge.db. Safe to call from a goroutine — uses the single knowledgeDB connection (WAL mode). Errors are silently swallowed: instrumentation must never affect hot-path callers. Rows with empty ToolName are skipped to prevent dirty data in quality analysis.

func (*Store) InsertMemory

func (s *Store) InsertMemory(m Memory) (string, error)

InsertMemory writes a new memory, applying tier-based TTL and noise filtering. Returns the memory ID. Deduplicates against existing memories with similar content. Sprint 10.1: on dedup, the existing memory's old content is snapshotted as a version before it is touched.

func (*Store) InsertMemoryAnchors

func (s *Store) InsertMemoryAnchors(memoryID string, nodeIDs []string) error

InsertMemoryAnchors links a memory to one or more graph node IDs. Used by AM-1: agents pass anchor_nodes when calling remember() to bind codebase-derived beliefs to the graph. AM-2 cascades invalidation via node_id index.

func (*Store) InsertMemoryWithAnchors

func (s *Store) InsertMemoryWithAnchors(m Memory, anchorNodes []string) (string, error)

InsertMemoryWithAnchors atomically inserts a memory and its anchor links in a single transaction. Both the memory INSERT and all anchor INSERTs run inside the same tx — if any step fails, the entire operation rolls back cleanly. If the memory deduplicates against an existing one, anchors are still added to the existing memory (additive enrichment) outside the tx.

func (*Store) InvalidateEmbeddingsByModel

func (s *Store) InvalidateEmbeddingsByModel(currentModel string) (int, error)

InvalidateEmbeddingsByModel marks all memory embeddings generated by a different model as stale. This is the migration step for embedding model upgrades: when the builtin model changes (e.g., MiniLM → nomic-embed-text), old embeddings are in a different vector space and produce meaningless similarity scores against new embeddings. Marking them stale forces re-embedding on the next EmbedAllMemories pass. Returns the number of embeddings invalidated. A no-op when all embeddings already match currentModel or the table is empty.

func (*Store) LinkSessionTask

func (s *Store) LinkSessionTask(sessionID, taskID string, action SessionTaskAction)

LinkSessionTask records the relationship between a session and a task at a point in time. action: "created" | "claimed" | "completed" | "abandoned". Fire-and-forget: all errors silently discarded — must never block task ops.

func (*Store) LoadCallSites

func (s *Store) LoadCallSites() ([]graph.CallSite, error)

LoadCallSites returns all persisted call sites from the last full index. Returns nil (not an error) if the table is empty.

func (*Store) LoadCallSitesForFiles

func (s *Store) LoadCallSitesForFiles(files []string) ([]graph.CallSite, error)

LoadCallSitesForFiles returns stored call sites whose caller_file is in the provided set. Used by the watcher to scope call-site reload to the invalidation set (changed file + its importers) instead of loading the entire call_sites table on every file change.

Falls back to LoadCallSites if files is empty or exceeds 900 entries (SQLite bound-parameter limit is 999; 900 gives safe headroom).
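The 900-entry cutoff hinges on SQLite's bound-parameter ceiling: every file path becomes one `?` placeholder in the IN clause. A minimal sketch of the batching idea, using a hypothetical `chunkIDs` helper that is not part of this package:

```go
package main

import "fmt"

// chunkIDs splits ids into slices of at most size elements, so each
// resulting IN (...) clause stays below SQLite's bound-parameter limit.
func chunkIDs(ids []string, size int) [][]string {
	var out [][]string
	for len(ids) > size {
		out = append(out, ids[:size])
		ids = ids[size:]
	}
	if len(ids) > 0 {
		out = append(out, ids)
	}
	return out
}

func main() {
	ids := make([]string, 2000)
	for i := range ids {
		ids[i] = fmt.Sprintf("id-%d", i)
	}
	// 2000 IDs at a chunk size of 900 yields batches of 900, 900, and 200.
	fmt.Println(len(chunkIDs(ids, 900)))
}
```

LoadCallSitesForFiles opts to fall back to a full load rather than batch, since past 900 files the invalidation set approaches the whole table anyway; MarkAnchoredMemoriesStale (below) takes the batching route instead.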

func (*Store) LoadCallerFilesForPkgAliases

func (s *Store) LoadCallerFilesForPkgAliases(aliases []string) ([]string, error)

LoadCallerFilesForPkgAliases returns distinct caller_file values for stored call sites whose pkg_alias matches any of the provided aliases.

This covers the "new public function" gap: when a file adds a new exported function, other files that import its package may have unresolved call sites (no CALLS edge yet) stored in call_sites with pkg_alias equal to the package name or the filename stem. Querying by pkg_alias finds these latent callers so they are included in the invalidation set and re-resolved on the next analysis pass.

Callers should pass both the package declaration name and the filename stem (e.g., ["models", "user"]) to maximize coverage across languages where the import alias may differ from the package name.
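A sketch of assembling that alias list; the `aliasCandidates` helper is illustrative, not part of this package:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// aliasCandidates returns the alias strings the doc comment recommends
// passing: the package declaration name plus the filename stem, deduplicated
// when they coincide.
func aliasCandidates(pkgName, file string) []string {
	stem := strings.TrimSuffix(filepath.Base(file), filepath.Ext(file))
	if stem == pkgName {
		return []string{pkgName}
	}
	return []string{pkgName, stem}
}

func main() {
	// A Go file internal/models/user.go declaring "package models"
	// yields both candidates: [models user]
	fmt.Println(aliasCandidates("models", "internal/models/user.go"))
}
```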

func (*Store) LoadDynamicRules

func (s *Store) LoadDynamicRules() ([]config.Rule, error)

LoadDynamicRules returns all dynamic rules persisted in the store, ordered by creation time. Called at server startup to restore rules from previous sessions so agents don't need to re-declare them after a restart.

func (*Store) LoadFileMtimes

func (s *Store) LoadFileMtimes() (map[string]int64, error)

LoadFileMtimes returns the stored path→mtime (UnixNano) map from the last successful index. Returns an empty map (not nil) if no data is stored yet.

func (*Store) LoadGraph

func (s *Store) LoadGraph() (*graph.Graph, error)

LoadGraph reads the persisted graph from the store and returns it. Returns (nil, nil) if the store is empty (first run).

func (*Store) LoadIndexSnapshot

func (s *Store) LoadIndexSnapshot() ([]byte, error)

LoadIndexSnapshot returns the raw BLOB previously saved by SaveIndexSnapshot, or (nil, nil) if no snapshot exists.

func (*Store) LoadManualEdges

func (s *Store) LoadManualEdges() ([]ManualEdge, error)

LoadManualEdges returns all persisted user-defined edges.

func (*Store) LogViolations

func (s *Store) LogViolations(vs []config.Violation) error

LogViolations upserts a batch of violations into the audit log. Re-detecting the same violation (same rule+from+to+edge) updates last_seen and increments occurrences instead of creating a duplicate row.

func (*Store) ManualEdgeExists

func (s *Store) ManualEdgeExists(fromID, toID graph.NodeID, relation string) (bool, error)

ManualEdgeExists returns true when a manual edge with the given endpoints and relation exists in the store (regardless of confirmed/suppressed state). Uses the primary-key index — O(log N), no full table scan.

func (*Store) MarkAnchoredMemoriesStale

func (s *Store) MarkAnchoredMemoriesStale(nodeIDs []string, reason string) error

MarkAnchoredMemoriesStale sets stale=1 on all memories that have at least one anchor in nodeIDs. Idempotent: calling twice with the same IDs is safe. reason is stored in stale_reason for surfacing in session_init (AM-3). A no-op when nodeIDs is empty.

nodeIDs is processed in batches of ≤500 to stay under SQLite's default SQLITE_MAX_VARIABLE_NUMBER limit of 999 (Gap 5 fix).

func (*Store) MarkAnnotationsStale

func (s *Store) MarkAnnotationsStale(nodeIDs []string) error

MarkAnnotationsStale marks all annotations on the given node IDs as stale. Called when a node's call-graph changes significantly (fan-in delta >20%) or when a node is removed, so agents see a warning in get_context.

func (*Store) MarkDormantEdges

func (s *Store) MarkDormantEdges(before time.Time)

MarkDormantEdges marks edges whose last_used timestamp is older than before as dormant and applies a one-time dormancyPenalty to weight_mult (floored at learnedWeightFloor). Only edges not already dormant are updated.

The sweep is debounced to at most once per 24 hours: it is safe to call on every end_session, but without the debounce each call would repeat a full-table scan even though the 30-day window means nothing changes 99% of the time.

Errors are silently dropped — this is best-effort maintenance.

func (*Store) MarkEntityMemoriesStaleForNodes

func (s *Store) MarkEntityMemoriesStaleForNodes(nodeIDs []string, reason string) error

MarkEntityMemoriesStaleForNodes marks entity-tier memories stale (stale=1) for all entity IDs in nodeIDs in a single batch. Covers non-anchored entity memories (written with entity_id but no anchor_nodes) that MarkAnchoredMemoriesStale does not reach. nodeIDs is processed in batches of ≤500 to respect SQLite's SQLITE_MAX_VARIABLE_NUMBER limit. reason is stored in stale_reason. A no-op when nodeIDs is empty.

func (*Store) MarkMemoriesSurfaced

func (s *Store) MarkMemoriesSurfaced(agentID string, ids []string) error

MarkMemoriesSurfaced records that the given agent has seen these invalidated memories. Two paths:

  • Named agent (agentID != ""): INSERT into memory_surfaced table. Does NOT touch the legacy surfaced_at column — anonymous sessions must still be able to see these memories via their own fallback path.
  • Anonymous (agentID == ""): SET surfaced_at on the memories table directly. This is the only surfacing mechanism for anonymous sessions.

Idempotent: INSERT OR IGNORE on the composite PK (named); UPDATE is idempotent (anon).

func (*Store) MarkMemoryEmbeddingsStale

func (s *Store) MarkMemoryEmbeddingsStale(memoryIDs []string) error

MarkMemoryEmbeddingsStale sets stale=1 on embeddings for the given memory IDs. This is the foundation for Sprint 10.7 (graph-anchored embedding invalidation): when a file watcher detects an entity change, embeddings of memories anchored to that entity are marked stale. On next recall(), stale embeddings are re-embedded before scoring. Idempotent. A no-op when memoryIDs is empty. Processes in batches of 500 to respect SQLite variable limits. Also removes stale entries from the HNSW index — stale vectors shouldn't participate in ANN search until re-embedded.

func (*Store) MarkRead

func (s *Store) MarkRead(messageID, agentID string) error

MarkRead stamps a direct message as read by the given agent. Only direct messages (to_agent matches) can be marked read — broadcasts (to_agent IS NULL) remain visible to all agents and cannot be marked read. Calling MarkRead on an already-read message is a no-op (idempotent).

func (*Store) MemoryVectorSearchWithThreshold

func (s *Store) MemoryVectorSearchWithThreshold(queryVec []float32, limit int, minScore float64) ([]MemorySearchResult, error)

MemoryVectorSearchWithThreshold performs cosine similarity search with a minimum similarity threshold. Results below the threshold are excluded. Useful for recall() where low-confidence matches should not pollute results.

Uses HNSW fast path when available, with threshold applied post-search. Falls back to brute-force scan when HNSW index is not ready.
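The brute-force path amounts to scoring every stored vector by cosine similarity and dropping results under minScore. An illustrative standalone version of the scoring function (not this package's internals):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors:
// dot(a,b) / (|a| * |b|), or 0 when either vector is all zeros.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	q := []float32{1, 0}
	// An identical vector clears a 0.7 threshold; an orthogonal one is filtered.
	fmt.Println(cosine(q, []float32{1, 0}) >= 0.7)
	fmt.Println(cosine(q, []float32{0, 1}) >= 0.7)
}
```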

func (*Store) MemoryVectorSearchWithThresholdCtx

func (s *Store) MemoryVectorSearchWithThresholdCtx(ctx context.Context, queryVec []float32, limit int, minScore float64) ([]MemorySearchResult, error)

MemoryVectorSearchWithThresholdCtx is the context-aware variant of MemoryVectorSearchWithThreshold.

func (*Store) NodeCount

func (s *Store) NodeCount() int

NodeCount returns the number of nodes currently stored in the graph database.

func (*Store) NodeExistsByNameCtx

func (s *Store) NodeExistsByNameCtx(ctx context.Context, name string) (bool, error)

NodeExistsByNameCtx is the context-aware variant of NodeExistsByName. The context is threaded into the SQL query — if the context expires, the query is cancelled.

func (*Store) NodeHNSWSearch

func (s *Store) NodeHNSWSearch(queryVec []float32, limit int) (results []scoredID)

NodeHNSWSearch returns top-k candidate node IDs from the node HNSW index. Protected with recover: coder/hnsw panics on dimension mismatch.

func (*Store) PruneExpiredWebCache

func (s *Store) PruneExpiredWebCache() error

PruneExpiredWebCache removes all entries whose TTL has elapsed.

func (*Store) PruneLedger

func (s *Store) PruneLedger(maxAge time.Duration) (int64, error)

PruneLedger deletes work ledger entries older than the given duration.

func (*Store) PruneOldSessions

func (s *Store) PruneOldSessions(age time.Duration) (int64, error)

PruneOldSessions deletes sessions (and their linked session_tasks rows via CASCADE) that have been closed or hibernated for longer than age. This prevents unbounded growth in long-running installations.

Safe to call on every startup or from a periodic goroutine — a built-in 24-hour debounce ensures the DELETE runs at most once per day regardless of how many callers invoke it.

At ~5 sessions/day a 90-day window keeps fewer than 450 rows; the DELETE itself is effectively instantaneous at that scale.

func (*Store) PruneStaleData

func (s *Store) PruneStaleData(ctx context.Context, retentionDays int)

PruneStaleData removes old rows from tables that grow unbounded over time. retentionDays controls the cutoff; rows older than that are deleted. Safe to call concurrently — a built-in 23-hour debounce ensures at most one prune runs per day regardless of how many goroutines invoke it. Intended to be called at startup and then on a daily timer.

func (*Store) PruneStaleSyntheticEdges

func (s *Store) PruneStaleSyntheticEdges(g *graph.Graph) error

PruneStaleSyntheticEdges removes synthetic MENTIONS edges (created_by="namematcher") whose from_id or to_id no longer exists in g. Called at the start of each name-matching pass so stale DB entries from renamed/deleted entities do not accumulate indefinitely. Errors are logged but do not block the caller — a stale DB row is not a hard failure.

func (*Store) PruneToolCallsOlderThan

func (s *Store) PruneToolCallsOlderThan(age time.Duration) (int64, error)

PruneToolCallsOlderThan deletes tool_calls rows older than age. Returns the number of rows deleted. Safe to call concurrently — a built-in 1-hour debounce ensures at most one prune runs per hour regardless of how many goroutines invoke it (e.g. multiple parallel session_init calls). At 50 calls/session × 10 sessions/day, a 7-day window is ~35 KB.

func (*Store) QueryInvalidatedMemories

func (s *Store) QueryInvalidatedMemories(agentID string, limit int) ([]InvalidatedMemory, error)

QueryInvalidatedMemories returns stale memories that have not yet been surfaced to the given agent. Per-agent tracking: each agent has its own surfacing record in memory_surfaced, so every agent sees invalidated memories independently. Capped at limit rows, ordered by staled_at DESC so the most recently invalidated beliefs appear first. Used by AM-3: session_init surfaces these once, then MarkMemoriesSurfaced records the (memory_id, agent_id) pair so they don't re-appear for that agent.

func (*Store) QueryMemories

func (s *Store) QueryMemories(tier, entityID, agentID string, limit int) ([]Memory, error)

QueryMemories retrieves memories matching the given filters. All filter params are optional (empty string = no filter applied for that field). NOTE: passing an empty entityID does NOT filter by entity — memories for all entities are returned. Use QueryMemoriesForEntities for multi-entity batched lookups.

func (*Store) QueryMemoriesForEntities

func (s *Store) QueryMemoriesForEntities(entityIDs []string, limit int) (map[string][]Memory, error)

QueryMemoriesForEntities retrieves entity-tier memories for multiple entity IDs. Returns a map of entityID → []Memory. Non-expired only.

func (*Store) QueryMemoriesIncludingStale

func (s *Store) QueryMemoriesIncludingStale(tier, entityID, agentID string, limit int) ([]Memory, error)

QueryMemoriesIncludingStale is like QueryMemories but returns both active and stale memories. Use for audit scenarios (e.g. recall(include_stale=true)) where the agent explicitly wants to see the full history including invalidated entries.

func (*Store) QueryRecentSessionMemories

func (s *Store) QueryRecentSessionMemories(agentID string, limit int) ([]Memory, error)

QueryRecentSessionMemories retrieves the most recent session-log memories for the given agent, ordered newest-first. Returns at most limit rows.

func (*Store) RebuildMemoryHNSW

func (s *Store) RebuildMemoryHNSW()

RebuildMemoryHNSW loads all non-stale memory embeddings from SQLite and builds the HNSW index from scratch. Called during Store.Open() and can be called to rebuild after bulk operations (e.g., model migration).

This is O(N log N) where N is the number of embeddings. For 10K embeddings at 384 dims, this takes ~200-500ms. For 50K, ~2-5s. Acceptable at startup.

Thread-safe: acquires exclusive lock on hnswMemMu for the entire rebuild. Callers should not hold hnswMemMu when calling this method.

func (*Store) RebuildNodeHNSW

func (s *Store) RebuildNodeHNSW()

RebuildNodeHNSW loads all node embeddings from graphDB into an HNSW index.

func (*Store) RecallEpisodes

func (s *Store) RecallEpisodes(query, projectID, agentID, episodeType, outcomeFilter string, limit, sinceDays int) ([]Episode, error)

RecallEpisodes performs an FTS5 BM25 search over episodes matching query. Optional filters: projectID (empty = all), agentID (empty = all), episodeType (empty = all), outcomeFilter (empty = all). sinceDays limits results to the last N days (0 = no time filter). Returns up to limit results ordered by relevance (best match first). v1 uses a top-N strategy with no score threshold — the caller decides relevance.

func (*Store) RecallEpisodesCtx

func (s *Store) RecallEpisodesCtx(ctx context.Context, query, projectID, agentID, episodeType, outcomeFilter string, limit, sinceDays int) ([]Episode, error)

RecallEpisodesCtx is like RecallEpisodes but accepts a context for cancellation. Use this variant in federation cross-project search where a hung sibling store should not block the caller for the full busy timeout.

func (*Store) RecentMemories

func (s *Store) RecentMemories(limit, sinceDays int, until *time.Time, includeStale bool) ([]Memory, error)

RecentMemories returns the N most recent non-expired, non-stale memories regardless of text match. This is the data source for the temporal channel in quad-channel recall — it finds memories relevant by recency alone. sinceDays limits the lookback window (0 = 7 days default). until optionally caps the upper bound on created_at (nil = no upper bound). When includeStale is true, stale memories are also returned.

func (*Store) RecentMemoriesCtx

func (s *Store) RecentMemoriesCtx(ctx context.Context, limit, sinceDays int, until *time.Time, includeStale bool) ([]Memory, error)

RecentMemoriesCtx is the context-aware variant of RecentMemories.

func (*Store) RecordToolCall

func (s *Store) RecordToolCall(toolName, agentID, sessionID, entity string, durationMs int64, success bool)

RecordToolCall inserts one row into the tool_calls table. All errors are silently discarded — observability must never block the hot path. sessionID is the Synapses session UUID from CreateSession; empty for pre-init calls.

func (*Store) ReinjectManualEdges

func (s *Store) ReinjectManualEdges(g *graph.Graph) error

ReinjectManualEdges loads all persisted manual edges and adds them to g. Safe to call after any graph rebuild — AddEdge is idempotent and silently drops edges whose endpoints no longer exist.

func (*Store) RememberEpisode

func (s *Store) RememberEpisode(e Episode) (string, error)

RememberEpisode inserts a new episode and keeps the FTS5 index in sync.

func (*Store) SaveCallSites

func (s *Store) SaveCallSites(sites []graph.CallSite) error

SaveCallSites replaces the persisted call-site table with the given sites.

func (*Store) SaveDiscoveryEdges

func (s *Store) SaveDiscoveryEdges(edges []graph.Edge) error

SaveDiscoveryEdges persists a batch of edges created by post-embed discovery passes (DiscoverDocCodeRelations, DiscoverEmbedRelations). Uses INSERT OR IGNORE so existing edges are not duplicated. This is the lightweight alternative to SaveGraph for persisting edges created outside the normal parse→resolve→save cycle.

func (*Store) SaveFileMtimes

func (s *Store) SaveFileMtimes(m map[string]int64) error

SaveFileMtimes replaces the stored file-mtime table with the provided map. m maps absolute file path → mtime in UnixNano.

func (*Store) SaveGraph

func (s *Store) SaveGraph(g *graph.Graph) error

SaveGraph persists all nodes and edges of g, replacing any existing data. A metadata record stores the repo ID and the save timestamp.

func (*Store) SaveGraphDelta

func (s *Store) SaveGraphDelta(changedFile string, g *graph.Graph) error

SaveGraphDelta persists only the nodes and edges for changedFile, replacing any existing data for that file in a single transaction. This reduces write amplification from O(total graph) to O(changed file) — roughly 95% fewer writes for a typical single-file edit.

Incoming edges to nodes that were deleted from changedFile are also removed (to prevent unbounded accumulation over long sessions). Incoming edges to nodes that still exist in changedFile are preserved — their from_id is in another file and unaffected by this delta.

If changedFile is empty, falls back to the full SaveGraph.

func (*Store) SaveIndexSnapshot

func (s *Store) SaveIndexSnapshot(blob []byte) error

SaveIndexSnapshot persists a zstd-compressed GraphIndex BLOB to the meta table.

func (*Store) SaveManualEdge

func (s *Store) SaveManualEdge(fromID, toID graph.NodeID, relation, domain, createdBy string, confidence float64, clearSuppressed bool) (ManualEdge, error)

SaveManualEdge persists a user-defined edge. Upserts on (from_id, to_id, relation). Returns the actual stored row so callers see the true confirmed/suppressed state.

clearSuppressed=true — human-initiated call (link_entities): resets suppressed=0 so a previously-rejected edge becomes active again. Does NOT touch confirmed — the confirmed flag is owned exclusively by ConfirmEdge.

clearSuppressed=false — automated call (namematcher): preserves the existing confirmed and suppressed flags; also guards confirmed-edge confidence against downgrade.

func (*Store) SaveSyntheticEdge

func (s *Store) SaveSyntheticEdge(fromID, toID graph.NodeID, edgeType graph.EdgeType, confidence float64) (ManualEdge, error)

SaveSyntheticEdge persists a synthetic MENTIONS edge created by the name matcher. Convenience wrapper around SaveManualEdge with "namematcher" as the creator and DomainKnowledge as the domain. Idempotent — upserts update confidence on re-run but never downgrade a human-confirmed edge's confidence or clear suppression.

func (*Store) SavedAt

func (s *Store) SavedAt() (time.Time, error)

SavedAt returns the timestamp of the last SaveGraph call, or zero if absent.

func (*Store) SearchMemories

func (s *Store) SearchMemories(query string, limit int) ([]Memory, error)

SearchMemories performs FTS5 BM25 full-text search over memory content. Returns non-expired memories ordered by relevance (best match first). The query uses FTS5 query syntax — each space-separated word is an implicit AND term.

func (*Store) SearchMemoriesCtx

func (s *Store) SearchMemoriesCtx(ctx context.Context, query string, limit int) ([]Memory, error)

SearchMemoriesCtx is the context-aware variant of SearchMemories.

func (*Store) SearchMemoriesIncludingStale

func (s *Store) SearchMemoriesIncludingStale(query string, limit int) ([]Memory, error)

SearchMemoriesIncludingStale is like SearchMemories but also returns stale memories. Use for audit scenarios where the agent explicitly passes include_stale=true to recall().

func (*Store) SearchMemoriesIncludingStaleCtx

func (s *Store) SearchMemoriesIncludingStaleCtx(ctx context.Context, query string, limit int) ([]Memory, error)

SearchMemoriesIncludingStaleCtx is the context-aware variant of SearchMemoriesIncludingStale.

func (*Store) SearchMemoriesWithScores

func (s *Store) SearchMemoriesWithScores(query string, limit int, includeStale bool) ([]ScoredMemory, error)

SearchMemoriesWithScores returns memories matching an FTS query along with their raw BM25 scores. Used by ConvexMerge to do score-magnitude-aware fusion instead of rank-only RRF.

func (*Store) SearchMemoriesWithScoresCtx

func (s *Store) SearchMemoriesWithScoresCtx(ctx context.Context, query string, limit int, includeStale bool) ([]ScoredMemory, error)

SearchMemoriesWithScoresCtx is the context-aware variant of SearchMemoriesWithScores.

func (*Store) SemanticSearch

func (s *Store) SemanticSearch(query string, limit int) ([]SearchResult, error)

SemanticSearch queries the FTS5 index using BM25 ranking. Returns up to limit results ordered by relevance. Column weights: name=10, split_name=8, signature=5, doc=2 — exact name matches rank highest. The query is sanitized to avoid FTS5 syntax errors; on failure a LIKE fallback is used.
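One plausible sanitization approach is to quote every whitespace-separated token so characters that FTS5 treats as operators (-, *, unbalanced quotes) are matched literally; quoted phrases separated by spaces are still implicitly ANDed. This sketch is illustrative and not necessarily what SemanticSearch itself does:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFTS wraps each token in double quotes (doubling any embedded
// quote, per FTS5 string syntax) so the query cannot trigger a syntax error.
func sanitizeFTS(query string) string {
	fields := strings.Fields(query)
	for i, f := range fields {
		fields[i] = `"` + strings.ReplaceAll(f, `"`, `""`) + `"`
	}
	return strings.Join(fields, " ")
}

func main() {
	// "foo-bar" would otherwise parse the hyphen as a NOT-style operator.
	fmt.Println(sanitizeFTS("foo-bar baz"))
}
```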

func (*Store) SemanticSearchWithDomain

func (s *Store) SemanticSearchWithDomain(query string, limit int, domain string) ([]SearchResult, error)

SemanticSearchWithDomain queries the FTS5 index filtered by node domain. When domain is non-empty, results are restricted to nodes whose domain column matches (e.g. "docs" for doc sections, "code" for code entities). When domain is empty, behaves identically to SemanticSearch (no filtering).

func (*Store) SendMessage

func (s *Store) SendMessage(fromAgent, toAgent, topic, payload, projectID string) (string, error)

SendMessage stores a new inter-agent message and returns its ID.

func (*Store) SessionLedgerEntities

func (s *Store) SessionLedgerEntities(sessionID string) (entityIDs, filePaths []string, err error)

SessionLedgerEntities returns the deduplicated set of entity IDs and file paths recorded in the work ledger for a given session. Used for session resumption.

func (*Store) SetSemanticDedupFunc

func (s *Store) SetSemanticDedupFunc(fn func(text string) ([]float32, error))

SetSemanticDedupFunc sets the embedding function used for semantic dedup in prepareMemory. When Jaccard similarity is inconclusive (0.5–0.85), the function embeds the new content and compares against the candidate's stored embedding. Pass nil to disable semantic dedup (default).

func (*Store) SetSessionBranch

func (s *Store) SetSessionBranch(sessionID, branch string)

SetSessionBranch records the current git branch for a session. Fire-and-forget: used by session_init to persist branch state.

func (*Store) SetTaskCommits

func (s *Store) SetTaskCommits(taskID string, commits []string) error

SetTaskCommits stores the git log lines captured at task completion. commits may be nil (no commits made, or git unavailable) — stored as '[]'. This is a write-once operation per task: called once at update_task(done).

func (*Store) SetTaskStartCommit

func (s *Store) SetTaskStartCommit(taskID, sha string) error

SetTaskStartCommit records the git HEAD SHA captured when a task was set to in_progress. It is a no-op (not an error) when sha is empty, ensuring graceful degradation when git is unavailable.

func (*Store) Stat

func (s *Store) Stat(dbPath string) (*ProjectStat, error)

Stat reads only the meta key-value table and returns a ProjectStat without loading any nodes or edges. This is the fast path used by 'synapses list'.

func (*Store) ToolUsageStats

func (s *Store) ToolUsageStats(days, limit int) ([]ToolUsageStat, error)

ToolUsageStats returns the top-N tools by call count over the last `days` days.

func (*Store) TouchMemory

func (s *Store) TouchMemory(id string) error

TouchMemory updates last_accessed_at and extends expires_at by 50% of the tier's base TTL (capped at 2x base). This implements access-based decay renewal — memories that prove useful stay alive longer.

func (*Store) TouchSession

func (s *Store) TouchSession(sessionID string)

TouchSession updates last_seen_at, increments tool_calls, and ensures the session state is 'active' (a hibernated session becomes active again on first tool call). Fire-and-forget: all errors silently discarded (< 0.5 ms per call).

func (*Store) UpdateCallSitesForFile

func (s *Store) UpdateCallSitesForFile(file string, newSites []graph.CallSite) error

UpdateCallSitesForFile atomically replaces the persisted call sites whose caller_file matches file with newSites. This is used by the watcher after an incremental re-parse so the stored call-site table stays consistent with the live graph without a full table replacement.

func (*Store) UpdateLinkedNodes

func (s *Store) UpdateLinkedNodes(taskID string, nodeIDs []string) error

UpdateLinkedNodes replaces the linked_nodes for a task with nodeIDs. Call with the full desired set (existing + newly detected); deduplication is the caller's responsibility. Used by handleLinkTaskNodes and autoLinkNodes.

func (*Store) UpdateMemoryContent

func (s *Store) UpdateMemoryContent(memoryID, newContent string) error

UpdateMemoryContent updates the content of an existing memory in-place. Called after versioning to store the new (dedup-winning) content.

func (*Store) UpdateTask

func (s *Store) UpdateTask(id, status, appendNotes, agentID string) (unblocked []string, planCompleted bool, err error)

UpdateTask changes the status and optionally appends notes to a task. agentID is optional — if non-empty it is recorded as the last_updated_by agent. Appended notes are prefixed with a timestamp so they form an audit trail. Returns:

  • unblocked: task IDs that became unblocked (only meaningful when status=="done")
  • planCompleted: true when this update caused the parent plan to auto-complete (all tasks in the plan are now done/cancelled)

func (*Store) UpdateVerifiedCommit

func (s *Store) UpdateVerifiedCommit(toProject, toEntity, newCommit string) error

UpdateVerifiedCommit updates the verified_commit for a dependency after confirming the sibling entity hasn't drifted.

func (*Store) UpsertAgent

func (s *Store) UpsertAgent(id string, activity *AgentActivity) error

UpsertAgent records that an agent was seen. activity is optional — when nil, only last_seen is touched (fast path). When non-nil, non-empty fields replace existing values; empty fields leave existing values untouched.

func (*Store) UpsertAgentContext

func (s *Store) UpsertAgentContext(ac *AgentContext) error

UpsertAgentContext creates or updates the context profile for an agent. Called after session_init to record what the agent has received.

func (*Store) UpsertCrossProjectDep

func (s *Store) UpsertCrossProjectDep(dep CrossProjectDep) error

UpsertCrossProjectDep inserts or updates a cross-project dependency.

func (*Store) UpsertDynamicRule

func (s *Store) UpsertDynamicRule(r config.Rule) error

UpsertDynamicRule persists a dynamic architectural rule to the store. If a rule with the same ID already exists it is fully replaced; otherwise a new row is inserted. The rule takes effect in-memory immediately — see Server.handleUpsertRule for the in-memory update that accompanies this call.

func (*Store) UpsertEmbedding

func (s *Store) UpsertEmbedding(nodeID, model string, vec []float32) error

UpsertEmbedding stores or replaces the vector embedding for a graph node. vec is encoded as a little-endian float32 BLOB. model is the model name used to generate the embedding (for cache invalidation when the model changes). A content_hash of the node's name+signature+doc is computed and stored so that GetNodesWithoutEmbeddings can detect stale embeddings when the code changes. Thread-safe: each call is a single UPSERT.
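
The BLOB layout described above (little-endian float32, 4 bytes per element) can be sketched with a pair of helpers. This illustrates the storage format only; the function names are hypothetical, not the package's internals:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// encodeVec packs a []float32 into a little-endian BLOB, 4 bytes per element.
func encodeVec(vec []float32) []byte {
	buf := make([]byte, 4*len(vec))
	for i, f := range vec {
		binary.LittleEndian.PutUint32(buf[4*i:], math.Float32bits(f))
	}
	return buf
}

// decodeVec is the inverse: reads 4-byte little-endian words back to float32.
func decodeVec(blob []byte) []float32 {
	vec := make([]float32, len(blob)/4)
	for i := range vec {
		vec[i] = math.Float32frombits(binary.LittleEndian.Uint32(blob[4*i:]))
	}
	return vec
}

func main() {
	blob := encodeVec([]float32{1.5, -2.25})
	fmt.Println(len(blob), decodeVec(blob)) // 8 [1.5 -2.25]
}
```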

func (*Store) UpsertFileMtime

func (s *Store) UpsertFileMtime(path string, mtime int64) error

UpsertFileMtime updates (or inserts) the stored mtime for a single file. Used by the watcher after a hot-reload to keep file_hashes current without rewriting the entire table (which would require loading all other entries).

func (*Store) UpsertGap

func (s *Store) UpsertGap(g QualityGap) (QualityGap, error)

UpsertGap creates or updates a quality gap. The primary key is "{nodeID}:{gapID}" — re-calling with the same pair updates the record.

func (*Store) UpsertLearnedEdgeWeights

func (s *Store) UpsertLearnedEdgeWeights(edges []graph.EdgeWeightKey, delta float64)

UpsertLearnedEdgeWeights applies a signed delta to each of the given edges' weight_mult, clamping to [learnedWeightFloor, learnedWeightCap]. last_used is set to now for all updated edges. dormant is cleared on update (the agent just used the edge, so it is no longer dormant). All edges are written in a single transaction for atomicity and performance. Errors are silently dropped — this is best-effort instrumentation.
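
The clamping step can be sketched in isolation. The actual values of learnedWeightFloor and learnedWeightCap are unexported and not shown in the docs, so the constants below are placeholder assumptions:

```go
package main

import "fmt"

// Placeholder bounds — the package's real learnedWeightFloor and
// learnedWeightCap values are not documented here.
const (
	learnedWeightFloor = 0.25
	learnedWeightCap   = 4.0
)

// applyDelta adds a signed delta to an edge's weight_mult and clamps the
// result to [learnedWeightFloor, learnedWeightCap].
func applyDelta(weightMult, delta float64) float64 {
	w := weightMult + delta
	if w < learnedWeightFloor {
		return learnedWeightFloor
	}
	if w > learnedWeightCap {
		return learnedWeightCap
	}
	return w
}

func main() {
	fmt.Println(applyDelta(1.0, 0.1)) // small positive reinforcement
	fmt.Println(applyDelta(1.0, 10))  // clamped to the cap
	fmt.Println(applyDelta(0.3, -1))  // clamped to the floor
}
```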

func (*Store) UpsertMemoryEmbedding

func (s *Store) UpsertMemoryEmbedding(memoryID, model string, vec []float32) error

UpsertMemoryEmbedding stores or replaces the vector embedding for a memory. vec is encoded as a little-endian float32 BLOB. model is the model name used to generate the embedding (for cache invalidation when the model changes). A content_hash of the memory's content is computed and stored so that GetMemoriesWithoutEmbeddings can detect stale embeddings when memory content changes. Thread-safe: each call is a single UPSERT.

func (*Store) UpsertSessionState

func (s *Store) UpsertSessionState(state SessionState) error

UpsertSessionState saves or replaces the session state for a task. Each task has at most one session_state row (keyed by task_id). agentID is optional metadata for auditing.

func (*Store) UpsertWebCache

func (s *Store) UpsertWebCache(url, content string, ttlHours int) error

UpsertWebCache inserts or replaces a web cache entry. ttlHours=0 means never expire (used for version-pinned package docs).

func (*Store) VectorSearch

func (s *Store) VectorSearch(queryVec []float32, limit int) ([]SearchResult, error)

VectorSearch performs cosine similarity search over all stored node embeddings. Returns up to limit results ordered by descending similarity. Falls back gracefully with (nil, nil) when no embeddings are stored yet.

Uses HNSW approximate nearest-neighbor index when available (Sprint 12 #4):

  • O(log N) query time vs O(N) brute-force
  • 3× oversampling for ≥95% recall, then Pass 2 fetches node metadata

Falls back to brute-force scan when HNSW index is empty.
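
The similarity metric used by the brute-force path can be sketched as a standalone function (an illustration of cosine similarity over float32 vectors, not the package's implementation):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns dot(a, b) / (|a| * |b|) for equal-length vectors,
// accumulating in float64 for precision. Returns 0 for a zero vector.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	fmt.Println(cosine([]float32{1, 0}, []float32{1, 0}))  // identical: 1
	fmt.Println(cosine([]float32{1, 0}, []float32{0, 1}))  // orthogonal: 0
	fmt.Println(cosine([]float32{1, 0}, []float32{-1, 0})) // opposite: -1
}
```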

func (*Store) ViolationIDsForFile

func (s *Store) ViolationIDsForFile(file string) (map[string]struct{}, error)

ViolationIDsForFile returns the set of stable violation IDs already recorded in the log whose from_node or to_node contains the given file path as a substring. Used by the watcher to distinguish newly-detected violations (which should trigger an event) from pre-existing ones (which should not).

type Task

type Task struct {
	ID            string   `json:"id"`
	PlanID        string   `json:"plan_id"`
	Title         string   `json:"title"`
	Description   string   `json:"description"`
	Status        string   `json:"status"`       // pending | in_progress | done | cancelled
	Priority      string   `json:"priority"`     // p0 | p1 | p2 | p3
	LinkedNodes   []string `json:"linked_nodes"` // node IDs related to this task
	DependsOn     []string `json:"depends_on"`   // task IDs that must complete first
	Notes         string   `json:"notes"`        // append-only notes from each session
	AssignedTo    string   `json:"assigned_to,omitempty"`
	LastUpdatedBy string   `json:"last_updated_by,omitempty"`
	// R21: Commit tracking — populated when git is available in the project root.
	// StartCommit is the HEAD SHA captured when the task was set to in_progress.
	// CommitsSinceStart is the git log since StartCommit, captured at done time.
	// Both are empty when git is unavailable. Commits are repo-wide (not per-agent).
	StartCommit       string   `json:"start_commit,omitempty"`
	CommitsSinceStart []string `json:"commits_since_start,omitempty"`
	// Computed fields — not stored in DB; set by GetPendingTasks.
	Blocked   bool     `json:"blocked,omitempty"`
	BlockedBy []string `json:"blocked_by,omitempty"`
	CreatedAt string   `json:"created_at"`
	UpdatedAt string   `json:"updated_at"`
}

Task is a single actionable work item belonging to a plan. Status flows: pending → in_progress → done (or cancelled).

type TaskInput

type TaskInput struct {
	Title       string   `json:"title"`
	Description string   `json:"description"`
	Priority    string   `json:"priority"`
	LinkedNodes []string `json:"linked_nodes"`
	DependsOn   []string `json:"depends_on"` // task IDs that must be done before this task
}

TaskInput is used when creating a batch of tasks inside CreatePlan.

func (*TaskInput) UnmarshalJSON

func (t *TaskInput) UnmarshalJSON(data []byte) error

UnmarshalJSON allows `priority` to be either a string ("p0", "p1") or a number (0, 1, 2). LLMs naturally emit integer priorities; this coerces them to the internal string format.

type ToolCallCount

type ToolCallCount struct {
	ToolName string `json:"tool_name"`
	Count    int    `json:"count"`
}

ToolCallCount is a single tool with its invocation count in a session.

type ToolCallSummary

type ToolCallSummary struct {
	TotalCalls int             `json:"total_calls"`
	DurationMs int64           `json:"duration_ms"` // cumulative sum of all call durations
	TopTools   []ToolCallCount `json:"top_tools,omitempty"`
	ErrorRate  float64         `json:"error_rate"` // 0.0–1.0
}

ToolCallSummary is an aggregated view of tool calls for a session. Used to build the session retrospective in end_session responses.

type ToolUsageStat

type ToolUsageStat struct {
	ToolName  string  `json:"tool_name"`
	CallCount int     `json:"call_count"`
	AvgMs     float64 `json:"avg_ms"`
	ErrorRate float64 `json:"error_rate"` // 0.0–1.0 fraction of calls that errored
}

ToolUsageStat summarises call patterns for one MCP tool over a time window.

type ViolationLogEntry

type ViolationLogEntry struct {
	ID          string `json:"id"`
	RuleID      string `json:"rule_id"`
	Severity    string `json:"severity"`
	FromNode    string `json:"from_node"`
	ToNode      string `json:"to_node"`
	EdgeType    string `json:"edge_type"`
	FirstSeen   string `json:"first_seen"`
	LastSeen    string `json:"last_seen"`
	Occurrences int    `json:"occurrences"`
}

ViolationLogEntry is a single entry in the violation audit log.

type WebCacheEntry

type WebCacheEntry struct {
	URL       string
	Content   string
	FetchedAt time.Time
	TTLHours  int
}

WebCacheEntry holds a single cached web document.
