reasoningbank

package v0.3.2
Published: Jan 14, 2026 License: GPL-3.0 Imports: 15 Imported by: 0

Documentation

Constants

const (
	// MinConfidence is the minimum confidence threshold for search results.
	MinConfidence = 0.7

	// ExplicitRecordConfidence is the initial confidence for explicitly recorded memories.
	ExplicitRecordConfidence = 0.8

	// DistilledConfidence is the initial confidence for distilled memories.
	DistilledConfidence = 0.6

	// DefaultSearchLimit is the default maximum number of search results.
	DefaultSearchLimit = 10
)

Variables

var (
	ErrMemoryNotFound    = errors.New("memory not found")
	ErrInvalidMemory     = errors.New("invalid memory")
	ErrEmptyTitle        = errors.New("memory title cannot be empty")
	ErrEmptyContent      = errors.New("memory content cannot be empty")
	ErrInvalidConfidence = errors.New("confidence must be between 0.0 and 1.0")
	ErrInvalidOutcome    = errors.New("outcome must be 'success' or 'failure'")
	ErrEmptyProjectID    = errors.New("project ID cannot be empty")
)

Common errors for ReasoningBank operations.

var (
	ErrEmptyMemoryID = errors.New("memory ID cannot be empty")
)

Signal-related errors.

Functions

func ComputeConfidenceFromHybrid added in v0.3.0

func ComputeConfidenceFromHybrid(agg *SignalAggregate, recentSignals []Signal, weights *ProjectWeights) float64

ComputeConfidenceFromHybrid calculates confidence using both historical aggregates and recent signals.

This is the core Bayesian confidence calculation that combines:

  • Historical data from aggregates (signals older than 30 days, rolled up)
  • Recent signals (last 30 days, stored individually)
  • Learned project weights for each signal type

The formula uses Beta distribution: confidence = alpha / (alpha + beta)
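
A minimal sketch of such a hybrid calculation, built only from the types documented below (SignalAggregate, Signal, ProjectWeights); how the implementation actually weights each evidence source is an assumption here, not a guarantee:

// betaMeanSketch illustrates the shape of the hybrid calculation:
// weighted counts from both evidence sources feed a Beta mean.
func betaMeanSketch(agg *reasoningbank.SignalAggregate, recent []reasoningbank.Signal,
    weights *reasoningbank.ProjectWeights) float64 {
    alpha, beta := 1.0, 1.0 // uniform prior, as in NewMemoryConfidence

    // Historical evidence: rolled-up counts, weighted per signal type.
    alpha += weights.WeightFor(reasoningbank.SignalExplicit) * float64(agg.ExplicitPos)
    beta += weights.WeightFor(reasoningbank.SignalExplicit) * float64(agg.ExplicitNeg)
    alpha += weights.WeightFor(reasoningbank.SignalUsage) * float64(agg.UsagePos)
    beta += weights.WeightFor(reasoningbank.SignalUsage) * float64(agg.UsageNeg)
    alpha += weights.WeightFor(reasoningbank.SignalOutcome) * float64(agg.OutcomePos)
    beta += weights.WeightFor(reasoningbank.SignalOutcome) * float64(agg.OutcomeNeg)

    // Recent evidence: individual signals from the last 30 days.
    for _, s := range recent {
        if w := weights.WeightFor(s.Type); s.Positive {
            alpha += w
        } else {
            beta += w
        }
    }
    return alpha / (alpha + beta) // Beta distribution mean
}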

func CosineSimilarity added in v0.3.0

func CosineSimilarity(vec1, vec2 []float32) float64

CosineSimilarity computes the cosine similarity between two embedding vectors.

Cosine similarity measures the cosine of the angle between two vectors, producing a value between -1 and 1:

  • 1.0: vectors point in the same direction (identical)
  • 0.0: vectors are orthogonal (unrelated)
  • -1.0: vectors point in opposite directions (opposite)

For embedding vectors, similarity is typically in the range [0, 1] since embeddings generally have positive components.

Formula: cos(θ) = (A · B) / (||A|| * ||B||)

Returns 0.0 for invalid inputs (empty vectors, zero-magnitude vectors, or vectors of different lengths).
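
For reference, a direct implementation of this contract might look like the sketch below (assumes import "math"; not necessarily the package's exact code):

// cosineSketch mirrors the documented contract: 0.0 for empty,
// mismatched-length, or zero-magnitude vectors.
func cosineSketch(vec1, vec2 []float32) float64 {
    if len(vec1) == 0 || len(vec1) != len(vec2) {
        return 0.0
    }
    var dot, sq1, sq2 float64
    for i := range vec1 {
        a, b := float64(vec1[i]), float64(vec2[i])
        dot += a * b
        sq1 += a * a
        sq2 += b * b
    }
    if sq1 == 0 || sq2 == 0 {
        return 0.0
    }
    return dot / (math.Sqrt(sq1) * math.Sqrt(sq2))
}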

Types

type ConfidenceCalculator added in v0.3.0

type ConfidenceCalculator struct {
	// contains filtered or unexported fields
}

ConfidenceCalculator provides methods for computing and updating memory confidence.

func NewConfidenceCalculator added in v0.3.0

func NewConfidenceCalculator(store SignalStore) *ConfidenceCalculator

NewConfidenceCalculator creates a new confidence calculator.

func (*ConfidenceCalculator) ComputeConfidence added in v0.3.0

func (c *ConfidenceCalculator) ComputeConfidence(ctx context.Context, memoryID, projectID string) (float64, error)

ComputeConfidence calculates the current confidence for a memory.

func (*ConfidenceCalculator) LearnFromFeedback added in v0.3.0

func (c *ConfidenceCalculator) LearnFromFeedback(ctx context.Context, projectID, memoryID string, helpful bool) error

LearnFromFeedback updates project weights based on feedback accuracy.

func (*ConfidenceCalculator) RecordSignal added in v0.3.0

func (c *ConfidenceCalculator) RecordSignal(ctx context.Context, signal *Signal) error

RecordSignal stores a new signal and updates confidence.
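
A typical flow through the calculator, sketched with hypothetical IDs ("mem-123", "proj-1", "sess-42") and assuming ctx is in scope:

calc := reasoningbank.NewConfidenceCalculator(reasoningbank.NewInMemorySignalStore())

// Record a positive task-outcome signal for a memory.
sig, err := reasoningbank.NewSignal("mem-123", "proj-1", reasoningbank.SignalOutcome, true, "sess-42")
if err != nil {
    return err
}
if err := calc.RecordSignal(ctx, sig); err != nil {
    return err
}

// Read back the updated confidence.
conf, err := calc.ComputeConfidence(ctx, "mem-123", "proj-1")
if err != nil {
    return err
}
fmt.Printf("confidence: %.2f\n", conf)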

type ConsolidatedMemory added in v0.3.0

type ConsolidatedMemory struct {
	// Memory is the consolidated memory record.
	*Memory

	// SourceIDs contains the IDs of all source memories that were consolidated.
	SourceIDs []string `json:"source_ids"`

	// ConsolidationType indicates the method used for consolidation.
	ConsolidationType ConsolidationType `json:"consolidation_type"`

	// SourceAttribution provides context about how the source memories contributed.
	// This is a human-readable description generated by the LLM during synthesis.
	SourceAttribution string `json:"source_attribution,omitempty"`
}

ConsolidatedMemory represents a memory created by consolidating multiple source memories.

ConsolidatedMemories are created by the Distiller when it detects similar or related memories that can be merged into more valuable synthesized knowledge. The original source memories are preserved with their ConsolidationID field pointing to this consolidated memory.

type ConsolidationOptions added in v0.3.0

type ConsolidationOptions struct {
	// SimilarityThreshold is the minimum cosine similarity score (0.0-1.0) for
	// memories to be considered similar enough for consolidation.
	// Default: 0.8
	// Higher values require more similarity, lower values allow looser grouping.
	SimilarityThreshold float64 `json:"similarity_threshold"`

	// MaxClustersPerRun limits the number of similarity clusters to process in
	// a single consolidation run. This helps control resource usage and runtime.
	// Set to 0 for no limit (process all clusters found).
	MaxClustersPerRun int `json:"max_clusters_per_run"`

	// DryRun, when true, performs similarity detection and reports what would be
	// consolidated without actually creating consolidated memories or archiving
	// source memories. Useful for previewing consolidation impact.
	DryRun bool `json:"dry_run"`

	// ForceAll, when true, ignores recent consolidation timestamps and re-evaluates
	// all memories for consolidation, even if they were recently processed.
	// Use this to force a complete re-consolidation of the project's memory base.
	ForceAll bool `json:"force_all"`
}

ConsolidationOptions configures the behavior of memory consolidation operations.

These options control how consolidation runs, including similarity thresholds, resource limits, and whether to perform a dry run or force consolidation regardless of recent runs.

type ConsolidationResult added in v0.3.0

type ConsolidationResult struct {
	// CreatedMemories contains the IDs of newly created consolidated memories.
	CreatedMemories []string `json:"created_memories"`

	// ArchivedMemories contains the IDs of source memories that were archived
	// after being consolidated into new memories. These memories are preserved
	// with their ConsolidationID field pointing to the consolidated memory.
	ArchivedMemories []string `json:"archived_memories"`

	// SkippedCount is the number of memories that were evaluated but not
	// consolidated (e.g., no similar memories found, below threshold).
	SkippedCount int `json:"skipped_count"`

	// TotalProcessed is the total number of memories examined during consolidation.
	TotalProcessed int `json:"total_processed"`

	// Duration is how long the consolidation operation took to complete.
	Duration time.Duration `json:"duration"`
}

ConsolidationResult contains the results of a memory consolidation operation.

This structure tracks the outcome of running memory consolidation, including which memories were created (consolidated memories), which were archived (source memories linked to consolidated versions), how many were skipped (didn't meet consolidation criteria), and performance metrics.

type ConsolidationScheduler added in v0.3.0

type ConsolidationScheduler struct {
	// contains filtered or unexported fields
}

ConsolidationScheduler manages automatic scheduled memory consolidation.

The scheduler runs consolidation periodically in the background for configured projects. It provides lifecycle management (Start/Stop) with graceful shutdown and ensures consolidation runs on a predictable schedule.

Thread Safety: All public methods are thread-safe. The running state is protected by a mutex to prevent race conditions during Start/Stop operations.

func NewConsolidationScheduler added in v0.3.0

func NewConsolidationScheduler(distiller *Distiller, logger *zap.Logger, opts ...SchedulerOption) (*ConsolidationScheduler, error)

NewConsolidationScheduler creates a new consolidation scheduler.

The scheduler does not start automatically - call Start() to begin scheduled consolidation runs.

Parameters:

  • distiller: The distiller to use for consolidation
  • logger: Logger for structured logging
  • opts: Optional configuration options

Returns:

  • A new scheduler instance
  • Error if distiller or logger is nil

func (*ConsolidationScheduler) Start added in v0.3.0

func (s *ConsolidationScheduler) Start() error

Start begins the background consolidation scheduler.

The scheduler runs consolidation at the configured interval until Stop() is called. Calling Start() on an already running scheduler returns an error without starting a second goroutine.

Thread Safety: This method is thread-safe and can be called concurrently.

Returns:

  • Error if the scheduler is already running

func (*ConsolidationScheduler) Stop added in v0.3.0

func (s *ConsolidationScheduler) Stop() error

Stop gracefully stops the consolidation scheduler.

Signals the background goroutine to stop and waits for it to finish. This method is idempotent - calling Stop() on an already stopped scheduler is a no-op.

Thread Safety: This method is thread-safe and can be called concurrently.

Returns:

  • Always returns nil (for interface compatibility and future error handling)
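
A typical lifecycle, sketched with a hypothetical project ID and assuming distiller and logger are already constructed:

sched, err := reasoningbank.NewConsolidationScheduler(distiller, logger,
    reasoningbank.WithInterval(12*time.Hour),
    reasoningbank.WithProjectIDs([]string{"proj-1"}),
    reasoningbank.WithConsolidationOptions(reasoningbank.ConsolidationOptions{
        SimilarityThreshold: 0.85,
    }),
)
if err != nil {
    return err
}
if err := sched.Start(); err != nil {
    return err
}
defer sched.Stop() // graceful shutdown; waits for the background goroutine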

type ConsolidationType added in v0.3.0

type ConsolidationType string

ConsolidationType represents the method used to create a consolidated memory.

const (
	// ConsolidationMerged indicates memories were merged into a single synthesized memory.
	ConsolidationMerged ConsolidationType = "merged"

	// ConsolidationDeduplicated indicates duplicate or near-duplicate memories were combined.
	ConsolidationDeduplicated ConsolidationType = "deduplicated"

	// ConsolidationSynthesized indicates memories were synthesized into higher-level knowledge.
	ConsolidationSynthesized ConsolidationType = "synthesized"
)

type Distiller

type Distiller struct {
	// contains filtered or unexported fields
}

Distiller extracts learnings from completed sessions and creates memories.

FR-006: Distillation pipeline for async memory extraction
FR-009: Outcome differentiation (success vs failure)

func NewDistiller

func NewDistiller(service *Service, logger *zap.Logger, opts ...DistillerOption) (*Distiller, error)

NewDistiller creates a new session distiller.

func (*Distiller) Consolidate added in v0.3.0

func (d *Distiller) Consolidate(ctx context.Context, projectID string, opts ConsolidationOptions) (*ConsolidationResult, error)

Consolidate runs the full memory consolidation process for a project.

This method orchestrates the complete consolidation workflow:

  1. Check if consolidation was run recently (unless ForceAll is set)
  2. Find all similarity clusters above the specified threshold
  3. Limit to MaxClustersPerRun if specified (0 = no limit)
  4. For each cluster, merge into a consolidated memory
  5. Link source memories to their consolidated versions
  6. Track last consolidation time to avoid re-processing
  7. Return statistics about the consolidation run

In DryRun mode, the method performs similarity detection and reports what would be consolidated without actually creating consolidated memories or archiving source memories.

Parameters:

  • ctx: Context for cancellation and timeouts
  • projectID: Project to consolidate memories for
  • opts: Configuration options (threshold, limits, dry-run mode, etc.)

Returns:

  • ConsolidationResult with statistics and outcomes
  • Error if consolidation fails
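
A dry-run preview, sketched with a hypothetical project ID, that reports the expected impact without modifying any memories (assumes d is a *Distiller and ctx is in scope):

opts := reasoningbank.ConsolidationOptions{
    SimilarityThreshold: 0.8,
    MaxClustersPerRun:   5,    // cap work per run; 0 = no limit
    DryRun:              true, // report only, change nothing
}
result, err := d.Consolidate(ctx, "proj-1", opts)
if err != nil {
    return err
}
fmt.Printf("would create %d, archive %d (processed %d in %s)\n",
    len(result.CreatedMemories), len(result.ArchivedMemories),
    result.TotalProcessed, result.Duration)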

func (*Distiller) ConsolidateAll added in v0.3.0

func (d *Distiller) ConsolidateAll(ctx context.Context, projectIDs []string, opts ConsolidationOptions) (*ConsolidationResult, error)

ConsolidateAll runs memory consolidation across all specified projects.

This method is designed for scheduled background runs and batch processing. It runs consolidation on each project with the same options and aggregates the results. If consolidation fails for individual projects, the error is logged and the method continues processing remaining projects.

This is useful for:

  • Scheduled background consolidation (e.g., daily cron job)
  • Bulk maintenance operations
  • Organization-wide memory cleanup

Parameters:

  • ctx: Context for cancellation and timeouts
  • projectIDs: List of project IDs to consolidate
  • opts: Configuration options applied to all projects

Returns:

  • Aggregated ConsolidationResult combining all project results
  • Error only if all projects fail (partial failures are logged)

func (*Distiller) DistillSession

func (d *Distiller) DistillSession(ctx context.Context, summary SessionSummary) error

DistillSession extracts learnings from a completed session and creates memories.

This is called asynchronously after a session ends, so it should not block.

Success patterns (outcome="success") become positive memories. Failure patterns (outcome="failure") become anti-pattern warnings.

Initial confidence is set to DistilledConfidence (0.6) since distilled memories are less reliable than explicit captures (0.8).
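
Because distillation runs after the session ends and must not block, callers typically invoke it from a goroutine; a sketch with hypothetical session content, assuming d and logger are in scope:

summary := reasoningbank.SessionSummary{
    SessionID:   "sess-42",
    ProjectID:   "proj-1",
    Outcome:     reasoningbank.SessionSuccess,
    Task:        "add retry logic to the HTTP client",
    Approach:    "wrapped calls with exponential backoff",
    Result:      "transient failures no longer surface to callers",
    Tags:        []string{"go", "http", "retries"},
    Duration:    18 * time.Minute,
    CompletedAt: time.Now(),
}
go func() {
    if err := d.DistillSession(context.Background(), summary); err != nil {
        logger.Warn("distillation failed", zap.Error(err))
    }
}()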

func (*Distiller) FindSimilarClusters added in v0.3.0

func (d *Distiller) FindSimilarClusters(ctx context.Context, projectID string, threshold float64) ([]SimilarityCluster, error)

FindSimilarClusters detects groups of similar memories for a project.

Searches all memories in the project and groups those with similarity scores above the threshold. Uses greedy clustering: for each memory, finds all similar memories above threshold, forms cluster if >=2 members.

The algorithm:

  1. Retrieve all memories for the project
  2. Get embedding vectors for each memory
  3. For each memory, compute similarity with all other memories
  4. Group memories with similarity > threshold
  5. Form clusters only if they have >= 2 members
  6. Calculate cluster statistics (centroid, average similarity, min similarity)

Parameters:

  • ctx: Context for cancellation and timeouts
  • projectID: Project to search for similar memories
  • threshold: Minimum similarity score (0.0-1.0, typically 0.8)

Returns:

  • Slice of similarity clusters, each containing related memories
  • Error if clustering fails

func (*Distiller) MergeCluster added in v0.3.0

func (d *Distiller) MergeCluster(ctx context.Context, cluster *SimilarityCluster) (*Memory, error)

MergeCluster synthesizes a cluster of similar memories into one consolidated memory.

This method uses the configured LLM client to analyze the cluster members and create a synthesized memory that captures their common themes and key insights. The process:

  1. Validates the cluster has at least 2 members and LLM client is configured
  2. Builds a consolidation prompt from cluster members
  3. Calls the LLM to synthesize the memories
  4. Parses the LLM response into a Memory struct
  5. Calculates consolidated confidence from source memories
  6. Stores the new consolidated memory
  7. Links source memories to the consolidated version

The consolidated memory includes source attribution and links back to the original memories via their ConsolidationID fields.

Parameters:

  • ctx: Context for cancellation and timeouts
  • cluster: Similarity cluster to merge (must have >= 2 members)

Returns:

  • The newly created consolidated memory
  • Error if LLM client not configured, synthesis fails, or storage fails

type DistillerOption added in v0.3.0

type DistillerOption func(*Distiller)

DistillerOption configures a Distiller.

func WithConsolidationWindow added in v0.3.0

func WithConsolidationWindow(window time.Duration) DistillerOption

WithConsolidationWindow sets the minimum time between consolidations. If not set, defaults to 24 hours.

func WithLLMClient added in v0.3.0

func WithLLMClient(client LLMClient) DistillerOption

WithLLMClient sets the LLM client for memory consolidation. This is required for MergeCluster to work.

type InMemorySignalStore added in v0.3.0

type InMemorySignalStore struct {
	// contains filtered or unexported fields
}

InMemorySignalStore is an in-memory implementation of SignalStore for testing.

func NewInMemorySignalStore added in v0.3.0

func NewInMemorySignalStore() *InMemorySignalStore

NewInMemorySignalStore creates a new in-memory signal store.

func (*InMemorySignalStore) GetAggregate added in v0.3.0

func (s *InMemorySignalStore) GetAggregate(ctx context.Context, memoryID string) (*SignalAggregate, error)

GetAggregate retrieves an aggregate, returning an empty aggregate if none is found.

func (*InMemorySignalStore) GetProjectWeights added in v0.3.0

func (s *InMemorySignalStore) GetProjectWeights(ctx context.Context, projectID string) (*ProjectWeights, error)

GetProjectWeights retrieves project weights, returning defaults if not found.

func (*InMemorySignalStore) GetRecentSignals added in v0.3.0

func (s *InMemorySignalStore) GetRecentSignals(ctx context.Context, memoryID string, duration time.Duration) ([]Signal, error)

GetRecentSignals retrieves signals newer than the cutoff.

func (*InMemorySignalStore) RollupOldSignals added in v0.3.0

func (s *InMemorySignalStore) RollupOldSignals(ctx context.Context, memoryID string, cutoff time.Duration) error

RollupOldSignals moves old signals into aggregates.

func (*InMemorySignalStore) StoreAggregate added in v0.3.0

func (s *InMemorySignalStore) StoreAggregate(ctx context.Context, agg *SignalAggregate) error

StoreAggregate saves an aggregate.

func (*InMemorySignalStore) StoreProjectWeights added in v0.3.0

func (s *InMemorySignalStore) StoreProjectWeights(ctx context.Context, weights *ProjectWeights) error

StoreProjectWeights saves project weights.

func (*InMemorySignalStore) StoreSignal added in v0.3.0

func (s *InMemorySignalStore) StoreSignal(ctx context.Context, signal *Signal) error

StoreSignal adds a signal to the store.

type LLMClient added in v0.3.0

type LLMClient interface {
	// Complete generates a completion from the given prompt.
	//
	// The context can be used for cancellation and deadline control.
	// The prompt will be a structured memory consolidation request.
	//
	// Returns the generated text containing TITLE:, CONTENT:, TAGS:,
	// OUTCOME:, and optionally SOURCE_ATTRIBUTION: fields.
	//
	// Returns an error if:
	//   - The API request fails
	//   - The context is cancelled or times out
	//   - Rate limits are exceeded (after retries)
	Complete(ctx context.Context, prompt string) (string, error)
}

LLMClient provides an interface for interacting with LLM backends.

This interface allows pluggable LLM providers (Claude, OpenAI, local models) to be used for memory synthesis and consolidation tasks. Implementations should handle retries, rate limiting, and error handling internally.

Expected Prompt Format

The Complete method will receive structured prompts from the distiller containing memory consolidation instructions. The prompt format is:

You are a memory consolidation assistant...
## Source Memories
### Memory 1: [Title]
**Description:** [Description]
**Content:** [Content]
...
## Your Task
...
## Output Format
```
TITLE: [consolidated title]
CONTENT: [consolidated content]
TAGS: [comma-separated tags]
OUTCOME: [success|failure]
SOURCE_ATTRIBUTION: [attribution note]
```

Expected Response Format

The LLM response MUST include these fields in the exact format above:

  • TITLE: (required) A clear, concise title
  • CONTENT: (required) The synthesized content
  • TAGS: (optional) Comma-separated tags
  • OUTCOME: (required) Either "success" or "failure"
  • SOURCE_ATTRIBUTION: (optional) Attribution note

Example Implementation

type ClaudeLLMClient struct {
    client *anthropic.Client
    model  string
}

func (c *ClaudeLLMClient) Complete(ctx context.Context, prompt string) (string, error) {
    resp, err := c.client.CreateMessage(ctx, anthropic.MessageRequest{
        Model:     c.model,
        MaxTokens: 4096,
        Messages: []anthropic.Message{
            {Role: "user", Content: prompt},
        },
    })
    if err != nil {
        return "", fmt.Errorf("anthropic API error: %w", err)
    }
    return resp.Content[0].Text, nil
}

type OpenAILLMClient struct {
    client *openai.Client
    model  string
}

func (c *OpenAILLMClient) Complete(ctx context.Context, prompt string) (string, error) {
    resp, err := c.client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
        Model: c.model,
        Messages: []openai.ChatCompletionMessage{
            {Role: "user", Content: prompt},
        },
    })
    if err != nil {
        return "", fmt.Errorf("openai API error: %w", err)
    }
    return resp.Choices[0].Message.Content, nil
}

Implementation Requirements

  • Handle rate limiting and retries internally
  • Respect context cancellation and deadlines
  • Return meaningful errors for debugging
  • Support prompts up to ~32K tokens (for large memory clusters)
  • Use temperature ~0.3-0.5 for consistent, factual outputs

type Memory

type Memory struct {
	// ID is the unique memory identifier (UUID).
	ID string `json:"id"`

	// ProjectID identifies which project this memory belongs to.
	ProjectID string `json:"project_id"`

	// Title is a brief summary of the memory (e.g., "Go error handling with context").
	Title string `json:"title"`

	// Description provides additional context about when/why this memory is useful.
	Description string `json:"description,omitempty"`

	// Content is the main memory content (strategy, anti-pattern, code example).
	Content string `json:"content"`

	// Outcome indicates if this is a success pattern or failure anti-pattern.
	Outcome Outcome `json:"outcome"`

	// Confidence is a score from 0.0 to 1.0 indicating reliability.
	// Higher confidence memories are prioritized in search results.
	// Adjusted based on feedback and usage patterns.
	Confidence float64 `json:"confidence"`

	// UsageCount tracks how many times this memory has been retrieved.
	UsageCount int `json:"usage_count"`

	// Tags are labels for categorization (e.g., "go", "error-handling", "auth").
	Tags []string `json:"tags,omitempty"`

	// ConsolidationID links this memory to a consolidated memory it was merged into.
	// When a memory is consolidated with others, this field is set to the ID of the
	// resulting ConsolidatedMemory. The original memory is preserved for attribution.
	ConsolidationID *string `json:"consolidation_id,omitempty"`

	// State indicates the lifecycle state of this memory (active or archived).
	// Archived memories have been consolidated into other memories but are preserved
	// for attribution and traceability. They are excluded from normal searches.
	State MemoryState `json:"state"`

	// CreatedAt is when the memory was created.
	CreatedAt time.Time `json:"created_at"`

	// UpdatedAt is when the memory was last modified.
	UpdatedAt time.Time `json:"updated_at"`
}

Memory represents a cross-session memory in the ReasoningBank.

Memories are distilled strategies learned from agent interactions. They can represent successful patterns (outcome="success") or anti-patterns to avoid (outcome="failure").

Confidence is tracked and adjusted based on feedback signals:

  • Explicit ratings from users
  • Implicit success (memory helped solve a task)
  • Code stability (solution didn't need rework)

func NewMemory

func NewMemory(projectID, title, content string, outcome Outcome, tags []string) (*Memory, error)

NewMemory creates a new memory with a generated UUID and default values.

func (*Memory) AdjustConfidence

func (m *Memory) AdjustConfidence(helpful bool)

AdjustConfidence updates the confidence based on feedback.

For helpful feedback:

  • Increases confidence by up to 0.1 (capped at 1.0)

For unhelpful feedback:

  • Decreases confidence by up to 0.15 (floored at 0.0)

func (*Memory) IncrementUsage

func (m *Memory) IncrementUsage()

IncrementUsage increments the usage count and updates timestamp.

func (*Memory) Validate

func (m *Memory) Validate() error

Validate checks if the memory has valid fields.

type MemoryConfidence added in v0.3.0

type MemoryConfidence struct {
	// MemoryID identifies the memory.
	MemoryID string `json:"memory_id"`

	// Alpha represents positive evidence (starts at 1 for uniform prior).
	Alpha float64 `json:"alpha"`

	// Beta represents negative evidence (starts at 1 for uniform prior).
	Beta float64 `json:"beta"`
}

MemoryConfidence tracks confidence for a single memory using Beta distribution.

Each memory maintains its own alpha/beta counts which are updated by weighted signals. The confidence score is the Beta distribution mean: alpha / (alpha + beta).

func NewMemoryConfidence added in v0.3.0

func NewMemoryConfidence(memoryID string) *MemoryConfidence

NewMemoryConfidence creates a new MemoryConfidence with uniform prior (1:1 = 50%).

func (*MemoryConfidence) Score added in v0.3.0

func (mc *MemoryConfidence) Score() float64

Score returns the confidence score: alpha / (alpha + beta).

This is the mean of the Beta distribution, representing our best estimate of the memory's usefulness based on accumulated evidence.

func (*MemoryConfidence) Update added in v0.3.0

func (mc *MemoryConfidence) Update(signal Signal, weights *ProjectWeights)

Update adjusts confidence based on a new signal.

The signal's contribution is weighted by the project's learned weights. Positive signals increase alpha, negative signals increase beta.
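
A small sketch of the alpha/beta mechanics, using a hypothetical memory ID and default project weights:

mc := reasoningbank.NewMemoryConfidence("mem-123")
fmt.Println(mc.Score()) // 0.5: uniform prior, alpha = beta = 1

weights := reasoningbank.NewProjectWeights("proj-1")
sig, _ := reasoningbank.NewSignal("mem-123", "proj-1", reasoningbank.SignalExplicit, true, "")
mc.Update(*sig, weights)
fmt.Println(mc.Score() > 0.5) // true: positive evidence raised alpha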

type MemoryConsolidator added in v0.3.0

type MemoryConsolidator interface {
	// FindSimilarClusters detects groups of similar memories for a project.
	//
	// Searches all memories in the project and groups those with similarity
	// scores above the threshold. Uses greedy clustering: for each memory,
	// finds all similar memories above threshold, forms cluster if >=2 members.
	//
	// Parameters:
	//   - ctx: Context for cancellation and timeouts
	//   - projectID: Project to search for similar memories
	//   - threshold: Minimum similarity score (0.0-1.0, typically 0.8)
	//
	// Returns:
	//   - Slice of similarity clusters, each containing related memories
	//   - Error if clustering fails
	FindSimilarClusters(ctx context.Context, projectID string, threshold float64) ([]SimilarityCluster, error)

	// MergeCluster synthesizes a cluster of similar memories into one consolidated memory.
	//
	// Uses an LLM to analyze the cluster members and create a synthesized memory
	// that captures their common themes and key insights. The consolidated memory
	// includes source attribution and links back to the original memories.
	//
	// Parameters:
	//   - ctx: Context for cancellation and timeouts
	//   - cluster: Similarity cluster to merge
	//
	// Returns:
	//   - The newly created consolidated memory
	//   - Error if synthesis or storage fails
	MergeCluster(ctx context.Context, cluster *SimilarityCluster) (*Memory, error)

	// Consolidate runs the full memory consolidation process for a project.
	//
	// Orchestrates the complete workflow:
	//  1. Find all similarity clusters above threshold
	//  2. Merge each cluster into a consolidated memory
	//  3. Link source memories to their consolidated versions
	//  4. Return statistics about the consolidation run
	//
	// Parameters:
	//   - ctx: Context for cancellation and timeouts
	//   - projectID: Project to consolidate memories for
	//   - opts: Configuration options (threshold, limits, dry-run mode, etc.)
	//
	// Returns:
	//   - ConsolidationResult with statistics and outcomes
	//   - Error if consolidation fails
	Consolidate(ctx context.Context, projectID string, opts ConsolidationOptions) (*ConsolidationResult, error)
}

MemoryConsolidator defines the interface for memory consolidation operations.

Implementations of this interface (such as the Distiller) are responsible for detecting similar memories, merging them into consolidated entries, and orchestrating the overall consolidation process.

The consolidation workflow:

  1. FindSimilarClusters detects groups of similar memories above a threshold
  2. MergeCluster synthesizes each cluster into a single consolidated memory
  3. Consolidate orchestrates the full process with configurable options

Original memories are preserved with back-links to their consolidated versions via the ConsolidationID field.

type MemoryState added in v0.3.0

type MemoryState string

MemoryState represents the lifecycle state of a memory.

const (
	// MemoryStateActive indicates the memory is actively used in searches.
	MemoryStateActive MemoryState = "active"

	// MemoryStateArchived indicates the memory has been consolidated into another memory.
	// Archived memories are preserved for attribution but excluded from normal searches.
	MemoryStateArchived MemoryState = "archived"
)

type Outcome

type Outcome string

Outcome represents the result type of a memory.

const (
	// OutcomeSuccess indicates a successful strategy or pattern.
	OutcomeSuccess Outcome = "success"

	// OutcomeFailure indicates an anti-pattern or failed approach.
	OutcomeFailure Outcome = "failure"
)

type ProjectWeights added in v0.3.0

type ProjectWeights struct {
	// ProjectID identifies which project these weights belong to.
	ProjectID string `json:"project_id"`

	// ExplicitAlpha is the success count for explicit signal predictions.
	ExplicitAlpha float64 `json:"explicit_alpha"`

	// ExplicitBeta is the failure count for explicit signal predictions.
	ExplicitBeta float64 `json:"explicit_beta"`

	// UsageAlpha is the success count for usage signal predictions.
	UsageAlpha float64 `json:"usage_alpha"`

	// UsageBeta is the failure count for usage signal predictions.
	UsageBeta float64 `json:"usage_beta"`

	// OutcomeAlpha is the success count for outcome signal predictions.
	OutcomeAlpha float64 `json:"outcome_alpha"`

	// OutcomeBeta is the failure count for outcome signal predictions.
	OutcomeBeta float64 `json:"outcome_beta"`
}

ProjectWeights tracks learned signal weights per project using Beta distributions.

Each signal type has alpha/beta parameters that form a Beta distribution. The mean of the distribution (alpha / (alpha + beta)) represents how well that signal type predicts memory usefulness.

The system learns by observing which signals correctly predict explicit feedback:

  • If usage signals predict helpful feedback, UsageAlpha increases
  • If usage signals predict incorrectly, UsageBeta increases

Initial priors (from DESIGN.md):

  • Explicit: 7:3 (70% weight) - trust user feedback highly
  • Usage: 5:5 (50% weight) - uncertain initially
  • Outcome: 5:5 (50% weight) - uncertain initially

func NewProjectWeights added in v0.3.0

func NewProjectWeights(projectID string) *ProjectWeights

NewProjectWeights creates a new ProjectWeights with initial priors.

Initial priors from DESIGN.md:

  • Explicit: 7:3 (70%) - trust user feedback
  • Usage/Outcome: 5:5 (50%) - uncertain initially

func (*ProjectWeights) ComputeWeights added in v0.3.0

func (pw *ProjectWeights) ComputeWeights() (explicit, usage, outcome float64)

ComputeWeights returns normalized weights for each signal type.

Uses the Beta distribution mean, alpha / (alpha + beta), then normalizes so all weights sum to 1.0.
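
For example, with the default priors the per-type means are 7/(7+3) = 0.7 for explicit and 5/(5+5) = 0.5 each for usage and outcome; normalizing by their sum (1.7) gives weights of roughly 0.41, 0.29, and 0.29.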

func (*ProjectWeights) LearnFromFeedback added in v0.3.0

func (pw *ProjectWeights) LearnFromFeedback(helpful bool, recentSignals []Signal)

LearnFromFeedback updates weights based on whether signals correctly predicted feedback.

When explicit feedback arrives (helpful or unhelpful), we check if other signals (usage, outcome) correctly predicted this feedback. If they did, their alpha increases. If they didn't, their beta increases.

This allows the system to learn which signal types are reliable predictors of memory usefulness for this specific project.

func (*ProjectWeights) WeightFor added in v0.3.0

func (pw *ProjectWeights) WeightFor(signalType SignalType) float64

WeightFor returns the normalized weight for a specific signal type.

type SchedulerOption added in v0.3.0

type SchedulerOption func(*ConsolidationScheduler)

SchedulerOption configures a ConsolidationScheduler.

func WithConsolidationOptions added in v0.3.0

func WithConsolidationOptions(opts ConsolidationOptions) SchedulerOption

WithConsolidationOptions sets the consolidation options. If not set, uses default options (threshold: 0.8, dry_run: false).

func WithInterval added in v0.3.0

func WithInterval(interval time.Duration) SchedulerOption

WithInterval sets the consolidation interval. If not set, defaults to 24 hours.

func WithProjectIDs added in v0.3.0

func WithProjectIDs(projectIDs []string) SchedulerOption

WithProjectIDs sets the project IDs to consolidate. If not set, the scheduler will not consolidate any projects.

type Service

type Service struct {
	// contains filtered or unexported fields
}

Service provides cross-session memory storage and retrieval.

It stores memories in Qdrant using semantic search to surface relevant strategies based on similarity to the current task. Memories can be created explicitly via Record() or extracted asynchronously from sessions via the Distiller.

The service uses a Bayesian confidence system that learns which signals (explicit feedback, usage, outcomes) best predict memory usefulness.

func NewService

func NewService(store vectorstore.Store, logger *zap.Logger, opts ...ServiceOption) (*Service, error)

NewService creates a new ReasoningBank service.

func NewServiceWithStoreProvider added in v0.3.0

func NewServiceWithStoreProvider(stores vectorstore.StoreProvider, defaultTenant string, logger *zap.Logger, opts ...ServiceOption) (*Service, error)

NewServiceWithStoreProvider creates a ReasoningBank service using StoreProvider for database-per-project isolation.

The defaultTenant is used when deriving the store path from projectID. Typically this is the git username or "default" for local-first usage.

This constructor enables the new architecture where each project gets its own chromem.DB instance at a unique filesystem path, providing physical isolation.
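
A construction sketch; provider, embedder, and logger are assumed to be created elsewhere:

svc, err := reasoningbank.NewServiceWithStoreProvider(provider, "default", logger,
    reasoningbank.WithEmbedder(embedder), // needed for GetMemoryVector*
    reasoningbank.WithSignalStore(reasoningbank.NewInMemorySignalStore()),
)
if err != nil {
    return err
}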

func (*Service) Count added in v0.3.0

func (s *Service) Count(ctx context.Context, projectID string) (int, error)

Count returns the number of memories for a specific project.

func (*Service) Delete

func (s *Service) Delete(ctx context.Context, id string) error

Delete removes a memory by ID.

Note: This method requires the legacy single-store configuration. When using StoreProvider (database-per-project), use DeleteByProjectID instead.

func (*Service) DeleteByProjectID added in v0.3.0

func (s *Service) DeleteByProjectID(ctx context.Context, projectID, memoryID string) error

DeleteByProjectID removes a memory by ID within a specific project.

This is the preferred method when using StoreProvider (database-per-project isolation) as it directly accesses the project's store without enumeration.

func (*Service) Feedback

func (s *Service) Feedback(ctx context.Context, memoryID string, helpful bool) error

Feedback updates a memory's confidence based on user feedback.

This method:

  1. Records an explicit signal for the feedback
  2. Learns which signal types predicted this feedback correctly (weight learning)
  3. Recalculates the memory's confidence using the Bayesian system

FR-008: Feedback loop affecting confidence
FR-005: Confidence tracking

func (*Service) Get

func (s *Service) Get(ctx context.Context, id string) (*Memory, error)

Get retrieves a memory by ID.

This searches across all project collections to find the memory. In practice, you'd typically know the project ID, but this provides a fallback for when you only have the memory ID.

Note: This method requires the legacy single-store configuration. When using StoreProvider (database-per-project), use GetByProjectID instead.

func (*Service) GetByProjectID added in v0.3.0

func (s *Service) GetByProjectID(ctx context.Context, projectID, memoryID string) (*Memory, error)

GetByProjectID retrieves a memory by ID within a specific project.

This is the preferred method when using StoreProvider (database-per-project isolation) as it directly accesses the project's store without enumeration.

func (*Service) GetMemoryVector added in v0.3.0

func (s *Service) GetMemoryVector(ctx context.Context, memoryID string) ([]float32, error)

GetMemoryVector retrieves the embedding vector for a memory by ID.

This method re-embeds the memory content to retrieve its vector representation. The content is embedded the same way as during storage (title + content).

Note: This method requires the legacy single-store configuration. When using StoreProvider (database-per-project), use GetMemoryVectorByProjectID instead.

Returns the embedding vector or an error if the memory doesn't exist or embedder is not configured.

func (*Service) GetMemoryVectorByProjectID added in v0.3.0

func (s *Service) GetMemoryVectorByProjectID(ctx context.Context, projectID, memoryID string) ([]float32, error)

GetMemoryVectorByProjectID retrieves the embedding vector for a memory within a specific project.

This is the preferred method when using StoreProvider (database-per-project isolation) as it directly accesses the project's store without enumeration.

The method re-embeds the memory content to retrieve its vector representation. The content is embedded the same way as during storage (title + content).

Returns the embedding vector or an error if the memory doesn't exist or embedder is not configured.

func (*Service) ListMemories added in v0.3.0

func (s *Service) ListMemories(ctx context.Context, projectID string, limit, offset int) ([]Memory, error)

ListMemories retrieves all memories for a project with pagination support.

This method is used by the memory consolidation system to iterate over all memories in a project. Unlike Search, it doesn't filter by semantic similarity - it returns memories in storage order.

Parameters:

  • limit: Maximum number of memories to return (0 = return all)
  • offset: Number of memories to skip (for pagination)

Returns memories in storage order. For large projects, use pagination to avoid loading all memories at once.
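
A pagination sketch for a hypothetical project, assuming svc and ctx are in scope:

const pageSize = 100
for offset := 0; ; offset += pageSize {
    page, err := svc.ListMemories(ctx, "proj-1", pageSize, offset)
    if err != nil {
        return err
    }
    for _, m := range page {
        fmt.Println(m.Title) // process each memory
    }
    if len(page) < pageSize {
        break // short page means we've reached the end
    }
}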

func (*Service) Record

func (s *Service) Record(ctx context.Context, memory *Memory) error

Record creates a new memory explicitly (bypasses distillation).

Sets initial confidence to ExplicitRecordConfidence (0.8) since explicit captures are more reliable than distilled ones.

FR-007: Explicit capture via memory_record
FR-002: Memory schema validation
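
A sketch of explicit capture with hypothetical content, assuming svc and ctx are in scope:

mem, err := reasoningbank.NewMemory("proj-1",
    "Go error handling with context",
    "Wrap errors with %w so callers can use errors.Is and errors.As.",
    reasoningbank.OutcomeSuccess,
    []string{"go", "error-handling"},
)
if err != nil {
    return err
}
if err := svc.Record(ctx, mem); err != nil {
    return err
}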

func (*Service) RecordOutcome added in v0.3.0

func (s *Service) RecordOutcome(ctx context.Context, memoryID string, succeeded bool, sessionID string) (float64, error)

RecordOutcome records a task outcome signal for a memory.

This is called by the memory_outcome MCP tool when an agent reports whether a task succeeded after using a retrieved memory.

The outcome signal contributes to the memory's confidence score through the Bayesian confidence system. Positive outcomes increase confidence, negative outcomes decrease it based on learned weights.

Returns the new confidence score after the update.

FR-005d: Outcome reporting via memory_outcome tool

func (*Service) Search

func (s *Service) Search(ctx context.Context, projectID, query string, limit int) ([]Memory, error)

Search retrieves memories by semantic similarity to the query.

Returns memories with confidence >= MinConfidence, ordered by similarity score. Filters to only memories belonging to the specified project.

FR-003: Semantic search by similarity
FR-002: Memories include required fields
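
A search sketch with a hypothetical query, assuming svc and ctx are in scope:

results, err := svc.Search(ctx, "proj-1", "retrying transient HTTP failures", reasoningbank.DefaultSearchLimit)
if err != nil {
    return err
}
for _, m := range results {
    fmt.Printf("%.2f  %s\n", m.Confidence, m.Title)
}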

func (*Service) Stats added in v0.3.0

func (s *Service) Stats() Stats

Stats returns current memory statistics for statusline display.

type ServiceOption added in v0.3.0

type ServiceOption func(*Service)

ServiceOption configures a Service.

func WithDefaultTenant added in v0.3.0

func WithDefaultTenant(tenantID string) ServiceOption

WithDefaultTenant sets the default tenant ID for single-store mode. Required when using a single vectorstore instead of StoreProvider.

func WithEmbedder added in v0.3.0

func WithEmbedder(embedder vectorstore.Embedder) ServiceOption

WithEmbedder sets a custom embedder for the service. Required for GetMemoryVector to re-embed memory content.

func WithSignalStore added in v0.3.0

func WithSignalStore(ss SignalStore) ServiceOption

WithSignalStore sets a custom signal store. If not provided, an in-memory signal store is used.

type SessionOutcome

type SessionOutcome string

SessionOutcome represents the overall outcome of a session.

const (
	// SessionSuccess indicates the session achieved its goal.
	SessionSuccess SessionOutcome = "success"

	// SessionFailure indicates the session did not achieve its goal.
	SessionFailure SessionOutcome = "failure"

	// SessionPartial indicates partial success or mixed results.
	SessionPartial SessionOutcome = "partial"
)

type SessionSummary

type SessionSummary struct {
	// SessionID uniquely identifies the session.
	SessionID string

	// ProjectID identifies the project this session belongs to.
	ProjectID string

	// Outcome is the overall session result.
	Outcome SessionOutcome

	// Task is a brief description of what the session was trying to accomplish.
	Task string

	// Approach is the strategy or method used (extracted from session).
	Approach string

	// Result describes what happened (success details or failure reasons).
	Result string

	// Tags are labels for categorization (language, domain, problem type).
	Tags []string

	// Duration is how long the session lasted.
	Duration time.Duration

	// CompletedAt is when the session ended.
	CompletedAt time.Time
}

SessionSummary contains distilled information from a completed session.

type Signal added in v0.3.0

type Signal struct {
	// ID is the unique signal identifier.
	ID string `json:"id"`

	// MemoryID is the memory this signal relates to.
	MemoryID string `json:"memory_id"`

	// ProjectID is the project context for this signal.
	ProjectID string `json:"project_id"`

	// Type identifies the signal source.
	Type SignalType `json:"type"`

	// Positive indicates if this was a positive signal (helpful, success).
	Positive bool `json:"positive"`

	// SessionID is optional session context for correlation.
	SessionID string `json:"session_id,omitempty"`

	// Timestamp is when this signal was recorded.
	Timestamp time.Time `json:"timestamp"`
}

Signal represents a single confidence event.

Signals are recorded when:

  • User provides explicit feedback (memory_feedback) → SignalExplicit
  • Memory is retrieved in search results (memory_search) → SignalUsage
  • Agent reports task outcome (memory_outcome) → SignalOutcome

func NewSignal added in v0.3.0

func NewSignal(memoryID, projectID string, signalType SignalType, positive bool, sessionID string) (*Signal, error)

NewSignal creates a new Signal with generated ID and current timestamp.

type SignalAggregate added in v0.3.0

type SignalAggregate struct {
	// MemoryID is the memory this aggregate belongs to.
	MemoryID string `json:"memory_id"`

	// ProjectID is the project context.
	ProjectID string `json:"project_id"`

	// ExplicitPos is the count of positive explicit signals.
	ExplicitPos int `json:"explicit_pos"`

	// ExplicitNeg is the count of negative explicit signals.
	ExplicitNeg int `json:"explicit_neg"`

	// UsagePos is the count of positive usage signals.
	UsagePos int `json:"usage_pos"`

	// UsageNeg is the count of negative usage signals.
	UsageNeg int `json:"usage_neg"`

	// OutcomePos is the count of positive outcome signals.
	OutcomePos int `json:"outcome_pos"`

	// OutcomeNeg is the count of negative outcome signals.
	OutcomeNeg int `json:"outcome_neg"`

	// LastRollup is when signals were last rolled up into this aggregate.
	LastRollup time.Time `json:"last_rollup"`
}

SignalAggregate stores rolled-up signal counts for data older than 30 days.

Instead of storing individual events forever, old signals are aggregated into counts per signal type per memory. This provides storage efficiency while preserving the statistical information needed for confidence calculation.

func NewSignalAggregate added in v0.3.0

func NewSignalAggregate(memoryID, projectID string) *SignalAggregate

NewSignalAggregate creates a new SignalAggregate with zero counts.

func (*SignalAggregate) AddSignal added in v0.3.0

func (agg *SignalAggregate) AddSignal(signalType SignalType, positive bool)

AddSignal increments the appropriate counter based on signal type and polarity.

type SignalStore added in v0.3.0

type SignalStore interface {
	// StoreSignal persists a new signal event.
	StoreSignal(ctx context.Context, signal *Signal) error

	// GetRecentSignals retrieves signals within the given duration.
	GetRecentSignals(ctx context.Context, memoryID string, duration time.Duration) ([]Signal, error)

	// StoreAggregate persists an aggregate for a memory.
	StoreAggregate(ctx context.Context, agg *SignalAggregate) error

	// GetAggregate retrieves the aggregate for a memory.
	GetAggregate(ctx context.Context, memoryID string) (*SignalAggregate, error)

	// StoreProjectWeights persists project weights.
	StoreProjectWeights(ctx context.Context, weights *ProjectWeights) error

	// GetProjectWeights retrieves weights for a project.
	// Returns default weights if none exist.
	GetProjectWeights(ctx context.Context, projectID string) (*ProjectWeights, error)

	// RollupOldSignals moves signals older than the cutoff into aggregates.
	// This is called by a background worker daily.
	RollupOldSignals(ctx context.Context, memoryID string, cutoff time.Duration) error
}

SignalStore defines the interface for signal persistence.

Implementations can use vectorstore, SQL database, or in-memory storage. The interface supports the hybrid storage model:

  • Individual signals (last 30 days)
  • Rolled-up aggregates (older than 30 days)
  • Per-project weight learning

type SignalType added in v0.3.0

type SignalType string

SignalType identifies the source of a confidence signal.

const (
	// SignalExplicit is from memory_feedback tool - user rates helpful/unhelpful.
	SignalExplicit SignalType = "explicit"

	// SignalUsage is from memory_search tool - memory retrieved in search results.
	SignalUsage SignalType = "usage"

	// SignalOutcome is from memory_outcome tool - agent reports task success/failure.
	SignalOutcome SignalType = "outcome"
)

type SimilarityCluster added in v0.3.0

type SimilarityCluster struct {
	// Members contains all memories in this similarity cluster.
	Members []*Memory `json:"members"`

	// CentroidVector is the average embedding vector of all cluster members.
	// Used to represent the cluster's semantic center.
	CentroidVector []float32 `json:"centroid_vector,omitempty"`

	// AverageSimilarity is the mean pairwise similarity score between cluster members.
	// Range: 0.0 to 1.0, where 1.0 means all members are identical.
	AverageSimilarity float64 `json:"average_similarity"`

	// MinSimilarity is the lowest pairwise similarity score in the cluster.
	// Indicates the cluster's cohesion - higher values mean tighter clustering.
	MinSimilarity float64 `json:"min_similarity"`
}

SimilarityCluster represents a group of similar memories detected during consolidation.

The Distiller uses vector similarity search to find clusters of related memories that can be merged. Each cluster contains memories above a similarity threshold and statistics about their relationships.

type Stats added in v0.3.0

type Stats struct {
	LastConfidence float64
}

Stats contains memory service statistics for statusline display.
