memory

package
v0.8.5
Published: Jan 17, 2026 License: MIT Imports: 11 Imported by: 0

README

Memory Management Strategies

This package provides various memory management strategies for AI agents, optimized for different use cases and token efficiency.

Overview

Memory management is crucial for AI agents to maintain context while controlling token costs. This package implements multiple strategies based on research in optimizing AI agent memory.

Strategies

1. Sequential Memory (Keep-It-All)

Use Case: When you need perfect recall and token cost is not a concern

Pros:

  • Perfect recall of all interactions
  • Simple implementation
  • No information loss

Cons:

  • Unbounded token growth
  • Can become very expensive
  • No optimization

Example:

mem := memory.NewSequentialMemory()

msg := memory.NewMessage("user", "Hello, AI!")
mem.AddMessage(ctx, msg)

response := memory.NewMessage("assistant", "Hello! How can I help?")
mem.AddMessage(ctx, response)

// Get all messages
messages, _ := mem.GetContext(ctx, "")
2. Sliding Window Memory

Use Case: Maintaining recent conversation context with bounded size

Pros:

  • Prevents unbounded growth
  • Maintains recent conversation flow
  • Simple and predictable

Cons:

  • Loses older context
  • May forget important earlier information

Example:

// Keep only last 10 messages
mem := memory.NewSlidingWindowMemory(10)

for i := 0; i < 20; i++ {
    msg := memory.NewMessage("user", fmt.Sprintf("Message %d", i))
    mem.AddMessage(ctx, msg)
}

// Only last 10 messages retained
messages, _ := mem.GetContext(ctx, "")
3. Summarization-Based Memory

Use Case: Long conversations where historical context matters but needs compression

Pros:

  • Maintains historical awareness
  • Reduces token consumption
  • Preserves important information

Cons:

  • Requires LLM calls for summarization
  • May lose specific details
  • Summary quality depends on LLM

Example:

mem := memory.NewSummarizationMemory(&memory.SummarizationConfig{
    RecentWindowSize: 10,   // Keep last 10 messages full
    SummarizeAfter:   20,   // Summarize when exceeds 20
    Summarizer: func(ctx context.Context, messages []*Message) (string, error) {
        // Call your LLM to generate summary
        return llm.Summarize(messages)
    },
})

// As messages accumulate, older ones are automatically summarized
4. Retrieval-Based Memory

Use Case: Large conversation histories where only relevant context is needed

Pros:

  • Highly efficient token usage
  • Retrieves only relevant information
  • Scales well with large histories

Cons:

  • Requires embedding model
  • May miss chronologically important context
  • Additional latency for embedding generation

Example:

mem := memory.NewRetrievalMemory(&memory.RetrievalConfig{
    TopK: 5, // Retrieve top 5 most relevant messages
    EmbeddingFunc: func(ctx context.Context, text string) ([]float64, error) {
        // Call embedding API (e.g., OpenAI embeddings)
        return openai.CreateEmbedding(text)
    },
})

// Add many messages
for _, msg := range manyMessages {
    mem.AddMessage(ctx, msg)
}

// Retrieve only relevant ones
relevantMessages, _ := mem.GetContext(ctx, "Tell me about pricing")
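Under the hood, retrieval ranks stored messages by embedding similarity. A self-contained sketch of cosine-similarity top-K selection (the package's actual scoring may differ):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the indices of the k stored vectors most similar to query.
func topK(query []float64, stored [][]float64, k int) []int {
	idx := make([]int, len(stored))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(x, y int) bool {
		return cosine(query, stored[idx[x]]) > cosine(query, stored[idx[y]])
	})
	if k > len(idx) {
		k = len(idx)
	}
	return idx[:k]
}

func main() {
	stored := [][]float64{{1, 0}, {0, 1}, {0.9, 0.1}}
	fmt.Println(topK([]float64{1, 0}, stored, 2)) // [0 2], most similar first
}
```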
5. Hierarchical Memory

Use Case: Complex conversations with varying importance levels

Pros:

  • Balances recency and importance
  • Flexible prioritization
  • Maintains critical information

Cons:

  • More complex management
  • Requires importance scoring
  • Higher implementation complexity

Example:

mem := memory.NewHierarchicalMemory(&memory.HierarchicalConfig{
    RecentLimit:    10,  // Recent messages
    ImportantLimit: 20,  // Important messages
    ImportanceScorer: func(msg *Message) float64 {
        // Custom scoring logic
        if strings.Contains(msg.Content, "IMPORTANT") {
            return 0.9
        }
        return 0.5
    },
})

// Mark important messages
importantMsg := memory.NewMessage("user", "IMPORTANT: Remember this rule")
importantMsg.Metadata["importance"] = 0.95
mem.AddMessage(ctx, importantMsg)
6. Buffer Memory

Use Case: General-purpose memory with flexible limits (similar to LangChain)

Pros:

  • Flexible configuration
  • Optional auto-summarization
  • Can limit by messages or tokens

Cons:

  • May need tuning for optimal performance

Example:

mem := memory.NewBufferMemory(&memory.BufferConfig{
    MaxMessages:   50,    // Limit to 50 messages
    MaxTokens:     2000,  // Or 2000 tokens, whichever comes first
    AutoSummarize: true,  // Auto-summarize when limits exceeded
})

// Messages automatically managed
7. Graph-Based Memory

Use Case: When you need to track relationships between topics and messages

Pros:

  • Captures relationships between topics
  • Better context understanding through connections
  • Query-relevant retrieval via graph traversal

Cons:

  • More complex implementation
  • Requires relationship tracking
  • Higher memory overhead for graph structure

Example:

mem := memory.NewGraphBasedMemory(&memory.GraphConfig{
    TopK: 5, // Retrieve top 5 related messages
    RelationExtractor: func(msg *Message) []string {
        // Custom logic to extract topics/entities
        // Default uses simple keyword matching
        return extractTopics(msg.Content)
    },
})

// Messages are connected based on shared topics
msg1 := memory.NewMessage("user", "What's the price?")
mem.AddMessage(ctx, msg1)

msg2 := memory.NewMessage("assistant", "The price is $99")
mem.AddMessage(ctx, msg2)

// Later queries retrieve connected messages
messages, _ := mem.GetContext(ctx, "price information")
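The shared-topic linking can be sketched with a standalone index from topics to message IDs (a simplified stand-in for the package's graph structure):

```go
package main

import (
	"fmt"
	"strings"
)

// topicIndex links messages that mention the same topic keyword.
type topicIndex struct {
	topics map[string][]string // topic -> message IDs
}

func newTopicIndex() *topicIndex {
	return &topicIndex{topics: map[string][]string{}}
}

// add records the message under every topic keyword it contains.
func (t *topicIndex) add(id, content string, keywords []string) {
	lower := strings.ToLower(content)
	for _, kw := range keywords {
		if strings.Contains(lower, kw) {
			t.topics[kw] = append(t.topics[kw], id)
		}
	}
}

// related returns message IDs connected to the query through shared topics.
func (t *topicIndex) related(query string, keywords []string) []string {
	var ids []string
	lower := strings.ToLower(query)
	for _, kw := range keywords {
		if strings.Contains(lower, kw) {
			ids = append(ids, t.topics[kw]...)
		}
	}
	return ids
}

func main() {
	keywords := []string{"price", "weather"}
	idx := newTopicIndex()
	idx.add("m1", "What's the price?", keywords)
	idx.add("m2", "The price is $99", keywords)
	idx.add("m3", "It is sunny", keywords)
	fmt.Println(idx.related("price information", keywords)) // [m1 m2]
}
```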
8. Compression Memory

Use Case: Long conversations where aggressive compression is needed

Pros:

  • Maintains long-term context efficiently
  • Removes redundancy through compression
  • Consolidates old blocks to save space

Cons:

  • Compression requires LLM calls
  • May lose granular details
  • More complex management

Example:

mem := memory.NewCompressionMemory(&memory.CompressionConfig{
    CompressionTrigger: 20,           // Compress after 20 messages
    ConsolidateAfter:   time.Hour,    // Consolidate blocks after 1 hour
    Compressor: func(ctx context.Context, messages []*Message) (*CompressedBlock, error) {
        // Use LLM to compress messages
        return llm.Compress(messages)
    },
})

// Messages are automatically compressed and consolidated
for i := 0; i < 100; i++ {
    msg := memory.NewMessage("user", fmt.Sprintf("Message %d", i))
    mem.AddMessage(ctx, msg)
}

// Returns compressed blocks + recent messages
messages, _ := mem.GetContext(ctx, "")
9. OS-Like Memory

Use Case: When you need sophisticated memory management like operating systems

Pros:

  • Sophisticated lifecycle management (active/cache/archive)
  • Optimal memory usage with paging
  • LRU eviction for automatic cleanup

Cons:

  • Complex implementation
  • Overhead of management structures
  • Requires tuning of limits

Example:

mem := memory.NewOSLikeMemory(&memory.OSLikeConfig{
    ActiveLimit:  10,             // Max pages in active memory (RAM)
    CacheLimit:   20,             // Max pages in cache
    AccessWindow: time.Minute * 5, // Access tracking window
})

// Memory automatically managed in 3 tiers
for i := 0; i < 100; i++ {
    msg := memory.NewMessage("user", fmt.Sprintf("Message %d", i))
    mem.AddMessage(ctx, msg)
}

// Most recent and frequently accessed in active memory
// Less recent in cache
// Rarely accessed in archive
messages, _ := mem.GetContext(ctx, "")

// Get detailed memory info
info := mem.GetMemoryInfo()
fmt.Printf("Active pages: %d\n", info["active_pages"])
fmt.Printf("Cached pages: %d\n", info["cached_pages"])
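The LRU eviction mentioned above can be sketched with Go's container/list; this is a standalone illustration, not the package's internal implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a minimal least-recently-used tracker: touch moves an ID to the
// front; evict removes and returns the least recently used ID.
type lru struct {
	order *list.List               // front = most recently used
	elems map[string]*list.Element // id -> list element
}

func newLRU() *lru {
	return &lru{order: list.New(), elems: map[string]*list.Element{}}
}

func (l *lru) touch(id string) {
	if e, ok := l.elems[id]; ok {
		l.order.MoveToFront(e)
		return
	}
	l.elems[id] = l.order.PushFront(id)
}

func (l *lru) evict() (string, bool) {
	e := l.order.Back()
	if e == nil {
		return "", false
	}
	l.order.Remove(e)
	id := e.Value.(string)
	delete(l.elems, id)
	return id, true
}

func main() {
	l := newLRU()
	l.touch("page1")
	l.touch("page2")
	l.touch("page1") // page1 becomes most recently used
	id, _ := l.evict()
	fmt.Println(id) // page2 is least recently used
}
```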

Interface

All strategies implement the Memory interface:

type Memory interface {
    // Add a message to memory
    AddMessage(ctx context.Context, msg *Message) error

    // Get relevant context for current query
    GetContext(ctx context.Context, query string) ([]*Message, error)

    // Clear all memory
    Clear(ctx context.Context) error

    // Get statistics
    GetStats(ctx context.Context) (*Stats, error)
}

Message Structure

type Message struct {
    ID         string         // Unique identifier
    Role       string         // "user", "assistant", "system"
    Content    string         // Message content
    Timestamp  time.Time      // Creation time
    Metadata   map[string]any // Additional metadata
    TokenCount int            // Estimated tokens
}

Statistics

All strategies provide statistics:

stats, _ := mem.GetStats(ctx)
fmt.Printf("Total Messages: %d\n", stats.TotalMessages)
fmt.Printf("Active Messages: %d\n", stats.ActiveMessages)
fmt.Printf("Total Tokens: %d\n", stats.TotalTokens)
fmt.Printf("Compression Rate: %.2f\n", stats.CompressionRate)
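The compression rate can be read as the ratio of compressed to original tokens (an assumption about how Stats.CompressionRate is defined; lower means more savings):

```go
package main

import "fmt"

// compressionRate reports compressed tokens as a fraction of the original
// (assumption: this matches Stats.CompressionRate; lower is better).
func compressionRate(originalTokens, compressedTokens int) float64 {
	if originalTokens == 0 {
		return 1.0
	}
	return float64(compressedTokens) / float64(originalTokens)
}

func main() {
	// 4000 tokens of history summarized down to 500 tokens.
	fmt.Printf("%.2f\n", compressionRate(4000, 500))
}
```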

Choosing a Strategy

Scenario Recommended Strategy
Short conversations, low cost concern Sequential
Chat with bounded history Sliding Window
Long conversations, need compression Summarization
Large knowledge base, query-driven Retrieval
Complex multi-topic conversations Hierarchical
General purpose, flexible Buffer
Relationship tracking between topics Graph-Based
Aggressive compression with consolidation Compression
Sophisticated memory lifecycle management OS-Like

Integration Example

// Create your preferred strategy
strategy := memory.NewSlidingWindowMemory(20)

// Add messages as conversation progresses
userMsg := memory.NewMessage("user", "What's the weather?")
strategy.AddMessage(ctx, userMsg)

// Get context for LLM
messages, _ := strategy.GetContext(ctx, "current query")

// Format for your LLM
prompt := formatMessagesForLLM(messages)
response := llm.Generate(prompt)

// Add response to memory
assistantMsg := memory.NewMessage("assistant", response)
strategy.AddMessage(ctx, assistantMsg)

Advanced Usage

Custom Importance Scorer
scorer := func(msg *Message) float64 {
    score := 0.5

    // Boost system messages
    if msg.Role == "system" {
        score += 0.3
    }

    // Boost messages with keywords
    if strings.Contains(msg.Content, "remember") {
        score += 0.2
    }

    return math.Min(score, 1.0)
}

mem := memory.NewHierarchicalMemory(&memory.HierarchicalConfig{
    ImportanceScorer: scorer,
})
Custom Summarizer
summarizer := func(ctx context.Context, messages []*Message) (string, error) {
    // Use your LLM
    prompt := "Summarize the following conversation:\n\n"
    for _, msg := range messages {
        prompt += fmt.Sprintf("%s: %s\n", msg.Role, msg.Content)
    }

    return llm.Complete(ctx, prompt)
}

mem := memory.NewSummarizationMemory(&memory.SummarizationConfig{
    Summarizer: summarizer,
})
Custom Embeddings
embedder := func(ctx context.Context, text string) ([]float64, error) {
    // Use OpenAI, Cohere, or your embedding model
    return openai.CreateEmbedding(ctx, text)
}

mem := memory.NewRetrievalMemory(&memory.RetrievalConfig{
    EmbeddingFunc: embedder,
})

Testing

Run tests:

go test ./memory -v

References

  • Based on research from optimize-ai-agent-memory
  • Implements patterns similar to LangChain's memory systems
  • Optimized for Go and LangGraphGo integration

Documentation

Overview

Package memory provides various memory management strategies for conversational AI applications.

This package implements multiple approaches to managing conversation history and context, from simple buffers to sophisticated OS-inspired memory management with paging and eviction. It's designed to help maintain relevant context within token limits while preserving important information from long conversations.

Core Interface

The Memory interface defines the contract that all memory strategies must implement:

  • AddMessage: Add a new message to memory
  • GetContext: Retrieve relevant context for the current query
  • Clear: Remove all messages from memory
  • GetStats: Get statistics about memory usage

Available Memory Strategies

Buffer Memory

A simple first-in-first-out buffer with configurable limits:

buffer := memory.NewBufferMemory(&memory.BufferConfig{
	MaxMessages: 100, // Keep last 100 messages
})
buffer.AddMessage(ctx, message)
msgs, _ := buffer.GetContext(ctx, "current query")

Sliding Window Memory

Maintains a sliding window of the most recent messages:

window := memory.NewSlidingWindowMemory(50) // Keep last 50 messages

Summarization Memory

Automatically summarizes older messages to save tokens:

summ := memory.NewSummarizationMemory(&memory.SummarizationConfig{
	RecentWindowSize: 10, // Keep last 10 messages in full
	SummarizeAfter:   20, // Summarize once history exceeds 20
})

Hierarchical Memory

Multi-level memory with different retention policies:

hierarchical := memory.NewHierarchicalMemory(&memory.HierarchicalConfig{
	RecentLimit:    50,   // Recent messages kept in full
	ImportantLimit: 1000, // High-importance messages retained
})

OS-Inspired Memory

Sophisticated memory management with active, cached, and archived pages:

osMemory := memory.NewOSLikeMemory(&memory.OSLikeConfig{
	ActiveLimit:  100,
	CacheLimit:   500,
	AccessWindow: time.Hour,
})

Graph-Based Memory

Organizes messages as a topic graph for better context retrieval:

graphMemory := memory.NewGraphBasedMemory(&memory.GraphConfig{
	TopK: 5, // Related messages to retrieve per query
})

Message Structure

Each message contains:

type Message struct {
	ID         string         // Unique identifier
	Role       string         // "user", "assistant", "system"
	Content    string         // Message content
	Timestamp  time.Time      // When created
	Metadata   map[string]any // Additional metadata
	TokenCount int            // Approximate token count
}

Example Usage

Basic Buffer Memory

import (
	"context"
	"fmt"

	"github.com/smallnest/langgraphgo/memory"
)

ctx := context.Background()
mem := memory.NewBufferMemory(&memory.BufferConfig{MaxMessages: 50})

// Add messages
mem.AddMessage(ctx, memory.NewMessage("user", "Hello!"))
mem.AddMessage(ctx, memory.NewMessage("assistant", "Hi there!"))

// Get context for the next query
history, err := mem.GetContext(ctx, "How are you?")
if err != nil {
	return err
}

// Use the history in an LLM prompt
prompt := ""
for _, msg := range history {
	prompt += fmt.Sprintf("%s: %s\n", msg.Role, msg.Content)
}

Summarization Memory

// Provide a summarizer function backed by your LLM
mem := memory.NewSummarizationMemory(&memory.SummarizationConfig{
	RecentWindowSize: 10,
	SummarizeAfter:   20,
	Summarizer: func(ctx context.Context, messages []*memory.Message) (string, error) {
		// Call your LLM to summarize older messages
		return "", nil
	},
})

// Add many messages - older ones will be summarized
for i := 0; i < 100; i++ {
	mem.AddMessage(ctx, &memory.Message{
		Role:    "user",
		Content: fmt.Sprintf("Message %d", i),
	})
}

// Context will include recent messages + summaries of older ones
context, _ := mem.GetContext(ctx, "latest query")

Hierarchical Memory

config := &memory.HierarchicalConfig{
	RecentLimit:    20,  // Recent messages kept in full
	ImportantLimit: 200, // High-importance messages retained
	ImportanceScorer: func(msg *memory.Message) float64 {
		if v, ok := msg.Metadata["importance"].(float64); ok {
			return v
		}
		return 0.5
	},
}

mem := memory.NewHierarchicalMemory(config)

// Messages with metadata can be marked as important
mem.AddMessage(ctx, &memory.Message{
	Role:    "user",
	Content: "Critical information",
	Metadata: map[string]any{"importance": 0.9},
})

Memory Statistics

All implementations provide statistics:

stats, _ := mem.GetStats(ctx)
fmt.Printf("Total messages: %d\n", stats.TotalMessages)
fmt.Printf("Total tokens: %d\n", stats.TotalTokens)
fmt.Printf("Active tokens: %d\n", stats.ActiveTokens)
fmt.Printf("Compression rate: %.2f\n", stats.CompressionRate)

Integration with LangChain

The package includes adapters for langchaingo compatibility:

// Adapt a langchaingo conversation buffer to the ChatMemory interface
chatMem := memory.NewConversationBufferMemory()

Compression Strategies

For long conversations, the package provides compression with periodic consolidation:

mem := memory.NewCompressionMemory(&memory.CompressionConfig{
	CompressionTrigger: 20, // Compress after 20 messages
	Compressor: func(ctx context.Context, messages []*memory.Message) (*memory.CompressedBlock, error) {
		// Use an LLM to compress the messages into a block
		return nil, nil
	},
})

Retrieval-Augmented Memory

Use vector embeddings for semantic retrieval:

retriever := memory.NewRetrievalMemory(&memory.RetrievalConfig{
	TopK: 5, // Number of messages to retrieve
	EmbeddingFunc: func(ctx context.Context, text string) ([]float64, error) {
		// Call your embedding API
		return nil, nil
	},
})

Choosing a Strategy

  • Buffer: Simple conversations, fixed context size
  • Sliding Window: Need some context continuity
  • Summarization: Long conversations where history must be kept in compressed form
  • Hierarchical: Complex applications with different retention needs
  • OS-Inspired: Performance-critical applications with access patterns
  • Graph-Based: Semantic relationships between messages matter
  • Retrieval: Need to find relevant messages based on content similarity

Thread Safety

All memory implementations are thread-safe and can be used concurrently from multiple goroutines. They use internal mutexes or atomic operations for synchronization.

Custom Memory Strategies

Implement the Memory interface for custom strategies:

type CustomMemory struct {
	// Custom fields
}

func (m *CustomMemory) AddMessage(ctx context.Context, msg *memory.Message) error {
	// Custom implementation
	return nil
}

func (m *CustomMemory) GetContext(ctx context.Context, query string) ([]*memory.Message, error) {
	// Custom retrieval logic
	return nil, nil
}

func (m *CustomMemory) Clear(ctx context.Context) error {
	// Clear memory
	return nil
}

func (m *CustomMemory) GetStats(ctx context.Context) (*memory.Stats, error) {
	// Return statistics
	return nil, nil
}

Best Practices

  1. Choose appropriate strategy based on your use case
  2. Monitor memory usage with GetStats()
  3. Set reasonable limits to prevent memory bloat
  4. Use metadata to mark important messages
  5. Consider token costs when using LLM-based summarization
  6. Test with realistic conversation lengths
  7. Clear memory between unrelated conversations
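Practices 2 and 3 combine naturally into a budget check; a hypothetical helper fed from GetStats values (thresholds are application-specific):

```go
package main

import "fmt"

// overBudget flags when memory should be trimmed or summarized
// (hypothetical helper; not part of the package API).
func overBudget(totalTokens, maxTokens, totalMessages, maxMessages int) bool {
	return totalTokens > maxTokens || totalMessages > maxMessages
}

func main() {
	// e.g. values read from mem.GetStats(ctx)
	fmt.Println(overBudget(2500, 2000, 40, 50)) // true: token limit exceeded
}
```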

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BufferConfig

type BufferConfig struct {
	MaxMessages   int  // Maximum number of messages (0 = unlimited)
	MaxTokens     int  // Maximum total tokens (0 = unlimited)
	AutoSummarize bool // Enable automatic summarization
	Summarizer    func(ctx context.Context, messages []*Message) (string, error)
}

BufferConfig holds configuration for buffer memory

type BufferMemory

type BufferMemory struct {

	// Optional summarizer
	Summarizer func(ctx context.Context, messages []*Message) (string, error)
	// contains filtered or unexported fields
}

BufferMemory is a simple buffer-based memory implementation, similar to LangChain's ConversationBufferMemory. It combines a sliding window with optional summarization.

func NewBufferMemory

func NewBufferMemory(config *BufferConfig) *BufferMemory

NewBufferMemory creates a new buffer memory

func (*BufferMemory) AddMessage

func (b *BufferMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message to the buffer

func (*BufferMemory) Clear

func (b *BufferMemory) Clear(ctx context.Context) error

Clear removes all messages

func (*BufferMemory) GetContext

func (b *BufferMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns all messages in the buffer

func (*BufferMemory) GetMessages

func (b *BufferMemory) GetMessages() []*Message

GetMessages returns a copy of all messages

func (*BufferMemory) GetStats

func (b *BufferMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns buffer memory statistics

func (*BufferMemory) LoadMessages

func (b *BufferMemory) LoadMessages(messages []*Message)

LoadMessages loads messages into the buffer (replaces existing)
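GetMessages and LoadMessages together enable snapshot and restore. A self-contained JSON sketch of that pattern (the package does not prescribe a serialization format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// snapshotMsg mirrors the fields needed to persist a message.
type snapshotMsg struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// snapshot serializes messages; restore deserializes them, so a buffer can
// later be rebuilt via LoadMessages.
func snapshot(msgs []snapshotMsg) ([]byte, error) { return json.Marshal(msgs) }

func restore(data []byte) ([]snapshotMsg, error) {
	var msgs []snapshotMsg
	err := json.Unmarshal(data, &msgs)
	return msgs, err
}

func main() {
	data, _ := snapshot([]snapshotMsg{
		{Role: "user", Content: "Hello"},
		{Role: "assistant", Content: "Hi"},
	})
	msgs, _ := restore(data)
	fmt.Println(len(msgs), msgs[0].Content) // 2 Hello
}
```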

type ChatMemory

type ChatMemory interface {
	// SaveContext saves the context from this conversation to buffer
	SaveContext(ctx context.Context, inputValues map[string]any, outputValues map[string]any) error
	// LoadMemoryVariables loads memory variables
	LoadMemoryVariables(ctx context.Context, inputs map[string]any) (map[string]any, error)
	// Clear clears memory contents
	Clear(ctx context.Context) error
	// GetMessages returns all messages in memory
	GetMessages(ctx context.Context) ([]llms.ChatMessage, error)
}

ChatMemory is the interface for conversation memory management in langgraphgo

type ChatMessageHistory

type ChatMessageHistory struct {
	// contains filtered or unexported fields
}

ChatMessageHistory provides direct access to chat message history

func NewChatMessageHistory

func NewChatMessageHistory(options ...langchainmemory.ChatMessageHistoryOption) *ChatMessageHistory

NewChatMessageHistory creates a new chat message history

func (*ChatMessageHistory) AddAIMessage

func (h *ChatMessageHistory) AddAIMessage(ctx context.Context, message string) error

AddAIMessage adds an AI message to the history

func (*ChatMessageHistory) AddMessage

func (h *ChatMessageHistory) AddMessage(ctx context.Context, message llms.ChatMessage) error

AddMessage adds a message to the history

func (*ChatMessageHistory) AddUserMessage

func (h *ChatMessageHistory) AddUserMessage(ctx context.Context, message string) error

AddUserMessage adds a user message to the history

func (*ChatMessageHistory) Clear

func (h *ChatMessageHistory) Clear(ctx context.Context) error

Clear clears all messages from the history

func (*ChatMessageHistory) GetHistory

GetHistory returns the underlying langchaingo ChatMessageHistory

func (*ChatMessageHistory) Messages

func (h *ChatMessageHistory) Messages(ctx context.Context) ([]llms.ChatMessage, error)

Messages returns all messages in the history

func (*ChatMessageHistory) SetMessages

func (h *ChatMessageHistory) SetMessages(ctx context.Context, messages []llms.ChatMessage) error

SetMessages sets the messages in the history

type CompressedBlock

type CompressedBlock struct {
	ID               string    // Unique block ID
	Summary          string    // Compressed summary
	OriginalCount    int       // Number of original messages
	OriginalTokens   int       // Original token count
	CompressedTokens int       // Compressed token count
	TimeRange        TimeRange // Time range of messages
	Topics           []string  // Main topics covered
}

CompressedBlock represents a compressed group of messages

type CompressionConfig

type CompressionConfig struct {
	CompressionTrigger int           // Messages before compression
	ConsolidateAfter   time.Duration // Duration before consolidation
	Compressor         func(ctx context.Context, messages []*Message) (*CompressedBlock, error)
	Consolidator       func(ctx context.Context, blocks []*CompressedBlock) (*CompressedBlock, error)
}

CompressionConfig holds configuration for compression memory

type CompressionMemory

type CompressionMemory struct {

	// Compressor compresses a group of messages into a single block
	Compressor func(ctx context.Context, messages []*Message) (*CompressedBlock, error)

	// Consolidator merges multiple compressed blocks
	Consolidator func(ctx context.Context, blocks []*CompressedBlock) (*CompressedBlock, error)
	// contains filtered or unexported fields
}

CompressionMemory periodically compresses and consolidates memory. Pros: maintains long-term context efficiently, removes redundancy. Cons: compression requires LLM calls, may lose granular details.

func NewCompressionMemory

func NewCompressionMemory(config *CompressionConfig) *CompressionMemory

NewCompressionMemory creates a new compression-based memory strategy

func (*CompressionMemory) AddMessage

func (c *CompressionMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message and triggers compression if needed

func (*CompressionMemory) Clear

func (c *CompressionMemory) Clear(ctx context.Context) error

Clear removes all memory

func (*CompressionMemory) ForceCompression

func (c *CompressionMemory) ForceCompression(ctx context.Context) error

ForceCompression manually triggers compression

func (*CompressionMemory) ForceConsolidation

func (c *CompressionMemory) ForceConsolidation(ctx context.Context) error

ForceConsolidation manually triggers consolidation

func (*CompressionMemory) GetContext

func (c *CompressionMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns compressed blocks and recent messages

func (*CompressionMemory) GetStats

func (c *CompressionMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns compression statistics

type GraphBasedMemory

type GraphBasedMemory struct {

	// RelationExtractor identifies relationships between messages
	// In production, this could use NER or topic modeling
	RelationExtractor func(msg *Message) []string
	// contains filtered or unexported fields
}

GraphBasedMemory models conversations as knowledge graphs. Pros: captures relationships between topics, better context understanding. Cons: more complex, requires relationship tracking.

func NewGraphBasedMemory

func NewGraphBasedMemory(config *GraphConfig) *GraphBasedMemory

NewGraphBasedMemory creates a new graph-based memory strategy

func (*GraphBasedMemory) AddMessage

func (g *GraphBasedMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message to the graph and establishes connections

func (*GraphBasedMemory) Clear

func (g *GraphBasedMemory) Clear(ctx context.Context) error

Clear removes all nodes and relationships

func (*GraphBasedMemory) GetContext

func (g *GraphBasedMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext retrieves messages based on graph traversal, using breadth-first search starting from the most recent messages.

func (*GraphBasedMemory) GetRelationships

func (g *GraphBasedMemory) GetRelationships() map[string]int

GetRelationships returns all topics and their associated message counts

func (*GraphBasedMemory) GetStats

func (g *GraphBasedMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about the graph

type GraphConfig

type GraphConfig struct {
	TopK              int                         // Number of messages to retrieve
	RelationExtractor func(msg *Message) []string // Custom relation extractor
}

GraphConfig holds configuration for graph-based memory

type GraphNode

type GraphNode struct {
	Message     *Message
	Connections []string // IDs of connected messages
	Weight      float64  // Importance/relevance weight
}

GraphNode represents a node in the conversation graph

type HierarchicalConfig

type HierarchicalConfig struct {
	RecentLimit      int                        // Max recent messages
	ImportantLimit   int                        // Max important messages
	ImportanceScorer func(msg *Message) float64 // Custom importance scorer
}

HierarchicalConfig holds configuration for hierarchical memory

type HierarchicalMemory

type HierarchicalMemory struct {

	// ImportanceScorer determines message importance (0.0 to 1.0)
	// Higher scores = more important
	ImportanceScorer func(msg *Message) float64
	// contains filtered or unexported fields
}

HierarchicalMemory organizes messages in layers based on importance and recency. Pros: balances recent context with important historical information. Cons: more complex management, requires importance scoring.

func NewHierarchicalMemory

func NewHierarchicalMemory(config *HierarchicalConfig) *HierarchicalMemory

NewHierarchicalMemory creates a new hierarchical memory strategy

func (*HierarchicalMemory) AddMessage

func (h *HierarchicalMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message and organizes it into appropriate layer

func (*HierarchicalMemory) Clear

func (h *HierarchicalMemory) Clear(ctx context.Context) error

Clear removes all messages from all layers

func (*HierarchicalMemory) GetContext

func (h *HierarchicalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns messages from all layers, prioritized by importance

func (*HierarchicalMemory) GetStats

func (h *HierarchicalMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about hierarchical memory

type LangChainMemory

type LangChainMemory struct {
	// contains filtered or unexported fields
}

LangChainMemory adapts langchaingo's memory implementations to our ChatMemory interface

func NewConversationBufferMemory

func NewConversationBufferMemory(options ...langchainmemory.ConversationBufferOption) *LangChainMemory

NewConversationBufferMemory creates a new conversation buffer memory with default settings

func NewConversationTokenBufferMemory

func NewConversationTokenBufferMemory(llm llms.Model, maxTokenLimit int, options ...langchainmemory.ConversationBufferOption) *LangChainMemory

NewConversationTokenBufferMemory creates a new conversation token buffer memory that keeps conversation history within a token limit

func NewConversationWindowBufferMemory

func NewConversationWindowBufferMemory(windowSize int, options ...langchainmemory.ConversationBufferOption) *LangChainMemory

NewConversationWindowBufferMemory creates a new conversation window buffer memory that keeps only the last N conversation turns

func NewLangChainMemory

func NewLangChainMemory(buffer schema.Memory) *LangChainMemory

NewLangChainMemory creates a new adapter for langchaingo memory. Supports ConversationBuffer, ConversationWindowBuffer, ConversationTokenBuffer, etc.

func (*LangChainMemory) Clear

func (m *LangChainMemory) Clear(ctx context.Context) error

Clear clears memory contents

func (*LangChainMemory) GetMessages

func (m *LangChainMemory) GetMessages(ctx context.Context) ([]llms.ChatMessage, error)

GetMessages returns all messages in memory. This is a convenience method that extracts messages from the memory buffer.

func (*LangChainMemory) LoadMemoryVariables

func (m *LangChainMemory) LoadMemoryVariables(ctx context.Context, inputs map[string]any) (map[string]any, error)

LoadMemoryVariables loads memory variables

func (*LangChainMemory) SaveContext

func (m *LangChainMemory) SaveContext(ctx context.Context, inputValues map[string]any, outputValues map[string]any) error

SaveContext saves the context from this conversation to buffer

type Memory

type Memory interface {
	// AddMessage adds a new message to memory
	AddMessage(ctx context.Context, msg *Message) error

	// GetContext retrieves relevant context for the current conversation
	// Returns messages that should be included in the LLM prompt
	GetContext(ctx context.Context, query string) ([]*Message, error)

	// Clear removes all messages from memory
	Clear(ctx context.Context) error

	// GetStats returns statistics about the current memory state
	GetStats(ctx context.Context) (*Stats, error)
}

Memory defines the interface for memory management strategies. All memory strategies must implement these methods.

type MemoryPage

type MemoryPage struct {
	ID          string
	Messages    []*Message
	LastAccess  time.Time
	AccessCount int
	Priority    int  // Higher priority = less likely to be evicted
	Dirty       bool // Has been modified
	Size        int  // Token count
}

MemoryPage represents a page of memory (like OS paging)

type Message

type Message struct {
	ID         string         // Unique identifier
	Role       string         // "user", "assistant", "system"
	Content    string         // Message content
	Timestamp  time.Time      // When the message was created
	Metadata   map[string]any // Additional metadata
	TokenCount int            // Approximate token count
}

Message represents a single conversation message

func NewMessage

func NewMessage(role, content string) *Message

NewMessage creates a new message with the given role and content

type OSLikeConfig

type OSLikeConfig struct {
	ActiveLimit  int           // Pages in active memory
	CacheLimit   int           // Pages in cache
	AccessWindow time.Duration // Access tracking window
}

OSLikeConfig holds configuration for OS-like memory

type OSLikeMemory

type OSLikeMemory struct {
	// contains filtered or unexported fields
}

OSLikeMemory implements OS-inspired memory management with paging and eviction. Pros: sophisticated lifecycle management, optimal memory usage. Cons: complex implementation, management overhead.
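A construction sketch using the OSLikeConfig fields documented above (the limit and window values are illustrative, not recommended defaults):

```go
mem := memory.NewOSLikeMemory(&memory.OSLikeConfig{
	ActiveLimit:  4,                // pages kept in active memory
	CacheLimit:   16,               // pages kept in cache
	AccessWindow: 10 * time.Minute, // window for access tracking
})

mem.AddMessage(ctx, memory.NewMessage("user", "Hello"))
fmt.Println(mem.GetMemoryInfo()) // detailed memory usage
```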

func NewOSLikeMemory

func NewOSLikeMemory(config *OSLikeConfig) *OSLikeMemory

NewOSLikeMemory creates a new OS-like memory strategy

func (*OSLikeMemory) AddMessage

func (o *OSLikeMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message using OS-like memory management

func (*OSLikeMemory) Clear

func (o *OSLikeMemory) Clear(ctx context.Context) error

Clear removes all memory

func (*OSLikeMemory) GetContext

func (o *OSLikeMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext retrieves messages from memory hierarchy

func (*OSLikeMemory) GetMemoryInfo

func (o *OSLikeMemory) GetMemoryInfo() map[string]any

GetMemoryInfo returns detailed information about memory usage

func (*OSLikeMemory) GetStats

func (o *OSLikeMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns OS-like memory statistics

type RetrievalConfig

type RetrievalConfig struct {
	TopK          int                                                       // Number of messages to retrieve
	EmbeddingFunc func(ctx context.Context, text string) ([]float64, error) // Custom embedding function
}

RetrievalConfig holds configuration for retrieval-based memory

type RetrievalMemory

type RetrievalMemory struct {

	// EmbeddingFunc generates embeddings for text
	// In production, this would call an embedding API like OpenAI embeddings
	EmbeddingFunc func(ctx context.Context, text string) ([]float64, error)
	// contains filtered or unexported fields
}

RetrievalMemory uses vector embeddings to retrieve relevant past messages. Pros: only fetches contextually relevant history, efficient token usage. Cons: requires an embedding model, may miss chronologically important context.
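A usage sketch with a toy letter-frequency embedding standing in for a real embedding API call (the toy function is purely illustrative; production code would call a service such as OpenAI embeddings, as the field comment notes):

```go
toyEmbed := func(ctx context.Context, text string) ([]float64, error) {
	// Toy embedding: 26-dim letter-frequency vector.
	vec := make([]float64, 26)
	for _, r := range strings.ToLower(text) {
		if r >= 'a' && r <= 'z' {
			vec[r-'a']++
		}
	}
	return vec, nil
}

mem := memory.NewRetrievalMemory(&memory.RetrievalConfig{
	TopK:          3,
	EmbeddingFunc: toyEmbed,
})

mem.AddMessage(ctx, memory.NewMessage("user", "My favorite language is Go"))
mem.AddMessage(ctx, memory.NewMessage("user", "I had pasta for lunch"))

// Retrieves the TopK messages most similar to the query.
relevant, _ := mem.GetContext(ctx, "programming languages")
```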

func NewRetrievalMemory

func NewRetrievalMemory(config *RetrievalConfig) *RetrievalMemory

NewRetrievalMemory creates a new retrieval-based memory strategy

func (*RetrievalMemory) AddMessage

func (r *RetrievalMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a message and generates its embedding

func (*RetrievalMemory) Clear

func (r *RetrievalMemory) Clear(ctx context.Context) error

Clear removes all messages and embeddings

func (*RetrievalMemory) GetContext

func (r *RetrievalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext retrieves the most semantically similar messages to the query

func (*RetrievalMemory) GetStats

func (r *RetrievalMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about retrieval memory

func (*RetrievalMemory) SetTopK

func (r *RetrievalMemory) SetTopK(k int)

SetTopK updates the number of messages to retrieve

type SequentialMemory

type SequentialMemory struct {
	// contains filtered or unexported fields
}

SequentialMemory implements the "Keep-It-All" strategy: it stores the complete conversation history in chronological order. Pros: perfect recall of all interactions. Cons: token costs grow unbounded with conversation length.

func NewSequentialMemory

func NewSequentialMemory() *SequentialMemory

NewSequentialMemory creates a new sequential memory strategy

func (*SequentialMemory) AddMessage

func (s *SequentialMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage appends a new message to the conversation history

func (*SequentialMemory) Clear

func (s *SequentialMemory) Clear(ctx context.Context) error

Clear removes all messages from memory

func (*SequentialMemory) GetContext

func (s *SequentialMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns all messages in chronological order. The query parameter is ignored for sequential memory.

func (*SequentialMemory) GetStats

func (s *SequentialMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about the sequential memory

type SlidingWindowMemory

type SlidingWindowMemory struct {
	// contains filtered or unexported fields
}

SlidingWindowMemory maintains only the most recent N messages. Pros: bounded context size, prevents unbounded token growth. Cons: loses older context, may forget important earlier information.

func NewSlidingWindowMemory

func NewSlidingWindowMemory(windowSize int) *SlidingWindowMemory

NewSlidingWindowMemory creates a new sliding window memory strategy. windowSize determines how many recent messages to keep.

func (*SlidingWindowMemory) AddMessage

func (s *SlidingWindowMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a new message, removing the oldest message if the window is full

func (*SlidingWindowMemory) Clear

func (s *SlidingWindowMemory) Clear(ctx context.Context) error

Clear removes all messages from memory

func (*SlidingWindowMemory) GetContext

func (s *SlidingWindowMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns messages within the sliding window

func (*SlidingWindowMemory) GetStats

func (s *SlidingWindowMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about the sliding window memory

func (*SlidingWindowMemory) GetWindowSize

func (s *SlidingWindowMemory) GetWindowSize() int

GetWindowSize returns the current window size

func (*SlidingWindowMemory) SetWindowSize

func (s *SlidingWindowMemory) SetWindowSize(size int)

SetWindowSize updates the window size. If the new size is smaller than the current message count, the oldest messages are removed.

type Stats

type Stats struct {
	TotalMessages   int     // Total number of messages stored
	TotalTokens     int     // Total tokens across all messages
	ActiveMessages  int     // Messages currently in active context
	ActiveTokens    int     // Tokens in active context
	CompressionRate float64 // Compression rate (if applicable)
}

Stats contains statistics about memory usage
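Every strategy reports these statistics via GetStats; a reading sketch (the interpretation of CompressionRate for non-compressing strategies is not documented here):

```go
stats, _ := mem.GetStats(ctx)
fmt.Printf("stored %d messages (%d tokens), %d active (%d tokens)\n",
	stats.TotalMessages, stats.TotalTokens,
	stats.ActiveMessages, stats.ActiveTokens)
if stats.CompressionRate > 0 {
	fmt.Printf("compression rate: %.2f\n", stats.CompressionRate)
}
```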

type SummarizationConfig

type SummarizationConfig struct {
	RecentWindowSize int                                                            // Number of recent messages to keep
	SummarizeAfter   int                                                            // Trigger summarization after this many messages
	Summarizer       func(ctx context.Context, messages []*Message) (string, error) // Custom summarizer
}

SummarizationConfig holds configuration for summarization memory

type SummarizationMemory

type SummarizationMemory struct {

	// Summarizer is a function that takes messages and returns a summary
	// In production, this would call an LLM
	Summarizer func(ctx context.Context, messages []*Message) (string, error)
	// contains filtered or unexported fields
}

SummarizationMemory condenses older messages into summaries. Pros: maintains historical context while reducing token count. Cons: may lose specific details in summarization.
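A construction sketch with a naive summarizer standing in for the LLM call the field comment describes (truncating and concatenating messages is illustrative only; a real summarizer would prompt a model):

```go
naiveSummarizer := func(ctx context.Context, msgs []*memory.Message) (string, error) {
	// Naive stand-in for an LLM call: concatenate truncated messages.
	var b strings.Builder
	b.WriteString("Earlier: ")
	for _, m := range msgs {
		c := m.Content
		if len(c) > 40 {
			c = c[:40] + "..."
		}
		fmt.Fprintf(&b, "[%s] %s ", m.Role, c)
	}
	return b.String(), nil
}

mem := memory.NewSummarizationMemory(&memory.SummarizationConfig{
	RecentWindowSize: 5,  // keep the 5 newest messages verbatim
	SummarizeAfter:   10, // summarize once 10 messages accumulate
	Summarizer:       naiveSummarizer,
})
```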

func NewSummarizationMemory

func NewSummarizationMemory(config *SummarizationConfig) *SummarizationMemory

NewSummarizationMemory creates a new summarization-based memory strategy

func (*SummarizationMemory) AddMessage

func (s *SummarizationMemory) AddMessage(ctx context.Context, msg *Message) error

AddMessage adds a new message and triggers summarization if needed

func (*SummarizationMemory) Clear

func (s *SummarizationMemory) Clear(ctx context.Context) error

Clear removes all messages and summaries

func (*SummarizationMemory) GetContext

func (s *SummarizationMemory) GetContext(ctx context.Context, query string) ([]*Message, error)

GetContext returns summaries plus recent messages

func (*SummarizationMemory) GetStats

func (s *SummarizationMemory) GetStats(ctx context.Context) (*Stats, error)

GetStats returns statistics about the summarization memory

type TimeRange

type TimeRange struct {
	Start time.Time
	End   time.Time
}

TimeRange represents a time period
