Documentation ¶
Overview ¶
Package memory provides various memory management strategies for conversational AI applications.
This package implements multiple approaches to managing conversation history and context, from simple buffers to sophisticated OS-inspired memory management with paging and eviction. It's designed to help maintain relevant context within token limits while preserving important information from long conversations.
Core Interface ¶
The Memory interface defines the contract that all memory strategies must implement:
- AddMessage: Add a new message to memory
- GetContext: Retrieve relevant context for the current query
- Clear: Remove all messages from memory
- GetStats: Get statistics about memory usage
Available Memory Strategies ¶
## Buffer Memory

Simple buffer with configurable message and token limits:

buffer := memory.NewBufferMemory(&memory.BufferConfig{
	MaxMessages: 100, // Keep the last 100 messages
})
buffer.AddMessage(ctx, message)
msgs, _ := buffer.GetContext(ctx, "current query")
## Sliding Window Memory

Maintains a window of the most recent messages:

window := memory.NewSlidingWindowMemory(50) // Keep the 50 most recent messages
## Summarization Memory

Automatically summarizes older messages to save tokens:

summ := memory.NewSummarizationMemory(&memory.SummarizationConfig{
	RecentWindowSize: 10,        // Recent messages kept verbatim
	SummarizeAfter:   20,        // Trigger summarization after this many messages
	Summarizer:       summarize, // func(ctx context.Context, messages []*Message) (string, error)
})
## Hierarchical Memory

Multi-level memory with different retention policies:

hierarchical := memory.NewHierarchicalMemory(&memory.HierarchicalConfig{
	RecentLimit:    50,   // Max recent messages
	ImportantLimit: 1000, // Max important messages
})
## OS-Inspired Memory

Sophisticated memory management with active, cached, and archived pages:
osMemory := memory.NewOSLikeMemory(&memory.OSLikeConfig{
ActiveLimit: 100,
CacheLimit: 500,
AccessWindow: time.Hour,
})
## Graph-Based Memory

Organizes messages as a graph for better context retrieval:

graphMemory := memory.NewGraphBasedMemory(&memory.GraphConfig{
	TopK:              10,            // Number of messages to retrieve
	RelationExtractor: extractTopics, // func(msg *Message) []string
})
Message Structure ¶
Each message contains:
type Message struct {
ID string // Unique identifier
Role string // "user", "assistant", "system"
Content string // Message content
Timestamp time.Time // When created
Metadata map[string]any // Additional metadata
TokenCount int // Approximate token count
}
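TokenCount is only approximate. A common rough heuristic, used here as an assumption rather than the package's actual counter, is about four characters per token:

```go
// estimateTokens is a naive heuristic: roughly 4 characters per token.
// A real tokenizer should be used when accuracy matters.
func estimateTokens(text string) int {
	n := len(text) / 4
	if n == 0 && len(text) > 0 {
		n = 1 // any non-empty text costs at least one token
	}
	return n
}
```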
Example Usage ¶
## Basic Buffer Memory
import (
	"context"
	"fmt"

	"github.com/smallnest/langgraphgo/memory"
)

ctx := context.Background()
mem := memory.NewBufferMemory(&memory.BufferConfig{MaxMessages: 50})

// Add messages
mem.AddMessage(ctx, memory.NewMessage("user", "Hello!"))
mem.AddMessage(ctx, memory.NewMessage("assistant", "Hi there!"))

// Get context for the next query
msgs, err := mem.GetContext(ctx, "How are you?")
if err != nil {
	return err
}

// Use the messages in an LLM prompt
var prompt string
for _, msg := range msgs {
	prompt += fmt.Sprintf("%s: %s\n", msg.Role, msg.Content)
}
## Summarization Memory
// The summarizer is a plain function; in production it would call an LLM
summarize := func(ctx context.Context, messages []*memory.Message) (string, error) {
	// Call your LLM here to condense the messages
	return "summary of earlier conversation", nil
}

mem := memory.NewSummarizationMemory(&memory.SummarizationConfig{
	RecentWindowSize: 10,
	SummarizeAfter:   20,
	Summarizer:       summarize,
})

// Add many messages - older ones will be summarized
for i := 0; i < 100; i++ {
	mem.AddMessage(ctx, &memory.Message{
		Role:    "user",
		Content: fmt.Sprintf("Message %d", i),
	})
}

// Context will include recent messages plus summaries of older ones
msgs, _ := mem.GetContext(ctx, "latest query")
## Hierarchical Memory
config := &memory.HierarchicalConfig{
	RecentLimit:    20,  // Max recent messages
	ImportantLimit: 200, // Max important messages
	ImportanceScorer: func(msg *memory.Message) float64 {
		if v, ok := msg.Metadata["importance"].(float64); ok {
			return v
		}
		return 0.0
	},
}
mem := memory.NewHierarchicalMemory(config)

// Messages can be marked as important via metadata
mem.AddMessage(ctx, &memory.Message{
	Role:     "user",
	Content:  "Critical information",
	Metadata: map[string]any{"importance": 0.9},
})
Memory Statistics ¶
All implementations provide statistics:
stats, _ := mem.GetStats(ctx)
fmt.Printf("Total messages: %d\n", stats.TotalMessages)
fmt.Printf("Total tokens: %d\n", stats.TotalTokens)
fmt.Printf("Active tokens: %d\n", stats.ActiveTokens)
fmt.Printf("Compression rate: %.2f\n", stats.CompressionRate)
Integration with LangChain ¶
The package includes adapters for langchaingo compatibility:

// Wrap a langchaingo buffer memory behind the ChatMemory interface
langchainMem := memory.NewConversationBufferMemory()
Compression Strategies ¶
For long conversations, the package provides compression:

compressor := memory.NewCompressionMemory(&memory.CompressionConfig{
	CompressionTrigger: 50,           // Messages before compression
	Compressor:         myCompressor, // func(ctx context.Context, messages []*Message) (*CompressedBlock, error)
})
Retrieval-Augmented Memory ¶
Retrieve semantically relevant past messages using embeddings:

retriever := memory.NewRetrievalMemory(&memory.RetrievalConfig{
	TopK:          5,         // Number of messages to retrieve
	EmbeddingFunc: embedText, // func(ctx context.Context, text string) ([]float64, error)
})
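Under the hood, retrieval boils down to ranking stored embeddings by similarity to the query embedding. A self-contained sketch of cosine-similarity top-k selection, illustrating the mechanism rather than the package's internals:

```go
import (
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK returns the indices of the k stored vectors most similar to query,
// most similar first.
func topK(query []float64, stored [][]float64, k int) []int {
	idx := make([]int, len(stored))
	for i := range idx {
		idx[i] = i
	}
	sort.SliceStable(idx, func(x, y int) bool {
		return cosine(query, stored[idx[x]]) > cosine(query, stored[idx[y]])
	})
	if k > len(idx) {
		k = len(idx)
	}
	return idx[:k]
}
```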
Choosing a Strategy ¶
- Sequential: Short conversations where the complete history must be kept
- Buffer: Simple conversations with a fixed context size
- Sliding Window: Bounded context limited to the most recent messages
- Summarization: Long conversations where condensed older context suffices
- Compression: Very long conversations that need periodic consolidation
- Hierarchical: Applications with different retention needs per message importance
- OS-Inspired: Performance-sensitive applications with uneven access patterns
- Graph-Based: When semantic relationships between messages matter
- Retrieval: When relevant messages must be found by content similarity
Thread Safety ¶
All memory implementations are thread-safe and can be used concurrently from multiple goroutines. They use internal mutexes or atomic operations for synchronization.
Custom Memory Strategies ¶
Implement the Memory interface for custom strategies:
type CustomMemory struct {
// Custom fields
}
func (m *CustomMemory) AddMessage(ctx context.Context, msg *memory.Message) error {
// Custom implementation
return nil
}
func (m *CustomMemory) GetContext(ctx context.Context, query string) ([]*memory.Message, error) {
// Custom retrieval logic
return nil, nil
}
func (m *CustomMemory) Clear(ctx context.Context) error {
// Clear memory
return nil
}
func (m *CustomMemory) GetStats(ctx context.Context) (*memory.Stats, error) {
// Return statistics
return nil, nil
}
Best Practices ¶
- Choose an appropriate strategy for your use case
- Monitor memory usage with GetStats()
- Set reasonable limits to prevent memory bloat
- Use metadata to mark important messages
- Consider token costs when using LLM-based summarization
- Test with realistic conversation lengths
- Clear memory between unrelated conversations
Index ¶
- type BufferConfig
- type BufferMemory
- func (b *BufferMemory) AddMessage(ctx context.Context, msg *Message) error
- func (b *BufferMemory) Clear(ctx context.Context) error
- func (b *BufferMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (b *BufferMemory) GetMessages() []*Message
- func (b *BufferMemory) GetStats(ctx context.Context) (*Stats, error)
- func (b *BufferMemory) LoadMessages(messages []*Message)
- type ChatMemory
- type ChatMessageHistory
- func (h *ChatMessageHistory) AddAIMessage(ctx context.Context, message string) error
- func (h *ChatMessageHistory) AddMessage(ctx context.Context, message llms.ChatMessage) error
- func (h *ChatMessageHistory) AddUserMessage(ctx context.Context, message string) error
- func (h *ChatMessageHistory) Clear(ctx context.Context) error
- func (h *ChatMessageHistory) GetHistory() schema.ChatMessageHistory
- func (h *ChatMessageHistory) Messages(ctx context.Context) ([]llms.ChatMessage, error)
- func (h *ChatMessageHistory) SetMessages(ctx context.Context, messages []llms.ChatMessage) error
- type CompressedBlock
- type CompressionConfig
- type CompressionMemory
- func (c *CompressionMemory) AddMessage(ctx context.Context, msg *Message) error
- func (c *CompressionMemory) Clear(ctx context.Context) error
- func (c *CompressionMemory) ForceCompression(ctx context.Context) error
- func (c *CompressionMemory) ForceConsolidation(ctx context.Context) error
- func (c *CompressionMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (c *CompressionMemory) GetStats(ctx context.Context) (*Stats, error)
- type GraphBasedMemory
- func (g *GraphBasedMemory) AddMessage(ctx context.Context, msg *Message) error
- func (g *GraphBasedMemory) Clear(ctx context.Context) error
- func (g *GraphBasedMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (g *GraphBasedMemory) GetRelationships() map[string]int
- func (g *GraphBasedMemory) GetStats(ctx context.Context) (*Stats, error)
- type GraphConfig
- type GraphNode
- type HierarchicalConfig
- type HierarchicalMemory
- func (h *HierarchicalMemory) AddMessage(ctx context.Context, msg *Message) error
- func (h *HierarchicalMemory) Clear(ctx context.Context) error
- func (h *HierarchicalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (h *HierarchicalMemory) GetStats(ctx context.Context) (*Stats, error)
- type LangChainMemory
- func NewConversationBufferMemory(options ...langchainmemory.ConversationBufferOption) *LangChainMemory
- func NewConversationTokenBufferMemory(llm llms.Model, maxTokenLimit int, ...) *LangChainMemory
- func NewConversationWindowBufferMemory(windowSize int, options ...langchainmemory.ConversationBufferOption) *LangChainMemory
- func NewLangChainMemory(buffer schema.Memory) *LangChainMemory
- func (m *LangChainMemory) Clear(ctx context.Context) error
- func (m *LangChainMemory) GetMessages(ctx context.Context) ([]llms.ChatMessage, error)
- func (m *LangChainMemory) LoadMemoryVariables(ctx context.Context, inputs map[string]any) (map[string]any, error)
- func (m *LangChainMemory) SaveContext(ctx context.Context, inputValues map[string]any, outputValues map[string]any) error
- type Memory
- type MemoryPage
- type Message
- type OSLikeConfig
- type OSLikeMemory
- func (o *OSLikeMemory) AddMessage(ctx context.Context, msg *Message) error
- func (o *OSLikeMemory) Clear(ctx context.Context) error
- func (o *OSLikeMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (o *OSLikeMemory) GetMemoryInfo() map[string]any
- func (o *OSLikeMemory) GetStats(ctx context.Context) (*Stats, error)
- type RetrievalConfig
- type RetrievalMemory
- func (r *RetrievalMemory) AddMessage(ctx context.Context, msg *Message) error
- func (r *RetrievalMemory) Clear(ctx context.Context) error
- func (r *RetrievalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (r *RetrievalMemory) GetStats(ctx context.Context) (*Stats, error)
- func (r *RetrievalMemory) SetTopK(k int)
- type SequentialMemory
- func (s *SequentialMemory) AddMessage(ctx context.Context, msg *Message) error
- func (s *SequentialMemory) Clear(ctx context.Context) error
- func (s *SequentialMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (s *SequentialMemory) GetStats(ctx context.Context) (*Stats, error)
- type SlidingWindowMemory
- func (s *SlidingWindowMemory) AddMessage(ctx context.Context, msg *Message) error
- func (s *SlidingWindowMemory) Clear(ctx context.Context) error
- func (s *SlidingWindowMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (s *SlidingWindowMemory) GetStats(ctx context.Context) (*Stats, error)
- func (s *SlidingWindowMemory) GetWindowSize() int
- func (s *SlidingWindowMemory) SetWindowSize(size int)
- type Stats
- type SummarizationConfig
- type SummarizationMemory
- func (s *SummarizationMemory) AddMessage(ctx context.Context, msg *Message) error
- func (s *SummarizationMemory) Clear(ctx context.Context) error
- func (s *SummarizationMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
- func (s *SummarizationMemory) GetStats(ctx context.Context) (*Stats, error)
- type TimeRange
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BufferConfig ¶
type BufferConfig struct {
MaxMessages int // Maximum number of messages (0 = unlimited)
MaxTokens int // Maximum total tokens (0 = unlimited)
AutoSummarize bool // Enable automatic summarization
Summarizer func(ctx context.Context, messages []*Message) (string, error)
}
BufferConfig holds configuration for buffer memory
type BufferMemory ¶
type BufferMemory struct {
// Optional summarizer
Summarizer func(ctx context.Context, messages []*Message) (string, error)
// contains filtered or unexported fields
}
BufferMemory is a simple buffer-based memory implementation, similar to LangChain's ConversationBufferMemory. It combines a sliding window with optional summarization.
func NewBufferMemory ¶
func NewBufferMemory(config *BufferConfig) *BufferMemory
NewBufferMemory creates a new buffer memory
func (*BufferMemory) AddMessage ¶
func (b *BufferMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message to the buffer
func (*BufferMemory) Clear ¶
func (b *BufferMemory) Clear(ctx context.Context) error
Clear removes all messages
func (*BufferMemory) GetContext ¶
func (b *BufferMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns all messages in the buffer
func (*BufferMemory) GetMessages ¶
func (b *BufferMemory) GetMessages() []*Message
GetMessages returns a copy of all messages
func (*BufferMemory) GetStats ¶
func (b *BufferMemory) GetStats(ctx context.Context) (*Stats, error)
GetStats returns buffer memory statistics
func (*BufferMemory) LoadMessages ¶
func (b *BufferMemory) LoadMessages(messages []*Message)
LoadMessages loads messages into the buffer (replaces existing)
type ChatMemory ¶
type ChatMemory interface {
// SaveContext saves the context from this conversation to buffer
SaveContext(ctx context.Context, inputValues map[string]any, outputValues map[string]any) error
// LoadMemoryVariables loads memory variables
LoadMemoryVariables(ctx context.Context, inputs map[string]any) (map[string]any, error)
// Clear clears memory contents
Clear(ctx context.Context) error
// GetMessages returns all messages in memory
GetMessages(ctx context.Context) ([]llms.ChatMessage, error)
}
ChatMemory is the interface for conversation memory management in langgraphgo
type ChatMessageHistory ¶
type ChatMessageHistory struct {
// contains filtered or unexported fields
}
ChatMessageHistory provides direct access to chat message history
func NewChatMessageHistory ¶
func NewChatMessageHistory(options ...langchainmemory.ChatMessageHistoryOption) *ChatMessageHistory
NewChatMessageHistory creates a new chat message history
func (*ChatMessageHistory) AddAIMessage ¶
func (h *ChatMessageHistory) AddAIMessage(ctx context.Context, message string) error
AddAIMessage adds an AI message to the history
func (*ChatMessageHistory) AddMessage ¶
func (h *ChatMessageHistory) AddMessage(ctx context.Context, message llms.ChatMessage) error
AddMessage adds a message to the history
func (*ChatMessageHistory) AddUserMessage ¶
func (h *ChatMessageHistory) AddUserMessage(ctx context.Context, message string) error
AddUserMessage adds a user message to the history
func (*ChatMessageHistory) Clear ¶
func (h *ChatMessageHistory) Clear(ctx context.Context) error
Clear clears all messages from the history
func (*ChatMessageHistory) GetHistory ¶
func (h *ChatMessageHistory) GetHistory() schema.ChatMessageHistory
GetHistory returns the underlying langchaingo ChatMessageHistory
func (*ChatMessageHistory) Messages ¶
func (h *ChatMessageHistory) Messages(ctx context.Context) ([]llms.ChatMessage, error)
Messages returns all messages in the history
func (*ChatMessageHistory) SetMessages ¶
func (h *ChatMessageHistory) SetMessages(ctx context.Context, messages []llms.ChatMessage) error
SetMessages sets the messages in the history
type CompressedBlock ¶
type CompressedBlock struct {
ID string // Unique block ID
Summary string // Compressed summary
OriginalCount int // Number of original messages
OriginalTokens int // Original token count
CompressedTokens int // Compressed token count
TimeRange TimeRange // Time range of messages
Topics []string // Main topics covered
}
CompressedBlock represents a compressed group of messages
type CompressionConfig ¶
type CompressionConfig struct {
CompressionTrigger int // Messages before compression
ConsolidateAfter time.Duration // Duration before consolidation
Compressor func(ctx context.Context, messages []*Message) (*CompressedBlock, error)
Consolidator func(ctx context.Context, blocks []*CompressedBlock) (*CompressedBlock, error)
}
CompressionConfig holds configuration for compression memory
type CompressionMemory ¶
type CompressionMemory struct {
// Compressor compresses a group of messages into a single block
Compressor func(ctx context.Context, messages []*Message) (*CompressedBlock, error)
// Consolidator merges multiple compressed blocks
Consolidator func(ctx context.Context, blocks []*CompressedBlock) (*CompressedBlock, error)
// contains filtered or unexported fields
}
CompressionMemory periodically compresses and consolidates memory. Pros: maintains long-term context efficiently, removes redundancy. Cons: compression requires LLM calls and may lose granular details.
func NewCompressionMemory ¶
func NewCompressionMemory(config *CompressionConfig) *CompressionMemory
NewCompressionMemory creates a new compression-based memory strategy
func (*CompressionMemory) AddMessage ¶
func (c *CompressionMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message and triggers compression if needed
func (*CompressionMemory) Clear ¶
func (c *CompressionMemory) Clear(ctx context.Context) error
Clear removes all memory
func (*CompressionMemory) ForceCompression ¶
func (c *CompressionMemory) ForceCompression(ctx context.Context) error
ForceCompression manually triggers compression
func (*CompressionMemory) ForceConsolidation ¶
func (c *CompressionMemory) ForceConsolidation(ctx context.Context) error
ForceConsolidation manually triggers consolidation
func (*CompressionMemory) GetContext ¶
func (c *CompressionMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns compressed blocks and recent messages
type GraphBasedMemory ¶
type GraphBasedMemory struct {
// RelationExtractor identifies relationships between messages
// In production, this could use NER or topic modeling
RelationExtractor func(msg *Message) []string
// contains filtered or unexported fields
}
GraphBasedMemory models conversations as knowledge graphs. Pros: captures relationships between topics, better context understanding. Cons: more complex, requires relationship tracking.
func NewGraphBasedMemory ¶
func NewGraphBasedMemory(config *GraphConfig) *GraphBasedMemory
NewGraphBasedMemory creates a new graph-based memory strategy
func (*GraphBasedMemory) AddMessage ¶
func (g *GraphBasedMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message to the graph and establishes connections
func (*GraphBasedMemory) Clear ¶
func (g *GraphBasedMemory) Clear(ctx context.Context) error
Clear removes all nodes and relationships
func (*GraphBasedMemory) GetContext ¶
func (g *GraphBasedMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext retrieves messages via graph traversal, using breadth-first search starting from the most recent messages
func (*GraphBasedMemory) GetRelationships ¶
func (g *GraphBasedMemory) GetRelationships() map[string]int
GetRelationships returns all topics and their associated message counts
type GraphConfig ¶
type GraphConfig struct {
TopK int // Number of messages to retrieve
RelationExtractor func(msg *Message) []string // Custom relation extractor
}
GraphConfig holds configuration for graph-based memory
type GraphNode ¶
type GraphNode struct {
Message *Message
Connections []string // IDs of connected messages
Weight float64 // Importance/relevance weight
}
GraphNode represents a node in the conversation graph
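The breadth-first traversal used by GetContext can be sketched over this node shape. A self-contained version, with an ID field added for the sketch and the Message/Weight fields dropped (illustrative, not the package's implementation):

```go
// GraphNode mirrors the package type, abridged and given an explicit ID.
type GraphNode struct {
	ID          string
	Connections []string // IDs of connected nodes
}

// bfs returns up to limit node IDs reachable from start, nearest first.
func bfs(nodes map[string]*GraphNode, start string, limit int) []string {
	visited := map[string]bool{start: true}
	queue := []string{start}
	var out []string
	for len(queue) > 0 && len(out) < limit {
		id := queue[0]
		queue = queue[1:]
		out = append(out, id)
		node, ok := nodes[id]
		if !ok {
			continue
		}
		for _, next := range node.Connections {
			if !visited[next] {
				visited[next] = true
				queue = append(queue, next)
			}
		}
	}
	return out
}
```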
type HierarchicalConfig ¶
type HierarchicalConfig struct {
RecentLimit int // Max recent messages
ImportantLimit int // Max important messages
ImportanceScorer func(msg *Message) float64 // Custom importance scorer
}
HierarchicalConfig holds configuration for hierarchical memory
type HierarchicalMemory ¶
type HierarchicalMemory struct {
// ImportanceScorer determines message importance (0.0 to 1.0)
// Higher scores = more important
ImportanceScorer func(msg *Message) float64
// contains filtered or unexported fields
}
HierarchicalMemory organizes messages in layers based on importance and recency. Pros: balances recent context with important historical information. Cons: more complex management, requires importance scoring.
func NewHierarchicalMemory ¶
func NewHierarchicalMemory(config *HierarchicalConfig) *HierarchicalMemory
NewHierarchicalMemory creates a new hierarchical memory strategy
func (*HierarchicalMemory) AddMessage ¶
func (h *HierarchicalMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message and organizes it into appropriate layer
func (*HierarchicalMemory) Clear ¶
func (h *HierarchicalMemory) Clear(ctx context.Context) error
Clear removes all messages from all layers
func (*HierarchicalMemory) GetContext ¶
func (h *HierarchicalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns messages from all layers, prioritized by importance
type LangChainMemory ¶
type LangChainMemory struct {
// contains filtered or unexported fields
}
LangChainMemory adapts langchaingo's memory implementations to our ChatMemory interface
func NewConversationBufferMemory ¶
func NewConversationBufferMemory(options ...langchainmemory.ConversationBufferOption) *LangChainMemory
NewConversationBufferMemory creates a new conversation buffer memory with default settings
func NewConversationTokenBufferMemory ¶
func NewConversationTokenBufferMemory(llm llms.Model, maxTokenLimit int, options ...langchainmemory.ConversationBufferOption) *LangChainMemory
NewConversationTokenBufferMemory creates a new conversation token buffer memory that keeps conversation history within a token limit
func NewConversationWindowBufferMemory ¶
func NewConversationWindowBufferMemory(windowSize int, options ...langchainmemory.ConversationBufferOption) *LangChainMemory
NewConversationWindowBufferMemory creates a new conversation window buffer memory that keeps only the last N conversation turns
func NewLangChainMemory ¶
func NewLangChainMemory(buffer schema.Memory) *LangChainMemory
NewLangChainMemory creates a new adapter for langchaingo memory. Supports ConversationBuffer, ConversationWindowBuffer, ConversationTokenBuffer, etc.
func (*LangChainMemory) Clear ¶
func (m *LangChainMemory) Clear(ctx context.Context) error
Clear clears memory contents
func (*LangChainMemory) GetMessages ¶
func (m *LangChainMemory) GetMessages(ctx context.Context) ([]llms.ChatMessage, error)
GetMessages returns all messages in memory. This is a convenience method that extracts messages from the memory buffer.
func (*LangChainMemory) LoadMemoryVariables ¶
func (m *LangChainMemory) LoadMemoryVariables(ctx context.Context, inputs map[string]any) (map[string]any, error)
LoadMemoryVariables loads memory variables
func (*LangChainMemory) SaveContext ¶
func (m *LangChainMemory) SaveContext(ctx context.Context, inputValues map[string]any, outputValues map[string]any) error
SaveContext saves the context from this conversation to buffer
type Memory ¶
type Memory interface {
// AddMessage adds a new message to memory
AddMessage(ctx context.Context, msg *Message) error
// GetContext retrieves relevant context for the current conversation
// Returns messages that should be included in the LLM prompt
GetContext(ctx context.Context, query string) ([]*Message, error)
// Clear removes all messages from memory
Clear(ctx context.Context) error
// GetStats returns statistics about the current memory state
GetStats(ctx context.Context) (*Stats, error)
}
Memory defines the interface for memory management strategies. All memory strategies must implement these methods.
type MemoryPage ¶
type MemoryPage struct {
ID string
Messages []*Message
LastAccess time.Time
AccessCount int
Priority int // Higher priority = less likely to be evicted
Dirty bool // Has been modified
Size int // Token count
}
MemoryPage represents a page of memory (like OS paging)
type Message ¶
type Message struct {
ID string // Unique identifier
Role string // "user", "assistant", "system"
Content string // Message content
Timestamp time.Time // When the message was created
Metadata map[string]any // Additional metadata
TokenCount int // Approximate token count
}
Message represents a single conversation message
func NewMessage ¶
func NewMessage(role, content string) *Message
NewMessage creates a new message with the given role and content
type OSLikeConfig ¶
type OSLikeConfig struct {
ActiveLimit int // Pages in active memory
CacheLimit int // Pages in cache
AccessWindow time.Duration // Access tracking window
}
OSLikeConfig holds configuration for OS-like memory
type OSLikeMemory ¶
type OSLikeMemory struct {
// contains filtered or unexported fields
}
OSLikeMemory implements OS-inspired memory management with paging and eviction. Pros: sophisticated lifecycle management, optimal memory usage. Cons: complex implementation, management overhead.
func NewOSLikeMemory ¶
func NewOSLikeMemory(config *OSLikeConfig) *OSLikeMemory
NewOSLikeMemory creates a new OS-like memory strategy
func (*OSLikeMemory) AddMessage ¶
func (o *OSLikeMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message using OS-like memory management
func (*OSLikeMemory) Clear ¶
func (o *OSLikeMemory) Clear(ctx context.Context) error
Clear removes all memory
func (*OSLikeMemory) GetContext ¶
func (o *OSLikeMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext retrieves messages from the memory hierarchy
func (*OSLikeMemory) GetMemoryInfo ¶
func (o *OSLikeMemory) GetMemoryInfo() map[string]any
GetMemoryInfo returns detailed information about memory usage
type RetrievalConfig ¶
type RetrievalConfig struct {
TopK int // Number of messages to retrieve
EmbeddingFunc func(ctx context.Context, text string) ([]float64, error) // Custom embedding function
}
RetrievalConfig holds configuration for retrieval-based memory
type RetrievalMemory ¶
type RetrievalMemory struct {
// EmbeddingFunc generates embeddings for text
// In production, this would call an embedding API like OpenAI embeddings
EmbeddingFunc func(ctx context.Context, text string) ([]float64, error)
// contains filtered or unexported fields
}
RetrievalMemory uses vector embeddings to retrieve relevant past messages. Pros: only fetches contextually relevant history, efficient token usage. Cons: requires an embedding model, may miss chronologically important context.
func NewRetrievalMemory ¶
func NewRetrievalMemory(config *RetrievalConfig) *RetrievalMemory
NewRetrievalMemory creates a new retrieval-based memory strategy
func (*RetrievalMemory) AddMessage ¶
func (r *RetrievalMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a message and generates its embedding
func (*RetrievalMemory) Clear ¶
func (r *RetrievalMemory) Clear(ctx context.Context) error
Clear removes all messages and embeddings
func (*RetrievalMemory) GetContext ¶
func (r *RetrievalMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext retrieves the most semantically similar messages to the query
func (*RetrievalMemory) GetStats ¶
func (r *RetrievalMemory) GetStats(ctx context.Context) (*Stats, error)
GetStats returns statistics about retrieval memory
func (*RetrievalMemory) SetTopK ¶
func (r *RetrievalMemory) SetTopK(k int)
SetTopK updates the number of messages to retrieve
type SequentialMemory ¶
type SequentialMemory struct {
// contains filtered or unexported fields
}
SequentialMemory implements the "Keep-It-All" strategy: it stores the complete conversation history in chronological order. Pros: perfect recall of all interactions. Cons: token costs grow unbounded with conversation length.
func NewSequentialMemory ¶
func NewSequentialMemory() *SequentialMemory
NewSequentialMemory creates a new sequential memory strategy
func (*SequentialMemory) AddMessage ¶
func (s *SequentialMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage appends a new message to the conversation history
func (*SequentialMemory) Clear ¶
func (s *SequentialMemory) Clear(ctx context.Context) error
Clear removes all messages from memory
func (*SequentialMemory) GetContext ¶
func (s *SequentialMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns all messages in chronological order. The query parameter is ignored for sequential memory
type SlidingWindowMemory ¶
type SlidingWindowMemory struct {
// contains filtered or unexported fields
}
SlidingWindowMemory maintains only the most recent N messages. Pros: bounded context size, prevents unbounded token growth. Cons: loses older context, may forget important earlier information.
func NewSlidingWindowMemory ¶
func NewSlidingWindowMemory(windowSize int) *SlidingWindowMemory
NewSlidingWindowMemory creates a new sliding window memory strategy. windowSize determines how many recent messages to keep.
func (*SlidingWindowMemory) AddMessage ¶
func (s *SlidingWindowMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a new message, removing oldest if window is full
func (*SlidingWindowMemory) Clear ¶
func (s *SlidingWindowMemory) Clear(ctx context.Context) error
Clear removes all messages from memory
func (*SlidingWindowMemory) GetContext ¶
func (s *SlidingWindowMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns messages within the sliding window
func (*SlidingWindowMemory) GetStats ¶
func (s *SlidingWindowMemory) GetStats(ctx context.Context) (*Stats, error)
GetStats returns statistics about the sliding window memory
func (*SlidingWindowMemory) GetWindowSize ¶
func (s *SlidingWindowMemory) GetWindowSize() int
GetWindowSize returns the current window size
func (*SlidingWindowMemory) SetWindowSize ¶
func (s *SlidingWindowMemory) SetWindowSize(size int)
SetWindowSize updates the window size. If the new size is smaller than the current message count, the oldest messages are removed.
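The trim step when shrinking a window amounts to keeping a suffix of the message slice. A self-contained sketch (illustrative, not the package's code):

```go
// trimOldest keeps only the newest size elements of msgs,
// dropping the oldest when the slice is too long.
func trimOldest(msgs []string, size int) []string {
	if size < 0 {
		size = 0
	}
	if len(msgs) <= size {
		return msgs
	}
	return msgs[len(msgs)-size:]
}
```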
type Stats ¶
type Stats struct {
TotalMessages int // Total number of messages stored
TotalTokens int // Total tokens across all messages
ActiveMessages int // Messages currently in active context
ActiveTokens int // Tokens in active context
CompressionRate float64 // Compression rate (if applicable)
}
Stats contains statistics about memory usage
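CompressionRate can be read as compressed tokens over original tokens; a small sketch of that calculation (an assumption about the field's exact definition, shown for orientation):

```go
// compressionRate returns compressedTokens / originalTokens,
// or 1.0 when nothing has been compressed yet.
func compressionRate(originalTokens, compressedTokens int) float64 {
	if originalTokens == 0 {
		return 1.0
	}
	return float64(compressedTokens) / float64(originalTokens)
}
```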
type SummarizationConfig ¶
type SummarizationConfig struct {
RecentWindowSize int // Number of recent messages to keep
SummarizeAfter int // Trigger summarization after this many messages
Summarizer func(ctx context.Context, messages []*Message) (string, error) // Custom summarizer
}
SummarizationConfig holds configuration for summarization memory
type SummarizationMemory ¶
type SummarizationMemory struct {
// Summarizer is a function that takes messages and returns a summary
// In production, this would call an LLM
Summarizer func(ctx context.Context, messages []*Message) (string, error)
// contains filtered or unexported fields
}
SummarizationMemory condenses older messages into summaries. Pros: maintains historical context while reducing token count. Cons: may lose specific details in summarization.
func NewSummarizationMemory ¶
func NewSummarizationMemory(config *SummarizationConfig) *SummarizationMemory
NewSummarizationMemory creates a new summarization-based memory strategy
func (*SummarizationMemory) AddMessage ¶
func (s *SummarizationMemory) AddMessage(ctx context.Context, msg *Message) error
AddMessage adds a new message and triggers summarization if needed
func (*SummarizationMemory) Clear ¶
func (s *SummarizationMemory) Clear(ctx context.Context) error
Clear removes all messages and summaries
func (*SummarizationMemory) GetContext ¶
func (s *SummarizationMemory) GetContext(ctx context.Context, query string) ([]*Message, error)
GetContext returns summaries plus recent messages