cache

package module
v1.0.2
Published: Feb 18, 2026 License: MIT Imports: 8 Imported by: 0

README

Go Cache Library


A framework-agnostic, store-driven cache library for Go that unifies local and cloud cache stores behind a single generic interface. Built with Go 1.23+ generics for type safety and performance.

Features

  • Type-Safe API — Generic Cache[T] interface eliminates casting and provides compile-time type safety
  • Pluggable Stores — Unified interface for memory (Ristretto), Redis, SQLite, and PostgreSQL backends
  • Multi-Layer Caching — Chain multiple cache layers (L1/L2/L3) with automatic backfill
  • Batch Operations — Efficient GetMany/SetMany/DeleteMany reduce round-trips by 8-10x
  • Loader Pattern — GetOrLoad with single-flight deduplication prevents thundering herd (99.9% load reduction)
  • Flexible Serialization — Pluggable codecs (JSON, MessagePack, Gob) with custom codec support
  • Performance Optimized — Sub-microsecond memory cache, ~40μs Redis, ~8μs SQLite
  • Context-Aware — All operations respect context cancellation and timeouts
  • Metrics Built-In — Optional hit/miss tracking with WithMetrics()
  • Production Ready — Comprehensive tests, benchmarks, and battle-tested design patterns

Quick Start

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/kaptinlin/cache"
    "github.com/kaptinlin/cache/store/memory"
)

type User struct {
    ID   string
    Name string
    Age  int
}

func main() {
    // Create in-memory store with 100MB capacity
    store, _ := memory.New(memory.Config{MaxCost: 100 << 20})
    defer store.Close()

    // Create type-safe cache with 5-minute default TTL
    userCache := cache.New[User](store, cache.WithTTL(5*time.Minute))
    defer userCache.Close()

    ctx := context.Background()

    // Set a user (Ristretto applies writes asynchronously)
    user := User{ID: "123", Name: "Alice", Age: 30}
    userCache.Set(ctx, "user:123", user, 0)
    time.Sleep(10 * time.Millisecond) // wait for the async write to land

    // Get the user
    retrieved, _ := userCache.Get(ctx, "user:123")
    fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
    // Output: Alice is 30 years old
}

Installation

go get github.com/kaptinlin/cache
Optional Dependencies

Install store-specific dependencies as needed:

# Redis support
go get github.com/redis/rueidis

# PostgreSQL support
go get github.com/jackc/pgx/v5

# SQLite support (pure Go)
go get modernc.org/sqlite

# MessagePack codec (recommended for production)
go get github.com/vmihailenco/msgpack/v5

Usage Examples

Basic In-Memory Cache
store, _ := memory.New(memory.Config{MaxCost: 100 << 20}) // 100MB
cache := cache.New[User](store, cache.WithTTL(5*time.Minute))
defer cache.Close()

ctx := context.Background()
user := User{ID: "123", Name: "Alice", Age: 30}

// Set with default TTL (5 minutes)
cache.Set(ctx, "user:123", user, 0)

// Set with custom TTL (1 minute)
cache.Set(ctx, "user:456", user, 1*time.Minute)

// Wait briefly for Ristretto's async write, then read back
time.Sleep(10 * time.Millisecond)

// Get from cache
retrieved, err := cache.Get(ctx, "user:123")
if err != nil {
    // Handle cache miss or error
}
Redis Cache
import (
    "github.com/kaptinlin/cache/store/redis"
    "github.com/redis/rueidis"
)

// Create Redis client
client, _ := rueidis.NewClient(rueidis.ClientOption{
    InitAddress: []string{"localhost:6379"},
})

// Create Redis store
store := redis.New(client)
cache := cache.New[User](store, cache.WithTTL(10*time.Minute))
defer cache.Close()

// Use like any other cache
cache.Set(ctx, "user:123", user, 0)
SQLite Cache (Persistent Local Cache)
import "github.com/kaptinlin/cache/store/sqlite"

// Create SQLite store with file-based database
store, _ := sqlite.New(sqlite.Config{
    Path: "/var/cache/myapp.db",
})
defer store.Close()

cache := cache.New[User](store, cache.WithTTL(1*time.Hour))
defer cache.Close()

// Data persists across restarts
cache.Set(ctx, "user:123", user, 0)
PostgreSQL Cache (Distributed Persistent Cache)
import (
    "github.com/kaptinlin/cache/store/postgres"
    "github.com/jackc/pgx/v5/pgxpool"
)

// Create PostgreSQL connection pool
pool, _ := pgxpool.New(ctx, "postgres://user:pass@localhost/cache")
defer pool.Close()

// Create PostgreSQL store
store, _ := postgres.New(postgres.Config{
    Pool:      pool,
    TableName: "cache_entries",
})
defer store.Close()

cache := cache.New[User](store, cache.WithTTL(24*time.Hour))
defer cache.Close()
Multi-Layer Cache (L1 + L2)
// L1: Fast in-memory cache (1MB, 1-minute TTL)
l1Store, _ := memory.New(memory.Config{MaxCost: 1 << 20})
l1Cache := cache.New[User](l1Store, cache.WithTTL(1*time.Minute))

// L2: Distributed Redis cache (10-minute TTL)
l2Store := redis.New(redisClient)
l2Cache := cache.New[User](l2Store, cache.WithTTL(10*time.Minute))

// Create chain cache (L1 -> L2)
chainCache := cache.NewChain[User](l1Cache, l2Cache)
defer chainCache.Close()

// Get checks L1 first, then L2, and backfills L1 on L2 hit
user, _ := chainCache.Get(ctx, "user:123")
Batch Operations
batchCache := cache.NewBatch[User](store, cache.WithTTL(5*time.Minute))
defer batchCache.Close()

// Set multiple users at once (8-10x faster than individual Sets)
users := map[string]User{
    "user:100": {ID: "100", Name: "Alice", Age: 30},
    "user:101": {ID: "101", Name: "Bob", Age: 25},
    "user:102": {ID: "102", Name: "Charlie", Age: 35},
}
batchCache.SetMany(ctx, users, 0)

// Get multiple users at once
keys := []string{"user:100", "user:101", "user:102"}
retrieved, _ := batchCache.GetMany(ctx, keys)

// Delete multiple keys at once
batchCache.DeleteMany(ctx, keys)
Loader Pattern (Cache-Aside)
loaderCache := cache.NewLoader[User](store, cache.WithTTL(5*time.Minute))
defer loaderCache.Close()

// Define loader function (e.g., database query)
loadUserFromDB := func(ctx context.Context, key string) (User, error) {
    // Query database, call API, etc.
    return db.GetUser(ctx, key)
}

// GetOrLoad checks cache first, then calls loader on miss
// Uses single-flight deduplication to prevent thundering herd
user, err := loaderCache.GetOrLoad(ctx, "user:123", loadUserFromDB)
Custom Codec (MessagePack)
import "github.com/kaptinlin/cache/codec"

// Use MessagePack for 15-63% faster serialization than JSON
cache := cache.New[User](
    store,
    cache.WithCodec(codec.MessagePackCodec{}),
    cache.WithTTL(5*time.Minute),
)
defer cache.Close()
Metrics Tracking
metrics := &cache.Metrics{}

cache := cache.New[User](
    store,
    cache.WithMetrics(metrics),
    cache.WithTTL(5*time.Minute),
)
defer cache.Close()

// Perform operations
cache.Set(ctx, "key1", user, 0)
cache.Get(ctx, "key1") // Hit
cache.Get(ctx, "key2") // Miss

// Check metrics
fmt.Printf("Hits: %d, Misses: %d, Hit Rate: %.2f%%\n",
    metrics.Hits.Load(),
    metrics.Misses.Load(),
    metrics.HitRate()*100,
)

Store Comparison

Store Latency Persistence Distribution Use Case
MemoryStore < 100 ns No Single-server Hot data, session cache, rate limiting
RedisStore ~40 μs Optional Multi-server Shared cache, distributed systems
SQLiteStore ~8 μs Yes Single-server Local persistent cache, edge deployments
PostgresStore ~10 ms Yes Multi-server Distributed persistent cache, queryable cache
Store Features
Feature Memory Redis SQLite Postgres
Type-safe API
Batch operations
TTL expiration
Automatic eviction
Concurrent reads
Concurrent writes ⚠️
Survives restarts ⚠️
Client-side caching
Auto-pipelining

⚠️ = Optional or limited support

Codec Comparison

Codec Encode Decode Round-Trip Size Use Case
MessagePack 371 ns 496 ns 800 ns Compact Production (recommended)
JSON 436 ns 1,348 ns 1,818 ns Medium Development, debugging
Gob 4,955 ns 44,384 ns 70,878 ns Large Go-specific complex types
Codec Features
Feature MessagePack JSON Gob
Performance ⭐⭐⭐⭐⭐ ⭐⭐⭐⭐ ⭐⭐
Human-readable
Cross-language
Binary format
Compact size
Complex types ⚠️

Recommendation: Use MessagePack for production workloads, JSON for development/debugging.

Performance Benchmarks

Store Performance
MemoryStore:
  Get:    < 100 ns/op,  0 allocs/op
  Set:    < 150 ns/op,  2 allocs/op

RedisStore:
  Get:    39,854 ns/op,  1 allocs/op
  Set:    56,367 ns/op,  2 allocs/op
  Client-side caching: 172.7 ns/op (230x faster)

SQLiteStore:
  Get:    7,802 ns/op,  19 allocs/op
  Set:    14,646 ns/op, 11 allocs/op

PostgresStore:
  Get:    < 10 ms/op
  Set:    < 10 ms/op
Codec Performance
MessagePack:
  Encode: 371 ns/op   (15% faster than JSON)
  Decode: 496 ns/op   (63% faster than JSON)

JSON:
  Encode: 436 ns/op
  Decode: 1,348 ns/op

Gob:
  Encode: 4,955 ns/op  (11x slower than JSON)
  Decode: 44,384 ns/op (33x slower than JSON)
Batch Operations
RedisStore GetMany (10 keys):  ~50 μs  (8x faster than 10 individual Gets)
RedisStore SetMany (10 keys):  ~70 μs  (8x faster than 10 individual Sets)
Multi-Layer Caching
L1 Hit Rate: 90% (served from memory at ~100 ns)
L2 Hit Rate: 9%  (served from Redis at ~40 μs)
L3 Miss Rate: 1% (load from source at ~100 ms)

Network Call Reduction: 90% (L1 cache eliminates Redis calls)

See BENCHMARKS.md for comprehensive performance analysis.

API Reference

Core Interfaces
// Cache provides type-safe cache operations
type Cache[T any] interface {
    Get(ctx context.Context, key string) (T, error)
    Set(ctx context.Context, key string, value T, ttl time.Duration) error
    Delete(ctx context.Context, key string) error
    Clear(ctx context.Context) error
    Close() error
}

// BatchCache adds batch operations
type BatchCache[T any] interface {
    Cache[T]
    GetMany(ctx context.Context, keys []string) (map[string]T, error)
    SetMany(ctx context.Context, items map[string]T, ttl time.Duration) error
    DeleteMany(ctx context.Context, keys []string) error
}

// LoaderCache adds loader pattern with single-flight deduplication
type LoaderCache[T any] interface {
    BatchCache[T]
    GetOrLoad(ctx context.Context, key string, loader LoaderFunc[T]) (T, error)
}
Configuration Options
// WithTTL sets default TTL for cache entries
cache.WithTTL(5 * time.Minute)

// WithCodec sets custom serialization codec
cache.WithCodec(codec.MessagePackCodec{})

// WithMetrics enables metrics collection
cache.WithMetrics(&cache.Metrics{})

// WithSingleFlight enables single-flight deduplication for GetOrLoad (default: disabled)
cache.WithSingleFlight()
Store Constructors
// In-memory store (Ristretto)
memory.New(memory.Config{MaxCost: 100 << 20})

// Redis store (Rueidis)
redis.New(client, redis.WithKeyPrefix("myapp:"))

// SQLite store
sqlite.New(sqlite.Config{Path: "/var/cache/app.db"})

// PostgreSQL store
postgres.New(postgres.Config{Pool: pool, TableName: "cache"})

Full API documentation: pkg.go.dev/github.com/kaptinlin/cache

Architecture

cache/
├── cache.go           # Generic Cache[T] interface and implementation
├── store.go           # Store interface (byte-oriented)
├── codec.go           # Codec interface for serialization
├── chain.go           # Multi-layer cache implementation
├── loader.go          # Loader pattern with single-flight
├── codec/
│   ├── json.go        # JSON codec (default)
│   ├── msgpack.go     # MessagePack codec (recommended)
│   └── gob.go         # Gob codec (Go-specific)
└── store/
    ├── memory/        # MemoryStore using Ristretto
    ├── redis/         # RedisStore using Rueidis
    ├── sqlite/        # SQLiteStore implementation
    └── postgres/      # PostgresStore implementation
Design Patterns
  • Store Abstraction — Pluggable backends implement the Store interface
  • Generic Cache Wrapper — Cache[T] wraps a Store with a type-safe API
  • Functional Options — Configuration via WithTTL(), WithCodec(), etc.
  • Single-Flight Deduplication — GetOrLoad prevents thundering herd
  • Multi-Layer Caching — ChainCache supports L1/L2/L3 with automatic backfill

Testing

# Run all tests with race detection
go test -race ./...

# Run specific package tests
go test -race ./store/memory/
go test -race ./store/redis/

# Run benchmarks
go test -bench=. -benchmem ./...

# Run with coverage
go test -cover ./...

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Development Setup
# Clone repository
git clone https://github.com/kaptinlin/cache.git
cd cache

# Install dependencies
go mod download

# Run tests
go test -race ./...

# Run linting
golangci-lint run

License

MIT License - see LICENSE for details.

Acknowledgments

This library draws inspiration from:

  • Ristretto — High-performance in-memory cache
  • Rueidis — Modern Redis client with auto-pipelining
  • Otter — Generic cache interface design
  • golang-lru — Simple LRU cache patterns

Support

Documentation

Overview

Example

Example demonstrates basic in-memory cache usage with type-safe operations.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	// Create an in-memory store with 10MB capacity
	store, err := memory.New(memory.Config{
		MaxCost: 10 << 20, // 10MB
	})
	if err != nil {
		panic(err)
	}
	defer store.Close()

	// Create a type-safe cache for ExampleUser objects with 5-minute default TTL
	userCache := cache.New[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer userCache.Close()

	ctx := context.Background()

	// Set a user in the cache
	user := ExampleUser{ID: "123", Name: "Alice", Age: 30}
	if err = userCache.Set(ctx, "user:123", user, 0); err != nil {
		panic(err)
	}

	// Wait briefly for async write to complete (Ristretto processes writes asynchronously)
	time.Sleep(10 * time.Millisecond)

	// Get the user from the cache
	retrieved, err := userCache.Get(ctx, "user:123")
	if err != nil {
		panic(err)
	}

	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output:

Alice is 30 years old

Index

Examples

Constants

This section is empty.

Variables

var (
	// ErrNotFound is returned when a cache key is not found.
	ErrNotFound = errors.New("cache: key not found")

	// ErrInvalidKey is returned when a cache key is invalid.
	ErrInvalidKey = errors.New("cache: invalid key")

	// ErrInvalidValue is returned when a cache value is invalid.
	ErrInvalidValue = errors.New("cache: invalid value")

	// ErrClosed is returned when an operation is attempted on a closed cache.
	ErrClosed = errors.New("cache: cache closed")

	// ErrTypeAssertion is returned when a type assertion fails in loader cache.
	ErrTypeAssertion = errors.New("cache: type assertion failed")
)

Sentinel errors for common cache operations.

Reference implementations: - .references/gocache: NotFound error with Is/Unwrap methods - .references/hot: Simple error propagation with fmt.Errorf wrapping

Functions

This section is empty.

Types

type BatchCache

type BatchCache[T any] interface {
	Cache[T]

	// GetMany retrieves multiple values from the cache by keys.
	// Returns a map containing only the keys that were found.
	// Missing keys are not included in the result (not an error).
	GetMany(ctx context.Context, keys []string) (map[string]T, error)

	// SetMany stores multiple values in the cache with the specified TTL.
	// Uses all-or-nothing semantics: if any item fails, the entire operation fails.
	// If ttl is 0, uses the cache's default TTL for all items.
	SetMany(ctx context.Context, items map[string]T, ttl time.Duration) error

	// DeleteMany removes multiple values from the cache.
	// Idempotent - deleting non-existent keys is not an error.
	DeleteMany(ctx context.Context, keys []string) error
}

BatchCache extends Cache with batch operations for improved performance.

GetMany returns partial results (found keys only) with nil error. SetMany uses all-or-nothing semantics. DeleteMany is idempotent.

Reference: .references/hot (GetMany/SetMany with partial results) Reference: .references/otter (BulkGet/BulkLoad operations)

func NewBatch

func NewBatch[T any](store Store, opts ...Option) BatchCache[T]

NewBatch creates a new BatchCache[T] instance wrapping the provided Store.

The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.

Example

ExampleNewBatch demonstrates batch operations for improved performance.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create a batch cache
	batchCache := cache.NewBatch[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer batchCache.Close()

	ctx := context.Background()

	// Set multiple users at once
	users := map[string]ExampleUser{
		"user:100": {ID: "100", Name: "Alice", Age: 30},
		"user:101": {ID: "101", Name: "Bob", Age: 25},
		"user:102": {ID: "102", Name: "Charlie", Age: 35},
	}
	if err := batchCache.SetMany(ctx, users, 0); err != nil {
		panic(err)
	}

	time.Sleep(10 * time.Millisecond)
	// Get multiple users at once
	keys := []string{"user:100", "user:101", "user:102"}
	retrieved, err := batchCache.GetMany(ctx, keys)
	if err != nil {
		panic(err)
	}

	fmt.Printf("Retrieved %d users\n", len(retrieved))
}
Output:

Retrieved 3 users

type Cache

type Cache[T any] interface {
	// Get retrieves a value from the cache by key.
	Get(ctx context.Context, key string) (T, error)

	// Set stores a value in the cache with the specified TTL.
	// If ttl is 0, uses the cache's default TTL.
	Set(ctx context.Context, key string, value T, ttl time.Duration) error

	// Delete removes a value from the cache.
	// Idempotent - deleting a non-existent key returns nil.
	Delete(ctx context.Context, key string) error

	// Clear removes all entries from the cache.
	Clear(ctx context.Context) error

	// Close releases resources associated with the cache.
	// Idempotent - safe to call multiple times.
	// After Close, all other operations return ErrClosed.
	Close() error
}

Cache provides type-safe cache operations with automatic serialization.

TTL of 0 uses the cache's default TTL. Negative TTL values return ErrInvalidValue.

Reference: .references/otter/cache.go (generic Cache[K,V] interface) Reference: .references/gocache (CacheInterface[T] with generic type safety)

func New

func New[T any](store Store, opts ...Option) Cache[T]

New creates a new type-safe Cache[T] instance wrapping the provided Store.

The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.

Reference: .references/seaguest-cache/cache.go (New function with options)

Example (WithTTL)

ExampleNew_withTTL demonstrates creating a cache with a default TTL.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create cache with 10-minute default TTL
	cache := cache.New[string](store, cache.WithTTL(10*time.Minute))
	defer cache.Close()

	ctx := context.Background()

	// Set with default TTL (10 minutes)
	_ = cache.Set(ctx, "key1", "value1", 0)

	// Set with custom TTL (1 minute)
	_ = cache.Set(ctx, "key2", "value2", 1*time.Minute)

	time.Sleep(10 * time.Millisecond)
	val, _ := cache.Get(ctx, "key1")
	fmt.Println(val)
}
Output:

value1

type ChainCache

type ChainCache[T any] interface {
	Cache[T]

	// GetLayers returns the cache layers in order (L1, L2, L3, ...).
	GetLayers() []Cache[T]
}

ChainCache implements multi-layer caching with automatic backfill. Tries each cache layer in order (L1 -> L2 -> L3) and backfills upper layers on cache hits in lower layers.

Typical usage: L1 (memory) + L2 (Redis) + L3 (database)

Reference: .references/gocache/lib/cache/chain.go (ChainCache implementation) Reference: .references/hot (loader chain pattern)

func NewChain

func NewChain[T any](layers ...Cache[T]) ChainCache[T]

NewChain creates a new multi-layer cache with automatic backfill.

Layers are tried in order (L1, L2, L3, ...). When a value is found in a lower layer, it is automatically backfilled to all upper layers.

Example:

l1, _ := memory.New(memory.Config{MaxCost: 10 << 20})
l2 := redis.New(redisClient)
cache := cache.NewChain[User](
    cache.New[User](l1, cache.WithTTL(1*time.Minute)),
    cache.New[User](l2, cache.WithTTL(10*time.Minute)),
)
Example

ExampleNewChain demonstrates multi-layer caching with L1 (memory) and L2 (memory) stores. In production, L2 would typically be Redis or another distributed cache.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	// L1: Fast in-memory cache with 1MB capacity and 1-minute TTL
	l1Store, _ := memory.New(memory.Config{MaxCost: 1 << 20})
	l1Cache := cache.New[ExampleUser](l1Store, cache.WithTTL(1*time.Minute))

	// L2: Larger in-memory cache with 10MB capacity and 10-minute TTL
	l2Store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	l2Cache := cache.New[ExampleUser](l2Store, cache.WithTTL(10*time.Minute))

	// Create chain cache (L1 -> L2)
	chainCache := cache.NewChain[ExampleUser](l1Cache, l2Cache)
	defer chainCache.Close()

	ctx := context.Background()

	// Set a user (stored in both L1 and L2)
	user := ExampleUser{ID: "456", Name: "Bob", Age: 25}
	_ = chainCache.Set(ctx, "user:456", user, 0)

	time.Sleep(10 * time.Millisecond)
	// Get will try L1 first, then L2, and backfill L1 on L2 hit
	retrieved, _ := chainCache.Get(ctx, "user:456")
	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output:

Bob is 25 years old

type Codec

type Codec interface {
	Encode(v any) ([]byte, error)
	Decode(data []byte, v any) error
}

Codec handles serialization/deserialization between types and bytes.
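Any type with these two methods can be passed to WithCodec. As an illustration (not part of this library), a hypothetical codec that prefixes JSON with a version byte to allow future format migrations:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Codec mirrors the two-method interface documented above.
type Codec interface {
	Encode(v any) ([]byte, error)
	Decode(data []byte, v any) error
}

// prefixCodec is a hypothetical versioned-JSON codec.
type prefixCodec struct{}

func (prefixCodec) Encode(v any) ([]byte, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	return append([]byte{1}, b...), nil // version-1 prefix
}

func (prefixCodec) Decode(data []byte, v any) error {
	if len(data) == 0 || data[0] != 1 {
		return fmt.Errorf("prefixCodec: unknown format version")
	}
	return json.Unmarshal(data[1:], v)
}

func main() {
	var c Codec = prefixCodec{}
	raw, _ := c.Encode(map[string]int{"age": 30})

	var out map[string]int
	_ = c.Decode(raw, &out)
	fmt.Println(out["age"]) // Prints: 30
}
```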

type Config

type Config struct {
	// Store is the underlying storage backend (REQUIRED).
	Store Store

	// Codec handles serialization/deserialization between T and []byte.
	// Default: JSONCodec
	Codec Codec

	// DefaultTTL is the time-to-live for cache entries when Set() is called with ttl=0.
	// Default: 0 (no expiration)
	DefaultTTL time.Duration

	// SingleFlight enables deduplication of concurrent GetOrLoad calls for the same key.
	// Default: false
	SingleFlight bool

	// Metrics enables collection of cache statistics (hits, misses, errors).
	// Default: nil (disabled)
	Metrics *Metrics

	// OnError is called when non-critical errors occur.
	// Default: nil (errors ignored)
	OnError func(error)
}

Config holds the configuration for a Cache instance. Constructed via New() with functional options. All fields have sensible defaults.

Reference: .references/otter/options.go (Options struct with defaults) Reference: .references/seaguest-cache/option.go (Options with functional pattern)

type LoaderCache

type LoaderCache[T any] interface {
	BatchCache[T]

	// GetOrLoad retrieves a value from the cache, or loads it using the loader function.
	// Uses single-flight deduplication: concurrent requests for the same key result in
	// only one loader call, with all waiters receiving the same result.
	//
	// On successful load, stores the value in cache with the cache's default TTL.
	// On loader error, does not cache the error (no negative caching by default).
	GetOrLoad(ctx context.Context, key string, loader LoaderFunc[T]) (T, error)
}

LoaderCache extends BatchCache with the loader pattern for cache-aside operations. Automatically loads and caches values on cache miss with single-flight deduplication to prevent thundering herd.

Reference: .references/hot (loader pattern with chain support) Reference: .references/otter (Loader interface with refresh semantics) Reference: .references/rueidis (single-flight pattern for deduplication)

func NewLoader

func NewLoader[T any](store Store, opts ...Option) LoaderCache[T]

NewLoader creates a new LoaderCache[T] instance wrapping the provided Store.

The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.

Reference: .references/seaguest-cache/cache.go (New with singleflight.Group)

Example

ExampleNewLoader demonstrates the loader pattern with automatic cache population.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create a loader cache with 5-minute default TTL
	loaderCache := cache.NewLoader[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer loaderCache.Close()

	ctx := context.Background()

	// Simulate a database lookup function
	loadUserFromDB := func(ctx context.Context, key string) (ExampleUser, error) {
		// In production, this would query a database
		return ExampleUser{ID: "789", Name: "Charlie", Age: 35}, nil
	}

	// GetOrLoad will check cache first, then call loader on miss
	user, err := loaderCache.GetOrLoad(ctx, "user:789", loadUserFromDB)
	if err != nil {
		panic(err)
	}

	fmt.Printf("%s is %d years old\n", user.Name, user.Age)
}
Output:

Charlie is 35 years old

type LoaderFunc

type LoaderFunc[T any] func(ctx context.Context, key string) (T, error)

LoaderFunc loads a value for a cache miss.

Must return (value, nil) on success or (zero, error) on failure. Must respect context cancellation and return ctx.Err(). Must not update the cache directly (causes deadlock).

Reference: .references/hot/loader.go (Loader function type) Reference: .references/otter/loader.go (Loader interface with Load/Reload)

type Metrics

type Metrics struct {
	Hits      atomic.Int64
	Misses    atomic.Int64
	Errors    atomic.Int64
	Evictions atomic.Int64
}

Metrics tracks cache operation statistics.

func (*Metrics) HitRate

func (m *Metrics) HitRate() float64

HitRate returns the cache hit rate (hits / total requests).

type Option

type Option func(*Config)

Option is a function that modifies a Config.

Reference: .references/seaguest-cache/option.go (Option function pattern)

func WithCodec

func WithCodec(codec Codec) Option

WithCodec sets the codec for serialization/deserialization.

Reference: .references/otter/options.go (field-based configuration)

Example

ExampleWithCodec demonstrates using a custom codec for serialization.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/codec"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create cache with MessagePack codec for efficient binary serialization
	cache := cache.New[ExampleUser](
		store,
		cache.WithCodec(codec.MessagePackCodec{}),
		cache.WithTTL(5*time.Minute),
	)
	defer cache.Close()

	ctx := context.Background()

	user := ExampleUser{ID: "999", Name: "Dave", Age: 40}
	_ = cache.Set(ctx, "user:999", user, 0)

	time.Sleep(10 * time.Millisecond)
	retrieved, _ := cache.Get(ctx, "user:999")
	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output:

Dave is 40 years old

func WithMetrics

func WithMetrics(metrics *Metrics) Option

WithMetrics enables metrics collection for cache operations.

Reference: .references/ristretto (Metrics with atomic counters) Reference: .references/hot (Prometheus metrics integration)

Example

ExampleWithMetrics demonstrates enabling metrics collection.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create metrics collector
	metrics := &cache.Metrics{}

	// Create cache with metrics enabled
	cache := cache.New[string](
		store,
		cache.WithMetrics(metrics),
		cache.WithTTL(5*time.Minute),
	)
	defer cache.Close()

	ctx := context.Background()

	// Perform some operations
	_ = cache.Set(ctx, "key1", "value1", 0)
	time.Sleep(10 * time.Millisecond)
	_, _ = cache.Get(ctx, "key1") // Hit
	_, _ = cache.Get(ctx, "key2") // Miss

	// Check metrics
	fmt.Printf("Hits: %d, Misses: %d, Hit Rate: %.2f\n",
		metrics.Hits.Load(),
		metrics.Misses.Load(),
		metrics.HitRate(),
	)
}
Output:

Hits: 1, Misses: 1, Hit Rate: 0.50

func WithOnError

func WithOnError(handler func(error)) Option

WithOnError sets the error handler for non-critical errors.

Reference: .references/seaguest-cache/option.go (OnError callback)

func WithSingleFlight

func WithSingleFlight() Option

WithSingleFlight enables single-flight deduplication for GetOrLoad operations. Prevents thundering herd on cache miss by deduplicating concurrent requests.

Reference: .references/rueidis (single-flight implementation) Reference: golang.org/x/sync/singleflight (standard library)
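The mechanics can be sketched in stdlib-only Go (this is a conceptual illustration, not this library's implementation): concurrent callers for a key either start a load or wait on the one already in flight, so N simultaneous misses typically cost one loader call:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// group is a minimal single-flight sketch.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	done chan struct{}
	val  string
}

func (g *group) do(key string, fn func() string) string {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		<-c.done // join the load already in flight
		return c.val
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn()
	close(c.done)

	g.mu.Lock()
	delete(g.calls, key) // later misses trigger a fresh load
	g.mu.Unlock()
	return c.val
}

func main() {
	g := &group{calls: make(map[string]*call)}
	var loads atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.do("user:123", func() string {
				loads.Add(1)                      // count real loader calls
				time.Sleep(20 * time.Millisecond) // simulated slow load
				return "Alice"
			})
		}()
	}
	wg.Wait()
	fmt.Printf("100 concurrent requests, %d loader call(s)\n", loads.Load())
}
```

With all 100 goroutines racing on a 20 ms load, this typically reports a single loader call. Production code should use golang.org/x/sync/singleflight, which also handles errors and shared-result forgetting.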

func WithTTL

func WithTTL(ttl time.Duration) Option

WithTTL sets the default time-to-live for cache entries. This TTL is used when Set() is called with ttl=0.

Reference: .references/hot/config.go (WithTTL method) Reference: .references/otter/options.go (ExpiryCalculator field)

type Store

type Store interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Delete(ctx context.Context, key string) error
	Clear(ctx context.Context) error
	Close() error
}

Store is the low-level storage interface (byte-oriented).

Directories

Path Synopsis
examples
basic command
Package main demonstrates basic in-memory cache usage.
chain command
Package main demonstrates multi-layer caching with ChainCache (L1 memory + L2 memory).
distributed command
Package main demonstrates a Redis-backed distributed cache.
loader command
Package main demonstrates the loader pattern with single-flight deduplication.
store
