Documentation ¶
Overview ¶
Example ¶
Example demonstrates basic in-memory cache usage with type-safe operations.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	// Create an in-memory store with 10MB capacity
	store, err := memory.New(memory.Config{
		MaxCost: 10 << 20, // 10MB
	})
	if err != nil {
		panic(err)
	}
	defer store.Close()

	// Create a type-safe cache for ExampleUser objects with 5-minute default TTL
	userCache := cache.New[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer userCache.Close()

	ctx := context.Background()

	// Set a user in the cache
	user := ExampleUser{ID: "123", Name: "Alice", Age: 30}
	if err = userCache.Set(ctx, "user:123", user, 0); err != nil {
		panic(err)
	}

	// Wait briefly for the async write to complete (Ristretto processes writes asynchronously)
	time.Sleep(10 * time.Millisecond)

	// Get the user from the cache
	retrieved, err := userCache.Get(ctx, "user:123")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output: Alice is 30 years old
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrNotFound is returned when a cache key is not found.
	ErrNotFound = errors.New("cache: key not found")

	// ErrInvalidKey is returned when a cache key is invalid.
	ErrInvalidKey = errors.New("cache: invalid key")

	// ErrInvalidValue is returned when a cache value is invalid.
	ErrInvalidValue = errors.New("cache: invalid value")

	// ErrClosed is returned when an operation is attempted on a closed cache.
	ErrClosed = errors.New("cache: cache closed")

	// ErrTypeAssertion is returned when a type assertion fails in loader cache.
	ErrTypeAssertion = errors.New("cache: type assertion failed")
)
Sentinel errors for common cache operations.
Reference implementations:
- .references/gocache: NotFound error with Is/Unwrap methods
- .references/hot: Simple error propagation with fmt.Errorf wrapping
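Because the sentinel errors may arrive wrapped (e.g. via fmt.Errorf with %w), callers should match them with errors.Is rather than equality. A minimal, self-contained sketch of that pattern, using a local errNotFound stand-in for cache.ErrNotFound (lookup and IsMiss are illustrative names, not this package's API):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for cache.ErrNotFound in this self-contained sketch.
var errNotFound = errors.New("cache: key not found")

// lookup wraps the sentinel the way a store might; errors.Is still matches
// through the %w wrapping.
func lookup(key string) (string, error) {
	if key != "user:123" {
		return "", fmt.Errorf("get %q: %w", key, errNotFound)
	}
	return "Alice", nil
}

// IsMiss reports whether err represents a cache miss rather than a hard failure.
func IsMiss(err error) bool {
	return errors.Is(err, errNotFound)
}

func main() {
	_, err := lookup("user:999")
	fmt.Println("miss:", IsMiss(err))
}
```

The same errors.Is test works against the real cache.ErrNotFound, letting callers treat a miss as a normal control-flow case while propagating other errors.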
Functions ¶
This section is empty.
Types ¶
type BatchCache ¶
type BatchCache[T any] interface {
	Cache[T]

	// GetMany retrieves multiple values from the cache by keys.
	// Returns a map containing only the keys that were found.
	// Missing keys are not included in the result (not an error).
	GetMany(ctx context.Context, keys []string) (map[string]T, error)

	// SetMany stores multiple values in the cache with the specified TTL.
	// Uses all-or-nothing semantics: if any item fails, the entire operation fails.
	// If ttl is 0, uses the cache's default TTL for all items.
	SetMany(ctx context.Context, items map[string]T, ttl time.Duration) error

	// DeleteMany removes multiple values from the cache.
	// Idempotent - deleting non-existent keys is not an error.
	DeleteMany(ctx context.Context, keys []string) error
}
BatchCache extends Cache with batch operations for improved performance.
GetMany returns partial results (found keys only) with nil error. SetMany uses all-or-nothing semantics. DeleteMany is idempotent.
Reference: .references/hot (GetMany/SetMany with partial results)
Reference: .references/otter (BulkGet/BulkLoad operations)
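Since GetMany returns only the found keys with a nil error, callers derive misses by comparing the result against the requested key set, then load just those from the source. A self-contained sketch of that bookkeeping (MissingKeys is an illustrative helper, not part of this package):

```go
package main

import "fmt"

// MissingKeys returns the requested keys absent from a GetMany-style result.
// GetMany reports only found keys with a nil error, so misses are derived by
// set difference against the requested keys, preserving request order.
func MissingKeys(requested []string, found map[string]string) []string {
	var missing []string
	for _, k := range requested {
		if _, ok := found[k]; !ok {
			missing = append(missing, k)
		}
	}
	return missing
}

func main() {
	found := map[string]string{"user:100": "Alice", "user:102": "Charlie"}
	requested := []string{"user:100", "user:101", "user:102"}
	fmt.Println(MissingKeys(requested, found)) // only user:101 was not found
}
```

The loaded misses can then be written back in one SetMany call, keeping the round trips to two regardless of how many keys missed.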
func NewBatch ¶
func NewBatch[T any](store Store, opts ...Option) BatchCache[T]
NewBatch creates a new BatchCache[T] instance wrapping the provided Store.
The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.
Example ¶
ExampleNewBatch demonstrates batch operations for improved performance.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create a batch cache
	batchCache := cache.NewBatch[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer batchCache.Close()

	ctx := context.Background()

	// Set multiple users at once
	users := map[string]ExampleUser{
		"user:100": {ID: "100", Name: "Alice", Age: 30},
		"user:101": {ID: "101", Name: "Bob", Age: 25},
		"user:102": {ID: "102", Name: "Charlie", Age: 35},
	}
	if err := batchCache.SetMany(ctx, users, 0); err != nil {
		panic(err)
	}
	time.Sleep(10 * time.Millisecond)

	// Get multiple users at once
	keys := []string{"user:100", "user:101", "user:102"}
	retrieved, err := batchCache.GetMany(ctx, keys)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Retrieved %d users\n", len(retrieved))
}
Output: Retrieved 3 users
type Cache ¶
type Cache[T any] interface {
	// Get retrieves a value from the cache by key.
	Get(ctx context.Context, key string) (T, error)

	// Set stores a value in the cache with the specified TTL.
	// If ttl is 0, uses the cache's default TTL.
	Set(ctx context.Context, key string, value T, ttl time.Duration) error

	// Delete removes a value from the cache.
	// Idempotent - deleting a non-existent key returns nil.
	Delete(ctx context.Context, key string) error

	// Clear removes all entries from the cache.
	Clear(ctx context.Context) error

	// Close releases resources associated with the cache.
	// Idempotent - safe to call multiple times.
	// After Close, all other operations return ErrClosed.
	Close() error
}
Cache provides type-safe cache operations with automatic serialization.
TTL of 0 uses the cache's default TTL. Negative TTL values return ErrInvalidValue.
Reference: .references/otter/cache.go (generic Cache[K,V] interface)
Reference: .references/gocache (CacheInterface[T] with generic type safety)
func New ¶
func New[T any](store Store, opts ...Option) Cache[T]
New creates a new type-safe Cache[T] instance wrapping the provided Store.
The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.
Reference: .references/seaguest-cache/cache.go (New function with options)
Example (WithTTL) ¶
ExampleNew_withTTL demonstrates creating a cache with a default TTL.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create cache with 10-minute default TTL
	cache := cache.New[string](store, cache.WithTTL(10*time.Minute))
	defer cache.Close()

	ctx := context.Background()

	// Set with default TTL (10 minutes)
	_ = cache.Set(ctx, "key1", "value1", 0)

	// Set with custom TTL (1 minute)
	_ = cache.Set(ctx, "key2", "value2", 1*time.Minute)

	time.Sleep(10 * time.Millisecond)

	val, _ := cache.Get(ctx, "key1")
	fmt.Println(val)
}
Output: value1
type ChainCache ¶
type ChainCache[T any] interface {
	Cache[T]

	// GetLayers returns the cache layers in order (L1, L2, L3, ...).
	GetLayers() []Cache[T]
}
ChainCache implements multi-layer caching with automatic backfill. Tries each cache layer in order (L1 -> L2 -> L3) and backfills upper layers on cache hits in lower layers.
Typical usage: L1 (memory) + L2 (Redis) + L3 (database)
Reference: .references/gocache/lib/cache/chain.go (ChainCache implementation)
Reference: .references/hot (loader chain pattern)
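The try-in-order-then-backfill loop at the heart of a chain cache can be sketched with a map-backed stand-in (layer, chainGet, and errMiss are local illustrative names, not this package's API):

```go
package main

import (
	"errors"
	"fmt"
)

// layer is a minimal stand-in for one cache level in this sketch.
type layer map[string]string

var errMiss = errors.New("miss")

func (l layer) get(key string) (string, error) {
	if v, ok := l[key]; ok {
		return v, nil
	}
	return "", errMiss
}

// chainGet tries each layer in order and, on a hit in a lower layer,
// backfills every layer above it -- the core ChainCache behavior.
func chainGet(layers []layer, key string) (string, error) {
	for i, l := range layers {
		v, err := l.get(key)
		if err != nil {
			continue // miss at this level; fall through to the next
		}
		// Backfill all upper layers that missed, so the next Get hits L1.
		for j := 0; j < i; j++ {
			layers[j][key] = v
		}
		return v, nil
	}
	return "", errMiss
}

func main() {
	l1 := layer{}                    // empty: simulates an L1 miss
	l2 := layer{"user:456": "Bob"}   // populated: simulates an L2 hit
	v, _ := chainGet([]layer{l1, l2}, "user:456")
	fmt.Println(v, "| backfilled into L1:", l1["user:456"])
}
```

Note the trade-off this implies: each layer keeps its own TTL, so a backfilled L1 entry can outlive an evicted L2 entry (short L1 TTLs keep that staleness window small).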
func NewChain ¶
func NewChain[T any](layers ...Cache[T]) ChainCache[T]
NewChain creates a new multi-layer cache with automatic backfill.
Layers are tried in order (L1, L2, L3, ...). When a value is found in a lower layer, it is automatically backfilled to all upper layers.
Example:
l1, _ := memory.New(memory.Config{MaxCost: 10 << 20})
l2 := redis.New(redisClient)
cache := cache.NewChain[User](
	cache.New[User](l1, cache.WithTTL(1*time.Minute)),
	cache.New[User](l2, cache.WithTTL(10*time.Minute)),
)
Example ¶
ExampleNewChain demonstrates multi-layer caching with L1 (memory) and L2 (memory) stores. In production, L2 would typically be Redis or another distributed cache.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	// L1: Fast in-memory cache with 1MB capacity and 1-minute TTL
	l1Store, _ := memory.New(memory.Config{MaxCost: 1 << 20})
	l1Cache := cache.New[ExampleUser](l1Store, cache.WithTTL(1*time.Minute))

	// L2: Larger in-memory cache with 10MB capacity and 10-minute TTL
	l2Store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	l2Cache := cache.New[ExampleUser](l2Store, cache.WithTTL(10*time.Minute))

	// Create chain cache (L1 -> L2)
	chainCache := cache.NewChain[ExampleUser](l1Cache, l2Cache)
	defer chainCache.Close()

	ctx := context.Background()

	// Set a user (stored in both L1 and L2)
	user := ExampleUser{ID: "456", Name: "Bob", Age: 25}
	_ = chainCache.Set(ctx, "user:456", user, 0)
	time.Sleep(10 * time.Millisecond)

	// Get will try L1 first, then L2, and backfill L1 on L2 hit
	retrieved, _ := chainCache.Get(ctx, "user:456")
	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output: Bob is 25 years old
type Config ¶
type Config struct {
	// Store is the underlying storage backend (REQUIRED).
	Store Store

	// Codec handles serialization/deserialization between T and []byte.
	// Default: JSONCodec
	Codec Codec

	// DefaultTTL is the time-to-live for cache entries when Set() is called with ttl=0.
	// Default: 0 (no expiration)
	DefaultTTL time.Duration

	// SingleFlight enables deduplication of concurrent GetOrLoad calls for the same key.
	// Default: false
	SingleFlight bool

	// Metrics enables collection of cache statistics (hits, misses, errors).
	// Default: nil (disabled)
	Metrics *Metrics

	// OnError is called when non-critical errors occur.
	// Default: nil (errors ignored)
	OnError func(error)
}
Config holds the configuration for a Cache instance. Constructed via New() with functional options. All fields have sensible defaults.
Reference: .references/otter/options.go (Options struct with defaults)
Reference: .references/seaguest-cache/option.go (Options with functional pattern)
type LoaderCache ¶
type LoaderCache[T any] interface {
	BatchCache[T]

	// GetOrLoad retrieves a value from the cache, or loads it using the loader function.
	// Uses single-flight deduplication: concurrent requests for the same key result in
	// only one loader call, with all waiters receiving the same result.
	//
	// On successful load, stores the value in cache with the cache's default TTL.
	// On loader error, does not cache the error (no negative caching by default).
	GetOrLoad(ctx context.Context, key string, loader LoaderFunc[T]) (T, error)
}
LoaderCache extends BatchCache with the loader pattern for cache-aside operations. Automatically loads and caches values on cache miss with single-flight deduplication to prevent thundering herd.
Reference: .references/hot (loader pattern with chain support)
Reference: .references/otter (Loader interface with refresh semantics)
Reference: .references/rueidis (single-flight pattern for deduplication)
func NewLoader ¶
func NewLoader[T any](store Store, opts ...Option) LoaderCache[T]
NewLoader creates a new LoaderCache[T] instance wrapping the provided Store.
The store parameter is required. Options can be provided to configure TTL, codec, single-flight, and metrics.
Reference: .references/seaguest-cache/cache.go (New with singleflight.Group)
Example ¶
ExampleNewLoader demonstrates the loader pattern with automatic cache population.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create a loader cache with 5-minute default TTL
	loaderCache := cache.NewLoader[ExampleUser](store, cache.WithTTL(5*time.Minute))
	defer loaderCache.Close()

	ctx := context.Background()

	// Simulate a database lookup function
	loadUserFromDB := func(ctx context.Context, key string) (ExampleUser, error) {
		// In production, this would query a database
		return ExampleUser{ID: "789", Name: "Charlie", Age: 35}, nil
	}

	// GetOrLoad will check cache first, then call loader on miss
	user, err := loaderCache.GetOrLoad(ctx, "user:789", loadUserFromDB)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s is %d years old\n", user.Name, user.Age)
}
Output: Charlie is 35 years old
type LoaderFunc ¶
type LoaderFunc[T any] func(ctx context.Context, key string) (T, error)
LoaderFunc loads a value for a cache miss.
Must return (value, nil) on success or (zero, error) on failure. Must respect context cancellation and return ctx.Err(). Must not update the cache directly (causes deadlock).
Reference: .references/hot/loader.go (Loader function type)
Reference: .references/otter/loader.go (Loader interface with Load/Reload)
type Metrics ¶
type Metrics struct {
	Hits      atomic.Int64
	Misses    atomic.Int64
	Errors    atomic.Int64
	Evictions atomic.Int64
}
Metrics tracks cache operation statistics.
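The HitRate method used in the WithMetrics example below is not shown here; a plausible definition is hits / (hits + misses), guarded against zero lookups. A self-contained sketch with a local metrics stand-in (the real cache.Metrics may compute it differently):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// metrics mirrors the hit/miss half of cache.Metrics for this sketch.
// atomic.Int64 lets concurrent cache operations bump counters without a lock.
type metrics struct {
	hits   atomic.Int64
	misses atomic.Int64
}

// hitRate returns hits / (hits + misses), or 0 when no lookups have happened.
func (m *metrics) hitRate() float64 {
	h, mi := m.hits.Load(), m.misses.Load()
	if h+mi == 0 {
		return 0
	}
	return float64(h) / float64(h+mi)
}

func main() {
	var m metrics
	m.hits.Add(1)
	m.misses.Add(1)
	fmt.Printf("%.2f\n", m.hitRate())
}
```

Because the two counters are loaded separately, a rate read concurrently with updates is a close approximation rather than an exact snapshot, which is the usual trade-off for lock-free metrics.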
type Option ¶
type Option func(*Config)
Option is a function that modifies a Config.
Reference: .references/seaguest-cache/option.go (Option function pattern)
func WithCodec ¶
func WithCodec(codec Codec) Option
WithCodec sets the codec for serialization/deserialization.
Reference: .references/otter/options.go (field-based configuration)
Example ¶
ExampleWithCodec demonstrates using a custom codec for serialization.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/codec"
	"github.com/kaptinlin/cache/store/memory"
)

// ExampleUser represents a simple user type for examples.
type ExampleUser struct {
	ID   string
	Name string
	Age  int
}

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create cache with MessagePack codec for efficient binary serialization
	cache := cache.New[ExampleUser](
		store,
		cache.WithCodec(codec.MessagePackCodec{}),
		cache.WithTTL(5*time.Minute),
	)
	defer cache.Close()

	ctx := context.Background()

	user := ExampleUser{ID: "999", Name: "Dave", Age: 40}
	_ = cache.Set(ctx, "user:999", user, 0)
	time.Sleep(10 * time.Millisecond)

	retrieved, _ := cache.Get(ctx, "user:999")
	fmt.Printf("%s is %d years old\n", retrieved.Name, retrieved.Age)
}
Output: Dave is 40 years old
func WithMetrics ¶
func WithMetrics(metrics *Metrics) Option
WithMetrics enables metrics collection for cache operations.
Reference: .references/ristretto (Metrics with atomic counters)
Reference: .references/hot (Prometheus metrics integration)
Example ¶
ExampleWithMetrics demonstrates enabling metrics collection.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/kaptinlin/cache"
	"github.com/kaptinlin/cache/store/memory"
)

func main() {
	store, _ := memory.New(memory.Config{MaxCost: 10 << 20})
	defer store.Close()

	// Create metrics collector
	metrics := &cache.Metrics{}

	// Create cache with metrics enabled
	cache := cache.New[string](
		store,
		cache.WithMetrics(metrics),
		cache.WithTTL(5*time.Minute),
	)
	defer cache.Close()

	ctx := context.Background()

	// Perform some operations
	_ = cache.Set(ctx, "key1", "value1", 0)
	time.Sleep(10 * time.Millisecond)
	_, _ = cache.Get(ctx, "key1") // Hit
	_, _ = cache.Get(ctx, "key2") // Miss

	// Check metrics
	fmt.Printf("Hits: %d, Misses: %d, Hit Rate: %.2f\n",
		metrics.Hits.Load(),
		metrics.Misses.Load(),
		metrics.HitRate(),
	)
}
Output: Hits: 1, Misses: 1, Hit Rate: 0.50
func WithOnError ¶
func WithOnError(fn func(error)) Option
WithOnError sets the error handler for non-critical errors.
Reference: .references/seaguest-cache/option.go (OnError callback)
func WithSingleFlight ¶
func WithSingleFlight() Option
WithSingleFlight enables single-flight deduplication for GetOrLoad operations. Prevents thundering herd on cache miss by deduplicating concurrent requests.
Reference: .references/rueidis (single-flight implementation)
Reference: golang.org/x/sync/singleflight (standard library extension)
type Store ¶
type Store interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Delete(ctx context.Context, key string) error
	Clear(ctx context.Context) error
	Close() error
}
Store is the low-level storage interface (byte-oriented).
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| examples | |
| examples/basic (command) | Package main demonstrates basic in-memory cache usage. |
| examples/chain (command) | Package main demonstrates multi-layer caching with ChainCache (L1 memory + L2 memory). |
| examples/distributed (command) | Package main demonstrates a Redis-backed distributed cache. |
| examples/loader (command) | Package main demonstrates the loader pattern with single-flight deduplication. |
| store | |