bdcache

package module v0.9.1
Published: Dec 2, 2025  License: Apache-2.0  Imports: 9  Imported by: 0

README

bdcache - Big Dumb Cache



Stupid fast in-memory Go cache with optional L2 persistence layer.

Originally designed for persistently caching HTTP fetches in unreliable environments such as Google Cloud Run, this cache has something for everyone.

Features

  • Faster than a bat out of hell - Best-in-class latency and throughput
  • S3-FIFO eviction - Better hit rates than LRU
  • Pluggable persistence - Bring your own database or use one of the built-in backends (localfs, datastore, cloudrun)
  • Per-item TTL - Optional expiration
  • Graceful degradation - Cache keeps working even if persistence fails
  • Zero-allocation reads - Minimal GC thrashing
  • Type safe - Go generics

Usage

As a stupid-fast in-memory cache:

import "github.com/codeGROOVE-dev/bdcache"

// strings as keys, ints as values
cache, _ := bdcache.New[string, int](ctx)
cache.Set(ctx, "answer", 42, 0) // a TTL of 0 means no expiration
val, found, err := cache.Get(ctx, "answer")

With local file persistence to survive restarts:

import (
  "github.com/codeGROOVE-dev/bdcache"
  "github.com/codeGROOVE-dev/bdcache/persist/localfs"
)

p, _ := localfs.New[string, User]("myapp", "")
cache, _ := bdcache.New[string, User](ctx, bdcache.WithPersistence(p))

cache.SetAsync(ctx, "user:42", user, 0) // don't wait for the value to persist

A persistent cache suitable for Cloud Run or local development; uses Cloud Datastore when available:

import "github.com/codeGROOVE-dev/bdcache/persist/cloudrun"

p, _ := cloudrun.New[string, User](ctx, "myapp")
cache, _ := bdcache.New[string, User](ctx, bdcache.WithPersistence(p))

Performance against the Competition

bdcache prioritizes high hit rates and low read latency, but it performs quite well all around.

Here are the results from an M4 MacBook Pro; run make bench to reproduce them yourself:

Hit Rate (Zipf α=0.99, 1M ops, 1M keyspace)

Cache          Size=1%   Size=2.5%   Size=5%
bdcache 🟡     94.46%    94.89%      95.09%
otter 🦦       94.27%    94.68%      95.09%
ristretto ☕    91.63%    92.44%      93.02%
tinylfu 🔬     94.31%    94.87%      95.09%
freecache 🆓   94.03%    94.15%      94.75%
lru 📚         94.10%    94.84%      95.09%

🏆 Hit rate: +0.1% better than 2nd best (tinylfu)

Single-Threaded Latency (sorted by Get)

Cache          Get ns/op   Get B/op   Get allocs   Set ns/op   Set B/op   Set allocs
bdcache 🟡     9.0         0          0            20.0        0          0
lru 📚         22.0        0          0            22.0        0          0
ristretto ☕    31.0        14         0            68.0        120        3
otter 🦦       34.0        0          0            138.0       51         1
freecache 🆓   71.0        15         1            56.0        4          0
tinylfu 🔬     84.0        3          0            105.0       175        3

🏆 Get latency: +144% faster than 2nd best (lru)
🏆 Set latency: +10% faster than 2nd best (lru)

Single-Threaded Throughput (mixed read/write)

Cache          Get QPS   Set QPS
bdcache 🟡     79.25M    43.15M
lru 📚         36.39M    36.88M
ristretto ☕    28.22M    13.46M
otter 🦦       25.46M    7.16M
freecache 🆓   13.30M    16.32M
tinylfu 🔬     11.32M    9.34M

🏆 Get throughput: +118% faster than 2nd best (lru)
🏆 Set throughput: +17% faster than 2nd best (lru)

Concurrent Throughput (mixed read/write): 4 threads

Cache          Get QPS   Set QPS
bdcache 🟡     29.62M    29.92M
ristretto ☕    25.98M    13.12M
freecache 🆓   25.36M    21.84M
otter 🦦       23.14M    3.99M
lru 📚         9.39M     9.64M
tinylfu 🔬     5.75M     4.91M

🏆 Get throughput: +14% faster than 2nd best (ristretto)
🏆 Set throughput: +37% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 8 threads

Cache          Get QPS   Set QPS
bdcache 🟡     22.19M    18.68M
otter 🦦       19.74M    3.03M
ristretto ☕    18.82M    11.39M
freecache 🆓   16.83M    16.30M
lru 📚         7.55M     7.68M
tinylfu 🔬     4.95M     4.15M

🏆 Get throughput: +12% faster than 2nd best (otter)
🏆 Set throughput: +15% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 12 threads

Cache          Get QPS   Set QPS
bdcache 🟡     24.49M    24.03M
ristretto ☕    22.85M    11.48M
otter 🦦       21.77M    2.92M
freecache 🆓   17.45M    16.70M
lru 📚         7.42M     7.62M
tinylfu 🔬     4.55M     3.70M

🏆 Get throughput: +7.2% faster than 2nd best (ristretto)
🏆 Set throughput: +44% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 16 threads

Cache          Get QPS   Set QPS
bdcache 🟡     15.96M    15.55M
otter 🦦       15.64M    2.84M
ristretto ☕    15.59M    12.31M
freecache 🆓   15.24M    14.72M
lru 📚         7.47M     7.42M
tinylfu 🔬     4.71M     3.43M

🏆 Get throughput: +2.0% faster than 2nd best (otter)
🏆 Set throughput: +5.6% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 24 threads

Cache          Get QPS   Set QPS
bdcache 🟡     15.93M    15.41M
otter 🦦       15.81M    2.88M
ristretto ☕    15.57M    13.20M
freecache 🆓   14.58M    14.10M
lru 📚         7.59M     7.80M
tinylfu 🔬     4.96M     3.73M

🏆 Get throughput: +0.7% faster than 2nd best (otter)
🏆 Set throughput: +9.2% faster than 2nd best (freecache)

Concurrent Throughput (mixed read/write): 32 threads

Cache          Get QPS   Set QPS
bdcache 🟡     16.68M    15.38M
otter 🦦       15.87M    2.87M
ristretto ☕    15.55M    13.50M
freecache 🆓   14.64M    13.84M
lru 📚         7.87M     8.01M
tinylfu 🔬     5.12M     3.01M

🏆 Get throughput: +5.1% faster than 2nd best (otter)
🏆 Set throughput: +11% faster than 2nd best (freecache)

NOTE: Performance characteristics usually involve trade-offs. There are almost certainly workloads where other cache implementations are faster, but nobody blends speed and persistence the way bdcache does.

License

Apache 2.0

Documentation

Overview

Package bdcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a generic cache with memory and optional persistence layers.

func New

func New[K comparable, V any](ctx context.Context, options ...Option) (*Cache[K, V], error)

New creates a new cache with the given options.

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close() error

Close releases resources held by the cache.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(ctx context.Context, key K)

Delete removes a value from the cache.

func (*Cache[K, V]) Flush (added in v0.7.0)

func (c *Cache[K, V]) Flush(ctx context.Context) (int, error)

Flush removes all entries from the cache, including persistent storage. Returns the total number of entries removed from memory and persistence.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence if available.
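
A short usage sketch (assuming a *Cache[string, int] named cache as in the README example; expensiveLookup is a hypothetical loader, and log and time come from the standard library):

val, found, err := cache.Get(ctx, "answer")
switch {
case err != nil:
	// errors typically originate in the persistence layer
	log.Printf("cache get: %v", err)
case !found:
	val = expensiveLookup() // hypothetical: compute or fetch the value
	_ = cache.Set(ctx, "answer", val, time.Hour)
}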

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of items in the memory cache.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(ctx context.Context, key K, value V, ttl time.Duration) error

Set stores a value in the cache with an optional TTL. A zero TTL means no expiration (or uses DefaultTTL if configured). The value is ALWAYS stored in memory, even if persistence fails. Returns an error if the key violates persistence constraints or if persistence fails. Even when an error is returned, the value is cached in memory.
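
For instance (a sketch continuing the README's string/User example; user is an assumed User value):

// One-hour TTL. If the persistence layer rejects the key or fails, Set
// returns the error, but the value is already cached in memory.
if err := cache.Set(ctx, "user:42", user, time.Hour); err != nil {
	log.Printf("persist failed; value served from memory only: %v", err)
}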

func (*Cache[K, V]) SetAsync (added in v0.6.0)

func (c *Cache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl time.Duration) error

SetAsync adds or updates a value in the cache with optional TTL, handling persistence asynchronously. Key validation and in-memory caching happen synchronously. Persistence errors are logged but not returned. Returns an error only for validation failures (e.g., invalid key format).
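
By contrast, an error from SetAsync can only mean the key failed validation (sketch, same assumed cache and user value):

if err := cache.SetAsync(ctx, "user:42", user, time.Hour); err != nil {
	// validation failure (e.g. a key the persistence layer cannot store)
	log.Printf("invalid key: %v", err)
}
// persistence runs in the background; its failures are logged, not returned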

type Entry

type Entry[K comparable, V any] struct {
	Key       K
	Value     V
	Expiry    time.Time
	UpdatedAt time.Time
}

Entry represents a cache entry with its metadata.

type Option

type Option func(*Options)

Option is a functional option for configuring a Cache.

func WithDefaultTTL

func WithDefaultTTL(d time.Duration) Option

WithDefaultTTL sets the default TTL for cache items.

func WithMemorySize

func WithMemorySize(n int) Option

WithMemorySize sets the maximum number of items in the memory cache.

func WithPersistence (added in v0.6.0)

func WithPersistence[K comparable, V any](p PersistenceLayer[K, V]) Option

WithPersistence sets the persistence layer for the cache. Pass a PersistenceLayer implementation from packages like:

  • github.com/codeGROOVE-dev/bdcache/persist/localfs
  • github.com/codeGROOVE-dev/bdcache/persist/datastore

Example:

p, _ := localfs.New[string, int]("myapp")
cache, _ := bdcache.New[string, int](ctx, bdcache.WithPersistence(p))

func WithWarmup

func WithWarmup(n int) Option

WithWarmup enables cache warmup by loading the N most recently updated entries from persistence on startup. By default, warmup is disabled (0). Set to a positive number to load that many entries.
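
The options compose; a sketch wiring several together (the localfs arguments follow the README example above, and the sizes and TTLs are arbitrary):

p, err := localfs.New[string, User]("myapp", "")
if err != nil {
	log.Fatal(err)
}
cache, err := bdcache.New[string, User](ctx,
	bdcache.WithPersistence(p),
	bdcache.WithMemorySize(10_000),         // cap the in-memory layer at 10k items
	bdcache.WithDefaultTTL(15*time.Minute), // used when Set is called with a zero TTL
	bdcache.WithWarmup(1_000),              // preload the 1,000 most recently updated entries
)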

type Options

type Options struct {
	Persister   any
	MemorySize  int
	DefaultTTL  time.Duration
	WarmupLimit int
}

Options configures a Cache instance.

type PersistenceLayer

type PersistenceLayer[K comparable, V any] interface {
	// ValidateKey checks if a key is valid for this persistence layer.
	// Returns an error if the key violates constraints.
	ValidateKey(key K) error

	// Load retrieves a value from persistent storage.
	// Returns the value, expiry time, whether it was found, and any error.
	Load(ctx context.Context, key K) (V, time.Time, bool, error)

	// Store saves a value to persistent storage with an expiry time.
	Store(ctx context.Context, key K, value V, expiry time.Time) error

	// Delete removes a value from persistent storage.
	Delete(ctx context.Context, key K) error

	// LoadRecent returns channels for streaming the most recently updated entries from persistent storage.
	// Used for warming up the cache on startup. Returns up to 'limit' most recently updated entries.
	// If limit is 0, returns all entries.
	// The entry channel should be closed when all entries have been sent.
	// If an error occurs, send it on the error channel.
	LoadRecent(ctx context.Context, limit int) (<-chan Entry[K, V], <-chan error)

	// Cleanup removes expired entries from persistent storage.
	// maxAge specifies how old entries must be before deletion.
	// Returns the number of entries deleted and any error.
	Cleanup(ctx context.Context, maxAge time.Duration) (int, error)

	// Location returns the storage location/identifier for a given key.
	// For file-based persistence, this returns the file path.
	// For database persistence, this returns the database key/ID.
	// Useful for testing and debugging to verify where items are stored.
	Location(key K) string

	// Flush removes all entries from persistent storage.
	// Returns the number of entries removed and any error.
	Flush(ctx context.Context) (int, error)

	// Len returns the number of entries in persistent storage.
	Len(ctx context.Context) (int, error)

	// Close releases any resources held by the persistence layer.
	Close() error
}

PersistenceLayer defines the interface for cache persistence backends.
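
To illustrate the contract (not a reference implementation), here is a minimal sketch of an in-memory backend. The mempersist package and memStore type are invented for the example, the Cleanup policy is one plausible reading of the interface comment, and a real backend would order LoadRecent results by UpdatedAt:

package mempersist

import (
	"context"
	"fmt"
	"sync"
	"time"

	"github.com/codeGROOVE-dev/bdcache"
)

// memStore is a toy PersistenceLayer kept entirely in process memory.
type memStore[K comparable, V any] struct {
	mu      sync.RWMutex
	entries map[K]bdcache.Entry[K, V]
}

func (m *memStore[K, V]) ValidateKey(K) error { return nil } // any key is acceptable here

func (m *memStore[K, V]) Load(_ context.Context, key K) (V, time.Time, bool, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	if e, ok := m.entries[key]; ok {
		return e.Value, e.Expiry, true, nil
	}
	var zero V
	return zero, time.Time{}, false, nil
}

func (m *memStore[K, V]) Store(_ context.Context, key K, value V, expiry time.Time) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.entries == nil {
		m.entries = map[K]bdcache.Entry[K, V]{}
	}
	m.entries[key] = bdcache.Entry[K, V]{Key: key, Value: value, Expiry: expiry, UpdatedAt: time.Now()}
	return nil
}

func (m *memStore[K, V]) Delete(_ context.Context, key K) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.entries, key)
	return nil
}

func (m *memStore[K, V]) LoadRecent(_ context.Context, limit int) (<-chan bdcache.Entry[K, V], <-chan error) {
	out, errs := make(chan bdcache.Entry[K, V]), make(chan error, 1)
	go func() {
		defer close(errs)
		defer close(out)
		m.mu.RLock()
		defer m.mu.RUnlock()
		sent := 0
		for _, e := range m.entries { // unsorted; a real backend would order by UpdatedAt
			if limit > 0 && sent >= limit {
				return
			}
			out <- e
			sent++
		}
	}()
	return out, errs
}

func (m *memStore[K, V]) Cleanup(_ context.Context, maxAge time.Duration) (int, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	cutoff := time.Now().Add(-maxAge) // drop entries whose expiry is at least maxAge in the past
	removed := 0
	for k, e := range m.entries {
		if !e.Expiry.IsZero() && e.Expiry.Before(cutoff) {
			delete(m.entries, k)
			removed++
		}
	}
	return removed, nil
}

func (m *memStore[K, V]) Location(key K) string { return fmt.Sprintf("mem:%v", key) }

func (m *memStore[K, V]) Flush(context.Context) (int, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	n := len(m.entries)
	m.entries = map[K]bdcache.Entry[K, V]{}
	return n, nil
}

func (m *memStore[K, V]) Len(context.Context) (int, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return len(m.entries), nil
}

func (m *memStore[K, V]) Close() error { return nil }

An instance of this type can then be handed to bdcache.New via bdcache.WithPersistence, just like the localfs and datastore backends.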

Directories

Path Synopsis
persist
cloudrun module
datastore module
localfs module
