sfcache

Published: Dec 5, 2025 License: Apache-2.0 Imports: 9 Imported by: 0

README

sfcache - Stupid Fast Cache



Stupid fast in-memory Go cache with optional L2 persistence layer.

Designed for persistently caching API requests in an unreliable environment, this cache has something for everyone.

Features

  • Faster than a bat out of hell - Best-in-class latency and throughput
  • S3-FIFO eviction - Better hit-rates than LRU (learn more)
  • L2 Persistence (optional) - Bring your own database or use a built-in backend (local filesystem, Cloud Datastore)
  • Per-item TTL - Optional expiration
  • Graceful degradation - Cache works even if persistence fails
  • Zero allocation reads - minimal GC thrashing
  • Type safe - Go generics

Usage

As a stupid-fast in-memory cache:

import "github.com/codeGROOVE-dev/sfcache"

// strings as keys, ints as values
cache := sfcache.Memory[string, int]()
cache.Set("answer", 42)
val, found := cache.Get("answer")

or with local file persistence to survive restarts:

import (
  "github.com/codeGROOVE-dev/sfcache"
  "github.com/codeGROOVE-dev/sfcache/pkg/persist/localfs"
)

p, _ := localfs.New[string, User]("myapp", "")
cache, _ := sfcache.Persistent[string, User](ctx, p)

cache.SetAsync(ctx, "user:123", user) // Don't wait for the key to persist
cache.Store.Len(ctx)                  // Access persistence layer directly

or as a persistent cache suitable for Cloud Run or local development (uses Cloud Datastore if available):

import "github.com/codeGROOVE-dev/sfcache/pkg/persist/cloudrun"

p, _ := cloudrun.New[string, User](ctx, "myapp")
cache, _ := sfcache.Persistent[string, User](ctx, p)

Performance against the Competition

sfcache prioritizes high hit-rates and low read latency, but it performs quite well all around.

Here are the results from an M4 MacBook Pro - run make bench to see the results for yourself:

>>> TestLatency: Single-Threaded Latency (go test -run=TestLatency -v)

### Single-Threaded Latency (sorted by Get)

| Cache         | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
|---------------|-----------|----------|------------|-----------|----------|------------|
| sfcache       |       8.0 |        0 |          0 |      21.0 |        0 |          0 |
| lru           |      23.0 |        0 |          0 |      22.0 |        0 |          0 |
| ristretto     |      32.0 |       14 |          0 |      65.0 |      118 |          3 |
| otter         |      35.0 |        0 |          0 |     131.0 |       48 |          1 |
| freecache     |      62.0 |        8 |          1 |      49.0 |        0 |          0 |
| tinylfu       |      75.0 |        0 |          0 |      97.0 |      168 |          3 |

- 🔥 Get: 188% better than next best (lru)
- 🔥 Set: 4.8% better than next best (lru)

>>> TestZipfThroughput1: Zipf Throughput (1 thread) (go test -run=TestZipfThroughput1 -v)

### Zipf Throughput (alpha=0.99, 75% read / 25% write): 1 thread

| Cache         | QPS        |
|---------------|------------|
| sfcache       |   96.94M   |
| lru           |   46.24M   |
| tinylfu       |   19.21M   |
| freecache     |   15.02M   |
| otter         |   12.95M   |
| ristretto     |   11.34M   |

- 🔥 Throughput: 110% faster than next best (lru)

>>> TestZipfThroughput16: Zipf Throughput (16 threads) (go test -run=TestZipfThroughput16 -v)

### Zipf Throughput (alpha=0.99, 75% read / 25% write): 16 threads

| Cache         | QPS        |
|---------------|------------|
| sfcache       |   43.27M   |
| freecache     |   15.08M   |
| ristretto     |   14.20M   |
| otter         |   10.85M   |
| lru           |    5.64M   |
| tinylfu       |    4.25M   |

- 🔥 Throughput: 187% faster than next best (freecache)

>>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)

### Meta Trace Hit Rate (10M ops from Meta KVCache)

| Cache         | 50K cache | 100K cache |
|---------------|-----------|------------|
| sfcache       |   68.19%  |   76.03%   |
| otter         |   41.31%  |   55.41%   |
| ristretto     |   40.33%  |   48.91%   |
| tinylfu       |   53.70%  |   54.79%   |
| freecache     |   56.86%  |   65.52%   |
| lru           |   65.21%  |   74.22%   |

- 🔥 Meta trace: 2.4% better than next best (lru)

>>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)

### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)

| Cache         | Size=1% | Size=2.5% | Size=5% |
|---------------|---------|-----------|---------|
| sfcache       |  64.19% |    69.23% |  72.50% |
| otter         |  61.64% |    67.94% |  71.38% |
| ristretto     |  34.88% |    41.25% |  46.62% |
| tinylfu       |  63.83% |    68.25% |  71.56% |
| freecache     |  56.65% |    57.75% |  63.39% |
| lru           |  57.33% |    64.55% |  69.92% |

- 🔥 Hit rate: 1.1% better than next best (tinylfu)

Cache performance is a game of balancing trade-offs. There will be workloads where other cache implementations are better, but nobody blends speed and persistence like we do.

License

Apache 2.0

Documentation

Overview

Package sfcache provides a high-performance cache with S3-FIFO eviction and optional persistence.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type MemoryCache

type MemoryCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MemoryCache is a fast in-memory cache without persistence. All operations are context-free and never return errors.

func Memory

func Memory[K comparable, V any](opts ...Option) *MemoryCache[K, V]

Memory creates a new memory-only cache.

Example:

cache := sfcache.Memory[string, User](
    sfcache.WithSize(10000),
    sfcache.WithTTL(time.Hour),
)
defer cache.Close()

cache.Set("user:123", user)              // uses default TTL
cache.Set("user:123", user, time.Hour)   // explicit TTL
user, ok := cache.Get("user:123")

func (*MemoryCache[K, V]) Close

func (*MemoryCache[K, V]) Close()

Close releases resources held by the cache. For MemoryCache this is a no-op, but provided for API consistency.

func (*MemoryCache[K, V]) Delete

func (c *MemoryCache[K, V]) Delete(key K)

Delete removes a value from the cache.

func (*MemoryCache[K, V]) Flush

func (c *MemoryCache[K, V]) Flush() int

Flush removes all entries from the cache. Returns the number of entries removed.

func (*MemoryCache[K, V]) Get

func (c *MemoryCache[K, V]) Get(key K) (V, bool)

Get retrieves a value from the cache. Returns the value and true if found, or the zero value and false if not found.

func (*MemoryCache[K, V]) GetOrSet

func (c *MemoryCache[K, V]) GetOrSet(key K, loader func() V, ttl ...time.Duration) V

GetOrSet retrieves a value from the cache, or computes and stores it if not found. The loader function is only called if the key is not in the cache. If no TTL is provided, the default TTL is used. This is optimized to perform a single shard lookup and lock acquisition.
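The miss-only loader contract can be sketched with a plain map - getOrSet below is a hypothetical stand-in, not the sfcache implementation, which does this under a single shard lock:

```go
package main

import "fmt"

// getOrSet mimics the documented GetOrSet contract on a plain map:
// the loader runs only when the key is absent.
func getOrSet(m map[string]int, key string, loader func() int) int {
	if v, ok := m[key]; ok {
		return v // hit: loader is skipped
	}
	v := loader()
	m[key] = v
	return v
}

func main() {
	m := map[string]int{}
	calls := 0
	loader := func() int { calls++; return 42 }

	fmt.Println(getOrSet(m, "answer", loader)) // miss: loader runs
	fmt.Println(getOrSet(m, "answer", loader)) // hit: cached value returned
	fmt.Println(calls)                         // loader ran exactly once
}
```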

func (*MemoryCache[K, V]) Len

func (c *MemoryCache[K, V]) Len() int

Len returns the number of entries in the cache.

func (*MemoryCache[K, V]) Set

func (c *MemoryCache[K, V]) Set(key K, value V, ttl ...time.Duration)

Set stores a value in the cache. If no TTL is provided, the default TTL is used. If no default TTL is configured, the entry never expires.

func (*MemoryCache[K, V]) SetIfAbsent

func (c *MemoryCache[K, V]) SetIfAbsent(key K, value V, ttl ...time.Duration) (V, bool)

SetIfAbsent stores a value only if the key is not already in the cache. Returns the existing value and true if found, or the new value and false if inserted. This is optimized to perform a single shard lookup and lock acquisition.

type Option

type Option func(*config)

Option configures a MemoryCache or PersistentCache.

func WithGhostRatio

func WithGhostRatio(r float64) Option

WithGhostRatio sets the ratio of the ghost queue to the total cache size. Default is 1.0 (100%).

func WithSize

func WithSize(n int) Option

WithSize sets the maximum number of entries in the memory cache.

func WithSmallRatio

func WithSmallRatio(r float64) Option

WithSmallRatio sets the ratio of the small queue to the total cache size. Default is 0.1 (10%).

func WithTTL

func WithTTL(d time.Duration) Option

WithTTL sets the default TTL for cache entries. Entries without an explicit TTL will use this value.

func WithWarmup

func WithWarmup(n int) Option

WithWarmup enables cache warmup by loading the N most recently updated entries from persistence on startup. Only applies to PersistentCache. By default, warmup is disabled (0). Set to a positive number to load that many entries.

type PersistentCache

type PersistentCache[K comparable, V any] struct {
	// Store provides direct access to the persistence layer.
	// Use this for persistence-specific operations:
	//   cache.Store.Len(ctx)
	//   cache.Store.Flush(ctx)
	//   cache.Store.Cleanup(ctx, maxAge)
	Store persist.Store[K, V]
	// contains filtered or unexported fields
}

PersistentCache is a cache backed by both memory and persistent storage. Core operations require context for I/O, while memory operations like Len() do not.

func Persistent

func Persistent[K comparable, V any](ctx context.Context, p persist.Store[K, V], opts ...Option) (*PersistentCache[K, V], error)

Persistent creates a cache with persistence backing.

Example:

store, _ := localfs.New[string, User]("myapp", "")
cache, err := sfcache.Persistent[string, User](ctx, store,
    sfcache.WithSize(10000),
    sfcache.WithTTL(time.Hour),
    sfcache.WithWarmup(1000),
)
if err != nil {
    return err
}
defer cache.Close()

cache.Set(ctx, "user:123", user)              // uses default TTL
cache.Set(ctx, "user:123", user, time.Hour)   // explicit TTL
user, ok, err := cache.Get(ctx, "user:123")
storeCount, _ := cache.Store.Len(ctx)

func (*PersistentCache[K, V]) Close

func (c *PersistentCache[K, V]) Close() error

Close releases resources held by the cache.

func (*PersistentCache[K, V]) Delete

func (c *PersistentCache[K, V]) Delete(ctx context.Context, key K) error

Delete removes a value from the cache. The value is always removed from memory. Returns an error if persistence deletion fails.

func (*PersistentCache[K, V]) Flush

func (c *PersistentCache[K, V]) Flush(ctx context.Context) (int, error)

Flush removes all entries from the cache, including persistent storage. Returns the total number of entries removed from memory and persistence.

func (*PersistentCache[K, V]) Get

func (c *PersistentCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)

Get retrieves a value from the cache. It first checks the memory cache, then falls back to persistence.

func (*PersistentCache[K, V]) GetOrSet

func (c *PersistentCache[K, V]) GetOrSet(ctx context.Context, key K, loader func(context.Context) (V, error), ttl ...time.Duration) (V, error)

GetOrSet retrieves a value from the cache, or computes and stores it if not found. The loader function is only called if the key is not in the cache. If no TTL is provided, the default TTL is used. If the loader returns an error, it is propagated.

func (*PersistentCache[K, V]) Len

func (c *PersistentCache[K, V]) Len() int

Len returns the number of entries in the memory cache. For persistence entry count, use cache.Store.Len(ctx).

func (*PersistentCache[K, V]) Set

func (c *PersistentCache[K, V]) Set(ctx context.Context, key K, value V, ttl ...time.Duration) error

Set stores a value in the cache. If no TTL is provided, the default TTL is used. The value is ALWAYS stored in memory, even if persistence fails. Returns an error if the key violates persistence constraints or if persistence fails.
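The memory-write-survives-persistence-failure contract can be sketched with a hypothetical store interface and a failing backend (store, failingStore, and set are illustrative names, not part of sfcache):

```go
package main

import (
	"errors"
	"fmt"
)

// store is a hypothetical persistence interface for this sketch.
type store interface{ Set(key string, value int) error }

// failingStore simulates an unavailable persistence layer.
type failingStore struct{}

func (failingStore) Set(string, int) error { return errors.New("disk offline") }

// set sketches the documented contract: the memory write always
// happens, and a persistence error is surfaced without rolling it back.
func set(mem map[string]int, s store, key string, value int) error {
	mem[key] = value // memory write happens unconditionally
	return s.Set(key, value)
}

func main() {
	mem := map[string]int{}
	err := set(mem, failingStore{}, "answer", 42)
	fmt.Println(err)           // persistence failed...
	fmt.Println(mem["answer"]) // ...but the value is still cached: 42
}
```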

func (*PersistentCache[K, V]) SetAsync

func (c *PersistentCache[K, V]) SetAsync(ctx context.Context, key K, value V, ttl ...time.Duration) error

SetAsync stores a value in the cache, handling persistence asynchronously. If no TTL is provided, the default TTL is used. Key validation and in-memory caching happen synchronously. Persistence errors are logged but not returned (fire-and-forget). Returns an error only for validation failures (e.g., invalid key format).

Directories

  • pkg
  • persist (module)
  • store/localfs (module)
  • store/null (module)
