Documentation
Overview ¶
Package lrucache provides a thread-safe, in-memory LRU (Least Recently Used) cache with TTL (Time To Live) support and size-based eviction.
This cache is designed for multi-threaded applications where expensive computations or I/O operations need to be cached. It provides automatic synchronization to ensure that the same value is not computed multiple times concurrently.
Key features:
- Thread-safe: Safe for concurrent access from multiple goroutines
- LRU eviction: Automatically evicts least recently used entries when memory limit is reached
- TTL support: Entries expire after a configurable time-to-live
- Lazy computation: Values are computed on-demand and only once per key
- Concurrent computation prevention: If multiple goroutines request the same key, only one computes the value while others wait for the result
- HTTP middleware: Includes an HTTP handler for caching HTTP responses
Basic usage:
    cache := lrucache.New(1000) // maxmemory in arbitrary units

    value := cache.Get("key", func() (interface{}, time.Duration, int) {
        // This closure is called only if the value is not cached or expired
        result := expensiveComputation()
        return result, 10 * time.Minute, len(result) // value, ttl, size
    })
The size parameter is a user-defined estimate in any consistent unit. It can be:
- Actual bytes: len(string) or len(slice) * sizeof(element)
- Entry count: Use 1 for each entry to limit by number of entries
- Custom metric: Any measure that makes sense for your use case
See the README.md for more detailed examples and explanations.
Copyright (C) NHR@FAU, University Erlangen-Nuremberg. All rights reserved. Use of this source code is governed by a MIT-style license that can be found in the LICENSE file.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func NewMiddleware ¶
func NewMiddleware(maxmemory int, ttl time.Duration) mux.MiddlewareFunc
NewMiddleware returns a gorilla/mux style middleware function.
This is a convenience wrapper around NewHttpHandler for use with middleware chains.
Example:
    r := mux.NewRouter()
    r.Use(lrucache.NewMiddleware(100*1024*1024, 1*time.Hour))
Types ¶
type Cache ¶
type Cache struct {
    // contains filtered or unexported fields
}
Cache is a thread-safe LRU cache with TTL support.
The cache uses a mutex for synchronization and a condition variable to coordinate goroutines waiting for values being computed.
Concurrency model:
- All public methods are thread-safe
- Multiple goroutines can read different keys concurrently
- If multiple goroutines request the same uncached key, only one computes the value while others wait
- The cache uses a condition variable (cond) to wake up waiting goroutines
Memory management:
- maxmemory: Maximum total size (in user-defined units)
- usedmemory: Current total size of all cached entries
- When usedmemory exceeds maxmemory, least recently used entries are evicted
Data structures:
- entries: Hash map for O(1) key lookup
- head/tail: Doubly-linked list for LRU ordering (head = most recent)
func New ¶
func New(maxmemory int) *Cache
New creates and returns a new LRU cache instance.
The maxmemory parameter sets the maximum total size of all cached entries. The size is measured in user-defined units (see package documentation). When the total size exceeds maxmemory, the least recently used entries are evicted until the size is below the limit.
Common strategies for maxmemory:
- Bytes: Set to actual memory limit (e.g., 100*1024*1024 for 100MB)
- Entry count: Set to max number of entries (use size=1 for each entry)
- Custom: Any consistent unit that makes sense for your use case
Example:
    // Limit cache to approximately 10MB
    cache := lrucache.New(10 * 1024 * 1024)

    // Limit cache to 1000 entries (using size=1 per entry)
    cache := lrucache.New(1000)
func (*Cache) Del ¶
func (c *Cache) Del(key string) bool
Del removes the entry with the given key from the cache.
Returns:
- true if the key was in the cache (even if expired)
- false if the key was not found
Note: This function may return false even if the value will appear in the cache later, if called while another goroutine is computing that key. It may return true even if the value has already expired.
Example:
    if cache.Del("old-key") {
        log.Println("Removed old-key from cache")
    }
func (*Cache) Get ¶
func (c *Cache) Get(key string, computeValue ComputeValue) interface{}
Get retrieves the cached value for the given key or computes it using computeValue.
Behavior:
- If the key exists and hasn't expired: Returns the cached value immediately
- If the key doesn't exist or has expired: Calls computeValue to compute the value
- If computeValue is nil and key not found: Returns nil
- If another goroutine is computing the same key: Waits for that computation to complete
Concurrency guarantees:
- Only one goroutine will execute computeValue for a given key at a time
- Other goroutines requesting the same key will wait for the result
- Different keys can be computed concurrently without blocking each other
- The computeValue closure is called synchronously (not in a separate goroutine)
IMPORTANT: The computeValue closure must NOT call methods on the same cache instance, as this will cause a deadlock. If you need to access other cache entries, do so before or after the Get call.
Parameters:
- key: The cache key to look up
- computeValue: Closure to compute the value if not cached (can be nil for lookup-only)
Returns:
- The cached or computed value, or nil if computeValue is nil and key not found
Examples:
    // Basic usage with computation
    value := cache.Get("user:123", func() (interface{}, time.Duration, int) {
        user := fetchUserFromDB(123)
        return user, 10 * time.Minute, 1
    }).(User)

    // Lookup-only (no computation)
    value := cache.Get("user:123", nil)
    if value == nil {
        // Key not found or expired
    }

    // With size calculation
    value := cache.Get("data", func() (interface{}, time.Duration, int) {
        data := expensiveComputation()
        return data, 1 * time.Hour, len(data) * 8 // Approximate bytes
    })
func (*Cache) Keys ¶
func (c *Cache) Keys(f func(key string, val interface{}))
Keys iterates over all entries in the cache and calls f for each one.
The function f receives the key and value of each entry. During iteration, expired entries are automatically evicted and sanity checks are performed on the internal data structures.
IMPORTANT: The cache is fully locked for the entire duration of this call. This means no other cache operations can proceed while Keys is running. Keep the function f as fast as possible to minimize lock contention.
The iteration order is not guaranteed.
Example:
    cache.Keys(func(key string, val interface{}) {
        fmt.Printf("Key: %s, Value: %v\n", key, val)
    })
func (*Cache) Put ¶
func (c *Cache) Put(key string, value interface{}, size int, ttl time.Duration)
Put stores a value in the cache with the specified key, size, and TTL.
If another goroutine is currently computing this key via Get, Put will wait for the computation to complete before overwriting the value.
If the key already exists, the old value is replaced and the entry is moved to the front of the LRU list (marked as most recently used).
Parameters:
- key: The cache key
- value: The value to store (can be any type)
- size: Size estimate in user-defined units
- ttl: Time-to-live duration until the value expires
Example:
    cache.Put("config", configData, len(configData), 1*time.Hour)
type ComputeValue ¶
type ComputeValue func() (value interface{}, ttl time.Duration, size int)
ComputeValue is the type of the closure that must be passed to Get to compute a value when it is not cached or has expired.
The closure should perform the expensive computation or I/O operation and return three values:
- value: The computed value to be stored in the cache (can be any type)
- ttl: Time-to-live duration until this value expires and needs recomputation
- size: A size estimate in user-defined units (see package documentation)
The closure is called synchronously and must not call methods on the same cache instance to avoid deadlocks. If multiple goroutines request the same key concurrently, only one will execute this closure while others wait.
Example:
    computeValue := func() (interface{}, time.Duration, int) {
        data := fetchFromDatabase() // Expensive operation
        return data, 5 * time.Minute, len(data)
    }
type HttpHandler ¶
type HttpHandler struct {
    // CacheKey allows overriding the way the cache key is extracted
    // from the http request. The default is to use the RequestURI.
    CacheKey func(*http.Request) string
    // contains filtered or unexported fields
}
HttpHandler is an HTTP middleware that caches HTTP responses using an LRU cache.
It can be used to cache static assets, API responses, or any other GET request. By default, the request's RequestURI is used as the cache key.
Caching behavior:
- Only GET requests are cached (other methods pass through)
- Responses with status code 200 are cached with the configured TTL
- Responses with other status codes are cached with TTL=0 (immediate re-fetch)
- If the response sets an "Expires" header, it overrides the default TTL
- The "Age" header is automatically set to indicate cache age
The CacheKey function can be customized to change how cache keys are generated from requests (e.g., to include query parameters or headers).
func NewHttpHandler ¶
func NewHttpHandler(maxmemory int, ttl time.Duration, fetcher http.Handler) *HttpHandler
NewHttpHandler creates a new caching HTTP handler.
The handler caches responses from the fetcher handler. If no cached response is found or it has expired, fetcher is called to generate the response.
If the fetcher sets an "Expires" header, the TTL is calculated from that header. Otherwise, the default TTL is used. Responses with status codes other than 200 are cached with TTL=0 (immediate expiration).
Parameters:
- maxmemory: Maximum cache size in bytes (size of response bodies)
- ttl: Default time-to-live for cached responses
- fetcher: The handler to call when cache misses occur
Example:
    // Cache static files for 1 hour, max 100MB
    fileServer := http.FileServer(http.Dir("./static"))
    cachedHandler := lrucache.NewHttpHandler(100*1024*1024, 1*time.Hour, fileServer)
    http.Handle("/static/", cachedHandler)
func (*HttpHandler) ServeHTTP ¶
func (h *HttpHandler) ServeHTTP(rw http.ResponseWriter, r *http.Request)
ServeHTTP implements the http.Handler interface.
It attempts to serve the response from cache. If not cached or expired, it calls the fetcher handler and caches the result.
Only GET requests are cached; other HTTP methods pass through to the fetcher.