gomemcache
A high-performance memcached client library for Go.
Features
- Production Ready: Battle-tested and used in production environments
- Full Protocol Support: Implements the complete memcache protocol
- Advanced Connection Pooling: Efficient connection management with configurable pool size, lifetime, health checks, and lifecycle callbacks
- Cluster Support: Automatic sharding across multiple memcached servers
- Auto-Discovery: Support for AWS ElastiCache, GCP Memorystore and other cluster configurations
- High Performance: Optimized for speed with minimal allocations
- Pool Statistics: Monitor connection pool health with detailed statistics
Installation
go get github.com/Assertive-Yield/gomemcache/memcache
Quick Start
package main

import (
	"fmt"

	"github.com/Assertive-Yield/gomemcache/memcache"
)

func main() {
	// Connect to multiple memcached servers
	mc, err := memcache.New("10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11212")
	if err != nil {
		panic(err)
	}

	// Set an item
	err = mc.Set(&memcache.Item{
		Key:        []byte("foo"),
		Value:      []byte("my value"),
		Expiration: 3600, // seconds
	})
	if err != nil {
		panic(err)
	}

	// Get an item
	item, err := mc.Get([]byte("foo"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("Value: %s\n", item.Value)
}
Usage Examples
Basic Operations
// Add (only store if key doesn't exist)
err := mc.Add(&memcache.Item{Key: []byte("key"), Value: []byte("value")})
// Replace (only store if key exists)
err := mc.Replace(&memcache.Item{Key: []byte("key"), Value: []byte("new value")})
// Delete a key
err := mc.Delete([]byte("key"))
// Get multiple keys at once
items, err := mc.GetMulti([][]byte{[]byte("key1"), []byte("key2"), []byte("key3")})
// Touch (update expiration time without fetching)
err := mc.Touch([]byte("key"), 3600)
// Flush all items
err := mc.FlushAll()
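Add and Replace are conditional stores, so callers usually want to distinguish "the condition was not met" from a real failure. A minimal sketch, assuming this fork keeps the upstream gomemcache ErrNotStored sentinel for failed conditional writes:

// Sketch: react to a failed conditional store.
// Assumes memcache.ErrNotStored is returned when Add finds the key already set.
err := mc.Add(&memcache.Item{Key: []byte("lock:job-42"), Value: []byte("1"), Expiration: 30})
switch {
case err == nil:
	// We created the key, e.g. we hold a best-effort lock.
case err == memcache.ErrNotStored:
	// The key already existed; another client got there first.
default:
	// Network or server error.
}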
Increment/Decrement
// Increment a counter
newValue, err := mc.Increment([]byte("counter"), 1)
// Decrement a counter
newValue, err := mc.Decrement([]byte("counter"), 1)
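Memcached only increments values that already exist and parse as numbers; incrementing a missing key reports a miss instead of creating the counter. A minimal sketch that seeds the counter first, again assuming the upstream ErrNotStored sentinel:

// Sketch: seed the counter once, then increment.
// Assumes Add returns memcache.ErrNotStored when the key already exists.
if err := mc.Add(&memcache.Item{Key: []byte("counter"), Value: []byte("0")}); err != nil && err != memcache.ErrNotStored {
	panic(err)
}
newValue, err := mc.Increment([]byte("counter"), 1)
if err != nil {
	panic(err)
}
fmt.Println("counter is now", newValue)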
Compare-And-Swap (CAS)
// Get item with CAS ID
item, err := mc.Get([]byte("key"))
if err != nil {
	panic(err)
}

// Modify and update only if unchanged
item.Value = []byte("new value")
err = mc.CompareAndSwap(item)
if err == memcache.ErrCASConflict {
	// Item was modified by another client
}
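In practice CAS updates are wrapped in a small retry loop: re-read the item, reapply the change, and retry on conflict. A sketch of that pattern (updateValue is a hypothetical helper, not part of the library):

// Sketch: retry a CAS update a few times on conflict.
func casUpdate(mc *memcache.Client, key []byte, updateValue func([]byte) []byte) error {
	for attempt := 0; attempt < 3; attempt++ {
		item, err := mc.Get(key)
		if err != nil {
			return err
		}
		item.Value = updateValue(item.Value)
		err = mc.CompareAndSwap(item)
		if err == nil {
			return nil // stored
		}
		if err != memcache.ErrCASConflict {
			return err
		}
		// Conflict: another client modified the item; re-read and retry.
	}
	return memcache.ErrCASConflict
}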
Basic Configuration
mc, err := memcache.New("localhost:11211")
if err != nil {
	panic(err)
}

// Set timeout for network operations
mc.Timeout = 100 * time.Millisecond
Advanced Pool Configuration
For fine-grained control over connection pooling, use NewConfig and NewWithConfig:
import (
	"context"
	"net"
	"time"

	"github.com/Assertive-Yield/gomemcache/memcache"
)

// Create configuration with default values
config := memcache.NewConfig("localhost:11211", "localhost:11212")

// Customize pool settings
config.Timeout = 100 * time.Millisecond
config.MaxConns = 20                           // Maximum connections per server (default: max(4, NumCPU))
config.MinConns = 2                            // Minimum connections per server (default: 0)
config.MaxConnLifetime = 30 * time.Minute      // Connection max lifetime (default: 1 hour)
config.MaxConnLifetimeJitter = 5 * time.Minute // Random jitter to prevent thundering herd
config.MaxConnIdleTime = 10 * time.Minute      // Max idle time before closing (default: 30 minutes)
config.HealthCheckPeriod = 30 * time.Second    // Health check interval (default: 1 minute)

// Optional lifecycle callbacks
config.BeforeConnect = func(ctx context.Context, addr net.Addr) error {
	// Called before establishing a new connection
	return nil
}
config.AfterConnect = func(ctx context.Context, conn net.Conn) error {
	// Called after connection is established
	return nil
}
config.BeforeAcquire = func(ctx context.Context, conn net.Conn) bool {
	// Return true to allow acquisition, false to destroy and get another connection
	return true
}
config.AfterRelease = func(conn net.Conn) bool {
	// Return true to return to pool, false to destroy
	return true
}
config.BeforeClose = func(conn net.Conn) {
	// Called before a connection is closed
}

// Create client with configuration
mc, err := memcache.NewWithConfig(config)
if err != nil {
	panic(err)
}
defer mc.Close()
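The callbacks are a convenient hook for lightweight instrumentation. A minimal sketch that counts dials and closes with sync/atomic counters (the counter variables are illustrative, not part of the library):

// Sketch: count connection churn via the lifecycle callbacks (import "sync/atomic").
var dialCount, closeCount atomic.Int64

config.AfterConnect = func(ctx context.Context, conn net.Conn) error {
	dialCount.Add(1)
	return nil
}
config.BeforeClose = func(conn net.Conn) {
	closeCount.Add(1)
}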
Pool Statistics
Monitor connection pool health with statistics:
stats := mc.Stat()
fmt.Printf("Acquired connections: %d\n", stats.AcquiredConns())
fmt.Printf("Idle connections: %d\n", stats.IdleConns())
fmt.Printf("Total connections: %d\n", stats.TotalConns())
fmt.Printf("Max connections: %d\n", stats.MaxConns())
fmt.Printf("New connections created: %d\n", stats.NewConnsCount())
fmt.Printf("Connections closed (lifetime): %d\n", stats.MaxLifetimeDestroyCount())
fmt.Printf("Connections closed (idle): %d\n", stats.MaxIdleDestroyCount())
Pool Reset
Reset all connections without closing the client (useful for handling network disruptions):
// Close all connections but keep client open
mc.Reset()
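For example, an application that observes a burst of timeouts after a failover could drop every pooled connection and let fresh ones be dialed as needed. A sketch, with a hypothetical isNetworkError helper:

// Sketch: reset the pool after a suspected network disruption.
// isNetworkError is a hypothetical helper; use whatever detection fits your application.
if _, err := mc.Get([]byte("healthcheck")); err != nil && err != memcache.ErrCacheMiss && isNetworkError(err) {
	mc.Reset() // drops all pooled connections; the client stays usable
}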
Error Handling
item, err := mc.Get([]byte("key"))
if err == memcache.ErrCacheMiss {
	// Key doesn't exist
} else if err != nil {
	// Other error
}
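A common pattern built on ErrCacheMiss is a cache-aside read: on a miss, load from the backing store and repopulate the cache. A sketch, with a hypothetical loadFromDB helper:

// Sketch: cache-aside read keyed on ErrCacheMiss.
// loadFromDB is a hypothetical loader for the backing store.
func getUser(mc *memcache.Client, id []byte) ([]byte, error) {
	item, err := mc.Get(id)
	if err == nil {
		return item.Value, nil // cache hit
	}
	if err != memcache.ErrCacheMiss {
		return nil, err // real failure
	}
	value, err := loadFromDB(id)
	if err != nil {
		return nil, err
	}
	// Best effort: ignore a failed repopulate.
	_ = mc.Set(&memcache.Item{Key: id, Value: value, Expiration: 300})
	return value, nil
}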
Zero-Allocation Gets with sync.Pool
For high-performance applications, use GetWithItem with sync.Pool to avoid allocations:
import (
	"sync"

	"github.com/Assertive-Yield/gomemcache/memcache"
)

// Create a pool of reusable Item instances
var itemPool = sync.Pool{
	New: func() interface{} {
		return &memcache.Item{}
	},
}

func getKey(mc *memcache.Client, key []byte) ([]byte, error) {
	// Get an item from the pool
	item := itemPool.Get().(*memcache.Item)
	defer func() {
		// Reset and return to pool
		item.Reset()
		itemPool.Put(item)
	}()

	// Populate the item from memcache
	err := mc.GetWithItem(key, item)
	if err != nil {
		return nil, err
	}

	// Make a copy of the value before returning the item to the pool
	value := make([]byte, len(item.Value))
	copy(value, item.Value)
	return value, nil
}
This approach is significantly more efficient than Get() for high-throughput scenarios as it eliminates Item allocations.
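To verify the difference in your own environment, a pair of benchmarks along these lines can be run with go test -bench . -benchmem. They assume a local memcached with the key "foo" already set, as in the Quick Start:

// Sketch: compare allocations of Get vs GetWithItem with a reused Item.
func BenchmarkGet(b *testing.B) {
	mc, err := memcache.New("localhost:11211")
	if err != nil {
		b.Fatal(err)
	}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := mc.Get([]byte("foo")); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkGetWithItem(b *testing.B) {
	mc, err := memcache.New("localhost:11211")
	if err != nil {
		b.Fatal(err)
	}
	item := &memcache.Item{}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		item.Reset()
		if err := mc.GetWithItem([]byte("foo"), item); err != nil {
			b.Fatal(err)
		}
	}
}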
Cloud Auto-Discovery
AWS ElastiCache and GCP Memorystore
Both AWS ElastiCache and GCP Memorystore support auto-discovery using the same config get cluster command. The NewDiscoveryClient works with both platforms:
import (
	"time"

	"github.com/Assertive-Yield/gomemcache/memcache"
)

// AWS ElastiCache configuration endpoint example
mc, err := memcache.NewDiscoveryClient(
	"my-cluster.cfg.use1.cache.amazonaws.com:11211",
	60 * time.Second, // polling interval (minimum 1 second)
)

// GCP Memorystore discovery endpoint example
// mc, err := memcache.NewDiscoveryClient(
//	"10.0.0.1:11211", // Your GCP Memorystore discovery endpoint
//	60 * time.Second,
// )

if err != nil {
	panic(err)
}

// Stop polling when done (optional, prevents goroutine leak)
defer mc.StopPolling()
How it works:
- The client sends config get cluster to the discovery endpoint
- It receives the list of cluster nodes and the configuration version
- It automatically updates the server list when the cluster topology changes
- It polls periodically at the specified interval (minimum 1 second)
Note: For GCP Memorystore, ensure you're using an instance that supports the config get command (typically available in instances with auto-discovery enabled).
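To see what the discovery exchange looks like on the wire, you can send config get cluster to the endpoint yourself; the reply carries a configuration version and a space-separated list of host|ip|port entries, terminated by END. A debugging sketch using only the standard library (replace the address with your own endpoint):

// Sketch: print the raw auto-discovery response for debugging.
conn, err := net.DialTimeout("tcp", "my-cluster.cfg.use1.cache.amazonaws.com:11211", 2*time.Second)
if err != nil {
	panic(err)
}
defer conn.Close()

fmt.Fprintf(conn, "config get cluster\r\n")
scanner := bufio.NewScanner(conn)
for scanner.Scan() {
	line := scanner.Text()
	fmt.Println(line)
	if line == "END" {
		break
	}
}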
API Documentation
Full API documentation is available at: https://pkg.go.dev/github.com/Assertive-Yield/gomemcache/memcache
Benchmarks
To run benchmarks, first start a local memcached instance:
# Linux/macOS
memcached -d -m 64 -p 11211
# Or using Docker
docker run -d -p 11211:11211 memcached:latest
Run all benchmarks:
go test -bench=. -benchmem -benchtime=3s ./memcache/ | tee benchmark_results.txt
See memcache/benchmark_operations_test.go for benchmark implementations.
Testing
# Run all tests
go test ./...
# Run tests with coverage
go test -cover ./...
# Run tests with race detector
go test -race ./...
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
Apache License 2.0 - see LICENSE for details.
Credits
Originally created by Brad Fitzpatrick.
This fork is maintained and refactored by Assertive Yield and Vahid Sohrabloo.
See AUTHORS for the complete list of contributors.