Documentation
¶
Index ¶
- Constants
- Variables
- type DistributedLock
- func (dl *DistributedLock) TryLock(ctx context.Context, lockKey string) (*redsync.Mutex, bool, error)
- func (dl *DistributedLock) Unlock(ctx context.Context, mutex *redsync.Mutex) error
- func (dl *DistributedLock) WithLock(ctx context.Context, lockKey string, fn func() error) error
- func (dl *DistributedLock) WithLockOptions(ctx context.Context, lockKey string, opts LockOptions, fn func() error) error
- type DistributedLocker
- type LockOptions
- type Mode
- type RedisConnection
- type RedisLimiter
Constants ¶
This section is empty.
Variables ¶
var Factory = func(client redis.UniversalClient, config ratelimit.Config) ratelimit.Limiter { return NewRedisLimiter(client, config) }
Factory is a ready-to-use LimiterFactory for creating Redis-backed rate limiters. This is the standard factory that consumers should use to avoid boilerplate.
Example usage:
handler := ratelimit.NewGlobalHandler(&ratelimit.GlobalHandlerConfig{
	// ... config
	LimiterFactory: redis.Factory,
}, logger)
Functions ¶
This section is empty.
Types ¶
type DistributedLock ¶ added in v2.4.0
type DistributedLock struct {
	// contains filtered or unexported fields
}
DistributedLock provides distributed locking capabilities using Redis and the RedLock algorithm. This implementation ensures mutual exclusion across multiple service instances, preventing race conditions in critical sections such as:
- Password update operations
- Cache invalidation
- Rate limiting checks
- Any other operation requiring distributed coordination
The RedLock algorithm provides strong guarantees even in the presence of:
- Network partitions
- Process crashes
- Clock drift
Example usage:
lock, err := redis.NewDistributedLock(redisConnection)
if err != nil {
	return err
}

err = lock.WithLock(ctx, "lock:user:123", func() error {
	// Critical section - only one instance will execute this at a time
	return updateUser(123)
})
func NewDistributedLock ¶ added in v2.4.0
func NewDistributedLock(conn *RedisConnection) (*DistributedLock, error)
NewDistributedLock creates a new distributed lock manager. The lock manager uses the RedLock algorithm for distributed consensus.
Thread-safe: Yes - multiple goroutines can use the same DistributedLock instance.
Example:
lock, err := redis.NewDistributedLock(redisConnection)
if err != nil {
	return fmt.Errorf("failed to initialize lock: %w", err)
}
func (*DistributedLock) TryLock ¶ added in v2.4.0
func (dl *DistributedLock) TryLock(ctx context.Context, lockKey string) (*redsync.Mutex, bool, error)
TryLock attempts to acquire a lock without retrying. It returns the mutex and true if the lock was acquired, or false if the lock is busy. It returns an error for unexpected failures (network errors, context cancellation, etc.).
Use this when you want to skip the operation if the lock is busy:
mutex, acquired, err := lock.TryLock(ctx, "lock:cache:refresh")
if err != nil {
	// Unexpected error (network, context cancellation, etc.) - should be propagated
	return fmt.Errorf("failed to attempt lock acquisition: %w", err)
}
if !acquired {
	logger.Info("Lock busy, skipping cache refresh")
	return nil
}
defer lock.Unlock(ctx, mutex)
// Perform cache refresh...
func (*DistributedLock) Unlock ¶ added in v2.4.0
func (dl *DistributedLock) Unlock(ctx context.Context, mutex *redsync.Mutex) error
Unlock releases a previously acquired lock. This is only needed if you use TryLock(). WithLock() handles unlocking automatically.
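As a hedged sketch of the TryLock/Unlock pairing (the lock key is illustrative, and whether to log or ignore a release failure is a caller decision, not something this package prescribes):
mutex, acquired, err := lock.TryLock(ctx, "lock:cache:refresh")
if err != nil || !acquired {
	return err
}
defer func() {
	// Unlock can fail (for example if the lock already expired); handle as appropriate.
	_ = lock.Unlock(ctx, mutex)
}()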
func (*DistributedLock) WithLock ¶ added in v2.4.0
func (dl *DistributedLock) WithLock(ctx context.Context, lockKey string, fn func() error) error
WithLock executes a function while holding a distributed lock. The lock is automatically released when the function returns, even on panic.
Parameters:
- ctx: context for cancellation and tracing
- lockKey: unique identifier for the lock (e.g., "lock:user:123")
- fn: function to execute under lock
Returns:
- error: from fn() or lock acquisition failure
Example:
err := lock.WithLock(ctx, "lock:user:password:123", func() error {
	return updatePassword(123, newPassword)
})
func (*DistributedLock) WithLockOptions ¶ added in v2.4.0
func (dl *DistributedLock) WithLockOptions(ctx context.Context, lockKey string, opts LockOptions, fn func() error) error
WithLockOptions executes a function while holding a distributed lock with custom options. Use this when you need fine-grained control over lock behavior.
Example with custom timeout:
opts := redis.LockOptions{
	Expiry:     30 * time.Second, // Long-running operation
	Tries:      5,                // More aggressive retries
	RetryDelay: 1 * time.Second,
}

err := lock.WithLockOptions(ctx, "lock:report:generation", opts, func() error {
	return generateReport()
})
type DistributedLocker ¶ added in v2.4.0
type DistributedLocker interface {
	// WithLock executes a function while holding a distributed lock with default options.
	// The lock is automatically released when the function returns.
	WithLock(ctx context.Context, lockKey string, fn func() error) error

	// WithLockOptions executes a function while holding a distributed lock with custom options.
	// Use this for fine-grained control over lock behavior.
	WithLockOptions(ctx context.Context, lockKey string, opts LockOptions, fn func() error) error

	// TryLock attempts to acquire a lock without retrying.
	// Returns the mutex and true if lock was acquired, nil and false otherwise.
	TryLock(ctx context.Context, lockKey string) (*redsync.Mutex, bool, error)

	// Unlock releases a previously acquired lock (used with TryLock).
	Unlock(ctx context.Context, mutex *redsync.Mutex) error
}
DistributedLocker provides an interface for distributed locking operations. This interface allows for easy mocking in tests without requiring a real Redis instance.
Example test implementation:
type MockDistributedLock struct{}

func (m *MockDistributedLock) WithLock(ctx context.Context, lockKey string, fn func() error) error {
	// In tests, just execute the function without actual locking
	return fn()
}

func (m *MockDistributedLock) WithLockOptions(ctx context.Context, lockKey string, opts LockOptions, fn func() error) error {
	return fn()
}

// TryLock and Unlock can be stubbed the same way if the code under test uses them.
type LockOptions ¶ added in v2.4.0
type LockOptions struct {
	// Expiry is how long the lock is held before auto-expiring (prevents deadlocks)
	// Default: 10 seconds
	Expiry time.Duration

	// Tries is the number of attempts to acquire the lock before giving up
	// Default: 3
	Tries int

	// RetryDelay is the delay between retry attempts
	// Default: 500ms
	RetryDelay time.Duration

	// DriftFactor accounts for clock drift in distributed systems (RedLock algorithm)
	// Default: 0.01 (1%)
	DriftFactor float64
}
LockOptions configures lock behavior for advanced use cases. Use DefaultLockOptions() for sensible defaults.
func DefaultLockOptions ¶ added in v2.4.0
func DefaultLockOptions() LockOptions
DefaultLockOptions returns production-ready defaults for distributed locking. These values are tuned for typical microservice scenarios with:
- Operations completing within seconds
- Network latency < 100ms
- Acceptable retry overhead
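For example, a caller can start from these defaults and override individual fields before passing them to WithLockOptions (the lock key and the closeBillingPeriod helper below are placeholders, not part of this package):
opts := redis.DefaultLockOptions()
opts.Expiry = 20 * time.Second // override only the expiry, keep the other defaults

err := lock.WithLockOptions(ctx, "lock:billing:close-period", opts, func() error {
	return closeBillingPeriod()
})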
func RateLimiterLockOptions ¶ added in v2.4.0
func RateLimiterLockOptions() LockOptions
RateLimiterLockOptions returns optimized defaults for rate limiter locking. These values are tuned for short, fast operations like rate limiting:
- Quick operations (< 100ms)
- Fast retry for better throughput
- Lower expiry to reduce contention
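A minimal sketch of wiring these options into a lock-protected rate-limit check (the lock key format and the checkAndIncrement helper are hypothetical):
err := lock.WithLockOptions(ctx, "lock:ratelimit:user:123", redis.RateLimiterLockOptions(), func() error {
	return checkAndIncrement(ctx, "user:123")
})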
type RedisConnection ¶
type RedisConnection struct {
	Mode                         Mode
	Address                      []string
	DB                           int
	MasterName                   string
	Password                     string
	Protocol                     int
	UseTLS                       bool
	Logger                       log.Logger
	Connected                    bool
	Client                       redis.UniversalClient
	CACert                       string
	UseGCPIAMAuth                bool
	GoogleApplicationCredentials string
	ServiceAccount               string
	TokenLifeTime                time.Duration
	RefreshDuration              time.Duration
	PoolSize                     int
	MinIdleConns                 int
	ReadTimeout                  time.Duration
	WriteTimeout                 time.Duration
	DialTimeout                  time.Duration
	PoolTimeout                  time.Duration
	MaxRetries                   int
	MinRetryBackoff              time.Duration
	MaxRetryBackoff              time.Duration
	// contains filtered or unexported fields
}
RedisConnection represents a Redis connection hub
func (*RedisConnection) BuildTLSConfig ¶
func (rc *RedisConnection) BuildTLSConfig() (*tls.Config, error)
BuildTLSConfig generates a *tls.Config using the base64-encoded CA certificate (CACert).
func (*RedisConnection) Close ¶
func (rc *RedisConnection) Close() error
Close closes the Redis connection
func (*RedisConnection) Connect ¶
func (rc *RedisConnection) Connect(ctx context.Context) error
Connect initializes a Redis connection
func (*RedisConnection) GetClient ¶
func (rc *RedisConnection) GetClient(ctx context.Context) (redis.UniversalClient, error)
GetClient returns the underlying Redis client (a redis.UniversalClient).
func (*RedisConnection) InitVariables ¶
func (rc *RedisConnection) InitVariables()
InitVariables sets default values for RedisConnection
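Putting the connection pieces together, a minimal sketch might look like the following; the field values are illustrative, and calling InitVariables explicitly before Connect is an assumption about the intended call order:
conn := &redis.RedisConnection{
	Address: []string{"localhost:6379"},
	DB:      0,
}
conn.InitVariables() // fill unset fields with defaults

if err := conn.Connect(ctx); err != nil {
	return fmt.Errorf("redis connect: %w", err)
}
defer conn.Close()

client, err := conn.GetClient(ctx)
if err != nil {
	return err
}
// use client (redis.UniversalClient) for commands, limiters, etc.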
type RedisLimiter ¶ added in v2.4.0
type RedisLimiter struct {
	// UseLocking enables distributed locking for atomic rate limit checks.
	// Default: false (uses faster pipeline-based approach)
	UseLocking bool

	// DistributedLocker provides the locking mechanism when UseLocking is true.
	// If UseLocking is true but this is nil, falls back to non-locking behavior.
	DistributedLocker DistributedLocker
	// contains filtered or unexported fields
}
RedisLimiter implements distributed rate limiting using Redis. It uses sorted sets with a sliding-window algorithm for accurate rate limiting. This implementation is production-ready and can handle high-throughput scenarios.
Optional Race Condition Protection: By default, RedisLimiter uses Redis pipelines which provide good performance but have a race condition window where concurrent requests can exceed the limit. For strict rate limiting guarantees, set UseLocking to true and provide a DistributedLocker. This adds ~10-50ms latency but ensures atomic operations.
func NewRedisLimiter ¶ added in v2.4.0
func NewRedisLimiter(client redis.UniversalClient, config ratelimit.Config) *RedisLimiter
NewRedisLimiter creates a new Redis-backed rate limiter. The client parameter accepts redis.UniversalClient which supports:
- Standalone Redis
- Redis Sentinel (high availability)
- Redis Cluster (horizontal scaling)
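A hedged sketch of wiring the limiter, including the optional locking described above; conn is an already-connected RedisConnection and cfg is a ratelimit.Config prepared elsewhere (its fields are defined in the ratelimit package and omitted here):
client, err := conn.GetClient(ctx)
if err != nil {
	return err
}

limiter := redis.NewRedisLimiter(client, cfg)

// Optional: strict guarantees at the cost of extra latency.
lock, err := redis.NewDistributedLock(conn)
if err != nil {
	return err
}
limiter.UseLocking = true
limiter.DistributedLocker = lock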
func (*RedisLimiter) Allow ¶ added in v2.4.0
Allow implements sliding window rate limiting using Redis sorted sets. Algorithm:
1. Remove old entries outside the current time window
2. Count remaining entries in the window
3. Add current request with timestamp as score
4. Set expiration on the key for automatic cleanup
This approach provides:
- Accurate rate limiting (no burst issues at window boundaries)
- Automatic cleanup of old data
- Atomic operations via Redis pipeline
- Distributed consistency across multiple service instances
Race Condition Protection: If UseLocking is enabled and a DistributedLocker is provided, the entire operation is wrapped in a distributed lock to prevent concurrent requests from exceeding the limit. This adds latency but ensures strict guarantees.
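To make the four steps concrete, here is a standalone sketch of the same sliding-window idea using go-redis pipelines. It is illustrative only, not this package's actual implementation; allowSketch is a hypothetical name, the key/limit/window come from the caller, and redis here refers to github.com/redis/go-redis/v9 (context, strconv, and time are also assumed to be imported):
func allowSketch(ctx context.Context, client redis.UniversalClient, key string, limit int64, window time.Duration) (bool, error) {
	now := time.Now()
	windowStart := now.Add(-window).UnixNano()

	pipe := client.Pipeline()
	// 1. Remove entries that fell outside the current window.
	pipe.ZRemRangeByScore(ctx, key, "0", strconv.FormatInt(windowStart, 10))
	// 2. Count what remains in the window.
	countCmd := pipe.ZCard(ctx, key)
	// 3. Record the current request, scored by timestamp.
	pipe.ZAdd(ctx, key, redis.Z{Score: float64(now.UnixNano()), Member: now.UnixNano()})
	// 4. Expire the key so idle keys clean themselves up.
	pipe.Expire(ctx, key, window)
	if _, err := pipe.Exec(ctx); err != nil {
		return false, err
	}

	// The request is allowed if the window held fewer than limit entries before this one.
	return countCmd.Val() < limit, nil
}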
func (*RedisLimiter) GetConfig ¶ added in v2.4.0
func (rl *RedisLimiter) GetConfig() ratelimit.Config
GetConfig returns the limiter's configuration. This is used by middleware to populate response headers.
func (*RedisLimiter) Reset ¶ added in v2.4.0
func (rl *RedisLimiter) Reset(ctx context.Context, key string) error
Reset clears all rate limit data for a specific key. This is useful for: - Administrative overrides (clearing rate limits for specific users) - Testing scenarios - Implementing "forgiveness" logic after temporary blocks
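For example, an administrative override might clear a single user's limit (the key format is hypothetical and must match whatever key the limiter is consulted with):
if err := limiter.Reset(ctx, "user:123"); err != nil {
	return fmt.Errorf("failed to reset rate limit: %w", err)
}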