windowedlimiter

package module
v0.0.0-...-cfa4dcf
Published: Aug 22, 2025 License: MIT Imports: 18 Imported by: 0

README

go-windowedlimiter

An implementation of a fully-featured sliding-window rate limiter in Go.

Currently extremely pre-alpha. The API is guaranteed to change, as are the internals.

Usage

See example for basic usage.
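
For orientation, here is a minimal sketch of the moving parts, assuming go-redis and a hypothetical import path; the signatures come from the Documentation section below.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"

	windowedlimiter "example.com/go-windowedlimiter" // hypothetical import path
)

// userKey satisfies the Key constraint: it is comparable and a fmt.Stringer.
type userKey string

func (k userKey) String() string { return string(k) }

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Called lazily once per key, then cached until refreshed.
	keyConfFn := func(ctx context.Context, key userKey) *windowedlimiter.KeyConf {
		return &windowedlimiter.KeyConf{Rate: 100, Interval: time.Second}
	}

	l := windowedlimiter.New[userKey](ctx, rdb, keyConfFn)
	defer l.Close()

	if l.Allow(ctx, userKey("user-42")) {
		fmt.Println("allowed")
	} else {
		fmt.Println("rate limited")
	}
}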

Goals

  • Sliding window algorithm for simple settings and automatic smoothing past an initial burst
  • Optimized for distributed systems with an arbitrary number of processes/goroutines fighting over available rate
    • The plan here is to add a registration system: when a mitigation (see below) is present, all processes will register their number of waiting goroutines, so that each mitigation can track the total global count and divide its hits to the centralized KV store by that number. Details TBD
  • Simple Redis commands both for Redis CPU reasons, and for ease of porting to other backends
  • Comprehensive testing/benchmarking suite to verify not only that the limiter basically works, but also that it does what it's supposed to at a fine time granularity, in a relatively distributed use-case
  • Explicitly tested for correctness and performance
    • While Redis is degraded
      • Redis slow
      • Redis down (fast-fail, e.g. connection refused)
      • Redis down (slow-fail, e.g. DNS or connection timeout)
    • Low volume (CPU usage/latency of a single request)
    • High volume (max_rate-1)
    • Max volume (max_rate)
      • Use a mitigation cache to efficiently block requests
      • Use a single goroutine per mitigation for local atomicity
      • Load balance the allowed rate across all waiting threads
      • Have the mitigation cache coordinate cross-process such that all processes/threads only try at the allowed rate

Design

High-level concepts
Use-Cases
  • I am a distributed HTTP service accepting client requests and want to rate limit in a way where I can return a 429 (see the sketch after this list)
    • No control over the request rate at the edge, so blocking isn't an option
  • I am a distributed service that sometimes needs to do a thing that needs to be rate limited, but not on any known cadence
    • Non-persistent blocking client
  • I am a distributed service that is a tight loop on doing a thing that needs to be rate limited
    • Persistent (looping) blocking client
  • I am any combination of the above at the same time
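
As a sketch of the first use-case: the non-blocking Allow maps directly onto a 429 response. This reuses the hypothetical userKey type and limiter from the Usage sketch, plus net/http.

// rateLimitMiddleware rejects over-limit requests immediately instead of
// letting blocked goroutines pile up.
func rateLimitMiddleware(l *windowedlimiter.Limiter[userKey], next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical key derivation; anything identifying the caller works.
		key := userKey(r.Header.Get("X-Account-ID"))
		if !l.Allow(r.Context(), key) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
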
Current API

The initial API has been mindlessly copied from other implementations, but is probably not ideal from a humane Go point of view.

  • Generic keys allow arbitrary data to be stored in a retrievable way
    • The key must be both a fmt.Stringer and a comparable type, so that it can be used as a map key and sent to redis as a string
    • This is useful because you can implement whatever namespacing you want in your keys (e.g. namespace, account ID, user ID), and the key is available to the Key Config Function, so you can use any fields in a type-safe way while looking up the allowed rates (instead of parsing parts back out of an already-stringified key); see the sketch after this list
  • A single instance of the rate limiter is expected to be used, with separation determined by the Key type which is passed in
  • Non-blocking Allow(context.Context, Key) bool method for when you need to return an error to a client immediately if the rate limit is exceeded
    • This is for use-cases where an external client is authoritative on the cadence of attempts but you don't want blocked goroutines piling up, such as HTTP
  • Wait(context.Context, Key) method for when you just want to block until you're able to do your operation
    • This is for use-cases like work-running systems or queue subscribers where the max number of workers is bounded
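
A sketch tying the structured keys to the blocking use-cases (opKey, job, and the rates are hypothetical; imports as in the Usage sketch):

// opKey is comparable (all fields are) and implements fmt.Stringer, so it
// satisfies the Key constraint.
type opKey struct {
	Namespace string
	AccountID int64
}

func (k opKey) String() string { return fmt.Sprintf("%s:%d", k.Namespace, k.AccountID) }

// The typed fields are read directly while choosing a rate; nothing is
// parsed back out of the stringified key.
func opKeyConf(ctx context.Context, key opKey) *windowedlimiter.KeyConf {
	if key.Namespace == "batch" {
		return &windowedlimiter.KeyConf{Rate: 10, Interval: time.Second}
	}
	return &windowedlimiter.KeyConf{Rate: 100, Interval: time.Second}
}

type job struct{ Account int64 } // hypothetical unit of work

// drain is the persistent (looping) blocking client: Wait blocks until rate
// is available for the key.
func drain(ctx context.Context, l *windowedlimiter.Limiter[opKey], jobs []job) {
	for _, j := range jobs {
		l.Wait(ctx, opKey{Namespace: "batch", AccountID: j.Account})
		// ... do the rate-limited work ...
	}
}
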
Components
Mitigation Cache

internal/mitigation/ is a thread-safe in-memory cache of keys that have exceeded the rate limit. It exists to keep the rate limiter efficient while a rate limit is being exceeded. If a key isn't in the mitigation cache, it is allowed immediately with a sync.Map read. Increments are sent via channel to a single global incrementer goroutine, which batches them and pipelines them to Redis.

For each key in the mitigation cache, a single goroutine will be spawned that will check redis and distribute the allowed rate across all waiting goroutines. The mitigation cache entries get extended while there are active subscribers.

FIFO Queue

internal/fifo/ is a simple FIFO queue used by the mitigation cache to evenly distribute available rate to future Allow calls and waiting goroutines.

Rate Limiter
TODO
  • TTL for keyConfCache entries?
  • Exponential backoff of retries in Wait, if the mitigation cache can actually return real errors in the future
  • Investigate a channel-based synchronous API so clients can use select to wait for available rate in a more Go-like way

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewLogger

func NewLogger(t logger) *zap.Logger

Types

type Key

type Key interface {
	comparable
	fmt.Stringer
}

type KeyConf

type KeyConf struct {
	Rate     int64
	Interval time.Duration
}

KeyConf is a configuration for a key. It's set lazily per-key by calling Options.KeyConfFn

Rate is the number of allowed calls per interval.

Interval is the duration of the sliding window for the rate limit. Larger intervals will decrease the requests to redis.
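
For example, both configs below allow the same average throughput (a sketch; the exact Redis traffic pattern is an internal detail):

// Same average rate; the larger interval means fewer requests to Redis,
// at the price of a coarser sliding window.
perSecond := &windowedlimiter.KeyConf{Rate: 100, Interval: time.Second}
perMinute := &windowedlimiter.KeyConf{Rate: 6000, Interval: time.Minute}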

type KeyConfFn

type KeyConfFn[K Key] func(ctx context.Context, key K) *KeyConf

KeyConfFn is a function that returns a KeyConf for a given key. It will be called lazily and cached forever, unless explicitly evicted.

type Limiter

type Limiter[K Key] struct {
	// contains filtered or unexported fields
}

func New

func New[K Key](ctx context.Context, rdb redis.Cmdable, keyConfFn KeyConfFn[K], options ...Option[K]) *Limiter[K]

New creates a new Limiter with the provided Redis client and options

Note that rdb needs its timeouts configured carefully if you expect Redis outages not to have severe impacts on latency.

KeyConfFn returns a KeyConf for a given key. This will be called lazily, once per key, until `Limiter.Refresh()` or `Limiter.RefreshKey()` is called.
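
A sketch of that timeout note with go-redis (values are illustrative, not recommendations; userKey and keyConfFn as in the Usage sketch):

// Bound every Redis operation so an outage degrades the limiter quickly
// instead of stalling callers.
rdb := redis.NewClient(&redis.Options{
	Addr:         "localhost:6379",
	DialTimeout:  100 * time.Millisecond, // bounds the slow-fail case (DNS, connect)
	ReadTimeout:  50 * time.Millisecond,
	WriteTimeout: 50 * time.Millisecond,
})
l := windowedlimiter.New[userKey](ctx, rdb, keyConfFn)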

func (*Limiter[K]) Allow

func (l *Limiter[K]) Allow(ctx context.Context, key K) (allowed bool)

func (*Limiter[K]) Close

func (l *Limiter[K]) Close()

Close stops all goroutines, flushes logging, and waits for completion.

func (*Limiter[K]) Refresh

func (l *Limiter[K]) Refresh(ctx context.Context)

Refresh causes the KeyConfFn to be called again for all keys (lazily).

If your rate limit is changing globally, you should call this once the keyConfFn is returning the new result
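
For instance, with a hypothetical atomic that the KeyConfFn reads (imports sync/atomic; types as in the Usage sketch):

var globalRate atomic.Int64 // hypothetical source the keyConfFn reads

func applyNewGlobalRate(ctx context.Context, l *windowedlimiter.Limiter[userKey], newRate int64) {
	globalRate.Store(newRate) // the keyConfFn now returns the new result...
	l.Refresh(ctx)            // ...so every key's conf can be lazily re-fetched
}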

func (*Limiter[K]) RefreshKey

func (l *Limiter[K]) RefreshKey(ctx context.Context, key K)

RefreshKey causes the KeyConfFn to be called again for the specified key (lazily).

If your rate limit is changing for a specific key, you should call this once the keyConfFn is returning the new result

func (*Limiter[K]) SetKeyConfFn

func (l *Limiter[K]) SetKeyConfFn(ctx context.Context, keyConfFn KeyConfFn[K])

SetKeyConfFn sets the KeyConfFn on the limiter and clears all mitigations

func (*Limiter[K]) Wait

func (l *Limiter[K]) Wait(ctx context.Context, key K)

type Option

type Option[K Key] func(*Limiter[K])

func OptionWithBatchDuration

func OptionWithBatchDuration[K Key](t time.Duration) Option[K]

OptionWithBatchDuration sets the maximum time to wait before flushing all pending increments.

This is proportional to how far over the limit a key can go before being mitigated, and inversely proportional to how many writes are made to the remote cache.

It is important that this is set to a value smaller than the smallest KeyConf interval, or a mitigation will never be triggered. Ideally at least 4x smaller, if you want smooth rates.

The default is set with the assumption that per-second limits will be common.
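
For example, with a smallest KeyConf interval of one second, a 250ms batch duration satisfies the 4x guidance above (userKey, rdb, and keyConfFn as in the Usage sketch):

l := windowedlimiter.New[userKey](ctx, rdb, keyConfFn,
	windowedlimiter.OptionWithBatchDuration[userKey](250*time.Millisecond),
)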

func OptionWithLogger

func OptionWithLogger[K Key](l *zap.Logger) Option[K]

Directories

Path Synopsis
internal
fifo
Package fifo provides a linked-list based fifo queue optimized for pushing and shifting
mitigation
Package mitigation provides a way to fast-path allows for keys that are not actively hitting their rate limit.
