Documentation ¶
Index ¶
- Constants
- Variables
- type GateType
- type MemoryRateLimiterBackend
- type RateLimitBackend
- type RateLimitConfig
- type RateLimitedClient
- type RateLimitedClients
- type RateLimiter
- func (rl *RateLimiter) GetRate(ctx context.Context) bool
- func (rl *RateLimiter) GetRateForClient(ctx context.Context, clientID string) bool
- func (rl *RateLimiter) GetRateWithError(ctx context.Context) (bool, error)
- func (rl *RateLimiter) GetRateWithErrorForClient(ctx context.Context, clientID string) (bool, error)
- func (rl *RateLimiter) GetThroughput() (rate int, overTime time.Duration, burstCapacity int)
- func (rl *RateLimiter) SetThroughput(rate int, overTime time.Duration, burstCapacity int)
- func (rl *RateLimiter) WaitForRate(ctx context.Context) bool
- func (rl *RateLimiter) WaitForRateForClient(ctx context.Context, clientID string) bool
- func (rl *RateLimiter) WaitForRateWithError(ctx context.Context) (bool, error)
- func (rl *RateLimiter) WaitForRateWithErrorAndTimeout(ctx context.Context, timeout time.Duration) (bool, error)
- func (rl *RateLimiter) WaitForRateWithErrorAndTimeoutForClient(ctx context.Context, clientID string, timeout time.Duration) (bool, error)
- func (rl *RateLimiter) WaitForRateWithErrorForClient(ctx context.Context, clientID string) (bool, error)
- func (rl *RateLimiter) WaitForRateWithTimeout(ctx context.Context, timeout time.Duration) bool
- func (rl *RateLimiter) WaitForRateWithTimeoutForClient(ctx context.Context, clientID string, timeout time.Duration) bool
- type RedisClient
- type RedisRateLimiterBackend
- type ThroughputProvider
Constants ¶
const (
    UnlimitedRate = -1
    MaxUint       = ^uint(0)
    MaxInt        = int(MaxUint >> 1)
)
Variables ¶
var WithPrefixOption = func(prefix string) rateLimitOption {
    return func(cfg RateLimitConfig) RateLimitConfig {
        cfg.limiterPrefix = prefix
        return cfg
    }
}
WithPrefixOption - if provided, all keys used in the backend will be prefixed with the provided string, producing keys of the form `${prefix}-${clientID}`. This is useful if you're using multiple rate limiters with the same backend and the same client set but wish to rate-limit them independently. A sketch follows.
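A minimal sketch of how two limiters can share one backend while keeping their budgets separate via distinct prefixes. The import path github.com/your-org/ratelimit is a placeholder for this package's real module path, and *MemoryRateLimiterBackend is assumed to satisfy RateLimitBackend:

package main

import (
    "context"
    "fmt"
    "time"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

func main() {
    ctx := context.Background()

    // One shared backend: 10 requests per second, burst of 10.
    backend := ratelimit.NewMemoryRateLimiterBackend(10, time.Second, 10)

    // Two limiters that track the same client IDs independently thanks to distinct prefixes.
    readLimiter := ratelimit.NewRateLimiter(ratelimit.FailClosed, backend,
        ratelimit.WithPrefixOption("reads"))
    writeLimiter := ratelimit.NewRateLimiter(ratelimit.FailClosed, backend,
        ratelimit.WithPrefixOption("writes"))

    // Keys are expected to become "reads-client-42" and "writes-client-42",
    // so the two budgets do not interfere with each other.
    fmt.Println(readLimiter.GetRateForClient(ctx, "client-42"))
    fmt.Println(writeLimiter.GetRateForClient(ctx, "client-42"))
}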
var WithRateWaitDuration = func(duration time.Duration) rateLimitOption {
    return func(cfg RateLimitConfig) RateLimitConfig {
        cfg.rateWaitDuration = duration
        return cfg
    }
}
WithRateWaitDuration - if provided, the rate limiter will wait for the provided duration before checking the backend for more available rate. The default is 25ms; you may want to increase this when the backend is a remote service, so that polling for available rate does not generate excessive traffic.
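Assuming a backend that lives over the network, the poll interval might be raised as in this sketch; newRemoteBackedLimiter is a hypothetical helper and the import path is a placeholder:

package example

import (
    "time"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// newRemoteBackedLimiter raises the poll interval from the 25ms default to 250ms
// so that waiting callers check a remote backend less frequently.
func newRemoteBackedLimiter(backend ratelimit.RateLimitBackend) *ratelimit.RateLimiter {
    return ratelimit.NewRateLimiter(ratelimit.FailOpen, backend,
        ratelimit.WithRateWaitDuration(250*time.Millisecond))
}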
var WithThroughputProvider = func(throughputProvider ThroughputProvider, throughputCheckFrequency time.Duration) rateLimitOption {
    return func(cfg RateLimitConfig) RateLimitConfig {
        cfg.throughputProvider = throughputProvider
        cfg.throughputCheckFrequency = throughputCheckFrequency
        return cfg
    }
}
WithThroughputProvider - if provided, the rate limiter will consult this provider function and update the rate limiter backend to the throughput it returns. The throughputCheckFrequency parameter controls how often the rate limiter adjusts throughput. This is useful if you want your rate limiter to continually adjust allowed throughput based on some external variable.
Functions ¶
This section is empty.
Types ¶
type GateType ¶
type GateType bool
const (
    // RateWaitDuration is the default amount of time before the rate limiter will check the backend for more available rate.
    RateWaitDuration = time.Millisecond * 25

    // FailOpen will make errors from Redis result in automatic rate being allowed.
    FailOpen GateType = true
    // FailClosed will make errors from Redis result in automatic rate being denied.
    FailClosed GateType = false
)
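The gate type only matters when the backend can fail (for example, Redis being unreachable). A small sketch of the trade-off, with a placeholder import path and a hypothetical chooseGate helper:

package example

import (
    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// chooseGate picks FailOpen when availability matters more than strict limiting
// (backend errors let traffic through), and FailClosed when exceeding the limit
// is worse than rejecting requests (backend errors deny traffic).
func chooseGate(backend ratelimit.RateLimitBackend, preferAvailability bool) *ratelimit.RateLimiter {
    if preferAvailability {
        return ratelimit.NewRateLimiter(ratelimit.FailOpen, backend)
    }
    return ratelimit.NewRateLimiter(ratelimit.FailClosed, backend)
}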
type MemoryRateLimiterBackend ¶
type MemoryRateLimiterBackend struct {
// contains filtered or unexported fields
}
func NewMemoryRateLimiterBackend ¶
func NewMemoryRateLimiterBackend(rateLimit int, overTime time.Duration, burstCapacity int) *MemoryRateLimiterBackend
NewMemoryRateLimiterBackend creates a new rate limiter backend in memory using Go's built-in rate limiter. Each client gets its own rate limiter instance with individually maintained rates. Parameters:
rateLimit: the rate at which the client can consume requests. If set to -1 (UnlimitedRate), the rate limiter will allow all requests.
overTime: the duration over which the rate is calculated. For instance, to allow a client 1 request per second, set rateLimit=1, overTime=1s.
burstCapacity: the total burst capacity of the limiter. This must be >= `rateLimit`. If set to less than rateLimit, it will be raised to rateLimit (meaning the bucket cannot burst beyond the throughput defined by rateLimit). For instance, to allow a client 1 request per second with a maximum burst of 5 requests, set rateLimit=1, overTime=1s, burstCapacity=5.
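A minimal sketch of the rateLimit/overTime/burstCapacity relationship described above, using a placeholder import path:

package main

import (
    "fmt"
    "time"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

func main() {
    // 1 request per second, with the bucket able to burst up to 5 requests.
    backend := ratelimit.NewMemoryRateLimiterBackend(1, time.Second, 5)

    rate, overTime, burst := backend.GetThroughput()
    fmt.Printf("%d request(s) per %s, burst capacity %d\n", rate, overTime, burst)

    // Passing ratelimit.UnlimitedRate (-1) as the rate would allow all requests.
}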
func (*MemoryRateLimiterBackend) GetRate ¶
GetRate Returns T/F if there is rate available for execution, decrements if return is true
func (*MemoryRateLimiterBackend) GetThroughput ¶
func (rl *MemoryRateLimiterBackend) GetThroughput() (int, time.Duration, int)
GetThroughput returns the current configured rate, timeframe, and burst capacity
func (*MemoryRateLimiterBackend) SetThroughput ¶
func (rl *MemoryRateLimiterBackend) SetThroughput(rateLimit int, overTime time.Duration, burstCapacity int)
SetThroughput sets the current configured rate, timeframe, and burst capacity and updates all existing client rate limiters
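A sketch of adjusting throughput at runtime; halveThroughput is a hypothetical helper and the import path is a placeholder:

package example

import (
    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// halveThroughput reads the current settings and halves the allowed rate, for
// example while a downstream dependency is degraded. Per the docs above,
// SetThroughput also updates all existing client rate limiters.
func halveThroughput(backend *ratelimit.MemoryRateLimiterBackend) {
    rate, overTime, burst := backend.GetThroughput()
    if rate > 1 {
        backend.SetThroughput(rate/2, overTime, burst)
    }
}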
type RateLimitBackend ¶
type RateLimitConfig ¶
type RateLimitConfig struct {
// contains filtered or unexported fields
}
func NewRateLimitConfig ¶
func NewRateLimitConfig() RateLimitConfig
type RateLimitedClient ¶
type RateLimitedClient struct {
// contains filtered or unexported fields
}
type RateLimitedClients ¶
type RateLimitedClients map[string]*RateLimitedClient
type RateLimiter ¶
type RateLimiter struct {
// contains filtered or unexported fields
}
func NewRateLimiter ¶
func NewRateLimiter(gateType GateType, backend RateLimitBackend, opts ...rateLimitOption) *RateLimiter
NewRateLimiter creates a new rate limiter that draws rate from the provided backend, using the given GateType to decide whether backend errors allow or deny rate.
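A sketch of a typical non-blocking setup: construct a limiter over a backend and reject surplus requests with 429. The import path is a placeholder, and the memory backend (assumed to satisfy RateLimitBackend) is used only to keep the example self-contained:

package main

import (
    "net/http"
    "time"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

func main() {
    // 100 requests per second, bursting to 200.
    backend := ratelimit.NewMemoryRateLimiterBackend(100, time.Second, 200)
    limiter := ratelimit.NewRateLimiter(ratelimit.FailClosed, backend)

    http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
        // Non-blocking check: reject immediately when no rate is left.
        if !limiter.GetRate(r.Context()) {
            http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        w.Write([]byte("ok"))
    })

    http.ListenAndServe(":8080", nil)
}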
func (*RateLimiter) GetRate ¶
func (rl *RateLimiter) GetRate(ctx context.Context) bool
GetRate Returns T/F if there is rate available for execution, decrements if return is true. If an error occurs, the rate will be made available, or rejected, depending on GateType configuration.
func (*RateLimiter) GetRateForClient ¶
func (rl *RateLimiter) GetRateForClient(ctx context.Context, clientID string) bool
GetRateForClient Returns T/F if there is rate available for execution for the specific clientID, decrements if return is true
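A sketch of per-client limiting, keying on an API key; allowRequest is a hypothetical helper and the import path is a placeholder:

package example

import (
    "context"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// allowRequest treats each API key as its own client, so one noisy caller
// cannot exhaust another caller's budget.
func allowRequest(ctx context.Context, rl *ratelimit.RateLimiter, apiKey string) bool {
    return rl.GetRateForClient(ctx, apiKey)
}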
func (*RateLimiter) GetRateWithError ¶
func (rl *RateLimiter) GetRateWithError(ctx context.Context) (bool, error)
GetRateWithError Returns T/F if there is rate available for execution, or error if rate cannot be determined
func (*RateLimiter) GetRateWithErrorForClient ¶
func (rl *RateLimiter) GetRateWithErrorForClient(ctx context.Context, clientID string) (bool, error)
GetRateWithErrorForClient Returns T/F if there is rate available for execution for the specified clientID, or an error if rate cannot be determined
func (*RateLimiter) GetThroughput ¶
func (rl *RateLimiter) GetThroughput() (rate int, overTime time.Duration, burstCapacity int)
func (*RateLimiter) SetThroughput ¶
func (rl *RateLimiter) SetThroughput(rate int, overTime time.Duration, burstCapacity int)
func (*RateLimiter) WaitForRate ¶
func (rl *RateLimiter) WaitForRate(ctx context.Context) bool
WaitForRate blocks until rate is available. When rate is available, true is returned.
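A sketch of the blocking style in a worker loop; drainQueue is a hypothetical helper and the import path is a placeholder:

package example

import (
    "context"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// drainQueue blocks on WaitForRate before each job, so jobs run no faster than
// the configured throughput.
func drainQueue(ctx context.Context, rl *ratelimit.RateLimiter, jobs <-chan func()) {
    for job := range jobs {
        if !rl.WaitForRate(ctx) {
            return // no rate obtained (for example, the context was cancelled)
        }
        job()
    }
}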
func (*RateLimiter) WaitForRateForClient ¶
func (rl *RateLimiter) WaitForRateForClient(ctx context.Context, clientID string) bool
WaitForRateForClient blocks until rate is available for the specified clientID. When rate is available, true is returned.
func (*RateLimiter) WaitForRateWithError ¶
func (rl *RateLimiter) WaitForRateWithError(ctx context.Context) (bool, error)
WaitForRateWithError blocks until rate is available or an error occurs. When rate is available, true is returned. If an error occurs, false is returned.
func (*RateLimiter) WaitForRateWithErrorAndTimeout ¶
func (rl *RateLimiter) WaitForRateWithErrorAndTimeout(ctx context.Context, timeout time.Duration) (bool, error)
WaitForRateWithErrorAndTimeout returns true if rate is available, false if the timeout has expired with no rate available, or an error if one is encountered while waiting
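One plausible way to use the two return values, assuming false with a nil error means the timeout expired; acquireWithDeadline is a hypothetical helper and the import path is a placeholder:

package example

import (
    "context"
    "time"

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// acquireWithDeadline distinguishes "no rate before the deadline" from a
// backend failure.
func acquireWithDeadline(ctx context.Context, rl *ratelimit.RateLimiter) (bool, error) {
    ok, err := rl.WaitForRateWithErrorAndTimeout(ctx, 2*time.Second)
    if err != nil {
        return false, err // backend error, e.g. Redis unreachable
    }
    return ok, nil // false here means the timeout expired with no rate available
}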
func (*RateLimiter) WaitForRateWithErrorAndTimeoutForClient ¶
func (rl *RateLimiter) WaitForRateWithErrorAndTimeoutForClient(ctx context.Context, clientID string, timeout time.Duration) (bool, error)
WaitForRateWithErrorAndTimeoutForClient returns true if rate is available for the specified clientID, false if the timeout has expired with no rate available, or an error if one is encountered while waiting
func (*RateLimiter) WaitForRateWithErrorForClient ¶
func (rl *RateLimiter) WaitForRateWithErrorForClient(ctx context.Context, clientID string) (bool, error)
WaitForRateWithErrorForClient blocks until rate is available for the specified clientID or an error occurs. When rate is available, true is returned. If an error occurs, false is returned.
func (*RateLimiter) WaitForRateWithTimeout ¶
func (rl *RateLimiter) WaitForRateWithTimeout(ctx context.Context, timeout time.Duration) bool
WaitForRateWithTimeout returns true if rate is available and false if the timeout has expired with no rate available
func (*RateLimiter) WaitForRateWithTimeoutForClient ¶
func (rl *RateLimiter) WaitForRateWithTimeoutForClient(ctx context.Context, clientID string, timeout time.Duration) bool
WaitForRateWithTimeoutForClient returns true if rate is available for the specified clientID and false if the timeout has expired with no rate available
type RedisClient ¶
type RedisClient interface {
Eval(ctx context.Context, script string, keys []string, args ...interface{}) *redis.Cmd
EvalSha(ctx context.Context, sha1 string, keys []string, args ...interface{}) *redis.Cmd
ScriptExists(ctx context.Context, hashes ...string) *redis.BoolSliceCmd
ScriptLoad(ctx context.Context, script string) *redis.StringCmd
Del(ctx context.Context, keys ...string) *redis.IntCmd
EvalRO(ctx context.Context, script string, keys []string, args ...interface{}) *redis.Cmd
EvalShaRO(ctx context.Context, sha1 string, keys []string, args ...interface{}) *redis.Cmd
}
RedisClient is implemented by *redis.Client
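A compile-time check of that statement, assuming the go-redis v9 client (both import paths below are assumptions/placeholders):

package example

import (
    "github.com/redis/go-redis/v9" // assumed Redis client library

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

// The assignment fails to compile if *redis.Client stops satisfying RedisClient.
var _ ratelimit.RedisClient = (*redis.Client)(nil)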
type RedisRateLimiterBackend ¶
type RedisRateLimiterBackend struct {
// contains filtered or unexported fields
}
func NewRedisRateLimiterBackend ¶
func NewRedisRateLimiterBackend(rate int, overTime time.Duration, burstCapacity int, client RedisClient) *RedisRateLimiterBackend
NewRedisRateLimiterBackend creates a new rate limiter with a redis backend
rate: the rate at which the client can consume requests. If set to -1 (UnlimitedRate), the rate limiter will allow all requests.
overTime: the duration over which the rate is calculated. For instance, to allow a client 1 request per second, set rate=1, overTime=1s.
burstCapacity: the total burst capacity of the limiter. This must be >= `rate`. If set to less than rate, it will be raised to rate (meaning the bucket cannot burst beyond the throughput defined by rate). For instance, to allow a client 1 request per second with a maximum burst of 5 requests, set rate=1, overTime=1s, burstCapacity=5.
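A sketch of wiring the Redis backend into a limiter, assuming the go-redis v9 client, a placeholder import path for this package, and that *RedisRateLimiterBackend satisfies RateLimitBackend:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9" // assumed Redis client library

    ratelimit "github.com/your-org/ratelimit" // placeholder import path
)

func main() {
    ctx := context.Background()

    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // 5 requests per second with a burst capacity of 10, shared by every process
    // that points at the same Redis instance.
    backend := ratelimit.NewRedisRateLimiterBackend(5, time.Second, 10, client)

    // FailOpen: if Redis errors, requests are allowed rather than rejected.
    limiter := ratelimit.NewRateLimiter(ratelimit.FailOpen, backend)

    fmt.Println(limiter.GetRate(ctx))
}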
func (*RedisRateLimiterBackend) GetRate ¶
GetRate Returns T/F if there is rate available for execution, decrements if return is true
func (*RedisRateLimiterBackend) GetThroughput ¶
func (rrl *RedisRateLimiterBackend) GetThroughput() (int, time.Duration, int)
GetThroughput returns the current configured rate, timeframe, and burst capacity
func (*RedisRateLimiterBackend) SetThroughput ¶
func (rrl *RedisRateLimiterBackend) SetThroughput(rate int, overTime time.Duration, burstCapacity int)
SetThroughput sets the current configured rate, timeframe, and burst capacity