ratelimit

package
v1.0.2
Published: Jan 15, 2024 License: MIT Imports: 9 Imported by: 0

README

Rate Limit

Rate limiting is enabled by default in a default equinox client. For now, the only store available is the internal one, though I want to add Redis support in the future, possibly using a Lua script.

Info on rate limiting:

You can create an internal RateLimit with NewInternalRateLimit(). RateLimit includes the following fields:

type RateLimit struct {
	Route   map[string]*Limits
	Enabled bool
	// Factor to be applied to the limit. E.g. if set to 0.5, the limit will be reduced by 50%.
	LimitUsageFactor float64
	// Delay in milliseconds to be added to reset intervals.
	IntervalOverhead time.Duration
	mutex            sync.Mutex
}

func NewInternalRateLimit(limitUsageFactor float64, intervalOverhead time.Duration) *RateLimit {
	if limitUsageFactor < 0.0 || limitUsageFactor > 1.0 {
		limitUsageFactor = 0.99
	}
	if intervalOverhead < 0 {
		intervalOverhead = time.Second
	}
	return &RateLimit{
		Route:            make(map[string]*Limits, 1),
		LimitUsageFactor: limitUsageFactor,
		IntervalOverhead: intervalOverhead,
		Enabled:          true,
	}
}
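
As a rough usage sketch (the import path below is an assumption, adjust it to wherever this package lives in your module):

package main

import (
	"fmt"
	"time"

	// Assumed import path for this sketch.
	"github.com/Kyagara/equinox/ratelimit"
)

func main() {
	// Use 99% of every limit reported by the API and add 1s of overhead
	// to each reset interval.
	rl := ratelimit.NewInternalRateLimit(0.99, time.Second)
	fmt.Println(rl.Enabled) // true
}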
Bucket

The Limit for the App/Method is made up of Buckets:

type Bucket struct {
	// Next reset
	next time.Time
	// Current number of tokens used, starts at the count given in the rate limit count headers
	tokens int
	// The limit given in the header without any modification
	baseLimit int
	// Maximum number of tokens
	limit int
	// Time interval in seconds
	interval         time.Duration
	intervalOverhead time.Duration
	mutex            sync.Mutex
}

When creating a bucket, interval is the time between resets (given in seconds by the API), limit is the maximum number of tokens after applying the LimitUsageFactor from the RateLimit, and tokens is the number of tokens already used.

func NewBucket(interval time.Duration, intervalOverhead time.Duration, baseLimit int, limit int, tokens int) *Bucket {
	return &Bucket{
		interval:         interval,
		intervalOverhead: intervalOverhead,
		baseLimit:        baseLimit,
		limit:            limit,
		tokens:           tokens,
		next:             time.Now().Add(interval + intervalOverhead),
		mutex:            sync.Mutex{},
	}
}
  • When a bucket is full, the number of tokens equals the limit: the bucket is rate limited and it is not possible to Reserve() a request from it.
  • When a bucket is empty, the number of tokens is 0: the bucket is not rate limited and it is possible to Reserve() a request (see the sketch below).
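
A minimal sketch of that full/empty check, assuming the token semantics above (illustrative only, not the package's actual code):

// Illustrative only: tokens counts the requests already used in the current
// window, so the bucket is "full" (rate limited) once tokens reaches limit
// and the reset time has not yet passed.
func isRateLimited(tokens, limit int, next, now time.Time) bool {
	if now.After(next) {
		// The window has passed, the bucket will be reset on the next check.
		return false
	}
	return tokens >= limit
}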

When a bucket is initialized, its current number of tokens is set to the count provided in the X-App-Rate-Limit-Count/X-Method-Rate-Limit-Count headers.

func (r *RateLimit) parseHeaders(limitHeader string, countHeader string, limitType string) *Limit {
	if limitHeader == "" || countHeader == "" {
		return NewLimit(limitType)
	}

	limits := strings.Split(limitHeader, ",")
	counts := strings.Split(countHeader, ",")

	if len(limits) == 0 {
		return NewLimit(limitType)
	}

	limit := &Limit{
		buckets:    make([]*Bucket, len(limits)),
		limitType:  limitType,
		retryAfter: 0,
		mutex:      sync.Mutex{},
	}

	for i, limitString := range limits {
		baseLimit, interval := getNumbersFromPair(limitString)
		count, _ := getNumbersFromPair(counts[i])
		newLimit := int(math.Max(1, float64(baseLimit)*r.LimitUsageFactor))
		limit.buckets[i] = NewBucket(interval, r.IntervalOverhead, baseLimit, newLimit, count)
	}

	return limit
}
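
For example, the Riot headers pair each limit with its reset interval in seconds. A sketch of what parseHeaders would build from them (the header values here are illustrative):

func parseExample(r *RateLimit) {
	// X-App-Rate-Limit: "20:1,100:120" -> 20 requests per 1s and 100 per 120s.
	// X-App-Rate-Limit-Count: "1:1,5:120" -> 1 and 5 requests already used.
	limit := r.parseHeaders("20:1,100:120", "1:1,5:120", APP_RATE_LIMIT_TYPE)
	// With LimitUsageFactor = 0.99 this creates two buckets with limits of
	// roughly 19 and 99 tokens and with 1 and 5 tokens already in use.
	_ = limit
}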
Reserve

After building a request and checking whether its response was cached, the client calls Reserve(), initializing the App and Method buckets for the route and the MethodID if they are not already initialized.

If rate limited, Reserve() will block until the next bucket reset. A context can be passed so the request can be cancelled; before waiting, a check is done to see if the wait would exceed the deadline set in the context, and an error is returned if it would.

Reserve() will reserve one request for the App and Method buckets in a route.
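
A rough sketch of the call from inside the client; the route and method ID values below are just placeholders:

func reserveExample(ctx context.Context, rl *RateLimit, logger zerolog.Logger) error {
	// "americas" and "match-v5.getMatch" are placeholder route/method values.
	if err := rl.Reserve(ctx, logger, "americas", "match-v5.getMatch"); err != nil {
		// Waiting for the next bucket reset would exceed the context deadline.
		return err
	}
	// ...send the request...
	return nil
}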

Update

After receiving a response, Update() verifies that the current buckets in memory match the ones received from the Riot API; if they don't, it forces an update of all buckets.

By 'matching', I mean that the baseLimit and interval of the buckets already in memory match the ones received from the Riot API.
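
Continuing the sketch from the Reserve section, after a response arrives its headers are passed straight to Update:

func updateExample(rl *RateLimit, logger zerolog.Logger, resp *http.Response) {
	// Same placeholder route/method values as in the Reserve sketch.
	rl.Update(logger, "americas", "match-v5.getMatch", resp.Header)
}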

Documentation

Index

Constants

const (
	RATE_LIMIT_TYPE_HEADER = "X-Rate-Limit-Type"
	RETRY_AFTER_HEADER     = "Retry-After"

	APP_RATE_LIMIT_HEADER          = "X-App-Rate-Limit"
	APP_RATE_LIMIT_COUNT_HEADER    = "X-App-Rate-Limit-Count"
	METHOD_RATE_LIMIT_HEADER       = "X-Method-Rate-Limit"
	METHOD_RATE_LIMIT_COUNT_HEADER = "X-Method-Rate-Limit-Count"

	APP_RATE_LIMIT_TYPE     = "application"
	METHOD_RATE_LIMIT_TYPE  = "method"
	SERVICE_RATE_LIMIT_TYPE = "service"

	DEFAULT_RETRY_AFTER = 2 * time.Second
)

Variables

var (
	ErrContextDeadlineExceeded = errors.New("waiting would exceed context deadline")
)

Functions

func WaitN

func WaitN(ctx context.Context, estimated time.Time, duration time.Duration) error

WaitN waits for the given duration after checking if the context deadline will be exceeded.
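
A small sketch of that behaviour (the 2s reset estimate below is just an example):

func waitExample(ctx context.Context) error {
	// Wait 2s for a reset, unless ctx would expire before the estimated reset
	// time, in which case ErrContextDeadlineExceeded is returned instead of sleeping.
	estimated := time.Now().Add(2 * time.Second)
	return WaitN(ctx, estimated, 2*time.Second)
}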

Types

type Bucket

type Bucket struct {
	// contains filtered or unexported fields
}

func NewBucket

func NewBucket(interval time.Duration, intervalOverhead time.Duration, baseLimit int, limit int, tokens int) *Bucket

func (*Bucket) MarshalZerologObject

func (b *Bucket) MarshalZerologObject(encoder *zerolog.Event)

type Limit

type Limit struct {
	// contains filtered or unexported fields
}

Limit represents a collection of buckets and the type of limit (application or method).

func NewLimit

func NewLimit(limitType string) *Limit

type Limits

type Limits struct {
	App     *Limit
	Methods map[string]*Limit
}

Limits in a route.

func NewLimits

func NewLimits() *Limits

type RateLimit

type RateLimit struct {
	Route   map[string]*Limits
	Enabled bool
	// Factor to be applied to the limit. E.g. if set to 0.5, the limit will be reduced by 50%.
	LimitUsageFactor float64
	// Delay in milliseconds to be added to reset intervals.
	IntervalOverhead time.Duration
	// contains filtered or unexported fields
}

func NewInternalRateLimit added in v0.19.3

func NewInternalRateLimit(limitUsageFactor float64, intervalOverhead time.Duration) *RateLimit

func (*RateLimit) CheckRetryAfter

func (r *RateLimit) CheckRetryAfter(route string, methodID string, headers http.Header) time.Duration

CheckRetryAfter returns the duration to wait before retrying, taken from the Retry-After header, or DEFAULT_RETRY_AFTER if the header is not found.
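
For example (the route, method ID, and header value here are illustrative):

func retryExample(rl *RateLimit, headers http.Header) {
	// After a 429 response, check how long to wait before retrying.
	// With Retry-After: 3 this should be 3 seconds, or DEFAULT_RETRY_AFTER
	// if the header is missing.
	wait := rl.CheckRetryAfter("americas", "match-v5.getMatch", headers)
	time.Sleep(wait)
}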

func (*RateLimit) MarshalZerologObject added in v1.0.0

func (r *RateLimit) MarshalZerologObject(encoder *zerolog.Event)

func (*RateLimit) Reserve added in v1.0.2

func (r *RateLimit) Reserve(ctx context.Context, logger zerolog.Logger, route string, methodID string) error

Reserves one request for the App and Method buckets in a route.

If rate limited, will block until the next bucket reset.

func (*RateLimit) Update

func (r *RateLimit) Update(logger zerolog.Logger, route string, methodID string, headers http.Header)

Update creates new buckets in a route with the limits provided in the response headers.
