Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Backoff ¶
type Backoff struct {
// contains filtered or unexported fields
}
Backoff tracks per-key exponential backoff state. It is safe for concurrent use.
func NewBackoff ¶
func NewBackoff(config BackoffConfig) *Backoff
NewBackoff creates a new Backoff tracker with the given configuration.
func (*Backoff) RecordFailure ¶
RecordFailure records a failure for the given key, setting or increasing the backoff duration with jitter.
func (*Backoff) RecordSuccess ¶
RecordSuccess clears the backoff state for the given key.
func (*Backoff) ShouldRetry ¶
ShouldRetry returns true if the key has no recorded failure or if the backoff deadline has passed.
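The three methods above form a simple contract: a fresh key may always retry, a failure pushes the key's retry deadline out, and a success clears the state. The following is a minimal self-contained sketch of that per-key state machine; the `miniBackoff` type, its field names, and the hard-coded 10s/1.5x/5m parameters are illustrative assumptions, not the package's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// miniBackoff sketches the per-key state Backoff tracks (hypothetical
// implementation; the real one differs in detail). A mutex makes it safe
// for concurrent use, matching the documented guarantee.
type miniBackoff struct {
	mu        sync.Mutex
	delays    map[string]time.Duration // current backoff per key
	deadlines map[string]time.Time     // earliest time a retry is allowed
}

func newMiniBackoff() *miniBackoff {
	return &miniBackoff{
		delays:    map[string]time.Duration{},
		deadlines: map[string]time.Time{},
	}
}

// RecordFailure sets or grows the key's backoff and moves its deadline out.
func (b *miniBackoff) RecordFailure(key string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	d := b.delays[key]
	if d == 0 {
		d = 10 * time.Second // Initial (assumed)
	} else {
		d = time.Duration(float64(d) * 1.5) // Multiplier (assumed)
		if d > 5*time.Minute {              // Max (assumed)
			d = 5 * time.Minute
		}
	}
	b.delays[key] = d
	b.deadlines[key] = time.Now().Add(d)
}

// RecordSuccess clears all backoff state for the key.
func (b *miniBackoff) RecordSuccess(key string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.delays, key)
	delete(b.deadlines, key)
}

// ShouldRetry reports whether the key has no recorded failure or its
// backoff deadline has passed.
func (b *miniBackoff) ShouldRetry(key string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	dl, ok := b.deadlines[key]
	return !ok || time.Now().After(dl)
}

func main() {
	b := newMiniBackoff()
	b.RecordFailure("runner-1")
	fmt.Println(b.ShouldRetry("runner-1")) // false: deadline not yet passed
	b.RecordSuccess("runner-1")
	fmt.Println(b.ShouldRetry("runner-1")) // true: state cleared
}
```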
type BackoffConfig ¶
type BackoffConfig struct {
// Initial is the initial backoff duration after the first failure.
Initial time.Duration
// Multiplier is the geometric progression factor applied on each failure.
Multiplier float64
// Max is the maximum backoff duration.
Max time.Duration
// JitterFrac adds randomization to prevent thundering herd. A value of 0.2
// means the actual delay is within ±20% of the computed backoff.
JitterFrac float64
}
BackoffConfig configures the exponential backoff parameters.
func DefaultBackoffConfig ¶
func DefaultBackoffConfig() BackoffConfig
DefaultBackoffConfig returns a BackoffConfig with sensible defaults: 10s initial, 1.5x multiplier, 5m max, 20% jitter.
type Controller ¶
type ToolsGetter ¶
type ToolsGetter interface {
GetTools() ([]commonParams.RunnerApplicationDownload, error)
}
type UnboundedChan ¶
type UnboundedChan[T any] struct {
	// contains filtered or unexported fields
}
UnboundedChan provides an unbounded FIFO channel backed by an internal slice-based queue. A router goroutine absorbs sends into the queue immediately and feeds them to the output one at a time. This ensures the producer is never blocked (beyond a brief channel send) while the consumer processes events sequentially. In theory the queue can grow without bound, but in practice it should not: the consumer continually drains it, so regardless of the outcome of each item the queue shrinks. It can only grow indefinitely if the consumer is deadlocked for some reason.
Use In() to send events and Out() to receive them.
func NewUnboundedChan ¶
func NewUnboundedChan[T any](ctx context.Context, quit <-chan struct{}) *UnboundedChan[T]
NewUnboundedChan creates and starts an UnboundedChan. The router goroutine runs until ctx is cancelled or quit is closed.
func (*UnboundedChan[T]) In ¶
func (u *UnboundedChan[T]) In() chan<- T
In returns the send side of the channel.
func (*UnboundedChan[T]) Out ¶
func (u *UnboundedChan[T]) Out() <-chan T
Out returns the receive side of the channel.
func (*UnboundedChan[T]) Process ¶
func (u *UnboundedChan[T]) Process(handler func(T))
Process reads from the output channel and calls handler for each item sequentially. It blocks until the ctx passed to NewUnboundedChan is cancelled or quit is closed.