Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func AddJobIndexes ¶
AddJobIndexes will add job indexes to the specified indexer. If removeAfter is specified, completed and cancelled jobs are automatically removed when their finished timestamp falls behind the specified duration.
Note: It is recommended to create custom indexes that match the exact nature of the data and its access patterns.
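A minimal usage sketch, assuming indexer is a *coal.Indexer obtained elsewhere and that one week of retention is wanted:

	axe.AddJobIndexes(indexer, 7*24*time.Hour) // drop completed and cancelled jobs one week after they finish
	// a zero duration presumably disables the automatic removal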
Types ¶
type Context ¶ added in v0.20.0
type Context struct {
// Model is the model carried by the job.
Model Model
// Result can be set with a custom result.
Result bson.M
// Task is the task that processes this job.
//
// Usage: Read Only
Task *Task
// Queue is the queue this job was dequeued from.
//
// Usage: Read Only
Queue *Queue
// Pool is the pool that manages the queue and task.
//
// Usage: Read Only
Pool *Pool
// Store is the store used by the queue.
//
// Usage: Read Only
Store *coal.Store
// The tracer used to trace code execution.
//
// Usage: Read Only
Tracer *fire.Tracer
}
Context holds contextual data about the job being processed.
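A handler receives such a context during processing; a minimal sketch of a handler body that only touches the documented fields:

	func handle(ctx *axe.Context) error {
		// ctx.Model carries the data supplied when the job was enqueued, while
		// ctx.Task, ctx.Queue, ctx.Pool, ctx.Store and ctx.Tracer are read only.
		// Attach a custom result to the job before completing it.
		ctx.Result = bson.M{"processed": true}
		return nil
	}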
type Job ¶
type Job struct {
coal.Base `json:"-" bson:",inline" coal:"jobs"`
// The name of the job.
Name string `json:"name" bson:"name"`
// The data that has been supplied on creation.
Data bson.Raw `json:"data" bson:"data"`
// The current status of the job.
Status Status `json:"status" bson:"status"`
// The time when the job was created.
Created time.Time `json:"created-at" bson:"created_at"`
// The time when the job is available for execution.
Available time.Time `json:"available-at" bson:"available_at"`
// The time when the job was last dequeued.
Started *time.Time `json:"started-at" bson:"started_at"`
// The time when the last attempt ended (completed, failed or cancelled).
Ended *time.Time `json:"ended-at" bson:"ended_at"`
// The time when the job was finished (completed or cancelled).
Finished *time.Time `json:"finished-at" bson:"finished_at"`
// Attempts is incremented with each execution attempt.
Attempts int `json:"attempts" bson:"attempts"`
// The result submitted during completion.
Result bson.M `json:"result" bson:"result"`
// The last message submitted when the job failed or was cancelled.
Reason string `json:"reason" bson:"reason"`
}
Job is a single job managed by a queue.
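A small sketch that inspects a loaded job using the documented fields:

	if job.Status == axe.StatusFailed && job.Ended != nil {
		log.Printf("job %q failed after %d attempt(s): %s", job.Name, job.Attempts, job.Reason)
	}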
type Pool ¶
type Pool struct {
// Reporter is invoked by the pool with critical errors.
Reporter func(error)
// contains filtered or unexported fields
}
Pool manages tasks and queues.
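A minimal sketch that wires a reporter into an existing pool value (pool construction is not covered in this section):

	pool.Reporter = func(err error) {
		log.Printf("axe: critical error: %s", err)
	}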
type Queue ¶
type Queue struct {
// MaxLag defines the maximum amount of lag that should be applied to every
// dequeue attempt.
//
// By default multiple processes compete with each other when getting jobs
// from the same queue. An artificial lag prevents multiple simultaneous
// dequeue attempts and allows the worker with the smallest lag to dequeue
// the job and inform the other processes to prevent another dequeue attempt.
//
// Default: 100ms.
MaxLag time.Duration
// DequeueInterval defines the time after which a worker might try to dequeue
// a job again if it has not yet been associated.
//
// Default: 2s.
DequeueInterval time.Duration
// contains filtered or unexported fields
}
Queue manages the queueing of jobs.
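Both documented fields can be tuned directly; a sketch, assuming queue is an *axe.Queue obtained elsewhere:

	queue.MaxLag = 50 * time.Millisecond    // narrow the window in which competing processes race for a job
	queue.DequeueInterval = 5 * time.Second // wait longer before retrying a job that was not associated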
type Status ¶
type Status string
Status defines the allowed statuses of a job.
const (
// StatusEnqueued is used as the initial status when jobs are created.
StatusEnqueued Status = "enqueued"
// StatusDequeued is set when a job has been successfully dequeued.
StatusDequeued Status = "dequeued"
// StatusCompleted is set when a job has been successfully executed.
StatusCompleted Status = "completed"
// StatusFailed is set when an execution of a job failed.
StatusFailed Status = "failed"
// StatusCancelled is set when a job has been cancelled.
StatusCancelled Status = "cancelled"
)
The available job statuses.
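Because Job stores its status under the status BSON field (see the Job type above), the constants can be used directly in queries; a sketch using the MongoDB driver's bson.M:

	filter := bson.M{
		"status":      axe.StatusCompleted,
		"finished_at": bson.M{"$lt": time.Now().Add(-24 * time.Hour)},
	}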
type Task ¶
type Task struct {
// Name is the unique name of the task.
Name string
// Model is the model that holds task related data.
Model Model
// Queue is the queue that is used to manage the jobs.
Queue *Queue
// Handler is the callback called with jobs for processing. The handler
// should return errors formatted with E to properly indicate the status of
// the job. If a task execution is successful the handler may return some
// data that is attached to the job.
Handler func(*Context) error
// Workers defines the number of spawned workers that dequeue and execute
// jobs in parallel.
//
// Default: 2.
Workers int
// MaxAttempts defines the maximum attempts to complete a task. Zero means
// that the job is retried forever. The error's retry field takes precedence
// over this setting and allows retries beyond the configured maximum.
//
// Default: 0
MaxAttempts int
// Interval defines the rate at which the worker will request a job from the
// queue.
//
// Default: 100ms.
Interval time.Duration
// MinDelay is the minimal delay after which a failed task is retried.
//
// Default: 1s.
MinDelay time.Duration
// MaxDelay is the maximal delay after which a failed task is retried.
//
// Default: 10m.
MaxDelay time.Duration
// DelayFactor defines the exponential increase of the delay after individual
// attempts.
//
// Default: 2.
DelayFactor float64
// Timeout is the time after which a job can be dequeued again in case the
// worker was not able to set its status.
//
// Default: 10m.
Timeout time.Duration
}
Task describes work that is managed using a job queue.
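Putting the documented fields together, a task might be configured as follows. This is a sketch only: the name and handler body are placeholders, queue is assumed to be a previously constructed *axe.Queue, and the Model field is omitted.

	task := &axe.Task{
		Name:  "send-email", // hypothetical task name
		Queue: queue,
		Handler: func(ctx *axe.Context) error {
			// return an error formatted with E to fail or cancel the job,
			// or set a result and return nil to complete it
			ctx.Result = bson.M{"ok": true}
			return nil
		},
		Workers:     4,                      // dequeue and execute up to four jobs in parallel
		MaxAttempts: 5,                      // give up after five attempts
		Interval:    100 * time.Millisecond, // poll the queue ten times per second
		MinDelay:    time.Second,            // first retry after one second
		MaxDelay:    10 * time.Minute,       // never wait longer than ten minutes between retries
		DelayFactor: 2,                      // increase the delay after each attempt
		Timeout:     10 * time.Minute,       // allow re-dequeue if a worker disappears mid-execution
	}

With these settings the retry delay presumably grows from MinDelay by DelayFactor per attempt until it is capped at MaxDelay.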