Documentation
Index
- Variables
- func NewJSONDataStore(path string) (*jsonDataStore, error)
- type PipelineInfo
- type PipelineJob
- type PipelineRunner
- func NewPipelineRunner(ctx context.Context, defs *definition.PipelinesDef, createTaskRunner func() taskctl.Runner, store dataStore) (*PipelineRunner, error)
- func (r *PipelineRunner) CancelJob(id uuid.UUID) error
- func (r *PipelineRunner) HandleStageChange(stage *scheduler.Stage)
- func (r *PipelineRunner) HandleTaskChange(t *task.Task)
- func (r *PipelineRunner) IterateJobs(process func(j *PipelineJob))
- func (r *PipelineRunner) JobCompleted(id uuid.UUID, err error)
- func (r *PipelineRunner) ListPipelines() []PipelineInfo
- func (r *PipelineRunner) ReadJob(id uuid.UUID, process func(j *PipelineJob)) error
- func (r *PipelineRunner) SaveToStore()
- func (r *PipelineRunner) ScheduleAsync(pipeline string, opts ScheduleOpts) (*PipelineJob, error)
- type ScheduleOpts
Constants
This section is empty.
Variables
var ErrJobNotFound = errors.New("job not found")
Functions
func NewJSONDataStore
func NewJSONDataStore(path string) (*jsonDataStore, error)
Types
type PipelineInfo
type PipelineJob
type PipelineJob struct {
	ID        uuid.UUID
	Pipeline  string
	Variables map[string]interface{}
	Completed bool
	Canceled  bool
	// Created is the schedule / queue time of the job
	Created time.Time
	// Start is the actual start time of the job
	Start *time.Time
	// End is the actual end time of the job (can be nil if incomplete)
	End  *time.Time
	User string
	// Tasks is an in-memory representation of the tasks and their state, sorted by dependencies
	Tasks     jobTasks
	LastError error
	// contains filtered or unexported fields
}
PipelineJob is a single execution context: one run of one pipeline. A job is either scheduled (held in the waitListByPipeline of PipelineRunner) or currently running (tracked in jobsByID / jobsByPipeline of PipelineRunner).
type PipelineRunner
type PipelineRunner struct {
	// contains filtered or unexported fields
}
PipelineRunner is the main data structure and is effectively a runtime-state "singleton".
All exported methods are synchronized via the mx mutex and are safe for concurrent use.
func NewPipelineRunner
func NewPipelineRunner(ctx context.Context, defs *definition.PipelinesDef, createTaskRunner func() taskctl.Runner, store dataStore) (*PipelineRunner, error)
NewPipelineRunner creates the central data structure that controls the full runner state, i.e. it knows what is currently running.
func (*PipelineRunner) CancelJob
func (r *PipelineRunner) CancelJob(id uuid.UUID) error
func (*PipelineRunner) HandleStageChange
func (r *PipelineRunner) HandleStageChange(stage *scheduler.Stage)
HandleStageChange will be called when the stage state changes in the scheduler
func (*PipelineRunner) HandleTaskChange
func (r *PipelineRunner) HandleTaskChange(t *task.Task)
HandleTaskChange will be called when the task state changes in the task runner
func (*PipelineRunner) IterateJobs
func (r *PipelineRunner) IterateJobs(process func(j *PipelineJob))
IterateJobs calls process for each job in a read lock. It is not safe to reference the job outside of the process function.
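The contract above (the callback runs under a read lock, so the job pointer must not escape it) can be sketched with a simplified iterator. The store and pJob types here are illustrative, not prunner internals:

```go
package main

import (
	"fmt"
	"sync"
)

// pJob is an illustrative stand-in for PipelineJob.
type pJob struct {
	Pipeline  string
	Completed bool
}

// store sketches the documented IterateJobs contract: process runs while the
// read lock is held, so the *pJob must not be referenced after it returns.
type store struct {
	mx   sync.RWMutex
	jobs []*pJob
}

func (s *store) iterateJobs(process func(j *pJob)) {
	s.mx.RLock()
	defer s.mx.RUnlock()
	for _, j := range s.jobs {
		process(j)
	}
}

func main() {
	s := &store{jobs: []*pJob{
		{Pipeline: "a", Completed: true},
		{Pipeline: "b"},
	}}
	completed := 0
	s.iterateJobs(func(j *pJob) {
		// Copy out what you need instead of keeping the pointer.
		if j.Completed {
			completed++
		}
	})
	fmt.Println(completed) // 1
}
```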
func (*PipelineRunner) JobCompleted
func (r *PipelineRunner) JobCompleted(id uuid.UUID, err error)
func (*PipelineRunner) ListPipelines
func (r *PipelineRunner) ListPipelines() []PipelineInfo
ListPipelines lists pipelines with status information about each pipeline (whether it is running and whether it is schedulable).
func (*PipelineRunner) ReadJob
func (r *PipelineRunner) ReadJob(id uuid.UUID, process func(j *PipelineJob)) error
func (*PipelineRunner) SaveToStore
func (r *PipelineRunner) SaveToStore()
func (*PipelineRunner) ScheduleAsync
func (r *PipelineRunner) ScheduleAsync(pipeline string, opts ScheduleOpts) (*PipelineJob, error)
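The ScheduleOpts fields are not documented here, so the following is only a plausible reading of the scheduling behavior implied by the PipelineJob description above: a scheduled job either starts immediately or sits in a per-pipeline wait list while that pipeline already has a running job. All names are illustrative, not prunner internals:

```go
package main

import "fmt"

// sched sketches an assumed ScheduleAsync flow: start the job now, or queue
// it on the pipeline's wait list if the pipeline is already running.
type sched struct {
	running            map[string]bool
	waitListByPipeline map[string][]string
}

// scheduleAsync reports whether the job was started (true) or queued (false).
func (s *sched) scheduleAsync(pipeline, jobID string) (started bool) {
	if s.running[pipeline] {
		s.waitListByPipeline[pipeline] = append(s.waitListByPipeline[pipeline], jobID)
		return false
	}
	s.running[pipeline] = true
	return true
}

func main() {
	s := &sched{
		running:            map[string]bool{},
		waitListByPipeline: map[string][]string{},
	}
	fmt.Println(s.scheduleAsync("deploy", "job-1"))  // true: started
	fmt.Println(s.scheduleAsync("deploy", "job-2"))  // false: queued
	fmt.Println(len(s.waitListByPipeline["deploy"])) // 1
}
```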
type ScheduleOpts
Directories

| Path | Synopsis |
|---|---|
| cmd | |
| cmd/prunner | command |
| server | Package server Prunner REST API |
| taskctl | Package taskctl contains custom implementations and extensions for the github.com/taskctl/taskctl packages |