taskengine

package
v0.6.1
Published: Mar 18, 2026 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Overview

Package taskengine provides a configurable workflow system for building AI-powered task chains. It supports LLM interactions, external integrations via hooks, complex branching logic, and robust error handling with retries and timeouts.

Key features:

  • Multiple task handlers (condition keys, parsing, hooks, no-op)
  • Type-safe data passing between tasks
  • Conditional branching with various operators
  • External system integration via hooks
  • Comprehensive debugging and monitoring
  • Template-based prompt generation

taskengine enables building AI workflows where tasks are linked in sequence, supporting conditional branching, numeric or scored evaluation, range resolution, and optional integration with external systems (hooks). Each task can invoke an LLM prompt or a custom hook function depending on its type.

Hooks are pluggable interfaces that allow tasks to perform side effects — calling APIs, saving data, or triggering custom business logic — outside of prompt-based processing.

Typical use cases:

  • Chatbot systems
  • Data processing pipelines
  • Automated decision-making systems
  • Dynamic content generation (e.g. marketing copy, reports)
  • AI agent orchestration with branching logic
  • Decision trees based on LLM outputs
  • Automation pipelines involving prompts and external system calls

Index

Constants

const (
	TermEnd = "end"
)

Variables

var ErrHookNotFound = errors.New("hook not found")

ErrHookNotFound is returned when a named hook is not registered in any repo.

var ErrUnsupportedTaskType = errors.New("executor does not support the task type")

ErrUnsupportedTaskType indicates an unrecognized task type.

Functions

func ConvertOpenAIToChatHistory

func ConvertOpenAIToChatHistory(request OpenAIChatRequest) (ChatHistory, int, []Tool, LLMExecutionConfig)

ConvertOpenAIToChatHistory converts an OpenAI-compatible chat request into the internal ChatHistory and LLMExecutionConfig formats used by the task engine.

func ConvertToType

func ConvertToType(value interface{}, dataType DataType) (interface{}, error)

ConvertToType converts a value to the specified DataType.

func ExportedResolveHookNames added in v0.5.0

func ExportedResolveHookNames(ctx context.Context, allowlist []string, provider HookProvider) ([]string, error)

ExportedResolveHookNames is a test-only export of resolveHookNames.

func ExtractJSONArray added in v0.5.0

func ExtractJSONArray(s string) string

ExtractJSONArray scans s for the outermost [...] block and returns it. It first strips code fences, then skips any preamble text the LLM may have placed before the JSON array to be robust to inconsistent model output.

func HookArgsFromContext added in v0.5.0

func HookArgsFromContext(ctx context.Context, hookName string) map[string]string

HookArgsFromContext returns the args previously stored for hookName, or nil if none were set. The returned map must not be mutated by the caller.

func StripCodeFences added in v0.5.0

func StripCodeFences(s string) string

StripCodeFences removes leading and trailing Markdown code fences from LLM output. It handles ` ``` `, ` ```json `, ` ```javascript ` etc.

func SupportedOperators

func SupportedOperators() []string

func TemplateVarsFromContext

func TemplateVarsFromContext(ctx context.Context) (map[string]string, error)

TemplateVarsFromContext returns the template variables map from the context. Returns nil if not set; a nil map is safe to read (key lookup returns false). MacroEnv will return an error for any {{var:key}} whose key is absent.

func WithHookArgs added in v0.5.0

func WithHookArgs(ctx context.Context, hookName string, args map[string]string) context.Context

WithHookArgs stores a copy of args for the named hook in ctx.

The map is copied on entry so the stored value is immutable — callers must not rely on mutating the original map being visible to hook implementations. This ensures no data races when the same context is read concurrently (e.g. during tool-list construction in ExecEnv).

func WithTemplateVars

func WithTemplateVars(ctx context.Context, vars map[string]string) context.Context

WithTemplateVars attaches a map of template variables to the context. MacroEnv expands {{var:name}} from this map. The engine never reads os.Getenv; callers (e.g. Contenox CLI, API) build the map and attach it here.

Types

type BranchCompose

type BranchCompose struct {
	// Selects the variable to compose the current input with.
	WithVar string `yaml:"with_var,omitempty" json:"with_var,omitempty"`
	// Strategy defines how values should be merged ("override", "merge_chat_histories", "append_string_to_chat_history").
	// Optional; defaults to "override" or "merge_chat_histories" if both output and WithVar values are ChatHistory.
	// "merge_chat_histories": If both output and WithVar values are ChatHistory,
	// appends the WithVar's Messages to the output's Messages.
	Strategy string `yaml:"strategy,omitempty" json:"strategy,omitempty"`
}

BranchCompose is a task that composes multiple variables into a single output. The composed output is stored in a variable named after the task ID with a "_composed" suffix, and it also directly replaces the task's output. Example:

    compose:
      with_var: "chat2"
      strategy: "override"

type CapturedStateUnit

type CapturedStateUnit struct {
	TaskID      string        `json:"taskID" example:"validate_input"`
	TaskHandler string        `json:"taskHandler" example:"condition_key"`
	InputType   DataType      `json:"inputType" example:"string" openapi_include_type:"string"`
	OutputType  DataType      `json:"outputType" example:"string" openapi_include_type:"string"`
	Transition  string        `json:"transition" example:"valid_input"`
	Duration    time.Duration `json:"duration" example:"452000000"` // in nanoseconds
	Error       ErrorResponse `json:"error" openapi_include_type:"taskengine.ErrorResponse"`
	Input       string        `json:"input" example:"This is a test input that needs validation"`
	Output      string        `json:"output" example:"valid"`
	InputVar    string        `json:"inputVar" example:"input"` // Which variable was used as input
}

func (CapturedStateUnit) MarshalJSON

func (c CapturedStateUnit) MarshalJSON() ([]byte, error)

MarshalJSON uses a value receiver so json.Marshal works on both CapturedStateUnit values and pointers. With a pointer receiver, marshal-by-value would skip the custom method and store Duration as a raw nanosecond integer.

func (*CapturedStateUnit) UnmarshalJSON

func (c *CapturedStateUnit) UnmarshalJSON(data []byte) error

type ChainContext

type ChainContext struct {
	Tools       map[string]ToolWithResolution
	ClientTools []Tool
	Debug       bool
}

type ChainTerms

type ChainTerms string

type ChatHistory

type ChatHistory struct {
	// Messages is the list of messages in the conversation.
	Messages []Message `json:"messages"`
	// Model is the name of the model to use for the conversation.
	Model string `json:"model" example:"mistral:instruct"`
	// InputTokens will be filled by the engine and will hold the number of tokens used for the input.
	InputTokens int `json:"inputTokens" example:"15"`
	// OutputTokens will be filled by the engine and will hold the number of tokens used for the output.
	OutputTokens int `json:"outputTokens" example:"10"`
}

ChatHistory represents a conversation history with an LLM.

type DataType

type DataType int

DataType represents the type of data passed between tasks.

const (
	DataTypeAny DataType = iota
	DataTypeString
	DataTypeBool
	DataTypeInt
	DataTypeFloat
	DataTypeVector
	DataTypeSearchResults
	DataTypeJSON
	DataTypeChatHistory
	DataTypeOpenAIChat
	DataTypeOpenAIChatResponse
	DataTypeNil
)

func DataTypeFromString

func DataTypeFromString(s string) (DataType, error)

DataTypeFromString converts a string to DataType.

func (DataType) MarshalJSON

func (d DataType) MarshalJSON() ([]byte, error)

func (DataType) MarshalYAML

func (d DataType) MarshalYAML() ([]byte, error)

func (*DataType) String

func (d *DataType) String() string

String returns the string representation of the data type.

func (*DataType) UnmarshalJSON

func (dt *DataType) UnmarshalJSON(data []byte) error

func (*DataType) UnmarshalYAML

func (dt *DataType) UnmarshalYAML(data []byte) error

type EnvExecutor

type EnvExecutor interface {
	ExecEnv(ctx context.Context, chain *TaskChainDefinition, input any, dataType DataType) (any, DataType, []CapturedStateUnit, error)
}

EnvExecutor executes complete task chains with input and environment management.

func NewEnv

func NewEnv(
	_ context.Context,
	tracker libtracker.ActivityTracker,
	exec TaskExecutor,
	inspector Inspector,
	hookProvider HookRepo,
) (EnvExecutor, error)

NewEnv creates a new SimpleEnv with the given tracker, task executor, inspector, and hook provider.

func NewMacroEnv

func NewMacroEnv(inner EnvExecutor, hookProvider HookRepo) (EnvExecutor, error)

NewMacroEnv wraps an existing EnvExecutor with macro expansion.

type ErrorResponse

type ErrorResponse struct {
	ErrorInternal error  `json:"-"`
	Error         string `json:"error" example:"validation failed: input contains prohibited content"`
}

type ExecutionState

type ExecutionState struct {
	Variables   map[string]any
	DataTypes   map[string]DataType
	CurrentTask *TaskDefinition
}

type FunctionCall

type FunctionCall struct {
	Name      string `json:"name" example:"get_current_weather"`
	Arguments string `json:"arguments" example:"{\n  \"location\": \"San Francisco, CA\",\n  \"unit\": \"celsius\"\n}"`
}

FunctionCall specifies the function name and arguments for a tool call.

type FunctionCallObject

type FunctionCallObject struct {
	Name      string `json:"name" example:"get_current_weather"`
	Arguments any    `json:"arguments"`
}

type FunctionTool

type FunctionTool struct {
	Name        string      `json:"name"`
	Description string      `json:"description,omitempty"`
	Parameters  interface{} `json:"parameters,omitempty"` // JSON Schema object
}

FunctionTool defines the schema for a function-type tool.

type HookCall

type HookCall struct {
	// Name is the registered hook-service (e.g., "send_email").
	Name string `yaml:"name" json:"name" example:"slack"`

	// ToolName is the name of the tool to invoke (e.g., "send_slack_notification").
	ToolName string `yaml:"tool_name" json:"tool_name" example:"send_slack_notification"`
	// Args are key-value pairs to parameterize the hook call.
	// Example: {"to": "user@example.com", "subject": "Notification"}
	Args map[string]string `yaml:"args" json:"args" example:"{\"channel\": \"#alerts\", \"message\": \"Task completed successfully\"}"`
}

HookCall represents an external integration or side-effect triggered during a task. Hooks allow tasks to interact with external systems (e.g., "send_email", "update_db").

type HookProvider

type HookProvider interface {
	HookRegistry
	HooksWithSchema
}

type HookRegistry

type HookRegistry interface {
	Supports(ctx context.Context) ([]string, error)
}

type HookRepo

type HookRepo interface {
	Exec(ctx context.Context, startingTime time.Time, input any, debug bool, args *HookCall) (any, DataType, error)
	HookRegistry
	HooksWithSchema
}

HookRepo defines interface for external system integrations and side effects.

type HooksWithSchema

type HooksWithSchema interface {
	GetSchemasForSupportedHooks(ctx context.Context) (map[string]*openapi3.T, error)
	GetToolsForHookByName(ctx context.Context, name string) ([]Tool, error)
}

type Inspector

type Inspector interface {
	Start(ctx context.Context) StackTrace
}

func NewSimpleInspector

func NewSimpleInspector() Inspector

type KVActivitySink

type KVActivitySink struct {
	// contains filtered or unexported fields
}

func NewKVActivityTracker

func NewKVActivityTracker(kvManager libkv.KVManager) *KVActivitySink

func (*KVActivitySink) GetActivityLogs

func (t *KVActivitySink) GetActivityLogs(ctx context.Context, limit int) ([]TrackedEvent, error)

func (*KVActivitySink) GetActivityLogsByRequestID

func (t *KVActivitySink) GetActivityLogsByRequestID(ctx context.Context, requestID string) ([]TrackedEvent, error)

func (*KVActivitySink) GetExecutionStateByRequestID

func (t *KVActivitySink) GetExecutionStateByRequestID(ctx context.Context, requestID string) ([]CapturedStateUnit, error)

func (*KVActivitySink) GetKnownOperations

func (t *KVActivitySink) GetKnownOperations(ctx context.Context) ([]Operation, error)

func (*KVActivitySink) GetRecentRequestIDs

func (t *KVActivitySink) GetRecentRequestIDs(ctx context.Context, limit int) ([]TrackedRequest, error)

func (*KVActivitySink) GetRequestIDByOperation

func (t *KVActivitySink) GetRequestIDByOperation(ctx context.Context, operation Operation) ([]TrackedRequest, error)

func (*KVActivitySink) GetStatefulRequests

func (t *KVActivitySink) GetStatefulRequests(ctx context.Context) ([]string, error)

func (*KVActivitySink) Start

func (t *KVActivitySink) Start(
	ctx context.Context,
	operation string,
	subject string,
	kvArgs ...any,
) (func(error), func(string, any), func())

type LLMExecutionConfig

type LLMExecutionConfig struct {
	Model       string   `yaml:"model" json:"model" example:"mistral:instruct"`
	Models      []string `yaml:"models,omitempty" json:"models,omitempty" example:"[\"gpt-4\", \"gpt-3.5-turbo\"]"`
	Provider    string   `yaml:"provider,omitempty" json:"provider,omitempty" example:"ollama"`
	Providers   []string `yaml:"providers,omitempty" json:"providers,omitempty" example:"[\"ollama\", \"openai\"]"`
	Temperature float32  `yaml:"temperature,omitempty" json:"temperature,omitempty" example:"0.7"`
	// Hooks is the allowlist of hook names this task may invoke.
	//
	// Patterns supported:
	//   - absent/null   — all registered hooks (backward-compatible default)
	//   - []            — no hooks exposed to the model
	//   - ["*"]         — all registered hooks (explicit)
	//   - ["a","b"]     — only the named hooks (unknown names silently ignored)
	//   - ["*","!name"] — all hooks except the excluded name(s)
	//
	// Exclusions ("!name") are only meaningful when combined with "*".
	Hooks     []string `yaml:"hooks,omitempty" json:"hooks,omitempty" example:"[\"local_shell\", \"nws\"]"`
	HideTools []string `yaml:"hide_tools,omitempty" json:"hide_tools,omitempty" example:"[\"tool1\", \"hook_name1.tool1\"]"`
	// HookPolicies carries per-hook policy overrides for this task.
	// Keys are hook names; values are maps of policy key → value pairs.
	// These are injected into the context before GetToolsForHookByName is called,
	// so hooks can produce dynamic tool descriptions and enforce the policy at Exec time.
	//
	// Example (local_shell):
	//   hook_policies:
	//     local_shell:
	//       _allowed_commands: "git,go,ls,cat,grep"
	//       _denied_commands:  "sudo,su,dd,mkfs"
	HookPolicies     map[string]map[string]string `yaml:"hook_policies,omitempty" json:"hook_policies,omitempty"`
	PassClientsTools bool                         `yaml:"pass_clients_tools" json:"pass_clients_tools"`
	// Think enables reasoning mode for supported models.
	// Accepts "true"/"false" or "high"/"medium"/"low". Empty = provider default (off).
	Think string `yaml:"think,omitempty" json:"think,omitempty" example:"high"`
	// Shift allows the context window to slide on overflow instead of erroring.
	Shift bool `yaml:"shift,omitempty" json:"shift,omitempty"`
}

LLMExecutionConfig represents configuration for executing tasks using Large Language Models (LLMs).

type MacroEnv

type MacroEnv struct {
	// contains filtered or unexported fields
}

MacroEnv is a transparent decorator around EnvExecutor that expands special macros in task templates before execution. Supported macros:

  • {{hookservice:list}} -> JSON map of hook name -> tool names
  • {{hookservice:hooks}} -> JSON array of hook names
  • {{hookservice:tools <hook_name>}} -> JSON array of tool names for that hook
  • {{var:<name>}} -> value from context template vars (set by caller via WithTemplateVars; engine never reads env); errors if key is missing
  • {{now}} or {{now:<layout>}} -> current time (default RFC3339; layout e.g. 2006-01-02)
  • {{chain:id}} -> chain ID of the chain being executed

The engine does not expand any env:VAR-style macro; var:* is populated only by the caller.

func (*MacroEnv) ExecEnv

func (m *MacroEnv) ExecEnv(
	ctx context.Context,
	chain *TaskChainDefinition,
	input any,
	dataType DataType,
) (any, DataType, []CapturedStateUnit, error)

type Message

type Message struct {
	// ID is the unique identifier for the message.
	// This field is not used by the engine. It can be filled as part of the Request, or left empty.
	// The ID is useful for tracking messages and computing differences of histories before storage.
	ID string `json:"id" example:"msg_123456"`
	// Role is the role of the message sender.
	Role string `json:"role" example:"user"`
	// Content is the content of the message.
	Content string `json:"content,omitempty" example:"What is the capital of France?"`
	// Thinking is the model's internal reasoning trace.
	// Only populated when thinking is enabled; never sent back to the model as history.
	Thinking string `json:"thinking,omitempty"`
	// ToolCallID is the ID of the tool call associated with the message.
	ToolCallID string `json:"tool_call_id,omitempty"`
	// CallTools is the tool call of the message sender.
	CallTools []ToolCall `json:"callTools,omitempty"`
	// Timestamp is the time the message was sent.
	Timestamp time.Time `json:"timestamp" example:"2023-11-15T14:30:45Z"`
}

Message represents a single message in a chat conversation.

type MockTaskExecutor

type MockTaskExecutor struct {
	// Single value responses
	MockOutput          any
	MockTransitionValue string
	MockError           error

	// Sequence responses
	MockOutputSequence          []any
	MockTaskTypeSequence        []DataType
	MockTransitionValueSequence []string
	ErrorSequence               []error

	// Tracking
	CalledWithTask   *TaskDefinition
	CalledWithInput  any
	CalledWithPrompt string
	// contains filtered or unexported fields
}

MockTaskExecutor is a mock implementation of taskengine.TaskExecutor.

func (*MockTaskExecutor) CallCount

func (m *MockTaskExecutor) CallCount() int

CallCount returns how many times TaskExec was called.

func (*MockTaskExecutor) Reset

func (m *MockTaskExecutor) Reset()

Reset clears all mock state between tests.

func (*MockTaskExecutor) TaskExec

func (m *MockTaskExecutor) TaskExec(ctx context.Context, startingTime time.Time, tokenLimit int, chainContext *ChainContext, currentTask *TaskDefinition, input any, dataType DataType) (any, DataType, string, error)

TaskExec is the mock implementation of the TaskExec method.

type OpenAIChatRequest

type OpenAIChatRequest struct {
	Model            string                     `json:"model" example:"mistral:instruct"`
	Messages         []OpenAIChatRequestMessage `json:"messages" openapi_include_type:"taskengine.OpenAIChatRequestMessage"`
	Tools            []Tool                     `json:"tools,omitempty"`
	ToolChoice       interface{}                `json:"tool_choice,omitempty"` // Can be "none", "auto", or {"type": "function", "function": {"name": "my_function"}}
	MaxTokens        int                        `json:"max_tokens,omitempty" example:"512"`
	Temperature      float64                    `json:"temperature,omitempty" example:"0.7"`
	TopP             float64                    `json:"top_p,omitempty" example:"1.0"`
	Stop             []string                   `json:"stop,omitempty" example:"[\"\\n\", \"###\"]"`
	N                int                        `json:"n,omitempty" example:"1"`
	Stream           bool                       `json:"stream,omitempty" example:"false"`
	PresencePenalty  float64                    `json:"presence_penalty,omitempty" example:"0.0"`
	FrequencyPenalty float64                    `json:"frequency_penalty,omitempty" example:"0.0"`
	User             string                     `json:"user,omitempty" example:"user_123"`
}

OpenAIChatRequest represents a request compatible with OpenAI's chat API.

func ConvertChatHistoryToOpenAIRequest

func ConvertChatHistoryToOpenAIRequest(
	chatHistory ChatHistory,
) (OpenAIChatRequest, int, int)

type OpenAIChatRequestMessage

type OpenAIChatRequestMessage struct {
	Role    string `json:"role" example:"user"`
	Content string `json:"content,omitempty" example:"Hello, how are you?"`
	// Thinking allows clients to supply their own reasoning traces for assistant messages.
	Thinking string `json:"thinking,omitempty" example:"The user is asking a greeting."`
	// ToolCalls carries tool call requests from an assistant message.
	// Required to round-trip Gemini thought_signature (via ProviderMeta) through the OpenAI-compat path.
	ToolCalls []ToolCall `json:"tool_calls,omitempty" openapi_include_type:"taskengine.ToolCall"`
	// ToolCallID links a tool-result message back to its originating call.
	ToolCallID string `json:"tool_call_id,omitempty"`
}

type OpenAIChatResponse

type OpenAIChatResponse struct {
	ID                string                     `json:"id" example:"chat_123"`
	Object            string                     `json:"object" example:"chat.completion"`
	Created           int64                      `json:"created" example:"1690000000"`
	Model             string                     `json:"model" example:"mistral:instruct"`
	Choices           []OpenAIChatResponseChoice `json:"choices" openapi_include_type:"taskengine.OpenAIChatResponseChoice"`
	Usage             OpenAITokenUsage           `json:"usage" openapi_include_type:"taskengine.OpenAITokenUsage"`
	SystemFingerprint string                     `json:"system_fingerprint,omitempty" example:"system_456"`
}

func ConvertChatHistoryToOpenAI

func ConvertChatHistoryToOpenAI(id string, chatHistory ChatHistory) OpenAIChatResponse

ConvertChatHistoryToOpenAI converts the internal ChatHistory format to an OpenAI-compatible response. This is useful for adapting the task engine's output to systems expecting an OpenAI API format.

type OpenAIChatResponseChoice

type OpenAIChatResponseChoice struct {
	Index        int                       `json:"index" example:"0"`
	Message      OpenAIChatResponseMessage `json:"message" openapi_include_type:"taskengine.OpenAIChatResponseMessage"`
	FinishReason string                    `json:"finish_reason" example:"stop"`
}

OpenAIChatResponseChoice represents a single choice in an OpenAI chat response.

type OpenAIChatResponseMessage

type OpenAIChatResponseMessage struct {
	Role      string     `json:"role" example:"assistant"`
	Content   *string    `json:"content,omitempty" example:"I can help with that."` // Pointer to handle null content for tool calls
	Thinking  string     `json:"thinking,omitempty" example:"The user asked for help. I should respond positively."`
	ToolCalls []ToolCall `json:"tool_calls,omitempty" openapi_include_type:"taskengine.ToolCall"`
}

type OpenAITokenUsage

type OpenAITokenUsage struct {
	PromptTokens     int `json:"prompt_tokens" example:"100"`
	CompletionTokens int `json:"completion_tokens" example:"50"`
	TotalTokens      int `json:"total_tokens" example:"150"`
}

type Operation

type Operation struct {
	Operation string `json:"operation"`
	Subject   string `json:"subject"`
}

type OperatorTerm

type OperatorTerm string

OperatorTerm represents logical operators used for task transition evaluation.

const (
	OpEquals      OperatorTerm = "equals"
	OpContains    OperatorTerm = "contains"
	OpStartsWith  OperatorTerm = "starts_with"
	OpEndsWith    OperatorTerm = "ends_with"
	OpGreaterThan OperatorTerm = ">"
	OpGt          OperatorTerm = "gt"
	OpLessThan    OperatorTerm = "<"
	OpLt          OperatorTerm = "lt"
	OpInRange     OperatorTerm = "in_range"
	OpDefault     OperatorTerm = "default"
)

func ToOperatorTerm

func ToOperatorTerm(s string) (OperatorTerm, error)

func (OperatorTerm) String

func (t OperatorTerm) String() string

type SearchResult

type SearchResult struct {
	ID           string  `json:"id" example:"search_123456"`
	ResourceType string  `json:"type" example:"document"`
	Distance     float32 `json:"distance" example:"0.85"`
}

type SimpleEnv

type SimpleEnv struct {
	// contains filtered or unexported fields
}

SimpleEnv is the default implementation of EnvExecutor.

func (SimpleEnv) ExecEnv

func (env SimpleEnv) ExecEnv(ctx context.Context, chain *TaskChainDefinition, input any, dataType DataType) (any, DataType, []CapturedStateUnit, error)

ExecEnv executes the given chain with the provided input.

type SimpleExec

type SimpleExec struct {
	// contains filtered or unexported fields
}

SimpleExec is a basic implementation of TaskExecutor. It supports prompt-to-string, number, score, range, boolean condition evaluation, and delegation to registered hooks.

func (*SimpleExec) Embed

func (exe *SimpleExec) Embed(ctx context.Context, llmCall LLMExecutionConfig, prompt string, ctxLength int) ([]float64, error)

Embed resolves a model client and computes an embedding for the given prompt. Returns the embedding vector or an error.

func (*SimpleExec) Prompt

func (exe *SimpleExec) Prompt(ctx context.Context, systemInstruction string, llmCall LLMExecutionConfig, prompt string, ctxLength int) (string, error)

Prompt resolves a model client using the resolver policy and sends the prompt to be executed. Returns the trimmed response string or an error.

func (*SimpleExec) TaskExec

func (exe *SimpleExec) TaskExec(taskCtx context.Context, startingTime time.Time, ctxLength int, chainContext *ChainContext, currentTask *TaskDefinition, input any, dataType DataType) (any, DataType, string, error)

TaskExec dispatches task execution based on the task type.

type SimpleStackTrace

type SimpleStackTrace struct {
	// contains filtered or unexported fields
}

func (*SimpleStackTrace) ClearBreakpoints

func (s *SimpleStackTrace) ClearBreakpoints()

func (*SimpleStackTrace) GetCurrentState

func (s *SimpleStackTrace) GetCurrentState() ExecutionState

func (*SimpleStackTrace) GetExecutionHistory

func (s *SimpleStackTrace) GetExecutionHistory() []CapturedStateUnit

func (*SimpleStackTrace) HasBreakpoint

func (s *SimpleStackTrace) HasBreakpoint(taskID string) bool

func (*SimpleStackTrace) RecordStep

func (s *SimpleStackTrace) RecordStep(step CapturedStateUnit)

func (*SimpleStackTrace) SetBreakpoint

func (s *SimpleStackTrace) SetBreakpoint(taskID string)

type StackTrace

type StackTrace interface {
	// Observation
	RecordStep(step CapturedStateUnit)
	GetExecutionHistory() []CapturedStateUnit

	// Control
	SetBreakpoint(taskID string)
	ClearBreakpoints()

	HasBreakpoint(taskID string) bool
}

type TaskChainDefinition

type TaskChainDefinition struct {
	// ID uniquely identifies the chain.
	ID string `yaml:"id" json:"id"`

	// Enables capturing user input and output.
	Debug bool `yaml:"debug" json:"debug"`

	// Description provides a human-readable summary of the chain's purpose.
	Description string `yaml:"description" json:"description"`

	// Tasks is the list of tasks to execute in sequence.
	Tasks []TaskDefinition `yaml:"tasks" json:"tasks" openapi_include_type:"taskengine.TaskDefinition"`

	// TokenLimit is the token limit for the context window (used during execution).
	TokenLimit int64 `yaml:"token_limit" json:"token_limit"`
}

TaskChainDefinition describes a sequence of tasks to execute in order, along with branching logic, retry policies, and model preferences.

TaskChainDefinitions support dynamic routing based on LLM outputs or conditions, and can include hooks to perform external actions (e.g., sending emails).

type TaskDefinition

type TaskDefinition struct {
	// ID uniquely identifies the task within the chain.
	ID string `yaml:"id" json:"id" example:"validate_input"`

	// Description is a human-readable summary of what the task does.
	Description string `yaml:"description" json:"description" example:"Validates user input meets quality requirements"`

	// Handler determines how the LLM output (or hook) will be interpreted.
	Handler TaskHandler `yaml:"handler" json:"handler" example:"condition_key" openapi_include_type:"string"`

	// SystemInstruction provides additional instructions to the LLM; if applicable, they are applied at the system level.
	SystemInstruction string `` /* 158-byte string literal not displayed */

	// ValidConditions defines allowed values for ConditionKey tasks.
	// Required for ConditionKey tasks, ignored for all other types.
	// Example: {"yes": true, "no": true} for a yes/no condition.
	ValidConditions map[string]bool `yaml:"valid_conditions,omitempty" json:"valid_conditions,omitempty" example:"{\"valid\": true, \"invalid\": true}"`

	// ExecuteConfig defines the configuration for executing prompt or chat model tasks.
	ExecuteConfig *LLMExecutionConfig `yaml:"execute_config,omitempty" json:"execute_config,omitempty" openapi_include_type:"taskengine.LLMExecutionConfig"`

	// Hook defines an external action to run.
	// Required for Hook tasks, must be nil/omitted for all other types.
	// Example: {type: "send_email", args: {"to": "user@example.com"}}
	Hook *HookCall `yaml:"hook,omitempty" json:"hook,omitempty" openapi_include_type:"taskengine.HookCall"`

	// Print optionally formats the output for display/logging.
	// Supports template variables from previous task outputs.
	// Optional for all task types except Hook where it's rarely used.
	// Example: "The score is: {{.previous_output}}"
	Print string `yaml:"print,omitempty" json:"print,omitempty" example:"Validation result: {{.validate_input}}"`

	// PromptTemplate is the text prompt sent to the LLM.
	// It is required and only applicable for the prompt_to_string type.
	// Supports template variables from previous task outputs.
	// Example: "Rate the quality from 1-10: {{.input}}"
	PromptTemplate string `yaml:"prompt_template" json:"prompt_template" example:"Is this input valid? {{.input}}"`

	// OutputTemplate is an optional go template to format the output of a hook.
	// If specified, the hook's JSON output will be used as data for the template.
	// The final output of the task will be the rendered string.
	// Example: "The weather is {{.weather}} with a temperature of {{.temperature}}."
	OutputTemplate string `yaml:"output_template,omitempty" json:"output_template,omitempty" example:"Hook result: {{.status}}"`

	// InputVar is the name of the variable to use as input for the task.
	// Example: "input" for the original input.
	// Each task stores its output in a variable named after its task ID.
	InputVar string `yaml:"input_var,omitempty" json:"input_var,omitempty" example:"input"`

	// Transition defines what to do after this task completes.
	Transition TaskTransition `yaml:"transition" json:"transition" openapi_include_type:"taskengine.TaskTransition"`

	// Timeout optionally sets a timeout for task execution.
	// Format: "10s", "2m", "1h" etc.
	// Optional for all task types.
	Timeout string `yaml:"timeout,omitempty" json:"timeout,omitempty" example:"30s"`

	// RetryOnFailure sets how many times to retry this task on failure.
	// Applies to all task types including Hooks.
	// Default: 0 (no retries)
	RetryOnFailure int `yaml:"retry_on_failure,omitempty" json:"retry_on_failure,omitempty" example:"2"`
}

TaskDefinition represents a single step in a workflow. Each task has a handler that dictates how its prompt will be processed.

Field validity by task type:

	| Field             | ConditionKey | ParseNumber | ParseScore | ParseRange | RawString | Hook | Noop |
	|-------------------|--------------|-------------|------------|------------|-----------|------|------|
	| ValidConditions   | Required     | -           | -          | -          | -         | -    | -    |
	| Hook              | -            | -           | -          | -          | -         | Req  | -    |
	| PromptTemplate    | Required     | Required    | Required   | Required   | Required  | -    | Opt  |
	| OutputTemplate    | -            | -           | -          | -          | -         | Opt  | -    |
	| Print             | Optional     | Optional    | Optional   | Optional   | Optional  | Opt  | Opt  |
	| ExecuteConfig     | Optional     | Optional    | Optional   | Optional   | Optional  | -    | -    |
	| InputVar          | Optional     | Optional    | Optional   | Optional   | Optional  | Opt  | Opt  |
	| SystemInstruction | Optional     | Optional    | Optional   | Optional   | Optional  | Opt  | Opt  |
	| Transition        | Required     | Required    | Required   | Required   | Required  | Req  | Req  |

type TaskExecutor

type TaskExecutor interface {
	// TaskExec executes a single task with the given input and data type.
	// Returns:
	// - output: The processed task result
	// - outputType: The data type of the output
	// - transitionEval: String used for transition evaluation
	// - error: Any execution error encountered
	//
	// Parameters:
	// - ctx: Context for cancellation and timeouts
	// - startingTime: Chain start time for consistent timing
	// - ctxLength: Token context length limit for LLM operations
	// - chainContext: Immutable context of the chain
	// - currentTask: The task definition to execute
	// - input: Task input data
	// - dataType: Type of the input data
	TaskExec(ctx context.Context, startingTime time.Time, ctxLength int, chainContext *ChainContext, currentTask *TaskDefinition, input any, dataType DataType) (any, DataType, string, error)
}

TaskExecutor executes individual tasks within a workflow. Implementations should handle all task types and return appropriate outputs.

func NewExec

func NewExec(
	_ context.Context,
	repo llmrepo.ModelRepo,
	hookProvider HookRepo,
	tracker libtracker.ActivityTracker,
) (TaskExecutor, error)

NewExec creates a new SimpleExec instance.

type TaskHandler

type TaskHandler string

TaskHandler defines how task outputs are processed and interpreted.

const (
	// HandlePromptToCondition interprets response as a condition key for transition branching.
	// Requires ValidConditions to be set with allowed values.
	HandlePromptToCondition TaskHandler = "prompt_to_condition"

	// HandlePromptToInt expects a numeric response and parses it into an integer.
	// Returns error if response cannot be parsed as integer.
	HandlePromptToInt TaskHandler = "prompt_to_int"

	// HandlePromptToFloat expects a floating-point score (e.g., a quality rating).
	// Returns error if response cannot be parsed as float.
	HandlePromptToFloat TaskHandler = "prompt_to_float"

	// HandlePromptToRange expects a numeric range like "5-7" or single number "5" (converted to "5-5").
	// Returns error if response cannot be parsed as valid range.
	HandlePromptToRange TaskHandler = "prompt_to_range"

	// HandlePromptToString returns the raw string result from the LLM without parsing.
	HandlePromptToString TaskHandler = "prompt_to_string"

	// HandlePromptToJS asks the LLM to produce JavaScript source code.
	// The output is a JSON object { "code": string }, so later steps
	// (e.g. goja/jseval execution) can consume it.
	HandlePromptToJS TaskHandler = "prompt_to_js"

	// HandleTextToEmbedding expects string input and returns an embedding vector ([]float64).
	// This is useful as the final step of a text enrichment pipeline, to enrich the data before embedding.
	HandleTextToEmbedding TaskHandler = "text_to_embedding"

	// HandleRaiseError raises an error with the provided message from task input.
	// Useful for explicit error conditions in workflows.
	HandleRaiseError TaskHandler = "raise_error"

	// HandleChatCompletion executes specified model on chat history input.
	// Requires DataTypeChatHistory input and ExecuteConfig configuration.
	HandleChatCompletion TaskHandler = "chat_completion"

	// HandleExecuteToolCalls executes specified tool calls on chat history input.
	HandleExecuteToolCalls TaskHandler = "execute_tool_calls"

	// HandleParseTransition attempts to parse transition commands (e.g., "/command").
	// Strips transition prefix if present in input.
	HandleParseTransition TaskHandler = "parse_command"

	// HandleParseKeyValue expects a string of key=value pairs and parses it into a JSON object.
	// Example input: "name=John, age=30, city=New York"
	// Returns a map[string]string that can be serialized as JSON.
	HandleParseKeyValue TaskHandler = "parse_key_value"

	// HandleConvertToOpenAIChatResponse converts a chat history input to OpenAI Chat format.
	// Requires DataTypeChatHistory input and ExecuteConfig configuration.
	HandleConvertToOpenAIChatResponse TaskHandler = "convert_to_openai_chat_response"

	// HandleNoop performs no operation, passing input through unchanged.
	// Useful for data mutation, variable composition, and transition steps.
	HandleNoop TaskHandler = "noop"

	// HandleHook executes an external action via registered hook rather than calling LLM.
	// Requires Hook configuration with name and arguments.
	HandleHook TaskHandler = "hook"
)
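The parsing behavior documented above for prompt_to_range and parse_key_value can be sketched as follows. These helpers are illustrative re-implementations of the described behavior, not the package's actual code (which may, for example, also handle negative numbers or other separators).

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange interprets output like "5-7" as a range, and a single
// number "5" as the degenerate range "5-5", as prompt_to_range describes.
func parseRange(s string) (lo, hi int, err error) {
	s = strings.TrimSpace(s)
	if n, err := strconv.Atoi(s); err == nil {
		return n, n, nil
	}
	parts := strings.SplitN(s, "-", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("not a valid range: %q", s)
	}
	lo, err = strconv.Atoi(strings.TrimSpace(parts[0]))
	if err != nil {
		return 0, 0, err
	}
	hi, err = strconv.Atoi(strings.TrimSpace(parts[1]))
	return lo, hi, err
}

// parseKeyValue turns "name=John, age=30" into a string map,
// as parse_key_value describes.
func parseKeyValue(s string) map[string]string {
	out := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		if k, v, ok := strings.Cut(pair, "="); ok {
			out[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return out
}

func main() {
	lo, hi, _ := parseRange("5-7")
	fmt.Println(lo, hi) // 5 7
	fmt.Println(parseKeyValue("name=John, age=30, city=New York")["city"]) // New York
}
```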

func (TaskHandler) String

func (t TaskHandler) String() string

type TaskTransition

type TaskTransition struct {
	// OnFailure is the task ID to jump to in case of failure.
	OnFailure string `yaml:"on_failure" json:"on_failure" example:"error_handler"`

	// Branches defines conditional branches for successful task completion.
	Branches []TransitionBranch `yaml:"branches" json:"branches" openapi_include_type:"taskengine.TransitionBranch"`
}

TaskTransition defines what happens after a task completes, including which task to go to next and how to handle errors.

type Tool

type Tool struct {
	Type     string       `json:"type"`
	Function FunctionTool `json:"function"`
}

Tool represents a tool that can be called by the model.

type ToolCall

type ToolCall struct {
	ID       string       `json:"id" example:"call_abc123"`
	Type     string       `json:"type" example:"function"`
	Function FunctionCall `json:"function" openapi_include_type:"taskengine.FunctionCall"`
	// ProviderMeta carries opaque provider-specific data (e.g. Gemini thought_signature)
	// that must be round-tripped back on the next turn.
	ProviderMeta map[string]string `json:"provider_meta,omitempty" example:"{\"thought_signature\":\"123456\"}"`
}

ToolCall represents a tool call requested by the model.
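Serialized with the JSON tags above, a ToolCall might look like the following. The `function` payload fields (`name`, `arguments`) follow the usual OpenAI shape and are assumptions here, since FunctionCall's fields are not shown in this excerpt.

```json
{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "get_weather",
    "arguments": "{\"city\": \"Berlin\"}"
  },
  "provider_meta": {
    "thought_signature": "123456"
  }
}
```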

type ToolWithResolution

type ToolWithResolution struct {
	Tool
	HookName string
}

type TrackedEvent

type TrackedEvent struct {
	ID         string            `json:"id"`
	Operation  string            `json:"operation"`
	Subject    string            `json:"subject"`
	Start      time.Time         `json:"start"`
	End        *time.Time        `json:"end,omitempty"`
	Error      *string           `json:"error,omitempty"`
	EntityID   *string           `json:"entityID,omitempty"`
	EntityData any               `json:"entityData,omitempty"`
	Duration   float64           `json:"duration"` // Duration in milliseconds
	Metadata   map[string]string `json:"metadata,omitempty"`
	RequestID  string            `json:"requestID,omitempty"`
}

type TrackedRequest

type TrackedRequest struct {
	ID string `json:"id"`
}

type TransitionBranch

type TransitionBranch struct {
	// Operator defines how to compare the task's output to When.
	Operator OperatorTerm `yaml:"operator,omitempty" json:"operator,omitempty" example:"equals" openapi_include_type:"string"`

	// When specifies the condition that must be met to follow this branch.
	// Format depends on the task type:
	// - For condition_key: exact string match
	// - For prompt_to_int: numeric comparison (using Operator)
	When string `yaml:"when" json:"when" example:"yes"`

	// Goto specifies the target task ID if this branch is taken.
	// Leave empty or use taskengine.TermEnd to end the chain.
	Goto string `yaml:"goto" json:"goto" example:"positive_response"`

	// Compose defines how to transform data when taking this branch.
	// Optional - if not specified, the current task output is passed as-is.
	Compose *BranchCompose `yaml:"compose,omitempty" json:"compose,omitempty" openapi_include_type:"taskengine.BranchCompose"`
}

TransitionBranch defines a single possible path in the workflow, selected when the task's output matches the specified condition.
