Documentation ¶
Overview ¶
Package orla provides a public Go client library for the Orla Agentic Serving Layer daemon API (RFC 5).
This package enables external code to interact with the Orla daemon for:
- Workflow execution and coordination
- Shared context management
- Multi-agent experiments
Example usage:
client := api.NewClient("http://localhost:8081")
execID, err := client.StartWorkflow(ctx, "story_finishing_game")
task, taskIndex, complete, llmServer, err := client.GetNextTask(ctx, execID)
response, err := client.ExecuteTask(ctx, execID, taskIndex, prompt, &api.ExecuteTaskOptions{Stream: true})
err = client.CompleteTask(ctx, execID, taskIndex, response)
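The calls above compose into a polling loop: start the workflow, fetch tasks until the daemon reports completion, execute each task, then report the result back. A sketch of a full driver, assuming a placeholder import path and that each task's own Prompt field is used as the prompt:

```go
package main

import (
	"context"
	"fmt"
	"log"

	api "example.com/orla/api" // placeholder; use the package's real import path
)

func main() {
	ctx := context.Background()
	client := api.NewClient("http://localhost:8081")

	execID, err := client.StartWorkflow(ctx, "story_finishing_game")
	if err != nil {
		log.Fatal(err)
	}

	for {
		task, taskIndex, complete, llmServer, err := client.GetNextTask(ctx, execID)
		if err != nil {
			log.Fatal(err)
		}
		if complete {
			break
		}
		fmt.Printf("task %d (%s) on %s\n", taskIndex, task.AgentProfile, llmServer)

		// Stream so the daemon reports TTFT/TPOT metrics on the response.
		resp, err := client.ExecuteTask(ctx, execID, taskIndex, task.Prompt,
			&api.ExecuteTaskOptions{Stream: true})
		if err != nil {
			log.Fatal(err)
		}
		if err := client.CompleteTask(ctx, execID, taskIndex, resp); err != nil {
			log.Fatal(err)
		}
	}
}
```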
Index ¶
- func LogDeferredError(fn func() error)
- type AgentExecuteRequest
- type AgentExecuteResponse
- type AgentExecutor
- type Client
- func (c *Client) CompleteTask(ctx context.Context, executionID string, taskIndex int, response *TaskResponse) error
- func (c *Client) ExecuteTask(ctx context.Context, executionID string, taskIndex int, prompt string, ...) (*TaskResponse, error)
- func (c *Client) GetContext(ctx context.Context, serverName string) ([]Message, error)
- func (c *Client) GetNextTask(ctx context.Context, executionID string) (*WorkflowTask, int, bool, string, error)
- func (c *Client) Health(ctx context.Context) error
- func (c *Client) StartWorkflow(ctx context.Context, workflowName string) (string, error)
- func (c *Client) SyncContext(ctx context.Context, serverName string, messages []Message) error
- type CompleteTaskRequest
- type CompleteTaskResponse
- type ExecuteTaskOptions
- type ExecuteTaskRequest
- type ExecuteTaskRequestOptions
- type ExecuteTaskResponse
- type GetContextResponse
- type GetNextTaskResponse
- type Message
- type StartWorkflowRequest
- type StartWorkflowResponse
- type SyncContextRequest
- type SyncContextResponse
- type TaskResponse
- type TaskResponseMetrics
- type WorkflowExecutor
- type WorkflowTask
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func LogDeferredError ¶
func LogDeferredError(fn func() error)
LogDeferredError takes a function that returns an error, calls it, and logs the error if it is not nil
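LogDeferredError exists for deferred cleanup calls (typically Close) whose errors would otherwise be silently discarded. A runnable sketch of the pattern, using a local stand-in (logDeferredError) that mirrors the documented behavior:

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

// logDeferredError mirrors the documented behavior of LogDeferredError:
// it calls fn and logs the error if it is not nil. Defined locally here
// only so the sketch is self-contained.
func logDeferredError(fn func() error) {
	if err := fn(); err != nil {
		log.Printf("deferred call failed: %v", err)
	}
}

func main() {
	f, err := os.CreateTemp("", "orla-example")
	if err != nil {
		log.Fatal(err)
	}
	defer logDeferredError(func() error { return os.Remove(f.Name()) })
	// Typical use: don't lose the error from a deferred Close.
	defer logDeferredError(f.Close)

	// A failing cleanup is logged rather than silently dropped.
	defer logDeferredError(func() error { return errors.New("cleanup failed") })

	fmt.Println("work done")
}
```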
Types ¶
type AgentExecuteRequest ¶
type AgentExecuteRequest struct {
ProfileName string `json:"profile_name"`
Prompt string `json:"prompt"`
Messages []Message `json:"messages,omitempty"` // Conversation history
Tools []*mcp.Tool `json:"tools,omitempty"` // Available tools (from MCP)
MaxTokens int `json:"max_tokens,omitempty"`
Stream bool `json:"stream,omitempty"`
}
AgentExecuteRequest represents a request to execute an agent
type AgentExecuteResponse ¶
type AgentExecuteResponse struct {
Success bool `json:"success"`
Response *TaskResponse `json:"response,omitempty"`
Error string `json:"error,omitempty"`
}
AgentExecuteResponse represents the response from agent execution
type AgentExecutor ¶
type AgentExecutor struct {
// contains filtered or unexported fields
}
AgentExecutor provides high-level agent execution with tool support. Tools are handled client-side via MCP. The daemon handles inference; the client handles the agent loop with tools. The daemon reads orla.yaml and resolves server names from agent profiles automatically.
func NewAgentExecutor ¶
func NewAgentExecutor(daemonURL string) *AgentExecutor
NewAgentExecutor creates a new agent executor. The daemon reads orla.yaml and resolves server names from agent profiles automatically.
func (*AgentExecutor) Execute ¶
func (e *AgentExecutor) Execute(ctx context.Context, req *AgentExecuteRequest) (*TaskResponse, error)
Execute executes a single agent inference call. This is a single-turn execution. For multi-turn with tool calling, use ExecuteWithTools, which handles the agent loop client-side.
func (*AgentExecutor) ExecuteWithTools ¶
func (e *AgentExecutor) ExecuteWithTools(
	ctx context.Context,
	profileName string,
	prompt string,
	mcpSession *mcp.ClientSession,
	maxIterations int,
	onIteration func(iteration int, response *TaskResponse) error,
) (*TaskResponse, error)
ExecuteWithTools executes an agent with tool support, handling the full agent loop. This requires a client-side MCP connection for tools. The daemon handles inference; the client handles tool execution via MCP. Returns the final response after all tool calls are executed.
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client is the public API client for the Orla daemon
func (*Client) CompleteTask ¶
func (c *Client) CompleteTask(ctx context.Context, executionID string, taskIndex int, response *TaskResponse) error
CompleteTask marks a task as complete and reports the response to the daemon
func (*Client) ExecuteTask ¶
func (c *Client) ExecuteTask(ctx context.Context, executionID string, taskIndex int, prompt string, options *ExecuteTaskOptions) (*TaskResponse, error)
ExecuteTask executes a workflow task on the daemon. options may be nil for defaults.
func (*Client) GetContext ¶
func (c *Client) GetContext(ctx context.Context, serverName string) ([]Message, error)
GetContext retrieves shared context from the daemon for a given LLM server
func (*Client) GetNextTask ¶
func (c *Client) GetNextTask(ctx context.Context, executionID string) (*WorkflowTask, int, bool, string, error)
GetNextTask retrieves the next task to execute from a workflow. It returns the task, task index, completion status, and resolved LLM server name.
func (*Client) Health ¶
func (c *Client) Health(ctx context.Context) error
Health checks that the daemon is reachable
func (*Client) StartWorkflow ¶
func (c *Client) StartWorkflow(ctx context.Context, workflowName string) (string, error)
StartWorkflow starts a workflow execution on the daemon
func (*Client) SyncContext ¶
func (c *Client) SyncContext(ctx context.Context, serverName string, messages []Message) error
SyncContext sends shared context to the daemon for a given LLM server
type CompleteTaskRequest ¶
type CompleteTaskRequest struct {
ExecutionID string `json:"execution_id"`
TaskIndex int `json:"task_index"`
Response *TaskResponse `json:"response"`
}
CompleteTaskRequest represents a task completion request
type CompleteTaskResponse ¶
type CompleteTaskResponse struct {
Success bool `json:"success"`
Error string `json:"error,omitempty"`
}
CompleteTaskResponse represents a task completion response
type ExecuteTaskOptions ¶ added in v1.1.0
type ExecuteTaskOptions struct {
MaxTokens int // Maximum tokens to generate; 0 = omit
Stream bool // If true, server streams and returns response.metrics (ttft_ms, tpot_ms)
}
ExecuteTaskOptions are options for executing a task (mirrors serving.ExecuteTaskOptions).
type ExecuteTaskRequest ¶
type ExecuteTaskRequest struct {
ExecutionID string `json:"execution_id"`
TaskIndex int `json:"task_index"`
Prompt string `json:"prompt"`
Options *ExecuteTaskRequestOptions `json:"options,omitempty"` // Optional; mirrors ExecuteTaskOptions
}
ExecuteTaskRequest represents a task execution request
type ExecuteTaskRequestOptions ¶ added in v1.1.0
type ExecuteTaskRequestOptions struct {
MaxTokens *int `json:"max_tokens,omitempty"`
Stream bool `json:"stream,omitempty"`
}
ExecuteTaskRequestOptions are the JSON wire form of ExecuteTaskOptions.
type ExecuteTaskResponse ¶
type ExecuteTaskResponse struct {
Success bool `json:"success"`
Response *TaskResponse `json:"response,omitempty"`
Error string `json:"error,omitempty"`
}
ExecuteTaskResponse represents a task execution response
type GetContextResponse ¶
type GetContextResponse struct {
Messages []Message `json:"messages"`
Error string `json:"error,omitempty"`
}
GetContextResponse represents the response from getting context
type GetNextTaskResponse ¶
type GetNextTaskResponse struct {
Task *WorkflowTask `json:"task"`
TaskIndex int `json:"task_index"`
Complete bool `json:"complete"`
LLMServer string `json:"llm_server,omitempty"` // Resolved server name from daemon
Error string `json:"error,omitempty"`
}
GetNextTaskResponse represents the response from getting the next task
type StartWorkflowRequest ¶
type StartWorkflowRequest struct {
WorkflowName string `json:"workflow_name"`
}
StartWorkflowRequest represents a workflow start request
type StartWorkflowResponse ¶
type StartWorkflowResponse struct {
ExecutionID string `json:"execution_id"`
Error string `json:"error,omitempty"`
}
StartWorkflowResponse represents a workflow start response
type SyncContextRequest ¶
type SyncContextRequest struct {
ServerName string `json:"server_name"`
Messages []Message `json:"messages"`
}
SyncContextRequest represents a context sync request
type SyncContextResponse ¶
type SyncContextResponse struct {
Success bool `json:"success"`
Error string `json:"error,omitempty"`
}
SyncContextResponse represents a context sync response
type TaskResponse ¶
type TaskResponse struct {
Content string `json:"content"`
Thinking string `json:"thinking,omitempty"`
ToolCalls []any `json:"tool_calls,omitempty"`
ToolResults []any `json:"tool_results,omitempty"`
Metrics *TaskResponseMetrics `json:"metrics,omitempty"`
}
TaskResponse represents the response from a task execution. Matches the daemon's model.Response; Metrics is set when the task was executed with streaming.
type TaskResponseMetrics ¶ added in v1.1.0
type TaskResponseMetrics struct {
TTFTMs int64 `json:"ttft_ms,omitempty"` // Time to first token (ms)
TPOTMs int64 `json:"tpot_ms,omitempty"` // Time per output token (ms)
}
TaskResponseMetrics holds timing metrics from streaming execution (TTFT, TPOT).
type WorkflowExecutor ¶
type WorkflowExecutor struct {
// contains filtered or unexported fields
}
WorkflowExecutor provides high-level workflow execution with automatic task orchestration. It uses the daemon's /api/v1/workflow/task/execute endpoint for inference (remote execution mode). The daemon resolves server names from agent profiles, so no config is needed.
func NewWorkflowExecutor ¶
func NewWorkflowExecutor(daemonURL string) *WorkflowExecutor
NewWorkflowExecutor creates a new workflow executor that uses remote execution (the daemon handles inference via /api/v1/workflow/task/execute). The daemon reads orla.yaml and resolves server names from agent profiles automatically.
func (*WorkflowExecutor) ExecuteWorkflow ¶
func (e *WorkflowExecutor) ExecuteWorkflow(ctx context.Context, workflowName string, initialPrompt string, maxTokensPerTask int) ([]*TaskResponse, error)
ExecuteWorkflow executes a complete workflow, handling all task orchestration. Returns a slice of task responses in order.
func (*WorkflowExecutor) ExecuteWorkflowWithCallback ¶
func (e *WorkflowExecutor) ExecuteWorkflowWithCallback(
	ctx context.Context,
	workflowName string,
	initialPrompt string,
	maxTokensPerTask int,
	onTask func(taskIndex int, task *WorkflowTask, response *TaskResponse) error,
) error
ExecuteWorkflowWithCallback executes a workflow and calls a callback for each task. This allows custom handling of task responses.
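A hedged usage sketch (the import alias, workflow name, prompt, and token budget are placeholder values), showing per-task handling with the callback variant:

```go
executor := api.NewWorkflowExecutor("http://localhost:8081")

err := executor.ExecuteWorkflowWithCallback(
	ctx,
	"story_finishing_game", // workflow name
	"Once upon a time...",  // initial prompt
	512,                    // max tokens per task
	func(taskIndex int, task *api.WorkflowTask, response *api.TaskResponse) error {
		fmt.Printf("task %d, turn %d (%s): %s\n",
			taskIndex, task.Turn, task.AgentProfile, response.Content)
		return nil
	},
)
if err != nil {
	log.Fatal(err)
}
```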
type WorkflowTask ¶
type WorkflowTask struct {
// AgentProfile is the name of the agent profile to use for this task
AgentProfile string `json:"agent_profile"`
// LLMServer is an optional override for the LLM server configuration
LLMServer string `json:"llm_server,omitempty"`
// Turn is the turn number for multi-agent coordination (1-based)
Turn int `json:"turn,omitempty"`
// Prompt is the prompt or prompt template for this task
Prompt string `json:"prompt,omitempty"`
// UseContext indicates whether to use previous task outputs as context
UseContext bool `json:"use_context,omitempty"`
}
WorkflowTask represents a workflow task. This matches the structure returned by the daemon API.