runtime

package
v0.2.1
Warning

This package is not in the latest version of its module.

Published: Jan 18, 2026 License: Apache-2.0 Imports: 18 Imported by: 0

Documentation

Overview

Package runtime provides runtime helpers for generated llm-compiler programs: the application context, initialization helpers, and output-capture utilities for generated workflows.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func EvalCondition

func EvalCondition(ctx interface{ Get(string) string }, condition string) bool

EvalCondition evaluates a simple condition like "{{var}} == 'hello'".
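The package does not document how conditions are parsed beyond the example above. The sketch below is one plausible standalone implementation, assuming `{{var}}` references are resolved through the context's `Get` method and that a single `==` or `!=` comparison against a single-quoted literal is supported; `mapCtx` and `evalCondition` are hypothetical names, not part of the package.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// mapCtx is a trivial context backed by a map; it satisfies the
// interface{ Get(string) string } parameter that EvalCondition accepts.
type mapCtx map[string]string

func (m mapCtx) Get(name string) string { return m[name] }

// evalCondition is a sketch: it resolves {{var}} references via ctx.Get,
// then handles a single "==" or "!=" comparison against a quoted literal.
func evalCondition(ctx interface{ Get(string) string }, condition string) bool {
	// Substitute each {{var}} with its context value.
	re := regexp.MustCompile(`\{\{(\w+)\}\}`)
	resolved := re.ReplaceAllStringFunc(condition, func(m string) string {
		return ctx.Get(strings.Trim(m, "{}"))
	})
	if lhs, rhs, ok := strings.Cut(resolved, "!="); ok {
		return clean(lhs) != clean(rhs)
	}
	if lhs, rhs, ok := strings.Cut(resolved, "=="); ok {
		return clean(lhs) == clean(rhs)
	}
	// A bare non-empty value is treated as true.
	return strings.TrimSpace(resolved) != ""
}

// clean trims whitespace and surrounding single quotes.
func clean(s string) string {
	return strings.Trim(strings.TrimSpace(s), "'")
}

func main() {
	ctx := mapCtx{"greeting": "hello"}
	fmt.Println(evalCondition(ctx, "{{greeting}} == 'hello'")) // true
	fmt.Println(evalCondition(ctx, "{{greeting}} == 'bye'"))   // false
}
```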

func RenderTemplate

func RenderTemplate(input string, vars map[string]string) (string, error)
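RenderTemplate carries no doc comment. Given the `{{var}}` syntax shown under EvalCondition, it presumably substitutes `{{name}}` placeholders from the vars map; the sketch below assumes that, and the choice to return an error for an undefined variable is only a guess suggested by the signature. `renderTemplate` here is a hypothetical standalone stand-in.

```go
package main

import (
	"fmt"
	"regexp"
)

// renderTemplate is a sketch of what RenderTemplate likely does: replace
// each {{name}} with vars[name]. Returning an error for an unknown
// variable is an assumption suggested by the (string, error) signature.
func renderTemplate(input string, vars map[string]string) (string, error) {
	re := regexp.MustCompile(`\{\{(\w+)\}\}`)
	var missing string
	out := re.ReplaceAllStringFunc(input, func(m string) string {
		name := m[2 : len(m)-2] // strip the {{ }} delimiters
		val, ok := vars[name]
		if !ok {
			missing = name
		}
		return val
	})
	if missing != "" {
		return "", fmt.Errorf("undefined template variable %q", missing)
	}
	return out, nil
}

func main() {
	s, err := renderTemplate("run {{step}} for {{user}}", map[string]string{
		"step": "build", "user": "alice",
	})
	fmt.Println(s, err) // run build for alice <nil>
}
```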

func SanitizeForShell

func SanitizeForShell(s string) string

SanitizeForShell prepares free-form text (such as LLM output) to be placed safely inside a double-quoted shell argument. It performs a light sanitization: it trims whitespace, collapses internal whitespace to single spaces, escapes double quotes, and removes NULs. The sanitization is intentionally conservative, but it prevents arbitrary multi-line commands from executing when workflows embed LLM output into `sh -c`.
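The documented behavior maps directly onto a few standard-library calls. This standalone sketch (`sanitizeForShell` is a hypothetical stand-in, not the package's actual code) implements exactly the four documented steps and nothing more:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeForShell mirrors the documented behavior: strip NULs, collapse
// all runs of whitespace (including newlines) to single spaces, trim the
// ends, and escape double quotes so the result is safe inside a
// double-quoted shell argument.
func sanitizeForShell(s string) string {
	s = strings.ReplaceAll(s, "\x00", "")
	s = strings.Join(strings.Fields(s), " ") // collapses and trims whitespace
	return strings.ReplaceAll(s, `"`, `\"`)
}

func main() {
	fmt.Println(sanitizeForShell("  line one\nline \"two\"\x00  "))
	// line one line \"two\"
}
```

Collapsing newlines to spaces is what defuses the multi-line-command risk: the LLM output can no longer smuggle a second command onto a fresh line inside `sh -c`.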

Types

type App

type App struct {
	// Configuration
	Config *config.Config

	// Output files
	FmtFile   *os.File
	LlamaFile *os.File
	ExeDir    string

	// Saved terminal descriptors for restoration
	SavedStdout *os.File
	SavedStderr *os.File

	// Signal coordination for cross-workflow step outputs
	Signals   map[string]chan SignalMsg
	SignalsMu sync.Mutex

	// Contexts for each workflow
	Contexts   map[string]map[string]string
	ContextsMu sync.Mutex

	// WaitGroup for workflow coordination
	WG sync.WaitGroup
	// contains filtered or unexported fields
}

App represents the application runtime with all required state.

func NewApp

func NewApp() *App

NewApp creates a new application runtime.

func (*App) DumpContextsAndSignals

func (a *App) DumpContextsAndSignals() error

DumpContextsAndSignals dumps all contexts and signal values to a JSON file.

func (*App) LLM

func (a *App) LLM() *LLMRuntime

LLM returns the LLM runtime, creating it if needed.

func (*App) LocalLlama

func (a *App) LocalLlama() *LocalLlamaRuntime

LocalLlama returns the local llama runtime, creating it if needed. Note: call this per workflow, as llama.cpp may not be thread-safe.

func (*App) MakeSignal

func (a *App) MakeSignal(key string) chan SignalMsg

MakeSignal returns or creates a signal channel for the given key.

func (*App) SaveContext

func (a *App) SaveContext(workflowName string, vars map[string]string)

SaveContext saves a workflow's context.

func (*App) SendSignal

func (a *App) SendSignal(key, val string)

SendSignal sends a value to a signal channel (non-blocking).

func (*App) SendSignalError

func (a *App) SendSignalError(key, err string)

SendSignalError sends an error to a signal channel (non-blocking).

func (*App) Shell

func (a *App) Shell() *ShellRuntime

Shell returns the shell runtime, creating it if needed.

func (*App) WaitForSignal

func (a *App) WaitForSignal(key string, timeout int) (SignalMsg, error)

WaitForSignal waits for a signal with an optional timeout.
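MakeSignal, SendSignal, and WaitForSignal together form the cross-workflow coordination layer. The standalone sketch below shows one way the documented semantics fit together: a per-key channel created on demand, a non-blocking send via `select`/`default`, and a timed receive via `time.After`. The buffered capacity of 1, the timeout unit (seconds), and the `signals` type itself are assumptions, not the package's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// SignalMsg mirrors the package's cross-workflow message type.
type SignalMsg struct {
	Val string
	Err string
}

// signals is a sketch of the App's signal machinery. A buffered channel
// of capacity 1 (an assumption) lets sendSignal stay non-blocking even
// when no receiver is waiting yet.
type signals struct {
	mu sync.Mutex
	m  map[string]chan SignalMsg
}

// makeSignal returns the channel for key, creating it on first use.
func (s *signals) makeSignal(key string) chan SignalMsg {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.m == nil {
		s.m = make(map[string]chan SignalMsg)
	}
	ch, ok := s.m[key]
	if !ok {
		ch = make(chan SignalMsg, 1)
		s.m[key] = ch
	}
	return ch
}

// sendSignal delivers a value without ever blocking the sender.
func (s *signals) sendSignal(key, val string) {
	select {
	case s.makeSignal(key) <- SignalMsg{Val: val}:
	default: // channel full: drop rather than block the workflow
	}
}

// waitForSignal blocks until a value arrives or the timeout (seconds,
// <= 0 meaning wait indefinitely) expires.
func (s *signals) waitForSignal(key string, timeout int) (SignalMsg, error) {
	ch := s.makeSignal(key)
	if timeout <= 0 {
		return <-ch, nil
	}
	select {
	case msg := <-ch:
		return msg, nil
	case <-time.After(time.Duration(timeout) * time.Second):
		return SignalMsg{}, errors.New("timeout waiting for signal " + key)
	}
}

func main() {
	var s signals
	go s.sendSignal("step1.output", "done")
	msg, err := s.waitForSignal("step1.output", 5)
	fmt.Println(msg.Val, err) // done <nil>
}
```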

type LLMRuntime

type LLMRuntime struct {
	// contains filtered or unexported fields
}

func NewLLMRuntime

func NewLLMRuntime() *LLMRuntime

func (*LLMRuntime) Generate

func (r *LLMRuntime) Generate(prompt string, model string) (string, error)

type LocalLlamaRuntime

type LocalLlamaRuntime struct {
	// contains filtered or unexported fields
}

LocalLlamaRuntime manages loaded models (cached) and generation.

func NewLocalLlamaRuntime

func NewLocalLlamaRuntime() *LocalLlamaRuntime

func (*LocalLlamaRuntime) Close

func (r *LocalLlamaRuntime) Close() error

Close releases resources held by the runtime, including worker clients.

func (*LocalLlamaRuntime) Generate

func (r *LocalLlamaRuntime) Generate(prompt string, modelPath string, maxTokens int) (string, error)

Generate runs the model with the given prompt and returns the completion text. maxTokens controls the number of tokens to generate (0 uses the runtime's default).

func (*LocalLlamaRuntime) LoadModel

func (r *LocalLlamaRuntime) LoadModel(filePath string) (*llama.Model, error)

LoadModel loads a gguf model from filePath (caches handle).

type OutputCapture

type OutputCapture struct {
	// contains filtered or unexported fields
}

OutputCapture manages capturing stdout/stderr to files. It handles both Go-level and native (cgo) output redirection.

func NewOutputCapture

func NewOutputCapture() *OutputCapture

NewOutputCapture creates a new output capture instance.

func (*OutputCapture) Start

func (oc *OutputCapture) Start() (*os.File, *os.File, error)

Start begins capturing output to files next to the executable. It returns the saved stdout for printing messages to the terminal. If LLMC_NO_CAPTURE=1 is set, output capture is skipped.

func (*OutputCapture) Stop

func (oc *OutputCapture) Stop()

Stop ends output capture and restores original stdout/stderr.

type RuntimeContext

type RuntimeContext struct {
	Vars map[string]string
}

func NewRuntimeContext

func NewRuntimeContext() *RuntimeContext

func (*RuntimeContext) Get

func (c *RuntimeContext) Get(name string) string

func (*RuntimeContext) Set

func (c *RuntimeContext) Set(name, value string)

type ShellRuntime

type ShellRuntime struct{}

ShellRuntime runs shell commands from workflow steps.

func NewShellRuntime

func NewShellRuntime() *ShellRuntime

func (*ShellRuntime) Run

func (s *ShellRuntime) Run(command string) (string, error)
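Run carries no doc comment, but the SanitizeForShell documentation above implies commands are executed via `sh -c`. This standalone sketch (`runShell` is a hypothetical stand-in) shows that shape; using CombinedOutput and trimming the trailing newline are assumptions about the real implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runShell sketches ShellRuntime.Run: execute the command through
// `sh -c` and return its combined stdout/stderr as a string.
func runShell(command string) (string, error) {
	out, err := exec.Command("sh", "-c", command).CombinedOutput()
	return strings.TrimRight(string(out), "\n"), err
}

func main() {
	out, err := runShell("echo hello from a workflow step")
	fmt.Println(out, err) // hello from a workflow step <nil>
}
```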

type SignalMsg

type SignalMsg struct {
	Val string
	Err string
}

SignalMsg is used for cross-workflow step coordination.
