contenox/runtime: GenAI Orchestration Runtime

contenox/runtime is an open-source runtime for orchestrating generative AI workflows. It treats AI workflows as state machines, enabling:
- Declarative workflow definition
- Built-in state management
- Vendor-agnostic execution
- Multi-backend orchestration
- First-class observability
- Written in Go for intensive workloads
- Agentic capabilities via hooks
- Drop-in replacement for the OpenAI chat-completions API
Get Started in 1-3 Minutes
The following steps start all the necessary services, configure the backend, and download the initial models.
Prerequisites
- Docker and Docker Compose
- curl and jq
Run the Bootstrap Script
# Clone the repository
git clone https://github.com/contenox/runtime.git
cd runtime
# Configure the system's fallback models
export EMBED_MODEL=nomic-embed-text:latest
export EMBED_PROVIDER=ollama
export EMBED_MODEL_CONTEXT_LENGTH=2048
export TASK_MODEL=phi3:3.8b
export TASK_MODEL_CONTEXT_LENGTH=2048
export TASK_PROVIDER=ollama
export CHAT_MODEL=phi3:3.8b
export CHAT_MODEL_CONTEXT_LENGTH=2048
export CHAT_PROVIDER=ollama
export OLLAMA_BACKEND_URL="http://ollama:11434"
# or any other URL, e.g.: export OLLAMA_BACKEND_URL="http://host.docker.internal:11434"
# To use OLLAMA_BACKEND_URL with host.docker.internal, make sure Ollama listens on the Docker bridge:
# sudo systemctl edit ollama.service -> Environment="OLLAMA_HOST=172.17.0.1" (or 0.0.0.0)
# Start the container services
echo "Starting services with 'docker compose up -d'..."
docker compose up -d
echo "Services are starting up."
# Configure the runtime with your model preferences.
# The bootstrap script works only for Ollama models/backends;
# to use other providers, refer to the API spec.
./scripts/bootstrap.sh $EMBED_MODEL $TASK_MODEL $CHAT_MODEL
# Set up a demo OpenAI chat-completions and models endpoint
./scripts/openai-demo.sh $CHAT_MODEL demo
# This sets up the following endpoints:
# - http://localhost:8081/openai/demo/v1/chat/completions
# - http://localhost:8081/openai/demo/v1/models
#
# Example: connect Open WebUI to the demo endpoint:
# docker run -d -p 3000:8080 \
# -e OPENAI_API_BASE_URL='http://host.docker.internal:8081/openai/demo/v1' \
# -e OPENAI_API_KEY='any-key-for-demo-env' \
# --add-host=host.docker.internal:host-gateway \
# -v open-webui:/app/backend/data \
# --name open-webui \
# --restart always \
# ghcr.io/open-webui/open-webui:main
Once the scripts finish, the environment is fully configured and ready to use.
Try It Out: Execute a Prompt
After the bootstrap is complete, test the setup by executing a simple prompt:
curl -X POST http://localhost:8081/execute \
-H "Content-Type: application/json" \
-d '{"prompt": "Explain quantum computing in simple terms"}'
Next Steps: Create a Workflow
Save the following as qa.json:
{
  "input": "What's the best way to optimize database queries?",
  "inputType": "string",
  "chain": {
    "id": "smart-query-assistant",
    "description": "Handles technical questions",
    "tasks": [
      {
        "id": "generate_response",
        "description": "Generate final answer",
        "handler": "raw_string",
        "systemInstruction": "You're a senior engineer. Provide concise, professional answers to technical questions.",
        "transition": {
          "branches": [
            { "operator": "default", "goto": "end" }
          ]
        }
      }
    ]
  }
}
Execute the workflow:
curl -X POST http://localhost:8081/tasks \
-H "Content-Type: application/json" \
-d @qa.json
All runtime activity is captured in structured logs:
docker logs contenox-runtime-kernel
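For example, to skim the logs for problems during the first runs, a plain grep works regardless of the exact log format:
docker logs contenox-runtime-kernel 2>&1 | grep -i error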
✨ Key Features
State Machine Engine
- Conditional Branching: Route execution based on LLM outputs
- Built-in Handlers:
  - condition_key: Validate and route responses
  - parse_number: Extract numerical values
  - parse_range: Handle score ranges
  - raw_string: Standard text generation
  - embedding: Embedding generation
  - model_execution: Model execution on a chat history
  - hook: Calls a user-defined hook pointing to an external service
- Context Preservation: Automatic input/output passing between steps (see the multi-step sketch after this list)
- Multi-Model Support: Define preferred models for each task chain
- Retry and Timeout: Configure task-level retries and timeouts for robust workflows
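As a sketch of how steps chain together, the following extends the qa.json example from above with a second task. It only uses fields already shown in that example, plus the assumption that goto can point to another task's id; branch operators beyond default, as well as retry and timeout settings, are defined in the API spec:
{
  "input": "Our checkout endpoint is timing out under load.",
  "inputType": "string",
  "chain": {
    "id": "triage-and-answer",
    "description": "Drafts an answer, then condenses it",
    "tasks": [
      {
        "id": "draft_answer",
        "description": "Draft a detailed answer",
        "handler": "raw_string",
        "systemInstruction": "You're a senior engineer. Answer the question in detail.",
        "transition": {
          "branches": [
            { "operator": "default", "goto": "condense_answer" }
          ]
        }
      },
      {
        "id": "condense_answer",
        "description": "Condense the draft into three bullet points",
        "handler": "raw_string",
        "systemInstruction": "Summarize the previous answer into three concise bullet points.",
        "transition": {
          "branches": [
            { "operator": "default", "goto": "end" }
          ]
        }
      }
    ]
  }
}
Because of context preservation, the output of draft_answer is passed to condense_answer automatically, with no extra wiring; save it as, e.g., chain.json and POST it to /tasks exactly like qa.json above.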
Multi-Provider Support
Define the preferred model provider and backend resolution policy directly within task chains. This allows seamless, dynamic orchestration across different LLM providers.
Architecture Overview
graph TD
subgraph "User Space"
U[User / Client Application]
end
subgraph "contenox/runtime"
API[API Layer]
OE["Orchestration Engine <br/> Task Execution <br/> & State Management"]
CONN["Connectors <br/> Model Resolver <br/> & Hook Client"]
end
subgraph "External Services"
LLM[LLM Backends <br/> Ollama, OpenAI, vLLM, etc.]
HOOK[External Tools and APIs <br/> Custom Hooks]
end
%% --- Data Flow ---
U -- API Requests --> API
API -- Triggers Task Chain --> OE
OE -- Executes via --> CONN
CONN -- Routes to LLMs --> LLM
CONN -- Calls External Hooks --> HOOK
LLM -- LLM Responses --> CONN
HOOK -- Hook Responses --> CONN
CONN -- Results --> OE
OE -- Returns Final Output --> API
API -- API Responses --> U
- Unified Interface: Consistent API across providers
- Automatic Sync: Models stay consistent across backends
- Affinity Group Management: Map models to backends for performance tiering and routing strategies
- Backend Resolver: Distribute requests to backends based on resolution policies
🧩 Extensibility
Custom Hooks
Hooks are external servers that, once registered, can be called from within task chains. They let workflows interact with systems and data outside the runtime itself.
The runtime communicates with hooks using an OpenAI-compatible function-call format, making it easy to integrate with a wide range of existing tool servers (see the payload sketch below).
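For illustration only: a tool server speaking the OpenAI function-calling convention typically receives calls shaped roughly like the JSON below. The function name search_knowledge_base and its argument are hypothetical, and the exact request/response envelope the runtime uses is defined in the hook documentation:
{
  "tool_calls": [
    {
      "id": "call_1",
      "type": "function",
      "function": {
        "name": "search_knowledge_base",
        "arguments": "{\"query\": \"database index tuning\"}"
      }
    }
  ]
}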
See the Hook Documentation
API Documentation
The full API surface is thoroughly documented in the OpenAPI format, making it easy to integrate with other tools.
The API tests are available for additional context.