contenox

module
v0.6.1
Published: Mar 18, 2026 License: Apache-2.0

README

Contenox


AI workflows at your fingertips

Contenox is a lightning-fast, fully-local CLI that turns natural-language goals into persistent, step-by-step plans and executes them with a real shell plus custom hooks such as filesystem tools. Powered by any LLM (Ollama, OpenAI, Gemini, vLLM, etc.). Zero cloud required.

$ contenox plan new "install a git pre-commit hook that prevents commits when go build fails"
Creating plan "install-a-git-pre-commit-a3f9e12b" with 5 steps. Now active.

$ contenox plan next --auto
Executing Step 1: Install necessary tools...              ✓
Executing Step 2: Create .git/hooks/pre-commit...         ✓
Executing Step 3: Edit the hook script with the check...  ✓
Executing Step 4: Write bash content to the hook file...  ✓
Executing Step 5: chmod +x .git/hooks/pre-commit...       ✓

No pending steps. Plan is complete!

The model wrote that hook. On your machine. No copy-paste hell.
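For illustration, the kind of hook the plan above produces might look like this. This is a hypothetical sketch (written to a temp directory here); the model's actual output will vary:

```shell
# Sketch of a generated pre-commit hook; in a real repo it would land
# in .git/hooks/pre-commit, as steps 2-5 above describe.
mkdir -p /tmp/hook-demo
cat > /tmp/hook-demo/pre-commit <<'EOF'
#!/bin/sh
# Block the commit unless the project compiles.
if ! go build ./... ; then
  echo "pre-commit: go build failed; commit blocked" >&2
  exit 1
fi
EOF
chmod +x /tmp/hook-demo/pre-commit
```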


⭐ Leave us a star if you like it! | 🌟 We welcome suggestions and contributions!


📺 contenox vibe — Interactive TUI

When you want more than a shell prompt:

contenox vibe

A full-screen terminal dashboard (Bubble Tea) with:

  • Live plan sidebar — watch steps execute with ⟳ / ✓ / ✗ indicators in real time
  • Interactive approvals — approve or deny sensitive filesystem actions with y/n before they run
  • Full CLI parity — every contenox subcommand is a slash command inside vibe: /plan, /model, /session, /backend, /hook, /mcp, /config, /run
/plan new "add prometheus metrics to the HTTP server"
/plan next --auto              ← run to completion
/model set-context gpt-5-mini --context 128k
/backend add local --type ollama --url http://127.0.0.1:11434
/mcp add memory --transport stdio --command npx --args "-y,@modelcontextprotocol/server-memory"
/help

Why Contenox?

Contenox is different:

  • Persistent plans stored in SQLite — pause, inspect, retry, replan at any time
  • Human-in-the-loop by default — --auto only when you say so
  • Real tools — shell commands and filesystem access, not just code suggestions
  • Fully offline with Ollama — no data leaves your machine
  • Chains are just JSON — write your own LLM workflows
  • Workflow engine — Contenox is not a toy; a complete state machine lives under the hood.
  • Native MCP support — connect to local filesystems, memory servers, and remote tools instantly via the Model Context Protocol.

🔌 Universal Tooling with MCP

Contenox is a native Model Context Protocol (MCP) client. Instead of writing custom integrations, you can instantly connect your local agent to any MCP-compatible data source, persistent memory, or tool API.

# Give your agent access to the local filesystem
contenox mcp add filesystem --transport stdio \
  --command npx --args "-y,@modelcontextprotocol/server-filesystem,/"

# Give your agent a persistent memory graph across reboots
contenox mcp add memory --transport stdio \
  --command npx --args "-y,@modelcontextprotocol/server-memory"

# Connect to cloud tools securely over SSE
contenox mcp add cloud-tools --transport sse --url https://api.example.com/mcp

Every registered MCP server becomes natively available to your agent during chat sessions and execution plans.


🛠 Turn Any API into an Agent Tool

Don't need the MCP ecosystem? Expose any HTTP API as an agent tool in seconds with contenox hook add. Write a FastAPI service — Contenox reads its OpenAPI schema and makes every endpoint callable by the model, with no extra glue code.

# Register your FastAPI service as a tool
contenox hook add my-api --url http://localhost:8000

# The model can now call any endpoint on it directly as a tool
contenox run "fetch the latest metrics from my API and summarize them"

Any service that speaks HTTP and exposes an OpenAPI spec becomes a first-class agent tool.
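What contenox consumes from such a service is its OpenAPI document (FastAPI serves one automatically at /openapi.json). Below is a minimal, hand-written example of the general shape involved; the /metrics endpoint and its operationId are hypothetical illustrations, not part of contenox:

```shell
# A minimal OpenAPI 3 document of the kind a hooked service exposes.
# Each operation under "paths" becomes a tool the model can call.
cat > /tmp/openapi-demo.json <<'EOF'
{
  "openapi": "3.0.0",
  "info": { "title": "my-api", "version": "1.0.0" },
  "paths": {
    "/metrics": {
      "get": { "operationId": "getMetrics", "summary": "Return latest metrics" }
    }
  }
}
EOF
```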


Quick Start

Install

Ubuntu / Linux

TAG=v0.6.1
ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-linux-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

macOS

TAG=v0.6.1
ARCH=$(uname -m | sed 's/x86_64/amd64/')   # arm64 passes through unchanged
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-darwin-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

Or pick a binary from Releases.

First run
# 1. Initialize (creates .contenox/ with default chains)
contenox init

# 2. Register a backend
ollama serve && ollama pull qwen2.5:7b
contenox backend add local --type ollama
contenox config set default-model qwen2.5:7b

# Or for OpenAI / Gemini:
# contenox backend add openai --type openai --api-key-env OPENAI_API_KEY
# contenox config set default-model gpt-5-mini

# 3. Chat with your model:
contenox "hey, what can you do?"
echo 'fix the typos in README.md' | contenox

# 4. Plan and execute a multi-step task:
contenox plan new "create a TODOS.md from all TODO comments in the codebase"
contenox plan next --auto

Requirements: an LLM with tool-calling support.
Local: ollama serve && ollama pull qwen2.5:7b
Cloud: register a backend with contenox backend add and set your API key via --api-key-env.


Full example
# 1. Create
contenox plan new "install a git pre-commit hook that blocks commits when go build fails"

# 2. Review the plan before touching anything
contenox plan show

# 3. Execute one step at a time
contenox plan next
contenox plan next
# ...

# Or run everything at once once you trust it
contenox plan next --auto

# 4. If a step went wrong
contenox plan retry 3

# 5. Final check
contenox plan show

contenox plan — AI-driven plans

contenox plan new "migrate all TODO comments in the codebase to TODOS.md"
contenox plan new "set up a git pre-commit hook that blocks commits when go build fails"
contenox plan new "find all .go files larger than 500 lines and write a refactoring report"

Contenox breaks any goal into an ordered plan, then executes it step by step using real tools.

Commands

Command                       What it does
contenox plan next            Run one step (safe default — review before continuing)
contenox plan next --auto     Run all remaining steps autonomously
contenox plan show            See the active plan + step status
contenox plan list            All plans (* = active)
contenox plan retry <N>       Re-run a failed step
contenox plan skip <N>        Mark a step skipped and move on
contenox plan replan          Let the model rewrite the remaining steps
contenox plan delete <name>   Delete a plan by name
contenox plan clean           Delete all completed plans

Pro tip: always run contenox plan show before --auto.


contenox chat — Persistent chat session
contenox chat "what is my current working directory?"
contenox chat "list files in my home directory"
echo "explain this" | contenox chat

Uses .contenox/default-chain.json. Natural language → shell tools → response.

contenox run — Scriptable, stateless execution

For CI/pipelines where you want full control:

contenox run --chain .contenox/my-chain.json "what is 2+2?"
cat diff.txt | contenox run --chain .contenox/review.json --input-type chat
contenox run --chain .contenox/doc-chain.json --input @main.go
contenox run --chain .contenox/parse-chain.json --input-type json '{"key":"value"}'

run is stateless — no chat history. --chain is required. Supported --input-type: string (default), chat, json, int, float, bool.

🧠 Reasoning model support

Pass --think to stream the model's internal chain-of-thought to stderr before it acts — works with DeepSeek-R1, OpenAI o3, Gemini Thinking, and Ollama thinking models:

contenox --think "why is my API slow?"
contenox run --chain .contenox/review.json --think --input @main.go

Configuration

Contenox stores all configuration in SQLite (.contenox/local.db or ~/.contenox/local.db). No YAML file needed β€” register backends and set defaults using CLI commands.
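Because the store is plain SQLite, you can inspect it with stock tooling. The sketch below assumes the sqlite3 CLI is installed and uses a toy key-value table to show the workflow; contenox's actual column layout may differ:

```shell
# Toy reproduction of a key-value config table to demonstrate inspecting
# a SQLite store; contenox's real schema may differ.
sqlite3 /tmp/contenox-demo.db \
  "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT);
   INSERT OR REPLACE INTO kv VALUES ('default-model', 'qwen2.5:7b');
   SELECT key, value FROM kv;"
# prints: default-model|qwen2.5:7b
```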

Register a backend
# Local Ollama (URL inferred automatically)
contenox backend add local --type ollama

# OpenAI (base URL inferred)
contenox backend add openai --type openai --api-key-env OPENAI_API_KEY

# Google Gemini
contenox backend add gemini --type gemini --api-key-env GEMINI_API_KEY

# Self-hosted vLLM or compatible endpoint
contenox backend add myvllm --type vllm --url http://gpu-host:8000
Set persistent defaults
contenox config set default-model    qwen2.5:7b
contenox config set default-provider ollama
contenox config set default-chain    .contenox/default-chain.json

contenox config list   # review current settings
Manage backends
contenox backend list
contenox backend show openai
contenox backend remove myvllm
Backend   --type   Notes
Ollama    ollama   Local; run ollama serve first.
OpenAI    openai   Use --api-key-env OPENAI_API_KEY
Gemini    gemini   Use --api-key-env GEMINI_API_KEY
vLLM      vllm     Self-hosted OpenAI-compatible endpoint

Safety

  • Opt-in shell access — the --shell flag must be passed explicitly to enable local_shell
  • Chain-scoped policy — allowed and denied commands are declared in the chain's hook_policies field; the default chains ship with a sensible allowlist out of the box
  • Human-in-the-loop — plan next executes one step and stops; --auto requires explicit intent
  • Local-first — with Ollama, nothing leaves your machine
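To give a sense of where the hook_policies field mentioned above lives, here is a hypothetical fragment of a chain file. The key names inside the policy object are illustrative assumptions; check the shipped default chains in .contenox/ for the real schema:

```json
{
  "hook_policies": {
    "local_shell": {
      "allowed": ["ls", "cat", "go build"],
      "denied": ["rm", "curl"]
    }
  }
}
```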

Architecture

contenox CLI
  ├── plan new       → LLM planner chain → SQLite plan + steps
  ├── plan next      → LLM executor chain → local_shell / local_fs → result persisted
  ├── vibe           → Bubble Tea TUI: chat + live plan sidebar + HITL approvals
  ├── run            → run any chain, any input type, stateless
  ├── (bare)         → stateless run via default-run-chain.json (same as run)
  └── chat           → LLM chat chain → session history persisted in SQLite

SQLite (.contenox/local.db)
  ├── plans + plan_steps   (autonomous plan state)
  ├── message_index        (chat sessions)
  └── kv                   (active session + config)

Chains are JSON files in .contenox/. They define the LLM workflow: model, hooks, branching logic. See ARCHITECTURE.md for the full picture.
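To make that concrete, here is a hypothetical skeleton of a chain file. All field names are illustrative assumptions; consult the generated .contenox/default-chain.json and ARCHITECTURE.md for the real schema:

```json
{
  "id": "my-chain",
  "tasks": [
    {
      "id": "answer",
      "type": "llm",
      "prompt": "{{.input}}",
      "hooks": ["local_shell"]
    }
  ]
}
```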

Contenox is powered by a battle-tested enterprise workflow engine. The Runtime API is also available as a self-hostable Docker deployment for teams who want the full server with REST API, observability, and multi-tenant support.


Build from source

git clone https://github.com/contenox/contenox
cd contenox
go build -o contenox ./cmd/contenox
contenox init

Questions or feedback: hello@contenox.com

Directories

Path Synopsis
Package chatservice provides chat session management and message persistence.
cmd
contenox command
Contenox Vibe: run task chains locally with SQLite, in-memory bus, and estimate tokenizer.
runtime command
core module
internal
contenoxcli
backends.go contains helpers for LLM backend and provider config KV storage.
hooks
internal/hooks/multi_repo.go
mcpserverapi
Package mcpserverapi exposes REST endpoints for managing MCP server configurations.
runtimestate
runtimestate implements the core logic for reconciling the declared state of LLM backends (from dbInstance) with their actual observed state.
vfsapi
Package vfsapi provides HTTP handlers for file and folder management.
Package libauth provides secure authentication and authorization services using JWT tokens.
Package bus provides an interface for core publish-subscribe messaging.
Package libcipher provides a collection of cryptographic utilities for encryption, decryption, integrity verification, and secure key generation.
Package routine provides utilities for managing recurring tasks (routines) with circuit breaker protection.
libs
libauth module
libdb module
libollama module
Package localhooks provides local hook integrations.
mcpoauth
Package mcpoauth implements the MCP OAuth 2.1 Authorization Code + PKCE flow for CLI clients.
Package mcpserverservice provides CRUD operations for MCP server configurations.
Package mcpworker manages persistent MCP server sessions across a distributed deployment.
Package planservice manages AI-generated execution plans.
Package sessionservice provides CRUD operations for CLI/TUI chat sessions.
Package taskengine provides a configurable workflow system for building AI-powered task chains.
tokenizer module
tools
openapi-gen command
version command
Package vfsservice provides a virtual filesystem abstraction backed by Postgres (via libdbexec).
