contenox

module v0.1.7
Published: Mar 4, 2026 License: Apache-2.0


Contenox


AI workflows at your fingertips

Contenox is a lightning-fast, fully-local CLI that turns natural language goals into persistent, step-by-step plans and executes them with real shell + custom hooks like filesystem tools. Powered by any LLM (Ollama, OpenAI, Gemini, vLLM, etc.). Zero cloud required.

$ contenox plan new "install a git pre-commit hook that prevents commits when go build fails"
Creating plan "install-a-git-pre-commit-a3f9e12b" with 5 steps. Now active.

$ contenox plan next --auto
Executing Step 1: Install necessary tools...              ✓
Executing Step 2: Create .git/hooks/pre-commit...         ✓
Executing Step 3: Edit the hook script with the check...  ✓
Executing Step 4: Write bash content to the hook file...  ✓
Executing Step 5: chmod +x .git/hooks/pre-commit...       ✓

No pending steps. Plan is complete!

The model wrote that hook. On your machine. No copy-paste hell.
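For reference, a hook like the one this plan produces might look roughly like the sketch below. This is illustrative only, not the model's literal output; the `BUILD_CMD` indirection and the `check_build` name are assumptions added for this example so the check is easy to reuse.

```shell
#!/usr/bin/env bash
# Illustrative .git/hooks/pre-commit body: refuse the commit when the
# build fails. BUILD_CMD is an assumption for this sketch; a real Go
# hook would hard-code "go build ./...".
BUILD_CMD="${BUILD_CMD:-go build ./...}"

check_build() {
  if ! $BUILD_CMD >/dev/null 2>&1; then
    echo "pre-commit: build failed, commit aborted" >&2
    return 1
  fi
}
```

A real hook script would call the check and `exit 1` on failure, then be made executable with `chmod +x .git/hooks/pre-commit`, exactly as Step 5 in the transcript does.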


⭐ Leave us a star if you like it! | 🌟 We welcome suggestions and contributions!


Why Contenox?

Contenox is different:

  • Persistent plans stored in SQLite — pause, inspect, retry, replan at any time
  • Human-in-the-loop by default — --auto only when you say so
  • Real tools — shell commands and filesystem, not just code suggestions
  • Fully offline with Ollama — no data leaves your machine
  • Chains are just JSON — write your own LLM workflows
  • Workflow Engine — Contenox is not a toy; a complete state machine lives under the hood

Quick Start

Install

Ubuntu / Linux

TAG=v0.1.7
ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-linux-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

macOS

TAG=v0.1.7
ARCH=$(uname -m | sed 's/x86_64/amd64/')
curl -sL "https://github.com/contenox/contenox/releases/download/${TAG}/contenox-${TAG}-darwin-${ARCH}" -o contenox
chmod +x contenox && sudo mv contenox /usr/local/bin/contenox
contenox --version

Or pick a binary from Releases.

First run
contenox init                              # scaffold .contenox/ with config + chain
contenox "list files in my home directory" # instant agentic chat

Requirements: an LLM with tool-calling support.

Local: ollama serve && ollama pull qwen2.5:7b (qwen2.5:7b is the smallest model with acceptable performance).
Cloud: set your API key in .contenox/config.yaml (generated by contenox init).


Full example
# 1. Create
contenox plan new "install a git pre-commit hook that blocks commits when go build fails"

# 2. Review the plan before touching anything
contenox plan show

# 3. Execute one step at a time
contenox plan next
contenox plan next
# ...

# Or run everything at once once you trust it
contenox plan next --auto

# 4. If a step went wrong
contenox plan retry 3

# 5. Final check
contenox plan show

contenox plan — AI-driven plans

contenox plan new "migrate all TODO comments in the codebase to TODOS.md"
contenox plan new "set up a git pre-commit hook that blocks commits when go build fails"
contenox plan new "find all .go files larger than 500 lines and write a refactoring report"

Contenox breaks any goal into an ordered plan, then executes it step by step using real tools.

Commands
| Command | What it does |
| --- | --- |
| contenox plan next | Run one step (safe default — review before continuing) |
| contenox plan next --auto | Run all remaining steps autonomously |
| contenox plan show | See the active plan + step status |
| contenox plan list | All plans (* = active) |
| contenox plan retry <N> | Re-run a failed step |
| contenox plan skip <N> | Mark a step skipped and move on |
| contenox plan replan | Let the model rewrite the remaining steps |
| contenox plan delete <name> | Delete a plan by name |
| contenox plan clean | Delete all completed plans |

Pro tip: Always do contenox plan show before --auto.
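If you want something between one-step-at-a-time and --auto, the documented commands compose into a small confirm-each-step loop. This is a wrapper sketch, not a built-in feature; it only assumes contenox is on your PATH, and step_through is a name invented for this example.

```shell
# Run one plan step, show the updated plan, and ask before continuing.
# Stops when a step fails or there is nothing left to run.
step_through() {
  while contenox plan next; do
    contenox plan show
    read -rp "Continue? [y/N] " ans
    [ "$ans" = "y" ] || break
  done
}
```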


Other Modes

contenox — Interactive chat
contenox "what is my current working directory?"
contenox list files in my home directory
contenox --input "explain this file"
echo "explain this" | contenox

Uses .contenox/default-chain.json. Natural language → shell tools → response.

contenox exec — Scriptable, stateless execution

For CI/pipelines where you want full control:

contenox exec --chain .contenox/my-chain.json "what is 2+2?"
cat diff.txt | contenox exec --chain .contenox/review.json --input-type chat
contenox exec --chain .contenox/doc-chain.json --input @main.go
contenox exec --chain .contenox/parse-chain.json --input-type json '{"key":"value"}'

exec is stateless — no chat history. --chain is required. Supported --input-type: string (default), chat, json, int, float, bool.


Configuration (.contenox/config.yaml)

contenox init generates this. Edit to select your provider.

Local (Ollama — default)

backends:
  - name: local
    type: ollama
    base_url: http://127.0.0.1:11434
default_provider: local
default_model: qwen2.5:7b
context: 32768
enable_local_shell: true
local_shell_allowed_commands: "bash,echo,cat,ls,chmod,sh,date,pwd,head,tail,grep,find,mkdir,cp,mv"
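The whitelist is a plain comma-separated list of command names. Conceptually the gate works like the sketch below; this is illustrative, not Contenox's actual implementation, and is_allowed is a name made up for this example.

```shell
# Sketch of whitelist gating: a command runs only if its name appears
# in the comma-separated local_shell_allowed_commands value.
ALLOWED="bash,echo,cat,ls,chmod,sh,date,pwd,head,tail,grep,find,mkdir,cp,mv"

is_allowed() {
  case ",$ALLOWED," in
    *",$1,"*) return 0 ;;
    *)        return 1 ;;
  esac
}
```

The practical upshot: a step that needs rm or curl will fail until you add those commands to the list yourself.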

OpenAI

backends:
  - name: openai
    type: openai
    base_url: https://api.openai.com/v1
    api_key_from_env: OPENAI_API_KEY
default_provider: openai
default_model: gpt-4.1-mini

Gemini

backends:
  - name: gemini
    type: gemini
    api_key_from_env: GEMINI_API_KEY
default_provider: gemini
default_model: gemini-3.1-flash-lite-preview

| Backend | type | Notes |
| --- | --- | --- |
| Ollama | ollama | Local. Run ollama serve first. |
| OpenAI | openai | api_key_from_env or api_key |
| Gemini | gemini | api_key_from_env or api_key |
| vLLM | vllm | Self-hosted OpenAI-compatible endpoint |

Safety

  • Opt-in shell access — enable_local_shell is false by default
  • Whitelist — only commands you explicitly allow can run
  • Human-in-the-loop — plan next executes one step and stops; --auto requires explicit intent
  • Local-first — with Ollama, nothing leaves your machine

Architecture

contenox CLI
  ├── plan new      → LLM planner chain → SQLite plan + steps
  ├── plan next     → LLM executor chain → local_shell / local_fs → result persisted
  ├── exec          → run any chain, any input type, stateless
  └── run (default) → LLM chat chain → interactive response

SQLite (.contenox/local.db)
  ├── plans + plan_steps   (autonomous plan state)
  ├── message_index        (chat sessions)
  └── kv                   (active session + config)
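Because the state lives in a plain SQLite file, you can peek at it with the sqlite3 CLI. The table names come from the layout above, but the exact schema is an implementation detail and may change between releases.

```shell
# List the tables in Contenox's local state (plans, plan_steps,
# message_index, kv per the layout above). Safe no-op if the db
# does not exist yet.
DB=.contenox/local.db
if [ -f "$DB" ]; then
  sqlite3 "$DB" "SELECT name FROM sqlite_master WHERE type='table';"
else
  echo "no local.db yet: run 'contenox init' first"
fi
```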

Chains are JSON files in .contenox/. They define the LLM workflow: model, hooks, branching logic. See ARCHITECTURE.md for the full picture.
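To make the shape concrete, a chain file might look something like the fragment below. Every field name here is hypothetical, invented purely for illustration; the real schema is defined by the project and documented in ARCHITECTURE.md.

```json
{
  "model": "qwen2.5:7b",
  "steps": [
    { "id": "answer", "prompt": "...", "hooks": ["local_shell"] }
  ]
}
```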

Contenox is also the local CLI layer of a full Runtime API server (PostgreSQL + NATS) for production deployments. Read the server docs →


Build from source

git clone https://github.com/contenox/contenox
cd contenox
go build -o contenox ./cmd/contenox
contenox init

Questions or feedback: hello@contenox.com

Directories

Path Synopsis
Package chatservice provides chat session management and message persistence.
cmd
contenox command
Contenox Vibe: run task chains locally with SQLite, in-memory bus, and estimate tokenizer.
runtime command
core module
internal
contenoxcli
backends.go ensures backends from config exist in the DB and cloud provider API keys in KV.
hooks
internal/hooks/multi_repo.go
runtimestate
runtimestate implements the core logic for reconciling the declared state of LLM backends (from dbInstance) with their actual observed state.
Package libauth provides secure authentication and authorization services using JWT tokens.
Package bus provides an interface for core publish-subscribe messaging.
Package libcipher provides a collection of cryptographic utilities for encryption, decryption, integrity verification, and secure key generation.
Package routine provides utilities for managing recurring tasks (routines) with circuit breaker protection.
libs
libauth module
libdb module
libollama module
Package taskengine provides a configurable workflow system for building AI-powered task chains.
tokenizer module
tools
openapi-gen command
version command
