OpenCode-Native

Published: Mar 1, 2026 · License: MIT


> [!NOTE]
> Fork of the now-archived https://github.com/MerrukTechnology/OpenCode-Native. The focus has shifted towards a headless experience oriented at autonomous agents.


⌬ OpenCode

OpenCode is a powerful CLI tool that brings AI assistance directly to your terminal. It provides both an interactive Terminal User Interface (TUI) for human users and a headless non-interactive mode ideal for scripting and autonomous agents. OpenCode acts as an intelligent coding assistant that can read, edit, and create files, execute shell commands, search through codebases, and integrate with various AI language models.

Designed for developers who prefer command-line workflows, OpenCode offers deep integration with popular AI providers while maintaining a seamless terminal experience. Whether you need help debugging code, implementing new features, or automating repetitive development tasks, OpenCode provides the tools and flexibility to get the job done efficiently.

Features

  • Interactive TUI built with Bubble Tea
  • Non-interactive mode for headless automation and autonomous agents
  • Flows: deterministic multi-step agent workflows defined in YAML (guide)
  • Subagents: highly customizable agents that call other agents to do work (see Agents below)
  • Multiple AI providers: Anthropic, OpenAI, Google Gemini, AWS Bedrock, VertexAI, KiloCode, Mistral, and self-hosted
  • Tool integration: file operations, shell commands, code search, LSP code intelligence
  • Structured output: enforce the agent's final output with a JSON Schema, ideal for automated pipelines
  • MCP support: extend capabilities via Model Context Protocol servers
  • Agent skills: reusable instruction sets loaded on-demand (guide)
  • Custom commands: predefined prompts with named arguments (guide)
  • Session management with SQLite or MySQL storage (guide)
  • LSP integration with auto-install for 30+ language servers (guide)
  • File change tracking during sessions

Installation

Install Script

```shell
curl -fsSL https://raw.githubusercontent.com/MerrukTechnology/OpenCode-Native/refs/heads/main/install | bash

# Specific version
curl -fsSL https://raw.githubusercontent.com/MerrukTechnology/OpenCode-Native/refs/heads/main/install | VERSION=0.1.0 bash
```

Homebrew

```shell
brew install MerrukTechnology/tap/opencode
```

AUR (Arch Linux)

```shell
yay -S opencode-native-bin
```

Go

```shell
go install github.com/MerrukTechnology/OpenCode-Native@latest
```

Usage

```shell
opencode                        # Start TUI
opencode -d                     # Debug mode
opencode -c /path/to/project    # Set working directory
opencode -a hivemind            # Start with a specific agent
opencode -s <session-id>        # Resume or create a session
opencode -s <session-id> -D     # Delete session and start fresh
```

Non-Interactive Mode

```shell
opencode -p "Explain context in Go"           # Single prompt
opencode -p "Explain context in Go" -f json   # JSON output
opencode -p "Explain context in Go" -q        # Quiet (no spinner)
opencode -p "Refactor this module" -t 5m      # With 5-minute timeout
```

Non-Interactive Flow Mode

```shell
opencode -p "Explain context in Go" -F review-code -A hash=93706ee  # Run the review-code flow with args
opencode -p "Explain context in Go" -F ralph-does -s wiggum         # Run a flow with pinned session data
```

All permissions are auto-approved in non-interactive mode.

Command-Line Flags
| Flag | Short | Description |
|---|---|---|
| `--help` | `-h` | Display help |
| `--debug` | `-d` | Enable debug mode |
| `--cwd` | `-c` | Set working directory |
| `--prompt` | `-p` | Non-interactive single prompt |
| `--agent` | `-a` | Agent ID to use (e.g. `coder`, `hivemind`) |
| `--session` | `-s` | Session ID to resume or create |
| `--delete` | `-D` | Delete the session specified by `--session` before starting |
| `--output-format` | `-f` | Output format: `text` (default), `json` |
| `--quiet` | `-q` | Hide spinner in non-interactive mode |
| `--timeout` | `-t` | Timeout for non-interactive mode (e.g. `10s`, `30m`, `1h`) |
| `--flow` | `-F` | Flow ID to execute (see docs/flows.md) |
| `--arg` | `-A` | Flow argument as `key=value` (repeatable) |
| `--args-file` | | JSON file with flow arguments |
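The `--args-file` flag reads flow arguments from a JSON file. The authoritative schema lives in docs/flows.md; a plausible sketch, assuming a flat object that mirrors the repeatable `-A key=value` form (the `branch` key here is hypothetical), looks like:

```json
{
  "hash": "93706ee",
  "branch": "main"
}
```

With a file like this saved as, say, `flow-args.json`, the flow would be invoked as `opencode -F review-code --args-file flow-args.json`.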

Configuration

OpenCode looks for .opencode.json in:

  1. ./.opencode.json (project directory)
  2. $XDG_CONFIG_HOME/opencode/.opencode.json
  3. $HOME/.opencode.json
Full Config Example

```json
{
  "data": {
    "directory": ".opencode"
  },
  "providers": {
    "openai": { "apiKey": "..." },
    "anthropic": { "apiKey": "..." },
    "gemini": { "apiKey": "..." },
    "vertexai": {
      "project": "your-project-id",
      "location": "us-central1"
    }
  },
  "agents": {
    "coder": {
      "model": "vertexai.claude-opus-4-6",
      "maxTokens": 5000,
      "reasoningEffort": "high"
    },
    "explorer": {
      "model": "claude-4-5-sonnet[1m]",
      "maxTokens": 5000
    },
    "summarizer": {
      "model": "vertexai.gemini-3.0-flash",
      "maxTokens": 5000
    },
    "descriptor": {
      "model": "claude-4-5-sonnet[1m]",
      "maxTokens": 80
    }
  },
  "shell": {
    "path": "/bin/bash",
    "args": ["-l"]
  },
  "mcpServers": {
    "example": {
      "type": "stdio",
      "command": "path/to/mcp-server",
      "args": []
    }
  },
  "lsp": {
    "gopls": {
      "initialization": { "codelenses": { "test": true } }
    }
  },
  "sessionProvider": { "type": "sqlite" },
  "skills": { "paths": ["~/my-skills"] },
  "permission": {
    "skill": { "*": "ask" },
    "rules": {
      "bash": { "*": "ask", "git *": "allow" },
      "edit": { "*": "allow" }
    }
  },
  "autoCompact": true,
  "debug": false
}
```
Agents

Each built-in agent can be customized:

| Agent | Mode | Purpose |
|---|---|---|
| `coder` | agent | Main coding agent (all tools) |
| `hivemind` | agent | Supervisory agent for coordinating subagents |
| `explorer` | subagent | Fast codebase exploration (read-only tools) |
| `workhorse` | subagent | Autonomous coding subagent (all tools) |
| `summarizer` | subagent | Session summarization |
| `descriptor` | subagent | Session title generation |

Agent fields:

| Field | Description |
|---|---|
| `model` | Model ID to use |
| `maxTokens` | Maximum response tokens |
| `reasoningEffort` | `low`, `medium`, `high` (default), `max` |
| `mode` | `agent` (primary, switchable via Tab) or `subagent` (invoked via the `task` tool) |
| `name` | Display name for the agent |
| `description` | Short description of the agent's purpose |
| `permission` | Agent-specific permission overrides (supports granular glob patterns) |
| `tools` | Enable/disable specific tools (e.g., `{"skill": false}`) |
| `color` | Badge color for subagent indication in the TUI |
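Putting these fields together, here is a hedged sketch of an agent override in `.opencode.json`. The `permission` shape is assumed to mirror the top-level `permission.rules` example; model and token values are purely illustrative:

```json
{
  "agents": {
    "explorer": {
      "name": "Explorer",
      "model": "vertexai.gemini-3.0-flash",
      "maxTokens": 4000,
      "reasoningEffort": "low",
      "permission": {
        "bash": { "*": "deny" }
      },
      "tools": { "skill": false }
    }
  }
}
```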
Custom Agents via Markdown

Define custom agents as markdown files with YAML frontmatter. Discovery locations (merge priority, lowest to highest):

  1. ~/.config/opencode/agents/*.md — Global agents
  2. ~/.agents/types/*.md — Global agents
  3. .opencode/agents/*.md — Project agents
  4. .agents/types/*.md — Project agents
  5. .opencode.json config — Highest priority

Example `.opencode/agents/reviewer.md`:

```markdown
---
name: Code Reviewer
description: Reviews code for quality and best practices
mode: subagent
model: vertexai.claude-opus-4-6
color: info
tools:
  bash: false
  write: false
---

You are a code review specialist...
```

The file basename (without .md) becomes the agent ID. Custom agents default to subagent mode.

Auto Compact

When enabled (default), automatically summarizes conversations approaching the context window limit (95%) and continues in a new session.

```json
{ "autoCompact": true }
```
Shell

Override the default shell (falls back to $SHELL or /bin/bash):

```json
{
  "shell": {
    "path": "/bin/zsh",
    "args": ["-l"]
  }
}
```
MCP Servers
```json
{
  "mcpServers": {
    "stdio-example": {
      "type": "stdio",
      "command": "path/to/server",
      "env": [],
      "args": []
    },
    "sse-example": {
      "type": "sse",
      "url": "https://example.org/mcp",
      "headers": { "Authorization": "Bearer token" }
    },
    "http-example": {
      "type": "http",
      "url": "https://example.com/mcp",
      "headers": { "Authorization": "Bearer token" }
    }
  }
}
```
LSP

OpenCode auto-detects and starts LSP servers for your project's languages. Over 30 servers are built-in with auto-install support. See the full LSP guide for details.

```json
{
  "lsp": {
    "gopls": {
      "env": { "GOFLAGS": "-mod=vendor" },
      "initialization": { "codelenses": { "test": true } }
    },
    "typescript": { "disabled": true },
    "my-lsp": {
      "command": "my-lsp-server",
      "args": ["--stdio"],
      "extensions": [".custom"]
    }
  },
  "disableLSPDownload": false
}
```

Disable auto-download of LSP binaries via config ("disableLSPDownload": true) or env var (OPENCODE_DISABLE_LSP_DOWNLOAD=true).

Self-Hosted Models

Local endpoint:

```shell
export LOCAL_ENDPOINT=http://localhost:1235/v1
export LOCAL_ENDPOINT_API_KEY=secret
```

```json
{
  "agents": {
    "coder": {
      "model": "local.granite-3.3-2b-instruct@q8_0"
    }
  }
}
```

LiteLLM proxy:

```json
{
  "providers": {
    "vertexai": {
      "apiKey": "litellm-api-key",
      "baseURL": "https://localhost/vertex_ai",
      "headers": {
        "x-litellm-api-key": "litellm-api-key"
      }
    }
  }
}
```
Environment Variables
| Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic Claude models |
| `OPENAI_API_KEY` | OpenAI models |
| `GEMINI_API_KEY` | Google Gemini models |
| `VERTEXAI_PROJECT` | Google Cloud VertexAI |
| `VERTEXAI_LOCATION` | Google Cloud VertexAI |
| `VERTEXAI_LOCATION_COUNT` | VertexAI token count endpoint |
| `AWS_ACCESS_KEY_ID` | AWS Bedrock |
| `AWS_SECRET_ACCESS_KEY` | AWS Bedrock |
| `AWS_REGION` | AWS Bedrock |
| `KILO_API_KEY` | KiloCode models |
| `MISTRAL_API_KEY` | Mistral models |
| `LOCAL_ENDPOINT` | Self-hosted model endpoint |
| `LOCAL_ENDPOINT_API_KEY` | Self-hosted model API key |
| `SHELL` | Default shell |
| `OPENCODE_SESSION_PROVIDER_TYPE` | `sqlite` (default) or `mysql` |
| `OPENCODE_MYSQL_DSN` | MySQL connection string |
| `OPENCODE_DISABLE_CLAUDE_SKILLS` | Disable `.claude/skills/` discovery |
| `OPENCODE_DISABLE_LSP_DOWNLOAD` | Disable auto-install of LSP servers |
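As a sketch of switching session storage to MySQL via the two session variables above: the DSN value here is a hypothetical example in the common Go MySQL driver form (`user:password@tcp(host:port)/dbname`) — verify the exact format OpenCode expects against docs/session-providers.md.

```shell
# Select the MySQL session provider instead of the default SQLite.
export OPENCODE_SESSION_PROVIDER_TYPE=mysql
# Hypothetical DSN; substitute your own credentials, host, and database.
export OPENCODE_MYSQL_DSN="opencode:secret@tcp(127.0.0.1:3306)/opencode"
```

Once exported, any `opencode` invocation in that shell persists sessions to MySQL rather than the local SQLite database.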

Architecture

OpenCode is organized into several key packages that work together to provide a seamless AI coding experience:

```
internal/
├── agent/        # Agent registry and management
├── app/          # Application setup and LSP integration
├── completions/  # Shell completions
├── config/       # Configuration management
├── db/           # Database providers (SQLite, MySQL)
├── diff/         # Diff/patch functionality
├── fileutil/     # File utilities
├── format/       # Code formatting
├── history/      # File change tracking
├── llm/
│   ├── agent/    # Agent implementation and tool execution
│   ├── models/   # Model definitions and metadata
│   ├── prompt/   # System prompts for different agents
│   ├── provider/ # LLM provider implementations
│   └── tools/    # Tool implementations (edit, bash, grep, etc.)
├── logging/      # Logging infrastructure
├── lsp/          # LSP client and language server management
├── message/      # Message types and content handling
├── permission/   # Permission system for tool access
├── session/      # Session management
├── skill/        # Agent skills system
└── tui/          # Terminal User Interface
```
Key Components
| Component | Purpose |
|---|---|
| Agent System | Manages the different AI agent types (`coder`, `hivemind`, `explorer`, etc.) |
| LLM Providers | Unified interface for OpenAI, Anthropic, Google Gemini, VertexAI, and more |
| Tool System | File operations, shell commands, code search, LSP integration |
| Skills | Reusable instruction sets loaded on demand for specialized tasks |
| Session Storage | SQLite or MySQL for persistent conversation history |
| LSP Client | Auto-detects languages and provides code intelligence |

For a detailed architecture overview, see the Providers and Models Guide.

Supported Models

OpenCode supports a wide range of LLM providers and models:

| Provider | Models |
|---|---|
| OpenAI | GPT-5, O3 Mini, O4 Mini, GPT-4o, GPT-4o-mini, GPT-4 Turbo |
| Anthropic | Claude 4.6 Sonnet (200K), Claude 4.6 Opus (200K), Claude 4.5 Sonnet |
| Google Gemini | Gemini 3.0 Pro, Gemini 3.0 Flash, Gemini 2.0 Flash |
| AWS Bedrock | Claude 4.5 Sonnet (via Bedrock) |
| VertexAI | Claude 4.6 Sonnet (1M), Claude 4.6 Opus (1M), Gemini 3.0 Pro, Gemini 3.0 Flash |
| KiloCode | KiloCode Auto |
| Mistral | Mistral models |
| Groq | Groq-hosted models |
| DeepSeek | DeepSeek chat models |
| OpenRouter | Aggregated access to 100+ models |
| Local/Ollama | Any OpenAI-compatible local model |

For a complete list of supported models and their configurations, see the Providers and Models Guide.

Tools

File & Code

| Tool | Description |
|---|---|
| `glob` | Find files by pattern |
| `grep` | Search file contents |
| `ls` | List directory contents |
| `view` | View file contents |
| `view_image` | View image files as base64 |
| `write` | Write to files |
| `edit` | Edit files |
| `multiedit` | Multiple edits in one file |
| `patch` | Apply patches to files |
| `lsp` | Code intelligence (go-to-definition, references, hover, etc.) |
| `delete` | Delete a file or directory |

| Tool | Description |
|---|---|
| `bash` | Execute shell commands |
| `fetch` | Fetch data from URLs |
| `sourcegraph` | Search public repositories |
| `task` | Run sub-tasks with a subagent (supports `subagent_type` and `task_id` for resumption) |
| `skill` | Load agent skills on demand |
| `struct_output` | Emit structured JSON conforming to a user-supplied schema |

Keyboard Shortcuts

Global

| Shortcut | Action |
|---|---|
| `Ctrl+C` | Quit |
| `Ctrl+H` | Toggle help |
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+N` | New session |
| `Ctrl+P` | Prune session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Model selection |
| `Ctrl+X` | Cancel generation |
| `Tab` | Switch primary agent |
| `Esc` | Close dialog / exit mode |

Editor

| Shortcut | Action |
|---|---|
| `i` | Focus editor |
| `Ctrl+S` / `Enter` | Send message |
| `Ctrl+E` | Open external editor |
| `Esc` | Blur editor |

Dialogs

| Shortcut | Action |
|---|---|
| `↑`/`k`, `↓`/`j` | Navigate items |
| `←`/`h`, `→`/`l` | Switch tabs/providers |
| `Enter` | Select |
| `a` / `A` / `d` | Allow / Allow for session / Deny (permissions) |

Extended Documentation

| Topic | Link |
|---|---|
| Skills | docs/skills.md |
| Flows | docs/flows.md |
| Custom Commands | docs/custom-commands.md |
| Session Providers | docs/session-providers.md |
| LSP Servers | docs/lsp.md |
| Structured Output | docs/structured-output.md |

Development

Prerequisites
  • Go 1.24.0 or higher
Building from Source
```shell
git clone https://github.com/MerrukTechnology/OpenCode-Native.git
cd OpenCode-Native
make build
```
Docker

Build and run OpenCode in a container:

```shell
# Build the Docker image (cross-compiles a Linux binary automatically)
make docker-build
```

All CLI arguments are passed through directly:

```shell
# Non-interactive prompt
docker run opencode:latest -p "Explain context in Go" -f json -q

# Run a flow
docker run opencode:latest -F my-flow -A key1=value1 -A key2=value2

# With timeout
docker run opencode:latest -p "Refactor this module" -t 5m -q
```

Mount your configuration and workspace as volumes:

```shell
docker run -ti --rm \
  -e LOCAL_ENDPOINT_API_KEY="${LOCAL_ENDPOINT_API_KEY}" \
  -e LOCAL_ENDPOINT="${LOCAL_ENDPOINT}" \
  -e VERTEXAI_PROJECT="${VERTEXAI_PROJECT}" \
  -e VERTEXAI_LOCATION="${VERTEXAI_LOCATION:-global}" \
  -e VERTEXAI_LOCATION_COUNT="${VERTEXAI_LOCATION_COUNT:-us-east5}" \
  -v ~/.opencode.json:/workspace/.opencode.json \
  -v $(pwd):/workspace \
  --network opencode_default \
  opencode:latest  # you can pass args here, e.g. -p "Analyze this codebase"
```

To run non-interactively, pass any of the command-line flags described above:

```shell
docker run --rm \
  -v ~/.opencode.json:/workspace/.opencode.json \
  -v $(pwd):/workspace \
  --network opencode_default \
  opencode:latest -p "Analyze this codebase" -q
```

The container uses /workspace as its working directory. Mount .opencode.json there to provide configuration — it is not baked into the image.

Release
```shell
make release SCOPE=patch
# or
make release SCOPE=minor
```

Acknowledgments

License

MIT — see LICENSE.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Open a Pull Request
