OpenClaw-Go (OCG) Documentation
OpenClaw-Go (OCG) is a lightweight, high-performance Go implementation of OpenClaw, designed for local deployment with minimal resource usage.
Features
- Fast Startup - <1 second vs 5-10s for Node.js
- Low Memory - 50-100MB vs 200-500MB+
- Privacy - 100% local, zero data transfer
- Cost - Built-in llama.cpp, zero API costs
Table of Contents
- Quick Start
- Architecture
- Configuration
- Deployment
- Tools
- WebSocket API
- Pulse/Heartbeat System
- Cron Jobs
- Session Management
- Channel Adapters
- Database Schema
- API Endpoints
- Troubleshooting
- Performance
Quick Start
Build
cd /opt/openclaw-go
make build-all
Run
# Start all services (ocg exits after everything is ready)
# NOTE: Make sure the embedding model file exists (see EMBEDDING_MODEL_PATH)
export OPENCLAW_UI_TOKEN=your_token
./bin/ocg start
Access
Open the Web UI at http://localhost:55003 (the Gateway port) and authenticate with the value of OPENCLAW_UI_TOKEN.
Architecture
┌──────────────────────────────────────────────────┐
│              Gateway (HTTP Server)               │
│               Port: 55003 / 18789                │
├──────────────────────────────────────────────────┤
│  ┌────────┐  ┌───────────┐  ┌─────────────────┐  │
│  │ Web UI │  │ WebSocket │  │ Channel Adapter │  │
│  └────────┘  └───────────┘  └─────────────────┘  │
├──────────────────────────────────────────────────┤
│                RPC (Unix Socket)                 │
│               /tmp/ocg-agent.sock                │
└──────────────────────────────────────────────────┘
                         │
                         ▼
┌──────────────────────────────────────────────────┐
│                Agent (LLM Engine)                │
│  ┌────────┐  ┌───────────┐  ┌─────────────────┐  │
│  │Sessions│  │  Memory   │  │      Tools      │  │
│  └────────┘  └───────────┘  └─────────────────┘  │
└──────────────────────────────────────────────────┘
                         │
                         ▼
┌──────────────────────────────────────────────────┐
│             Embedding Service (HTTP)             │
│                   Port: 50001                    │
│  ┌────────────────────────────────────────────┐  │
│  │        llama.cpp (Embedding Model)         │  │
│  └────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────┘
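The Agent reaches the Embedding Service over plain HTTP. A minimal Go sketch of such a call, assuming the service exposes llama.cpp's standard /embedding endpoint on port 50001 (the exact request and response schema depends on the llama.cpp build bundled with OCG):
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// embedRequest / embedResponse mirror llama.cpp's /embedding endpoint;
// the field names are assumptions, verify them against your build.
type embedRequest struct {
	Content string `json:"content"`
}

type embedResponse struct {
	Embedding []float64 `json:"embedding"`
}

func embed(text string) ([]float64, error) {
	body, _ := json.Marshal(embedRequest{Content: text})
	resp, err := http.Post("http://localhost:50001/embedding", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out embedResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Embedding, nil
}

func main() {
	vec, err := embed("hello world")
	if err != nil {
		panic(err)
	}
	fmt.Println("dimensions:", len(vec))
}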
Configuration
Environment Variables
| Variable | Default | Description |
|---|---|---|
| OPENCLAW_UI_TOKEN | - | UI authentication token |
| OPENCLAW_API_KEY | - | LLM API key |
| OPENCLAW_BASE_URL | - | LLM API base URL |
| OPENCLAW_MODEL | - | Model name |
| OPENCLAW_FORCE_ENV_CONFIG | false | Force env.config to override DB config |
| OPENCLAW_AGENT_SOCK | /tmp/ocg-agent.sock | Unix socket path |
| EMBEDDING_SERVER_URL | http://localhost:50001 | Embedding service |
| HNSW_PATH | vector.index | Vector index file |
env.config
# LLM Configuration
OPENCLAW_API_KEY=sk-xxx
OPENCLAW_BASE_URL=https://api.openai.com/v1
OPENCLAW_MODEL=gpt-4
OPENCLAW_FORCE_ENV_CONFIG=true
# Gateway Configuration
OPENCLAW_UI_TOKEN=your_secure_token
OPENCLAW_PORT=55003
# Storage
OPENCLAW_DB_PATH=ocg.db
# Embedding
EMBEDDING_SERVER_URL=http://localhost:50001
# Required: embedding model file must exist before starting ocg
EMBEDDING_MODEL_PATH=/path/to/model.gguf
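Setting OPENCLAW_FORCE_ENV_CONFIG=true makes env.config win over values stored in the config table. A rough Go sketch of that resolution order (illustrative only, not OCG's actual config loader):
package main

import (
	"fmt"
	"os"
)

// resolve returns the effective value for one config key, following the
// precedence implied by OPENCLAW_FORCE_ENV_CONFIG (illustrative sketch).
func resolve(envVal, dbVal string, forceEnv bool) string {
	if forceEnv && envVal != "" {
		return envVal // env.config overrides the DB when forced
	}
	if dbVal != "" {
		return dbVal // otherwise a value stored in the config table wins
	}
	return envVal // fall back to the environment value
}

func main() {
	force := os.Getenv("OPENCLAW_FORCE_ENV_CONFIG") == "true"
	model := resolve(os.Getenv("OPENCLAW_MODEL"), "model-stored-in-db", force)
	fmt.Println("effective model:", model)
}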
Deployment
A one-shot deploy.sh script is included for Debian/Ubuntu hosts. It installs build dependencies, updates the repo, syncs llama.cpp, and builds binaries.
# As root (or sudo -E)
./deploy.sh
Deploy Environment Variables
| Variable | Default | Description |
|---|---|---|
| LLAMA_JOBS | 1 | Parallel build jobs for llama.cpp |
| LLAMA_STATIC | OFF | Build llama.cpp static binaries |
| BUILD_TYPE | Release | CMake build type |
| USE_SWAP | on | Auto-create swap if none exists |
| SWAP_SIZE | 4G | Swap size |
| OCG_REF | main | Git ref/branch for this repo |
| LLAMA_REF | master | Git ref/branch for llama.cpp |
Tools
| Tool | Status | Description |
|---|---|---|
| exec | ✅ Complete | Execute shell commands |
| read | ✅ Complete | Read files (50KB limit) |
| write | ✅ Complete | Write files |
| edit | ✅ Complete | Edit files |
| process | ✅ Complete | Process management |
| memory | ✅ Complete | Vector memory search/store |
| web | ✅ Basic | Web search/fetch |
| pulse | ✅ Complete | Event heartbeat system |
| browser | ⚠️ Basic | Browser control |
| sessions | ⚠️ Basic | Session management |
Usage
# Via HTTP API
curl -X POST http://localhost:55003/v1/chat/completions \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":"read /etc/hostname"}]}'
# Via WebSocket
ws://localhost:55003/ws/chat?token=$TOKEN
WebSocket API
Connection
const ws = new WebSocket('ws://localhost:55003/ws/chat?token=YOUR_TOKEN');
Send Message
ws.send(JSON.stringify({
type: 'chat',
content: JSON.stringify({
model: 'default',
messages: [{role: 'user', content: 'Hello!'}]
})
}));
Receive Response
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
if (msg.type === 'done') {
const data = JSON.parse(msg.content);
console.log(data.content); // AI response
}
};
Fallback to HTTP
The UI automatically falls back to HTTP if WebSocket is unavailable:
if (!ws || ws.readyState !== WebSocket.OPEN) {
// Use HTTP instead
await fetch('/v1/chat/completions', {...});
}
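For non-browser clients the same flow can be driven from Go. A minimal sketch using the github.com/gorilla/websocket package; the message envelope follows the JS examples above:
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/gorilla/websocket"
)

// wsMessage mirrors the envelope used above: an outer {type, content}
// frame whose content field carries a JSON string.
type wsMessage struct {
	Type    string `json:"type"`
	Content string `json:"content"`
}

func main() {
	conn, _, err := websocket.DefaultDialer.Dial(
		"ws://localhost:55003/ws/chat?token=YOUR_TOKEN", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Build the inner chat payload, then wrap it in the outer envelope.
	payload, _ := json.Marshal(map[string]any{
		"model":    "default",
		"messages": []map[string]string{{"role": "user", "content": "Hello!"}},
	})
	if err := conn.WriteJSON(wsMessage{Type: "chat", Content: string(payload)}); err != nil {
		log.Fatal(err)
	}

	// Read frames until the server signals completion with type "done".
	for {
		var msg wsMessage
		if err := conn.ReadJSON(&msg); err != nil {
			log.Fatal(err)
		}
		if msg.Type == "done" {
			var data struct {
				Content string `json:"content"`
			}
			_ = json.Unmarshal([]byte(msg.Content), &data)
			fmt.Println(data.Content) // AI response
			return
		}
	}
}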
Pulse/Heartbeat System
The pulse system provides event-driven automation with priority levels.
Priority Levels
| Level | Name | Behavior |
|---|---|---|
| 0 | Critical | Broadcast to all channels immediately |
| 1 | High | Broadcast to specified channel |
| 2 | Normal | Process when idle |
| 3 | Low | Process when available |
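In code, the table boils down to one dispatch decision per event. A hedged sketch of that mapping (type and method names are illustrative, not OCG's actual pulse implementation):
package pulse

// Event priority levels, matching the table above.
const (
	PriorityCritical = 0
	PriorityHigh     = 1
	PriorityNormal   = 2
	PriorityLow      = 3
)

// Broadcaster is a stand-in interface, not OCG's real API.
type Broadcaster interface {
	BroadcastAll(title, content string)
	BroadcastTo(channel, title, content string)
	Enqueue(title, content string) // picked up when the agent is idle
}

// dispatch shows how an event's priority could map to delivery behavior.
func dispatch(b Broadcaster, priority int, channel, title, content string) {
	switch priority {
	case PriorityCritical:
		b.BroadcastAll(title, content) // push to every channel immediately
	case PriorityHigh:
		b.BroadcastTo(channel, title, content) // push to the named channel
	default:
		b.Enqueue(title, content) // Normal/Low: processed when idle/available
	}
}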
Adding Events
# Via pulse tool
{
"action": "add",
"title": "Important Reminder",
"content": "Check the reports",
"priority": 1,
"channel": "telegram"
}
Status Check
{
"action": "status"
}
Configuration
PulseConfig{
Interval: 1 * time.Second, // Check interval
Enabled: true, // Enable system
LLMEnabled: true, // Use LLM for processing
MaxQueueSize: 100, // Event queue size
CleanupHours: 24, // Auto-cleanup after 24h
}
Cron Jobs
Schedule tasks with precise timing.
Job Types
- at: One-shot at specific time
- every: Fixed interval
- cron: Cron expression
Session Targets
- main: Run in main session (system event)
- isolated: Run in separate session
Example: Daily Briefing
{
"name": "Morning briefing",
"schedule": {
"kind": "cron",
"expr": "0 7 * * *",
"tz": "Asia/Shanghai"
},
"sessionTarget": "isolated",
"payload": {
"kind": "agentTurn",
"message": "Generate today's briefing"
},
"delivery": {
"mode": "announce",
"channel": "telegram",
"to": "USER_ID"
}
}
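The expr/tz pair follows standard five-field cron semantics, so "0 7 * * *" with tz "Asia/Shanghai" fires at 07:00 Shanghai time every day. A small stdlib-only Go sketch of what that next-run calculation amounts to (OCG's scheduler may use a cron library instead):
package main

import (
	"fmt"
	"time"
)

// nextDailyRun returns the next 07:00 in the given time zone, i.e. the
// next firing time of the expression "0 7 * * *" with tz applied.
func nextDailyRun(now time.Time, tz string) (time.Time, error) {
	loc, err := time.LoadLocation(tz)
	if err != nil {
		return time.Time{}, err
	}
	local := now.In(loc)
	next := time.Date(local.Year(), local.Month(), local.Day(), 7, 0, 0, 0, loc)
	if !next.After(local) {
		next = next.AddDate(0, 0, 1) // already past 07:00 today, run tomorrow
	}
	return next, nil
}

func main() {
	next, err := nextDailyRun(time.Now(), "Asia/Shanghai")
	if err != nil {
		panic(err)
	}
	fmt.Println("morning briefing fires at:", next)
}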
Wake Modes
- now: Execute immediately
- next-heartbeat: Wait for next heartbeat
Session Management
Session Keys
main - Main conversation
telegram:USER_ID - Telegram user session
discord:USER_ID - Discord user session
cron:JOB_ID - Cron job session
Creating Sessions
// Create session
session, _ := sm.CreateSession("session-key", "agent-1")
// Create channel session
session, _ := sm.GetOrCreateChannelSession("telegram", "123456", "agent-1")
// Add message
sm.AddMessage("telegram:123456", Message{
Role: "user",
Content: "Hello",
})
Listing Sessions
infos := sm.ListSessionInfos()
for _, info := range infos {
fmt.Printf("Session: %s, Messages: %d\n", info.Key, info.MessageCount)
}
Channel Adapters
Architecture
Gateway
   │
   ▼
ChannelAdapter
   │
   ├── Telegram (✅ Implemented)
   ├── WhatsApp (Planned)
   ├── Discord (Planned)
   └── Slack (Planned)
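Conceptually, each channel plugs into the Gateway through a common adapter contract. A hypothetical Go sketch of what that contract covers (illustrative only; OCG's real interface may differ):
package channel

import "context"

// InboundMessage is what an adapter hands to the Gateway: who said what
// on which channel. Field names are illustrative, not OCG's actual types.
type InboundMessage struct {
	Channel string // e.g. "telegram"
	UserID  string // used to build session keys like "telegram:USER_ID"
	Text    string
}

// ChannelAdapter is a hypothetical contract every channel implementation
// (Telegram today, WhatsApp/Discord/Slack later) would satisfy.
type ChannelAdapter interface {
	Name() string
	// Start begins polling or webhook handling and pushes inbound
	// messages to the handler until ctx is cancelled.
	Start(ctx context.Context, handle func(InboundMessage)) error
	// Send delivers an agent reply back to a user on this channel.
	Send(ctx context.Context, userID, text string) error
}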
Telegram Bot Setup
# Set bot token
export TELEGRAM_BOT_TOKEN=your_bot_token
# Start all services
./bin/ocg start
Bot Commands
/start - Start bot
/help - Help
/stats - Stats
/reset - Reset greeting status
Proactive Greeting
The bot sends a greeting to new users automatically:
bot.SetGreeting(true, "Hello! I'm OpenClaw-Go")
Database Schema
Tables
-- Messages
CREATE TABLE messages (
id INTEGER PRIMARY KEY,
session_key TEXT,
role TEXT,
content TEXT,
created_at DATETIME
);
-- Memories (Vector)
CREATE TABLE memories (
id INTEGER PRIMARY KEY,
key TEXT UNIQUE,
value TEXT,
category TEXT,
importance REAL,
created_at DATETIME
);
-- Session Meta
CREATE TABLE session_meta (
session_key TEXT PRIMARY KEY,
total_tokens INTEGER,
compaction_count INTEGER,
last_summary TEXT,
updated_at DATETIME
);
-- Events (Pulse)
CREATE TABLE events (
id INTEGER PRIMARY KEY,
title TEXT,
content TEXT,
priority INTEGER,
status TEXT,
channel TEXT,
created_at DATETIME,
processed_at DATETIME
);
-- Config
CREATE TABLE config (
section TEXT,
key TEXT,
value TEXT,
PRIMARY KEY (section, key)
);
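The schema above is SQLite-style, so the database file can be inspected directly from Go with database/sql. A minimal sketch, assuming the github.com/mattn/go-sqlite3 driver (which is not necessarily the driver OCG itself uses):
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "ocg.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Last five messages of the main session, newest first.
	rows, err := db.Query(
		`SELECT role, content FROM messages
		 WHERE session_key = ? ORDER BY created_at DESC LIMIT 5`, "main")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var role, content string
		if err := rows.Scan(&role, &content); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("[%s] %s\n", role, content)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}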
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check |
| /v1/chat/completions | POST | Chat API |
| /ws/chat | WS | WebSocket chat |
| /storage/stats | GET | Storage stats |
| /memory/search | GET | Search memory |
| /memory/store | POST | Store memory |
| /process/start | POST | Start process |
| /telegram/webhook | POST | Telegram webhook |
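Apart from /ws/chat, these are plain HTTP endpoints on the Gateway port. A short Go sketch that checks /health and fetches /storage/stats, assuming the latter expects the same Bearer token as /v1/chat/completions:
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// get performs a GET against the Gateway, optionally with Bearer auth.
func get(path, token string) string {
	req, err := http.NewRequest(http.MethodGet, "http://localhost:55003"+path, nil)
	if err != nil {
		log.Fatal(err)
	}
	if token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return fmt.Sprintf("%s -> %d %s", path, resp.StatusCode, body)
}

func main() {
	token := os.Getenv("OPENCLAW_UI_TOKEN")
	fmt.Println(get("/health", ""))           // health check, presumably unauthenticated
	fmt.Println(get("/storage/stats", token)) // storage stats, with Bearer auth
}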
Troubleshooting
Service Not Starting
# Check logs
tail -f /tmp/ocg/logs/gateway.log
# Verify ports
ss -ltn | grep -E "55003|50001|18000"
Database Issues
# Backup database
cp ocg.db ocg.db.backup
# Reinitialize
rm ocg.db
./bin/ocg start
Memory Search Not Working
# Check embedding service
curl http://localhost:50001/health
# Rebuild vector index
# (Coming soon)
Performance
| Metric | OCG | Official OpenClaw |
|---|---|---|
| Startup | <1s | 5-10s |
| Memory | 50-100MB | 200-500MB |
| Requests/s | ~100 | ~50 |
License
MIT License
Contributing
Contributions welcome! Please see GitHub for details.