Published: Mar 27, 2026 License: Apache-2.0


Denkeeper


A security-first personal AI agent that lives in your chat. Built in Go as a single binary, designed to run anywhere from a Raspberry Pi to a cloud VM.

Denkeeper connects to your Telegram or Discord, routes messages to an LLM provider (Anthropic, OpenRouter, or a local Ollama instance), and remembers conversations across sessions using a local SQLite database. It enforces per-session cost budgets, user allowlists, and a tiered permission system — so you stay in control of what it can do and how much it can spend.

Installation

One-liner (Linux and macOS)

curl -fsSL https://raw.githubusercontent.com/Temikus/denkeeper/main/install.sh | sh

To install to a custom prefix (e.g. without sudo):

curl -fsSL https://raw.githubusercontent.com/Temikus/denkeeper/main/install.sh | sh -s -- --prefix ~/.local

The installer detects OS/arch, downloads the correct release archive, verifies the SHA-256 checksum, and places the binary in <prefix>/bin.
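For reference, the detection and checksum steps can be sketched manually. This is only a sketch: the tar.gz archive name below is an assumption modeled on the .deb/.rpm asset naming shown in the next sections.

```shell
# Sketch of the installer's OS/arch detection and checksum step.
# The tar.gz asset name is an assumption inferred from the .deb/.rpm naming.
VERSION=v0.0.1
OS=$(uname -s | tr '[:upper:]' '[:lower:]')                 # linux or darwin
ARCH=$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')
ASSET="denkeeper_${VERSION#v}_${OS}_${ARCH}.tar.gz"
echo "$ASSET"

# After downloading $ASSET and checksums.txt from the release page:
#   grep "$ASSET" checksums.txt | sha256sum -c -
```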

Debian / Ubuntu (.deb)

VERSION=$(curl -fsSL https://api.github.com/repos/Temikus/denkeeper/releases/latest | grep '"tag_name"' | sed 's/.*"\(v[^"]*\)".*/\1/')
curl -fsSL "https://github.com/Temikus/denkeeper/releases/download/${VERSION}/denkeeper_${VERSION#v}_linux_amd64.deb" -o denkeeper.deb
sudo dpkg -i denkeeper.deb

Configure and start the service:

sudo cp /etc/denkeeper/denkeeper.toml.example /etc/denkeeper/denkeeper.toml
sudoedit /etc/denkeeper/denkeeper.toml
sudo systemctl enable --now denkeeper
journalctl -u denkeeper -f

RHEL / Fedora (.rpm)

VERSION=$(curl -fsSL https://api.github.com/repos/Temikus/denkeeper/releases/latest | grep '"tag_name"' | sed 's/.*"\(v[^"]*\)".*/\1/')
curl -fsSL "https://github.com/Temikus/denkeeper/releases/download/${VERSION}/denkeeper_${VERSION#v}_linux_amd64.rpm" -o denkeeper.rpm
sudo rpm -i denkeeper.rpm

Docker

docker pull ghcr.io/temikus/denkeeper:latest
docker run -d --name denkeeper \
  -v ~/.denkeeper:/data \
  ghcr.io/temikus/denkeeper:latest

Homebrew (macOS)

brew install Temikus/denkeeper/denkeeper

Verify release signatures

All release archives are signed with cosign (keyless OIDC — no long-lived keys):

cosign verify-blob \
  --signature checksums.txt.sig \
  --certificate checksums.txt.pem \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='https://github.com/Temikus/denkeeper/.github/workflows/release.yml.*' \
  checksums.txt

Docker images are signed and carry SLSA build provenance attestations:

cosign verify \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='https://github.com/Temikus/denkeeper/.github/workflows/release.yml.*' \
  ghcr.io/temikus/denkeeper:latest

Features

  • Single binary — no runtime dependencies, no containers required
  • Multi-agent routing — run multiple named agents, each with their own persona, skills, LLM model, and permission tier
  • Telegram + Discord — chat with your agent from your phone or Discord server, including inline Approve/Deny buttons for supervised actions; both adapters can run simultaneously
  • User allowlist — only approved user IDs can interact (per-adapter)
  • LLM routing — pluggable provider interface; Anthropic (direct), OpenRouter (cloud, hundreds of models), and Ollama (local inference) built-in
  • Fallback strategies — automatic model/provider switching on errors, rate limits, or low funds
  • Cost tracking — per-session budgets with automatic cutoff
  • Conversation memory — SQLite-backed, persistent across restarts
  • Scheduler — cron expressions, named intervals, and @daily/@hourly shorthand; per-schedule agent targeting and session modes
  • Skills — flat markdown files with TOML frontmatter; trigger-based filtering (command:/schedule:) and per-agent skill merging
  • MCP tools — spawn MCP stdio servers, discover tools, and execute tool calls in an agentic loop
  • Plugin system — subprocess plugins with capability declarations; tools capability wires plugin tools into the agent's LLM loop
  • Web dashboard — embedded Svelte UI (served via the API server) with overview, chat, sessions, approvals, schedules, skills, agent context viewer, and API key management
  • Voice — speech-to-text and text-to-speech via OpenAI (Whisper + TTS)
  • Permission tiers — autonomous, supervised (default), and restricted; configurable per-agent or per-schedule
  • Approval workflows — supervised-tier actions (profile updates, skill creation, schedule additions) require explicit human approval via chat buttons (Telegram/Discord) or REST API
  • Config MCP server — per-agent in-process MCP tools let the LLM list/create skills, list/add schedules, and inspect its own permission tier at runtime
  • External REST API — HTTP server with scoped API key auth, rate limiting, CORS, and TLS support; chat endpoint with SSE streaming, session management, and approval CRUD
  • Personality — ships with a SOUL.md that gives the agent character (editable)

Architecture

Adapter (Telegram/Discord) → Dispatcher → Engine (per agent) → LLM Router → Provider (Anthropic/OpenRouter/Ollama)
                                               ↕                    ↕
                                           MemoryStore          CostTracker
                                           (SQLite)

API Server (/api/v1/...) ──────────────────────┘
Scheduler ─────────────────────────────────────┘

The Dispatcher routes incoming messages to named agent Engines based on adapter bindings. Each Engine checks permissions, loads conversation history, builds the system prompt (persona + skills), calls the LLM (with tool-call loop if MCP tools are configured), stores the response, and sends it back through the adapter.

Quick start

Prerequisites

  • Go toolchain
  • just command runner
  • A Telegram or Discord bot token, plus an LLM provider API key (or a local Ollama instance)
  • Node.js (only needed to build the web dashboard)

Setup

# Clone
git clone https://github.com/Temikus/denkeeper.git
cd denkeeper

# Copy and edit the config
mkdir -p ~/.denkeeper
cp denkeeper.toml.example ~/.denkeeper/denkeeper.toml
# Fill in your token, API key, and user ID
$EDITOR ~/.denkeeper/denkeeper.toml

# Build and run
just build
./pkg/bin/denkeeper serve

Or run directly without building:

just serve

Configuration

Denkeeper uses a single TOML file (default ~/.denkeeper/denkeeper.toml). See denkeeper.toml.example for all options.

Key sections:

Section Purpose
[telegram] Bot token and allowed user IDs
[discord] Bot token and allowed user snowflake IDs
[llm] Default provider (anthropic/openrouter/ollama), model, and per-session cost cap
[llm.anthropic] Anthropic API key (direct provider; no OpenRouter key needed)
[llm.openrouter] OpenRouter API key
[llm.ollama] Ollama base URL (default: http://localhost:11434)
[[llm.fallback]] Fallback strategies (error/rate_limit/low_funds triggers)
[session] Default permission tier (supervised/autonomous/restricted)
[[agents]] Multi-agent definitions (persona, skills, LLM model, adapter bindings)
[tools.*] MCP tool server definitions
[plugins.*] Subprocess plugin definitions (capability declarations)
[voice] STT/TTS configuration (OpenAI)
[api] External REST API (listen addr, TLS, CORS, rate limiting, API keys with scopes)
[[schedules]] Recurring tasks (cron, interval, or named schedules)
[memory] SQLite database path
[log] Log level and format
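Putting a few of these sections together, a minimal config might look like the sketch below. All tokens, IDs, and the model name are placeholders, and exact key names should be checked against denkeeper.toml.example.

```toml
# Minimal sketch of ~/.denkeeper/denkeeper.toml (placeholder values;
# verify key names against denkeeper.toml.example).
[telegram]
token = "123456:ABC-your-bot-token"
allowed_users = [123456789]

[llm]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"

[llm.openrouter]
api_key = "sk-or-your-key"

[session]
tier = "supervised"

[memory]
db_path = "~/.denkeeper/denkeeper.db"
```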

Skills

Skills are markdown files that teach the agent how to handle specific tasks. They use TOML frontmatter enclosed in +++ delimiters:

+++
name = "daily-briefing"
description = "Compile and deliver a daily briefing"
version = "1.0.0"
triggers = ["schedule:daily:08:00", "command:briefing"]
+++

# Daily Briefing

When triggered, compile a briefing with:
1. Weather forecast for the user's location
2. Top 3 news headlines
3. Any pending reminders

Place skill files in ~/.denkeeper/skills/ (configurable via [agent] skills_dir). Subdirectories with a SKILL.md file are also supported. Skills with triggers are only injected when matched; skills without triggers are always included.

Agent-specific skills in <persona_dir>/skills/ override global skills of the same name.

A sample help skill is included in agents/default/skills/.
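As a concrete example, a minimal always-on skill (no triggers, so it is always injected) can be dropped into the skills directory like this. The "greeting" skill itself is hypothetical, used only for illustration:

```shell
# Create a minimal skill file with TOML frontmatter in +++ delimiters.
# "greeting" is a hypothetical skill used only for illustration.
mkdir -p ~/.denkeeper/skills
cat > ~/.denkeeper/skills/greeting.md <<'EOF'
+++
name = "greeting"
description = "Greet the user by name"
version = "1.0.0"
+++

# Greeting

At the start of a new session, greet the user warmly by name.
EOF
```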

Multi-Agent

Define multiple agents, each with their own persona, skills, LLM model, and adapter bindings:

[[agents]]
name = "default"
persona_dir = "~/.denkeeper/agents/default"
adapters = ["telegram"]              # wildcard: all Telegram messages

[[agents]]
name = "work-assistant"
persona_dir = "~/.denkeeper/agents/work-assistant"
adapters = ["telegram:987654321"]    # specific chat only
llm_model = "openai/gpt-4o"
session_tier = "restricted"

If no [[agents]] section is present, a single "default" agent is synthesized from [agent]/[session].

Schedules

Schedules support three expression formats, per-schedule agent targeting, and configurable session modes:

[[schedules]]
name = "daily-briefing"
type = "agent"
schedule = "0 8 * * *"
skill = "daily-briefing"
agent = "default"                # target agent (default: "default")
session_tier = "supervised"
session_mode = "isolated"        # fresh context each run (default: "shared")
channel = "telegram:YOUR_CHAT_ID"
enabled = true

[[schedules]]
name = "hourly-check"
type = "agent"
schedule = "@every 1h"           # or @daily, @hourly, @weekly
channel = "telegram:YOUR_CHAT_ID"

session_mode = "isolated" creates a fresh conversation context for each run so scheduled jobs don't mix into your regular chat history.

REST API

Enable the API with [api] enabled = true in your config. All endpoints (except /health) require a Bearer token matching a configured API key.

[api]
enabled = true
listen = "0.0.0.0:8080"

[[api.keys]]
name = "my-client"
key  = "dk-your-secret-key"
scopes = ["chat", "sessions:read", "costs:read"]

Available scopes: chat, admin, sessions:read, costs:read, skills:read, schedules:read, approvals:read, approvals:write

Endpoints:

Method Path Scope Description
GET /api/v1/health Health check (no auth)
POST /api/v1/chat chat Send a message; returns { session_id, response }. Add Accept: text/event-stream for SSE.
GET /api/v1/sessions sessions:read List all conversations
GET /api/v1/sessions/{id}/messages sessions:read Get messages for a session
DELETE /api/v1/sessions/{id} sessions:read Delete a session and its history
GET /api/v1/agents admin List agents with metadata
GET /api/v1/agents/{name} admin Agent details and skills
GET /api/v1/skills skills:read List all skills across agents
GET /api/v1/skills/{agent} skills:read List skills for a specific agent
GET /api/v1/schedules schedules:read List schedules with run times
GET /api/v1/costs costs:read Cost summary
GET /api/v1/approvals approvals:read List approval requests (filter by ?status=pending)
GET /api/v1/approvals/{id} approvals:read Get a single approval request
POST /api/v1/approvals/{id}/approve approvals:write Approve a pending request
POST /api/v1/approvals/{id}/deny approvals:write Deny a pending request

Chat example:

# Non-streaming
curl -X POST http://localhost:8080/api/v1/chat \
  -H "Authorization: Bearer dk-your-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "session_id": "my-session"}'

# SSE streaming
curl -X POST http://localhost:8080/api/v1/chat \
  -H "Authorization: Bearer dk-your-secret-key" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"message": "Hello!", "session_id": "my-session"}'

Pass the same session_id in subsequent requests to continue the conversation. Omit it to start a new session with an auto-generated ID.
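Capturing session_id from the first reply makes continuation easy to script. A sketch: the sed extraction assumes the compact JSON shape shown above, and FIRST_REPLY stands in for a live server response.

```shell
# FIRST_REPLY stands in for the JSON a live /api/v1/chat call would return.
FIRST_REPLY='{"session_id":"3f2a","response":"Hello!"}'

# Pull out session_id (assumes compact JSON; use jq for anything fancier).
SESSION_ID=$(printf '%s' "$FIRST_REPLY" | sed 's/.*"session_id":"\([^"]*\)".*/\1/')

# Body for the follow-up request, carrying the same session_id:
printf '{"message":"What did I just say?","session_id":"%s"}\n' "$SESSION_ID"
```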

Development

just is used as the command runner. Run just to see all available recipes:

just build           # Build the denkeeper binary (requires web/dist/ to exist)
just build-ui        # Build the Svelte web dashboard (requires Node.js)
just build-full      # Build web dashboard then Go binary in one step
just serve           # Start the agent (just serve ./path/to/config.toml)
just web-dev         # Start Vite dev server for dashboard hot-reload
just test            # Run all tests with race detector
just test-v          # Verbose test output
just test-pkg <pkg>  # Test a single package (e.g. just test-pkg internal/agent)
just test-cover      # Tests with coverage report
just test-cover-html # Open coverage in browser
just lint            # Run golangci-lint
just lint-fix        # Lint with auto-fix
just fmt             # Format all Go files
just fmt-check       # CI-friendly format check
just vet             # Run go vet
just check           # Run all checks (fmt + vet + lint + test)
just tidy            # go mod tidy
just clean           # Remove build artifacts
just loc             # Count lines of source vs test code

Project structure

cmd/denkeeper/       Entry point
internal/
  adapter/           Platform integrations
    telegram/        Telegram bot adapter
    discord/         Discord bot adapter
  agent/             Dispatcher, engine, and conversation memory
  api/               External REST API server
  approval/          Approval workflow manager, store, registry, and callback handler
  config/            TOML config parsing and validation
  configmcp/         Per-agent Config MCP server (skill/schedule/tier tools)
  llm/               Provider interface, router, cost tracking
    anthropic/       Anthropic direct client
    openrouter/      OpenRouter client
    ollama/          Ollama local inference client
  persona/           Persona file loader (SOUL.md, USER.md, MEMORY.md)
  plugin/            Subprocess plugin manager
  scheduler/         Cron and interval scheduling
  security/          Permission engine (tiers)
  skill/             Skill file loader, trigger matching, merging
  tool/              MCP tool server manager
  voice/             STT/TTS provider interface
    openai/          OpenAI Whisper + TTS client
  web/               Embedded web dashboard handler (serves web/dist/)
web/                 Svelte dashboard source (npm build → web/dist/)
pkg/bin/             Build output (gitignored)
agents/default/
  skills/            Bundled skills (e.g. help.md)
  SOUL.md            Agent personality

Roadmap

Denkeeper is built in phases:

Phase 1 — Foundation

  • Telegram adapter with user allowlist
  • LLM routing via OpenRouter
  • Conversation memory (SQLite)
  • Per-session cost budgets
  • Permission engine (supervised tier)
  • Agent persona system (SOUL.md, USER.md, MEMORY.md injection)

Phase 2 — Core Features

  • Multi-agent routing with per-agent personas, skills, LLM models, and permissions
  • Scheduler with cron/interval/named expressions, per-schedule agent targeting
  • Configurable session modes for schedules (shared/isolated)
  • Skills system with trigger-based filtering and per-agent merge
  • MCP tool support (agentic tool-call loop)
  • Fallback strategies (error/rate_limit/low_funds → switch_provider/switch_model/wait_and_retry)
  • Voice messages (STT/TTS via OpenAI)
  • Three permission tiers (autonomous/supervised/restricted), per-agent and per-schedule
  • External REST API server skeleton (auth, rate limiting, CORS, TLS, health endpoint)

Phase 3 — Extensibility

  • REST API chat endpoint — POST /api/v1/chat with JSON response and SSE streaming, session_id for conversation continuity, DELETE /api/v1/sessions/:id
  • Approval workflows — supervised-tier Telegram inline buttons (Approve/Deny) + REST API (GET|POST /api/v1/approvals/...); TTL expiry, stale callback UX, keyboard auto-removal on resolution
  • Config MCP server — per-agent in-process MCP tools for skill and schedule self-modification
  • Ollama LLM provider — local inference with conditional OpenRouter API key validation
  • Plugin system — subprocess plugins with capability declarations (capabilities = ["tools"]); Docker sandboxing planned
  • Web dashboard — embedded Svelte UI with overview, sessions, approvals, schedules, skills, and agent context viewer (persona status + MCP tool names)

Phase 4 — Polish

  • Discord adapter — DM and guild channel support, allowlist, typing indicator, action-row approval buttons
  • Anthropic direct LLM provider — Anthropic Messages API, tool_use support, no OpenRouter dependency
  • API Key CRUD — runtime key management (create/revoke/rotate) without TOML restarts
  • Web dashboard Chat page — SSE streaming chat UI in the dashboard
  • GoReleaser, .deb/.rpm packages, Homebrew tap config
  • CI/CD pipeline (golangci-lint, govulncheck, cosign signing, SBOM generation)
  • One-liner install script + systemd service unit

Phase 5 — Documentation

  • Hugo documentation website

License

Apache 2.0
