Documentation
Overview
agent_controller.go — AgentController interface and shared types for the agent-state API surface.
This file breaks the import cycle between the engine MCP layer (which exposes the tools) and the root package (which owns the actual *ServeAgent goroutine). The root package implements AgentController in a thin adapter (see root: serve_agents.go) and injects it into the MCP server via NewMCPServerWithAgentController.
Design rationale (see Agent T's design spike, cog://mem/semantic/surveys/2026-04-21-consolidation/agent-T-agent-state-design):
- Identities are static YAML (AgentProvider reconciles them; they don't run).
- Sessions are external-client conversational contexts (Agent P's lane).
- Agents are kernel-internal goroutines — the ServeAgent singleton today.
This interface surfaces the agent. It does NOT surface identities or sessions; those are separate APIs.
agent_state_query.go — query helpers for the agent-state API.
Pure functions that operate on the AgentController interface. Keeping them in a separate file makes it clear what logic lives in the engine layer (validation, clamping, tool/HTTP marshaling) versus the root package's adapter (which actually reads *ServeAgent state).
benchmark.go — foveated context benchmark runner
Measures context assembly quality against a set of prompts with known expected CogDoc matches. For each prompt it records:
- Recall: fraction of expected docs that appeared in the assembled context
- Precision: fraction of injected docs that were expected
- Assembly latency (ms)
- Total token count of assembled context
- Model response (for manual quality review)
Run with: cogos-v3 bench [--prompts path] [--model name] [--budget n]
blobs_cmd.go — CLI commands for blob store management
Usage:
cogos-v3 blobs list              — list all stored blobs
cogos-v3 blobs store <file>      — manually store a file
cogos-v3 blobs get <hash> <out>  — retrieve blob to file
cogos-v3 blobs verify            — check all pointers have matching blobs
cogos-v3 blobs gc [--dry-run]    — garbage collect unreferenced blobs
cogos-v3 blobs init              — initialize the blob store
blobstore.go — Content-addressed blob store for CogOS
Stores large binary content (PDFs, audio, model weights) outside of git in a content-addressed directory at .cog/blobs/. Files are addressed by SHA-256 hash and stored with a 2-character prefix directory for filesystem efficiency.
Layout:
.cog/blobs/
├── a1/
│   └── b2c3d4e5f6...      (blob content)
├── manifest.jsonl         (index of all stored blobs)
└── .gitkeep
The blob store is gitignored. CogDocs reference blobs via pointer files (type: blob.pointer) that carry the hash, size, and content type.
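The addressing scheme above can be sketched as a small helper. `blobPath` is an illustrative name (not necessarily the kernel's), assuming the filename under the 2-character prefix directory is the remainder of the hex hash:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// blobPath maps raw content to its store location: SHA-256 of the
// content, with the first two hex characters as the prefix directory.
func blobPath(root string, content []byte) (hash, path string) {
	sum := sha256.Sum256(content)
	hash = hex.EncodeToString(sum[:])
	return hash, filepath.Join(root, ".cog", "blobs", hash[:2], hash[2:])
}

func main() {
	hash, path := blobPath("/ws", []byte("hello"))
	fmt.Println(hash)
	fmt.Println(path)
}
```

Because the hash is derived from the bytes themselves, storing the same file twice is a no-op and a corrupted blob is detectable by rehashing.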
bus_consumers.go — consumer cursor registry for the bus surface.
Track 5 Phase 3: port of the consumer-cursor portion of root's bus_stream.go (ADR-061 server-side consumer position tracking). Kept in a separate file from the SSE broker for readability — the two surfaces cooperate but are decoupled.
Persistence layout:
{workspace}/.cog/run/bus/{bus_id}.cursors.jsonl — append-only log, last entry per consumer wins
bus_session.go — per-bus session manager + hash-chained event log.
Track 5 Phase 3: ported verbatim from the root package's bus_session.go so that the `/v1/bus/*` HTTP surface lives in engine. The storage layout is identical to root:
{workspace}/.cog/.state/buses/
registry.json — bus metadata catalogue
{bus_id}/events.jsonl — append-only hash chain (one CogBlock/line)
Bus events use pkg/cogfield.Block as the wire type; the hash chain is per-bus (distinct from the ledger chain in ledger.go — do NOT merge).
Byte-compat with root: the canonical form used for hash computation and the event JSON shape must stay identical. The bridge at cog-sandbox-mcp/src/cog_sandbox_mcp/tools/cogos_bridge.py reads:
{v: 2, bus_id, seq, ts, from, type, payload, prev_hash?, prev?, hash}
bus_stream.go — bus-session-backed SSE broker (parallel to the ledger EventBroker in eventbus.go).
Track 5 Phase 3: library-only port of root's bus_stream.go broker so that AppendEvent can fan out to SSE subscribers. No HTTP route is registered in this phase — PR #16 already owns `/v1/events/stream` (ledger-backed). The broker exists here so future work (or a caller that wires its own SSE handler) can subscribe to per-bus events without re-running the port.
This surface is intentionally distinct from the ledger EventBroker:
EventBroker (eventbus.go) — ledger-append events, hash-chain over all events
BusEventBroker (this file) — per-bus events from .cog/.state/buses/{id}/events.jsonl
Per I2 the two chains stay separate.
canvas_embed.go — embeds and serves the canvas-based dashboard
The canvas view is served at GET /canvas from the v3 daemon. It provides an infinite-canvas spatial interface with draggable nodes, real-time chat, and CogDoc visualization.
chat.go — interactive chat REPL
Two modes:
--direct Calls OllamaProvider directly (no daemon needed). Useful for
offline testing or when the daemon isn't running.
(default) POSTs to the running daemon at localhost:PORT/v1/chat/completions
and streams the SSE response to stdout.
Usage:
cogos-v3 chat                 # connect to daemon on default port 6931
cogos-v3 chat --port 6931     # explicit port
cogos-v3 --port 6931 chat     # flags must precede subcommand name
cogos-v3 chat --direct        # bypass daemon, talk directly to Ollama
chunk.go — Turn-aware document chunking for CogOS v3
Splits CogDoc content into chunks suitable for embedding. Conversations (ChatGPT exports, Claude sessions, etc.) are chunked by turn boundaries so that no chunk splits mid-turn. Non-conversation content falls back to paragraph/character-based chunking.
The target chunk size is in characters (~500 tokens at 4 chars/token). A single oversized turn becomes its own chunk regardless of target.
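The fallback packing rule can be sketched as follows; `chunkByParagraph` is an illustrative name, and the real chunker also handles turn boundaries, which this sketch omits:

```go
package main

import (
	"fmt"
	"strings"
)

// chunkByParagraph is a sketch of the non-conversation fallback:
// pack paragraphs into chunks of at most target characters, except
// that a single oversized paragraph becomes its own chunk.
func chunkByParagraph(text string, target int) []string {
	var chunks []string
	var cur strings.Builder
	for _, p := range strings.Split(text, "\n\n") {
		if cur.Len() > 0 && cur.Len()+len(p) > target {
			chunks = append(chunks, cur.String())
			cur.Reset()
		}
		if cur.Len() > 0 {
			cur.WriteString("\n\n")
		}
		cur.WriteString(p)
	}
	if cur.Len() > 0 {
		chunks = append(chunks, cur.String())
	}
	return chunks
}

func main() {
	text := "aaaa\n\nbbbb\n\ncccc"
	fmt.Println(len(chunkByParagraph(text, 10))) // → 2
}
```

With the 4 chars/token heuristic, a ~500-token target corresponds to `target ≈ 2000` characters.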
main.go — CogOS v3 kernel entry point
Starts the continuous process daemon. These goroutines run concurrently:
- process.Run(ctx) — the cognitive loop (field updates, consolidation, heartbeat)
- server.Start() — the HTTP API
Flags:
--port       API port (default 6931; v2 is 5100)
--workspace  path to workspace root (auto-detected from cwd if omitted)
--config     (reserved for future use)
cli_emit.go — `cogos emit` subcommand: ledger-append for hook-fired events.
Supports two invocation shapes, both historically dispatched by the root package's `case "emit":` (cog.go:5625 → cmdEmit at cog.go:2621).
Hook-style (the live form, used by .cog/hooks/* via lib/cog_emit.py):
cogos emit --json '{"type":"SESSION_START","session_id":"…","data":{…}}' \
--identity system --source hook
Handler-style (historical, used by SDK event client and manual invocations):
cogos emit <event-name> [--dry-run]
Phase 1 of Track 5 (per Agent I2's revised dead-code plan): this engine implementation exists alongside the root `cmdEmit`. The root binary still dispatches to root's path today; once Phase 4 flips the Makefile default build target to cmd/cogos/, the engine path takes over for the installed `cogos` binary. Hook invocations must continue to succeed at every step of the cutover.
Engine-side behavior vs root:
- Hook-style (--json path): engine ACTUALLY writes the event to the per-session ledger via AppendEvent (hash-chained). Root silently accepted these flags but treated "--json" as the event name and fired no handlers — so on-disk effect was zero. This is a strict improvement: hooks always returned 0 under root, and they continue to return 0 here, but now the event actually lands in the ledger.
- Handler-style (<event> [--dry-run] path): engine emits a noop for now (no handler index port in Phase 1; events dir is empty in live workspaces). Root's path is retained until Phase 5 sweep.
cli_mcp.go — `cogos mcp serve` subcommand: run the engine MCPServer on stdio.
Phase 2 of Track 5 (per Agent I2's revised dead-code plan): this engine implementation exists alongside the root-package `cmdMCP` (mcp.go). The root-linked `cogos` binary still dispatches to its own path today. Once Phase 4 flips the Makefile default build target to cmd/cogos/, `cogos mcp serve` will naturally route through here.
Byte-compat and feature diff vs root:
Transport: both use stdio with newline-delimited JSON-RPC 2.0. Root implements the JSON-RPC loop by hand (mcp.go:515 Run); engine uses the upstream modelcontextprotocol/go-sdk StdioTransport which speaks the same wire format.
Tool catalogue: root's cmdMCP registers the 4 kernel-native tools (memory_search / memory_read / memory_write / coherence_check) plus the bridge-mode external-tool loader. Engine's MCPServer registers the FULL engine tool catalogue (20+ tools including ledger, traces, config, agent-state, kernel-slog, tool-calls, conversation, event-bus) via registerTools in mcp_server.go.
This is a strict super-set; every root tool has an engine equivalent. When the Phase 4 Makefile switch lands, `cogos mcp serve` users get the richer surface without any additional work.
Bridge mode: root's --bridge flag (OpenClaw gateway) is not mirrored in Phase 2. It is scoped for a follow-up if/when needed; bridge mode is not used by the kernel-native workflow.
Lifecycle:
cogos mcp serve [--workspace PATH]
Workspace resolution mirrors other engine subcommands (auto-detected via findWorkspaceRoot when --workspace is absent). The server runs until the client closes stdin (EOF) or SIGINT/SIGTERM is received, then returns 0.
cogdoc_service.go — Unified CogDoc write path
CogDocService is the ONLY write path for CogDoc mutations. It ensures that every write synchronises all kernel state: file write, index refresh, attentional field boost, and ledger event emission.
Before this service, three MCP tool handlers (toolWriteCogdoc, toolPatchFrontmatter, toolIngest) each performed an inconsistent subset of these steps. CogDocService centralises the contract.
coherence.go — CogOS v3 coherence validation
Simplified from apps/cogos/validation.go (v2.4.0). Provides the 4-layer validation stack as callable functions. The continuous process runs this on a cadence; it is not session-triggered.
Layers:
- Schema — frontmatter structure valid
- Invariants — system invariants hold (nucleus loaded, workspace intact)
- Policy — kernel boundary not violated
- Consistency — cross-artifact coherence
config.go — CogOS v3 configuration loading
config_write.go — kernel.yaml mutation plumbing for CogOS v3.
Implements the Config Mutation API per Agent O's design (`cog://mem/semantic/surveys/2026-04-21-consolidation/agent-O-config-mutation-design`).
Core exports consumed by MCP / HTTP surfaces:
ReadConfigOnDisk(root)        — parse kernel.yaml into kernelConfig
WriteConfigPatch(root, patch) — validate + atomic-write with rotating backups
RollbackConfig(root, name)    — restore a prior .bak-<ts>
ResolveFromKernelConfig(kc)   — project kernelConfig → effective *Config
DefaultKernelYAML()           — hardcoded defaults for diff surfaces
Merge semantics follow RFC 7396 (JSON Merge Patch): explicit null deletes the key (restoring LoadConfig's default on next boot); missing keys are preserved.
Concurrency: package-level writeConfigMu serializes all mutating operations. v1 intentionally writes to disk and reports `requires_restart: true` — we never mutate live *Config pointers on the hot path.
context_assembly.go — foveated context assembly for chat requests
The engine owns the full context window. It accepts the client's messages[], decomposes them, scores conversation turns alongside CogDocs, and renders everything into a stability-ordered token stream within the configured budget.
Stability zones (ordered for KV cache optimization):
Zone 0: Nucleus (identity card) — most stable, always present
Zone 1: CogDocs + client system prompt — shifts slowly per query
Zone 2: Conversation history — scored by recency + relevance, evictable
Zone 3: Current message — always present
[Reserve: OutputReserve tokens for model generation]
Token budget is approximated as chars/4. Default budget: 32768 tokens (matches provider context_window from providers.yaml).
Any OpenAI-compatible client works transparently — the engine intercepts the standard messages[] array and manages what the model actually sees.
context_blocks.go — builder functions for well-known context blocks
Each build* function produces a single ContextBlock (or nil when the source data is unavailable). The foveated context pipeline calls these builders, collects non-nil results into a ContextFrame, and renders the frame.
context_frame.go — structured output type for the foveated context rendering pipeline
ContextFrame is the intermediate representation between context assembly and rendering. Each block is named, tiered by priority, annotated with a stability hint (for KV cache optimization), and carries its rendered content.
The rendering pipeline (serve_foveated.go) can compose, prioritize, and budget-fit blocks before emitting the final HTML comment block stream.
conversation_query.go — Reader for the turn.completed ledger stream.
Thin wrapper that scans .cog/ledger/<sid>/events.jsonl for turn.completed events and optionally hydrates each with the full text from the per-session sidecar .cog/run/turns/<sid>.jsonl.
When Agent L's QueryLedger API lands this wrapper will collapse to a 5-line shim over QueryLedger(event_type="turn.completed"); until then we do the (small) ledger scan locally. The scan is O(events-in-session) — cheap for typical sessions of <100 turns.
dashboard_embed.go — embeds and serves the web dashboard
The dashboard is a single HTML file served at GET / from the v3 daemon. No build step, no external dependencies, no separate process.
debug.go — introspection endpoints for the foveated context engine
Provides real-time visibility into engine state:
GET /v1/debug/last    — full pipeline snapshot from the most recent chat request
GET /v1/debug/context — current context window as zones with ordering and token counts
No external dependencies. Just curl it.
docs_generate.go — Auto-documentation pipeline
Walks the CogDoc corpus, parses frontmatter, groups by type/status/sector, and generates deterministic documentation outputs:
- DASHBOARD.md — inbox health (raw/enriched/integrated counts)
- INDEX.md — research index grouped by tags
- CATALOG.md — tool/skill inventory
- README.md — per-directory summaries
This is the efferent pathway: knowledge flows OUT of the CogDoc substrate as human-readable documentation. No LLM calls — purely deterministic.
Usage: cogos-v3 docs [--workspace PATH]
eventbus.go — in-process event broker for the kernel ledger.
The broker is the live fan-out companion to the hash-chained ledger. Every call to AppendEvent publishes the envelope here; subscribers get filtered, non-blocking deliveries via a buffered channel. A small ring buffer retains recent events so that reconnecting subscribers can replay the tail without re-reading the JSONL ledger.
Design constraints (from Agent N's design survey):
- Broker hook lives INSIDE AppendEvent (the single write sink) so that consolidate.go, cogblock_ledger.go, process.go — all of which call AppendEvent directly — feed the live bus without extra wiring.
- Publish is fire-and-forget: the ledger never blocks on a slow SSE consumer. Full subscriber channels drop events and increment a counter.
- A package-level currentBroker is set once by NewProcess; tests that want isolation use SetCurrentBroker(nil).
- Close unblocks all waiting subscribers so servers can shut down cleanly.
events_query.go — observability-flavored wrapper around QueryLedger.
The hash-chained ledger (ledger.go / ledger_query.go) is the source of truth. This file exists so the event-bus surface (MCP cog_read_events, HTTP GET /v1/events) can reuse that read path without dragging audit semantics (verify_chain, seq pagination) into a debugging tool.
Differences vs. QueryLedger:
- Accepts a Source filter in addition to SessionID/EventType.
- Since/Until accept either RFC3339 or duration shorthand ("5m"), resolved by the caller via ParseSinceDuration before construction.
- Order toggles asc/desc (QueryLedger is always asc within a session). Default is desc (newest first — "what's happening right now?").
- No verify_chain, no NextAfterSeq. Paging is via next_before (time-based) for the multi-session case.
experiment.go — autoresearch experiment runner
An experiment is a CogDoc (YAML frontmatter + markdown body) that specifies:
- Which benchmark prompts file to use
- Model, budget, and method
- Optional comparison against a baseline run
Usage:
cogos-v3 experiment run <path-to-experiment.md>
The runner:
- Loads the experiment config from YAML frontmatter
- Loads the benchmark prompts
- Runs the benchmark suite
- Saves results as a new experiment log CogDoc
- If a previous run exists, computes and prints the recall/precision delta
- Flags regressions (recall drop > threshold)
field.go — CogOS v3 attentional field
The attentional field is the continuous salience map over the memory corpus. Every memory file gets a float64 score. The "fovea" is the top-N files by score that fit in the context window.
In v2, salience was computed once per session at context assembly time. In v3, the field is updated continuously by the process loop, decoupled from any external request.
gate.go — CogOS v3 attentional gate
The gate receives events (perturbations) and routes them into the fovea. It decides:
- Which memory files should be elevated in salience as a result of this event
- Whether the event triggers a state transition in the process
Stage 1: minimal routing — gate accepts events and records them. Stage 2+: gate will perform semantic matching against the attentional field.
index.go — CogDoc index for CogOS v3
BuildIndex walks .cog/mem/ and constructs an in-memory lookup table for all CogDoc files (Markdown files with YAML frontmatter). The index provides O(1) lookups by URI, type, tag, and status, plus forward and inverse reference graphs for coherence validation.
Index lifecycle:
- Built on startup (best-effort; errors are non-fatal).
- Rebuilt by Process.runConsolidation() on each consolidation tick.
- Served via /v1/resolve for URI resolution queries.
ingest.go — Core types and interfaces for the deterministic ingestion pipeline.
The ingestion pipeline decomposes heterogeneous input (URLs, conversations, documents) into normalized IngestResult records suitable for the inbox/memory lifecycle. Format-specific logic lives in Decomposer implementations; this file defines the shared vocabulary and the pipeline orchestrator.
ingest_url.go — URL decomposer for the ingestion pipeline.
Fetches a URL, extracts structured metadata from HTML (title, meta tags, Open Graph, Twitter cards), classifies content by domain, and returns a normalized IngestResult.
init.go — cogos init command
Scaffolds a new CogOS workspace with the minimum structure needed for the daemon to start: config files, memory directories, a default identity card, and an empty ledger.
Idempotent: does not overwrite existing files. Safe to run on an existing workspace to fill in missing structure.
kernel_log_query.go — Read-side of the kernel slog API.
Implements Part (b) of Agent U's kernel-slog-api design — the surface half. Exposes QueryKernelLog, which reads the JSONL sink produced by the teeHandler (see log_capture.go) and returns the most recent entries in newest-first order, optionally filtered by level, substring, and time range.
Filter shape mirrors Agent L's QueryLedger and Agent Q's QueryTraces (from their respective in-flight PRs) so the three observability surfaces — ledger, traces, kernel slog — present a consistent query grammar to Claude:
- ledger: hash-chained events (Agent L)
- traces: client-originated metabolites (Agent Q)
- slog: operator/debug diagnostic text (this file)
The on-disk format is one slog.NewJSONHandler record per line. Top-level fields "time", "level", "msg" are extracted into typed fields; anything else becomes an entry in the Attrs map. Malformed lines are skipped silently (matches the readLastJSONLEntries tolerance used by the proprioceptive endpoint) — corrupted lines are a data-quality signal, not an API error.
Scan strategy: forward-scan into a bounded ring with a 1 MiB bufio buffer (same primitive as serve.go:readLastJSONLEntries). For v1 file sizes (sub-GB for years of uptime on active workspaces) this is fast enough; a reverse-scan optimization can land in v1.5 alongside rotation.
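The field-promotion rule ("time"/"level"/"msg" typed, everything else into Attrs) can be sketched like this. The type and function names are illustrative, not QueryKernelLog's actual internals:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// kernelLogEntry is an assumed query-side shape: the three top-level
// slog fields are typed, everything else lands in Attrs.
type kernelLogEntry struct {
	Time  string
	Level string
	Msg   string
	Attrs map[string]any
}

// parseLine promotes "time"/"level"/"msg" and buckets the rest.
// A malformed line returns ok=false and is skipped by callers.
func parseLine(line []byte) (kernelLogEntry, bool) {
	var raw map[string]any
	if err := json.Unmarshal(line, &raw); err != nil {
		return kernelLogEntry{}, false
	}
	e := kernelLogEntry{Attrs: map[string]any{}}
	for k, v := range raw {
		switch k {
		case "time":
			e.Time, _ = v.(string)
		case "level":
			e.Level, _ = v.(string)
		case "msg":
			e.Msg, _ = v.(string)
		default:
			e.Attrs[k] = v
		}
	}
	return e, true
}

func main() {
	e, ok := parseLine([]byte(`{"time":"2026-04-21T12:00:00Z","level":"INFO","msg":"config loaded","port":6931}`))
	fmt.Println(ok, e.Level, e.Msg, e.Attrs["port"])
}
```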
ledger.go — CogOS v3 hash-chained event ledger
Ported from apps/cogos/ledger_core.go (v2.4.0). CLI command functions removed; EventEnvelope, hash chain, and append logic preserved.
Every significant cognitive event is recorded as an append-only JSONL entry. Entries are hash-chained (RFC 8785 canonical JSON + SHA-256) to provide tamper-evidence and causal ordering.
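The chaining mechanism can be illustrated in a few lines. This sketch assumes each entry's hash covers the previous hash plus the entry's canonical bytes — the exact concatenation the ledger uses is not specified here, and real canonicalization is RFC 8785 (the caller below supplies already-canonical bytes):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainHash sketches the tamper-evidence mechanism: each entry's hash
// covers its canonical bytes plus the previous entry's hash, so editing
// any past entry invalidates every hash after it.
func chainHash(prevHash string, canonical []byte) string {
	h := sha256.New()
	h.Write([]byte(prevHash))
	h.Write(canonical)
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	h1 := chainHash("", []byte(`{"type":"SESSION_START"}`))
	h2 := chainHash(h1, []byte(`{"type":"turn.completed"}`))
	tampered := chainHash("", []byte(`{"type":"SESSION_EVIL"}`))
	fmt.Println(h2 == chainHash(tampered, []byte(`{"type":"turn.completed"}`))) // → false: chain broken
}
```

The same property gives causal ordering for free: an entry's hash cannot be computed before its predecessor's exists.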
ledger_query.go — read-side query API for the hash-chained event ledger.
The write side (AppendEvent, CanonicalizeEvent, HashEvent) lives in ledger.go. This file exposes filtered reads for MCP tools and HTTP endpoints without reimplementing any of the canonicalization or hashing logic.
log_capture.go — Kernel slog capture: tee stderr text output to a JSONL file sink.
Implements Part (a) of Agent U's kernel-slog-api design — the file-sink/capture half. The default logger installed by setupLogger() keeps writing key=value text to os.Stderr (backwards-compatibility lock for service managers that capture stderr); this file adds a fan-out so the same records are simultaneously emitted as structured JSON into <WorkspaceRoot>/.cog/run/kernel.log.jsonl.
Why a teeHandler instead of io.MultiWriter: the two sinks have different encodings (text on stderr, JSON in the file). A single slog.Handler cannot emit two formats; wrapping two sub-handlers in a Handle() fan-out lets us keep the human-readable terminal output while also getting machine-parseable on-disk output for the cog_tail_kernel_log MCP tool / GET /v1/kernel-log.
Call order: setupLogger() runs at process start (stderr-only text handler); upgradeLoggerWithFileSink(cfg) runs AFTER LoadConfig once the workspace root is known. Anything logged between the two goes to stderr only — that window is microseconds in practice and those early lines are re-emitted after the upgrade via the "config loaded" Info call.
The file handle is opened with O_APPEND and left open for the kernel's lifetime; the OS flushes on process exit. If the open fails (disk full, permission denied, read-only mount) we log a Warn to stderr and fall back to the stderr-only handler. The kernel never crashes over log-sink setup.
mcp_server.go — MCP Streamable HTTP server for CogOS v3
Embeds the MCP server into the existing HTTP server at /mcp. Registers 11 MCP tools and 3 MCP resources. Four former tools (resolve_uri, get_trust, get_nucleus, get_index) are no longer registered as MCP tools but their implementations remain — used by the internal tool loop (tool_loop.go).
Resources (read-only addressable data):
- cogos://state — kernel process state
- cogos://nucleus — identity context
- cogos://field — attentional field (top-20)
Transport: Streamable HTTP (MCP spec 2025-03-26)
Endpoint: POST/GET /mcp
mcp_sessions.go — 8 native MCP tools over the kernel session/handoff registries.
Complement (not replacement) to the 8 `cogos_*` Python bridge tools that live in cog-sandbox-mcp: per amendment #5 both surfaces coexist by design. Bridge tools keep MCP-level ergonomics for agents already wired to the Python sandbox; these expose the same kernel truth with no Python dependency, so a future native client (Wave widget, desktop app, direct `cog` CLI) can just speak MCP.
Tool naming mirrors the rest of the kernel MCP surface: snake_case under the `cog_*` prefix. Input/output types live in this file so mcp_server.go stays focused on registration.
mcp_stubs.go — Internal API stubs for MCP tools
These functions bridge MCP tool calls to the v3 kernel internals. Some delegate to existing functionality, others are stubs awaiting full implementation. Each stub documents what it should eventually do.
memory.go — CogOS v3 memory system interface
Thin interface over the CogDocs memory layout (.cog/mem/). Delegates search to the cog CLI wrapper (scripts/cog memory search). In stage 5, this will be replaced with local embedding-based retrieval.
nucleus.go — CogOS v3 nucleus
The nucleus is the always-loaded identity context: the runtime object that is never evicted from memory. It replaces the v2 pattern of loading the identity card from disk at session start.
In v3, the nucleus is loaded once at daemon startup and held in memory for the lifetime of the process. It is the "floor" of the attentional field.
observer.go — CogOS v3 observer loop (Field → Observer → Model → Field)
Implements the trefoil closed loop that makes the daemon a true observer:
Loop 1 (Field → Observer): Each consolidation tick reads attention signals from the attention log and current field scores — the raw percept.
Loop 2 (Observer → Model): TrajectoryModel updates attention momentum via EMA, computes Jaccard prediction error against the previous cycle, and generates a new prediction. Both error and prediction are recorded in the ledger (hash-chained, irreversible — this is the arrow of time).
Loop 3 (Model → Field): Predictions pre-warm the field (salience boost). Paths that drop out of the prediction are attenuated. Prediction errors above the surprise threshold emit an observer.surprise coherence signal.
The consolidation CogDoc written each cycle is the model's trace in the field — a legible record that the observer existed and acted.
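The Jaccard prediction error over attention sets can be sketched as below; `jaccardError` is an illustrative name, and TrajectoryModel's actual set representation is not specified here:

```go
package main

import "fmt"

// jaccardError is 1 minus the Jaccard similarity between the predicted
// and observed attention sets: 0 = perfect prediction, 1 = total surprise.
func jaccardError(predicted, observed []string) float64 {
	set := make(map[string]int)
	for _, p := range predicted {
		set[p] |= 1 // bit 1: present in prediction
	}
	for _, o := range observed {
		set[o] |= 2 // bit 2: present in observation
	}
	var inter, union float64
	for _, m := range set {
		union++
		if m == 3 {
			inter++
		}
	}
	if union == 0 {
		return 0 // two empty sets: no surprise
	}
	return 1 - inter/union
}

func main() {
	err := jaccardError([]string{"a", "b", "c"}, []string{"b", "c", "d"})
	fmt.Println(err) // |∩|=2, |∪|=4 → 0.5
}
```

An error above the surprise threshold is what triggers the observer.surprise coherence signal in Loop 3.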
process.go — CogOS v3 continuous process state machine
Implements the always-running cognitive process described in the v3 spec. The process has four states and an internal event loop that runs independently of external HTTP requests.
States:
Active        — processing an external perturbation
Receptive     — idle, listening for input
Consolidating — running internal maintenance (memory, coherence)
Dormant       — minimal activity, heartbeat only
The select loop is the core architectural difference from v2: v2 is request-triggered; v3 has internal tickers that fire regardless.
procmgr.go — Process lifecycle manager for Claude Code subprocesses.
Tracks all spawned claude processes (foreground, background, agent). Handles:
- Client disconnect / cancellation (SIGTERM → SIGKILL escalation)
- Background process lifecycle (outlive the HTTP request)
- Concurrent process limits (per-identity and global)
- Process inventory for observability
- Callback delivery when background tasks complete
Process kinds:
- Foreground: tied to an HTTP request. Killed on client disconnect.
- Background: fire-and-forget. Has its own timeout. Reports via callback.
- Agent: runs in a Docker container. Trust-bounded. Future implementation.
proprioceptive.go — Proprioceptive logging for TRM prediction-vs-reality tracking.
After each chat request, the TRM predicts which chunks will be referenced. This logger records predictions alongside actual references extracted from the response, enabling continuous calibration of the light cone.
Log format: JSONL at .cog/run/proprioceptive.jsonl
provider.go — CogOS v3 inference provider interface
Adapted from the workspace PROVIDER-SPEC.md contract. All LLM backends satisfy the Provider interface. The kernel never calls a model API directly — it always routes through a Provider.
Key design decisions:
- Models are organs, not the organism. Swappable, upgradeable.
- CompletionRequest carries foveated context, not raw strings.
- Router implements the sovereignty gradient: local-first, cloud-escalate.
- ProcessState uses string (maps to ProcessState.String() from process.go).
provider_anthropic.go — AnthropicProvider
Implements Provider against the Anthropic Messages API (POST /v1/messages).
Auth: x-api-key header, read from the env var named by config.APIKeyEnv.
Streaming: SSE (text/event-stream), reading typed events (message_start, content_block_start, content_block_delta, message_delta, message_stop).
Tool calls: streamed incrementally as ToolCallDelta chunks; non-streaming responses decode tool_use content blocks directly.
Context items are prepended to the system string as labelled sections.
provider_claudecode.go — ClaudeCodeProvider
Implements Provider by spawning `claude -p` subprocesses. Unlike the Anthropic and Ollama providers, ClaudeCodeProvider is agentic: the subprocess owns its own tool loop (filesystem, MCP, etc.).
Authentication: uses the host's Claude Max subscription via OAuth (keychain). Does NOT use --bare mode, which would require API keys.
Process lifecycle:
- Foreground: tied to HTTP request context. Cancelled on disconnect.
- Background: outlives the request. Reports back via callback.
- Agent: runs in Docker container. Trust-bounded, resource-limited.
Output: parsed from `--output-format stream-json --include-partial-messages` which emits NDJSON with Anthropic streaming events.
provider_codex.go — CodexProvider
Implements Provider by spawning `codex exec` subprocesses (OpenAI Codex CLI). Parses the NDJSON event stream (--json flag) and extracts agent_message items.
Authentication: uses the host's ChatGPT Pro subscription via codex CLI auth.
provider_ollama.go — OllamaProvider
Implements Provider against a local Ollama server (http://localhost:11434). Uses /api/chat for multi-turn conversations (not /api/generate). Streaming: Ollama returns newline-delimited JSON chunks. think=false: disables qwen3's thinking mode to avoid silent token burn.
provider_openai.go — OpenAICompatProvider
Implements Provider against any OpenAI-compatible API server: LM Studio, vLLM, llama.cpp server, text-generation-webui, or the OpenAI API itself. Uses /v1/chat/completions for both streaming (SSE) and non-streaming. Discovery: GET /v1/models to enumerate available models.
SSE format: "data: {...}\n\n" lines with "data: [DONE]" sentinel. No CGO dependencies — standard library net/http only.
provider_pi.go — PiProvider
Implements Provider by spawning `pi -p` subprocesses for local agentic inference. Pi handles the tool loop (read, bash, edit, write) against local models via Ollama, while the kernel handles context assembly.
This is the local counterpart to ClaudeCodeProvider:
- ClaudeCodeProvider: cloud agentic inference (Claude Max via OAuth)
- PiProvider: local agentic inference (Ollama via Pi)
The kernel assembles foveated context and injects it via --system-prompt. Pi runs the agent loop. Ollama runs the model.
Output: parsed from `--mode json` which emits NDJSON AgentSessionEvents.
provider_stub.go — StubProvider for testing
In-memory provider with configurable responses, error injection, and latency simulation. Used in unit tests and for offline development.
router.go — SimpleRouter + BuildRouter
SimpleRouter implements the Router interface with rule-based provider selection:
- Check process-state routing overrides
- Try preferred provider first, then fallback chain
- Filter by availability + required capabilities
- Score local > cloud (sovereignty gradient)
- Record every routing decision for future sentinel training
BuildRouter reads .cog/config/providers.yaml and instantiates enabled providers. Falls back to a default Ollama config when no providers.yaml is present.
salience.go — CogOS v3 git-derived salience scoring
Ported from apps/cogos/salience.go (v2.4.0). CLI command functions removed; core computation preserved.
Implements ADR-018: Salience System (Git-Derived Attention). Performance target: <5ms per file via go-git (vs 80ms in shell version).
serve.go — CogOS v3 HTTP API
Core endpoints:
GET  /health               — liveness + readiness probe
GET  /v1/context           — current attentional field (debug)
GET  /v1/resolve           — resolve a cog:// URI to a filesystem path
POST /v1/chat/completions  — OpenAI-compatible chat (streaming + non-streaming)
POST /v1/messages          — Anthropic Messages-compatible chat
POST /v1/context/foveated  — foveated context assembly for Claude Code hook
GET  /v1/proprioceptive    — last 50 proprioceptive log entries + light cone status
GET  /v1/ledger            — query the hash-chained event ledger
GET  /v1/lightcone         — light cone metadata (placeholder)
GET  /v1/kernel-log        — tail the kernel slog (diagnostic text) JSONL sink; filter by level/substring/time
Constellation / attention endpoints (Phase 3, see serve_attention.go):
POST /v1/attention — emit attention signal
GET /v1/constellation/fovea — current fovea state
GET /v1/constellation/adjacent?uri=… — adjacent nodes by attentional proximity
The chat endpoint routes through the inference Router when one is set, otherwise returns 501.
serve_blocks.go — HTTP endpoints for block sync protocol
Phase 3 of the block sync protocol: remote blob exchange.
GET /v1/blocks/{hash} — retrieve a blob by hash
PUT /v1/blocks/{hash} — store a blob by hash
GET /v1/blocks/manifest — list all stored blobs (manifest exchange)
POST /v1/blocks/verify — verify a list of hashes, return missing ones
These endpoints enable workspace-to-workspace blob sync:
- Workspace A gets B's manifest
- Diffs against local manifest
- GETs missing blobs by hash
- Stores them locally
Content is verified by hash on both read and write — the hash IS the address.
serve_bus.go — HTTP surface for the per-bus event store.
Track 5 Phase 3: ported verbatim from the root package's bus_api.go + serve_bus.go. The response shapes are byte-compat with the live v3 daemon so cog-sandbox-mcp's bridge (tools/cogos_bridge.py) keeps working when the installed binary flips to engine in Phase 4.
Routes owned by this file:
POST /v1/bus/send — append event to a bus
POST /v1/bus/open — create/register a bus
GET /v1/bus/list — list all registered buses
GET /v1/bus/events — cross-bus event search
GET /v1/bus/{bus_id}/events — per-bus events, filtered
GET /v1/bus/{bus_id}/events/{seq} — single event by seq
GET /v1/bus/{bus_id}/stats — bus statistics
GET /v1/bus/consumers — list consumer cursors (ADR-061)
DELETE /v1/bus/consumers/{consumer_id} — remove a consumer cursor
serve_compat.go — v2 compatibility endpoints for Phase 0 cutover.
These endpoints allow v3 to replace v2 as the production kernel on port 5100. Consumers: OpenClaw cogos plugin, CogBus plugin, launchd service.
DEPRECATED: These compatibility routes exist only for migration from v2. They will be removed once all clients migrate to standard endpoints. Standard endpoints: /v1/chat/completions, /v1/messages, /mcp, /health
Endpoints:
GET /v1/card — kernel capability card (OpenClaw auth flow)
GET /v1/models — OpenAI-compatible model list
GET /memory/search — memory search (was missing from v2 too)
GET /memory/read — memory read (was missing from v2 too)
GET /coherence/check — coherence check
GET /v1/providers — provider list with health
GET /v1/taa — TAA context visibility stub
Removed in the event-bus PR (were always stubs):
GET /v1/events/stream — replaced by the real broker-backed handler in serve.go
POST /v1/bus/{bus_id}/ack — dropped; no consumer, new SSE uses Last-Event-ID
serve_config.go — HTTP surface for the Config Mutation API (Agent O design).
GET /v1/config — read effective + raw + defaults
PATCH /v1/config — RFC 7396 merge-patch (validated, atomic, backed up)
POST /v1/config/rollback — restore from a .bak-<timestamp> file
All handlers return JSON on success and on structured-error paths.
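The merge-patch semantics behind PATCH /v1/config follow the standard RFC 7396 algorithm, sketched below. This shows only the merge step itself (`mergePatch` is an illustrative name); the kernel's handler additionally validates the result and writes it atomically with a backup.

```go
package main

// mergePatch applies an RFC 7396 JSON merge patch to a target document
// (both represented as decoded JSON: map[string]any, []any, or scalars).
func mergePatch(target, patch any) any {
	patchObj, ok := patch.(map[string]any)
	if !ok {
		return patch // a non-object patch replaces the target wholesale
	}
	targetObj, ok := target.(map[string]any)
	if !ok {
		targetObj = map[string]any{} // non-object target is discarded
	}
	out := make(map[string]any, len(targetObj))
	for k, v := range targetObj {
		out[k] = v
	}
	for k, v := range patchObj {
		if v == nil {
			delete(out, k) // null in the patch deletes the key
		} else {
			out[k] = mergePatch(out[k], v) // recurse into nested objects
		}
	}
	return out
}
```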
serve_events.go — HTTP surface for the kernel event bus.
Two endpoints:
GET /v1/events — historical read. Thin wrapper around QueryLedger with observability-flavored defaults (no verify_chain; "since" accepts durations).
GET /v1/events/stream — live SSE stream backed by the EventBroker. Emits `event: ledger.appended` frames with `id: <hash>` for standard Last-Event-ID resume.
Internal invariant: both endpoints source from AppendEvent's ledger (the single write sink). There is no duplicate storage — QueryLedger reads disk, the SSE stream reads the broker's ring + live fan-out.
serve_foveated.go — POST /v1/context/foveated
Bridge endpoint matching the v2 foveated context API so that the Claude Code hook (foveated-context.py) can point at the v3 kernel.
Input: {prompt, iris: {size, used}, profile, session_id}
Output: {context, tokens, anchor, goal, iris_pressure}
The "context" field is a rendered string of CogBlock HTML comment blocks that get injected into Claude's context window via the hook's additionalContext.
serve_sessions.go — per-session context observability endpoints.
Track 5 Phase 3: ported from root's serve_context.go (handleListSessions + handleSessionContext). Preserves the byte-compat response shape:
GET /v1/sessions → { count, sessions: [sessionSummary ...] }
GET /v1/sessions/{id} → SessionContextState (full detail)
GET /v1/sessions/{id}/context → SessionContextState (alias)
The store is kept as a struct field on Server so tests can instantiate isolated instances; root's singleton-style map is equivalent in single-server deployments.
serve_sessions_mgmt.go — kernel-native HTTP surface for session & handoff management. The 8 routes below are the "invariance layer" of the hybrid design: they validate inputs, enforce the state machine via SessionRegistry / HandoffRegistry, and only then append to the bus.
Routes registered here:
POST /v1/sessions/register — register (or idempotently update) a session
POST /v1/sessions/{id}/heartbeat — bump last_seen + optional status fields
POST /v1/sessions/{id}/end — mark ended; optional handoff_id reference
GET /v1/sessions/presence — roster of tracked sessions (optional active_within_seconds)
POST /v1/handoffs/offer — post an offer (kernel mints handoff_id if missing)
GET /v1/handoffs — list handoffs, optional state / for_session filters
POST /v1/handoffs/{id}/claim — atomic first-wins claim; emits handoff.claim_rejected on failure (amendment #4)
POST /v1/handoffs/{id}/complete — mark claimed offer completed
Note: the existing routes `/v1/sessions` and `/v1/sessions/{id}[/context]` registered by serve_bus.go are preserved untouched (they serve per-turn TAA inference context, a different concern). Go 1.22 pattern precedence resolves `POST /v1/sessions/register` before `GET /v1/sessions/` because the method+pattern is more specific.
sessions.go — kernel-native session & handoff registries.
This is the "invariance layer" of the hybrid design sketched in cog://mem/semantic/surveys/2026-04-21-consolidation/agent-P-session-management-evaluation. The bus (BusSessionManager) remains ground truth; everything in this file is a derived view rebuilt from bus replay at startup. The registries' job is to enforce the state-machine invariants the bridge-only implementation was doing by convention:
- session_id format validation (regex) on register
- idempotent re-register = update-semantics (re-register a live session updates the in-memory row; re-register after end is allowed only if the prior session is ended or its heartbeat is outside the active window)
- heartbeat/end refused against unknown sessions
- end refused against already-ended sessions
- handoff.offer → claim → complete state machine with:
- first-wins claim (atomic check under handoff mutex)
- TTL enforcement (offer rejected if now - created_at > ttl_seconds)
- claim-before-offer rejection (phantom offers) as 404
- complete-before-claim rejected as 409
- on every rejected claim, emit a handoff.claim_rejected event on bus_handoffs so operators have an audit trail (amendment #4 of the user-approved plan).
Everything is flushed to the bus via BusSessionManager.AppendEvent so the seq/hash chain remains the authoritative ledger. The in-memory maps only speed up reads and guard writes against out-of-order transitions.
telemetry.go — OpenTelemetry tracing and metrics for the v3 kernel
Initializes a tracer and meter provider with OTLP HTTP exporters. Degrades gracefully to no-op when no collector is available.
Environment variables:
OTEL_EXPORTER_OTLP_ENDPOINT — collector endpoint (default: http://localhost:4318)
OTEL_SERVICE_NAME — service name (default: cogos-v3)
Usage:
shutdown := initTelemetry(ctx)
defer shutdown(ctx)
// Use otel.Tracer("cogos-v3") and otel.Meter("cogos-v3") anywhere.
tool_calls_query.go — Query layer for tool.call / tool.result ledger events.
Agent S (survey-2026-04-21-agent-S-tool-bridge-design §4.4) specifies a call+result stitching view over the hash-chained ledger. The query layer is a thin wrapper: it scans .cog/ledger/{sessionID}/events.jsonl (optionally across all sessions), picks out tool.call and tool.result entries, pairs them by call_id, applies filters, and returns a first-class row shape.
The stitched view is the ergonomic win: a caller asking "show me the 10 most recent tool invocations and did they succeed?" gets one row per call with both timestamps and the status already joined — no two-round-trip client-side pairing like the generic event-read surface would require.
When upstream lands Agent L's QueryLedger, this file can become a thin wrapper over that API (Agent S §5.1). For now it reads the ledger JSONL files directly — the same pattern process.go and cogblock_ledger.go use.
tool_loop.go — Internal tool execution loop for the v3 kernel
When the inference provider returns tool_calls, the kernel:
1. Checks whether each tool is a kernel-owned tool
2. Executes kernel tools internally (no HTTP round-trip)
3. Injects tool results into the conversation
4. Re-calls the provider until it produces text or hits max iterations
Client-owned tools are passed back to the client in the response. The kernel acts as a tool call router: execute what it owns, forward what it doesn't.
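The routing shape of that loop can be sketched as follows. This is a simplified stand-in: the stub `provider` function, `toolCall`/`turn` types, and history-as-strings representation are all illustrative, not the kernel's actual Provider or message types.

```go
package main

import "fmt"

type toolCall struct{ Name, Args string }

type turn struct {
	Text      string
	ToolCalls []toolCall
}

// runToolLoop executes kernel-owned tools inline and re-calls the provider;
// anything it doesn't own is returned to the caller (client-owned).
func runToolLoop(provider func(history []string) turn, kernelTools map[string]func(string) string, maxIters int) (string, []toolCall) {
	var history []string
	for i := 0; i < maxIters; i++ {
		t := provider(history)
		if len(t.ToolCalls) == 0 {
			return t.Text, nil // plain text: done
		}
		var clientOwned []toolCall
		for _, tc := range t.ToolCalls {
			if exec, ok := kernelTools[tc.Name]; ok {
				// kernel-owned: execute inline, inject the result
				history = append(history, fmt.Sprintf("tool %s -> %s", tc.Name, exec(tc.Args)))
			} else {
				clientOwned = append(clientOwned, tc)
			}
		}
		if len(clientOwned) > 0 {
			return t.Text, clientOwned // forward what the kernel doesn't own
		}
	}
	return "", nil // hit max iterations
}
```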
tool_observer.go — Tool-call observability: emission + pending-call correlation.
Agent S (survey-2026-04-21-agent-S-tool-bridge-design) specifies that every tool invocation observed by the kernel should emit a "tool.call" ledger event and — when the result is known — a paired "tool.result" event. The gate recognizer at gate.go:94 has accepted those types since day one but nothing emitted them. This file fills that gap and activates the scaffolded-but-never-wired per-tool-call observation surface identified in issue #22.
The kernel observes two kinds of tool invocation:
Kernel-ownership (MCP handlers, internal tool-loop executions). The kernel runs the tool inline; call and result are both emitted by withToolObserver right around the handler invocation. No correlation cache is needed — the result is known the moment the handler returns.
Client-ownership (provider returned a tool_call the client must execute). The kernel emits the tool.call immediately, registers a pending entry, and emits tool.result later when the client sends a matching role=tool message back. The pending-call cache lives for a bounded TTL and is swept periodically; entries that exceed the TTL get a tool.result{status=timeout} event so the audit trail never leaves a tool.call dangling forever.
All emission flows through process.emitEvent → AppendEvent, i.e. the same hash-chained ledger Agent L's read tools query and Agent N's event bus tails. No orphan writer. The root-package tools_bridge.go is explicitly NOT the model for this file — that file bypasses AppendEvent and is slated for deletion with Agent I's purge.
transition_hooks.go — ADR-072 state-transition hook dispatch.
Implements the per-transition handler layer described in ADR-072 ("State-Transition Hooks for Node Lifecycle"). On every state change, `transition()` invokes a per-state enter handler. Each handler:
- Runs the minimum-viable per-ADR work inline (small, bounded).
- Dispatches any matching declarative StateHook definitions loaded from `.cog/hooks/transitions/*.yaml`. Only the `shell:` form is implemented at this revision — agent-form hooks (ADR-072 Phase 3) are loaded and matched but logged as pending instead of executed.
Non-goals of this file:
- Event-bus emission of transition records (owned by a sibling track).
- Full agent-subprocess spawning with budget/timeout (ADR-072 Phase 3).
- Condition evaluation beyond the scaffold (ADR-072 Phase 4).
Concurrency: all handler bodies and hook executions run on goroutines spawned by `transition()`. They must not take `p.mu`; they may read immutable fields (cfg, sessionID) directly.
trm.go — Pure Go implementation of MambaTRM inference.
The TRM (Temporal Retrieval Model) uses Mamba selective state spaces to process temporally ordered session events and predict which workspace chunks are most relevant. It maintains a "light cone" — compressed SSM hidden state representing the observer's trajectory through the workspace.
This is a zero-dependency inference engine: no gonum, no CGO. All math is done with raw float32 slices and manual loops. The model is tiny (~1.7M params), so this is efficient enough.
Binary weight format (TRM1):
4 bytes: magic "TRM1"
4 bytes: uint32 number of tensors
Per tensor: name_len(4) + name(N) + ndim(4) + shape(4*ndim) + data(4*numel)
All values little-endian. Data is float32, row-major.
trm_context.go — TRM integration into the context assembly pipeline.
Provides:
- OllamaEmbed: embed a query via the local Ollama /api/embeddings endpoint
- trmScoreDocs: score CogDoc candidates using MambaTRM + embedding index
- loadTRMAtStartup: one-shot loader called from main.go
When the TRM is available, it replaces keyword+salience scoring as the primary CogDoc ranking signal. When unavailable (no weights, Ollama down, etc.), the pipeline falls back to the existing scoring transparently.
trm_index.go — Embedding index for TRM cosine pre-filtering.
Loads the binary embedding index (EMB1 format) and chunk metadata (JSON) exported by trm_export.py. Provides fast cosine similarity top-K search over the full embedding corpus.
Binary format (EMB1):
4 bytes: magic "EMB1"
4 bytes: uint32 num_chunks
4 bytes: uint32 dim (384)
num_chunks * dim * 4 bytes: float32 data (row-major, little-endian)
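Cosine top-K over that flat row-major layout can be sketched as below, in the file's zero-dependency raw-float32 style. `cosineTopK` is an illustrative helper, and the sketch assumes vectors are not pre-normalized; a real index might normalize once at load time to make each query a pure dot product.

```go
package main

import (
	"math"
	"sort"
)

// cosineTopK returns the indices of the k rows most similar to the query,
// over a flat row-major float32 matrix of n rows x dim columns.
func cosineTopK(data []float32, dim int, query []float32, k int) []int {
	n := len(data) / dim
	type scored struct {
		idx   int
		score float32
	}
	var qn float32
	for _, v := range query {
		qn += v * v
	}
	scores := make([]scored, n)
	for i := 0; i < n; i++ {
		row := data[i*dim : (i+1)*dim]
		var dot, rn float32
		for j, v := range row {
			dot += v * query[j]
			rn += v * v
		}
		if rn > 0 && qn > 0 {
			scores[i] = scored{i, dot / (sqrt32(rn) * sqrt32(qn))}
		} else {
			scores[i] = scored{i, 0}
		}
	}
	sort.Slice(scores, func(a, b int) bool { return scores[a].score > scores[b].score })
	if k > n {
		k = n
	}
	out := make([]int, k)
	for i := range out {
		out[i] = scores[i].idx
	}
	return out
}

func sqrt32(x float32) float32 { return float32(math.Sqrt(float64(x))) }
```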
trm_lightcone.go — Thread-safe per-conversation light cone state manager.
Each conversation maintains its own light cone — the SSM hidden state that compresses the observer's trajectory through the workspace. The LightConeManager provides concurrent-safe access keyed by conversation ID.
turn_storage.go — Chat-history turn persistence (Agent R hybrid design).
Closes Agent F gap #4 and cogos#20 (RecordBlock data-loss bug). RecordBlock at cogblock_ledger.go is persistence-theatre: it writes a metadata-only event and drops the full CogBlock payload on the floor. Instead of mutating RecordBlock (its call-site pattern is used elsewhere for metadata-only ingest events), we add a parallel RecordTurn path that captures the prompt/response pair — a first-class "turn" — via:
- A sidecar JSONL file at .cog/run/turns/<sessionID>.jsonl carrying the full turn (prompt, response, tool-call transcript, usage, model).
- A hash-chained `turn.completed` ledger event with TRUNCATED previews (defaults: 8 KB prompt / 16 KB response) + pointer to the sidecar.
Readers (cog_read_conversation, GET /v1/conversation) hydrate from the sidecar. Agent N's broker, once it lands, will fan out turn.completed on the live event bus for free — no extra code in this file.
uri.go — cog:// URI projection system for CogOS v3
A cog:// URI has the form:
cog://type/path[#fragment]
"type" selects a Projection that maps to a filesystem location under the workspace root. "path" is the resource name within that projection. "fragment" (optional, after '#') identifies a section within the file.
Examples:
cog://mem/semantic/insights/eigenform.cog.md → .cog/mem/semantic/insights/eigenform.cog.md
cog://mem/semantic/insights/eigenform.cog.md#Seed → same path, anchor "Seed"
cog://conf/kernel.yaml → .cog/config/kernel.yaml
cog://crystal → .cog/ledger/crystal.json
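Splitting a cog:// URI into its three parts follows directly from the form above. This sketch covers only the parse step (`parseCogURI` is an illustrative name); the real resolver also maps "type" through a Projection table to a filesystem location.

```go
package main

import (
	"fmt"
	"strings"
)

// parseCogURI splits cog://type/path[#fragment] into its components.
// The path may be empty for bare-type URIs like cog://crystal.
func parseCogURI(uri string) (typ, path, fragment string, err error) {
	const scheme = "cog://"
	if !strings.HasPrefix(uri, scheme) {
		return "", "", "", fmt.Errorf("not a cog:// URI: %s", uri)
	}
	rest := strings.TrimPrefix(uri, scheme)
	if i := strings.Index(rest, "#"); i >= 0 {
		fragment = rest[i+1:] // optional section anchor
		rest = rest[:i]
	}
	typ, path, _ = strings.Cut(rest, "/") // first segment selects the Projection
	return typ, path, fragment, nil
}
```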
uri_v2_stub.go — stub for URIRegistry when coguri library is unavailable.
mcp_server.go references URIRegistry under the mcpserver build tag. When the coguri library isn't present (no coguri build tag), this stub provides a nil-valued placeholder so the package compiles cleanly. The nil check in mcp_server.go (if URIRegistry != nil) ensures the Resolve method is never actually called.
Index ¶
- Constants
- Variables
- func AppendEvent(workspaceRoot, sessionID string, envelope *EventEnvelope) error
- func ArchivedSessions(workspaceRoot, sessionID string) (map[string]struct{}, error)
- func BuildMemoryIndex(workspaceRoot, sector string) (any, error)
- func CanonicalizeEvent(payload *EventPayload) ([]byte, error)
- func CheckCoherenceMCP(cfg *Config, nucleus *Nucleus) (any, error)
- func ChunkDocument(body string, targetSize int) []string
- func ClampTraceLimit(n int) (int, error)
- func CollectReferencedHashes(workspaceRoot string) (map[string]bool, error)
- func ConfigToMap(cfg *Config) map[string]any
- func ContentTypeFromExt(path string) string
- func DecodePatchBody(data []byte) (map[string]any, error)
- func DefaultKernelLogPath(workspaceRoot string) string
- func DefaultManifestPath(workspaceRoot string) string
- func EmitIngestEvent(cfg *Config, result *IngestResult, cogdocPath string) error
- func EmitLedgerEvent(cfg *Config, event map[string]any) error
- func ExtractInlineRefs(content string) []string
- func ExtractReferencedPaths(response string) []string
- func FieldKeyToURI(workspaceRoot, absPath string) string
- func GetHashAlgorithm(workspaceRoot string) string
- func GetHotFiles(repoPath, scope string, limit int, threshold float64, daysWindow int, ...) ([]string, error)
- func HashEvent(canonicalBytes []byte, algorithm string) (string, error)
- func Main()
- func MemoryRead(workspaceRoot, path string) (string, error)
- func MemorySearch(workspaceRoot, query string) ([]string, error)
- func NextTurnIndex(workspaceRoot, sessionID string) int
- func OllamaEmbed(ctx context.Context, cfg *Config, query string) ([]float32, error)
- func ParseKernelLogSince(s string, now time.Time) (time.Time, error)
- func ParseKernelLogUntil(s string, now time.Time) (time.Time, error)
- func ParseSinceDuration(s string, now time.Time) (time.Time, error)
- func ParseTraceDurationOrTime(s string, now time.Time) (time.Time, error)
- func PathExistsOnDisk(path string) bool
- func PathToURI(workspaceRoot, path string) (string, error)
- func PrintSummary(results []BenchmarkResult)
- func ReadConfigOnDisk(root string) (kc kernelConfig, rawYAML string, exists bool, err error)
- func ReadIngestionQueueState(workspaceRoot string) ingestionQueueState
- func RegisterBroker(b *EventBroker)
- func ReplayHandoffRegistry(mgr *BusSessionManager, reg *HandoffRegistry) error
- func ReplaySessionRegistry(mgr *BusSessionManager, reg *SessionRegistry) error
- func ResolveToFieldKey(workspaceRoot, pointer string) string
- func RunExperiment(ctx context.Context, experimentPath, workspaceRoot string, process *Process, ...) error
- func RunInit(workspaceRoot string) error
- func RunToolLoop(ctx context.Context, provider Provider, req *CompletionRequest, ...) (*CompletionResponse, []ToolCall, error)
- func RunToolLoopWithTranscript(ctx context.Context, provider Provider, req *CompletionRequest, ...) (*CompletionResponse, []ToolCall, []ToolCallRecord, error)
- func SaveResults(workspaceRoot string, results []BenchmarkResult, model, method string) error
- func SearchMemory(workspaceRoot, query string, limit int, sector string) (any, error)
- func SetCurrentBroker(b *EventBroker)
- func ShouldRedirectToBlob(path string, size int64) bool
- func UnregisterBroker(b *EventBroker)
- func ValidateAgentID(id string) error
- func ValidateKernelLogLevel(s string) (string, error)
- func ValidateSessionID(id string) error
- func WriteCogDoc(workspaceRoot string, path string, opts CogDocWriteOpts) (string, error)
- type AgentActivitySummary
- type AgentController
- type AgentControllerError
- type AgentCycleTrace
- type AgentInboxEnrichItem
- type AgentInboxSummary
- type AgentMemoryEntry
- type AgentProposalEntry
- type AgentSnapshot
- type AgentSummary
- type AgentTriggerResult
- type AnthropicProvider
- func (p *AnthropicProvider) Available(_ context.Context) bool
- func (p *AnthropicProvider) Capabilities() ProviderCapabilities
- func (p *AnthropicProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *AnthropicProvider) Name() string
- func (p *AnthropicProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *AnthropicProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type AssembleOption
- type AssimilationDecision
- type AttentionProbe
- type AttentionalField
- func (f *AttentionalField) AllScores() map[string]float64
- func (f *AttentionalField) Boost(path string, delta float64)
- func (f *AttentionalField) Fovea(n int) []FileScore
- func (f *AttentionalField) LastUpdated() time.Time
- func (f *AttentionalField) Len() int
- func (f *AttentionalField) Score(path string) float64
- func (f *AttentionalField) Update() error
- type AttentionalZone
- type BackgroundTaskOpts
- type BackupEntry
- type BenchmarkPrompt
- type BenchmarkResult
- type BenchmarkSuite
- type BlobEntry
- type BlobPointer
- type BlobStore
- func (bs *BlobStore) Exists(hash string) bool
- func (bs *BlobStore) GC(referencedHashes map[string]bool) (removed int, freed int64, err error)
- func (bs *BlobStore) Get(hash string) ([]byte, error)
- func (bs *BlobStore) Init() error
- func (bs *BlobStore) List() ([]BlobEntry, error)
- func (bs *BlobStore) PrintBlobList() error
- func (bs *BlobStore) Size() (int64, int, error)
- func (bs *BlobStore) Store(content []byte, contentType string, refs ...string) (string, error)
- func (bs *BlobStore) StoreFile(path string, contentType string, refs ...string) (string, error)
- func (bs *BlobStore) Verify(workspaceRoot string) (missing []string, err error)
- func (bs *BlobStore) WritePointer(path string, hash string, size int64, contentType string, originalPath string) error
- type BlockArtifact
- type BlockProvenance
- type BuildRouterOption
- type BusBlock
- type BusEventBroker
- func (b *BusEventBroker) Publish(busID string, evt *BusBlock)
- func (b *BusEventBroker) StartReaper(ctx context.Context)
- func (b *BusEventBroker) Subscribe(busID string, ch chan *BusBlock, ctx context.Context, consumerID string) bool
- func (b *BusEventBroker) SubscriberCount(busID string) int
- func (b *BusEventBroker) TouchWrite(busID string, ch chan *BusBlock)
- func (b *BusEventBroker) Unsubscribe(busID string, ch chan *BusBlock)
- type BusRegistryEntry
- type BusSessionManager
- func (m *BusSessionManager) AddEventHandler(name string, fn func(busID string, block *BusBlock))
- func (m *BusSessionManager) AppendEvent(busID, eventType, from string, payload map[string]interface{}) (*BusBlock, error)
- func (m *BusSessionManager) BusesDir() string
- func (m *BusSessionManager) EnsureBus(busID string) error
- func (m *BusSessionManager) EventsPath(busID string) string
- func (m *BusSessionManager) LoadRegistry() []BusRegistryEntry
- func (m *BusSessionManager) ReadEvents(busID string) ([]BusBlock, error)
- func (m *BusSessionManager) RegisterBus(busID, sessionID, origin string) error
- func (m *BusSessionManager) RegistryPath() string
- func (m *BusSessionManager) RemoveEventHandler(name string)
- func (m *BusSessionManager) WorkspaceRoot() string
- type Capability
- type ChunkMeta
- type ClaimRejection
- type ClaimResult
- type ClaudeCodeProvider
- func (p *ClaudeCodeProvider) Available(ctx context.Context) bool
- func (p *ClaudeCodeProvider) Capabilities() ProviderCapabilities
- func (p *ClaudeCodeProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *ClaudeCodeProvider) Name() string
- func (p *ClaudeCodeProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *ClaudeCodeProvider) SpawnBackground(opts BackgroundTaskOpts) (string, error)
- func (p *ClaudeCodeProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type ClaudeCodeTailer
- type CodexProvider
- func (p *CodexProvider) Available(ctx context.Context) bool
- func (p *CodexProvider) Capabilities() ProviderCapabilities
- func (p *CodexProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *CodexProvider) Name() string
- func (p *CodexProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *CodexProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type CogBlock
- func NormalizeAnthropicRequest(body []byte, source string) *CogBlock
- func NormalizeGateEvent(evt *GateEvent) *CogBlock
- func NormalizeIngestBlock(req *IngestRequest, result *IngestResult) *CogBlock
- func NormalizeMCPRequest(toolName string, input json.RawMessage) *CogBlock
- func NormalizeOpenAIRequest(req *oaiChatRequest, rawBody []byte, source string) *CogBlock
- type CogBlockKind
- type CogDocIndex
- type CogDocService
- type CogDocWriteOpts
- type CoherenceReport
- type CompletionRequest
- type CompletionResponse
- type Config
- type ConfigDiffEntry
- type ConfigViolation
- type ConsolidationAction
- type ConstellationBridge
- type ConstellationTrustSnapshot
- type ConsumerCursor
- type ConsumerEntry
- type ConsumerRegistry
- func (cr *ConsumerRegistry) Ack(busID, consumerID string, seq int64) (*ConsumerCursor, error)
- func (cr *ConsumerRegistry) GetOrCreate(busID, consumerID string) *ConsumerCursor
- func (cr *ConsumerRegistry) List(busID string) []*ConsumerCursor
- func (cr *ConsumerRegistry) LoadFromDisk() error
- func (cr *ConsumerRegistry) Remove(consumerID string) bool
- type ContainerConfig
- type ContainerRuntime
- type ContainerStatus
- type ContentPart
- type ContentType
- type ContextBlock
- type ContextFrame
- type ContextItem
- type ContextPackage
- type Conv1D
- type ConversationQuery
- type ConversationQueryResult
- type ConversationTurn
- type DaemonHealth
- type DaemonState
- type DebugBudget
- type DebugClientInfo
- type DebugContextView
- type DebugEngineInfo
- type DebugProviderInfo
- type DebugSnapshot
- type DebugZone
- type DebugZoneItem
- type Decomposer
- type DedupChecker
- type DefaultMembranePolicy
- type Diagnostic
- type DocRef
- type EmbeddingIndex
- type ErrBrokerClosed
- type ErrTooManySubscribers
- type EventBroker
- func (b *EventBroker) Close() error
- func (b *EventBroker) Publish(env *EventEnvelope)
- func (b *EventBroker) RingContainsHash(hash string) bool
- func (b *EventBroker) RingOldestHash() string
- func (b *EventBroker) RingReplay(filter EventFilter, since time.Time) []*EventEnvelope
- func (b *EventBroker) Subscribe(ctx context.Context, filter EventFilter) (*Subscription, error)
- func (b *EventBroker) SubscriberCount() int
- type EventBrokerOptions
- type EventEnvelope
- type EventFilter
- type EventMetadata
- type EventPayload
- type EventQuery
- type EventQueryResult
- type ExperimentConfig
- type ExperimentDelta
- type FileScore
- type FileWatcher
- type FovealDoc
- type Gate
- type GateEvent
- type GateResult
- type GetAgentRequest
- type HandoffRegistry
- func (r *HandoffRegistry) ApplyClaim(id, claimingSession string, now time.Time, appendFn func() error) (ClaimResult, error)
- func (r *HandoffRegistry) ApplyComplete(id, completingSession, outcome, notes, nextHandoffID string, now time.Time, ...) (*HandoffState, ClaimRejection, error)
- func (r *HandoffRegistry) ApplyOffer(h HandoffState, now time.Time, appendFn func() error) (*HandoffState, error)
- func (r *HandoffRegistry) Get(id string) (*HandoffState, bool)
- func (r *HandoffRegistry) Len() int
- func (r *HandoffRegistry) Snapshot() []*HandoffState
- type HandoffState
- type HealthChecker
- type HeartbeatReceipt
- type IndexResult
- type IndexedCogdoc
- type InferenceEvent
- type IngestFormat
- type IngestPipeline
- type IngestRequest
- type IngestResult
- type IngestSource
- type IngestStatus
- type IngestionResult
- type KernelHeartbeatPayload
- type KernelLogEntry
- type KernelLogFile
- type KernelLogQuery
- type KernelLogResult
- type KernelToolRegistry
- type LayerNorm
- type LedgerEvent
- type LedgerQuery
- type LedgerQueryResult
- type LedgerVerification
- type LightCone
- type LightConeInfo
- type LightConeManager
- func (m *LightConeManager) Count() int
- func (m *LightConeManager) Delete(convID string)
- func (m *LightConeManager) Get(convID string) *LightCone
- func (m *LightConeManager) List() []LightConeInfo
- func (m *LightConeManager) Prune(before time.Time) int
- func (m *LightConeManager) Set(convID string, lc *LightCone)
- type Linear
- type ListAgentsRequest
- type ListAgentsResponse
- type MCPServer
- type MambaBlock
- type MambaState
- type MambaTRM
- type ManagedProcess
- type ManagedProcessOpts
- type MembranePolicy
- type NerdctlRuntime
- func (n *NerdctlRuntime) Exec(containerID string, command []string) ([]byte, error)
- func (n *NerdctlRuntime) Logs(containerID string, follow bool) (io.ReadCloser, error)
- func (n *NerdctlRuntime) Pull(image string) error
- func (n *NerdctlRuntime) Start(image string, config ContainerConfig) (string, error)
- func (n *NerdctlRuntime) Status(containerID string) (ContainerStatus, error)
- func (n *NerdctlRuntime) Stop(containerID string) error
- type NilBridge
- type NodeHealth
- type NodeManifest
- type Nucleus
- type ObserverUpdate
- type OllamaProvider
- func (p *OllamaProvider) Available(ctx context.Context) bool
- func (p *OllamaProvider) Capabilities() ProviderCapabilities
- func (p *OllamaProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *OllamaProvider) ContextWindow() int
- func (p *OllamaProvider) Name() string
- func (p *OllamaProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *OllamaProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type OpenAICompatProvider
- func (p *OpenAICompatProvider) Available(ctx context.Context) bool
- func (p *OpenAICompatProvider) Capabilities() ProviderCapabilities
- func (p *OpenAICompatProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *OpenAICompatProvider) Name() string
- func (p *OpenAICompatProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *OpenAICompatProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type OpenClawTailer
- type PiProvider
- func (p *PiProvider) Available(ctx context.Context) bool
- func (p *PiProvider) Capabilities() ProviderCapabilities
- func (p *PiProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *PiProvider) Name() string
- func (p *PiProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *PiProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type PredictedChunk
- type Process
- func (p *Process) AssembleContext(query string, messages []ProviderMessage, budget int, opts ...AssembleOption) (*ContextPackage, error)
- func (p *Process) Broker() *EventBroker
- func (p *Process) CurrentCycleID() string
- func (p *Process) EmbeddingIndex() *EmbeddingIndex
- func (p *Process) EmitEvent(eventType string, data map[string]interface{}, source string) error
- func (p *Process) Field() *AttentionalField
- func (p *Process) Fingerprint() string
- func (p *Process) Gate() *Gate
- func (p *Process) Index() *CogDocIndex
- func (p *Process) LightCones() *LightConeManager
- func (p *Process) NodeHealth() *NodeHealth
- func (p *Process) Observer() *TrajectoryModel
- func (p *Process) RecordBlock(block *CogBlock) string
- func (p *Process) RecordTurn(turn *TurnRecord) error
- func (p *Process) Run(ctx context.Context) error
- func (p *Process) RunPendingToolCallSweeper(ctx context.Context)
- func (p *Process) Send(evt *GateEvent) bool
- func (p *Process) SessionID() string
- func (p *Process) SetTRM(trm *MambaTRM, idx *EmbeddingIndex)
- func (p *Process) StartedAt() time.Time
- func (p *Process) State() ProcessState
- func (p *Process) SubmitExternal(evt *GateEvent) bool
- func (p *Process) TRM() *MambaTRM
- func (p *Process) TrustSnapshot() TrustState
- type ProcessKind
- type ProcessManager
- func (pm *ProcessManager) CanSpawn(identity string) error
- func (pm *ProcessManager) Finish(id string)
- func (pm *ProcessManager) Kill(id string)
- func (pm *ProcessManager) KillByIdentity(identity string) int
- func (pm *ProcessManager) KillBySource(source string) int
- func (pm *ProcessManager) List() []ProcessSummary
- func (pm *ProcessManager) Remove(id string)
- func (pm *ProcessManager) SetOnComplete(fn func(*ManagedProcess))
- func (pm *ProcessManager) Shutdown(timeout time.Duration)
- func (pm *ProcessManager) Stats() ProcessStats
- func (pm *ProcessManager) Track(cmd *exec.Cmd, opts ManagedProcessOpts) *ManagedProcess
- type ProcessManagerConfig
- type ProcessState
- type ProcessStats
- type ProcessStatus
- type ProcessSummary
- type Projection
- type ProprioceptiveEntry
- type ProprioceptiveLogger
- type Provider
- type ProviderCapabilities
- type ProviderConfig
- type ProviderMessage
- type ProviderMeta
- type ProviderSalienceEntry
- type ProviderSalienceSnapshot
- type ProviderScore
- type ProvidersConfig
- type ReadConfigResult
- type RequestMetadata
- type RequestPriority
- type RollbackOptions
- type RollbackResult
- type Router
- type RouterStats
- type RoutingConfig
- type RoutingDecision
- type SalienceConfig
- type SalienceScore
- type ScoreHead
- type ScoredMessage
- type Server
- type ServiceDef
- type ServiceHealth
- type SessionBlock
- type SessionContextState
- type SessionContextStore
- type SessionRegistry
- func (r *SessionRegistry) ApplyEnd(id, reason, handoffID string, now time.Time, appendFn func() error) (*SessionState, bool, error)
- func (r *SessionRegistry) ApplyHeartbeat(id string, contextUsage float64, status, currentTask string, now time.Time, ...) (*SessionState, bool, error)
- func (r *SessionRegistry) ApplyRegister(state SessionState, activeWindow time.Duration, now time.Time, ...) (*SessionState, bool, error)
- func (r *SessionRegistry) Get(id string) (*SessionState, bool)
- func (r *SessionRegistry) Len() int
- func (r *SessionRegistry) Snapshot() []*SessionState
- type SessionState
- type SimpleRouter
- type SourceStatus
- type StreamChunk
- type StreamTailer
- type StubProvider
- func (s *StubProvider) Available(_ context.Context) bool
- func (s *StubProvider) Capabilities() ProviderCapabilities
- func (s *StubProvider) Complete(_ context.Context, _ *CompletionRequest) (*CompletionResponse, error)
- func (s *StubProvider) Name() string
- func (s *StubProvider) Ping(_ context.Context) (time.Duration, error)
- func (s *StubProvider) Stream(_ context.Context, _ *CompletionRequest) (<-chan StreamChunk, error)
- type Subscription
- type SyncEnvelope
- type SyncEvent
- type SyncWatcher
- type TRMConfig
- type TailerManager
- type TailerStats
- type TokenUsage
- type ToolCall
- type ToolCallDelta
- type ToolCallEvent
- type ToolCallQuery
- type ToolCallQueryResult
- type ToolCallRecord
- type ToolCallRow
- type ToolCallSourceCounts
- type ToolCallValidationResult
- type ToolDefinition
- type ToolResultEvent
- type TraceQuery
- type TraceQueryResult
- type TraceResult
- type TraceSource
- type TrajectoryModel
- type TriggerAgentRequest
- type TrustContext
- type TrustState
- type TurnRecord
- type TurnUsage
- type URIResolution
- type URLDecomposer
- type ValidationResult
- type WriteConfigOptions
- type WriteConfigResult
- type WriteResult
Constants ¶
const (
	BlockMessage     = cogblock.BlockMessage
	BlockToolCall    = cogblock.BlockToolCall
	BlockToolResult  = cogblock.BlockToolResult
	BlockImport      = cogblock.BlockImport
	BlockAttention   = cogblock.BlockAttention
	BlockSystemEvent = cogblock.BlockSystemEvent
)
const (
	BlockNucleus   = "nucleus"   // Identity card
	BlockProject   = "project"   // CLAUDE.md content
	BlockKnowledge = "knowledge" // Foveated CogDocs
	BlockNode      = "node"      // Sibling service health
	BlockField     = "field"     // Attentional field top-N
	BlockEvents    = "events"    // Recent ledger events
	BlockFocus     = "focus"     // Current anchor/intent
)
BlockName constants identify the well-known context blocks.
const (
	// DefaultRingSize is the default capacity of the recent-events ring
	// buffer. See Agent N §7.1 — 1024 events × ~320 B/event ≈ 320 KB.
	DefaultRingSize = 1024

	// DefaultSubChanBuffer is the per-subscriber channel buffer. A healthy
	// SSE writer empties this quickly; a wedged one gets dropped frames.
	DefaultSubChanBuffer = 64

	// DefaultMaxSubscribers caps concurrent subscribers per broker. Single
	// global kernel bus — no per-bus scoping.
	DefaultMaxSubscribers = 50
)
const (
	// DefaultKernelLogLimit is the default tail size when no limit is given.
	DefaultKernelLogLimit = 100

	// MaxKernelLogLimit is the upper bound on a single query. Above this
	// callers should paginate via tighter since/until bounds.
	MaxKernelLogLimit = 1000

	// MaxKernelLogSubstring is the hard cap on substring filter length.
	// Anything longer is rejected as a 400 (substring is intended for short
	// operator queries, not full-text search).
	MaxKernelLogSubstring = 1024
)
const (
	BusSessions  = "bus_sessions"
	BusHandoffs  = "bus_handoffs"
	BusBroadcast = "bus_broadcast"

	EvtSessionRegister  = "session.register"
	EvtSessionHeartbeat = "session.heartbeat"
	EvtSessionEnd       = "session.end"

	EvtHandoffOffer         = "handoff.offer"
	EvtHandoffClaim         = "handoff.claim"
	EvtHandoffComplete      = "handoff.complete"
	EvtHandoffClaimRejected = "handoff.claim_rejected" // amendment #4
)
const (
	HandoffStateOpen      = "open"
	HandoffStateClaimed   = "claimed"
	HandoffStateCompleted = "completed"
)
HandoffLifecycle values.
const (
	ToolSourceMCP        = "mcp"
	ToolSourceOpenAI     = "openai-chat"
	ToolSourceAnthropic  = "anthropic-messages"
	ToolSourceKernelLoop = "kernel-loop"
)
Tool-call source taxonomy — where the invocation was observed.
const (
	ToolOwnershipKernel = "kernel"
	ToolOwnershipClient = "client"
)
Tool-call ownership taxonomy — who executes the tool body.
const (
	ToolStatusPending  = "pending"
	ToolStatusSuccess  = "success"
	ToolStatusError    = "error"
	ToolStatusRejected = "rejected"
	ToolStatusTimeout  = "timeout"
)
Tool-result status taxonomy — the outcome of an invocation.
const (
	DefaultPromptPreviewCap   = 8 * 1024  // 8 KB — covers most user messages
	DefaultResponsePreviewCap = 16 * 1024 // 16 KB — covers most assistant replies
)
Default truncation caps for the ledger-event preview fields. The sidecar row carries the full text; these only apply to the hashed payload.
const DefaultAgentID = "primary"
DefaultAgentID is the stable handle for the singleton ServeAgent. The API pluralises from day one (§2.5 of Agent T's spec); callers that omit agent_id should default to this value.
const DefaultChunkSize = 2000
DefaultChunkSize is the target chunk size in characters (~500 tokens).
const DefaultFileWatcherPollInterval = time.Second
const DefaultOpenClawTailerScanInterval = time.Second
const DefaultSyncWatcherPollInterval = 5 * time.Second
const IngestEventType = "ingested"
IngestEventType is the ledger event type emitted when new material is ingested.
Variables ¶
var (
	// Version is injected at build time via -ldflags (e.g. "v0.1.0").
	Version = "dev"

	// BuildTime is injected at build time via -ldflags.
	BuildTime = "unknown"
)
var DefaultStability = map[string]int{
	BlockNucleus:   95,
	BlockProject:   90,
	BlockKnowledge: 30,
	BlockNode:      70,
	BlockField:     40,
	BlockEvents:    20,
	BlockFocus:     10,
}
DefaultStability maps block names to stability hints (0-100).
var DefaultTiers = map[string]int{
	BlockNucleus:   0,
	BlockProject:   0,
	BlockKnowledge: 1,
	BlockNode:      2,
	BlockField:     2,
	BlockEvents:    2,
	BlockFocus:     2,
}
DefaultTiers maps block names to their default tier.
var ErrAfterSeqRequiresSession = errors.New("ledger: after_seq requires session_id")
ErrAfterSeqRequiresSession is returned when after_seq is set without session_id. Cross-session sequence numbers are not monotonic so paginating by seq only makes sense within a single session.
var ErrAgentInvalidInput = &AgentControllerError{Code: "invalid_input", Message: "invalid agent input"}
ErrAgentInvalidInput signals malformed arguments (bad agent_id regex, trace_limit out of range). HTTP handlers translate this to 400.
var ErrAgentNotFound = &AgentControllerError{Code: "not_found", Message: "agent not found"}
ErrAgentNotFound signals that the agent_id in the request did not match any agent known to the controller. HTTP handlers translate this to 404; MCP tool handlers translate this to an IsError:true response.
ErrAgentUnavailable signals that the controller is installed but no agent is currently wired (e.g. the ServeAgent singleton never started). HTTP handlers translate this to 503.
var ErrSessionNotFound = errors.New("ledger: session not found")
ErrSessionNotFound is returned when QueryLedger is scoped to a session that has no events.jsonl on disk. Handlers map it to HTTP 404.
var TraceEmitter func(ev trace.CycleEvent)
TraceEmitter is the bus-publish hook for cycle-trace events. The main package sets this at startup; engine calls it best-effort (never blocks the metabolic cycle on a nil hook or a slow consumer).
var TraceIdentity = func() string { return "cog" }
TraceIdentity returns the identity name stamped as the `source` field on emitted events. Set by the main package at startup; defaults to "cog".
var URIRegistry *uriRegistryStub
URIRegistry is nil when the coguri library is not linked.
Functions ¶
func AppendEvent ¶
func AppendEvent(workspaceRoot, sessionID string, envelope *EventEnvelope) error
AppendEvent appends an event to the process ledger with hash chaining. The ledger lives at .cog/ledger/{sessionID}/events.jsonl. Safe for concurrent callers (serialized via appendMu).
Uses an in-memory cache for the last event per session, turning the previous O(N) file scan per append into O(1) after the first access.
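The chaining scheme can be sketched in isolation. The envelope type below is a hypothetical stand-in for the real EventEnvelope, and the hash input is simplified; only the prior_hash linkage and the O(1) "last event" lookup are the point:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// envelope is a hypothetical stand-in for the ledger's EventEnvelope:
// each entry records the hash of its predecessor, forming a chain.
type envelope struct {
	Seq       int
	PriorHash string
	Hash      string
	Payload   string
}

// appendChained links a new event to the previous one. Reading the
// last element is O(1), mirroring the cached-last-event optimization.
func appendChained(ledger []envelope, payload string) []envelope {
	prior := ""
	if n := len(ledger); n > 0 {
		prior = ledger[n-1].Hash
	}
	sum := sha256.Sum256([]byte(prior + payload))
	return append(ledger, envelope{
		Seq:       len(ledger) + 1,
		PriorHash: prior,
		Hash:      hex.EncodeToString(sum[:]),
		Payload:   payload,
	})
}

func main() {
	var ledger []envelope
	ledger = appendChained(ledger, `{"type":"ingested"}`)
	ledger = appendChained(ledger, `{"type":"turn.completed"}`)
	// The second event's prior_hash equals the first event's hash.
	fmt.Println(ledger[1].PriorHash == ledger[0].Hash)
}
```

Tampering with any stored payload changes its hash and breaks every later link, which is what makes the chain verifiable.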
func ArchivedSessions ¶
func BuildMemoryIndex ¶ added in v0.3.0
BuildMemoryIndex builds a lightweight index of all CogDocs. Prefers the constellation SQLite database for speed and richer metadata; falls back to naive filepath.Walk when the DB is unavailable or corrupt.
func CanonicalizeEvent ¶
func CanonicalizeEvent(payload *EventPayload) ([]byte, error)
CanonicalizeEvent produces RFC 8785 canonical JSON for an event payload. Same logical content always produces the same bytes.
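The determinism guarantee can be illustrated with Go's encoding/json, which sorts map keys when marshaling. This is only a toy subset of RFC 8785 (JCS also pins number and string formatting), but it shows the property CanonicalizeEvent relies on:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// canonicalish marshals with encoding/json, which emits map keys in
// sorted order: a toy approximation of RFC 8785 canonicalization.
func canonicalish(m map[string]any) string {
	b, _ := json.Marshal(m)
	return string(b)
}

func main() {
	// Two maps with the same logical content, built in different orders.
	a := canonicalish(map[string]any{"type": "ingested", "seq": 1})
	b := canonicalish(map[string]any{"seq": 1, "type": "ingested"})
	fmt.Println(a == b, a)
}
```

Because the bytes are stable, the event hash computed over them is stable too.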
func CheckCoherenceMCP ¶ added in v0.3.0
CheckCoherenceMCP runs workspace coherence validation for MCP tools.
func ChunkDocument ¶
ChunkDocument splits a document body into chunks for embedding. It detects conversation format and uses turn-aware chunking when appropriate, falling back to paragraph-based chunking otherwise.
The body should have frontmatter already stripped.
func ClampTraceLimit ¶ added in v0.3.0
ClampTraceLimit enforces the [1, 20] range on trace_limit. Returns the clamped value and an error when the input was clearly out of range (>20 or <0). A value of 0 is treated as "use the default of 1".
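The documented contract is small enough to sketch directly. This mirrors the stated behavior (0 means "default of 1", [1,20] passes through, clearly out-of-range values error), not the actual implementation:

```go
package main

import "fmt"

// clampTraceLimit mirrors the documented contract of ClampTraceLimit:
// 0 selects the default of 1, values in [1,20] pass through, and
// anything clearly out of range (<0 or >20) is rejected.
func clampTraceLimit(n int) (int, error) {
	switch {
	case n == 0:
		return 1, nil
	case n < 0 || n > 20:
		return 0, fmt.Errorf("trace_limit %d outside [1,20]", n)
	default:
		return n, nil
	}
}

func main() {
	for _, n := range []int{0, 5, 20, 21} {
		v, err := clampTraceLimit(n)
		fmt.Println(n, v, err != nil)
	}
}
```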
func CollectReferencedHashes ¶
CollectReferencedHashes returns all blob hashes referenced by pointer CogDocs.
func ConfigToMap ¶ added in v0.3.0
ConfigToMap projects an effective *Config into a stable JSON-friendly map. Field names match the YAML keys to keep read/write schemas symmetrical.
func ContentTypeFromExt ¶
ContentTypeFromExt returns a MIME content type for a file extension.
func DecodePatchBody ¶ added in v0.3.0
DecodePatchBody decodes a JSON body into a map[string]any, preserving the distinction between an absent key and an explicit null. Used by the HTTP handler; the MCP SDK already hands us a map.
func DefaultKernelLogPath ¶ added in v0.3.0
DefaultKernelLogPath returns the default per-workspace path for the kernel slog JSONL sink. Callers should use cfg.KernelLogPath when non-empty and fall back to this helper otherwise.
func DefaultManifestPath ¶ added in v0.2.0
DefaultManifestPath returns the expected manifest location for a workspace.
func EmitIngestEvent ¶ added in v0.3.0
func EmitIngestEvent(cfg *Config, result *IngestResult, cogdocPath string) error
EmitIngestEvent writes an ingestion event to the workspace ledger. This allows the observer and other agents to react to new material arriving in the inbox.
func EmitLedgerEvent ¶ added in v0.3.0
EmitLedgerEvent appends an event to the workspace ledger.
Historical shape: pre-PR this helper wrote to a flat .cog/ledger/events.jsonl outside the hash chain and outside any session subdirectory (cogos#10). Post-refactor it builds a proper EventEnvelope and routes through AppendEvent, which:
- chains the event by prior_hash / seq,
- files it under .cog/ledger/<session_id>/events.jsonl,
- fans out to live subscribers via the in-process EventBroker.
Callers that have a live *Process should prefer (*Process).EmitEvent for accurate session attribution. This helper exists for the paths that only hold *Config (CogDocService.emitLedgerEvent, EmitIngestEvent, tests).
Source (envelope.Metadata.Source): pulled from event["source"] if set, otherwise defaults to "mcp-client". Type: event["type"]. All other keys become envelope.data.
func ExtractInlineRefs ¶
ExtractInlineRefs scans document content for embedded cog:// URIs and returns a deduplicated, sorted slice of every unique URI found.
func ExtractReferencedPaths ¶
ExtractReferencedPaths extracts file paths from an LLM response.
func FieldKeyToURI ¶ added in v0.2.0
FieldKeyToURI converts an absolute filesystem path (field key) back to a canonical cog:// URI. This is the "project outward" half of the holographic pointer — the internal key becomes a portable, context-free identifier.
Returns the path unchanged if it can't be mapped to a cog:// URI.
func GetHashAlgorithm ¶
GetHashAlgorithm returns the hash algorithm configured for the workspace. Defaults to "sha256" if no genesis event is found.
func GetHotFiles ¶
func GetHotFiles(repoPath, scope string, limit int, threshold float64, daysWindow int, cfg *SalienceConfig) ([]string, error)
GetHotFiles returns paths with salience above threshold.
func HashEvent ¶
HashEvent computes the hash of canonical bytes using the given algorithm. Supported: "sha256" (default), "sha512".
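A minimal sketch of the documented algorithm selection, assuming only what the doc comment states ("sha256" default, "sha512" the other supported choice); the real function's error text and empty-string handling may differ:

```go
package main

import (
	"crypto/sha256"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// hashEvent sketches HashEvent's documented contract: sha256 by
// default, sha512 as the only alternative, anything else an error.
func hashEvent(canonical []byte, algo string) (string, error) {
	switch algo {
	case "", "sha256":
		sum := sha256.Sum256(canonical)
		return hex.EncodeToString(sum[:]), nil
	case "sha512":
		sum := sha512.Sum512(canonical)
		return hex.EncodeToString(sum[:]), nil
	default:
		return "", fmt.Errorf("unsupported hash algorithm %q", algo)
	}
}

func main() {
	h, _ := hashEvent([]byte(`{"type":"ingested"}`), "")
	fmt.Println(len(h)) // sha256 hex digest: 64 characters
}
```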
func MemoryRead ¶
MemoryRead returns the text contents of a memory file. path may be either an absolute path or a memory-relative path (e.g. "semantic/foo.md").
func MemorySearch ¶
MemorySearch runs `cog memory search <query>` and returns matching paths. Falls back to a simple filepath.Walk grep if the cog binary is not available.
func NextTurnIndex ¶ added in v0.3.0
NextTurnIndex returns the next 1-based turn index for sessionID, incrementing the in-memory counter. On first access it seeds the counter by scanning the sidecar file (if present) and taking the max turn_index plus one. This handles kernel restarts cleanly — a restarted session won't reset to turn 1.
func OllamaEmbed ¶
OllamaEmbed calls Ollama to embed a query string. Returns a 384-dim float32 vector. The endpoint and model are taken from config; defaults are localhost:11434 and nomic-embed-text.
func ParseKernelLogSince ¶ added in v0.3.0
ParseKernelLogSince parses a "since" / "until" query string value. Accepts RFC3339 timestamps AND relative durations (e.g. "5m", "2h", "24h") — the latter is resolved against `now`.
func ParseKernelLogUntil ¶ added in v0.3.0
ParseKernelLogUntil accepts the same formats as ParseKernelLogSince, with bare durations likewise resolved against `now` (now - d). Only the filtering direction differs: since means "entries newer than this", until means "entries older than this". The parser itself is shared.
func ParseSinceDuration ¶ added in v0.3.0
ParseSinceDuration parses `since` as either an RFC3339 timestamp or a shorthand duration like "5m", "1h", "30s". Relative forms are resolved against `now`. Empty string returns the zero Time.
func ParseTraceDurationOrTime ¶ added in v0.3.0
ParseTraceDurationOrTime interprets a since/until parameter. Accepts a Go duration ("5m", "1h", "24h") — interpreted as "since N ago" — or an RFC3339 / RFC3339Nano timestamp. Shared by the HTTP and MCP parsers.
func PathExistsOnDisk ¶ added in v0.2.0
PathExistsOnDisk reports whether the resolved path actually exists.
func PathToURI ¶
PathToURI converts an absolute (or workspace-relative) filesystem path to a cog:// URI using the longest-matching prefix rule. Returns an error if no mapping covers the path.
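The longest-matching-prefix rule can be sketched with a toy mapping table. The prefixes and URI roots below are hypothetical; the real mappings come from workspace configuration:

```go
package main

import (
	"fmt"
	"strings"
)

// longestPrefixURI sketches the longest-matching-prefix rule: among
// all prefixes that cover the path, the longest one determines the
// URI root; an uncovered path is an error.
func longestPrefixURI(path string, mappings map[string]string) (string, error) {
	best := ""
	for prefix := range mappings {
		if strings.HasPrefix(path, prefix) && len(prefix) > len(best) {
			best = prefix
		}
	}
	if best == "" {
		return "", fmt.Errorf("no mapping covers %s", path)
	}
	return mappings[best] + strings.TrimPrefix(path, best), nil
}

func main() {
	// Hypothetical mappings: both cover the path; the longer one wins.
	m := map[string]string{
		"/ws/.cog/":     "cog://",
		"/ws/.cog/mem/": "cog://mem/",
	}
	uri, _ := longestPrefixURI("/ws/.cog/mem/semantic/foo.cog.md", m)
	fmt.Println(uri)
}
```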
func PrintSummary ¶
func PrintSummary(results []BenchmarkResult)
PrintSummary writes a tabular result summary to stdout.
func ReadConfigOnDisk ¶ added in v0.3.0
ReadConfigOnDisk parses kernel.yaml at <root>/.cog/config/kernel.yaml. A missing file is not an error: the returned kernelConfig is the zero value, exists is false, and rawYAML is empty.
func ReadIngestionQueueState ¶ added in v0.3.0
func ReadIngestionQueueState(workspaceRoot string) ingestionQueueState
func RegisterBroker ¶ added in v0.3.0
func RegisterBroker(b *EventBroker)
RegisterBroker adds b to the package-level broker registry. AppendEvent publishes to every registered broker. Idempotent. Safe for concurrent use.
func ReplayHandoffRegistry ¶ added in v0.3.0
func ReplayHandoffRegistry(mgr *BusSessionManager, reg *HandoffRegistry) error
ReplayHandoffRegistry replays bus_handoffs into the handoff registry. Events are sorted by Seq ascending before consumption so replay is deterministic even if the on-disk file has out-of-order lines (e.g. from a resumed write after crash). See also ReplaySessionRegistry.
func ReplaySessionRegistry ¶ added in v0.3.0
func ReplaySessionRegistry(mgr *BusSessionManager, reg *SessionRegistry) error
ReplaySessionRegistry reads bus_sessions events through the given manager and reconstructs the in-memory session map. The bus is ground truth; this function just gets the derived view ready for read-traffic before the HTTP server starts serving. Errors are logged but non-fatal — an empty registry is a safe degraded start.
Events are sorted ascending by Seq before replay so the reconstructed state is deterministic regardless of on-disk line order. ReadEvents already de-dupes by Seq; a seq collision with conflicting payloads keeps the FIRST occurrence (insertion order). Callers that need "last-write-wins within a seq" semantics should not rely on replay ordering beyond Seq.
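The replay ordering rule (ascending Seq, first occurrence wins within a collision) can be sketched with a stable sort. The busEvent type here is a minimal stand-in:

```go
package main

import (
	"fmt"
	"sort"
)

// busEvent is a minimal stand-in for a bus event with a sequence number.
type busEvent struct {
	Seq     int64
	Payload string
}

// replayOrder sketches the documented replay rule: stable sort by Seq
// ascending (so insertion order is preserved within a Seq collision),
// then keep only the FIRST occurrence of each Seq.
func replayOrder(events []busEvent) []busEvent {
	sort.SliceStable(events, func(i, j int) bool { return events[i].Seq < events[j].Seq })
	out := events[:0:0]
	seen := map[int64]bool{}
	for _, e := range events {
		if seen[e.Seq] {
			continue
		}
		seen[e.Seq] = true
		out = append(out, e)
	}
	return out
}

func main() {
	got := replayOrder([]busEvent{{3, "c"}, {1, "a"}, {3, "conflict"}, {2, "b"}})
	for _, e := range got {
		fmt.Print(e.Payload)
	}
	fmt.Println() // seq 3 keeps its first occurrence, "c"
}
```

The stable sort is what makes "first occurrence" well-defined: an unstable sort could reorder equal-Seq entries and silently change which payload survives.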
func ResolveToFieldKey ¶ added in v0.2.0
ResolveToFieldKey normalizes any pointer form to the attentional field's canonical key (absolute filesystem path). Accepts:
- cog:// URIs: cog://mem/semantic/insights/foo.cog.md
- short cog: URIs: cog:mem/semantic/insights/foo.cog.md
- memory-relative: semantic/insights/foo.cog.md
- absolute paths: /Users/.../cog/.cog/mem/semantic/insights/foo.cog.md
This is the "resolve locally" half of the holographic pointer — any form collapses to the same key regardless of where in the system it originated.
func RunExperiment ¶
func RunExperiment(ctx context.Context, experimentPath, workspaceRoot string, process *Process, router Router) error
RunExperiment loads and executes an experiment document. workspaceRoot is used to resolve relative prompt file paths.
func RunInit ¶
RunInit scaffolds a CogOS workspace at the given root directory. It creates directories and writes default config files, skipping any that already exist.
func RunToolLoop ¶ added in v0.3.0
func RunToolLoop( ctx context.Context, provider Provider, req *CompletionRequest, initialResp *CompletionResponse, registry *KernelToolRegistry, ) (*CompletionResponse, []ToolCall, error)
RunToolLoop executes the kernel tool loop. See RunToolLoopWithTranscript for the full detail — this wrapper drops the per-turn tool-call transcript for callers that don't need it. Preserved for backwards compatibility with existing tests / call sites.
func RunToolLoopWithTranscript ¶ added in v0.3.0
func RunToolLoopWithTranscript( ctx context.Context, provider Provider, req *CompletionRequest, initialResp *CompletionResponse, registry *KernelToolRegistry, ) (*CompletionResponse, []ToolCall, []ToolCallRecord, error)
RunToolLoopWithTranscript executes the kernel tool loop and returns a transcript of kernel-owned tool calls executed (and any rejected ones). The transcript is threaded back to chat handlers so it can be stored alongside the prompt/response in the turn.completed record (Agent R §5.4).
Given a CompletionResponse with tool_calls, it:

1. Separates kernel tools from client tools
2. Executes kernel tools (capturing a ToolCallRecord for each)
3. Appends results to messages
4. Re-calls the provider
5. Repeats until no more kernel tool calls or max iterations
Returns:
- final response
- any client tool calls that need forwarding
- per-turn tool-call transcript (kernel calls + rejections)
- error
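The loop shape can be sketched with hypothetical stand-in types (the real Provider, CompletionResponse, and KernelToolRegistry carry much more state). The stub model below requests one kernel tool and one client tool on its first turn, then stops:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real request/response types.
type toolCall struct {
	Name   string
	Kernel bool // kernel-owned vs client-owned
}

type response struct {
	Text  string
	Calls []toolCall
}

// callModel is a stub provider: one kernel call and one client call on
// the first turn, then a final text-only response.
func callModel(turn int) response {
	if turn == 0 {
		return response{Calls: []toolCall{
			{Name: "memory_search", Kernel: true},
			{Name: "open_browser", Kernel: false},
		}}
	}
	return response{Text: "done"}
}

// toolLoop sketches the five documented steps: split kernel vs client
// calls, execute kernel tools (recording a transcript), re-call the
// provider, and stop when no kernel calls remain or maxIters is hit.
func toolLoop(maxIters int) (final response, client []toolCall, transcript []string) {
	resp := callModel(0)
	for i := 0; i < maxIters; i++ {
		var kernel []toolCall
		for _, c := range resp.Calls {
			if c.Kernel {
				kernel = append(kernel, c)
			} else {
				client = append(client, c) // forwarded, never executed here
			}
		}
		if len(kernel) == 0 {
			break
		}
		for _, c := range kernel {
			transcript = append(transcript, c.Name) // executed kernel call
		}
		resp = callModel(i + 1)
	}
	return resp, client, transcript
}

func main() {
	final, client, transcript := toolLoop(5)
	fmt.Println(final.Text, len(client), transcript)
}
```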
func SaveResults ¶
func SaveResults(workspaceRoot string, results []BenchmarkResult, model, method string) error
SaveResults writes benchmark results as a CogDoc-format experiment log. The file is written under .cog/mem/episodic/experiments/.
func SearchMemory ¶ added in v0.3.0
SearchMemory searches the CogDoc corpus using the constellation FTS5 index. Falls back to naive filepath.Walk grep if the constellation DB is unavailable.
func SetCurrentBroker ¶ added in v0.3.0
func SetCurrentBroker(b *EventBroker)
SetCurrentBroker replaces the registry with just b (or clears it if nil). Kept for compatibility with earlier single-slot wiring; prefer Register / UnregisterBroker for additive wiring.
func ShouldRedirectToBlob ¶
ShouldRedirectToBlob returns true if a file should be stored in the blob store instead of committed to git.
func UnregisterBroker ¶ added in v0.3.0
func UnregisterBroker(b *EventBroker)
UnregisterBroker removes b from the registry. Idempotent. Safe for concurrent use.
func ValidateAgentID ¶ added in v0.3.0
ValidateAgentID checks the agent_id against agentIDRegex. Returns nil when valid, ErrAgentInvalidInput wrapped with the reason otherwise. Empty id is treated as invalid — callers must substitute DefaultAgentID before validating.
func ValidateKernelLogLevel ¶ added in v0.3.0
ValidateKernelLogLevel normalises and validates a level filter string. Empty input returns ("", nil); unknown values return an error suitable for a 400 response body.
func ValidateSessionID ¶ added in v0.3.0
ValidateSessionID returns nil iff id matches the lowercase-hyphen format with at least three components. Exported so tests and the HTTP handlers can share the single source of truth.
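One plausible rendering of "lowercase-hyphen format with at least three components" as a regex is shown below. The pattern is an assumption for illustration; the real agentIDRegex-style pattern in the source may differ (e.g. in the allowed character set):

```go
package main

import (
	"fmt"
	"regexp"
)

// sessionIDPattern is an ASSUMED rendering of "lowercase-hyphen format
// with at least three components": 3+ runs of [a-z0-9] joined by hyphens.
var sessionIDPattern = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+){2,}$`)

func main() {
	for _, id := range []string{"sess-2026-0421", "abc-def", "Sess-2026-0421"} {
		fmt.Println(id, sessionIDPattern.MatchString(id))
	}
}
```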
func WriteCogDoc ¶ added in v0.3.0
func WriteCogDoc(workspaceRoot string, path string, opts CogDocWriteOpts) (string, error)
WriteCogDoc writes a CogDoc to the memory filesystem with proper frontmatter. This is the internal API used by the ingestion pipeline.
Types ¶
type AgentActivitySummary ¶ added in v0.3.0
type AgentActivitySummary struct {
UserPresence string `json:"user_presence"`
UserLastEventAgo string `json:"user_last_event_ago"`
ClaudeCodeActive int `json:"claude_code_active"`
ClaudeCodeEvents int64 `json:"claude_code_events"`
TotalEventDelta int64 `json:"total_event_delta"`
HottestBus string `json:"hottest_bus,omitempty"`
HottestDelta int64 `json:"hottest_delta"`
}
AgentActivitySummary is the bus-delta + user-presence summary. It has the exact shape of the root package's AgentActivitySummary so handlers can pass values through without mapping.
type AgentController ¶ added in v0.3.0
type AgentController interface {
// ListAgents returns a summary for each running agent. includeStopped
// is reserved for future pool managers; today every returned agent is
// running.
ListAgents(ctx context.Context, includeStopped bool) ([]AgentSummary, error)
// GetAgent returns the full state snapshot for the named agent. When
// includeTrace is true, up to traceLimit most-recent full cycle traces
// are attached (clamped to [1,20]). Returns ErrAgentNotFound when id
// is unknown.
GetAgent(ctx context.Context, id string, includeTrace bool, traceLimit int) (*AgentSnapshot, error)
// TriggerAgent manually invokes one homeostatic cycle for the named
// agent, outside the adaptive ticker. When wait is true, blocks until
// the cycle completes (or a 90s deadline elapses). When wait is false,
// returns immediately with a trigger receipt.
TriggerAgent(ctx context.Context, id string, reason string, wait bool) (*AgentTriggerResult, error)
}
AgentController is the kernel-agnostic contract the MCP layer uses to list, inspect, and trigger agent harness instances. The concrete implementation lives in the main package because *ServeAgent is there; this interface keeps internal/engine import-cycle-free.
Today there is exactly one agent per kernel process ("primary"). The interface pluralises from day one so new handles (e.g. "digestion", "identity-<name>") can land without an API break.
type AgentControllerError ¶ added in v0.3.0
type AgentControllerError struct {
Code string // "not_found" | "unavailable" | "invalid_input"
Message string
}
AgentControllerError is the structured error type returned by AgentController methods; Code distinguishes not_found, unavailable, and invalid_input so HTTP and MCP handlers can map errors uniformly. The sentinel values (ErrAgentNotFound, ErrAgentUnavailable, ErrAgentInvalidInput) are instances of this type; implementations should wrap them with %w so callers can errors.Is against them.
func (*AgentControllerError) Error ¶ added in v0.3.0
func (e *AgentControllerError) Error() string
type AgentCycleTrace ¶ added in v0.3.0
type AgentCycleTrace struct {
Cycle int64 `json:"cycle"`
Timestamp string `json:"timestamp"` // RFC3339
DurationMs int64 `json:"duration_ms"`
Action string `json:"action"`
Urgency float64 `json:"urgency"`
Reason string `json:"reason"`
Target string `json:"target,omitempty"`
Observation string `json:"observation,omitempty"`
Result string `json:"result,omitempty"`
}
AgentCycleTrace is the full record of one agent cycle: what it saw (observation), what it decided (action+reason+urgency+target), how long it took, and what it produced (result).
type AgentInboxEnrichItem ¶ added in v0.3.0
type AgentInboxEnrichItem struct {
Title string `json:"title"`
Connections int `json:"connections"`
Ago string `json:"ago"`
}
AgentInboxEnrichItem is one recent link-enrichment on the inbox.
type AgentInboxSummary ¶ added in v0.3.0
type AgentInboxSummary struct {
RawCount int `json:"raw_count"`
EnrichedCount int `json:"enriched_count"`
FailedCount int `json:"failed_count"`
TotalCount int `json:"total_count"`
LastPull string `json:"last_pull,omitempty"`
LastPullAgo string `json:"last_pull_ago,omitempty"`
NextPullIn string `json:"next_pull_in,omitempty"`
RecentEnrichments []AgentInboxEnrichItem `json:"recent_enrichments,omitempty"`
}
AgentInboxSummary is the inbox/link-feed summary. Engine-side mirror of linkfeed.AgentInboxSummary so tool/HTTP handlers can pass through without importing linkfeed from internal/engine.
type AgentMemoryEntry ¶ added in v0.3.0
type AgentMemoryEntry struct {
Cycle int64 `json:"cycle"`
Action string `json:"action"`
Urgency float64 `json:"urgency"`
Sentence string `json:"sentence"`
Ago string `json:"ago"`
}
AgentMemoryEntry is one decomposed rolling-memory item.
type AgentProposalEntry ¶ added in v0.3.0
type AgentProposalEntry struct {
File string `json:"file"`
Title string `json:"title"`
Type string `json:"type"`
Urgency string `json:"urgency"`
Created string `json:"created"`
}
AgentProposalEntry is one pending proposal on disk.
type AgentSnapshot ¶ added in v0.3.0
type AgentSnapshot struct {
Summary AgentSummary `json:"summary"`
Activity *AgentActivitySummary `json:"activity,omitempty"`
Memory []AgentMemoryEntry `json:"memory,omitempty"`
Proposals []AgentProposalEntry `json:"proposals,omitempty"`
Inbox *AgentInboxSummary `json:"inbox,omitempty"`
Traces []AgentCycleTrace `json:"traces,omitempty"`
LastObservation string `json:"last_observation,omitempty"`
IdentityRef string `json:"identity_ref,omitempty"`
}
AgentSnapshot is the full state projection for GET /v1/agents/{id}.
func QueryGetAgent ¶ added in v0.3.0
func QueryGetAgent(ctx context.Context, ctrl AgentController, req GetAgentRequest) (*AgentSnapshot, error)
QueryGetAgent wraps AgentController.GetAgent with normalization.
type AgentSummary ¶ added in v0.3.0
type AgentSummary struct {
AgentID string `json:"agent_id"`
Identity string `json:"identity,omitempty"` // nucleus.Name today ("cog", "sandy", etc.)
Alive bool `json:"alive"` // process is running
Running bool `json:"running"` // a cycle is in progress RIGHT NOW
UptimeSec int64 `json:"uptime_sec"`
CycleCount int64 `json:"cycle_count"`
LastAction string `json:"last_action,omitempty"` // sleep|observe|propose|execute|escalate|skip|error|""
LastCycle string `json:"last_cycle,omitempty"` // RFC3339
LastUrgency float64 `json:"last_urgency"`
LastReason string `json:"last_reason,omitempty"`
LastDurMs int64 `json:"last_duration_ms"`
Model string `json:"model,omitempty"`
Interval string `json:"interval,omitempty"`
}
AgentSummary is the list-friendly projection of one agent. Fields map 1:1 onto ServeAgent.Status() fields that already exist in-process — this is a rename, not new state.
type AgentTriggerResult ¶ added in v0.3.0
type AgentTriggerResult struct {
Triggered bool `json:"triggered"`
AgentID string `json:"agent_id"`
CycleID string `json:"cycle_id,omitempty"`
TriggerSeq int64 `json:"trigger_seq"`
Message string `json:"message"`
// Populated only when wait=true and the cycle observably completed.
Action string `json:"action,omitempty"`
Urgency float64 `json:"urgency,omitempty"`
Reason string `json:"reason,omitempty"`
DurationMs int64 `json:"duration_ms,omitempty"`
TimedOut bool `json:"timed_out,omitempty"`
}
AgentTriggerResult is the outcome of a POST /v1/agents/{id}/tick call. When wait=false, only Triggered/AgentID/CycleID/TriggerSeq/Message are populated. When wait=true, Action/Urgency/Reason/DurationMs/TimedOut carry the cycle's result.
func QueryTriggerAgent ¶ added in v0.3.0
func QueryTriggerAgent(ctx context.Context, ctrl AgentController, req TriggerAgentRequest) (*AgentTriggerResult, error)
QueryTriggerAgent wraps AgentController.TriggerAgent with normalization.
type AnthropicProvider ¶
type AnthropicProvider struct {
// contains filtered or unexported fields
}
AnthropicProvider implements Provider against the Anthropic Messages API.
func NewAnthropicProvider ¶
func NewAnthropicProvider(name string, cfg ProviderConfig) *AnthropicProvider
NewAnthropicProvider creates an AnthropicProvider from a ProviderConfig.
func (*AnthropicProvider) Available ¶
func (p *AnthropicProvider) Available(_ context.Context) bool
Available reports whether an API key is configured. For cloud providers we avoid a network round-trip on every health check — the presence of a non-empty API key is the availability signal.
func (*AnthropicProvider) Capabilities ¶
func (p *AnthropicProvider) Capabilities() ProviderCapabilities
Capabilities returns what Anthropic supports.
func (*AnthropicProvider) Complete ¶
func (p *AnthropicProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*AnthropicProvider) Name ¶
func (p *AnthropicProvider) Name() string
Name returns the provider identifier.
func (*AnthropicProvider) Ping ¶
func (p *AnthropicProvider) Ping(ctx context.Context) (time.Duration, error)
Ping probes the Anthropic API and returns measured round-trip latency. Uses GET /v1/models — lightweight, validates auth without running inference.
func (*AnthropicProvider) Stream ¶
func (p *AnthropicProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of incremental chunks. The channel closes when generation is complete or the context is cancelled.
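Because the channel is closed when generation completes, a consumer can simply range over it. The sketch below uses a trimmed-down StreamChunk with only a hypothetical Delta field; the real type carries more:

```go
package main

import "fmt"

// StreamChunk is a trimmed-down stand-in; Delta is a hypothetical
// field holding the incremental text of one chunk.
type StreamChunk struct{ Delta string }

// drain reads chunks until the provider closes the channel, which is
// the documented end-of-generation (or cancellation) signal.
func drain(ch <-chan StreamChunk) string {
	var out string
	for c := range ch { // range ends when the channel is closed
		out += c.Delta
	}
	return out
}

func main() {
	ch := make(chan StreamChunk, 3)
	for _, d := range []string{"Hel", "lo", "!"} {
		ch <- StreamChunk{Delta: d}
	}
	close(ch)
	fmt.Println(drain(ch))
}
```

Relying on channel close (rather than a sentinel chunk) means a cancelled context and a normal completion look the same to the consumer loop; callers that need to distinguish should also check the context's error.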
type AssembleOption ¶
type AssembleOption func(*assembleOpts)
AssembleOption configures optional AssembleContext parameters.
func WithContext ¶
func WithContext(ctx context.Context) AssembleOption
WithContext sets the request context for TRM embedding calls.
func WithConversationID ¶
func WithConversationID(id string) AssembleOption
WithConversationID sets the conversation ID for light cone tracking.
func WithIrisSignal ¶
func WithIrisSignal(signal irisSignal) AssembleOption
WithIrisSignal sets the current context-window usage signal for pressure-aware token estimation.
func WithManifestMode ¶
func WithManifestMode(enabled bool) AssembleOption
WithManifestMode switches CogDoc injection from full-body content to summary manifests with on-demand retrieval.
type AssimilationDecision ¶ added in v0.3.0
type AssimilationDecision string
const (
	Integrate  AssimilationDecision = "integrate"
	Quarantine AssimilationDecision = "quarantine"
	Defer      AssimilationDecision = "defer"
	Discard    AssimilationDecision = "discard"
)
type AttentionProbe ¶
type AttentionProbe struct {
QProj [][]float32 // [d_head][d_model]
KProj [][]float32 // [d_head][d_model]
VProj [][]float32 // [d_head][d_model]
OutProj [][]float32 // [d_model][d_head]
DHead int
}
AttentionProbe lets trajectory context attend over the candidate set.
type AttentionalField ¶
type AttentionalField struct {
// contains filtered or unexported fields
}
AttentionalField holds the current salience map for the memory corpus. It is safe for concurrent reads (serve goroutine) and periodic writes (consolidation goroutine).
func NewAttentionalField ¶
func NewAttentionalField(cfg *Config) *AttentionalField
NewAttentionalField constructs an empty field. Call Update() to populate it.
func (*AttentionalField) AllScores ¶
func (f *AttentionalField) AllScores() map[string]float64
AllScores returns a copy of the full path→score map. Safe for external iteration (callers get a snapshot, not a live map).
func (*AttentionalField) Boost ¶
func (f *AttentionalField) Boost(path string, delta float64)
Boost adds delta to the score for path. Used by attention signals to apply a transient recency boost without a full field recomputation. The boost is overwritten on the next Update() call.
func (*AttentionalField) Fovea ¶
func (f *AttentionalField) Fovea(n int) []FileScore
Fovea returns the top n files by salience score (the "focal" context). If n <= 0, all files are returned.
func (*AttentionalField) LastUpdated ¶
func (f *AttentionalField) LastUpdated() time.Time
LastUpdated returns when the field was last recomputed.
func (*AttentionalField) Len ¶
func (f *AttentionalField) Len() int
Len returns the number of files currently in the field.
func (*AttentionalField) Score ¶
func (f *AttentionalField) Score(path string) float64
Score returns the current salience score for a single file. Returns 0.0 if the file is not in the field.
func (*AttentionalField) Update ¶
func (f *AttentionalField) Update() error
Update recomputes salience for memory files.
Three modes, selected automatically:
- HEAD unchanged + scores exist → no-op (instant)
- Previous HEAD known + new HEAD → delta scan (only new commits)
- No previous state → full scan (startup)
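Fovea's "top n by salience" behavior can be sketched as a standalone function over a path→score map (the `fileScore` type and `fovea` helper below are illustrative stand-ins for FileScore and the field's internal state, not the package's actual implementation):

```go
package main

import (
	"fmt"
	"sort"
)

// fileScore pairs a path with a salience score, like FileScore.
type fileScore struct {
	Path  string
	Score float64
}

// fovea returns the top n entries by descending score; n <= 0 returns
// everything, mirroring the documented AttentionalField.Fovea contract.
func fovea(scores map[string]float64, n int) []fileScore {
	out := make([]fileScore, 0, len(scores))
	for p, s := range scores {
		out = append(out, fileScore{p, s})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Score > out[j].Score })
	if n > 0 && n < len(out) {
		out = out[:n]
	}
	return out
}

func main() {
	top := fovea(map[string]float64{"a.md": 0.2, "b.md": 0.9, "c.md": 0.5}, 2)
	fmt.Println(top[0].Path, top[1].Path) // b.md c.md
}
```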
type AttentionalZone ¶
type AttentionalZone string
AttentionalZone maps to the v3 four-layer attentional field.
const (
	ZoneNucleus    AttentionalZone = "nucleus"    // Identity, never drops below threshold
	ZoneMomentum   AttentionalZone = "momentum"   // Recent trajectory
	ZoneFoveal     AttentionalZone = "foveal"     // Current focus
	ZoneParafoveal AttentionalZone = "parafoveal" // Background, on demand
)
type BackgroundTaskOpts ¶
type BackgroundTaskOpts struct {
Prompt string
Model string
Effort string
MCPConfig string
AllowedTools []string
Source string // "discord", "signal", "http", etc.
CallbackChannel string // channel to report results to
Identity string // NodeID of the requestor
MaxBudgetUSD float64
Timeout time.Duration
WorkDir string // working directory for the process
SystemPrompt string
}
BackgroundTaskOpts configures a fire-and-forget Claude Code task.
type BackupEntry ¶ added in v0.3.0
type BackupEntry struct {
Name string `json:"name"`
Path string `json:"path"`
Timestamp string `json:"timestamp"`
Size int64 `json:"size"`
}
BackupEntry describes a single rotating backup file on disk.
func ListBackups ¶ added in v0.3.0
func ListBackups(dir string) ([]BackupEntry, error)
ListBackups returns the .bak- files in dir, newest-first.
type BenchmarkPrompt ¶
type BenchmarkPrompt struct {
Prompt string `json:"prompt"`
ExpectedDocs []string `json:"expected_docs"` // partial path/ID fragments to match
ExpectedKeywords []string `json:"expected_keywords"` // words expected in response (future)
}
BenchmarkPrompt is a single test case.
func LoadPrompts ¶
func LoadPrompts(path string) ([]BenchmarkPrompt, error)
LoadPrompts reads benchmark prompts from a JSON file.
type BenchmarkResult ¶
type BenchmarkResult struct {
Prompt string
AssemblyMs int64
TotalTokens int
InjectedDocs []string
ExpectedDocs []string
Recall float64 // |injected ∩ expected| / |expected|
Precision float64 // |injected ∩ expected| / |injected|
Response string // for manual review
ResponseMs int64
}
BenchmarkResult is the measured output for a single prompt.
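The Recall and Precision fields follow the set definitions in the comments, with the twist that ExpectedDocs are partial fragments matched by substring. A self-contained sketch of that scoring (the `recallPrecision` helper is illustrative, not the suite's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// recallPrecision scores an assembled context against expected fragments.
// An expected fragment "matches" if any injected doc path contains it,
// mirroring the partial path/ID fragment semantics of BenchmarkPrompt.
func recallPrecision(injected, expectedFragments []string) (recall, precision float64) {
	matchedExpected := 0
	for _, frag := range expectedFragments {
		for _, doc := range injected {
			if strings.Contains(doc, frag) {
				matchedExpected++
				break
			}
		}
	}
	matchedInjected := 0
	for _, doc := range injected {
		for _, frag := range expectedFragments {
			if strings.Contains(doc, frag) {
				matchedInjected++
				break
			}
		}
	}
	if len(expectedFragments) > 0 {
		recall = float64(matchedExpected) / float64(len(expectedFragments))
	}
	if len(injected) > 0 {
		precision = float64(matchedInjected) / float64(len(injected))
	}
	return recall, precision
}

func main() {
	r, p := recallPrecision(
		[]string{"mem/semantic/agent-design.md", "mem/episodic/2026-01-01.md"},
		[]string{"agent-design"},
	)
	fmt.Printf("recall=%.2f precision=%.2f\n", r, p) // recall=1.00 precision=0.50
}
```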
type BenchmarkSuite ¶
type BenchmarkSuite struct {
// contains filtered or unexported fields
}
BenchmarkSuite runs a set of prompts through context assembly and optionally inference, collecting quality metrics.
func NewBenchmarkSuite ¶
func NewBenchmarkSuite(process *Process, router Router, model string, budget int) *BenchmarkSuite
NewBenchmarkSuite constructs a suite bound to the given process and router. Pass nil for router to skip inference (assembly metrics only).
func (*BenchmarkSuite) Run ¶
func (b *BenchmarkSuite) Run(ctx context.Context, prompts []BenchmarkPrompt) []BenchmarkResult
Run executes all prompts sequentially and returns results.
type BlobEntry ¶
type BlobEntry struct {
Hash string `json:"hash"`
Size int64 `json:"size"`
ContentType string `json:"content_type"`
Refs []string `json:"refs,omitempty"` // CogDoc URIs that reference this blob
SyncedTo []string `json:"synced_to,omitempty"`
StoredAt string `json:"stored_at"`
}
BlobEntry is the metadata for a single stored blob.
type BlobPointer ¶
type BlobPointer struct {
Hash string `yaml:"hash" json:"hash"`
Size int64 `yaml:"size" json:"size"`
ContentType string `yaml:"content_type" json:"content_type"`
OriginalPath string `yaml:"original_path" json:"original_path"`
}
BlobPointer is the CogDoc frontmatter for a blob pointer file.
func FindBlobPointers ¶
func FindBlobPointers(workspaceRoot string) ([]BlobPointer, error)
FindBlobPointers walks the workspace and returns all blob pointer CogDocs.
type BlobStore ¶
type BlobStore struct {
// contains filtered or unexported fields
}
BlobStore manages content-addressed blob storage.
func NewBlobStore ¶
NewBlobStore creates a blob store rooted at workspaceRoot/.cog/blobs/.
func (*BlobStore) GC ¶
GC removes blobs not referenced by any CogDoc pointer in the workspace. Returns the number of blobs removed and total bytes freed.
func (*BlobStore) PrintBlobList ¶
PrintBlobList prints a formatted table of stored blobs.
func (*BlobStore) Store ¶
Store writes content to the blob store and returns the SHA-256 hash. If the blob already exists (same hash), this is a no-op.
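The content-addressed idempotency described above — hash the bytes, key the blob by the digest, treat a repeat store as a no-op — can be sketched with an in-memory stand-in (the `memStore` type is illustrative; the real BlobStore persists under .cog/blobs/):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// memStore is an in-memory stand-in for a content-addressed blob store.
type memStore struct {
	blobs map[string][]byte
}

// Store hashes content with SHA-256 and keys the blob by the hex digest.
// Storing identical content twice is a no-op, as with BlobStore.Store.
func (s *memStore) Store(content []byte) string {
	sum := sha256.Sum256(content)
	hash := hex.EncodeToString(sum[:])
	if _, exists := s.blobs[hash]; !exists {
		s.blobs[hash] = content
	}
	return hash
}

func main() {
	s := &memStore{blobs: map[string][]byte{}}
	h1 := s.Store([]byte("hello"))
	h2 := s.Store([]byte("hello")) // dedup: same hash, no new entry
	fmt.Println(h1 == h2, len(s.blobs)) // true 1
}
```

Content addressing is what makes `blobs verify` and `blobs gc` cheap: a pointer is valid iff a blob exists at its hash, and a blob is garbage iff no pointer names its hash.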
type BlockArtifact ¶
type BlockArtifact = cogblock.BlockArtifact
type BlockProvenance ¶
type BlockProvenance = cogblock.BlockProvenance
type BuildRouterOption ¶
type BuildRouterOption func(*buildRouterOpts)
BuildRouterOption configures BuildRouter.
func WithProcessManager ¶
func WithProcessManager(pm *ProcessManager) BuildRouterOption
WithProcessManager provides a ProcessManager for providers that spawn subprocesses.
type BusBlock ¶ added in v0.3.0
BusBlock is the wire format for bus events. Alias to the canonical pkg/cogfield.Block so the byte-compat JSON shape is guaranteed — the root package uses the same type.
type BusEventBroker ¶ added in v0.3.0
type BusEventBroker struct {
// contains filtered or unexported fields
}
BusEventBroker manages SSE subscribers for real-time bus event delivery. Per-bus indexed, with a wildcard key "*" for cross-bus subscriptions.
func NewBusEventBroker ¶ added in v0.3.0
func NewBusEventBroker() *BusEventBroker
NewBusEventBroker constructs an empty broker.
func (*BusEventBroker) Publish ¶ added in v0.3.0
func (b *BusEventBroker) Publish(busID string, evt *BusBlock)
Publish sends an event to all subscribers of a bus AND to wildcard ("*") subscribers. Non-blocking: drops if channel is full.
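The non-blocking delivery contract — a slow subscriber loses the event rather than stalling the publisher — is the standard `select`-with-`default` idiom. A minimal self-contained sketch (string events stand in for *BusBlock):

```go
package main

import "fmt"

// publish fans an event out to subscriber channels without blocking:
// if a subscriber's buffer is full the event is dropped for that
// subscriber only, mirroring BusEventBroker.Publish.
func publish(subs []chan string, evt string) (delivered int) {
	for _, ch := range subs {
		select {
		case ch <- evt:
			delivered++
		default:
			// slow subscriber: drop rather than stall the publisher
		}
	}
	return delivered
}

func main() {
	fast := make(chan string, 1)
	full := make(chan string, 1)
	full <- "backlog" // this buffer is already full
	fmt.Println(publish([]chan string{fast, full}, "evt")) // 1
}
```

This is why SSE consumers that need every event should track their position with a ConsumerCursor and re-read from the JSONL log, rather than relying on the broker's best-effort push.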
func (*BusEventBroker) StartReaper ¶ added in v0.3.0
func (b *BusEventBroker) StartReaper(ctx context.Context)
StartReaper launches a background goroutine that periodically sweeps stale SSE connections across all buses. This is the "belt" — catches abandoned connections regardless of whether new ones are arriving.
func (*BusEventBroker) Subscribe ¶ added in v0.3.0
func (b *BusEventBroker) Subscribe(busID string, ch chan *BusBlock, ctx context.Context, consumerID string) bool
Subscribe registers a channel to receive events for a given bus. If consumerID is non-empty, any existing subscription with the same consumer identity is evicted first. If the bus is at the connection limit, stale/dead subscribers are swept first; returns false if still at capacity afterward.
func (*BusEventBroker) SubscriberCount ¶ added in v0.3.0
func (b *BusEventBroker) SubscriberCount(busID string) int
SubscriberCount returns the number of active subscribers for a bus.
func (*BusEventBroker) TouchWrite ¶ added in v0.3.0
func (b *BusEventBroker) TouchWrite(busID string, ch chan *BusBlock)
TouchWrite updates the lastWrite timestamp for a subscriber.
func (*BusEventBroker) Unsubscribe ¶ added in v0.3.0
func (b *BusEventBroker) Unsubscribe(busID string, ch chan *BusBlock)
Unsubscribe removes a channel from a bus's subscriber set.
type BusRegistryEntry ¶ added in v0.3.0
type BusRegistryEntry = cogfield.BusRegistryEntry
BusRegistryEntry matches registry.json shape — aliased for the same reason.
type BusSessionManager ¶ added in v0.3.0
type BusSessionManager struct {
// contains filtered or unexported fields
}
BusSessionManager manages CogBus operations: bus creation, event appending, and reading event history. Direct verbatim port of root's busSessionManager to preserve byte-compat.
func NewBusSessionManager ¶ added in v0.3.0
func NewBusSessionManager(workspaceRoot string) *BusSessionManager
NewBusSessionManager constructs a manager rooted at workspaceRoot. Events and registry live under {workspaceRoot}/.cog/.state/buses/.
func (*BusSessionManager) AddEventHandler ¶ added in v0.3.0
func (m *BusSessionManager) AddEventHandler(name string, fn func(busID string, block *BusBlock))
AddEventHandler registers a named handler for bus events. Handlers are called in registration order when a bus event is appended.
func (*BusSessionManager) AppendEvent ¶ added in v0.3.0
func (m *BusSessionManager) AppendEvent(busID, eventType, from string, payload map[string]interface{}) (*BusBlock, error)
AppendEvent appends a new BusBlock to a bus's event chain. V2 blocks hash the full canonical envelope (all fields except hash and sig). Both prev ([]string) and prev_hash (string) are written for V1 compat. Handlers are dispatched synchronously after the lock is released.
The bus directory + events.jsonl are created on demand if they don't yet exist — matches root's behaviour where handleBusSend pre-creates them but downstream callers (e.g. the chat pipeline) can skip that step.
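The hash-over-canonical-envelope scheme can be sketched end to end: marshal the block with its hash field empty (so it is excluded), digest the bytes, then link the next block to the result via both prev forms. The `envelope` type below is an illustrative reduction of BusBlock, not its actual field set:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// envelope is an illustrative event shape; the real BusBlock has more fields.
type envelope struct {
	Seq      int64    `json:"seq"`
	Type     string   `json:"type"`
	Prev     []string `json:"prev"`
	PrevHash string   `json:"prev_hash"`
	Hash     string   `json:"hash,omitempty"`
}

// appendEvent links a new event to the chain head and hashes the
// canonical envelope (everything except the hash itself, which is
// empty and omitted at marshal time).
func appendEvent(chain []envelope, eventType string) []envelope {
	e := envelope{Seq: int64(len(chain) + 1), Type: eventType}
	if n := len(chain); n > 0 {
		e.PrevHash = chain[n-1].Hash          // V1-compat string form
		e.Prev = []string{chain[n-1].Hash}    // list form
	}
	canonical, _ := json.Marshal(e)
	sum := sha256.Sum256(canonical)
	e.Hash = hex.EncodeToString(sum[:])
	return append(chain, e)
}

func main() {
	chain := appendEvent(nil, "bus.created")
	chain = appendEvent(chain, "message")
	fmt.Println(chain[1].PrevHash == chain[0].Hash) // true
}
```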
func (*BusSessionManager) BusesDir ¶ added in v0.3.0
func (m *BusSessionManager) BusesDir() string
BusesDir returns the path to the buses state directory.
func (*BusSessionManager) EnsureBus ¶ added in v0.3.0
func (m *BusSessionManager) EnsureBus(busID string) error
EnsureBus creates the bus directory + events.jsonl if they don't exist. Safe to call multiple times.
func (*BusSessionManager) EventsPath ¶ added in v0.3.0
func (m *BusSessionManager) EventsPath(busID string) string
EventsPath returns the path to a bus's events JSONL file.
func (*BusSessionManager) LoadRegistry ¶ added in v0.3.0
func (m *BusSessionManager) LoadRegistry() []BusRegistryEntry
LoadRegistry is the public, lock-acquiring variant. Returns a copy of the current registry snapshot.
func (*BusSessionManager) ReadEvents ¶ added in v0.3.0
func (m *BusSessionManager) ReadEvents(busID string) ([]BusBlock, error)
ReadEvents reads all events from a bus. De-dups by seq (file may have duplicates from crash recovery).
func (*BusSessionManager) RegisterBus ¶ added in v0.3.0
func (m *BusSessionManager) RegisterBus(busID, sessionID, origin string) error
RegisterBus adds or updates a bus entry in the registry.
func (*BusSessionManager) RegistryPath ¶ added in v0.3.0
func (m *BusSessionManager) RegistryPath() string
RegistryPath returns the path to the bus registry file.
func (*BusSessionManager) RemoveEventHandler ¶ added in v0.3.0
func (m *BusSessionManager) RemoveEventHandler(name string)
RemoveEventHandler removes a named handler by name.
func (*BusSessionManager) WorkspaceRoot ¶ added in v0.3.0
func (m *BusSessionManager) WorkspaceRoot() string
WorkspaceRoot returns the workspace path the manager is bound to.
type Capability ¶
type Capability string
Capability is a single feature a provider may support.
const (
	CapStreaming          Capability = "streaming"
	CapToolUse            Capability = "tool_use"
	CapToolCallValidation Capability = "tool_call_validation"
	CapVision             Capability = "vision"
	CapLongContext        Capability = "long_context"
	CapJSON               Capability = "json_output"
	CapCaching            Capability = "caching"
	CapBatch              Capability = "batch"
)
type ChunkMeta ¶
type ChunkMeta struct {
DocID string `json:"doc_id"`
Path string `json:"path"`
Title string `json:"title"`
SectionTitle string `json:"section_title"`
ChunkIdx int `json:"chunk_idx"`
ChunkID string `json:"chunk_id"`
TextPreview string `json:"text_preview"`
}
ChunkMeta holds metadata for a single embedded chunk.
type ClaimRejection ¶ added in v0.3.0
type ClaimRejection string
ClaimRejection reasons for the handoff.claim_rejected event (amendment #4).
const (
	ClaimRejectedOfferNotFound  ClaimRejection = "offer_not_found"
	ClaimRejectedAlreadyClaimed ClaimRejection = "already_claimed"
	ClaimRejectedTTLExpired     ClaimRejection = "ttl_expired"
	ClaimRejectedOutOfOrder     ClaimRejection = "out_of_order"
)
type ClaimResult ¶ added in v0.3.0
type ClaimResult struct {
Offer *HandoffState
Rejection ClaimRejection // empty on success
ConflictingSession string // set on already_claimed
}
ClaimResult is what the handler returns after an atomic claim attempt.
type ClaudeCodeProvider ¶
type ClaudeCodeProvider struct {
// contains filtered or unexported fields
}
ClaudeCodeProvider implements Provider by spawning claude CLI processes.
func NewClaudeCodeProvider ¶
func NewClaudeCodeProvider(name string, cfg ProviderConfig, procMgr *ProcessManager) *ClaudeCodeProvider
NewClaudeCodeProvider creates a ClaudeCodeProvider from a ProviderConfig.
func (*ClaudeCodeProvider) Available ¶
func (p *ClaudeCodeProvider) Available(ctx context.Context) bool
Available checks that the claude binary exists and is authenticated.
func (*ClaudeCodeProvider) Capabilities ¶
func (p *ClaudeCodeProvider) Capabilities() ProviderCapabilities
Capabilities returns what this provider supports.
func (*ClaudeCodeProvider) Complete ¶
func (p *ClaudeCodeProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*ClaudeCodeProvider) Name ¶
func (p *ClaudeCodeProvider) Name() string
Name returns the provider identifier.
func (*ClaudeCodeProvider) Ping ¶
Ping checks the binary is available and returns the startup overhead.
func (*ClaudeCodeProvider) SpawnBackground ¶
func (p *ClaudeCodeProvider) SpawnBackground(opts BackgroundTaskOpts) (string, error)
SpawnBackground starts a Claude Code process that outlives the HTTP request. Results are delivered via the process manager's callback mechanism.
func (*ClaudeCodeProvider) Stream ¶
func (p *ClaudeCodeProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a claude process and returns incremental chunks. The returned channel closes when the process exits or ctx is cancelled. On ctx cancellation (client disconnect), the process is killed.
type ClaudeCodeTailer ¶
type ClaudeCodeTailer struct {
Watcher *FileWatcher
}
ClaudeCodeTailer tails Claude Code JSONL logs and emits normalized CogBlocks.
func (*ClaudeCodeTailer) Name ¶
func (t *ClaudeCodeTailer) Name() string
type CodexProvider ¶
type CodexProvider struct {
// contains filtered or unexported fields
}
CodexProvider implements Provider by spawning codex exec processes.
func NewCodexProvider ¶
func NewCodexProvider(name string, cfg ProviderConfig) *CodexProvider
NewCodexProvider creates a CodexProvider from a ProviderConfig.
func (*CodexProvider) Capabilities ¶
func (p *CodexProvider) Capabilities() ProviderCapabilities
func (*CodexProvider) Complete ¶
func (p *CodexProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*CodexProvider) Name ¶
func (p *CodexProvider) Name() string
func (*CodexProvider) Stream ¶
func (p *CodexProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a codex exec process and returns incremental chunks.
type CogBlock ¶
type CogBlock struct {
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
SessionID string `json:"session_id,omitempty"`
ThreadID string `json:"thread_id,omitempty"`
// Source identification.
SourceChannel string `json:"source_channel"`
SourceTransport string `json:"source_transport"`
SourceIdentity string `json:"source_identity,omitempty"`
// Target.
TargetIdentity string `json:"target_identity,omitempty"`
WorkspaceID string `json:"workspace_id,omitempty"`
// Content.
Kind CogBlockKind `json:"kind"`
RawPayload json.RawMessage `json:"raw_payload,omitempty"`
Messages []ProviderMessage `json:"messages,omitempty"`
SystemPrompt string `json:"system_prompt,omitempty"`
// Provenance.
Provenance BlockProvenance `json:"provenance"`
TrustContext TrustContext `json:"trust_context"`
// Ledger linkage.
LedgerRef string `json:"ledger_ref,omitempty"`
// Artifacts produced from processing this block.
Artifacts []BlockArtifact `json:"artifacts,omitempty"`
}
CogBlock is the engine-local CogBlock that includes typed Messages. The canonical type definitions (CogBlockKind, BlockProvenance, TrustContext, BlockArtifact) live in pkg/cogblock and are re-exported below.
This struct mirrors cogblock.CogBlock but replaces the raw Messages field with the engine's typed []ProviderMessage for internal processing.
func NormalizeAnthropicRequest ¶
NormalizeAnthropicRequest converts an Anthropic Messages API request into a CogBlock.
func NormalizeGateEvent ¶
NormalizeGateEvent converts an internal GateEvent into a CogBlock.
func NormalizeIngestBlock ¶ added in v0.3.0
func NormalizeIngestBlock(req *IngestRequest, result *IngestResult) *CogBlock
func NormalizeMCPRequest ¶
func NormalizeMCPRequest(toolName string, input json.RawMessage) *CogBlock
NormalizeMCPRequest converts an MCP tool invocation that triggers cognition into a CogBlock.
func NormalizeOpenAIRequest ¶
NormalizeOpenAIRequest converts an OpenAI-compatible chat request into a CogBlock.
type CogBlockKind ¶
type CogBlockKind = cogblock.CogBlockKind
Re-export shared types from pkg/cogblock. These are type aliases so existing code compiles without changes.
type CogDocIndex ¶
type CogDocIndex struct {
// ByURI maps canonical cog:// URI → document.
ByURI map[string]*IndexedCogdoc
// ByType maps type string → all documents of that type.
ByType map[string][]*IndexedCogdoc
// ByTag maps tag string → all documents carrying that tag.
ByTag map[string][]*IndexedCogdoc
// ByStatus maps status string → all documents with that status.
ByStatus map[string][]*IndexedCogdoc
// RefGraph maps source URI → its explicit DocRef targets.
RefGraph map[string][]DocRef
// InverseRefs maps target URI → list of source URIs that reference it.
InverseRefs map[string][]string
}
CogDocIndex is the complete in-memory catalogue of the memory corpus. All fields are populated by BuildIndex; nil maps indicate an empty corpus.
func BuildIndex ¶
func BuildIndex(workspaceRoot string) (*CogDocIndex, error)
BuildIndex walks .cog/mem/ under workspaceRoot, parses CogDoc frontmatter, and returns a fully populated CogDocIndex.
Files with unparseable frontmatter are included with empty metadata (best-effort). If .cog/mem/ does not exist, an empty index is returned without error.
type CogDocService ¶ added in v0.3.0
type CogDocService struct {
// contains filtered or unexported fields
}
CogDocService provides a single, consistent write path for all CogDoc mutations. All writes go through WriteAndSync or PatchAndSync so that the index, field, and ledger stay in lockstep.
func NewCogDocService ¶ added in v0.3.0
func NewCogDocService(cfg *Config, process *Process) *CogDocService
NewCogDocService constructs a CogDocService bound to the given config and process.
func (*CogDocService) PatchAndSync ¶ added in v0.3.0
func (s *CogDocService) PatchAndSync(uri string, patches cogdocFrontmatterPatch) (*WriteResult, error)
PatchAndSync patches CogDoc frontmatter and synchronises all kernel state. This is the canonical patch path — every frontmatter mutation MUST flow through here.
Steps:
- Resolve URI to filesystem path
- Read file and apply frontmatter patch
- Write patched file
- Refresh CogDoc index
- Boost attentional field
- Emit ledger event (type: "cogdoc.patched")
func (*CogDocService) WriteAndSync ¶ added in v0.3.0
func (s *CogDocService) WriteAndSync(path string, opts CogDocWriteOpts) (*WriteResult, error)
WriteAndSync writes a CogDoc and synchronises all kernel state. This is the canonical write path — every CogDoc creation or overwrite MUST flow through here.
Steps:
- Write file via WriteCogDoc()
- Resolve absolute path
- Refresh CogDoc index (full rebuild)
- Boost attentional field for the new/updated path
- Emit ledger event (type: "cogdoc.written")
type CogDocWriteOpts ¶ added in v0.3.0
type CogDocWriteOpts struct {
Title string
Content string
Tags []string
Status string // default "active"
DocType string // e.g. "link", "conversation", "insight"
Source string // e.g. "discord", "chatgpt"
URL string // optional URL field
SourceID string // dedup key
Extra map[string]string // additional frontmatter fields
}
CogDocWriteOpts holds options for writing a CogDoc via the internal API.
func ApplyMembraneDecision ¶ added in v0.3.0
func ApplyMembraneDecision(defaultMemPath string, opts CogDocWriteOpts, decision IngestionResult) (string, CogDocWriteOpts, bool)
type CoherenceReport ¶
type CoherenceReport struct {
Pass bool `json:"pass"`
Results []ValidationResult `json:"results"`
Timestamp string `json:"timestamp"`
}
CoherenceReport aggregates all validation results from a single pass.
func RunCoherence ¶
func RunCoherence(cfg *Config, nucleus *Nucleus, idxArgs ...*CogDocIndex) *CoherenceReport
RunCoherence executes the 4-layer validation stack and returns a report. An optional *CogDocIndex enables Layer 4 dead-reference detection; without it Layer 4 passes trivially (maintaining backward compatibility).
type CompletionRequest ¶
type CompletionRequest struct {
// SystemPrompt carries nucleus content: identity, role, self-model.
SystemPrompt string `json:"system_prompt"`
// Messages is the conversation history in the current foveal window.
Messages []ProviderMessage `json:"messages"`
// Context is the assembled foveal content from the attentional field.
Context []ContextItem `json:"context,omitempty"`
// MaxTokens is the generation limit.
MaxTokens int `json:"max_tokens,omitempty"`
// Temperature controls randomness [0.0, 1.0].
Temperature *float64 `json:"temperature,omitempty"`
// TopP is the nucleus sampling parameter.
TopP *float64 `json:"top_p,omitempty"`
// Stop sequences that terminate generation.
Stop []string `json:"stop,omitempty"`
// Tools defines MCP tool definitions the model can invoke.
Tools []ToolDefinition `json:"tools,omitempty"`
// ToolChoice constrains tool use: "auto", "none", "required", or a name.
ToolChoice string `json:"tool_choice,omitempty"`
// ModelOverride, when non-empty, instructs the provider to use this model
// instead of its configured default. Set by --model flag or request body.
ModelOverride string `json:"model_override,omitempty"`
// InteractionID links the request back to the canonical ingress CogBlock.
InteractionID string `json:"interaction_id,omitempty"`
// Metadata carries routing/ledger information not sent to the model.
Metadata RequestMetadata `json:"metadata"`
}
CompletionRequest carries the assembled context package to the model. This is the output of the attentional field, not raw chat input.
type CompletionResponse ¶
type CompletionResponse struct {
Content string `json:"content"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
StopReason string `json:"stop_reason"` // "end_turn" | "max_tokens" | "tool_use"
Usage TokenUsage `json:"usage"`
ProviderMeta ProviderMeta `json:"provider_meta"`
}
CompletionResponse is what a Provider returns from Complete().
type Config ¶
type Config struct {
// WorkspaceRoot is the absolute path to the cog-workspace root.
WorkspaceRoot string
// CogDir is WorkspaceRoot/.cog
CogDir string
// Port the HTTP API listens on. Default: 6931 (ln(2) × 10⁴).
Port int
// BindAddr is the interface the HTTP API binds to.
// Default: "127.0.0.1" (loopback-only). Set to "0.0.0.0" to listen
// on all interfaces — required for pod/LAN/Tailnet deployments.
// Users opting in to non-loopback binds are expected to handle the
// network boundary themselves (trusted network, VPN, firewall).
BindAddr string
// ConsolidationInterval is how often the consolidation loop fires (seconds).
ConsolidationInterval int
// HeartbeatInterval is the dormant-state heartbeat cadence (seconds).
HeartbeatInterval int
// SalienceDaysWindow is the git history window for salience scoring.
SalienceDaysWindow int
// OutputReserve is tokens reserved for model generation (subtracted from budget).
OutputReserve int
// TRMWeightsPath is the path to the TRM binary weights file.
// If empty, TRM is disabled and keyword+salience scoring is used.
TRMWeightsPath string
// TRMEmbeddingsPath is the path to the TRM embedding index binary.
TRMEmbeddingsPath string
// TRMChunksPath is the path to the TRM chunk metadata JSON.
TRMChunksPath string
// OllamaEmbedEndpoint is the Ollama /api/embeddings endpoint URL.
// Default: http://localhost:11434
OllamaEmbedEndpoint string
// OllamaEmbedModel is the embedding model name for Ollama.
// Default: nomic-embed-text
OllamaEmbedModel string
// ToolCallValidationEnabled gates runtime validation for model-emitted tool calls.
// Providers that advertise CapToolUse are trusted and skip this guardrail.
ToolCallValidationEnabled bool
// DigestPaths maps stream tailer adapter names to JSONL file/directory paths.
// Empty map means external digestion is disabled.
DigestPaths map[string]string
// KernelLogPath overrides the default per-workspace kernel slog JSONL sink
// at .cog/run/kernel.log.jsonl. Leave empty for the default.
KernelLogPath string
LocalModel string
// contains filtered or unexported fields
}
Config holds all runtime configuration for the v3 kernel.
func DefaultKernelYAML ¶ added in v0.3.0
DefaultKernelYAML returns a kernelConfig with no overrides (pure defaults). Used for the `include_defaults` read surface.
func LoadConfig ¶
LoadConfig builds a Config from flags + environment + .cog/config/kernel.yaml. Precedence: flag > env > file > default.
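The flag > env > file > default rule amounts to picking the first source that set a value. A self-contained sketch of that resolution for a single string field (the `resolve` helper is illustrative; LoadConfig handles many fields and typed values):

```go
package main

import "fmt"

// resolve picks the first non-empty value in precedence order,
// matching LoadConfig's flag > env > file > default rule.
func resolve(flagVal, envVal, fileVal, defaultVal string) string {
	for _, v := range []string{flagVal, envVal, fileVal} {
		if v != "" {
			return v
		}
	}
	return defaultVal
}

func main() {
	// env overrides the file; the unset flag is skipped.
	fmt.Println(resolve("", "0.0.0.0", "127.0.0.1", "127.0.0.1")) // 0.0.0.0
	// nothing set anywhere: fall through to the default.
	fmt.Println(resolve("", "", "", "6931")) // 6931
}
```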
func ResolveFromKernelConfig ¶ added in v0.3.0
ResolveFromKernelConfig projects a kernelConfig into the effective *Config the daemon would load, applying LoadConfig's default / override semantics. Used for `effective_config` surfaces without needing to touch disk.
type ConfigDiffEntry ¶ added in v0.3.0
type ConfigDiffEntry struct {
Field string `json:"field"`
Before any `json:"before"`
After any `json:"after"`
}
ConfigDiffEntry describes a single field that changed between two configs.
type ConfigViolation ¶ added in v0.3.0
type ConfigViolation struct {
Field string `json:"field"`
Rule string `json:"rule"`
Got any `json:"got,omitempty"`
}
ConfigViolation records a single validation failure.
type ConsolidationAction ¶
func (ConsolidationAction) Run ¶
func (a ConsolidationAction) Run() (int, error)
type ConstellationBridge ¶
type ConstellationBridge interface {
EmitHeartbeat(payload KernelHeartbeatPayload) (HeartbeatReceipt, error)
TrustSnapshot() ConstellationTrustSnapshot
Start(ctx context.Context) error
Stop()
}
ConstellationBridge defines the kernel-side integration point for constellation.
type ConstellationTrustSnapshot ¶
type ConstellationTrustSnapshot struct {
SelfCoherencePass bool `json:"self_coherence_pass"`
SelfTrustScore float64 `json:"self_trust_score"`
PeerTrustMean float64 `json:"peer_trust_mean"`
PeerCount int `json:"peer_count"`
TrustedPeerCount int `json:"trusted_peer_count"`
ConstellationHealthy bool `json:"constellation_healthy"`
Timestamp time.Time `json:"timestamp"`
}
ConstellationTrustSnapshot summarizes current local and peer trust state.
type ConsumerCursor ¶ added in v0.3.0
type ConsumerCursor struct {
ConsumerID string `json:"consumer_id"`
BusID string `json:"bus_id"`
LastAckedSeq int64 `json:"last_acked_seq"`
ConnectedAt time.Time `json:"connected_at"`
LastAckAt time.Time `json:"last_ack_at"`
Stale bool `json:"stale"`
}
ConsumerCursor tracks a consumer's position in a bus event stream. Persisted to {bus_id}.cursors.jsonl alongside events.
JSON field names MUST stay identical to root for byte-compat (the cursor objects are what GET /v1/bus/consumers returns under "consumers").
type ConsumerEntry ¶ added in v0.2.0
type ConsumerEntry struct {
Path string `yaml:"path" json:"path"`
Type string `yaml:"type" json:"type"` // json, sed, plist
JSONPath string `yaml:"jsonpath,omitempty" json:"jsonpath,omitempty"`
Template string `yaml:"template,omitempty" json:"template,omitempty"`
Match string `yaml:"match,omitempty" json:"match,omitempty"`
Replace string `yaml:"replace,omitempty" json:"replace,omitempty"`
Key string `yaml:"key,omitempty" json:"key,omitempty"`
}
ConsumerEntry declares a file that references this service's port.
type ConsumerRegistry ¶ added in v0.3.0
type ConsumerRegistry struct {
// contains filtered or unexported fields
}
ConsumerRegistry manages consumer cursors for all buses. Thread-safe — all public methods acquire the mutex.
func NewConsumerRegistry ¶ added in v0.3.0
func NewConsumerRegistry(dataDir string) *ConsumerRegistry
NewConsumerRegistry constructs an empty registry that will persist cursor updates to dataDir. If dataDir is empty, persistence is disabled (tests).
func (*ConsumerRegistry) Ack ¶ added in v0.3.0
func (cr *ConsumerRegistry) Ack(busID, consumerID string, seq int64) (*ConsumerCursor, error)
Ack advances a consumer's cursor. Returns the updated cursor. Monotonic — ignores seq <= current LastAckedSeq. Returns error if the bus/consumer is unknown (matches root's semantics for /v1/bus/{id}/ack).
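The monotonic guard is the important part: a replayed or out-of-order ack can never move the cursor backward. A minimal self-contained sketch (the `cursor` type is an illustrative reduction of ConsumerCursor):

```go
package main

import "fmt"

// cursor tracks a consumer's acked position, like ConsumerCursor.
type cursor struct {
	lastAckedSeq int64
}

// ack advances the cursor monotonically: seq values at or below the
// current position are ignored, so replays and out-of-order acks are
// safe, mirroring ConsumerRegistry.Ack.
func (c *cursor) ack(seq int64) bool {
	if seq <= c.lastAckedSeq {
		return false
	}
	c.lastAckedSeq = seq
	return true
}

func main() {
	c := &cursor{}
	fmt.Println(c.ack(5), c.ack(3), c.ack(7)) // true false true
	fmt.Println(c.lastAckedSeq)               // 7
}
```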
func (*ConsumerRegistry) GetOrCreate ¶ added in v0.3.0
func (cr *ConsumerRegistry) GetOrCreate(busID, consumerID string) *ConsumerCursor
GetOrCreate returns the cursor for a consumer, creating one at position 0 if it doesn't exist.
func (*ConsumerRegistry) List ¶ added in v0.3.0
func (cr *ConsumerRegistry) List(busID string) []*ConsumerCursor
List returns all cursors, optionally filtered by busID (empty = all). Returned cursors are copies; modifying them has no effect on the registry.
func (*ConsumerRegistry) LoadFromDisk ¶ added in v0.3.0
func (cr *ConsumerRegistry) LoadFromDisk() error
LoadFromDisk reads all cursor files and reconstructs the latest state per consumer.
func (*ConsumerRegistry) Remove ¶ added in v0.3.0
func (cr *ConsumerRegistry) Remove(consumerID string) bool
Remove deletes a consumer's cursor by consumer ID across all buses. Returns true if at least one cursor was removed.
type ContainerConfig ¶
type ContainerRuntime ¶
type ContainerRuntime interface {
Start(image string, config ContainerConfig) (containerID string, err error)
Stop(containerID string) error
Status(containerID string) (ContainerStatus, error)
Logs(containerID string, follow bool) (io.ReadCloser, error)
Exec(containerID string, command []string) ([]byte, error)
Pull(image string) error
}
type ContainerStatus ¶
type ContentPart ¶
type ContentPart struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ImageURL string `json:"image_url,omitempty"`
}
ContentPart is a structured content element preserving multi-modal data (text and images) that would be lost by the text-only Content field.
type ContentType ¶ added in v0.3.0
type ContentType string
ContentType is the semantic classification assigned after decomposition.
const (
	ContentPaper      ContentType = "paper"
	ContentRepo       ContentType = "repo"
	ContentVideo      ContentType = "video"
	ContentArticle    ContentType = "article"
	ContentDiscussion ContentType = "discussion"
	ContentTool       ContentType = "tool"
	ContentUnknown    ContentType = "unknown"
)
type ContextBlock ¶ added in v0.2.0
type ContextBlock struct {
Name string `json:"name"` // e.g. "nucleus", "project", "knowledge", "node", "field", "events"
Tier int `json:"tier"` // 0=fixed, 1=priority, 2=flexible, 3=expendable
Stability int `json:"stability"` // 0-100: higher = less likely to change between turns (KV cache hint)
Content string `json:"content"` // Rendered markdown/text
Tokens int `json:"tokens"` // Estimated token count
Hash string `json:"hash"` // Content hash for change detection
}
ContextBlock is a single named section of the context frame.
func NewBlock ¶ added in v0.2.0
func NewBlock(name, content string) ContextBlock
NewBlock creates a ContextBlock with defaults from the tier/stability maps.
type ContextFrame ¶ added in v0.2.0
type ContextFrame struct {
Blocks []ContextBlock `json:"blocks"`
Budget int `json:"budget"`
UsedTokens int `json:"used_tokens"`
Anchor string `json:"anchor,omitempty"`
Goal string `json:"goal,omitempty"`
}
ContextFrame is the structured output of the foveated context rendering pipeline. Each block has a name, tier (priority), stability (KV cache hint), and content.
func (*ContextFrame) FitBudget ¶ added in v0.2.0
func (f *ContextFrame) FitBudget(budget int)
FitBudget evicts lowest-tier blocks until total tokens <= budget. Tier 0 blocks are never removed. Among blocks of equal tier, the least stable block is evicted first.
func (*ContextFrame) Render ¶ added in v0.2.0
func (f *ContextFrame) Render() string
Render serializes the frame as HTML comment blocks for hook injection. Blocks are sorted by stability descending (most stable first — KV cache friendly). Each block is emitted as:
<!-- block:{tier}:{name} hash:{hash} tokens:{tokens} stability:{stability} -->
{content}
---
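The FitBudget eviction order can be sketched as follows. This is a minimal self-contained illustration, not the actual implementation; the reduced `block` type and the `fitBudget` helper are hypothetical stand-ins for ContextBlock and the method:

```go
package main

import (
	"fmt"
	"sort"
)

// block mirrors only the ContextBlock fields that eviction cares about.
type block struct {
	Name      string
	Tier      int // 0=fixed .. 3=expendable
	Stability int // 0-100
	Tokens    int
}

// fitBudget evicts the most expendable (highest tier number), least
// stable blocks until the total token count fits the budget.
// Tier 0 blocks are never removed.
func fitBudget(blocks []block, budget int) []block {
	total := 0
	for _, b := range blocks {
		total += b.Tokens
	}
	// Eviction order: highest tier first, then least stable first.
	order := make([]int, len(blocks))
	for i := range order {
		order[i] = i
	}
	sort.Slice(order, func(a, b int) bool {
		x, y := blocks[order[a]], blocks[order[b]]
		if x.Tier != y.Tier {
			return x.Tier > y.Tier
		}
		return x.Stability < y.Stability
	})
	evicted := map[int]bool{}
	for _, i := range order {
		if total <= budget {
			break
		}
		if blocks[i].Tier == 0 {
			continue // fixed blocks survive regardless of budget
		}
		evicted[i] = true
		total -= blocks[i].Tokens
	}
	var kept []block
	for i, b := range blocks {
		if !evicted[i] {
			kept = append(kept, b)
		}
	}
	return kept
}

func main() {
	kept := fitBudget([]block{
		{"nucleus", 0, 100, 500},
		{"project", 1, 80, 400},
		{"events", 3, 20, 300},
	}, 1000)
	for _, b := range kept {
		fmt.Println(b.Name) // the tier-3 "events" block is evicted first
	}
}
```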
type ContextItem ¶
type ContextItem struct {
ID string `json:"id"` // cog:// URI or memory address
Zone AttentionalZone `json:"zone"`
Salience float64 `json:"salience"`
Content string `json:"content"`
TokenEstimate int `json:"token_estimate,omitempty"`
}
ContextItem is a piece of foveated context assembled by the attentional field.
type ContextPackage ¶
type ContextPackage struct {
// NucleusText is the identity card content — always present (Zone 0).
NucleusText string
// ClientSystem is the client's system prompt if provided (Zone 1).
ClientSystem string
// FovealDocs are the CogDocs selected for injection (Zone 1).
FovealDocs []FovealDoc
// Conversation is the scored/filtered conversation history (Zone 2).
Conversation []ScoredMessage
// CurrentMessage is the latest user message — always present (Zone 3).
CurrentMessage *ProviderMessage
// TotalTokens is the approximate token count of the assembled context.
TotalTokens int
// OutputReserve is tokens reserved for generation.
OutputReserve int
// InjectedPaths is the list of injected absolute file paths (for logging).
InjectedPaths []string
}
ContextPackage is the assembled context for a single chat request.
func (*ContextPackage) FormatForProvider ¶
func (pkg *ContextPackage) FormatForProvider() (string, []ProviderMessage)
FormatForProvider renders a ContextPackage as (systemPrompt, messages) for the provider.
The system prompt is stability-ordered for KV cache optimization: nucleus → client system prompt → CogDocs (by salience descending).
Messages are in chronological order: conversation history → current message.
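The stability ordering of the system prompt can be sketched like this. The reduced `doc` type and `formatSystem` helper are hypothetical; the real method also renders messages and handles token accounting:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// doc is a hypothetical reduced stand-in for FovealDoc.
type doc struct {
	Title    string
	Content  string
	Salience float64
}

// formatSystem sketches the ordering described above: nucleus first,
// then the client system prompt, then CogDocs by salience descending,
// so the most stable content leads (KV cache friendly).
func formatSystem(nucleus, clientSystem string, docs []doc) string {
	sort.Slice(docs, func(i, j int) bool {
		return docs[i].Salience > docs[j].Salience
	})
	parts := []string{nucleus}
	if clientSystem != "" {
		parts = append(parts, clientSystem)
	}
	for _, d := range docs {
		parts = append(parts, d.Content)
	}
	return strings.Join(parts, "\n\n")
}

func main() {
	fmt.Println(formatSystem("NUCLEUS", "CLIENT", []doc{
		{"low", "LOW", 0.2},
		{"high", "HIGH", 0.9},
	}))
}
```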
type Conv1D ¶
type Conv1D struct {
Weight [][][]float32 // [channels][1][kernel_size]
Bias []float32 // [channels]
Channels int
Kernel int
}
Conv1D implements depthwise 1D convolution with kernel size K. For single-step inference, we store the last (K-1) inputs as state.
type ConversationQuery ¶ added in v0.3.0
type ConversationQuery struct {
SessionID string
AfterTurn int // turn_index > AfterTurn (0 means no lower bound)
BeforeTurn int // turn_index < BeforeTurn (0 means no upper bound)
Since time.Time // zero means no since-filter
Limit int // default 20, max 200 (turns are bigger than events)
IncludeFull bool // default true — hydrate prompt/response from sidecar
IncludeTools bool // default true — include tool-call transcript
Order string // "asc" (default) | "desc"
}
ConversationQuery parameterises a read over the turn history.
type ConversationQueryResult ¶ added in v0.3.0
type ConversationQueryResult struct {
Count int `json:"count"`
SessionID string `json:"session_id,omitempty"`
Turns []ConversationTurn `json:"turns"`
Truncated bool `json:"truncated"`
NextAfterTurn int `json:"next_after_turn,omitempty"`
}
ConversationQueryResult is the envelope returned to MCP/HTTP callers.
func QueryConversation ¶ added in v0.3.0
func QueryConversation(workspaceRoot string, q ConversationQuery) (*ConversationQueryResult, error)
QueryConversation reads turn.completed events for the given session and (optionally) hydrates each from the sidecar. Returns turns in ascending turn_index order by default.
If q.SessionID is "" the result is empty — caller must scope the read; cross-session scanning is explicitly out of scope for v1 (matches Agent R §9.9: default to current session, scoped reads only).
type ConversationTurn ¶ added in v0.3.0
type ConversationTurn struct {
TurnID string `json:"turn_id"`
TurnIndex int `json:"turn_index"`
SessionID string `json:"session_id"`
Timestamp string `json:"timestamp"`
DurationMs int64 `json:"duration_ms,omitempty"`
Prompt string `json:"prompt"`
PromptTruncated bool `json:"prompt_truncated,omitempty"`
Response string `json:"response"`
ResponseTruncated bool `json:"response_truncated,omitempty"`
ToolCalls []ToolCallRecord `json:"tool_calls,omitempty"`
Provider string `json:"provider,omitempty"`
Model string `json:"model,omitempty"`
Usage TurnUsage `json:"usage,omitempty"`
BlockID string `json:"block_id,omitempty"`
Status string `json:"status,omitempty"`
Error string `json:"error,omitempty"`
LedgerHash string `json:"ledger_hash,omitempty"`
SidecarMissing bool `json:"sidecar_missing,omitempty"`
}
ConversationTurn is the reader-side projection of a turn.
type DaemonHealth ¶
type DaemonState ¶
type DebugBudget ¶
type DebugClientInfo ¶
type DebugContextView ¶
type DebugContextView struct {
Zones []DebugZone `json:"zones"`
Budget DebugBudget `json:"budget"`
}
DebugContextView shows the current context window as stability-ordered zones.
type DebugEngineInfo ¶
type DebugEngineInfo struct {
NucleusTokens int `json:"nucleus_tokens"`
ClientSystemTokens int `json:"client_system_tokens"`
CogDocsScored int `json:"cogdocs_scored"`
CogDocsInjected int `json:"cogdocs_injected"`
CogDocsInjectedPaths []string `json:"cogdocs_injected_paths"`
ConversationTurnsIn int `json:"conversation_turns_in"`
ConversationTurnsKept int `json:"conversation_turns_kept"`
CurrentMessageTokens int `json:"current_message_tokens"`
TotalTokens int `json:"total_tokens"`
Budget int `json:"budget"`
OutputReserve int `json:"output_reserve"`
FlexBudgetUsed int `json:"flex_budget_used"`
}
type DebugProviderInfo ¶
type DebugSnapshot ¶
type DebugSnapshot struct {
Timestamp time.Time `json:"timestamp"`
Client DebugClientInfo `json:"client"`
Engine DebugEngineInfo `json:"engine"`
Provider DebugProviderInfo `json:"provider"`
Context DebugContextView `json:"context"`
}
DebugSnapshot captures the full pipeline state of a single chat request.
type DebugZone ¶
type DebugZone struct {
Zone string `json:"zone"`
Tokens int `json:"tokens"`
ContentPreview string `json:"content_preview,omitempty"`
Items []DebugZoneItem `json:"items,omitempty"`
}
type DebugZoneItem ¶
type DebugZoneItem struct {
ID string `json:"id,omitempty"`
Title string `json:"title,omitempty"`
Role string `json:"role,omitempty"`
Tokens int `json:"tokens"`
Salience float64 `json:"salience,omitempty"`
Recency float64 `json:"recency,omitempty"`
Relevance float64 `json:"relevance,omitempty"`
Reason string `json:"reason,omitempty"`
Preview string `json:"preview"`
}
type Decomposer ¶ added in v0.3.0
type Decomposer interface {
// CanDecompose reports whether this decomposer can handle the request.
CanDecompose(req *IngestRequest) bool
// Decompose extracts structured content from the request.
Decompose(ctx context.Context, req *IngestRequest) (*IngestResult, error)
}
Decomposer is implemented by format-specific processors that know how to break raw input into a normalized IngestResult.
type DedupChecker ¶ added in v0.3.0
type DedupChecker struct {
// contains filtered or unexported fields
}
DedupChecker scans the inbox sector for existing CogDocs to prevent duplicate ingestion of the same source material.
func NewDedupChecker ¶ added in v0.3.0
func NewDedupChecker(workspaceRoot string) *DedupChecker
NewDedupChecker creates a checker that scans the given workspace's inbox.
func (*DedupChecker) IsDuplicate ¶ added in v0.3.0
func (d *DedupChecker) IsDuplicate(sourceID string) bool
IsDuplicate checks whether a CogDoc with the given source ID already exists in the inbox. It scans frontmatter of existing files for a matching source_id field. This is O(n) over inbox size — acceptable for the expected volume. A future optimization could use an index.
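The O(n) frontmatter scan amounts to the following. A minimal sketch with the frontmatters held in memory; the real checker reads files from the inbox sector, and `isDuplicate` here is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// isDuplicate scans each document's YAML frontmatter for a matching
// source_id field. Linear in inbox size, as documented.
func isDuplicate(frontmatters []string, sourceID string) bool {
	needle := "source_id: " + sourceID
	for _, fm := range frontmatters {
		for _, line := range strings.Split(fm, "\n") {
			if strings.TrimSpace(line) == needle {
				return true
			}
		}
	}
	return false
}

func main() {
	inbox := []string{"title: a\nsource_id: https://example.com/x"}
	fmt.Println(isDuplicate(inbox, "https://example.com/x")) // true
	fmt.Println(isDuplicate(inbox, "https://example.com/y")) // false
}
```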
type DefaultMembranePolicy ¶ added in v0.3.0
type DefaultMembranePolicy struct{}
func (DefaultMembranePolicy) Evaluate ¶ added in v0.3.0
func (p DefaultMembranePolicy) Evaluate(block *CogBlock) IngestionResult
type Diagnostic ¶
type Diagnostic struct {
Rule string `json:"rule"`
Expected string `json:"expected"`
Actual string `json:"actual"`
Suggestion string `json:"suggestion"`
Severity string `json:"severity"` // "error", "warning", "info"
}
Diagnostic carries the details of a validation failure.
type DocRef ¶
type DocRef struct {
// URI is the target cog:// URI.
URI string `yaml:"uri"`
// Rel is the relationship label (e.g. "related", "supersedes", "depends-on").
Rel string `yaml:"rel"`
}
DocRef is an explicit typed reference declared in a CogDoc's frontmatter.
type EmbeddingIndex ¶
EmbeddingIndex holds the full embedding matrix and chunk metadata.
func LoadEmbeddingIndex ¶
func LoadEmbeddingIndex(embPath, chunksPath string) (*EmbeddingIndex, error)
LoadEmbeddingIndex loads the binary embedding file and chunk metadata JSON.
func (*EmbeddingIndex) CosineTopK ¶
func (idx *EmbeddingIndex) CosineTopK(query []float32, k int) []IndexResult
CosineTopK returns the top-K chunks by cosine similarity to the query. The query should already be L2-normalized (as are the stored embeddings).
func (*EmbeddingIndex) CosineTopKIndices ¶
func (idx *EmbeddingIndex) CosineTopKIndices(query []float32, k int) ([]int, [][]float32)
CosineTopKIndices returns the indices and embeddings of the top-K chunks. Useful for pre-filtering before TRM scoring.
func (*EmbeddingIndex) Size ¶
func (idx *EmbeddingIndex) Size() int
Size returns the number of chunks in the index.
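Because both the stored embeddings and the query are L2-normalized, cosine similarity reduces to a dot product. A self-contained sketch of the top-K scan (the `topK` helper is hypothetical, not the actual method):

```go
package main

import (
	"fmt"
	"sort"
)

// topK returns the indices of the k rows with the highest dot
// product against the query. With L2-normalized vectors this is
// exactly cosine-similarity ranking.
func topK(rows [][]float32, query []float32, k int) []int {
	type scored struct {
		idx   int
		score float32
	}
	scores := make([]scored, len(rows))
	for i, row := range rows {
		var dot float32
		for j := range row {
			dot += row[j] * query[j]
		}
		scores[i] = scored{i, dot}
	}
	sort.Slice(scores, func(a, b int) bool { return scores[a].score > scores[b].score })
	if k > len(scores) {
		k = len(scores)
	}
	out := make([]int, k)
	for i := 0; i < k; i++ {
		out[i] = scores[i].idx
	}
	return out
}

func main() {
	rows := [][]float32{{1, 0}, {0, 1}, {0.6, 0.8}}
	fmt.Println(topK(rows, []float32{0, 1}, 2)) // [1 2]
}
```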
type ErrBrokerClosed ¶ added in v0.3.0
type ErrBrokerClosed struct{}
ErrBrokerClosed is returned by Subscribe after Close has been called.
func (ErrBrokerClosed) Error ¶ added in v0.3.0
func (ErrBrokerClosed) Error() string
type ErrTooManySubscribers ¶ added in v0.3.0
type ErrTooManySubscribers struct{ Max int }
ErrTooManySubscribers is returned when Subscribe would exceed maxSubscribers.
func (ErrTooManySubscribers) Error ¶ added in v0.3.0
func (e ErrTooManySubscribers) Error() string
type EventBroker ¶ added in v0.3.0
type EventBroker struct {
// contains filtered or unexported fields
}
EventBroker fans out kernel events to live subscribers.
func CurrentBroker ¶ added in v0.3.0
func CurrentBroker() *EventBroker
CurrentBroker returns the most recently registered broker, or nil. Tests use this as a convenience — production code should hold a direct *Process reference and call p.Broker().
func NewEventBroker ¶ added in v0.3.0
func NewEventBroker(opts EventBrokerOptions) *EventBroker
NewEventBroker constructs a broker with sane defaults.
func (*EventBroker) Close ¶ added in v0.3.0
func (b *EventBroker) Close() error
Close unblocks all current subscribers, unregisters the broker so AppendEvent stops publishing to it, and prevents new subscriptions. Idempotent.
func (*EventBroker) Publish ¶ added in v0.3.0
func (b *EventBroker) Publish(env *EventEnvelope)
Publish fans env out to matching subscribers. Non-blocking: a full sub channel increments its drop counter rather than stalling the ledger. Safe after Close — returns silently.
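The non-blocking fan-out is the standard select-with-default pattern. A minimal sketch, with a hypothetical reduced `sub` type in place of the real subscriber:

```go
package main

import "fmt"

// sub is a hypothetical minimal subscriber: a buffered channel plus
// a drop counter.
type sub struct {
	ch    chan string
	drops int
}

// publish delivers to each subscriber without blocking: if the
// channel buffer is full, the event is dropped and counted rather
// than stalling the publisher (here, the ledger).
func publish(subs []*sub, event string) {
	for _, s := range subs {
		select {
		case s.ch <- event:
		default:
			s.drops++ // slow consumer: drop, don't stall
		}
	}
}

func main() {
	s := &sub{ch: make(chan string, 1)}
	publish([]*sub{s}, "e1")
	publish([]*sub{s}, "e2") // buffer full: dropped
	fmt.Println(len(s.ch), s.drops) // 1 1
}
```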
func (*EventBroker) RingContainsHash ¶ added in v0.3.0
func (b *EventBroker) RingContainsHash(hash string) bool
RingContainsHash reports whether the given hash is present in the ring.
func (*EventBroker) RingOldestHash ¶ added in v0.3.0
func (b *EventBroker) RingOldestHash() string
RingOldestHash returns the hash of the oldest envelope currently in the ring, or "" if empty. Used by SSE handlers to decide whether a Last-Event-ID resume point fits inside the ring or needs to fall through to disk.
func (*EventBroker) RingReplay ¶ added in v0.3.0
func (b *EventBroker) RingReplay(filter EventFilter, since time.Time) []*EventEnvelope
RingReplay returns a filtered copy of the current ring contents. Callers use this to drain the replay window before consuming live events. Events with a parsed timestamp earlier than `since` are dropped when since is non-zero.
func (*EventBroker) Subscribe ¶ added in v0.3.0
func (b *EventBroker) Subscribe(ctx context.Context, filter EventFilter) (*Subscription, error)
Subscribe registers a new live subscriber. The returned channel streams matching events; when the passed ctx is cancelled or Cancel is called, the subscription is removed and the channel closed.
Replay is handled by the caller via RingReplay — keeping replay out of Subscribe avoids a race where events can arrive on the channel before the caller has finished streaming the ring snapshot.
func (*EventBroker) SubscriberCount ¶ added in v0.3.0
func (b *EventBroker) SubscriberCount() int
SubscriberCount returns the current number of live subscribers.
type EventBrokerOptions ¶ added in v0.3.0
EventBrokerOptions configures NewEventBroker.
type EventEnvelope ¶
type EventEnvelope struct {
HashedPayload EventPayload `json:"hashed_payload"`
Metadata EventMetadata `json:"metadata,omitempty"`
}
EventEnvelope is the canonical on-disk event shape.
func GetLastEvent ¶
func GetLastEvent(workspaceRoot, sessionID string) (*EventEnvelope, error)
GetLastEvent returns the last event in a session ledger, or nil if empty.
func GetLastGlobalEvent ¶
func GetLastGlobalEvent(workspaceRoot, currentSessionID string) (*EventEnvelope, error)
GetLastGlobalEvent returns the last event from the most-recently-modified session in .cog/ledger/ that is NOT currentSessionID. Used on process startup to chain the new session's genesis event to the previous session's final event, maintaining a continuous cross-session ledger. Returns nil (without error) if there is no prior session.
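The cross-session chaining reduces to folding the prior event's hash into the new payload before hashing. A sketch under stated assumptions: `chainHash` and the string concatenation stand in for the real canonical-JSON hashing of EventPayload:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainHash sketches how a new session's genesis event links to the
// previous session's final event: the prior hash becomes part of the
// hashed payload, giving one continuous cross-session chain.
func chainHash(priorHash, payload string) string {
	sum := sha256.Sum256([]byte(priorHash + "|" + payload))
	return hex.EncodeToString(sum[:])
}

func main() {
	prev := chainHash("", "session-a final event")
	genesis := chainHash(prev, "session-b genesis")
	fmt.Println(len(genesis)) // 64 hex characters
}
```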
type EventFilter ¶ added in v0.3.0
type EventFilter struct {
// SessionID filters to a single session when non-empty.
SessionID string
// EventTypePattern is the raw pattern (exact, "prefix.*", or "*").
// It is compiled once on Subscribe via compileEventTypeMatcher.
EventTypePattern string
// Source filters on envelope.Metadata.Source (e.g. "kernel-v3").
Source string
// Since, when non-zero, drops envelopes whose parsed Timestamp is
// before it. Used for live-only tailing; replay uses the ring +
// disk path instead.
Since time.Time
}
EventFilter selects which events a subscriber receives. Zero values are no-ops (don't filter on that field).
type EventMetadata ¶
type EventMetadata struct {
Hash string `json:"hash,omitempty"`
Seq int64 `json:"seq,omitempty"`
Source string `json:"source,omitempty"`
}
EventMetadata is NOT included in the hash (for extensibility).
type EventPayload ¶
type EventPayload struct {
Type string `json:"type"`
Timestamp string `json:"timestamp"`
SessionID string `json:"session_id"`
PriorHash string `json:"prior_hash,omitempty"`
Data map[string]interface{} `json:"data,omitempty"`
}
EventPayload is the content that gets canonicalized and hashed.
type EventQuery ¶ added in v0.3.0
type EventQuery struct {
SessionID string
EventTypePattern string
Source string
Since time.Time
Until time.Time
Limit int
Order string // "asc" or "desc" (default "desc")
}
EventQuery is the input shape for QueryEvents.
type EventQueryResult ¶ added in v0.3.0
type EventQueryResult struct {
Count int `json:"count"`
Events []LedgerEvent `json:"events"`
Truncated bool `json:"truncated"`
NextBefore string `json:"next_before,omitempty"` // RFC3339 timestamp for pagination
}
EventQueryResult is the QueryEvents output.
func QueryEvents ¶ added in v0.3.0
func QueryEvents(workspaceRoot string, q EventQuery) (*EventQueryResult, error)
QueryEvents reads matching events from the hash-chained ledger. Internally this delegates to QueryLedger with verify_chain=false and then applies the extra filters (source, until, order). We pull enough rows from QueryLedger to honour the caller's limit after filtering.
type ExperimentConfig ¶
type ExperimentConfig struct {
Type string `yaml:"type"`
Title string `yaml:"title"`
Created string `yaml:"created"`
Run struct {
PromptsFile string `yaml:"prompts_file"` // path to benchmark_prompts.json
Model string `yaml:"model"` // e.g. "qwen3.5:9b"
Budget int `yaml:"budget"` // token budget (0 = default 4096)
Method string `yaml:"method"` // e.g. "keyword-match"
RegressionThreshold float64 `yaml:"regression_threshold"` // recall drop that triggers flag (default 0.1)
BaselineRun string `yaml:"baseline_run"` // path to previous result CogDoc for comparison
} `yaml:"run"`
}
ExperimentConfig is the YAML frontmatter of an experiment CogDoc.
type ExperimentDelta ¶
ExperimentDelta is the change in aggregate metrics vs a baseline.
type FileScore ¶
FileScore pairs a file path with its total salience score.
func RankFilesBySalience ¶
func RankFilesBySalience(repoPath, scope string, limit, daysWindow int, cfg *SalienceConfig) ([]FileScore, error)
RankFilesBySalience walks scope and returns all .md/.cog.md files sorted by score.
Uses a single-pass commit walk: iterates the git log once and records which scope-files each commit touched. This is O(commits × files_per_commit) instead of the old O(files × commits) approach that ran a filtered log per file.
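The single-pass walk can be sketched as follows. The reduced `commit` type, the `rankBySalience` helper, and the flat +1 scoring are all hypothetical simplifications; the real scorer weights recency and other factors from SalienceConfig:

```go
package main

import (
	"fmt"
	"sort"
)

// commit is a hypothetical reduced view of a git commit: the set of
// in-scope files it touched.
type commit struct {
	files []string
}

// rankBySalience iterates the log exactly once, accumulating a
// per-file score — O(commits × files_per_commit), not a filtered
// log per file.
func rankBySalience(log []commit) []string {
	score := map[string]int{}
	for _, c := range log {
		for _, f := range c.files {
			score[f]++ // stand-in for the real recency-weighted score
		}
	}
	files := make([]string, 0, len(score))
	for f := range score {
		files = append(files, f)
	}
	sort.Slice(files, func(i, j int) bool { return score[files[i]] > score[files[j]] })
	return files
}

func main() {
	ranked := rankBySalience([]commit{
		{files: []string{"a.md", "b.md"}},
		{files: []string{"a.md"}},
	})
	fmt.Println(ranked[0]) // a.md — touched by more commits
}
```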
type FileWatcher ¶
FileWatcher polls a file for newly appended newline-delimited content.
func NewFileWatcher ¶
func NewFileWatcher(pollInterval time.Duration) *FileWatcher
NewFileWatcher creates a polling file watcher.
type FovealDoc ¶
type FovealDoc struct {
URI string
Path string
Title string
Content string
Summary string
SchemaIssues []string
Salience float64
Tokens int
Reason string // "high-salience", "query-match", or "both"
}
FovealDoc is a single CogDoc selected for context injection.
type Gate ¶
type Gate struct {
// contains filtered or unexported fields
}
Gate routes events into the attentional field.
func NewGate ¶
func NewGate(field *AttentionalField, cfg *Config) *Gate
NewGate constructs a Gate backed by the given attentional field.
func (*Gate) Process ¶
func (g *Gate) Process(evt *GateEvent) *GateResult
Process routes an event through the gate and returns a routing decision.
type GateEvent ¶
type GateEvent struct {
// Type is the event category (e.g. "user.message", "tool.call", "heartbeat").
Type string
// Content is the raw content of the event (e.g. user message text).
Content string
// Timestamp records when the event arrived.
Timestamp time.Time
// SessionID is the originating session (empty for internal events).
SessionID string
// Data holds type-specific structured data.
Data map[string]interface{}
}
GateEvent is an input to the attentional gate.
func NewGateEventFromBlock ¶
NewGateEventFromBlock builds a GateEvent from an ingress CogBlock while keeping GateEvent as the active process-routing primitive.
type GateResult ¶
type GateResult struct {
// Elevated is the set of memory files to bring into the fovea for this event.
Elevated []FileScore
// StateTransition is the suggested next process state (empty = no change).
StateTransition ProcessState
// Accepted records whether the event was accepted into the fovea.
Accepted bool
}
GateResult is the gate's routing decision for an event.
type GetAgentRequest ¶ added in v0.3.0
GetAgentRequest is the normalized input to the get-agent helper.
func (*GetAgentRequest) Normalize ¶ added in v0.3.0
func (r *GetAgentRequest) Normalize() error
Normalize fills in defaults and validates the request. It returns ErrAgentInvalidInput for obviously malformed input, and nil otherwise.
type HandoffRegistry ¶ added in v0.3.0
type HandoffRegistry struct {
// contains filtered or unexported fields
}
HandoffRegistry guards in-memory handoff state. The bus is still the source of truth; this struct is how we atomically enforce first-wins on claim and state-order on complete.
The mutex is held across the bus append in mutating methods (via the appendFn callback). Yes, this serializes concurrent offers/claims/completes against disk I/O, but: (a) these operations are low-rate, (b) the append is local disk — typically <10ms — and (c) correctness demands that the bus append and registry mutation commit as one atomic unit, so the derived view can't run ahead of the ground-truth ledger.
func NewHandoffRegistry ¶ added in v0.3.0
func NewHandoffRegistry() *HandoffRegistry
NewHandoffRegistry returns an empty registry.
func (*HandoffRegistry) ApplyClaim ¶ added in v0.3.0
func (r *HandoffRegistry) ApplyClaim(id, claimingSession string, now time.Time, appendFn func() error) (ClaimResult, error)
ApplyClaim is the ATOMIC first-wins check + bus-append path. Semantics:
- If the offer is missing, already claimed/completed, or TTL-expired, returns a ClaimResult with a non-empty Rejection; NO mutation, and appendFn is NOT invoked (the caller emits handoff.claim_rejected independently for audit).
- Otherwise appendFn is invoked WHILE THE LOCK IS HELD. If it errors, the claim is aborted — the in-memory row stays open and the error is returned as the second return value. The "losing" session thus sees an internal error, not a conflict, and can retry.
- On appendFn success, the row transitions to Claimed and the second return value is nil. This makes the claim first-wins on the bus as well as in memory — the whole point of the PR.
Holding a lock across disk I/O is deliberate. Claims are rare, appends are local (<10ms typical), and correctness here beats throughput. See codex review of PR#43.
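The first-wins core of ApplyClaim can be sketched as a self-contained example. The reduced `registry` type and `claim` method are hypothetical; the point is that the state check, the append (appendFn), and the commit happen as one unit under the lock:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// registry sketches first-wins claiming: the in-memory view can
// never run ahead of the ledger because the append commits under
// the same lock as the mutation.
type registry struct {
	mu     sync.Mutex
	claims map[string]string // handoff ID → claiming session
}

func (r *registry) claim(id, session string, appendFn func() error) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, taken := r.claims[id]; taken {
		return errors.New("already claimed") // rejection: no mutation, no append
	}
	if err := appendFn(); err != nil {
		return err // append failed: claim aborted, row stays open
	}
	r.claims[id] = session // commit only after the append succeeded
	return nil
}

func main() {
	r := &registry{claims: map[string]string{}}
	ok := func() error { return nil }
	fmt.Println(r.claim("h1", "sess-a", ok)) // <nil> — first wins
	fmt.Println(r.claim("h1", "sess-b", ok)) // already claimed
}
```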
func (*HandoffRegistry) ApplyComplete ¶ added in v0.3.0
func (r *HandoffRegistry) ApplyComplete(id, completingSession, outcome, notes, nextHandoffID string, now time.Time, appendFn func() error) (*HandoffState, ClaimRejection, error)
ApplyComplete transitions a claimed handoff to completed. Returns:
- (nil, ClaimRejectedOfferNotFound, nil) when unknown → 404.
- (&row, ClaimRejectedOutOfOrder, nil) not claimed yet → 409.
- (nil, "", appendErr) append failed.
- (&row, "", nil) success.
appendFn runs under the lock AFTER the state-order check but BEFORE the Completed transition commits; an error aborts the mutation.
func (*HandoffRegistry) ApplyOffer ¶ added in v0.3.0
func (r *HandoffRegistry) ApplyOffer(h HandoffState, now time.Time, appendFn func() error) (*HandoffState, error)
ApplyOffer records a handoff offer. Idempotent on duplicate IDs — replaces the row; this shouldn't happen in normal flow (IDs are unique) but mirrors the bus semantics (re-emitting the same offer would just append another event).
appendFn, if non-nil, runs under the registry lock BEFORE the mutation is committed. An error aborts the mutation and is returned; on nil-error the in-memory row is installed atomically with the bus append.
func (*HandoffRegistry) Get ¶ added in v0.3.0
func (r *HandoffRegistry) Get(id string) (*HandoffState, bool)
Get returns a copy, or (nil, false).
func (*HandoffRegistry) Len ¶ added in v0.3.0
func (r *HandoffRegistry) Len() int
Len returns the number of tracked handoffs (open + claimed + completed).
func (*HandoffRegistry) Snapshot ¶ added in v0.3.0
func (r *HandoffRegistry) Snapshot() []*HandoffState
Snapshot returns a copy of every handoff row.
type HandoffState ¶ added in v0.3.0
type HandoffState struct {
HandoffID string `json:"handoff_id"`
FromSession string `json:"from_session"`
ToSession string `json:"to_session,omitempty"`
Reason string `json:"reason,omitempty"`
// Full offer payload (mirror of what went on the bus). Kept verbatim so
// claimants can read it back without a separate bus fetch.
OfferPayload map[string]interface{} `json:"offer,omitempty"`
CreatedAt time.Time `json:"created_at"`
TTLSeconds int `json:"ttl_seconds,omitempty"`
ExpiresAt time.Time `json:"expires_at,omitempty"`
State string `json:"state"`
ClaimingSession string `json:"claiming_session,omitempty"`
ClaimedAt time.Time `json:"claimed_at,omitempty"`
CompletingSession string `json:"completing_session,omitempty"`
CompletedAt time.Time `json:"completed_at,omitempty"`
CompletionOutcome string `json:"outcome,omitempty"`
CompletionNotes string `json:"notes,omitempty"`
NextHandoffID string `json:"next_handoff_id,omitempty"`
}
HandoffState is the in-memory row for a handoff.
type HealthChecker ¶
type HealthChecker func(endpoint string, timeout time.Duration) (*DaemonHealth, error)
type HeartbeatReceipt ¶
type HeartbeatReceipt struct {
Hash string `json:"hash,omitempty"`
Timestamp time.Time `json:"timestamp,omitempty"`
PeersSent int `json:"peers_sent"`
}
HeartbeatReceipt summarizes the result of a bridge heartbeat emission.
type IndexResult ¶
IndexResult is a single search result from the embedding index.
type IndexedCogdoc ¶
type IndexedCogdoc struct {
// URI is the canonical cog:// address of this document.
URI string
// Path is the absolute filesystem path.
Path string
// ID is the value of the `id:` frontmatter field (may be empty).
ID string
// Title is the value of the `title:` frontmatter field.
Title string
// Type is the value of the `type:` frontmatter field (e.g. "insight").
Type string
// Tags is the value of the `tags:` frontmatter field.
Tags []string
// Status is the value of the `status:` frontmatter field (e.g. "active").
Status string
// Created is the value of the `created:` frontmatter field (string, any format).
Created string
// Refs are the explicit `refs:` entries in the frontmatter.
Refs []DocRef
// InlineRefs are cog:// URIs found in the document body (extracted by regex).
InlineRefs []string
}
IndexedCogdoc is a lightweight representation of a single CogDoc file, containing only the metadata needed for index lookups and coherence checks.
type InferenceEvent ¶
type InferenceEvent struct {
RequestID string `json:"request_id"`
Timestamp time.Time `json:"timestamp"`
Provider string `json:"provider"`
Model string `json:"model"`
ProcessState string `json:"process_state"`
Usage TokenUsage `json:"usage"`
CostUSD float64 `json:"cost_usd"`
Latency time.Duration `json:"latency"`
RoutingDecision *RoutingDecision `json:"routing_decision,omitempty"`
Escalated bool `json:"escalated"`
Source string `json:"source"`
Success bool `json:"success"`
Error string `json:"error,omitempty"`
}
InferenceEvent is the ledger event recorded for every inference request.
type IngestFormat ¶ added in v0.3.0
type IngestFormat string
IngestFormat describes the structural format of the raw input data.
const (
	FormatURL          IngestFormat = "url"
	FormatConversation IngestFormat = "conversation"
	FormatMessage      IngestFormat = "message"
	FormatDocument     IngestFormat = "document"
)
type IngestPipeline ¶ added in v0.3.0
type IngestPipeline struct {
// contains filtered or unexported fields
}
IngestPipeline orchestrates decomposition by delegating to the first registered Decomposer that can handle a given request.
func NewIngestPipeline ¶ added in v0.3.0
func NewIngestPipeline(workspaceRoot string) *IngestPipeline
NewIngestPipeline creates a pipeline rooted at the given workspace directory.
func (*IngestPipeline) CheckDuplicate ¶ added in v0.3.0
func (p *IngestPipeline) CheckDuplicate(sourceID string) bool
CheckDuplicate returns true if a source ID already exists in the inbox.
func (*IngestPipeline) Ingest ¶ added in v0.3.0
func (p *IngestPipeline) Ingest(ctx context.Context, req *IngestRequest) (*IngestResult, error)
Ingest runs the ingestion pipeline for a single request. It iterates through registered decomposers, using the first one that reports it can handle the request. When no decomposer matches, a minimal result with ContentType=ContentUnknown is returned.
func (*IngestPipeline) Register ¶ added in v0.3.0
func (p *IngestPipeline) Register(d Decomposer)
Register adds a Decomposer to the pipeline. Decomposers are tried in registration order; the first match wins.
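The first-match dispatch can be sketched as a self-contained example. The lower-case `decomposer` interface, `urlDecomposer`, and `ingest` helper are hypothetical reductions of Decomposer and IngestPipeline.Ingest:

```go
package main

import "fmt"

// decomposer is a reduced stand-in for the Decomposer interface.
type decomposer interface {
	canDecompose(data string) bool
	decompose(data string) string
}

type urlDecomposer struct{}

func (urlDecomposer) canDecompose(data string) bool { return len(data) > 4 && data[:4] == "http" }
func (urlDecomposer) decompose(data string) string  { return "url:" + data }

// ingest tries decomposers in registration order; the first match
// wins, and unmatched input falls back to an "unknown" result.
func ingest(ds []decomposer, data string) string {
	for _, d := range ds {
		if d.canDecompose(data) {
			return d.decompose(data)
		}
	}
	return "unknown"
}

func main() {
	ds := []decomposer{urlDecomposer{}}
	fmt.Println(ingest(ds, "https://example.com")) // url:https://example.com
	fmt.Println(ingest(ds, "plain text"))          // unknown
}
```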
type IngestRequest ¶ added in v0.3.0
type IngestRequest struct {
Source IngestSource `json:"source"`
Format IngestFormat `json:"format"`
Data string `json:"data"` // raw material (URL, JSON blob, text)
Metadata map[string]string `json:"metadata"` // optional context (discord_message_id, channel, etc.)
}
IngestRequest is the input to the ingestion pipeline.
type IngestResult ¶ added in v0.3.0
type IngestResult struct {
Title string `json:"title"`
URL string `json:"url,omitempty"`
Domain string `json:"domain,omitempty"`
ContentType ContentType `json:"content_type"`
Tags []string `json:"tags"`
Summary string `json:"summary,omitempty"` // first ~500 chars or abstract
Fields map[string]string `json:"fields,omitempty"` // type-specific extracted fields
Source IngestSource `json:"source"`
SourceID string `json:"source_id,omitempty"` // dedup key (URL, message ID, etc.)
}
IngestResult is the normalized output of decomposition.
type IngestSource ¶ added in v0.3.0
type IngestSource string
IngestSource identifies the origin system of ingested material.
const (
	SourceDiscord IngestSource = "discord"
	SourceChatGPT IngestSource = "chatgpt"
	SourceClaude  IngestSource = "claude"
	SourceGemini  IngestSource = "gemini"
	SourceURL     IngestSource = "url"
	SourceFile    IngestSource = "file"
)
type IngestStatus ¶ added in v0.3.0
type IngestStatus string
IngestStatus tracks where an item sits in the inbox lifecycle.
const (
	StatusRaw        IngestStatus = "raw"
	StatusEnriched   IngestStatus = "enriched"
	StatusIntegrated IngestStatus = "integrated"
)
type IngestionResult ¶ added in v0.3.0
type IngestionResult struct {
Decision AssimilationDecision `json:"decision"`
Block *CogBlock `json:"block,omitempty"`
Reason string `json:"reason,omitempty"`
Provenance BlockProvenance `json:"provenance"`
QuarantineReason string `json:"quarantine_reason,omitempty"`
}
type KernelHeartbeatPayload ¶
type KernelHeartbeatPayload struct {
ProcessState string `json:"process_state"`
FieldSize int `json:"field_size"`
CoherenceFingerprint string `json:"coherence_fingerprint"`
NucleusFingerprint string `json:"nucleus_fingerprint"`
LedgerHead string `json:"ledger_head,omitempty"`
Timestamp time.Time `json:"timestamp"`
}
KernelHeartbeatPayload captures the kernel state exported to the constellation bridge.
type KernelLogEntry ¶ added in v0.3.0
type KernelLogEntry struct {
Time time.Time `json:"time"`
Level string `json:"level"`
Msg string `json:"msg"`
Attrs map[string]any `json:"attrs,omitempty"`
Line json.RawMessage `json:"line"`
}
KernelLogEntry is one parsed row returned to callers. Time/Level/Msg are extracted from the top-level slog fields; Attrs holds everything else (including nested groups) as parsed JSON. Line is the raw source line for callers that want bit-exact fidelity.
type KernelLogFile ¶ added in v0.3.0
type KernelLogFile struct {
Path string `json:"path"`
SizeBytes int64 `json:"size_bytes"`
Exists bool `json:"exists"`
}
KernelLogFile reports the state of the on-disk sink at query time. Returned even when no entries match so callers can distinguish "no sink wired" from "sink wired but nothing matched".
type KernelLogQuery ¶ added in v0.3.0
type KernelLogQuery struct {
// Limit bounds the returned entries. 0 → DefaultKernelLogLimit;
// > MaxKernelLogLimit is clamped.
Limit int
// Level filters by exact (case-insensitive) level match. One of
// "", "debug", "info", "warn", "error". v1 does NOT support
// ">=warn"-style comparator filtering (see Agent U §4.4).
Level string
// Substring is a case-insensitive byte-scan filter applied to the
// raw JSON line before JSON parse (cheap pre-filter). Max
// MaxKernelLogSubstring characters.
Substring string
// Since, when non-zero, excludes entries earlier than this time.
Since time.Time
// Until, when non-zero, excludes entries later than this time.
Until time.Time
}
KernelLogQuery is the filter/pagination shape accepted by QueryKernelLog. Zero values mean "no filter" for every field.
func BuildKernelLogQueryFromValues ¶ added in v0.3.0
func BuildKernelLogQueryFromValues(
	limit, level, substring, since, until string,
	now time.Time,
) (KernelLogQuery, error)
BuildKernelLogQueryFromValues parses the query-string form shared by the HTTP handler and the MCP tool. All parameters are optional; returns a zero-valued KernelLogQuery when none are present.
limit      int     1..MaxKernelLogLimit
level      string  debug|info|warn|error (case-insensitive)
substring  string  <= MaxKernelLogSubstring
since      string  RFC3339 OR duration (5m, 2h, …)
until      string  RFC3339 OR duration
type KernelLogResult ¶ added in v0.3.0
type KernelLogResult struct {
Count int `json:"count"`
Entries []KernelLogEntry `json:"entries"`
Truncated bool `json:"truncated"`
File KernelLogFile `json:"file"`
}
KernelLogResult is the typed result returned by QueryKernelLog and emitted verbatim by the /v1/kernel-log handler and the cog_tail_kernel_log MCP tool.
func QueryKernelLog ¶ added in v0.3.0
func QueryKernelLog(path string, q KernelLogQuery) (*KernelLogResult, error)
QueryKernelLog reads the JSONL kernel-log sink at path and returns the most recent entries that satisfy q's filters, in newest-first order. Missing files yield an empty result with Exists=false (no error). Malformed JSONL lines are skipped silently.
Truncated is true when more than Limit entries matched the filters; the returned slice is the newest Limit.
type KernelToolRegistry ¶ added in v0.3.0
type KernelToolRegistry struct {
// contains filtered or unexported fields
}
KernelToolRegistry holds the kernel's tool definitions and executors. Populated at startup from the MCP server's tool set.
func NewKernelToolRegistry ¶ added in v0.3.0
func NewKernelToolRegistry(mcpSrv *MCPServer) *KernelToolRegistry
NewKernelToolRegistry builds the tool registry from the MCP server.
func (*KernelToolRegistry) Definitions ¶ added in v0.3.0
func (r *KernelToolRegistry) Definitions() []ToolDefinition
Definitions returns the tool definitions for inclusion in CompletionRequest.
func (*KernelToolRegistry) Execute ¶ added in v0.3.0
Execute runs a kernel tool and returns the result as a string.
func (*KernelToolRegistry) IsKernelTool ¶ added in v0.3.0
func (r *KernelToolRegistry) IsKernelTool(name string) bool
IsKernelTool returns true if the named tool is owned by the kernel.
type LedgerEvent ¶ added in v0.3.0
type LedgerEvent struct {
Type string `json:"type"`
Timestamp string `json:"timestamp"`
SessionID string `json:"session_id"`
Seq int64 `json:"seq"`
Hash string `json:"hash"`
PriorHash string `json:"prior_hash,omitempty"`
Source string `json:"source,omitempty"`
Data map[string]interface{} `json:"data,omitempty"`
}
LedgerEvent is the public JSON shape for a returned event. It flattens the internal EventEnvelope so clients don't need to know about the two-level hashed_payload/metadata split.
type LedgerQuery ¶ added in v0.3.0
type LedgerQuery struct {
SessionID string // filter to a single session (empty = all sessions)
EventType string // exact match, or "prefix.*" wildcard
AfterSeq int64 // return events with seq > this (requires SessionID)
SinceTimestamp string // RFC3339; return events with timestamp >= this
Limit int // default 100, capped at 1000
VerifyChain bool // recompute hashes + validate prior_hash links
}
LedgerQuery selects and filters events. All fields are optional.
type LedgerQueryResult ¶ added in v0.3.0
type LedgerQueryResult struct {
Count int `json:"count"`
Events []LedgerEvent `json:"events"`
NextAfterSeq int64 `json:"next_after_seq,omitempty"`
Truncated bool `json:"truncated"`
Verification LedgerVerification `json:"verification"`
}
LedgerQueryResult is the QueryLedger output.
func QueryLedger ¶ added in v0.3.0
func QueryLedger(workspaceRoot string, q LedgerQuery) (*LedgerQueryResult, error)
QueryLedger reads the hash-chained ledger under workspaceRoot and returns events matching q. It never reimplements hashing — verification, when requested, recomputes hashes via CanonicalizeEvent + HashEvent.
Ordering:
- Single-session query: ascending seq (file order).
- Multi-session query: session mtime desc, then intra-session ascending seq.
type LedgerVerification ¶ added in v0.3.0
type LedgerVerification struct {
Requested bool `json:"requested"`
TotalChecked int `json:"total_checked"`
Valid bool `json:"valid"`
FirstBrokenSeq int64 `json:"first_broken_seq,omitempty"`
FirstBrokenSession string `json:"first_broken_session,omitempty"`
Errors []string `json:"errors,omitempty"`
}
LedgerVerification reports the result of optional chain verification. When Requested is false, only the Requested field is meaningful.
type LightCone ¶
type LightCone struct {
States []MambaState // one per layer
}
LightCone holds per-layer SSM states — the compressed observer trajectory.
type LightConeInfo ¶
type LightConeInfo struct {
ConversationID string `json:"conversation_id"`
NLayers int `json:"n_layers"`
LayerNorms []float64 `json:"layer_norms"`
CompressedNorm float64 `json:"compressed_norm"`
UpdatedAt time.Time `json:"updated_at"`
}
LightConeInfo is a summary of a stored light cone for the /v1/lightcone endpoint.
type LightConeManager ¶
type LightConeManager struct {
// contains filtered or unexported fields
}
LightConeManager provides thread-safe per-conversation light cone storage.
func NewLightConeManager ¶
func NewLightConeManager(trm *MambaTRM) *LightConeManager
NewLightConeManager creates a new manager. The trm parameter is used for computing light cone norms (can be nil if norms are not needed).
func (*LightConeManager) Count ¶
func (m *LightConeManager) Count() int
Count returns the number of active light cones.
func (*LightConeManager) Delete ¶
func (m *LightConeManager) Delete(convID string)
Delete removes the light cone for a conversation.
func (*LightConeManager) Get ¶
func (m *LightConeManager) Get(convID string) *LightCone
Get returns the light cone for a conversation, or nil if none exists.
func (*LightConeManager) List ¶
func (m *LightConeManager) List() []LightConeInfo
List returns summary information for all stored light cones.
func (*LightConeManager) Prune ¶
func (m *LightConeManager) Prune(before time.Time) int
Prune removes light cones that haven't been updated since the given time. Returns the number of pruned entries.
func (*LightConeManager) Set ¶
func (m *LightConeManager) Set(convID string, lc *LightCone)
Set stores or updates the light cone for a conversation.
type Linear ¶
type Linear struct {
Weight [][]float32 // [out_features][in_features]
Bias []float32 // [out_features] or nil
InDim int
OutDim int
}
Linear represents a dense layer: y = x @ W^T + b
type ListAgentsRequest ¶ added in v0.3.0
type ListAgentsRequest struct {
IncludeStopped bool
}
ListAgentsRequest is the normalized input to the list-agents helper.
type ListAgentsResponse ¶ added in v0.3.0
type ListAgentsResponse struct {
Count int `json:"count"`
Agents []AgentSummary `json:"agents"`
}
ListAgentsResponse is the normalized output.
func QueryListAgents ¶ added in v0.3.0
func QueryListAgents(ctx context.Context, ctrl AgentController, req ListAgentsRequest) (*ListAgentsResponse, error)
QueryListAgents wraps AgentController.ListAgents with uniform error handling + an envelope shape suitable for direct JSON marshal.
type MCPServer ¶ added in v0.3.0
type MCPServer struct {
// contains filtered or unexported fields
}
MCPServer wraps the MCP server and its dependencies.
func NewMCPServer ¶ added in v0.3.0
NewMCPServer creates and configures the MCP server with all stage-1 tools. The returned server has no AgentController attached. Call SetAgentController to enable cog_list_agents / cog_get_agent_state / cog_trigger_agent_loop.
func NewMCPServerWithAgentController ¶ added in v0.3.0
func NewMCPServerWithAgentController(cfg *Config, nucleus *Nucleus, process *Process, ctrl AgentController) *MCPServer
NewMCPServerWithAgentController creates the MCP server and attaches an AgentController for the agent-state tools. The controller may be nil; the tools remain registered and return a "not configured" response in that case so clients get a consistent error shape.
func (*MCPServer) RunStdio ¶ added in v0.3.0
RunStdio runs the MCP server on the upstream SDK's StdioTransport, which reads/writes newline-delimited JSON-RPC 2.0 on os.Stdin / os.Stdout. This is the transport Claude Desktop, Cursor, Windsurf, etc. all speak.
Blocks until ctx is cancelled or the client closes the stream. Returns nil on clean EOF; otherwise the underlying transport error.
func (*MCPServer) SetAgentController ¶ added in v0.3.0
func (m *MCPServer) SetAgentController(ctrl AgentController)
SetAgentController wires a live AgentController into an already-built MCPServer. Safe to call after construction; the tool registration is unchanged because the tools resolve the current controller on each call.
func (*MCPServer) SetSessionsBackend ¶ added in v0.3.0
func (m *MCPServer) SetSessionsBackend( bus *BusSessionManager, sessions *SessionRegistry, handoffs *HandoffRegistry, )
SetSessionsBackend wires the bus manager + registries so the kernel-side MCP tools can serve traffic. Safe to call post-construction.
type MambaBlock ¶
type MambaBlock struct {
Norm LayerNorm
InProj [][]float32 // [2*d_inner][d_model] — projects to (x_ssm, z)
Conv Conv1D
XProj [][]float32 // [d_state*2+1][d_inner] — projects to (B, C, delta)
LogA [][]float32 // [d_inner][d_state]
D []float32 // [d_inner] skip connection
OutProj [][]float32 // [d_model][d_inner]
DInner int
DState int
}
MambaBlock is a single selective SSM block with pre-norm residual.
func (*MambaBlock) Step ¶
func (mb *MambaBlock) Step(x []float32, state *MambaState) ([]float32, *MambaState)
Step processes a single event through the Mamba block, updating the SSM state. Input: x [d_model], state: MambaState (or nil for fresh). Returns: output [d_model], new state.
Note: unlike the forward path, step() does NOT include the residual connection. This matches the Python SelectiveSSM.step() method used for inference.
type MambaState ¶
type MambaState struct {
H [][]float32 // [d_inner][d_state]
}
MambaState is the SSM hidden state for one layer.
type MambaTRM ¶
type MambaTRM struct {
Config TRMConfig
TypeEmbed [][]float32 // [n_event_types][d_model]
InputProj Linear // 2*d_model → d_model
Layers []MambaBlock // [n_layers]
FinalNorm LayerNorm // d_model
Probes []AttentionProbe // [n_probes]
ProbeNorms []LayerNorm // [n_probes]
Head ScoreHead
}
MambaTRM is the full temporal retrieval model.
func (*MambaTRM) GetLightConeNorms ¶
GetLightConeNorms returns per-layer SSM state norms and a compressed scalar.
func (*MambaTRM) ScoreCandidates ¶
ScoreCandidates scores a set of candidates against a trajectory context. context: [d_model] from Step(), candidates: [n][d_model] embeddings. Returns [n] scores (higher = more relevant).
type ManagedProcess ¶
type ManagedProcess struct {
ID string `json:"id"`
Kind ProcessKind `json:"kind"`
Status ProcessStatus `json:"status"`
Source string `json:"source"` // "http", "discord", "signal", etc.
CallbackChannel string `json:"callback_channel"` // where to deliver results
Identity string `json:"identity"` // NodeID of requestor
StartedAt time.Time `json:"started_at"`
FinishedAt *time.Time `json:"finished_at,omitempty"`
Error string `json:"error,omitempty"`
Usage *TokenUsage
// contains filtered or unexported fields
}
ManagedProcess tracks a single Claude Code subprocess.
func (*ManagedProcess) SetError ¶
func (p *ManagedProcess) SetError(err error)
SetError records an error on the process.
type ManagedProcessOpts ¶
type ManagedProcessOpts struct {
Kind ProcessKind
Source string
CallbackChannel string
Identity string
Cancel func()
}
ManagedProcessOpts configures tracking for a new process.
type MembranePolicy ¶ added in v0.3.0
type MembranePolicy interface {
Evaluate(block *CogBlock) IngestionResult
}
type NerdctlRuntime ¶
type NerdctlRuntime struct {
// contains filtered or unexported fields
}
func NewNerdctlRuntime ¶
func NewNerdctlRuntime() (*NerdctlRuntime, error)
func (*NerdctlRuntime) Exec ¶
func (n *NerdctlRuntime) Exec(containerID string, command []string) ([]byte, error)
func (*NerdctlRuntime) Logs ¶
func (n *NerdctlRuntime) Logs(containerID string, follow bool) (io.ReadCloser, error)
func (*NerdctlRuntime) Pull ¶
func (n *NerdctlRuntime) Pull(image string) error
func (*NerdctlRuntime) Start ¶
func (n *NerdctlRuntime) Start(image string, config ContainerConfig) (string, error)
func (*NerdctlRuntime) Status ¶
func (n *NerdctlRuntime) Status(containerID string) (ContainerStatus, error)
func (*NerdctlRuntime) Stop ¶
func (n *NerdctlRuntime) Stop(containerID string) error
type NilBridge ¶
type NilBridge struct{}
NilBridge provides neutral standalone-mode behavior when no constellation is configured.
func (NilBridge) EmitHeartbeat ¶
func (NilBridge) EmitHeartbeat(KernelHeartbeatPayload) (HeartbeatReceipt, error)
func (NilBridge) TrustSnapshot ¶
func (NilBridge) TrustSnapshot() ConstellationTrustSnapshot
type NodeHealth ¶ added in v0.2.0
type NodeHealth struct {
// contains filtered or unexported fields
}
NodeHealth holds the last-known health of all sibling services.
func NewNodeHealth ¶ added in v0.2.0
func NewNodeHealth() *NodeHealth
NewNodeHealth returns an empty NodeHealth.
func (*NodeHealth) Counts ¶ added in v0.2.0
func (nh *NodeHealth) Counts() (int, int)
Counts returns (healthy, total) for quick reporting.
func (*NodeHealth) Names ¶ added in v0.2.0
func (nh *NodeHealth) Names() []string
Names returns sorted service names.
func (*NodeHealth) Probe ¶ added in v0.2.0
func (nh *NodeHealth) Probe(manifest *NodeManifest, selfPort int)
Probe checks all sibling services defined in the manifest concurrently. Skips the kernel itself (it knows its own health). Each probe has a 2s timeout; total wall time is ~2s regardless of service count.
func (*NodeHealth) Snapshot ¶ added in v0.2.0
func (nh *NodeHealth) Snapshot() map[string]ServiceHealth
Snapshot returns a copy of the current service health map.
func (*NodeHealth) Summary ¶ added in v0.2.0
func (nh *NodeHealth) Summary() map[string]string
Summary returns a compact status map (service → status string).
type NodeManifest ¶ added in v0.2.0
type NodeManifest struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Kind string `yaml:"kind" json:"kind"`
Services map[string]ServiceDef `yaml:"services" json:"services"`
}
NodeManifest is the single source of truth for services on this node.
func LoadManifest ¶ added in v0.2.0
func LoadManifest(path string) (*NodeManifest, error)
LoadManifest reads and parses a manifest.yaml file.
type Nucleus ¶
type Nucleus struct {
// Name is the identity name (e.g. "Cog", "Sandy").
Name string
// Role is the identity role descriptor.
Role string
// Card is the full text of the identity card (markdown).
Card string
// WorkspaceRoot is the absolute path to the workspace.
WorkspaceRoot string
// LoadedAt records when this nucleus was loaded.
LoadedAt time.Time
// contains filtered or unexported fields
}
Nucleus is the always-above-threshold identity context. It holds the parsed identity card and the workspace root.
func LoadNucleus ¶
LoadNucleus reads the current identity from .cog/config/identity.yaml and loads the corresponding identity card file. Falls back to an embedded default identity if no config or card exists.
type ObserverUpdate ¶
type ObserverUpdate struct {
// PredictionError is the Jaccard distance between the previous prediction
// and the paths actually attended this cycle (0 = perfect, 1 = total miss).
PredictionError float64
// Prediction is the set of paths the model expects to be attended next cycle.
Prediction []string
// Receding is the set of paths that were predicted last cycle but dropped
// out this cycle (expected, then stopped being attended).
Receding []string
// Cycle is the cycle number that just completed.
Cycle int64
// MeanError is the running mean prediction error across all cycles.
MeanError float64
}
ObserverUpdate is the result of a single TrajectoryModel.Update() call.
type OllamaProvider ¶
type OllamaProvider struct {
// contains filtered or unexported fields
}
OllamaProvider implements Provider against a local Ollama server.
func NewOllamaProvider ¶
func NewOllamaProvider(name string, cfg ProviderConfig) *OllamaProvider
NewOllamaProvider creates an OllamaProvider from a ProviderConfig.
func (*OllamaProvider) Available ¶
func (p *OllamaProvider) Available(ctx context.Context) bool
Available checks if Ollama is running and the configured model is loaded.
func (*OllamaProvider) Capabilities ¶
func (p *OllamaProvider) Capabilities() ProviderCapabilities
Capabilities returns what Ollama supports.
func (*OllamaProvider) Complete ¶
func (p *OllamaProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*OllamaProvider) ContextWindow ¶
func (p *OllamaProvider) ContextWindow() int
ContextWindow returns the configured num_ctx for this provider.
func (*OllamaProvider) Name ¶
func (p *OllamaProvider) Name() string
Name returns the provider identifier.
func (*OllamaProvider) Stream ¶
func (p *OllamaProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of chunks. The channel closes when generation is complete or the context is cancelled.
type OpenAICompatProvider ¶
type OpenAICompatProvider struct {
// contains filtered or unexported fields
}
OpenAICompatProvider implements Provider against any OpenAI-compatible server.
func NewOpenAICompatProvider ¶
func NewOpenAICompatProvider(name string, cfg ProviderConfig) *OpenAICompatProvider
NewOpenAICompatProvider creates an OpenAICompatProvider from a ProviderConfig.
func (*OpenAICompatProvider) Available ¶
func (p *OpenAICompatProvider) Available(ctx context.Context) bool
Available checks if the server is reachable and has at least one model.
func (*OpenAICompatProvider) Capabilities ¶
func (p *OpenAICompatProvider) Capabilities() ProviderCapabilities
Capabilities returns what this provider supports.
func (*OpenAICompatProvider) Complete ¶
func (p *OpenAICompatProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*OpenAICompatProvider) Name ¶
func (p *OpenAICompatProvider) Name() string
Name returns the provider identifier.
func (*OpenAICompatProvider) Stream ¶
func (p *OpenAICompatProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of incremental chunks. The channel closes when generation is complete or the context is cancelled.
type OpenClawTailer ¶
type OpenClawTailer struct {
Watcher *FileWatcher
ScanInterval time.Duration
}
OpenClawTailer watches OpenClaw JSONL logs and emits normalized CogBlocks.
func (*OpenClawTailer) Name ¶
func (t *OpenClawTailer) Name() string
type PiProvider ¶ added in v0.2.0
type PiProvider struct {
// contains filtered or unexported fields
}
PiProvider implements Provider by spawning pi CLI processes.
func NewPiProvider ¶ added in v0.2.0
func NewPiProvider(name string, cfg ProviderConfig, procMgr *ProcessManager) *PiProvider
NewPiProvider creates a PiProvider from a ProviderConfig.
func (*PiProvider) Available ¶ added in v0.2.0
func (p *PiProvider) Available(ctx context.Context) bool
func (*PiProvider) Capabilities ¶ added in v0.2.0
func (p *PiProvider) Capabilities() ProviderCapabilities
func (*PiProvider) Complete ¶ added in v0.2.0
func (p *PiProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*PiProvider) Name ¶ added in v0.2.0
func (p *PiProvider) Name() string
func (*PiProvider) Stream ¶ added in v0.2.0
func (p *PiProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a pi process in JSON mode and returns incremental chunks.
type PredictedChunk ¶
type PredictedChunk struct {
Path string `json:"path"`
SectionTitle string `json:"section_title,omitempty"`
Score float32 `json:"score"`
}
PredictedChunk is a single TRM prediction.
type Process ¶
type Process struct {
// NodeID is the stable kernel node identity persisted across restarts.
NodeID string
// TrustState carries local trust, coherence, and heartbeat metadata.
TrustState TrustState
// contains filtered or unexported fields
}
Process is the always-running cognitive process.
func NewProcess ¶
NewProcess constructs and initialises the process.
func (*Process) AssembleContext ¶
func (p *Process) AssembleContext(query string, messages []ProviderMessage, budget int, opts ...AssembleOption) (*ContextPackage, error)
AssembleContext builds a ContextPackage from the full client request.
It decomposes the incoming messages[], scores conversation history alongside CogDocs, manages eviction when the budget is exceeded, and prepares the context for stability-ordered rendering.
The budget is in approximate tokens (chars/4). Pass 0 to use the default (32768). ctx and convID are optional (pass context.Background() / "" when not available). When TRM is loaded and ctx is non-nil, TRM scoring is used for CogDoc ranking.
func (*Process) Broker ¶ added in v0.3.0
func (p *Process) Broker() *EventBroker
Broker returns the process event broker (nil if not initialised).
func (*Process) CurrentCycleID ¶ added in v0.2.0
CurrentCycleID returns the current iteration's cycle ID (may be empty between iterations). Exported so the agent harness and tool dispatch paths can correlate their own trace events with the running kernel cycle.
func (*Process) EmbeddingIndex ¶
func (p *Process) EmbeddingIndex() *EmbeddingIndex
EmbeddingIndex returns the embedding index (nil if not loaded).
func (*Process) EmitEvent ¶ added in v0.3.0
EmitEvent appends a single event to the session ledger via AppendEvent, which means it flows through the hash chain AND the in-process broker. Callers: MCP cog_emit_event tool, internal emitEvent helper.
The `source` field lands on envelope.Metadata.Source — subscribers use it to distinguish kernel-generated events (kernel-v3) from MCP-client events (mcp-client) and future external sources.
This replaces the pre-PR EmitLedgerEvent in mcp_stubs.go which wrote to a flat .cog/ledger/events.jsonl orphan file bypassing hash chaining (cogos#10). Post-refactor: every event — kernel or MCP — goes through AppendEvent, so the live broker sees everything.
func (*Process) Field ¶
func (p *Process) Field() *AttentionalField
Field returns the attentional field (for use by the serve layer).
func (*Process) Fingerprint ¶
Fingerprint returns a stable trust fingerprint for the current process state.
func (*Process) Index ¶
func (p *Process) Index() *CogDocIndex
Index returns the current CogDoc index (may be nil before first consolidation).
func (*Process) LightCones ¶
func (p *Process) LightCones() *LightConeManager
LightCones returns the per-conversation light cone manager.
func (*Process) NodeHealth ¶ added in v0.2.0
func (p *Process) NodeHealth() *NodeHealth
NodeHealth returns the current node health state (for use by the serve layer).
func (*Process) Observer ¶
func (p *Process) Observer() *TrajectoryModel
Observer returns the trajectory model (for use by the HTTP layer).
func (*Process) RecordBlock ¶
RecordBlock writes a CogBlock to the process ledger and returns the ledger ref.
func (*Process) RecordTurn ¶ added in v0.3.0
func (p *Process) RecordTurn(turn *TurnRecord) error
RecordTurn persists a completed turn: full text to the session sidecar, truncated preview to a hash-chained `turn.completed` ledger event.
This is the fix for cogos#20 (RecordBlock data-loss): the existing cogblock.ingest event at cogblock_ledger.go:46 stays as-is (message_count metadata) for consumers that treat it as an ingest receipt, while the new turn.completed event is the authoritative persistence point for the conversation text. Chat handlers call BOTH — RecordBlock for the block envelope, RecordTurn for the turn content — so nothing is lost.
If turn.Timestamp is zero it defaults to time.Now().UTC(). If turn.Status is "" it defaults to "ok".
func (*Process) RunPendingToolCallSweeper ¶ added in v0.3.0
RunPendingToolCallSweeper is a blocking helper that runs the TTL sweep on a ticker until ctx is cancelled. Exported so the process.Run loop or the daemon lifecycle can start it alongside the other tickers. Safe to call without a pending registry — it will initialize one lazily when an entry is first registered.
func (*Process) Send ¶
Send delivers an external event to the process loop (non-blocking). Returns false if the channel is full.
func (*Process) SetTRM ¶
func (p *Process) SetTRM(trm *MambaTRM, idx *EmbeddingIndex)
SetTRM installs the TRM model and embedding index (called at startup).
func (*Process) State ¶
func (p *Process) State() ProcessState
State returns the current process state (safe for concurrent reads).
func (*Process) SubmitExternal ¶ added in v0.2.0
SubmitExternal is the canonical entry point for external perturbations (dashboard chat, modality inlets, etc.) into the metabolic cycle. It is a non-blocking alias of Send(): the external channel has bounded capacity and the cycle must never stall on a slow or noisy producer. Callers that need backpressure should observe the returned bool and drop/log accordingly.
func (*Process) TrustSnapshot ¶
func (p *Process) TrustSnapshot() TrustState
TrustSnapshot returns a copy of the current trust metadata.
type ProcessKind ¶
type ProcessKind int
ProcessKind classifies how a process is managed.
const (
	// ProcessForeground is tied to an HTTP request and killed on disconnect.
	ProcessForeground ProcessKind = iota
	// ProcessBackground outlives the request and reports via callback.
	ProcessBackground
	// ProcessAgent runs in a sandboxed Docker container.
	ProcessAgent
)
func (ProcessKind) String ¶
func (k ProcessKind) String() string
type ProcessManager ¶
type ProcessManager struct {
// contains filtered or unexported fields
}
ProcessManager tracks all active Claude Code subprocesses.
func NewProcessManager ¶
func NewProcessManager(cfg ProcessManagerConfig) *ProcessManager
NewProcessManager creates a process manager.
func (*ProcessManager) CanSpawn ¶
func (pm *ProcessManager) CanSpawn(identity string) error
CanSpawn checks whether a new process is allowed under the concurrency limits.
func (*ProcessManager) Finish ¶
func (pm *ProcessManager) Finish(id string)
Finish marks a background process as complete and fires the callback.
func (*ProcessManager) Kill ¶
func (pm *ProcessManager) Kill(id string)
Kill sends SIGTERM to a process, then SIGKILL after 5 seconds.
func (*ProcessManager) KillByIdentity ¶
func (pm *ProcessManager) KillByIdentity(identity string) int
KillByIdentity cancels all foreground processes for a given NodeID. Background processes are NOT killed — they were explicitly requested.
func (*ProcessManager) KillBySource ¶
func (pm *ProcessManager) KillBySource(source string) int
KillBySource cancels all processes from a given source (e.g., when a Discord channel is closed or a client session ends).
func (*ProcessManager) List ¶
func (pm *ProcessManager) List() []ProcessSummary
List returns a snapshot of all tracked processes.
func (*ProcessManager) Remove ¶
func (pm *ProcessManager) Remove(id string)
Remove unregisters a process. Called when a foreground process completes.
func (*ProcessManager) SetOnComplete ¶
func (pm *ProcessManager) SetOnComplete(fn func(*ManagedProcess))
SetOnComplete registers a callback for when background processes finish.
func (*ProcessManager) Shutdown ¶
func (pm *ProcessManager) Shutdown(timeout time.Duration)
Shutdown gracefully terminates all running processes. Sends SIGTERM to all, waits up to timeout, then SIGKILL.
func (*ProcessManager) Stats ¶
func (pm *ProcessManager) Stats() ProcessStats
func (*ProcessManager) Track ¶
func (pm *ProcessManager) Track(cmd *exec.Cmd, opts ManagedProcessOpts) *ManagedProcess
Track registers a new process with the manager. Call before cmd.Start().
type ProcessManagerConfig ¶
type ProcessManagerConfig struct {
MaxGlobal int // 0 = unlimited
MaxPerIdentity int // 0 = unlimited
}
ProcessManagerConfig configures the process manager.
type ProcessState ¶
type ProcessState int
ProcessState represents the four operational states of the v3 process.
const (
	StateActive        ProcessState = iota // Processing external input
	StateReceptive                         // Idle, waiting
	StateConsolidating                     // Internal maintenance
	StateDormant                           // Minimal activity
)
func (ProcessState) String ¶
func (s ProcessState) String() string
type ProcessStats ¶
type ProcessStats struct {
Total int `json:"total"`
Running int `json:"running"`
Completed int `json:"completed"`
Failed int `json:"failed"`
Cancelled int `json:"cancelled"`
ByKind map[string]int `json:"by_kind"`
BySource map[string]int `json:"by_source"`
}
ProcessStats holds the aggregate counts returned by Stats.
type ProcessStatus ¶
type ProcessStatus int
ProcessStatus tracks the lifecycle state of a managed process.
const (
	ProcessRunning ProcessStatus = iota
	ProcessCompleted
	ProcessFailed
	ProcessCancelled
	ProcessTimedOut
)
func (ProcessStatus) String ¶
func (s ProcessStatus) String() string
type ProcessSummary ¶
type ProcessSummary struct {
ID string `json:"id"`
Kind string `json:"kind"`
Status string `json:"status"`
Source string `json:"source"`
Identity string `json:"identity,omitempty"`
StartedAt string `json:"started_at"`
Duration string `json:"duration"`
CallbackChannel string `json:"callback_channel,omitempty"`
Error string `json:"error,omitempty"`
}
ProcessSummary is a JSON-friendly snapshot of a managed process.
type Projection ¶
type Projection struct {
// Base is the workspace-relative prefix under the workspace root
// (e.g. ".cog/mem/"). Mutually exclusive with ExtBase.
Base string
// ExtBase is a workspace-root-relative prefix for paths that live
// outside .cog/ (e.g. ".claude/skills/").
ExtBase string
// Pattern controls resolution: "direct" | "directory" | "glob" | "singleton".
Pattern string
// Suffix is appended to the resolved path for "direct" patterns
// (e.g. ".cog.md" for specs).
Suffix string
// GlobPat is a fmt.Sprintf template (one %s) for "glob" patterns.
// E.g. "%s-*.md" matches numbered ADR files.
GlobPat string
}
Projection defines how a cog:// URI type maps to the filesystem.
type ProprioceptiveEntry ¶
type ProprioceptiveEntry struct {
Timestamp string `json:"timestamp"`
Event string `json:"event,omitempty"`
Provider string `json:"provider,omitempty"`
ToolName string `json:"tool_name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolArgs string `json:"tool_args,omitempty"`
Reason string `json:"reason,omitempty"`
Query string `json:"query"`
Predicted []PredictedChunk `json:"predicted"`
Actual []string `json:"actual"`
Hits int `json:"hits"`
Delta float64 `json:"delta"`
ResponseLen int `json:"response_len"`
}
ProprioceptiveEntry is a single prediction-vs-reality log entry.
func ComputeEntry ¶
func ComputeEntry(query string, predicted []PredictedChunk, response string) ProprioceptiveEntry
ComputeEntry builds a ProprioceptiveEntry from TRM predictions and a response.
type ProprioceptiveLogger ¶
type ProprioceptiveLogger struct {
// contains filtered or unexported fields
}
ProprioceptiveLogger writes proprioceptive entries to a JSONL file.
func NewProprioceptiveLogger ¶
func NewProprioceptiveLogger(logPath string) *ProprioceptiveLogger
NewProprioceptiveLogger creates a logger writing to the given path. The parent directory is created if it does not exist.
func (*ProprioceptiveLogger) Log ¶
func (p *ProprioceptiveLogger) Log(entry ProprioceptiveEntry)
Log appends a proprioceptive entry to the JSONL log.
type Provider ¶
type Provider interface {
// Complete sends a context package and waits for the full response.
Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
// Stream sends a request and returns a channel of incremental chunks.
// The channel closes when done or on error. Providers that don't support
// streaming must fall back to Complete and send a single chunk.
Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
// Name returns the provider identifier (e.g. "ollama", "anthropic").
Name() string
// Available reports whether the provider is ready to serve requests.
// For local providers: checks the model server is running and model loaded.
Available(ctx context.Context) bool
// Capabilities returns what this provider supports.
Capabilities() ProviderCapabilities
// Ping probes the endpoint and returns measured latency.
Ping(ctx context.Context) (time.Duration, error)
}
Provider is the fundamental abstraction for any LLM backend. Anthropic, Ollama, MLX, OpenRouter — all satisfy this interface.
type ProviderCapabilities ¶
type ProviderCapabilities struct {
Capabilities []Capability `json:"capabilities"`
MaxContextTokens int `json:"max_context_tokens"`
MaxOutputTokens int `json:"max_output_tokens"`
ModelsAvailable []string `json:"models_available"`
IsLocal bool `json:"is_local"`
AgenticHarness bool `json:"agentic_harness,omitempty"`
CostPerInputToken float64 `json:"cost_per_input_token"`
CostPerOutputToken float64 `json:"cost_per_output_token"`
}
ProviderCapabilities describes what a provider can do.
func (ProviderCapabilities) HasAllCapabilities ¶
func (pc ProviderCapabilities) HasAllCapabilities(required []Capability) bool
HasAllCapabilities checks if the provider supports all required capabilities.
func (ProviderCapabilities) HasCapability ¶
func (pc ProviderCapabilities) HasCapability(cap Capability) bool
HasCapability checks if the provider supports a specific capability.
type ProviderConfig ¶
type ProviderConfig struct {
Type string `yaml:"type,omitempty" json:"type,omitempty"`
APIKeyEnv string `yaml:"api_key_env,omitempty" json:"api_key_env,omitempty"`
Endpoint string `yaml:"endpoint,omitempty" json:"endpoint,omitempty"`
Model string `yaml:"model" json:"model"`
ContextWindow int `yaml:"context_window,omitempty" json:"context_window,omitempty"`
MaxTokens int `yaml:"max_tokens,omitempty" json:"max_tokens,omitempty"`
Timeout int `yaml:"timeout,omitempty" json:"timeout,omitempty"`
Headers map[string]string `yaml:"headers,omitempty" json:"headers,omitempty"`
Options map[string]interface{} `yaml:"options,omitempty" json:"options,omitempty"`
Enabled *bool `yaml:"enabled,omitempty" json:"enabled,omitempty"`
}
ProviderConfig configures a single provider instance.
func (ProviderConfig) IsEnabled ¶
func (pc ProviderConfig) IsEnabled() bool
IsEnabled returns whether the provider is active (default: true).
type ProviderMessage ¶
type ProviderMessage struct {
Role string `json:"role"` // "user", "assistant", "system", "tool"
Content string `json:"content"`
ContentParts []ContentPart `json:"content_parts,omitempty"`
Name string `json:"name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
}
ProviderMessage is a single conversation turn.
type ProviderMeta ¶
type ProviderMeta struct {
Provider string `json:"provider"`
Model string `json:"model"`
Latency time.Duration `json:"latency"`
Region string `json:"region,omitempty"`
Cached bool `json:"cached,omitempty"`
}
ProviderMeta carries provenance for the ledger.
type ProviderSalienceEntry ¶
type ProviderSalienceEntry struct {
ID string `json:"id"`
Salience float64 `json:"salience"`
Zone AttentionalZone `json:"zone"`
}
ProviderSalienceEntry records a single item's salience score.
type ProviderSalienceSnapshot ¶
type ProviderSalienceSnapshot struct {
TopItems []ProviderSalienceEntry `json:"top_items"`
FocalPoint string `json:"focal_point"`
MomentumVector []float64 `json:"momentum_vector,omitempty"`
}
ProviderSalienceSnapshot captures attentional field state at request time.
type ProviderScore ¶
type ProviderScore struct {
Provider string `json:"provider"`
RawScore float64 `json:"raw_score"`
SwapPenalty float64 `json:"swap_penalty"`
AdjustedScore float64 `json:"adjusted_score"`
Available bool `json:"available"`
CapabilitiesMet bool `json:"capabilities_met"`
}
ProviderScore records a single provider's routing score.
type ProvidersConfig ¶
type ProvidersConfig struct {
Providers map[string]ProviderConfig `yaml:"providers" json:"providers"`
Routing RoutingConfig `yaml:"routing" json:"routing"`
}
ProvidersConfig is the top-level configuration from .cog/config/providers.yaml.
type ReadConfigResult ¶ added in v0.3.0
type ReadConfigResult struct {
EffectiveConfig map[string]any `json:"effective_config"`
Path string `json:"path"`
Exists bool `json:"exists"`
RawYAML string `json:"raw_yaml,omitempty"`
Defaults map[string]any `json:"defaults,omitempty"`
}
ReadConfigResult is returned from the read-side helpers. It mirrors the MCP + HTTP read surface so the wiring layer is a thin projection.
func ReadConfigSnapshot ¶ added in v0.3.0
func ReadConfigSnapshot(root string, includeRaw, includeDefaults bool) (ReadConfigResult, error)
ReadConfigSnapshot bundles the read view used by `cog_read_config` and `GET /v1/config`.
type RequestMetadata ¶
type RequestMetadata struct {
RequestID string `json:"request_id"`
ProcessState string `json:"process_state"` // from ProcessState.String()
Priority RequestPriority `json:"priority"`
PreferLocal bool `json:"prefer_local,omitempty"`
PreferProvider string `json:"prefer_provider,omitempty"` // force-route to named provider
MaxCostUSD *float64 `json:"max_cost_usd,omitempty"`
RequiredCapabilities []Capability `json:"required_capabilities,omitempty"`
Source string `json:"source,omitempty"`
SalienceSnapshot *ProviderSalienceSnapshot `json:"salience_snapshot,omitempty"`
}
RequestMetadata carries routing/ledger data that doesn't go to the model.
type RequestPriority ¶
type RequestPriority int
RequestPriority controls routing urgency.
const (
	PriorityLow      RequestPriority = 0
	PriorityNormal   RequestPriority = 1
	PriorityHigh     RequestPriority = 2
	PriorityCritical RequestPriority = 3
)
type RollbackOptions ¶ added in v0.3.0
type RollbackOptions struct {
Backup string // bare filename, e.g. "kernel.yaml.bak-2026-04-21T16-30-00Z". Empty → most recent.
ListOnly bool
}
RollbackOptions controls rollback behaviour.
type RollbackResult ¶ added in v0.3.0
type RollbackResult struct {
Restored bool `json:"restored"`
RestoredFrom string `json:"restored_from,omitempty"`
RequiresRestart bool `json:"requires_restart"`
Backups []BackupEntry `json:"backups"`
Path string `json:"path"`
Error string `json:"error,omitempty"`
}
RollbackResult is the response shape for `cog_rollback_config` / `POST /v1/config/rollback`.
func RollbackConfig ¶ added in v0.3.0
func RollbackConfig(root string, opts RollbackOptions) (RollbackResult, error)
RollbackConfig restores kernel.yaml from a prior backup, optionally listing backups without mutating anything.
type Router ¶
type Router interface {
// Route selects the best provider for a request.
Route(ctx context.Context, req *CompletionRequest) (Provider, *RoutingDecision, error)
// RegisterProvider adds a provider to the pool.
RegisterProvider(p Provider)
// DeregisterProvider removes a provider.
DeregisterProvider(name string)
// Stats returns routing statistics.
Stats() RouterStats
}
Router selects which Provider handles a given request. Maps to the externalized gating network from the MoE architecture.
func BuildRouter ¶
func BuildRouter(cfg *Config, opts ...BuildRouterOption) (Router, error)
BuildRouter constructs a Router from workspace configuration. Reads .cog/config/providers.yaml; falls back to a default Ollama config.
type RouterStats ¶
type RouterStats struct {
TotalRequests int64 `json:"total_requests"`
RequestsByProvider map[string]int64 `json:"requests_by_provider"`
ToolCallRejectionsByProvider map[string]int64 `json:"tool_call_rejections_by_provider,omitempty"`
EscalationCount int64 `json:"escalation_count"`
FallbackCount int64 `json:"fallback_count"`
SovereigntyRatio float64 `json:"sovereignty_ratio"`
TotalCostUSD float64 `json:"total_cost_usd"`
TokensByProvider map[string]TokenUsage `json:"tokens_by_provider"`
AvgLatencyByProvider map[string]time.Duration `json:"avg_latency_by_provider"`
}
RouterStats tracks routing patterns for observability.
type RoutingConfig ¶
type RoutingConfig struct {
Default string `yaml:"default" json:"default"`
LocalThreshold float64 `yaml:"local_threshold" json:"local_threshold"`
FallbackChain []string `yaml:"fallback_chain" json:"fallback_chain"`
MaxCostPerDayUSD float64 `yaml:"max_cost_per_day_usd,omitempty" json:"max_cost_per_day_usd,omitempty"`
ProcessStateRouting map[string]string `yaml:"process_state_routing,omitempty" json:"process_state_routing,omitempty"`
}
RoutingConfig controls Router behaviour.
type RoutingDecision ¶
type RoutingDecision struct {
RequestID string `json:"request_id"`
SelectedProvider string `json:"selected_provider"`
Scores []ProviderScore `json:"scores"`
Reason string `json:"reason"`
Escalated bool `json:"escalated"`
FallbackUsed bool `json:"fallback_used"`
FallbackFrom string `json:"fallback_from,omitempty"`
Timestamp time.Time `json:"timestamp"`
LatencyNs int64 `json:"latency_ns"`
}
RoutingDecision records why the router chose a specific provider.
type SalienceConfig ¶
type SalienceConfig struct {
WeightRecency float64
WeightFrequency float64
WeightChurn float64
WeightAuthorship float64
DecayModel string
HalfLife int // days
}
SalienceConfig holds weights and decay parameters for salience computation.
func DefaultSalienceConfig ¶
func DefaultSalienceConfig() *SalienceConfig
DefaultSalienceConfig returns sensible defaults.
type SalienceScore ¶
type SalienceScore struct {
Recency float64
Frequency float64
Churn float64
Authorship float64
Total float64
CommitCount int
TotalChanges int
UniqueAuthors int
DaysAgo int
}
SalienceScore holds the computed salience breakdown for a file.
func ComputeFileSalience ¶
func ComputeFileSalience(repoPath, filePath string, daysWindow int, cfg *SalienceConfig) (*SalienceScore, error)
ComputeFileSalience computes salience for a single file from its git history. Returns a zero-score result (not nil) if the file has no commits in the window.
NOTE: For batch scoring (many files), use RankFilesBySalience which opens the repo once. This function opens the repo per call and is only suitable for single-file queries or tests.
type ScoreHead ¶
type ScoreHead struct {
W1 [][]float32 // [d_model][2*d_model]
B1 []float32 // [d_model]
W2 [][]float32 // [1][d_model]
B2 []float32 // [1]
DIn int // 2*d_model
DMid int // d_model
}
ScoreHead is the final scoring MLP: Linear(2*d) → GELU → Linear(1).
type ScoredMessage ¶
type ScoredMessage struct {
Role string
Content string
Tokens int
TurnIndex int // 0 = oldest
RecencyScore float64 // 1.0 = most recent, decays toward 0
RelevanceScore float64 // keyword overlap with current query
CombinedScore float64 // weighted combination
}
ScoredMessage is a conversation turn scored for retention.
type Server ¶
type Server struct {
// contains filtered or unexported fields
}
Server wraps the HTTP server and its dependencies.
func (*Server) SetAgentController ¶ added in v0.3.0
func (s *Server) SetAgentController(ctrl AgentController)
SetAgentController wires a live AgentController into the server so the cog_list_agents / cog_get_agent_state / cog_trigger_agent_loop MCP tools have a backing implementation. Callers outside engine (like the root-package serveServer) can build the controller and pass it here. Safe to call post-construction: the MCP tool registry resolves the controller at call time.
type ServiceDef ¶ added in v0.2.0
type ServiceDef struct {
Port int `yaml:"port" json:"port"`
Binary string `yaml:"binary,omitempty" json:"binary,omitempty"`
Workdir string `yaml:"workdir,omitempty" json:"workdir,omitempty"`
Venv string `yaml:"venv,omitempty" json:"venv,omitempty"`
Command string `yaml:"command" json:"command"`
Health string `yaml:"health" json:"health"`
Restart string `yaml:"restart" json:"restart"`
Launchd string `yaml:"launchd,omitempty" json:"launchd,omitempty"`
DependsOn []string `yaml:"depends_on" json:"depends_on"`
Consumers []ConsumerEntry `yaml:"consumers,omitempty" json:"consumers,omitempty"`
}
ServiceDef describes a single managed service.
type ServiceHealth ¶ added in v0.2.0
type ServiceHealth struct {
Port int `json:"port"`
Status string `json:"status"` // healthy, degraded, down
At time.Time `json:"probed_at"`
}
ServiceHealth is the probed state of a single service.
type SessionBlock ¶ added in v0.3.0
type SessionBlock struct {
Hash string `json:"hash"`
Kind string `json:"kind,omitempty"`
Tokens int `json:"tokens"`
Score float64 `json:"score,omitempty"`
Source string `json:"source,omitempty"`
Tier string `json:"tier,omitempty"`
Content string `json:"content,omitempty"`
}
SessionBlock is the block shape embedded inside SessionContextState. Matches root's ContextBlock JSON shape (serve_context.go ContextBlock).
type SessionContextState ¶ added in v0.3.0
type SessionContextState struct {
SessionID string `json:"session_id"`
Profile string `json:"profile"`
TurnNumber int `json:"turn_number"`
IrisSize int `json:"iris_size"`
IrisUsed int `json:"iris_used"`
IrisPressure float64 `json:"iris_pressure"`
TotalTokens int `json:"total_tokens"`
Blocks []SessionBlock `json:"blocks"`
BlockCount int `json:"block_count"`
CacheHits int `json:"cache_hits"`
LastRequestAt time.Time `json:"last_request_at"`
CoherenceScore float64 `json:"coherence_score,omitempty"`
}
SessionContextState captures the context state for a single session's most recent foveated request. Mirrors root's type (serve_context.go) exactly so the JSON payload is byte-compatible.
type SessionContextStore ¶ added in v0.3.0
type SessionContextStore struct {
// contains filtered or unexported fields
}
SessionContextStore is the in-memory index of session → latest context state. Methods are goroutine-safe.
func NewSessionContextStore ¶ added in v0.3.0
func NewSessionContextStore() *SessionContextStore
NewSessionContextStore constructs an empty store.
func (*SessionContextStore) Get ¶ added in v0.3.0
func (s *SessionContextStore) Get(sessionID string) (*SessionContextState, bool)
Get returns the state for a session.
func (*SessionContextStore) Record ¶ added in v0.3.0
func (s *SessionContextStore) Record(state *SessionContextState)
Record replaces the state for a session.
func (*SessionContextStore) Snapshot ¶ added in v0.3.0
func (s *SessionContextStore) Snapshot() []*SessionContextState
Snapshot returns a copy of every session's state. Slice order is not guaranteed — matches root (iteration over map).
type SessionRegistry ¶ added in v0.3.0
type SessionRegistry struct {
// contains filtered or unexported fields
}
SessionRegistry is the in-memory, RWMutex-guarded map of session_id → SessionState. The bus is ground truth; this map is a derived, warm cache.
The mutating methods take an optional `appendFn` callback that writes the corresponding event to the bus WHILE the registry lock is held. On appendFn() error, the in-memory row is left untouched so the derived view can never run ahead of the authoritative ledger. This fixes the inverted write-path critical raised in codex's PR#43 review.
func NewSessionRegistry ¶ added in v0.3.0
func NewSessionRegistry() *SessionRegistry
NewSessionRegistry returns an empty registry.
func (*SessionRegistry) ApplyEnd ¶ added in v0.3.0
func (r *SessionRegistry) ApplyEnd(id, reason, handoffID string, now time.Time, appendFn func() error) (*SessionState, bool, error)
ApplyEnd transitions a session to ended. Returns:
- (nil, false, nil) when the session is unknown → 404.
- (&state, true, errAlready) when the session was already ended → 409.
- (&state, true, nil) on success.
- (nil, true, appendErr) if appendFn failed — registry unchanged.
appendFn runs AFTER validation but BEFORE the Ended transition commits, under the registry lock. If it errors, no mutation is applied.
func (*SessionRegistry) ApplyHeartbeat ¶ added in v0.3.0
func (r *SessionRegistry) ApplyHeartbeat(id string, contextUsage float64, status, currentTask string, now time.Time, appendFn func() error) (*SessionState, bool, error)
ApplyHeartbeat bumps LastSeen + optional fields. Returns (state, ok, err).
- ok=false when session is unknown → caller returns 404.
- ok=true with non-nil err when the session was already ended. In that case the registry is NOT mutated (no LastSeen update, no status bump) and no appendFn is invoked — the caller translates to 409.
- ok=true, err=nil on success — the mutation and (optional) bus append both happened atomically under the registry lock.
appendFn, if non-nil, is invoked AFTER validation/ended-check but BEFORE the LastSeen mutation commits. An appendFn error aborts the mutation and is returned verbatim so the registry stays in lockstep with the bus.
func (*SessionRegistry) ApplyRegister ¶ added in v0.3.0
func (r *SessionRegistry) ApplyRegister(state SessionState, activeWindow time.Duration, now time.Time, appendFn func() error) (*SessionState, bool, error)
ApplyRegister records a session.register event into the map. Idempotent per amendment #2: same session_id updates the in-memory row; re-registration after end is allowed only if the prior session is ended or its heartbeat is outside the active window.
If appendFn is non-nil it is invoked AFTER validation but BEFORE the registry mutation is committed, while the registry lock is held. An error from appendFn aborts the mutation and is returned verbatim — the registry is left unchanged, preserving the bus-is-ground-truth invariant.
Returns the resulting state (copy), a flag indicating whether the registry row was newly created vs updated, and an error.
func (*SessionRegistry) Get ¶ added in v0.3.0
func (r *SessionRegistry) Get(id string) (*SessionState, bool)
Get returns a copy of the session row, or (nil, false) if unknown.
func (*SessionRegistry) Len ¶ added in v0.3.0
func (r *SessionRegistry) Len() int
Len returns the number of tracked sessions.
func (*SessionRegistry) Snapshot ¶ added in v0.3.0
func (r *SessionRegistry) Snapshot() []*SessionState
Snapshot returns a copy of every session row. Order is not guaranteed.
type SessionState ¶ added in v0.3.0
type SessionState struct {
SessionID string `json:"session_id"`
Workspace string `json:"workspace"`
Role string `json:"role"`
Task string `json:"task,omitempty"`
Model string `json:"model,omitempty"`
Hostname string `json:"hostname,omitempty"`
ContextUsage float64 `json:"context_usage,omitempty"`
CurrentTask string `json:"current_task,omitempty"`
Status string `json:"status,omitempty"`
Extras map[string]interface{} `json:"extras,omitempty"`
// Lifecycle.
RegisteredAt time.Time `json:"registered_at"`
LastSeen time.Time `json:"last_seen"`
EndedAt time.Time `json:"ended_at,omitempty"`
EndReason string `json:"end_reason,omitempty"`
EndHandoffID string `json:"end_handoff_id,omitempty"`
// Lifecycle flag, computed from EndedAt. Kept as its own JSON field so
// presence responses don't need derivation on the client side.
Ended bool `json:"ended"`
}
SessionState is the in-memory row for a session. Fields mirror the payload shape the bridge writes to bus_sessions so presence responses are byte-compat for external consumers.
type SimpleRouter ¶
type SimpleRouter struct {
// contains filtered or unexported fields
}
SimpleRouter implements Router with rule-based provider selection.
func NewSimpleRouter ¶
func NewSimpleRouter(cfg RoutingConfig) *SimpleRouter
NewSimpleRouter creates an empty router with the given routing config.
func (*SimpleRouter) DeregisterProvider ¶
func (r *SimpleRouter) DeregisterProvider(name string)
DeregisterProvider removes a provider by name.
func (*SimpleRouter) RegisterProvider ¶
func (r *SimpleRouter) RegisterProvider(p Provider)
RegisterProvider adds a provider to the pool.
func (*SimpleRouter) Route ¶
func (r *SimpleRouter) Route(ctx context.Context, req *CompletionRequest) (Provider, *RoutingDecision, error)
Route selects the best available provider for the request.
func (*SimpleRouter) Stats ¶
func (r *SimpleRouter) Stats() RouterStats
Stats returns current routing statistics.
type SourceStatus ¶ added in v0.3.0
type SourceStatus struct {
Name string `json:"name"`
Scanned int `json:"scanned"`
Matched int `json:"matched"`
FileExists bool `json:"file_exists"`
}
SourceStatus reports per-file scan diagnostics. FileExists=false distinguishes "file absent" from "file present but empty", which is critical for the "I got nothing — wrong filter or empty file?" debugging use case called out in Agent Q §3.2.
type StreamChunk ¶
type StreamChunk struct {
Delta string `json:"delta,omitempty"`
ToolCallDelta *ToolCallDelta `json:"tool_call_delta,omitempty"`
Done bool `json:"done"`
StopReason string `json:"stop_reason,omitempty"` // e.g. "end_turn", "max_tokens", "tool_use"
Usage *TokenUsage `json:"usage,omitempty"` // populated on final chunk
ProviderMeta *ProviderMeta `json:"provider_meta,omitempty"` // populated on final chunk
Error error `json:"-"`
}
StreamChunk is one piece of a streaming response.
type StreamTailer ¶
type StreamTailer interface {
// Tail starts watching a file/directory for new JSONL lines.
// It sends normalized CogBlocks on the output channel.
// It respects context cancellation for graceful shutdown.
Tail(ctx context.Context, path string, out chan<- CogBlock) error
// Name returns the adapter name (e.g., "claude-code", "openclaw").
Name() string
}
StreamTailer watches an external harness stream and emits normalized blocks.
type StubProvider ¶
type StubProvider struct {
// contains filtered or unexported fields
}
StubProvider is an in-memory Provider for testing.
func NewStubProvider ¶
func NewStubProvider(name, response string) *StubProvider
NewStubProvider creates a StubProvider that returns the given response.
func (*StubProvider) Capabilities ¶
func (s *StubProvider) Capabilities() ProviderCapabilities
func (*StubProvider) Complete ¶
func (s *StubProvider) Complete(_ context.Context, _ *CompletionRequest) (*CompletionResponse, error)
func (*StubProvider) Name ¶
func (s *StubProvider) Name() string
func (*StubProvider) Stream ¶
func (s *StubProvider) Stream(_ context.Context, _ *CompletionRequest) (<-chan StreamChunk, error)
type Subscription ¶ added in v0.3.0
type Subscription struct {
Events <-chan *EventEnvelope
Dropped func() uint64
Cancel func()
}
Subscription is returned by Subscribe. Cancel unsubscribes and drains the channel. Dropped returns the number of events dropped due to a full channel — callers can surface this to consumers as a _meta frame.
type SyncEnvelope ¶
type SyncEnvelope struct {
Version int `json:"version"`
OriginNodeID string `json:"origin_node_id"`
TargetNodeID string `json:"target_node_id"`
BlobHash string `json:"blob_hash"`
Timestamp string `json:"timestamp"`
Kind string `json:"kind"`
Signature string `json:"signature"`
}
SyncEnvelope describes a bridge envelope dropped into .cog/sync/inbox/.
type SyncEvent ¶
type SyncEvent struct {
Envelope SyncEnvelope
FilePath string
Valid bool
ValidationError string
AlreadyHave bool
}
SyncEvent reports a discovered sync envelope and its structural validation result.
type SyncWatcher ¶
SyncWatcher polls a Syncthing inbox directory for new SyncEnvelope files.
func NewSyncWatcher ¶
func NewSyncWatcher(blobStore *BlobStore, pollInterval time.Duration) *SyncWatcher
NewSyncWatcher creates a polling sync watcher.
type TRMConfig ¶
type TRMConfig struct {
DModel int // embedding dimension (384)
DState int // SSM state dimension (4)
DConv int // convolution kernel width (2)
NLayers int // number of Mamba blocks (2)
Expand int // expansion factor (1)
NProbes int // number of attention probes (4)
DHead int // attention head dimension (128)
NEventType int // number of event types (4)
}
TRMConfig holds the model hyperparameters. Must match the training config.
func DefaultTRMConfig ¶
func DefaultTRMConfig() TRMConfig
DefaultTRMConfig returns the config matching the trained model.
type TailerManager ¶
type TailerManager struct {
// contains filtered or unexported fields
}
TailerManager runs multiple stream tailers and tracks per-tailer stats.
func NewTailerManager ¶
func NewTailerManager(out chan<- CogBlock) *TailerManager
NewTailerManager creates a manager that forwards normalized blocks to out.
func (*TailerManager) Register ¶
func (m *TailerManager) Register(tailer StreamTailer, path string) error
Register adds a tailer and source path to the manager.
func (*TailerManager) Run ¶
func (m *TailerManager) Run(ctx context.Context) error
Run starts all registered tailers and blocks until they stop.
func (*TailerManager) Stats ¶
func (m *TailerManager) Stats() map[string]TailerStats
Stats returns a snapshot of current per-tailer metrics.
type TailerStats ¶
TailerStats captures manager-side ingestion state for a single tailer.
type TokenUsage ¶
type TokenUsage struct {
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
CacheReadTokens int `json:"cache_read_tokens,omitempty"`
CacheWriteTokens int `json:"cache_write_tokens,omitempty"`
}
TokenUsage tracks token consumption for cost accounting.
type ToolCall ¶
type ToolCall struct {
ID string `json:"id"`
Name string `json:"name"`
Arguments string `json:"arguments"`
}
ToolCall is a model's request to invoke a tool.
type ToolCallDelta ¶
type ToolCallDelta struct {
Index int `json:"index"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
ArgsDelta string `json:"args_delta,omitempty"`
}
ToolCallDelta carries incremental streaming data for a tool call.
type ToolCallEvent ¶ added in v0.3.0
type ToolCallEvent struct {
CallID string `json:"call_id"`
ToolName string `json:"tool_name"`
Arguments json.RawMessage `json:"arguments,omitempty"`
Source string `json:"source"`
Ownership string `json:"ownership"`
Provider string `json:"provider,omitempty"`
InteractionID string `json:"interaction_id,omitempty"`
TurnIndex int `json:"turn_index,omitempty"`
SessionID string `json:"session_id"`
}
ToolCallEvent is the domain shape for a `tool.call` ledger entry. Fields map 1:1 to the ledger event payload spec in Agent S §4.2.
type ToolCallQuery ¶ added in v0.3.0
type ToolCallQuery struct {
SessionID string // exact session; empty = all sessions
ToolName string // exact match, or "cog_*" / "*_read" wildcard
Status string // "success" | "error" | "rejected" | "timeout" | "pending"
Source string // "mcp" | "openai-chat" | "anthropic-messages" | "kernel-loop"
Ownership string // "kernel" | "client"
CallID string // exact single-call lookup
Since time.Time // lower bound (inclusive); zero = no lower bound
Until time.Time // upper bound (exclusive); zero = no upper bound
Limit int // default 100, max 500
Order string // "desc" (default) | "asc"
IncludeArgs bool // include arguments payload (default: false)
IncludeOutput bool // include output_summary (default: false)
}
ToolCallQuery describes the filter set for QueryToolCalls. Each filter applies conjunctively; empty means no restriction.
type ToolCallQueryResult ¶ added in v0.3.0
type ToolCallQueryResult struct {
Count int `json:"count"`
Calls []ToolCallRow `json:"calls"`
Truncated bool `json:"truncated"`
SourcesChecked ToolCallSourceCounts `json:"sources_checked"`
}
ToolCallQueryResult is the return shape of QueryToolCalls.
func QueryToolCalls ¶ added in v0.3.0
func QueryToolCalls(workspaceRoot string, q ToolCallQuery) (*ToolCallQueryResult, error)
QueryToolCalls reads the ledger for one or all sessions, extracts tool.call/tool.result events, pairs them by call_id, applies q's filters, and returns the stitched row set.
This is a read-only, stateless function — safe to call concurrently with AppendEvent calls (JSONL appends are line-atomic; partial lines are skipped).
type ToolCallRecord ¶ added in v0.3.0
type ToolCallRecord struct {
ID string `json:"id,omitempty"`
Name string `json:"name"`
Arguments string `json:"arguments,omitempty"`
Result string `json:"result,omitempty"`
DurationMs int64 `json:"duration_ms,omitempty"`
Rejected bool `json:"rejected,omitempty"`
RejectReason string `json:"reject_reason,omitempty"`
}
ToolCallRecord is one kernel-owned tool invocation within a turn. Populated by the tool loop (tool_loop.go) and threaded back to the handler so it can be stored alongside the prompt/response in the turn.
type ToolCallRow ¶ added in v0.3.0
type ToolCallRow struct {
CallID string `json:"call_id"`
ToolName string `json:"tool_name"`
SessionID string `json:"session_id"`
Source string `json:"source"`
Ownership string `json:"ownership"`
CalledAt string `json:"called_at"`
CompletedAt string `json:"completed_at,omitempty"`
DurationMs int `json:"duration_ms"`
Status string `json:"status"`
Reason string `json:"reason,omitempty"`
OutputLength int `json:"output_length"`
Arguments json.RawMessage `json:"arguments,omitempty"`
OutputSummary string `json:"output_summary,omitempty"`
InteractionID string `json:"interaction_id,omitempty"`
TurnIndex int `json:"turn_index,omitempty"`
Provider string `json:"provider,omitempty"`
}
ToolCallRow is one row of the call+result stitched result set.
type ToolCallSourceCounts ¶ added in v0.3.0
type ToolCallSourceCounts struct {
MCP int `json:"mcp"`
OpenAI int `json:"openai_chat"`
Anthropic int `json:"anthropic_messages"`
KernelLoop int `json:"kernel_loop"`
Other int `json:"other"`
}
ToolCallSourceCounts captures per-source totals in the scanned window.
type ToolCallValidationResult ¶ added in v0.3.0
ToolCallValidationResult describes whether a model-emitted tool call is safe to run.
func ValidateToolCall ¶ added in v0.3.0
func ValidateToolCall(tc ToolCall, toolDefs []ToolDefinition) ToolCallValidationResult
ValidateToolCall verifies that the model requested a known tool with arguments that match the registered input schema.
type ToolDefinition ¶
type ToolDefinition struct {
Name string `json:"name"`
Description string `json:"description"`
InputSchema map[string]interface{} `json:"input_schema"`
}
ToolDefinition describes an MCP tool the model may invoke.
type ToolResultEvent ¶ added in v0.3.0
type ToolResultEvent struct {
CallID string `json:"call_id"`
ToolName string `json:"tool_name"`
Status string `json:"status"`
Reason string `json:"reason,omitempty"`
OutputLength int `json:"output_length"`
OutputSummary string `json:"output_summary,omitempty"`
Duration time.Duration `json:"-"`
Source string `json:"source"`
SessionID string `json:"session_id"`
}
ToolResultEvent mirrors ToolCallEvent for a completion.
type TraceQuery ¶ added in v0.3.0
type TraceQuery struct {
Source TraceSource
Level string
SessionID string
Substring string // case-insensitive
Since time.Time
Until time.Time
Limit int
Order string // "desc" (default) | "asc"
}
TraceQuery bundles caller-provided filter parameters. Zero values are the defaults: no filter, newest-first, limit=defaultTracesLimit.
type TraceQueryResult ¶ added in v0.3.0
type TraceQueryResult struct {
Count int `json:"count"`
Results []TraceResult `json:"results"`
Truncated bool `json:"truncated"`
SourcesChecked []SourceStatus `json:"sources_checked"`
}
TraceQueryResult is the outer envelope returned by QueryTraces.
func QueryTraces ¶ added in v0.3.0
func QueryTraces(workspaceRoot string, q TraceQuery) (*TraceQueryResult, error)
QueryTraces is the entry point. It reads each resolved source, applies the normalized filter set, merges results, sorts by timestamp per q.Order, and returns the globally-limited slice plus per-source diagnostics.
Missing files are not errors — they surface as file_exists=false in SourcesChecked so callers can distinguish "file absent" from "empty match".
type TraceResult ¶ added in v0.3.0
type TraceResult struct {
Source string `json:"source"`
Timestamp time.Time `json:"timestamp"`
SessionID string `json:"session_id,omitempty"`
Level string `json:"level,omitempty"`
Line json.RawMessage `json:"line"`
}
TraceResult is a single normalized row in the unified output. Line is the raw JSONL bytes — callers that need per-source fields unmarshal themselves.
type TraceSource ¶ added in v0.3.0
type TraceSource string
TraceSource identifies one of the known trace JSONL streams under .cog/run/. "all" expands to every canonical source at query time.
const (
	SourceAttention        TraceSource = "attention"
	SourceProprioceptive   TraceSource = "proprioceptive"
	SourceInternalRequests TraceSource = "internal_requests"
	SourceAll              TraceSource = "all"
)
type TrajectoryModel ¶
type TrajectoryModel struct {
// contains filtered or unexported fields
}
TrajectoryModel tracks attention momentum and generates predictions. It is the "model" in the trefoil — built from observations of the field, generating anticipations that act back on the field.
The model is safe for concurrent reads (Stats, Momentum) and periodic writes (Update, called from the single consolidation goroutine).
func NewTrajectoryModel ¶
func NewTrajectoryModel() *TrajectoryModel
NewTrajectoryModel constructs an empty, uninitialized model.
func (*TrajectoryModel) LastPrediction ¶
func (m *TrajectoryModel) LastPrediction() []string
LastPrediction returns a copy of the most recent prediction set.
func (*TrajectoryModel) Momentum ¶
func (m *TrajectoryModel) Momentum() map[string]float64
Momentum returns a copy of the current momentum map. Safe for concurrent reads (e.g. from the HTTP handler goroutine).
func (*TrajectoryModel) Stats ¶
func (m *TrajectoryModel) Stats() (cycles int64, meanError float64)
Stats returns the total cycle count and mean prediction error.
func (*TrajectoryModel) Update ¶
func (m *TrajectoryModel) Update(attended []string, fieldScores map[string]float64) ObserverUpdate
Update feeds a new cycle's observations into the model. attended is the list of filesystem paths observed in the attention log since the last tick. fieldScores is the current salience map. Update is NOT safe for concurrent calls — it is called only from the single consolidation goroutine in process.go.
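The concurrency contract above — single-writer Update, concurrent-reader Momentum that returns a copy — can be sketched with a standalone store. The decay-and-boost rule below is a hypothetical placeholder; the real model's update rule is not documented here:

```go
package main

import (
	"fmt"
	"sync"
)

// momentumStore illustrates the documented concurrency contract:
// Update runs on a single goroutine, while Momentum may be called
// concurrently (e.g. from an HTTP handler) and so returns a copy.
type momentumStore struct {
	mu       sync.RWMutex
	momentum map[string]float64
}

func (s *momentumStore) Update(attended []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for k := range s.momentum {
		s.momentum[k] *= 0.9 // decay: placeholder constant, not the real rule
	}
	for _, p := range attended {
		s.momentum[p] += 1.0 // boost paths seen since the last tick
	}
}

func (s *momentumStore) Momentum() map[string]float64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make(map[string]float64, len(s.momentum))
	for k, v := range s.momentum {
		out[k] = v // copy so readers never alias the live map
	}
	return out
}

func main() {
	s := &momentumStore{momentum: map[string]float64{}}
	s.Update([]string{"mem/design.md"})
	s.Update([]string{"mem/design.md"})
	fmt.Println(s.Momentum()["mem/design.md"]) // 1.9
}
```

Returning a copy from Momentum is what makes the "safe for concurrent reads" guarantee cheap to honor: readers can range over the result without holding any lock.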
type TriggerAgentRequest ¶ added in v0.3.0
TriggerAgentRequest is the normalized input to the trigger-agent helper.
func (*TriggerAgentRequest) Normalize ¶ added in v0.3.0
func (r *TriggerAgentRequest) Normalize() error
Normalize fills defaults and validates.
type TrustContext ¶
type TrustContext = cogblock.TrustContext
type TrustState ¶
type TrustState struct {
LocalScore float64 `json:"local_score"`
LastHeartbeatHash string `json:"last_heartbeat_hash,omitempty"`
LastHeartbeatAt time.Time `json:"last_heartbeat_at,omitempty"`
CoherenceFingerprint string `json:"coherence_fingerprint,omitempty"`
}
TrustState tracks kernel-local identity and coherence trust metadata.
type TurnRecord ¶ added in v0.3.0
type TurnRecord struct {
TurnID string `json:"turn_id"` // UUID minted at turn-start
TurnIndex int `json:"turn_index"` // 1-based within session
SessionID string `json:"session_id"`
Timestamp time.Time `json:"timestamp"` // turn-start, UTC
DurationMs int64 `json:"duration_ms,omitempty"` // turn-end minus turn-start
Prompt string `json:"prompt"` // user message text (full)
Response string `json:"response"` // assistant message text (full)
ToolCalls []ToolCallRecord `json:"tool_calls,omitempty"` // kernel-tool transcript
Provider string `json:"provider,omitempty"`
Model string `json:"model,omitempty"`
Usage TurnUsage `json:"usage,omitempty"`
BlockID string `json:"block_id,omitempty"` // links to cogblock.ingest
Status string `json:"status,omitempty"` // "ok" | "error"
Error string `json:"error,omitempty"` // on status="error"
LedgerHash string `json:"ledger_hash,omitempty"` // turn.completed hash, filled after append
}
TurnRecord is one complete prompt → response exchange. Stored in full in the session sidecar; stored as a truncated preview in the turn.completed ledger event.
type TurnUsage ¶ added in v0.3.0
type TurnUsage struct {
InputTokens int `json:"input_tokens,omitempty"`
OutputTokens int `json:"output_tokens,omitempty"`
TotalTokens int `json:"total_tokens,omitempty"`
}
TurnUsage is a minimal (provider-neutral) token tally for a turn.
type URIResolution ¶
type URIResolution struct {
// Path is the absolute filesystem path.
Path string
// Fragment is the section anchor stripped from the URI (empty if none).
Fragment string
}
URIResolution is the result of resolving a cog:// URI to the filesystem.
func ResolveURI ¶
func ResolveURI(workspaceRoot, uri string) (*URIResolution, error)
ResolveURI resolves a cog:// URI to an absolute filesystem path. The #fragment part (section anchor) is separated and returned in URIResolution.Fragment without modifying the path resolution.
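The fragment handling can be illustrated with a small standalone helper — a sketch of the documented behavior (split off the `#fragment` before resolving the path), not the package's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitFragment separates the #fragment from a cog:// URI, the way
// ResolveURI is documented to do before path resolution. Sketch only.
func splitFragment(uri string) (base, fragment string) {
	if i := strings.IndexByte(uri, '#'); i >= 0 {
		return uri[:i], uri[i+1:]
	}
	return uri, ""
}

func main() {
	base, frag := splitFragment("cog://mem/semantic/design.md#goals")
	fmt.Println(base) // cog://mem/semantic/design.md
	fmt.Println(frag) // goals
}
```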
type URLDecomposer ¶ added in v0.3.0
type URLDecomposer struct {
// contains filtered or unexported fields
}
URLDecomposer implements Decomposer for URL inputs. It fetches the page, parses HTML metadata, and classifies the content by domain heuristics.
func NewURLDecomposer ¶ added in v0.3.0
func NewURLDecomposer(workspaceRoot string) *URLDecomposer
NewURLDecomposer creates a URLDecomposer with a 10-second HTTP timeout. workspaceRoot is the absolute path to the workspace directory (used for downloading artefacts like PDFs). Pass "" if downloading is not needed.
func (*URLDecomposer) CanDecompose ¶ added in v0.3.0
func (d *URLDecomposer) CanDecompose(req *IngestRequest) bool
CanDecompose reports true when the request format is FormatURL or the data looks like a URL (starts with http:// or https://).
func (*URLDecomposer) Decompose ¶ added in v0.3.0
func (d *URLDecomposer) Decompose(ctx context.Context, req *IngestRequest) (*IngestResult, error)
Decompose fetches the URL, extracts HTML metadata, classifies the content, and returns a normalized IngestResult. HTTP and parse errors are handled gracefully — the result will contain whatever metadata can be derived from the URL itself.
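The CanDecompose heuristic described above reduces to a prefix check. A self-contained sketch (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeURL mirrors the documented CanDecompose heuristic:
// accept data that starts with http:// or https://.
func looksLikeURL(data string) bool {
	s := strings.TrimSpace(data)
	return strings.HasPrefix(s, "http://") || strings.HasPrefix(s, "https://")
}

func main() {
	fmt.Println(looksLikeURL("https://example.com/paper.pdf")) // true
	fmt.Println(looksLikeURL("notes/design.md"))               // false
}
```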
type ValidationResult ¶
type ValidationResult struct {
Pass bool `json:"pass"`
Layer string `json:"layer"`
Diagnostic *Diagnostic `json:"diagnostic,omitempty"`
Timestamp string `json:"timestamp"`
}
ValidationResult is the outcome of a single validation check.
type WriteConfigOptions ¶ added in v0.3.0
WriteConfigOptions controls WriteConfigPatch behaviour.
type WriteConfigResult ¶ added in v0.3.0
type WriteConfigResult struct {
Written bool `json:"written"`
RequiresRestart bool `json:"requires_restart"`
Violations []ConfigViolation `json:"violations,omitempty"`
EffectiveConfig map[string]any `json:"effective_config,omitempty"`
BackupPath string `json:"backup_path,omitempty"`
Diff []ConfigDiffEntry `json:"diff,omitempty"`
ChangedFields []string `json:"changed_fields,omitempty"`
Path string `json:"path"`
DryRun bool `json:"dry_run,omitempty"`
}
WriteConfigResult is the value returned from WriteConfigPatch / exposed by `cog_write_config` and `PATCH /v1/config`.
func WriteConfigPatch ¶ added in v0.3.0
func WriteConfigPatch(root string, patch map[string]any, opts WriteConfigOptions) (WriteConfigResult, error)
WriteConfigPatch merges the supplied JSON patch into kernel.yaml, validates the result, and — on success — writes atomically after rotating backups. The top-level mutex serializes concurrent callers.
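The merge step can be sketched as a recursive map fold, in the spirit of a JSON merge patch. This is illustrative only — the real WriteConfigPatch also validates the result, rotates backups, and writes atomically under a mutex:

```go
package main

import "fmt"

// mergePatch recursively folds patch into base: nested maps merge,
// scalar values in patch overwrite. Sketch of the merge step only.
func mergePatch(base, patch map[string]any) map[string]any {
	out := make(map[string]any, len(base))
	for k, v := range base {
		out[k] = v
	}
	for k, v := range patch {
		if pv, ok := v.(map[string]any); ok {
			if bv, ok := out[k].(map[string]any); ok {
				out[k] = mergePatch(bv, pv) // recurse into nested sections
				continue
			}
		}
		out[k] = v // scalars and type changes overwrite
	}
	return out
}

func main() {
	// Hypothetical kernel.yaml content in map form.
	base := map[string]any{"kernel": map[string]any{"budget": 4096, "model": "default"}}
	patch := map[string]any{"kernel": map[string]any{"budget": 8192}}
	merged := mergePatch(base, patch)
	fmt.Println(merged["kernel"].(map[string]any)["budget"]) // 8192
	fmt.Println(merged["kernel"].(map[string]any)["model"])  // default
}
```

Merging before validating is what lets the result report the full EffectiveConfig and a per-field Diff rather than just echoing the patch.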
type WriteResult ¶ added in v0.3.0
type WriteResult struct {
Path string // absolute filesystem path
URI string // canonical cog:// URI
}
WriteResult is the outcome of a successful CogDoc write or patch.
Source Files
¶
- agent_controller.go
- agent_state_query.go
- benchmark.go
- blobs_cmd.go
- blobstore.go
- bus_consumers.go
- bus_session.go
- bus_stream.go
- canvas_embed.go
- chat.go
- chunk.go
- cli.go
- cli_emit.go
- cli_mcp.go
- cogblock.go
- cogblock_ledger.go
- cogblock_normalize.go
- cogdoc_service.go
- coherence.go
- config.go
- config_write.go
- consolidate.go
- constellation_bridge.go
- context_assembly.go
- context_blocks.go
- context_frame.go
- conversation_query.go
- daemon_lifecycle.go
- dashboard_embed.go
- debug.go
- defaults.go
- docs_generate.go
- eventbus.go
- events_query.go
- experiment.go
- field.go
- gate.go
- index.go
- ingest.go
- ingest_policy.go
- ingest_url.go
- init.go
- kernel_log_query.go
- ledger.go
- ledger_query.go
- log_capture.go
- mcp_server.go
- mcp_sessions.go
- mcp_stubs.go
- membrane_default.go
- memory.go
- node_cmd.go
- node_manifest.go
- node_probe.go
- nucleus.go
- observer.go
- process.go
- procmgr.go
- proprioceptive.go
- provider.go
- provider_anthropic.go
- provider_claudecode.go
- provider_codex.go
- provider_ollama.go
- provider_openai.go
- provider_pi.go
- provider_stub.go
- router.go
- salience.go
- serve.go
- serve_anthropic.go
- serve_attention.go
- serve_blocks.go
- serve_bus.go
- serve_compat.go
- serve_config.go
- serve_events.go
- serve_foveated.go
- serve_mcp.go
- serve_sessions.go
- serve_sessions_mgmt.go
- sessions.go
- stream_tailer.go
- sync_watcher.go
- tailer_claudecode.go
- tailer_openclaw.go
- telemetry.go
- tool_calls_query.go
- tool_loop.go
- tool_observer.go
- traces_query.go
- transition_hooks.go
- trm.go
- trm_context.go
- trm_index.go
- trm_lightcone.go
- turn_storage.go
- uri.go
- uri_resolve.go
- uri_v2_stub.go