Documentation
Overview
benchmark.go — foveated context benchmark runner
Measures context assembly quality against a set of prompts with known expected CogDoc matches. For each prompt it records:
- Recall: fraction of expected docs that appeared in the assembled context
- Precision: fraction of injected docs that were expected
- Assembly latency (ms)
- Total token count of assembled context
- Model response (for manual quality review)
Run with: cogos-v3 bench [--prompts path] [--model name] [--budget n]
blobs_cmd.go — CLI commands for blob store management
Usage:
cogos-v3 blobs list — list all stored blobs
cogos-v3 blobs store <file> — manually store a file
cogos-v3 blobs get <hash> <out> — retrieve blob to file
cogos-v3 blobs verify — check all pointers have matching blobs
cogos-v3 blobs gc [--dry-run] — garbage collect unreferenced blobs
cogos-v3 blobs init — initialize the blob store
blobstore.go — Content-addressed blob store for CogOS
Stores large binary content (PDFs, audio, model weights) outside of git in a content-addressed directory at .cog/blobs/. Files are addressed by SHA-256 hash and stored with a 2-character prefix directory for filesystem efficiency.
Layout:
.cog/blobs/
├── a1/
│   └── b2c3d4e5f6... (blob content)
├── manifest.jsonl (index of all stored blobs)
└── .gitkeep
The blob store is gitignored. CogDocs reference blobs via pointer files (type: blob.pointer) that carry the hash, size, and content type.
canvas_embed.go — embeds and serves the canvas-based dashboard
The canvas view is served at GET /canvas from the v3 daemon. It provides an infinite-canvas spatial interface with draggable nodes, real-time chat, and CogDoc visualization.
chat.go — interactive chat REPL
Two modes:
--direct   Calls OllamaProvider directly (no daemon needed). Useful for offline testing or when the daemon isn't running.
(default)  POSTs to the running daemon at localhost:PORT/v1/chat/completions and streams the SSE response to stdout.
Usage:
cogos-v3 chat                # connect to daemon on default port 6931
cogos-v3 chat --port 6931    # explicit port
cogos-v3 --port 6931 chat    # flags must precede subcommand name
cogos-v3 chat --direct       # bypass daemon, talk directly to Ollama
chunk.go — Turn-aware document chunking for CogOS v3
Splits CogDoc content into chunks suitable for embedding. Conversations (ChatGPT exports, Claude sessions, etc.) are chunked by turn boundaries so that no chunk splits mid-turn. Non-conversation content falls back to paragraph/character-based chunking.
The target chunk size is in characters (~500 tokens at 4 chars/token). A single oversized turn becomes its own chunk regardless of target.
main.go — CogOS v3 kernel entry point
Starts the continuous process daemon. Concurrent goroutines include:
- process.Run(ctx) — the cognitive loop (field updates, consolidation, heartbeat)
- server.Start() — the HTTP API
Flags:
--port      API port (default 6931; v2 is 5100)
--workspace path to workspace root (auto-detected from cwd if omitted)
--config    (reserved for future use)
coherence.go — CogOS v3 coherence validation
Simplified from apps/cogos/validation.go (v2.4.0). Provides the 4-layer validation stack as callable functions. The continuous process runs this on a cadence; it is not session-triggered.
Layers:
- Schema — frontmatter structure valid
- Invariants — system invariants hold (nucleus loaded, workspace intact)
- Policy — kernel boundary not violated
- Consistency — cross-artifact coherence
config.go — CogOS v3 configuration loading
context_assembly.go — foveated context assembly for chat requests
The engine owns the full context window. It accepts the client's messages[], decomposes them, scores conversation turns alongside CogDocs, and renders everything into a stability-ordered token stream within the configured budget.
Stability zones (ordered for KV cache optimization):
Zone 0: Nucleus (identity card) — most stable, always present
Zone 1: CogDocs + client system prompt — shifts slowly per query
Zone 2: Conversation history — scored by recency + relevance, evictable
Zone 3: Current message — always present
[Reserve: OutputReserve tokens for model generation]
Token budget is approximated as chars/4. Default budget: 32768 tokens (matches provider context_window from providers.yaml).
Any OpenAI-compatible client works transparently — the engine intercepts the standard messages[] array and manages what the model actually sees.
context_blocks.go — builder functions for well-known context blocks
Each build* function produces a single ContextBlock (or nil when the source data is unavailable). The foveated context pipeline calls these builders, collects non-nil results into a ContextFrame, and renders the frame.
context_frame.go — structured output type for the foveated context rendering pipeline
ContextFrame is the intermediate representation between context assembly and rendering. Each block is named, tiered by priority, annotated with a stability hint (for KV cache optimization), and carries its rendered content.
The rendering pipeline (serve_foveated.go) can compose, prioritize, and budget-fit blocks before emitting the final HTML comment block stream.
dashboard_embed.go — embeds and serves the web dashboard
The dashboard is a single HTML file served at GET / from the v3 daemon. No build step, no external dependencies, no separate process.
debug.go — introspection endpoints for the foveated context engine
Provides real-time visibility into engine state:
GET /v1/debug/last — full pipeline snapshot from the most recent chat request
GET /v1/debug/context — current context window as zones with ordering and token counts
No external dependencies. Just curl it.
docs_generate.go — Auto-documentation pipeline
Walks the CogDoc corpus, parses frontmatter, groups by type/status/sector, and generates deterministic documentation outputs:
- DASHBOARD.md — inbox health (raw/enriched/integrated counts)
- INDEX.md — research index grouped by tags
- CATALOG.md — tool/skill inventory
- README.md — per-directory summaries
This is the efferent pathway: knowledge flows OUT of the CogDoc substrate as human-readable documentation. No LLM calls — purely deterministic.
Usage: cogos-v3 docs [--workspace PATH]
experiment.go — autoresearch experiment runner
An experiment is a CogDoc (YAML frontmatter + markdown body) that specifies:
- Which benchmark prompts file to use
- Model, budget, and method
- Optional comparison against a baseline run
Usage:
cogos-v3 experiment run <path-to-experiment.md>
The runner:
- Loads the experiment config from YAML frontmatter
- Loads the benchmark prompts
- Runs the benchmark suite
- Saves results as a new experiment log CogDoc
- If a previous run exists, computes and prints the recall/precision delta
- Flags regressions (recall drop > threshold)
field.go — CogOS v3 attentional field
The attentional field is the continuous salience map over the memory corpus. Every memory file gets a float64 score. The "fovea" is the top-N files by score that fit in the context window.
In v2, salience was computed once per session at context assembly time. In v3, the field is updated continuously by the process loop, decoupled from any external request.
gate.go — CogOS v3 attentional gate
The gate receives events (perturbations) and routes them into the fovea. It decides:
- Which memory files should be elevated in salience as a result of this event
- Whether the event triggers a state transition in the process
Stage 1: minimal routing — gate accepts events and records them. Stage 2+: gate will perform semantic matching against the attentional field.
index.go — CogDoc index for CogOS v3
BuildIndex walks .cog/mem/ and constructs an in-memory lookup table for all CogDoc files (Markdown files with YAML frontmatter). The index provides O(1) lookups by URI, type, tag, and status, plus forward and inverse reference graphs for coherence validation.
Index lifecycle:
- Built on startup (best-effort; errors are non-fatal).
- Rebuilt by Process.runConsolidation() on each consolidation tick.
- Served via /v1/resolve for URI resolution queries.
init.go — cogos init command
Scaffolds a new CogOS workspace with the minimum structure needed for the daemon to start: config files, memory directories, a default identity card, and an empty ledger.
Idempotent: does not overwrite existing files. Safe to run on an existing workspace to fill in missing structure.
ledger.go — CogOS v3 hash-chained event ledger
Ported from apps/cogos/ledger_core.go (v2.4.0). CLI command functions removed; EventEnvelope, hash chain, and append logic preserved.
Every significant cognitive event is recorded as an append-only JSONL entry. Entries are hash-chained (RFC 8785 canonical JSON + SHA-256) to provide tamper-evidence and causal ordering.
memory.go — CogOS v3 memory system interface
Thin interface over the CogDocs memory layout (.cog/mem/). Delegates search to the cog CLI wrapper (scripts/cog memory search). In stage 5, this will be replaced with local embedding-based retrieval.
nucleus.go — CogOS v3 nucleus
The nucleus is the always-loaded identity context: the runtime object that is never evicted from memory. It replaces the v2 pattern of loading the identity card from disk at session start.
In v3, the nucleus is loaded once at daemon startup and held in memory for the lifetime of the process. It is the "floor" of the attentional field.
observer.go — CogOS v3 observer loop (Field → Observer → Model → Field)
Implements the trefoil closed loop that makes the daemon a true observer:
Loop 1 (Field → Observer): Each consolidation tick reads attention signals from the attention log and current field scores — the raw percept.
Loop 2 (Observer → Model): TrajectoryModel updates attention momentum via EMA, computes Jaccard prediction error against the previous cycle, and generates a new prediction. Both error and prediction are recorded in the ledger (hash-chained, irreversible — this is the arrow of time).
Loop 3 (Model → Field): Predictions pre-warm the field (salience boost). Paths that drop out of the prediction are attenuated. Prediction errors above the surprise threshold emit an observer.surprise coherence signal.
The consolidation CogDoc written each cycle is the model's trace in the field — a legible record that the observer existed and acted.
process.go — CogOS v3 continuous process state machine
Implements the always-running cognitive process described in the v3 spec. The process has four states and an internal event loop that runs independently of external HTTP requests.
States:
Active — processing an external perturbation
Receptive — idle, listening for input
Consolidating — running internal maintenance (memory, coherence)
Dormant — minimal activity, heartbeat only
The select loop is the core architectural difference from v2: v2 is request-triggered; v3 has internal tickers that fire regardless.
procmgr.go — Process lifecycle manager for Claude Code subprocesses.
Tracks all spawned claude processes (foreground, background, agent). Handles:
- Client disconnect / cancellation (SIGTERM → SIGKILL escalation)
- Background process lifecycle (outlive the HTTP request)
- Concurrent process limits (per-identity and global)
- Process inventory for observability
- Callback delivery when background tasks complete
Process kinds:
- Foreground: tied to an HTTP request. Killed on client disconnect.
- Background: fire-and-forget. Has its own timeout. Reports via callback.
- Agent: runs in a Docker container. Trust-bounded. Future implementation.
proprioceptive.go — Proprioceptive logging for TRM prediction-vs-reality tracking.
After each chat request, the TRM predicts which chunks will be referenced. This logger records predictions alongside actual references extracted from the response, enabling continuous calibration of the light cone.
Log format: JSONL at .cog/run/proprioceptive.jsonl
provider.go — CogOS v3 inference provider interface
Adapted from the workspace PROVIDER-SPEC.md contract. All LLM backends satisfy the Provider interface. The kernel never calls a model API directly — it always routes through a Provider.
Key design decisions:
- Models are organs, not the organism. Swappable, upgradeable.
- CompletionRequest carries foveated context, not raw strings.
- Router implements the sovereignty gradient: local-first, cloud-escalate.
- ProcessState uses string (maps to ProcessState.String() from process.go).
provider_anthropic.go — AnthropicProvider
Implements Provider against the Anthropic Messages API (POST /v1/messages).
Auth: x-api-key header, read from the env var named by config.APIKeyEnv.
Streaming: SSE (text/event-stream), reading typed events (message_start, content_block_start, content_block_delta, message_delta, message_stop).
Tool calls: streamed incrementally as ToolCallDelta chunks; non-streaming responses decode tool_use content blocks directly.
Context items are prepended to the system string as labelled sections.
provider_claudecode.go — ClaudeCodeProvider
Implements Provider by spawning `claude -p` subprocesses. Unlike the Anthropic and Ollama providers, ClaudeCodeProvider is agentic: the subprocess owns its own tool loop (filesystem, MCP, etc.).
Authentication: uses the host's Claude Max subscription via OAuth (keychain). Does NOT use --bare mode, which would require API keys.
Process lifecycle:
- Foreground: tied to HTTP request context. Cancelled on disconnect.
- Background: outlives the request. Reports back via callback.
- Agent: runs in Docker container. Trust-bounded, resource-limited.
Output: parsed from `--output-format stream-json --include-partial-messages` which emits NDJSON with Anthropic streaming events.
provider_codex.go — CodexProvider
Implements Provider by spawning `codex exec` subprocesses (OpenAI Codex CLI). Parses the NDJSON event stream (--json flag) and extracts agent_message items.
Authentication: uses the host's ChatGPT Pro subscription via codex CLI auth.
provider_ollama.go — OllamaProvider
Implements Provider against a local Ollama server (http://localhost:11434). Uses /api/chat for multi-turn conversations (not /api/generate). Streaming: Ollama returns newline-delimited JSON chunks. think=false: disables qwen3's thinking mode to avoid silent token burn.
provider_openai.go — OpenAICompatProvider
Implements Provider against any OpenAI-compatible API server: LM Studio, vLLM, llama.cpp server, text-generation-webui, or the OpenAI API itself. Uses /v1/chat/completions for both streaming (SSE) and non-streaming. Discovery: GET /v1/models to enumerate available models.
SSE format: "data: {...}\n\n" lines with "data: [DONE]" sentinel. No CGO dependencies — standard library net/http only.
provider_pi.go — PiProvider
Implements Provider by spawning `pi -p` subprocesses for local agentic inference. Pi handles the tool loop (read, bash, edit, write) against local models via Ollama, while the kernel handles context assembly.
This is the local counterpart to ClaudeCodeProvider:
- ClaudeCodeProvider: cloud agentic inference (Claude Max via OAuth)
- PiProvider: local agentic inference (Ollama via Pi)
The kernel assembles foveated context and injects it via --system-prompt. Pi runs the agent loop. Ollama runs the model.
Output: parsed from `--mode json` which emits NDJSON AgentSessionEvents.
provider_stub.go — StubProvider for testing
In-memory provider with configurable responses, error injection, and latency simulation. Used in unit tests and for offline development.
router.go — SimpleRouter + BuildRouter
SimpleRouter implements the Router interface with rule-based provider selection:
- Check process-state routing overrides
- Try preferred provider first, then fallback chain
- Filter by availability + required capabilities
- Score local > cloud (sovereignty gradient)
- Record every routing decision for future sentinel training
BuildRouter reads .cog/config/providers.yaml and instantiates enabled providers. Falls back to a default Ollama config when no providers.yaml is present.
salience.go — CogOS v3 git-derived salience scoring
Ported from apps/cogos/salience.go (v2.4.0). CLI command functions removed; core computation preserved.
Implements ADR-018: Salience System (Git-Derived Attention). Performance target: <5ms per file via go-git (vs 80ms in shell version).
serve.go — CogOS v3 HTTP API
Core endpoints:
GET /health — liveness + readiness probe
GET /v1/context — current attentional field (debug)
GET /v1/resolve — resolve a cog:// URI to a filesystem path
POST /v1/chat/completions — OpenAI-compatible chat (streaming + non-streaming)
POST /v1/messages — Anthropic Messages-compatible chat
POST /v1/context/foveated — foveated context assembly for Claude Code hook
GET /v1/proprioceptive — last 50 proprioceptive log entries + light cone status
GET /v1/lightcone — light cone metadata (placeholder)
Constellation / attention endpoints (Phase 3, see serve_attention.go):
POST /v1/attention — emit attention signal
GET /v1/constellation/fovea — current fovea state
GET /v1/constellation/adjacent?uri=… — adjacent nodes by attentional proximity
The chat endpoint routes through the inference Router when one is set, otherwise returns 501.
serve_blocks.go — HTTP endpoints for block sync protocol
Phase 3 of the block sync protocol: remote blob exchange.
GET /v1/blocks/{hash} — retrieve a blob by hash
PUT /v1/blocks/{hash} — store a blob by hash
GET /v1/blocks/manifest — list all stored blobs (manifest exchange)
POST /v1/blocks/verify — verify a list of hashes, return missing ones
These endpoints enable workspace-to-workspace blob sync:
- Workspace A gets B's manifest
- Diffs against local manifest
- GETs missing blobs by hash
- Stores them locally
Content is verified by hash on both read and write — the hash IS the address.
serve_compat.go — v2 compatibility endpoints for Phase 0 cutover.
These endpoints allow v3 to replace v2 as the production kernel on port 5100. Consumers: OpenClaw cogos plugin, CogBus plugin, launchd service.
DEPRECATED: These compatibility routes exist only for migration from v2. They will be removed once all clients migrate to standard endpoints. Standard endpoints: /v1/chat/completions, /v1/messages, /mcp, /health
Endpoints:
GET /v1/card — kernel capability card (OpenClaw auth flow)
GET /v1/models — OpenAI-compatible model list
GET /v1/events/stream — SSE stub (CogBus keepalive)
POST /v1/bus/{bus_id}/ack — bus event acknowledgment stub
GET /memory/search — memory search (was missing from v2 too)
GET /memory/read — memory read (was missing from v2 too)
GET /coherence/check — coherence check
GET /v1/providers — provider list with health
GET /v1/taa — TAA context visibility stub
serve_foveated.go — POST /v1/context/foveated
Bridge endpoint matching the v2 foveated context API so that the Claude Code hook (foveated-context.py) can point at the v3 kernel.
Input: {prompt, iris: {size, used}, profile, session_id}
Output: {context, tokens, anchor, goal, iris_pressure}
The "context" field is a rendered string of CogBlock HTML comment blocks that get injected into Claude's context window via the hook's additionalContext.
telemetry.go — OpenTelemetry tracing and metrics for the v3 kernel
Initializes a tracer and meter provider with OTLP HTTP exporters. Degrades gracefully to no-op when no collector is available.
Environment variables:
OTEL_EXPORTER_OTLP_ENDPOINT — collector endpoint (default: http://localhost:4318)
OTEL_SERVICE_NAME — service name (default: cogos-v3)
Usage:
shutdown := initTelemetry(ctx)
defer shutdown(ctx)
// Use otel.Tracer("cogos-v3") and otel.Meter("cogos-v3") anywhere.
transition_hooks.go — ADR-072 state-transition hook dispatch.
Implements the per-transition handler layer described in ADR-072 ("State-Transition Hooks for Node Lifecycle"). On every state change, `transition()` invokes a per-state enter handler. Each handler:
- Runs the minimum-viable per-ADR work inline (small, bounded).
- Dispatches any matching declarative StateHook definitions loaded from `.cog/hooks/transitions/*.yaml`. Only the `shell:` form is implemented at this revision — agent-form hooks (ADR-072 Phase 3) are loaded and matched but logged as pending instead of executed.
Non-goals of this file:
- Event-bus emission of transition records (owned by a sibling track).
- Full agent-subprocess spawning with budget/timeout (ADR-072 Phase 3).
- Condition evaluation beyond the scaffold (ADR-072 Phase 4).
Concurrency: all handler bodies and hook executions run on goroutines spawned by `transition()`. They must not take `p.mu`; they may read immutable fields (cfg, sessionID) directly.
trm.go — Pure Go implementation of MambaTRM inference.
The TRM (Temporal Retrieval Model) uses Mamba selective state spaces to process temporally ordered session events and predict which workspace chunks are most relevant. It maintains a "light cone" — compressed SSM hidden state representing the observer's trajectory through the workspace.
This is a zero-dependency inference engine: no gonum, no CGO. All math is done with raw float32 slices and manual loops. The model is tiny (~1.7M params), so this is efficient enough.
Binary weight format (TRM1):
4 bytes: magic "TRM1"
4 bytes: uint32 number of tensors
Per tensor: name_len(4) + name(N) + ndim(4) + shape(4*ndim) + data(4*numel)
All values little-endian. Data is float32, row-major.
trm_context.go — TRM integration into the context assembly pipeline.
Provides:
- OllamaEmbed: embed a query via the local Ollama /api/embeddings endpoint
- trmScoreDocs: score CogDoc candidates using MambaTRM + embedding index
- loadTRMAtStartup: one-shot loader called from main.go
When the TRM is available, it replaces keyword+salience scoring as the primary CogDoc ranking signal. When unavailable (no weights, Ollama down, etc.), the pipeline falls back to the existing scoring transparently.
trm_index.go — Embedding index for TRM cosine pre-filtering.
Loads the binary embedding index (EMB1 format) and chunk metadata (JSON) exported by trm_export.py. Provides fast cosine similarity top-K search over the full embedding corpus.
Binary format (EMB1):
4 bytes: magic "EMB1"
4 bytes: uint32 num_chunks
4 bytes: uint32 dim (384)
num_chunks * dim * 4 bytes: float32 data (row-major, little-endian)
trm_lightcone.go — Thread-safe per-conversation light cone state manager.
Each conversation maintains its own light cone — the SSM hidden state that compresses the observer's trajectory through the workspace. The LightConeManager provides concurrent-safe access keyed by conversation ID.
uri.go — cog:// URI projection system for CogOS v3
A cog:// URI has the form:
cog://type/path[#fragment]
"type" selects a Projection that maps to a filesystem location under the workspace root. "path" is the resource name within that projection. "fragment" (optional, after '#') identifies a section within the file.
Examples:
cog://mem/semantic/insights/eigenform.cog.md → .cog/mem/semantic/insights/eigenform.cog.md
cog://mem/semantic/insights/eigenform.cog.md#Seed → same path, anchor "Seed"
cog://conf/kernel.yaml → .cog/config/kernel.yaml
cog://crystal → .cog/ledger/crystal.json
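The syntactic split described above can be sketched as a small parser. This illustrates only the grammar; the real resolver then maps "type" through a Projection table to a filesystem location, and the function name `parseCogURI` is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCogURI splits cog://type/path[#fragment] into its three parts.
// path may be empty, as in cog://crystal.
func parseCogURI(uri string) (typ, path, frag string, err error) {
	rest, ok := strings.CutPrefix(uri, "cog://")
	if !ok {
		return "", "", "", fmt.Errorf("not a cog:// URI: %q", uri)
	}
	// Fragment comes after the first '#', if any.
	if i := strings.Index(rest, "#"); i >= 0 {
		frag = rest[i+1:]
		rest = rest[:i]
	}
	// Type is everything up to the first '/'; the remainder is the path.
	typ, path, _ = strings.Cut(rest, "/")
	return typ, path, frag, nil
}

func main() {
	typ, path, frag, _ := parseCogURI("cog://mem/semantic/insights/eigenform.cog.md#Seed")
	fmt.Println(typ, path, frag)
}
```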
uri_v2_stub.go — stub for URIRegistry when coguri library is unavailable.
mcp_server.go references URIRegistry under the mcpserver build tag. When the coguri library isn't present (no coguri build tag), this stub provides a nil-valued placeholder so the package compiles cleanly. The nil check in mcp_server.go (if URIRegistry != nil) ensures the Resolve method is never actually called.
Index
- Constants
- Variables
- func AppendEvent(workspaceRoot, sessionID string, envelope *EventEnvelope) error
- func ArchivedSessions(workspaceRoot, sessionID string) (map[string]struct{}, error)
- func CanonicalizeEvent(payload *EventPayload) ([]byte, error)
- func ChunkDocument(body string, targetSize int) []string
- func CollectReferencedHashes(workspaceRoot string) (map[string]bool, error)
- func ContentTypeFromExt(path string) string
- func DefaultManifestPath(workspaceRoot string) string
- func ExtractInlineRefs(content string) []string
- func ExtractReferencedPaths(response string) []string
- func FieldKeyToURI(workspaceRoot, absPath string) string
- func GetHashAlgorithm(workspaceRoot string) string
- func GetHotFiles(repoPath, scope string, limit int, threshold float64, daysWindow int, ...) ([]string, error)
- func HashEvent(canonicalBytes []byte, algorithm string) (string, error)
- func Main()
- func MemoryRead(workspaceRoot, path string) (string, error)
- func MemorySearch(workspaceRoot, query string) ([]string, error)
- func OllamaEmbed(ctx context.Context, cfg *Config, query string) ([]float32, error)
- func PathExistsOnDisk(path string) bool
- func PathToURI(workspaceRoot, path string) (string, error)
- func PrintSummary(results []BenchmarkResult)
- func ResolveToFieldKey(workspaceRoot, pointer string) string
- func RunExperiment(ctx context.Context, experimentPath, workspaceRoot string, process *Process, ...) error
- func RunInit(workspaceRoot string) error
- func SaveResults(workspaceRoot string, results []BenchmarkResult, model, method string) error
- func ShouldRedirectToBlob(path string, size int64) bool
- type AnthropicProvider
- func (p *AnthropicProvider) Available(_ context.Context) bool
- func (p *AnthropicProvider) Capabilities() ProviderCapabilities
- func (p *AnthropicProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *AnthropicProvider) Name() string
- func (p *AnthropicProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *AnthropicProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type AssembleOption
- type AttentionProbe
- type AttentionalField
- func (f *AttentionalField) AllScores() map[string]float64
- func (f *AttentionalField) Boost(path string, delta float64)
- func (f *AttentionalField) Fovea(n int) []FileScore
- func (f *AttentionalField) LastUpdated() time.Time
- func (f *AttentionalField) Len() int
- func (f *AttentionalField) Score(path string) float64
- func (f *AttentionalField) Update() error
- type AttentionalZone
- type BackgroundTaskOpts
- type BenchmarkPrompt
- type BenchmarkResult
- type BenchmarkSuite
- type BlobEntry
- type BlobPointer
- type BlobStore
- func (bs *BlobStore) Exists(hash string) bool
- func (bs *BlobStore) GC(referencedHashes map[string]bool) (removed int, freed int64, err error)
- func (bs *BlobStore) Get(hash string) ([]byte, error)
- func (bs *BlobStore) Init() error
- func (bs *BlobStore) List() ([]BlobEntry, error)
- func (bs *BlobStore) PrintBlobList() error
- func (bs *BlobStore) Size() (int64, int, error)
- func (bs *BlobStore) Store(content []byte, contentType string, refs ...string) (string, error)
- func (bs *BlobStore) StoreFile(path string, contentType string, refs ...string) (string, error)
- func (bs *BlobStore) Verify(workspaceRoot string) (missing []string, err error)
- func (bs *BlobStore) WritePointer(path string, hash string, size int64, contentType string, originalPath string) error
- type BlockArtifact
- type BlockProvenance
- type BuildRouterOption
- type Capability
- type ChunkMeta
- type ClaudeCodeProvider
- func (p *ClaudeCodeProvider) Available(ctx context.Context) bool
- func (p *ClaudeCodeProvider) Capabilities() ProviderCapabilities
- func (p *ClaudeCodeProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *ClaudeCodeProvider) Name() string
- func (p *ClaudeCodeProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *ClaudeCodeProvider) SpawnBackground(opts BackgroundTaskOpts) (string, error)
- func (p *ClaudeCodeProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type ClaudeCodeTailer
- type CodexProvider
- func (p *CodexProvider) Available(ctx context.Context) bool
- func (p *CodexProvider) Capabilities() ProviderCapabilities
- func (p *CodexProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *CodexProvider) Name() string
- func (p *CodexProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *CodexProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type CogBlock
- type CogBlockKind
- type CogDocIndex
- type CoherenceReport
- type CompletionRequest
- type CompletionResponse
- type Config
- type ConsolidationAction
- type ConstellationBridge
- type ConstellationTrustSnapshot
- type ConsumerEntry
- type ContainerConfig
- type ContainerRuntime
- type ContainerStatus
- type ContentPart
- type ContextBlock
- type ContextFrame
- type ContextItem
- type ContextPackage
- type Conv1D
- type DaemonHealth
- type DaemonState
- type DebugBudget
- type DebugClientInfo
- type DebugContextView
- type DebugEngineInfo
- type DebugProviderInfo
- type DebugSnapshot
- type DebugZone
- type DebugZoneItem
- type Diagnostic
- type DocRef
- type EmbeddingIndex
- type EventEnvelope
- type EventMetadata
- type EventPayload
- type ExperimentConfig
- type ExperimentDelta
- type FileScore
- type FileWatcher
- type FovealDoc
- type Gate
- type GateEvent
- type GateResult
- type HealthChecker
- type HeartbeatReceipt
- type IndexResult
- type IndexedCogdoc
- type InferenceEvent
- type KernelHeartbeatPayload
- type LayerNorm
- type LightCone
- type LightConeInfo
- type LightConeManager
- func (m *LightConeManager) Count() int
- func (m *LightConeManager) Delete(convID string)
- func (m *LightConeManager) Get(convID string) *LightCone
- func (m *LightConeManager) List() []LightConeInfo
- func (m *LightConeManager) Prune(before time.Time) int
- func (m *LightConeManager) Set(convID string, lc *LightCone)
- type Linear
- type MambaBlock
- type MambaState
- type MambaTRM
- type ManagedProcess
- type ManagedProcessOpts
- type NerdctlRuntime
- func (n *NerdctlRuntime) Exec(containerID string, command []string) ([]byte, error)
- func (n *NerdctlRuntime) Logs(containerID string, follow bool) (io.ReadCloser, error)
- func (n *NerdctlRuntime) Pull(image string) error
- func (n *NerdctlRuntime) Start(image string, config ContainerConfig) (string, error)
- func (n *NerdctlRuntime) Status(containerID string) (ContainerStatus, error)
- func (n *NerdctlRuntime) Stop(containerID string) error
- type NilBridge
- type NodeHealth
- type NodeManifest
- type Nucleus
- type ObserverUpdate
- type OllamaProvider
- func (p *OllamaProvider) Available(ctx context.Context) bool
- func (p *OllamaProvider) Capabilities() ProviderCapabilities
- func (p *OllamaProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *OllamaProvider) ContextWindow() int
- func (p *OllamaProvider) Name() string
- func (p *OllamaProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *OllamaProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type OpenAICompatProvider
- func (p *OpenAICompatProvider) Available(ctx context.Context) bool
- func (p *OpenAICompatProvider) Capabilities() ProviderCapabilities
- func (p *OpenAICompatProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *OpenAICompatProvider) Name() string
- func (p *OpenAICompatProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *OpenAICompatProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type OpenClawTailer
- type PiProvider
- func (p *PiProvider) Available(ctx context.Context) bool
- func (p *PiProvider) Capabilities() ProviderCapabilities
- func (p *PiProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
- func (p *PiProvider) Name() string
- func (p *PiProvider) Ping(ctx context.Context) (time.Duration, error)
- func (p *PiProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
- type PredictedChunk
- type Process
- func (p *Process) AssembleContext(query string, messages []ProviderMessage, budget int, opts ...AssembleOption) (*ContextPackage, error)
- func (p *Process) CurrentCycleID() string
- func (p *Process) EmbeddingIndex() *EmbeddingIndex
- func (p *Process) Field() *AttentionalField
- func (p *Process) Fingerprint() string
- func (p *Process) Gate() *Gate
- func (p *Process) Index() *CogDocIndex
- func (p *Process) LightCones() *LightConeManager
- func (p *Process) NodeHealth() *NodeHealth
- func (p *Process) Observer() *TrajectoryModel
- func (p *Process) RecordBlock(block *CogBlock) string
- func (p *Process) Run(ctx context.Context) error
- func (p *Process) Send(evt *GateEvent) bool
- func (p *Process) SessionID() string
- func (p *Process) SetTRM(trm *MambaTRM, idx *EmbeddingIndex)
- func (p *Process) StartedAt() time.Time
- func (p *Process) State() ProcessState
- func (p *Process) SubmitExternal(evt *GateEvent) bool
- func (p *Process) TRM() *MambaTRM
- func (p *Process) TrustSnapshot() TrustState
- type ProcessKind
- type ProcessManager
- func (pm *ProcessManager) CanSpawn(identity string) error
- func (pm *ProcessManager) Finish(id string)
- func (pm *ProcessManager) Kill(id string)
- func (pm *ProcessManager) KillByIdentity(identity string) int
- func (pm *ProcessManager) KillBySource(source string) int
- func (pm *ProcessManager) List() []ProcessSummary
- func (pm *ProcessManager) Remove(id string)
- func (pm *ProcessManager) SetOnComplete(fn func(*ManagedProcess))
- func (pm *ProcessManager) Shutdown(timeout time.Duration)
- func (pm *ProcessManager) Stats() ProcessStats
- func (pm *ProcessManager) Track(cmd *exec.Cmd, opts ManagedProcessOpts) *ManagedProcess
- type ProcessManagerConfig
- type ProcessState
- type ProcessStats
- type ProcessStatus
- type ProcessSummary
- type Projection
- type ProprioceptiveEntry
- type ProprioceptiveLogger
- type Provider
- type ProviderCapabilities
- type ProviderConfig
- type ProviderMessage
- type ProviderMeta
- type ProviderSalienceEntry
- type ProviderSalienceSnapshot
- type ProviderScore
- type ProvidersConfig
- type RequestMetadata
- type RequestPriority
- type Router
- type RouterStats
- type RoutingConfig
- type RoutingDecision
- type SalienceConfig
- type SalienceScore
- type ScoreHead
- type ScoredMessage
- type Server
- type ServiceDef
- type ServiceHealth
- type SimpleRouter
- type StreamChunk
- type StreamTailer
- type StubProvider
- func (s *StubProvider) Available(_ context.Context) bool
- func (s *StubProvider) Capabilities() ProviderCapabilities
- func (s *StubProvider) Complete(_ context.Context, _ *CompletionRequest) (*CompletionResponse, error)
- func (s *StubProvider) Name() string
- func (s *StubProvider) Ping(_ context.Context) (time.Duration, error)
- func (s *StubProvider) Stream(_ context.Context, _ *CompletionRequest) (<-chan StreamChunk, error)
- type SyncEnvelope
- type SyncEvent
- type SyncWatcher
- type TRMConfig
- type TailerManager
- type TailerStats
- type TokenUsage
- type ToolCall
- type ToolCallDelta
- type ToolDefinition
- type TrajectoryModel
- type TrustContext
- type TrustState
- type URIResolution
- type ValidationResult
Constants ¶
const (
	BlockMessage     = cogblock.BlockMessage
	BlockToolCall    = cogblock.BlockToolCall
	BlockToolResult  = cogblock.BlockToolResult
	BlockImport      = cogblock.BlockImport
	BlockAttention   = cogblock.BlockAttention
	BlockSystemEvent = cogblock.BlockSystemEvent
)
const (
	BlockNucleus   = "nucleus"   // Identity card
	BlockProject   = "project"   // CLAUDE.md content
	BlockKnowledge = "knowledge" // Foveated CogDocs
	BlockNode      = "node"      // Sibling service health
	BlockField     = "field"     // Attentional field top-N
	BlockEvents    = "events"    // Recent ledger events
	BlockFocus     = "focus"     // Current anchor/intent
)
BlockName constants identify the well-known context blocks.
const DefaultChunkSize = 2000
DefaultChunkSize is the target chunk size in characters (~500 tokens).
const DefaultFileWatcherPollInterval = time.Second
const DefaultOpenClawTailerScanInterval = time.Second
const DefaultSyncWatcherPollInterval = 5 * time.Second
Variables ¶
var (
	// Version is injected at build time via -ldflags (e.g. "v0.1.0").
	Version = "dev"
	// BuildTime is injected at build time via -ldflags.
	BuildTime = "unknown"
)
var DefaultStability = map[string]int{
	BlockNucleus:   95,
	BlockProject:   90,
	BlockKnowledge: 30,
	BlockNode:      70,
	BlockField:     40,
	BlockEvents:    20,
	BlockFocus:     10,
}
DefaultStability maps block names to stability hints (0-100).
var DefaultTiers = map[string]int{
	BlockNucleus:   0,
	BlockProject:   0,
	BlockKnowledge: 1,
	BlockNode:      2,
	BlockField:     2,
	BlockEvents:    2,
	BlockFocus:     2,
}
DefaultTiers maps block names to their default tier.
var TraceEmitter func(ev trace.CycleEvent)
TraceEmitter is the bus-publish hook for cycle-trace events. The main package sets this at startup; engine calls it best-effort (never blocks the metabolic cycle on a nil hook or a slow consumer).
var TraceIdentity = func() string { return "cog" }
TraceIdentity returns the identity name stamped as the `source` field on emitted events. Set by the main package at startup; defaults to "cog".
var URIRegistry *uriRegistryStub
URIRegistry is nil when the coguri library is not linked.
Functions ¶
func AppendEvent ¶
func AppendEvent(workspaceRoot, sessionID string, envelope *EventEnvelope) error
AppendEvent appends an event to the process ledger with hash chaining. The ledger lives at .cog/ledger/{sessionID}/events.jsonl. Safe for concurrent callers (serialized via appendMu).
Uses an in-memory cache for the last event per session, turning the previous O(N) file scan per append into O(1) after the first access.
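The chaining scheme can be sketched with a simplified event shape. The real EventEnvelope carries more fields; `chainEvent` here is a hypothetical stand-in:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainEvent is a simplified stand-in for the ledger's EventEnvelope:
// each event records the hash of its predecessor, so tampering with any
// entry invalidates every hash after it.
type chainEvent struct {
	Payload  string
	PrevHash string
	Hash     string
}

// appendChained links a new event to the previous one. The real AppendEvent
// caches the last event per session, making this lookup O(1) instead of a
// full JSONL rescan.
func appendChained(prev *chainEvent, payload string) chainEvent {
	prevHash := ""
	if prev != nil {
		prevHash = prev.Hash
	}
	sum := sha256.Sum256([]byte(prevHash + payload))
	return chainEvent{Payload: payload, PrevHash: prevHash, Hash: hex.EncodeToString(sum[:])}
}

func main() {
	e1 := appendChained(nil, "genesis")
	e2 := appendChained(&e1, "second")
	fmt.Println(e2.PrevHash == e1.Hash) // prints true — the chain links verify
}
```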
func ArchivedSessions ¶
func CanonicalizeEvent ¶
func CanonicalizeEvent(payload *EventPayload) ([]byte, error)
CanonicalizeEvent produces RFC 8785 canonical JSON for an event payload. Same logical content always produces the same bytes.
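A minimal illustration of the determinism property, leaning on the fact that Go's encoding/json sorts map keys. Full RFC 8785 also normalizes number and string serialization, which this sketch omits:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// canonicalize shows only the key-ordering half of canonicalization:
// Go's json.Marshal emits map keys in sorted order, so two maps with the
// same logical content marshal to the same bytes.
func canonicalize(payload map[string]any) ([]byte, error) {
	return json.Marshal(payload)
}

func main() {
	a, _ := canonicalize(map[string]any{"b": 1, "a": 2})
	b, _ := canonicalize(map[string]any{"a": 2, "b": 1})
	fmt.Println(string(a) == string(b)) // prints true
}
```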
func ChunkDocument ¶
ChunkDocument splits a document body into chunks for embedding. It detects conversation format and uses turn-aware chunking when appropriate, falling back to paragraph-based chunking otherwise.
The body should have frontmatter already stripped.
func CollectReferencedHashes ¶
CollectReferencedHashes returns all blob hashes referenced by pointer CogDocs.
func ContentTypeFromExt ¶
ContentTypeFromExt returns a MIME content type for a file extension.
func DefaultManifestPath ¶ added in v0.2.0
DefaultManifestPath returns the expected manifest location for a workspace.
func ExtractInlineRefs ¶
ExtractInlineRefs scans document content for embedded cog:// URIs and returns a deduplicated, sorted slice of every unique URI found.
func ExtractReferencedPaths ¶
ExtractReferencedPaths extracts file paths from an LLM response.
func FieldKeyToURI ¶ added in v0.2.0
FieldKeyToURI converts an absolute filesystem path (field key) back to a canonical cog:// URI. This is the "project outward" half of the holographic pointer — the internal key becomes a portable, context-free identifier.
Returns the path unchanged if it can't be mapped to a cog:// URI.
func GetHashAlgorithm ¶
GetHashAlgorithm returns the hash algorithm configured for the workspace. Defaults to "sha256" if no genesis event is found.
func GetHotFiles ¶
func GetHotFiles(repoPath, scope string, limit int, threshold float64, daysWindow int, cfg *SalienceConfig) ([]string, error)
GetHotFiles returns paths with salience above threshold.
func HashEvent ¶
HashEvent computes the hash of canonical bytes using the given algorithm. Supported: "sha256" (default), "sha512".
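A sketch of the documented dispatch, assuming hex-encoded output (the actual encoding is not stated here):

```go
package main

import (
	"crypto/sha256"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// hashEvent mirrors the documented contract: "sha256" (also the default for
// an empty algorithm string) or "sha512"; anything else is an error.
func hashEvent(canonical []byte, algorithm string) (string, error) {
	switch algorithm {
	case "", "sha256":
		sum := sha256.Sum256(canonical)
		return hex.EncodeToString(sum[:]), nil
	case "sha512":
		sum := sha512.Sum512(canonical)
		return hex.EncodeToString(sum[:]), nil
	default:
		return "", fmt.Errorf("unsupported hash algorithm %q", algorithm)
	}
}

func main() {
	h, _ := hashEvent([]byte(`{"a":1}`), "sha256")
	fmt.Println(len(h)) // prints 64 — hex-encoded sha256
}
```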
func MemoryRead ¶
MemoryRead returns the text contents of a memory file. path may be either an absolute path or a memory-relative path (e.g. "semantic/foo.md").
func MemorySearch ¶
MemorySearch runs `cog memory search <query>` and returns matching paths. Falls back to a simple filepath.Walk grep if the cog binary is not available.
func OllamaEmbed ¶
OllamaEmbed calls Ollama to embed a query string. Returns a 384-dim float32 vector. The endpoint and model are taken from config; defaults are localhost:11434 and nomic-embed-text.
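A hedged sketch of the call shape against Ollama's /api/embeddings endpoint. Request and response field names follow the public Ollama API; error handling is minimal:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// ollamaEmbed POSTs the query to Ollama's /api/embeddings endpoint and
// decodes the returned vector. Endpoint and model default to
// localhost:11434 and nomic-embed-text, as documented above.
func ollamaEmbed(endpoint, model, query string) ([]float32, error) {
	body, _ := json.Marshal(map[string]string{"model": model, "prompt": query})
	resp, err := http.Post(endpoint+"/api/embeddings", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Embedding []float32 `json:"embedding"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Embedding, nil
}

func main() {
	vec, err := ollamaEmbed("http://localhost:11434", "nomic-embed-text", "what is a light cone?")
	if err != nil {
		fmt.Println("ollama not reachable:", err)
		return
	}
	fmt.Println("dims:", len(vec))
}
```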
func PathExistsOnDisk ¶ added in v0.2.0
PathExistsOnDisk reports whether the resolved path actually exists.
func PathToURI ¶
PathToURI converts an absolute (or workspace-relative) filesystem path to a cog:// URI using the longest-matching prefix rule. Returns an error if no mapping covers the path.
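The longest-matching-prefix rule can be sketched as follows; the mount table shown is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// longestPrefixURI picks, among all mount prefixes that cover the path,
// the longest one, then rewrites the prefix to its cog:// scheme.
func longestPrefixURI(path string, mounts map[string]string) (string, error) {
	best := ""
	for prefix := range mounts {
		if strings.HasPrefix(path, prefix) && len(prefix) > len(best) {
			best = prefix
		}
	}
	if best == "" {
		return "", fmt.Errorf("no cog:// mapping covers %s", path)
	}
	return mounts[best] + strings.TrimPrefix(path, best), nil
}

func main() {
	mounts := map[string]string{
		"/ws/.cog/mem/":          "cog://mem/",
		"/ws/.cog/mem/semantic/": "cog://sem/", // hypothetical longer mount — it wins
	}
	uri, _ := longestPrefixURI("/ws/.cog/mem/semantic/foo.cog.md", mounts)
	fmt.Println(uri) // prints cog://sem/foo.cog.md
}
```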
func PrintSummary ¶
func PrintSummary(results []BenchmarkResult)
PrintSummary writes a tabular result summary to stdout.
func ResolveToFieldKey ¶ added in v0.2.0
ResolveToFieldKey normalizes any pointer form to the attentional field's canonical key (absolute filesystem path). Accepts:
- cog:// URIs: cog://mem/semantic/insights/foo.cog.md
- short cog: URIs: cog:mem/semantic/insights/foo.cog.md
- memory-relative: semantic/insights/foo.cog.md
- absolute paths: /Users/.../cog/.cog/mem/semantic/insights/foo.cog.md
This is the "resolve locally" half of the holographic pointer — any form collapses to the same key regardless of where in the system it originated.
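That collapse can be sketched as a pure function, assuming a single memory root and the cog://mem/ scheme shown above (both simplifications of the real resolver):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveToFieldKey reduces every accepted pointer form to the same
// absolute path under the memory root. memRoot is a hypothetical stand-in
// for the workspace's .cog/mem directory.
func resolveToFieldKey(memRoot, ref string) string {
	switch {
	case strings.HasPrefix(ref, "cog://mem/"):
		return filepath.Join(memRoot, strings.TrimPrefix(ref, "cog://mem/"))
	case strings.HasPrefix(ref, "cog:mem/"):
		return filepath.Join(memRoot, strings.TrimPrefix(ref, "cog:mem/"))
	case filepath.IsAbs(ref):
		return filepath.Clean(ref)
	default: // memory-relative, e.g. "semantic/insights/foo.cog.md"
		return filepath.Join(memRoot, ref)
	}
}

func main() {
	root := "/ws/.cog/mem"
	forms := []string{
		"cog://mem/semantic/insights/foo.cog.md",
		"cog:mem/semantic/insights/foo.cog.md",
		"semantic/insights/foo.cog.md",
		"/ws/.cog/mem/semantic/insights/foo.cog.md",
	}
	for _, f := range forms {
		fmt.Println(resolveToFieldKey(root, f)) // all four print the same key
	}
}
```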
func RunExperiment ¶
func RunExperiment(ctx context.Context, experimentPath, workspaceRoot string, process *Process, router Router) error
RunExperiment loads and executes an experiment document. workspaceRoot is used to resolve relative prompt file paths.
func RunInit ¶
RunInit scaffolds a CogOS workspace at the given root directory. It creates directories and writes default config files, skipping any that already exist.
func SaveResults ¶
func SaveResults(workspaceRoot string, results []BenchmarkResult, model, method string) error
SaveResults writes benchmark results as a CogDoc-format experiment log. The file is written under .cog/mem/episodic/experiments/.
func ShouldRedirectToBlob ¶
ShouldRedirectToBlob returns true if a file should be stored in the blob store instead of committed to git.
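The exact heuristic is not documented here; a plausible sketch combines a size cutoff with a binary-extension check. Both the threshold and the extension list are assumptions, chosen to match the content types the blob store is documented to hold (PDFs, audio, model weights):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// shouldRedirectToBlob is a hedged sketch of the documented policy, not
// the real implementation: large files or known-binary extensions go to
// the blob store instead of git.
func shouldRedirectToBlob(path string, sizeBytes int64) bool {
	const threshold = 1 << 20 // 1 MiB — assumed cutoff
	binary := map[string]bool{".pdf": true, ".wav": true, ".mp3": true, ".gguf": true, ".bin": true}
	ext := strings.ToLower(filepath.Ext(path))
	return sizeBytes > threshold || binary[ext]
}

func main() {
	fmt.Println(shouldRedirectToBlob("paper.pdf", 200_000)) // prints true: binary type
	fmt.Println(shouldRedirectToBlob("notes.md", 4_000))    // prints false: small text
	fmt.Println(shouldRedirectToBlob("dump.jsonl", 5<<20))  // prints true: over threshold
}
```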
Types ¶
type AnthropicProvider ¶
type AnthropicProvider struct {
// contains filtered or unexported fields
}
AnthropicProvider implements Provider against the Anthropic Messages API.
func NewAnthropicProvider ¶
func NewAnthropicProvider(name string, cfg ProviderConfig) *AnthropicProvider
NewAnthropicProvider creates an AnthropicProvider from a ProviderConfig.
func (*AnthropicProvider) Available ¶
func (p *AnthropicProvider) Available(_ context.Context) bool
Available reports whether an API key is configured. For cloud providers we avoid a network round-trip on every health check — the presence of a non-empty API key is the availability signal.
func (*AnthropicProvider) Capabilities ¶
func (p *AnthropicProvider) Capabilities() ProviderCapabilities
Capabilities returns what Anthropic supports.
func (*AnthropicProvider) Complete ¶
func (p *AnthropicProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*AnthropicProvider) Name ¶
func (p *AnthropicProvider) Name() string
Name returns the provider identifier.
func (*AnthropicProvider) Ping ¶
Ping probes the Anthropic API and returns measured round-trip latency. Uses GET /v1/models — lightweight, validates auth without running inference.
func (*AnthropicProvider) Stream ¶
func (p *AnthropicProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of incremental chunks. The channel closes when generation is complete or the context is cancelled.
type AssembleOption ¶
type AssembleOption func(*assembleOpts)
AssembleOption configures optional AssembleContext parameters.
func WithContext ¶
func WithContext(ctx context.Context) AssembleOption
WithContext sets the request context for TRM embedding calls.
func WithConversationID ¶
func WithConversationID(id string) AssembleOption
WithConversationID sets the conversation ID for light cone tracking.
func WithIrisSignal ¶
func WithIrisSignal(signal irisSignal) AssembleOption
WithIrisSignal sets the current context-window usage signal for pressure-aware token estimation.
func WithManifestMode ¶
func WithManifestMode(enabled bool) AssembleOption
WithManifestMode switches CogDoc injection from full-body content to summary manifests with on-demand retrieval.
type AttentionProbe ¶
type AttentionProbe struct {
QProj [][]float32 // [d_head][d_model]
KProj [][]float32 // [d_head][d_model]
VProj [][]float32 // [d_head][d_model]
OutProj [][]float32 // [d_model][d_head]
DHead int
}
AttentionProbe lets trajectory context attend over the candidate set.
type AttentionalField ¶
type AttentionalField struct {
// contains filtered or unexported fields
}
AttentionalField holds the current salience map for the memory corpus. It is safe for concurrent reads (serve goroutine) and periodic writes (consolidation goroutine).
func NewAttentionalField ¶
func NewAttentionalField(cfg *Config) *AttentionalField
NewAttentionalField constructs an empty field. Call Update() to populate it.
func (*AttentionalField) AllScores ¶
func (f *AttentionalField) AllScores() map[string]float64
AllScores returns a copy of the full path→score map. Safe for external iteration (callers get a snapshot, not a live map).
func (*AttentionalField) Boost ¶
func (f *AttentionalField) Boost(path string, delta float64)
Boost adds delta to the score for path. Used by attention signals to apply a transient recency boost without a full field recomputation. The boost is overwritten on the next Update() call.
func (*AttentionalField) Fovea ¶
func (f *AttentionalField) Fovea(n int) []FileScore
Fovea returns the top n files by salience score (the "focal" context). If n <= 0, all files are returned.
func (*AttentionalField) LastUpdated ¶
func (f *AttentionalField) LastUpdated() time.Time
LastUpdated returns when the field was last recomputed.
func (*AttentionalField) Len ¶
func (f *AttentionalField) Len() int
Len returns the number of files currently in the field.
func (*AttentionalField) Score ¶
func (f *AttentionalField) Score(path string) float64
Score returns the current salience score for a single file. Returns 0.0 if the file is not in the field.
func (*AttentionalField) Update ¶
func (f *AttentionalField) Update() error
Update recomputes salience for memory files.
Three modes, selected automatically:
- HEAD unchanged + scores exist → no-op (instant)
- Previous HEAD known + new HEAD → delta scan (only new commits)
- No previous state → full scan (startup)
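The mode selection can be sketched as a pure function of the last and current git HEAD (a simplification — the real Update also recomputes scores, which this sketch omits):

```go
package main

import "fmt"

// updateMode dispatches on whether the workspace's git HEAD moved since
// the field last scored it.
func updateMode(lastHead, currentHead string, haveScores bool) string {
	switch {
	case haveScores && lastHead == currentHead:
		return "no-op" // nothing changed since last recompute
	case lastHead != "":
		return "delta-scan" // score only commits since lastHead
	default:
		return "full-scan" // cold start: walk the whole corpus
	}
}

func main() {
	fmt.Println(updateMode("abc123", "abc123", true)) // prints no-op
	fmt.Println(updateMode("abc123", "def456", true)) // prints delta-scan
	fmt.Println(updateMode("", "def456", false))      // prints full-scan
}
```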
type AttentionalZone ¶
type AttentionalZone string
AttentionalZone maps to the v3 four-layer attentional field.
const (
	ZoneNucleus    AttentionalZone = "nucleus"    // Identity, never drops below threshold
	ZoneMomentum   AttentionalZone = "momentum"   // Recent trajectory
	ZoneFoveal     AttentionalZone = "foveal"     // Current focus
	ZoneParafoveal AttentionalZone = "parafoveal" // Background, on demand
)
type BackgroundTaskOpts ¶
type BackgroundTaskOpts struct {
Prompt string
Model string
Effort string
MCPConfig string
AllowedTools []string
Source string // "discord", "signal", "http", etc.
CallbackChannel string // channel to report results to
Identity string // NodeID of the requestor
MaxBudgetUSD float64
Timeout time.Duration
WorkDir string // working directory for the process
SystemPrompt string
}
BackgroundTaskOpts configures a fire-and-forget Claude Code task.
type BenchmarkPrompt ¶
type BenchmarkPrompt struct {
Prompt string `json:"prompt"`
ExpectedDocs []string `json:"expected_docs"` // partial path/ID fragments to match
ExpectedKeywords []string `json:"expected_keywords"` // words expected in response (future)
}
BenchmarkPrompt is a single test case.
func LoadPrompts ¶
func LoadPrompts(path string) ([]BenchmarkPrompt, error)
LoadPrompts reads benchmark prompts from a JSON file.
type BenchmarkResult ¶
type BenchmarkResult struct {
Prompt string
AssemblyMs int64
TotalTokens int
InjectedDocs []string
ExpectedDocs []string
Recall float64 // |injected ∩ expected| / |expected|
Precision float64 // |injected ∩ expected| / |injected|
Response string // for manual review
ResponseMs int64
}
BenchmarkResult is the measured output for a single prompt.
type BenchmarkSuite ¶
type BenchmarkSuite struct {
// contains filtered or unexported fields
}
BenchmarkSuite runs a set of prompts through context assembly and optionally inference, collecting quality metrics.
func NewBenchmarkSuite ¶
func NewBenchmarkSuite(process *Process, router Router, model string, budget int) *BenchmarkSuite
NewBenchmarkSuite constructs a suite bound to the given process and router. Pass nil for router to skip inference (assembly metrics only).
func (*BenchmarkSuite) Run ¶
func (b *BenchmarkSuite) Run(ctx context.Context, prompts []BenchmarkPrompt) []BenchmarkResult
Run executes all prompts sequentially and returns results.
type BlobEntry ¶
type BlobEntry struct {
Hash string `json:"hash"`
Size int64 `json:"size"`
ContentType string `json:"content_type"`
Refs []string `json:"refs,omitempty"` // CogDoc URIs that reference this blob
SyncedTo []string `json:"synced_to,omitempty"`
StoredAt string `json:"stored_at"`
}
BlobEntry is the metadata for a single stored blob.
type BlobPointer ¶
type BlobPointer struct {
Hash string `yaml:"hash" json:"hash"`
Size int64 `yaml:"size" json:"size"`
ContentType string `yaml:"content_type" json:"content_type"`
OriginalPath string `yaml:"original_path" json:"original_path"`
}
BlobPointer is the CogDoc frontmatter for a blob pointer file.
func FindBlobPointers ¶
func FindBlobPointers(workspaceRoot string) ([]BlobPointer, error)
FindBlobPointers walks the workspace and returns all blob pointer CogDocs.
type BlobStore ¶
type BlobStore struct {
// contains filtered or unexported fields
}
BlobStore manages content-addressed blob storage.
func NewBlobStore ¶
NewBlobStore creates a blob store rooted at workspaceRoot/.cog/blobs/.
func (*BlobStore) GC ¶
GC removes blobs not referenced by any CogDoc pointer in the workspace. Returns the number of blobs removed and total bytes freed.
func (*BlobStore) PrintBlobList ¶
PrintBlobList prints a formatted table of stored blobs.
func (*BlobStore) Store ¶
Store writes content to the blob store and returns the SHA-256 hash. If the blob already exists (same hash), this is a no-op.
type BlockArtifact ¶
type BlockArtifact = cogblock.BlockArtifact
type BlockProvenance ¶
type BlockProvenance = cogblock.BlockProvenance
type BuildRouterOption ¶
type BuildRouterOption func(*buildRouterOpts)
BuildRouterOption configures BuildRouter.
func WithProcessManager ¶
func WithProcessManager(pm *ProcessManager) BuildRouterOption
WithProcessManager provides a ProcessManager for providers that spawn subprocesses.
type Capability ¶
type Capability string
Capability is a single feature a provider may support.
const (
	CapStreaming          Capability = "streaming"
	CapToolUse            Capability = "tool_use"
	CapToolCallValidation Capability = "tool_call_validation"
	CapVision             Capability = "vision"
	CapLongContext        Capability = "long_context"
	CapJSON               Capability = "json_output"
	CapCaching            Capability = "caching"
	CapBatch              Capability = "batch"
)
type ChunkMeta ¶
type ChunkMeta struct {
DocID string `json:"doc_id"`
Path string `json:"path"`
Title string `json:"title"`
SectionTitle string `json:"section_title"`
ChunkIdx int `json:"chunk_idx"`
ChunkID string `json:"chunk_id"`
TextPreview string `json:"text_preview"`
}
ChunkMeta holds metadata for a single embedded chunk.
type ClaudeCodeProvider ¶
type ClaudeCodeProvider struct {
// contains filtered or unexported fields
}
ClaudeCodeProvider implements Provider by spawning claude CLI processes.
func NewClaudeCodeProvider ¶
func NewClaudeCodeProvider(name string, cfg ProviderConfig, procMgr *ProcessManager) *ClaudeCodeProvider
NewClaudeCodeProvider creates a ClaudeCodeProvider from a ProviderConfig.
func (*ClaudeCodeProvider) Available ¶
func (p *ClaudeCodeProvider) Available(ctx context.Context) bool
Available checks that the claude binary exists and is authenticated.
func (*ClaudeCodeProvider) Capabilities ¶
func (p *ClaudeCodeProvider) Capabilities() ProviderCapabilities
Capabilities returns what this provider supports.
func (*ClaudeCodeProvider) Complete ¶
func (p *ClaudeCodeProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*ClaudeCodeProvider) Name ¶
func (p *ClaudeCodeProvider) Name() string
Name returns the provider identifier.
func (*ClaudeCodeProvider) Ping ¶
Ping checks the binary is available and returns the startup overhead.
func (*ClaudeCodeProvider) SpawnBackground ¶
func (p *ClaudeCodeProvider) SpawnBackground(opts BackgroundTaskOpts) (string, error)
SpawnBackground starts a Claude Code process that outlives the HTTP request. Results are delivered via the process manager's callback mechanism.
func (*ClaudeCodeProvider) Stream ¶
func (p *ClaudeCodeProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a claude process and returns incremental chunks. The returned channel closes when the process exits or ctx is cancelled. On ctx cancellation (client disconnect), the process is killed.
type ClaudeCodeTailer ¶
type ClaudeCodeTailer struct {
Watcher *FileWatcher
}
ClaudeCodeTailer tails Claude Code JSONL logs and emits normalized CogBlocks.
func (*ClaudeCodeTailer) Name ¶
func (t *ClaudeCodeTailer) Name() string
type CodexProvider ¶
type CodexProvider struct {
// contains filtered or unexported fields
}
CodexProvider implements Provider by spawning codex exec processes.
func NewCodexProvider ¶
func NewCodexProvider(name string, cfg ProviderConfig) *CodexProvider
NewCodexProvider creates a CodexProvider from a ProviderConfig.
func (*CodexProvider) Capabilities ¶
func (p *CodexProvider) Capabilities() ProviderCapabilities
func (*CodexProvider) Complete ¶
func (p *CodexProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*CodexProvider) Name ¶
func (p *CodexProvider) Name() string
func (*CodexProvider) Stream ¶
func (p *CodexProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a codex exec process and returns incremental chunks.
type CogBlock ¶
type CogBlock struct {
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
SessionID string `json:"session_id,omitempty"`
ThreadID string `json:"thread_id,omitempty"`
// Source identification.
SourceChannel string `json:"source_channel"`
SourceTransport string `json:"source_transport"`
SourceIdentity string `json:"source_identity,omitempty"`
// Target.
TargetIdentity string `json:"target_identity,omitempty"`
WorkspaceID string `json:"workspace_id,omitempty"`
// Content.
Kind CogBlockKind `json:"kind"`
RawPayload json.RawMessage `json:"raw_payload,omitempty"`
Messages []ProviderMessage `json:"messages,omitempty"`
SystemPrompt string `json:"system_prompt,omitempty"`
// Provenance.
Provenance BlockProvenance `json:"provenance"`
TrustContext TrustContext `json:"trust_context"`
// Ledger linkage.
LedgerRef string `json:"ledger_ref,omitempty"`
// Artifacts produced from processing this block.
Artifacts []BlockArtifact `json:"artifacts,omitempty"`
}
CogBlock is the engine-local CogBlock that includes typed Messages. The canonical type definitions (CogBlockKind, BlockProvenance, TrustContext, BlockArtifact) live in pkg/cogblock and are re-exported below.
This struct mirrors cogblock.CogBlock but replaces the raw Messages field with the engine's typed []ProviderMessage for internal processing.
func NormalizeAnthropicRequest ¶
NormalizeAnthropicRequest converts an Anthropic Messages API request into a CogBlock.
func NormalizeGateEvent ¶
NormalizeGateEvent converts an internal GateEvent into a CogBlock.
func NormalizeMCPRequest ¶
func NormalizeMCPRequest(toolName string, input json.RawMessage) *CogBlock
NormalizeMCPRequest converts an MCP tool invocation that triggers cognition into a CogBlock.
func NormalizeOpenAIRequest ¶
NormalizeOpenAIRequest converts an OpenAI-compatible chat request into a CogBlock.
type CogBlockKind ¶
type CogBlockKind = cogblock.CogBlockKind
Re-export shared types from pkg/cogblock. These are type aliases so existing code compiles without changes.
type CogDocIndex ¶
type CogDocIndex struct {
// ByURI maps canonical cog:// URI → document.
ByURI map[string]*IndexedCogdoc
// ByType maps type string → all documents of that type.
ByType map[string][]*IndexedCogdoc
// ByTag maps tag string → all documents carrying that tag.
ByTag map[string][]*IndexedCogdoc
// ByStatus maps status string → all documents with that status.
ByStatus map[string][]*IndexedCogdoc
// RefGraph maps source URI → its explicit DocRef targets.
RefGraph map[string][]DocRef
// InverseRefs maps target URI → list of source URIs that reference it.
InverseRefs map[string][]string
}
CogDocIndex is the complete in-memory catalogue of the memory corpus. All fields are populated by BuildIndex; nil maps indicate an empty corpus.
func BuildIndex ¶
func BuildIndex(workspaceRoot string) (*CogDocIndex, error)
BuildIndex walks .cog/mem/ under workspaceRoot, parses CogDoc frontmatter, and returns a fully populated CogDocIndex.
Files with unparseable frontmatter are included with empty metadata (best-effort). If .cog/mem/ does not exist, an empty index is returned without error.
type CoherenceReport ¶
type CoherenceReport struct {
Pass bool `json:"pass"`
Results []ValidationResult `json:"results"`
Timestamp string `json:"timestamp"`
}
CoherenceReport aggregates all validation results from a single pass.
func RunCoherence ¶
func RunCoherence(cfg *Config, nucleus *Nucleus, idxArgs ...*CogDocIndex) *CoherenceReport
RunCoherence executes the 4-layer validation stack and returns a report. An optional *CogDocIndex enables Layer 4 dead-reference detection; without it Layer 4 passes trivially (maintaining backward compatibility).
type CompletionRequest ¶
type CompletionRequest struct {
// SystemPrompt carries nucleus content: identity, role, self-model.
SystemPrompt string `json:"system_prompt"`
// Messages is the conversation history in the current foveal window.
Messages []ProviderMessage `json:"messages"`
// Context is the assembled foveal content from the attentional field.
Context []ContextItem `json:"context,omitempty"`
// MaxTokens is the generation limit.
MaxTokens int `json:"max_tokens,omitempty"`
// Temperature controls randomness [0.0, 1.0].
Temperature *float64 `json:"temperature,omitempty"`
// TopP is the nucleus sampling parameter.
TopP *float64 `json:"top_p,omitempty"`
// Stop sequences that terminate generation.
Stop []string `json:"stop,omitempty"`
// Tools defines MCP tool definitions the model can invoke.
Tools []ToolDefinition `json:"tools,omitempty"`
// ToolChoice constrains tool use: "auto", "none", "required", or a name.
ToolChoice string `json:"tool_choice,omitempty"`
// ModelOverride, when non-empty, instructs the provider to use this model
// instead of its configured default. Set by --model flag or request body.
ModelOverride string `json:"model_override,omitempty"`
// InteractionID links the request back to the canonical ingress CogBlock.
InteractionID string `json:"interaction_id,omitempty"`
// Metadata carries routing/ledger information not sent to the model.
Metadata RequestMetadata `json:"metadata"`
}
CompletionRequest carries the assembled context package to the model. This is the output of the attentional field, not raw chat input.
type CompletionResponse ¶
type CompletionResponse struct {
Content string `json:"content"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
StopReason string `json:"stop_reason"` // "end_turn" | "max_tokens" | "tool_use"
Usage TokenUsage `json:"usage"`
ProviderMeta ProviderMeta `json:"provider_meta"`
}
CompletionResponse is what a Provider returns from Complete().
type Config ¶
type Config struct {
// WorkspaceRoot is the absolute path to the cog-workspace root.
WorkspaceRoot string
// CogDir is WorkspaceRoot/.cog
CogDir string
// Port the HTTP API listens on. Default: 6931 (ln(2) × 10⁴).
Port int
// ConsolidationInterval is how often the consolidation loop fires (seconds).
ConsolidationInterval int
// HeartbeatInterval is the dormant-state heartbeat cadence (seconds).
HeartbeatInterval int
// SalienceDaysWindow is the git history window for salience scoring.
SalienceDaysWindow int
// OutputReserve is tokens reserved for model generation (subtracted from budget).
OutputReserve int
// TRMWeightsPath is the path to the TRM binary weights file.
// If empty, TRM is disabled and keyword+salience scoring is used.
TRMWeightsPath string
// TRMEmbeddingsPath is the path to the TRM embedding index binary.
TRMEmbeddingsPath string
// TRMChunksPath is the path to the TRM chunk metadata JSON.
TRMChunksPath string
// OllamaEmbedEndpoint is the Ollama /api/embeddings endpoint URL.
// Default: http://localhost:11434
OllamaEmbedEndpoint string
// OllamaEmbedModel is the embedding model name for Ollama.
// Default: nomic-embed-text
OllamaEmbedModel string
// ToolCallValidationEnabled gates runtime validation for model-emitted tool calls.
// Providers that advertise CapToolUse are trusted and skip this guardrail.
ToolCallValidationEnabled bool
// DigestPaths maps stream tailer adapter names to JSONL file/directory paths.
// Empty map means external digestion is disabled.
DigestPaths map[string]string
LocalModel string
// contains filtered or unexported fields
}
Config holds all runtime configuration for the v3 kernel.
type ConsolidationAction ¶
func (ConsolidationAction) Run ¶
func (a ConsolidationAction) Run() (int, error)
type ConstellationBridge ¶
type ConstellationBridge interface {
EmitHeartbeat(payload KernelHeartbeatPayload) (HeartbeatReceipt, error)
TrustSnapshot() ConstellationTrustSnapshot
Start(ctx context.Context) error
Stop()
}
ConstellationBridge defines the kernel-side integration point for constellation.
type ConstellationTrustSnapshot ¶
type ConstellationTrustSnapshot struct {
SelfCoherencePass bool `json:"self_coherence_pass"`
SelfTrustScore float64 `json:"self_trust_score"`
PeerTrustMean float64 `json:"peer_trust_mean"`
PeerCount int `json:"peer_count"`
TrustedPeerCount int `json:"trusted_peer_count"`
ConstellationHealthy bool `json:"constellation_healthy"`
Timestamp time.Time `json:"timestamp"`
}
ConstellationTrustSnapshot summarizes current local and peer trust state.
type ConsumerEntry ¶ added in v0.2.0
type ConsumerEntry struct {
Path string `yaml:"path" json:"path"`
Type string `yaml:"type" json:"type"` // json, sed, plist
JSONPath string `yaml:"jsonpath,omitempty" json:"jsonpath,omitempty"`
Template string `yaml:"template,omitempty" json:"template,omitempty"`
Match string `yaml:"match,omitempty" json:"match,omitempty"`
Replace string `yaml:"replace,omitempty" json:"replace,omitempty"`
Key string `yaml:"key,omitempty" json:"key,omitempty"`
}
ConsumerEntry declares a file that references this service's port.
type ContainerConfig ¶
type ContainerRuntime ¶
type ContainerRuntime interface {
Start(image string, config ContainerConfig) (containerID string, err error)
Stop(containerID string) error
Status(containerID string) (ContainerStatus, error)
Logs(containerID string, follow bool) (io.ReadCloser, error)
Exec(containerID string, command []string) ([]byte, error)
Pull(image string) error
}
type ContainerStatus ¶
type ContentPart ¶
type ContentPart struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ImageURL string `json:"image_url,omitempty"`
}
ContentPart is a structured content element preserving multi-modal data (text and images) that would be lost by the text-only Content field.
type ContextBlock ¶ added in v0.2.0
type ContextBlock struct {
Name string `json:"name"` // e.g. "nucleus", "project", "knowledge", "node", "field", "events"
Tier int `json:"tier"` // 0=fixed, 1=priority, 2=flexible, 3=expendable
Stability int `json:"stability"` // 0-100: higher = less likely to change between turns (KV cache hint)
Content string `json:"content"` // Rendered markdown/text
Tokens int `json:"tokens"` // Estimated token count
Hash string `json:"hash"` // Content hash for change detection
}
ContextBlock is a single named section of the context frame.
func NewBlock ¶ added in v0.2.0
func NewBlock(name, content string) ContextBlock
NewBlock creates a ContextBlock with defaults from the tier/stability maps.
type ContextFrame ¶ added in v0.2.0
type ContextFrame struct {
Blocks []ContextBlock `json:"blocks"`
Budget int `json:"budget"`
UsedTokens int `json:"used_tokens"`
Anchor string `json:"anchor,omitempty"`
Goal string `json:"goal,omitempty"`
}
ContextFrame is the structured output of the foveated context rendering pipeline. Each block has a name, tier (priority), stability (KV cache hint), and content.
func (*ContextFrame) FitBudget ¶ added in v0.2.0
func (f *ContextFrame) FitBudget(budget int)
FitBudget evicts the lowest-priority blocks (highest tier numbers) until total tokens <= budget. Tier 0 blocks are never removed. Among blocks of the same tier, the least stable block is evicted first.
func (*ContextFrame) Render ¶ added in v0.2.0
func (f *ContextFrame) Render() string
Render serializes the frame as HTML comment blocks for hook injection. Blocks are sorted by stability descending (most stable first — KV cache friendly). Each block is emitted as:
<!-- block:{tier}:{name} hash:{hash} tokens:{tokens} stability:{stability} -->
{content}
---
type ContextItem ¶
type ContextItem struct {
ID string `json:"id"` // cog:// URI or memory address
Zone AttentionalZone `json:"zone"`
Salience float64 `json:"salience"`
Content string `json:"content"`
TokenEstimate int `json:"token_estimate,omitempty"`
}
ContextItem is a piece of foveated context assembled by the attentional field.
type ContextPackage ¶
type ContextPackage struct {
// NucleusText is the identity card content — always present (Zone 0).
NucleusText string
// ClientSystem is the client's system prompt if provided (Zone 1).
ClientSystem string
// FovealDocs are the CogDocs selected for injection (Zone 1).
FovealDocs []FovealDoc
// Conversation is the scored/filtered conversation history (Zone 2).
Conversation []ScoredMessage
// CurrentMessage is the latest user message — always present (Zone 3).
CurrentMessage *ProviderMessage
// TotalTokens is the approximate token count of the assembled context.
TotalTokens int
// OutputReserve is tokens reserved for generation.
OutputReserve int
// InjectedPaths is the list of injected absolute file paths (for logging).
InjectedPaths []string
}
ContextPackage is the assembled context for a single chat request.
func (*ContextPackage) FormatForProvider ¶
func (pkg *ContextPackage) FormatForProvider() (string, []ProviderMessage)
FormatForProvider renders a ContextPackage as (systemPrompt, messages) for the provider.
The system prompt is stability-ordered for KV cache optimization: nucleus → client system prompt → CogDocs (by salience descending).
Messages are in chronological order: conversation history → current message.
type Conv1D ¶
type Conv1D struct {
Weight [][][]float32 // [channels][1][kernel_size]
Bias []float32 // [channels]
Channels int
Kernel int
}
Conv1D implements depthwise 1D convolution with kernel size K. For single-step inference, we store the last (K-1) inputs as state.
type DaemonHealth ¶
type DaemonState ¶
type DebugBudget ¶
type DebugClientInfo ¶
type DebugContextView ¶
type DebugContextView struct {
Zones []DebugZone `json:"zones"`
Budget DebugBudget `json:"budget"`
}
DebugContextView shows the current context window as stability-ordered zones.
type DebugEngineInfo ¶
type DebugEngineInfo struct {
NucleusTokens int `json:"nucleus_tokens"`
ClientSystemTokens int `json:"client_system_tokens"`
CogDocsScored int `json:"cogdocs_scored"`
CogDocsInjected int `json:"cogdocs_injected"`
CogDocsInjectedPaths []string `json:"cogdocs_injected_paths"`
ConversationTurnsIn int `json:"conversation_turns_in"`
ConversationTurnsKept int `json:"conversation_turns_kept"`
CurrentMessageTokens int `json:"current_message_tokens"`
TotalTokens int `json:"total_tokens"`
Budget int `json:"budget"`
OutputReserve int `json:"output_reserve"`
FlexBudgetUsed int `json:"flex_budget_used"`
}
type DebugProviderInfo ¶
type DebugSnapshot ¶
type DebugSnapshot struct {
Timestamp time.Time `json:"timestamp"`
Client DebugClientInfo `json:"client"`
Engine DebugEngineInfo `json:"engine"`
Provider DebugProviderInfo `json:"provider"`
Context DebugContextView `json:"context"`
}
DebugSnapshot captures the full pipeline state of a single chat request.
type DebugZone ¶
type DebugZone struct {
Zone string `json:"zone"`
Tokens int `json:"tokens"`
ContentPreview string `json:"content_preview,omitempty"`
Items []DebugZoneItem `json:"items,omitempty"`
}
type DebugZoneItem ¶
type DebugZoneItem struct {
ID string `json:"id,omitempty"`
Title string `json:"title,omitempty"`
Role string `json:"role,omitempty"`
Tokens int `json:"tokens"`
Salience float64 `json:"salience,omitempty"`
Recency float64 `json:"recency,omitempty"`
Relevance float64 `json:"relevance,omitempty"`
Reason string `json:"reason,omitempty"`
Preview string `json:"preview"`
}
type Diagnostic ¶
type Diagnostic struct {
Rule string `json:"rule"`
Expected string `json:"expected"`
Actual string `json:"actual"`
Suggestion string `json:"suggestion"`
Severity string `json:"severity"` // "error", "warning", "info"
}
Diagnostic carries the details of a validation failure.
type DocRef ¶
type DocRef struct {
// URI is the target cog:// URI.
URI string `yaml:"uri"`
// Rel is the relationship label (e.g. "related", "supersedes", "depends-on").
Rel string `yaml:"rel"`
}
DocRef is an explicit typed reference declared in a CogDoc's frontmatter.
type EmbeddingIndex ¶
EmbeddingIndex holds the full embedding matrix and chunk metadata.
func LoadEmbeddingIndex ¶
func LoadEmbeddingIndex(embPath, chunksPath string) (*EmbeddingIndex, error)
LoadEmbeddingIndex loads the binary embedding file and chunk metadata JSON.
func (*EmbeddingIndex) CosineTopK ¶
func (idx *EmbeddingIndex) CosineTopK(query []float32, k int) []IndexResult
CosineTopK returns the top-K chunks by cosine similarity to the query. The query should already be L2-normalized (as are the stored embeddings).
func (*EmbeddingIndex) CosineTopKIndices ¶
func (idx *EmbeddingIndex) CosineTopKIndices(query []float32, k int) ([]int, [][]float32)
CosineTopKIndices returns the indices and embeddings of the top-K chunks. Useful for pre-filtering before TRM scoring.
func (*EmbeddingIndex) Size ¶
func (idx *EmbeddingIndex) Size() int
Size returns the number of chunks in the index.
type EventEnvelope ¶
type EventEnvelope struct {
HashedPayload EventPayload `json:"hashed_payload"`
Metadata EventMetadata `json:"metadata,omitempty"`
}
EventEnvelope is the canonical on-disk event shape.
func GetLastEvent ¶
func GetLastEvent(workspaceRoot, sessionID string) (*EventEnvelope, error)
GetLastEvent returns the last event in a session ledger, or nil if empty.
func GetLastGlobalEvent ¶
func GetLastGlobalEvent(workspaceRoot, currentSessionID string) (*EventEnvelope, error)
GetLastGlobalEvent returns the last event from the most-recently-modified session in .cog/ledger/ that is NOT currentSessionID. Used on process startup to chain the new session's genesis event to the previous session's final event, maintaining a continuous cross-session ledger. Returns nil (without error) if there is no prior session.
type EventMetadata ¶
type EventMetadata struct {
Hash string `json:"hash,omitempty"`
Seq int64 `json:"seq,omitempty"`
Source string `json:"source,omitempty"`
}
EventMetadata is NOT included in the hash (for extensibility).
type EventPayload ¶
type EventPayload struct {
Type string `json:"type"`
Timestamp string `json:"timestamp"`
SessionID string `json:"session_id"`
PriorHash string `json:"prior_hash,omitempty"`
Data map[string]interface{} `json:"data,omitempty"`
}
EventPayload is the content that gets canonicalized and hashed.
type ExperimentConfig ¶
type ExperimentConfig struct {
Type string `yaml:"type"`
Title string `yaml:"title"`
Created string `yaml:"created"`
Run struct {
PromptsFile string `yaml:"prompts_file"` // path to benchmark_prompts.json
Model string `yaml:"model"` // e.g. "qwen3.5:9b"
Budget int `yaml:"budget"` // token budget (0 = default 4096)
Method string `yaml:"method"` // e.g. "keyword-match"
RegressionThreshold float64 `yaml:"regression_threshold"` // recall drop that triggers flag (default 0.1)
BaselineRun string `yaml:"baseline_run"` // path to previous result CogDoc for comparison
} `yaml:"run"`
}
ExperimentConfig is the YAML frontmatter of an experiment CogDoc.
type ExperimentDelta ¶
ExperimentDelta is the change in aggregate metrics vs a baseline.
type FileScore ¶
FileScore pairs a file path with its total salience score.
func RankFilesBySalience ¶
func RankFilesBySalience(repoPath, scope string, limit, daysWindow int, cfg *SalienceConfig) ([]FileScore, error)
RankFilesBySalience walks scope and returns all .md/.cog.md files sorted by score.
Uses a single-pass commit walk: iterates the git log once and records which in-scope files each commit touched. This is O(commits × files_per_commit) instead of the old O(files × commits) approach that ran a filtered log per file.
type FileWatcher ¶
FileWatcher polls a file for newly appended newline-delimited content.
func NewFileWatcher ¶
func NewFileWatcher(pollInterval time.Duration) *FileWatcher
NewFileWatcher creates a polling file watcher.
type FovealDoc ¶
type FovealDoc struct {
URI string
Path string
Title string
Content string
Summary string
SchemaIssues []string
Salience float64
Tokens int
Reason string // "high-salience", "query-match", or "both"
}
FovealDoc is a single CogDoc selected for context injection.
type Gate ¶
type Gate struct {
// contains filtered or unexported fields
}
Gate routes events into the attentional field.
func NewGate ¶
func NewGate(field *AttentionalField, cfg *Config) *Gate
NewGate constructs a Gate backed by the given attentional field.
func (*Gate) Process ¶
func (g *Gate) Process(evt *GateEvent) *GateResult
Process routes an event through the gate and returns a routing decision.
type GateEvent ¶
type GateEvent struct {
// Type is the event category (e.g. "user.message", "tool.call", "heartbeat").
Type string
// Content is the raw content of the event (e.g. user message text).
Content string
// Timestamp records when the event arrived.
Timestamp time.Time
// SessionID is the originating session (empty for internal events).
SessionID string
// Data holds type-specific structured data.
Data map[string]interface{}
}
GateEvent is an input to the attentional gate.
func NewGateEventFromBlock ¶
NewGateEventFromBlock builds a GateEvent from an ingress CogBlock while keeping GateEvent as the active process-routing primitive.
type GateResult ¶
type GateResult struct {
// Elevated is the set of memory files to bring into the fovea for this event.
Elevated []FileScore
// StateTransition is the suggested next process state (empty = no change).
StateTransition ProcessState
// Accepted records whether the event was accepted into the fovea.
Accepted bool
}
GateResult is the gate's routing decision for an event.
type HealthChecker ¶
type HealthChecker func(endpoint string, timeout time.Duration) (*DaemonHealth, error)
type HeartbeatReceipt ¶
type HeartbeatReceipt struct {
Hash string `json:"hash,omitempty"`
Timestamp time.Time `json:"timestamp,omitempty"`
PeersSent int `json:"peers_sent"`
}
HeartbeatReceipt summarizes the result of a bridge heartbeat emission.
type IndexResult ¶
IndexResult is a single search result from the embedding index.
type IndexedCogdoc ¶
type IndexedCogdoc struct {
// URI is the canonical cog:// address of this document.
URI string
// Path is the absolute filesystem path.
Path string
// ID is the value of the `id:` frontmatter field (may be empty).
ID string
// Title is the value of the `title:` frontmatter field.
Title string
// Type is the value of the `type:` frontmatter field (e.g. "insight").
Type string
// Tags is the value of the `tags:` frontmatter field.
Tags []string
// Status is the value of the `status:` frontmatter field (e.g. "active").
Status string
// Created is the value of the `created:` frontmatter field (string, any format).
Created string
// Refs are the explicit `refs:` entries in the frontmatter.
Refs []DocRef
// InlineRefs are cog:// URIs found in the document body (extracted by regex).
InlineRefs []string
}
IndexedCogdoc is a lightweight representation of a single CogDoc file, containing only the metadata needed for index lookups and coherence checks.
type InferenceEvent ¶
type InferenceEvent struct {
RequestID string `json:"request_id"`
Timestamp time.Time `json:"timestamp"`
Provider string `json:"provider"`
Model string `json:"model"`
ProcessState string `json:"process_state"`
Usage TokenUsage `json:"usage"`
CostUSD float64 `json:"cost_usd"`
Latency time.Duration `json:"latency"`
RoutingDecision *RoutingDecision `json:"routing_decision,omitempty"`
Escalated bool `json:"escalated"`
Source string `json:"source"`
Success bool `json:"success"`
Error string `json:"error,omitempty"`
}
InferenceEvent is the ledger event recorded for every inference request.
type KernelHeartbeatPayload ¶
type KernelHeartbeatPayload struct {
ProcessState string `json:"process_state"`
FieldSize int `json:"field_size"`
CoherenceFingerprint string `json:"coherence_fingerprint"`
NucleusFingerprint string `json:"nucleus_fingerprint"`
LedgerHead string `json:"ledger_head,omitempty"`
Timestamp time.Time `json:"timestamp"`
}
KernelHeartbeatPayload captures the kernel state exported to the constellation bridge.
type LightCone ¶
type LightCone struct {
States []MambaState // one per layer
}
LightCone holds per-layer SSM states — the compressed observer trajectory.
type LightConeInfo ¶
type LightConeInfo struct {
ConversationID string `json:"conversation_id"`
NLayers int `json:"n_layers"`
LayerNorms []float64 `json:"layer_norms"`
CompressedNorm float64 `json:"compressed_norm"`
UpdatedAt time.Time `json:"updated_at"`
}
LightConeInfo is a summary of a stored light cone for the /v1/lightcone endpoint.
type LightConeManager ¶
type LightConeManager struct {
// contains filtered or unexported fields
}
LightConeManager provides thread-safe per-conversation light cone storage.
func NewLightConeManager ¶
func NewLightConeManager(trm *MambaTRM) *LightConeManager
NewLightConeManager creates a new manager. The trm parameter is used for computing light cone norms (can be nil if norms are not needed).
func (*LightConeManager) Count ¶
func (m *LightConeManager) Count() int
Count returns the number of active light cones.
func (*LightConeManager) Delete ¶
func (m *LightConeManager) Delete(convID string)
Delete removes the light cone for a conversation.
func (*LightConeManager) Get ¶
func (m *LightConeManager) Get(convID string) *LightCone
Get returns the light cone for a conversation, or nil if none exists.
func (*LightConeManager) List ¶
func (m *LightConeManager) List() []LightConeInfo
List returns summary information for all stored light cones.
func (*LightConeManager) Prune ¶
func (m *LightConeManager) Prune(before time.Time) int
Prune removes light cones that haven't been updated since the given time. Returns the number of pruned entries.
func (*LightConeManager) Set ¶
func (m *LightConeManager) Set(convID string, lc *LightCone)
Set stores or updates the light cone for a conversation.
type Linear ¶
type Linear struct {
Weight [][]float32 // [out_features][in_features]
Bias []float32 // [out_features] or nil
InDim int
OutDim int
}
Linear represents a dense layer: y = x @ W^T + b
type MambaBlock ¶
type MambaBlock struct {
Norm LayerNorm
InProj [][]float32 // [2*d_inner][d_model] — projects to (x_ssm, z)
Conv Conv1D
XProj [][]float32 // [d_state*2+1][d_inner] — projects to (B, C, delta)
LogA [][]float32 // [d_inner][d_state]
D []float32 // [d_inner] skip connection
OutProj [][]float32 // [d_model][d_inner]
DInner int
DState int
}
MambaBlock is a single selective SSM block with pre-norm residual.
func (*MambaBlock) Step ¶
func (mb *MambaBlock) Step(x []float32, state *MambaState) ([]float32, *MambaState)
Step processes a single event through the Mamba block, updating the SSM state. Input: x [d_model], state: MambaState (or nil for fresh). Returns: output [d_model], new state.
Note: unlike the forward path, step() does NOT include the residual connection. This matches the Python SelectiveSSM.step() method used for inference.
type MambaState ¶
type MambaState struct {
H [][]float32 // [d_inner][d_state]
}
MambaState is the SSM hidden state for one layer.
type MambaTRM ¶
type MambaTRM struct {
Config TRMConfig
TypeEmbed [][]float32 // [n_event_types][d_model]
InputProj Linear // 2*d_model → d_model
Layers []MambaBlock // [n_layers]
FinalNorm LayerNorm // d_model
Probes []AttentionProbe // [n_probes]
ProbeNorms []LayerNorm // [n_probes]
Head ScoreHead
}
MambaTRM is the full temporal retrieval model.
func (*MambaTRM) GetLightConeNorms ¶
GetLightConeNorms returns per-layer SSM state norms and a compressed scalar.
func (*MambaTRM) ScoreCandidates ¶
ScoreCandidates scores a set of candidates against a trajectory context. context: [d_model] from Step(), candidates: [n][d_model] embeddings. Returns [n] scores (higher = more relevant).
type ManagedProcess ¶
type ManagedProcess struct {
ID string `json:"id"`
Kind ProcessKind `json:"kind"`
Status ProcessStatus `json:"status"`
Source string `json:"source"` // "http", "discord", "signal", etc.
CallbackChannel string `json:"callback_channel"` // where to deliver results
Identity string `json:"identity"` // NodeID of requestor
StartedAt time.Time `json:"started_at"`
FinishedAt *time.Time `json:"finished_at,omitempty"`
Error string `json:"error,omitempty"`
Usage *TokenUsage
// contains filtered or unexported fields
}
ManagedProcess tracks a single Claude Code subprocess.
func (*ManagedProcess) SetError ¶
func (p *ManagedProcess) SetError(err error)
SetError records an error on the process.
type ManagedProcessOpts ¶
type ManagedProcessOpts struct {
Kind ProcessKind
Source string
CallbackChannel string
Identity string
Cancel func()
}
ManagedProcessOpts configures tracking for a new process.
type NerdctlRuntime ¶
type NerdctlRuntime struct {
// contains filtered or unexported fields
}
func NewNerdctlRuntime ¶
func NewNerdctlRuntime() (*NerdctlRuntime, error)
func (*NerdctlRuntime) Exec ¶
func (n *NerdctlRuntime) Exec(containerID string, command []string) ([]byte, error)
func (*NerdctlRuntime) Logs ¶
func (n *NerdctlRuntime) Logs(containerID string, follow bool) (io.ReadCloser, error)
func (*NerdctlRuntime) Pull ¶
func (n *NerdctlRuntime) Pull(image string) error
func (*NerdctlRuntime) Start ¶
func (n *NerdctlRuntime) Start(image string, config ContainerConfig) (string, error)
func (*NerdctlRuntime) Status ¶
func (n *NerdctlRuntime) Status(containerID string) (ContainerStatus, error)
func (*NerdctlRuntime) Stop ¶
func (n *NerdctlRuntime) Stop(containerID string) error
type NilBridge ¶
type NilBridge struct{}
NilBridge provides neutral standalone-mode behavior when no constellation is configured.
func (NilBridge) EmitHeartbeat ¶
func (NilBridge) EmitHeartbeat(KernelHeartbeatPayload) (HeartbeatReceipt, error)
func (NilBridge) TrustSnapshot ¶
func (NilBridge) TrustSnapshot() ConstellationTrustSnapshot
type NodeHealth ¶ added in v0.2.0
type NodeHealth struct {
// contains filtered or unexported fields
}
NodeHealth holds the last-known health of all sibling services.
func NewNodeHealth ¶ added in v0.2.0
func NewNodeHealth() *NodeHealth
NewNodeHealth returns an empty NodeHealth.
func (*NodeHealth) Counts ¶ added in v0.2.0
func (nh *NodeHealth) Counts() (int, int)
Counts returns (healthy, total) for quick reporting.
func (*NodeHealth) Names ¶ added in v0.2.0
func (nh *NodeHealth) Names() []string
Names returns sorted service names.
func (*NodeHealth) Probe ¶ added in v0.2.0
func (nh *NodeHealth) Probe(manifest *NodeManifest, selfPort int)
Probe checks all sibling services defined in the manifest concurrently. Skips the kernel itself (it knows its own health). Each probe has a 2s timeout; total wall time is ~2s regardless of service count.
func (*NodeHealth) Snapshot ¶ added in v0.2.0
func (nh *NodeHealth) Snapshot() map[string]ServiceHealth
Snapshot returns a copy of the current service health map.
func (*NodeHealth) Summary ¶ added in v0.2.0
func (nh *NodeHealth) Summary() map[string]string
Summary returns a compact status map (service → status string).
type NodeManifest ¶ added in v0.2.0
type NodeManifest struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Kind string `yaml:"kind" json:"kind"`
Services map[string]ServiceDef `yaml:"services" json:"services"`
}
NodeManifest is the single source of truth for services on this node.
func LoadManifest ¶ added in v0.2.0
func LoadManifest(path string) (*NodeManifest, error)
LoadManifest reads and parses a manifest.yaml file.
type Nucleus ¶
type Nucleus struct {
// Name is the identity name (e.g. "Cog", "Sandy").
Name string
// Role is the identity role descriptor.
Role string
// Card is the full text of the identity card (markdown).
Card string
// WorkspaceRoot is the absolute path to the workspace.
WorkspaceRoot string
// LoadedAt records when this nucleus was loaded.
LoadedAt time.Time
// contains filtered or unexported fields
}
Nucleus is the always-above-threshold identity context. It holds the parsed identity card and the workspace root.
func LoadNucleus ¶
LoadNucleus reads the current identity from .cog/config/identity.yaml and loads the corresponding identity card file. Falls back to an embedded default identity if no config or card exists.
type ObserverUpdate ¶
type ObserverUpdate struct {
// PredictionError is the Jaccard distance between the previous prediction
// and the paths actually attended this cycle (0 = perfect, 1 = total miss).
PredictionError float64
// Prediction is the set of paths the model expects to be attended next cycle.
Prediction []string
// Receding is the set of paths that were predicted last cycle but dropped
// out this cycle (expected, then stopped being attended).
Receding []string
// Cycle is the cycle number that just completed.
Cycle int64
// MeanError is the running mean prediction error across all cycles.
MeanError float64
}
ObserverUpdate is the result of a single TrajectoryModel.Update() call.
type OllamaProvider ¶
type OllamaProvider struct {
// contains filtered or unexported fields
}
OllamaProvider implements Provider against a local Ollama server.
func NewOllamaProvider ¶
func NewOllamaProvider(name string, cfg ProviderConfig) *OllamaProvider
NewOllamaProvider creates an OllamaProvider from a ProviderConfig.
func (*OllamaProvider) Available ¶
func (p *OllamaProvider) Available(ctx context.Context) bool
Available checks if Ollama is running and the configured model is loaded.
func (*OllamaProvider) Capabilities ¶
func (p *OllamaProvider) Capabilities() ProviderCapabilities
Capabilities returns what Ollama supports.
func (*OllamaProvider) Complete ¶
func (p *OllamaProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*OllamaProvider) ContextWindow ¶
func (p *OllamaProvider) ContextWindow() int
ContextWindow returns the configured num_ctx for this provider.
func (*OllamaProvider) Name ¶
func (p *OllamaProvider) Name() string
Name returns the provider identifier.
func (*OllamaProvider) Stream ¶
func (p *OllamaProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of chunks. The channel closes when generation is complete or the context is cancelled.
type OpenAICompatProvider ¶
type OpenAICompatProvider struct {
// contains filtered or unexported fields
}
OpenAICompatProvider implements Provider against any OpenAI-compatible server.
func NewOpenAICompatProvider ¶
func NewOpenAICompatProvider(name string, cfg ProviderConfig) *OpenAICompatProvider
NewOpenAICompatProvider creates an OpenAICompatProvider from a ProviderConfig.
func (*OpenAICompatProvider) Available ¶
func (p *OpenAICompatProvider) Available(ctx context.Context) bool
Available checks if the server is reachable and has at least one model.
func (*OpenAICompatProvider) Capabilities ¶
func (p *OpenAICompatProvider) Capabilities() ProviderCapabilities
Capabilities returns what this provider supports.
func (*OpenAICompatProvider) Complete ¶
func (p *OpenAICompatProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a non-streaming request and returns the full response.
func (*OpenAICompatProvider) Name ¶
func (p *OpenAICompatProvider) Name() string
Name returns the provider identifier.
func (*OpenAICompatProvider) Stream ¶
func (p *OpenAICompatProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream sends a streaming request and returns a channel of incremental chunks. The channel closes when generation is complete or the context is cancelled.
type OpenClawTailer ¶
type OpenClawTailer struct {
Watcher *FileWatcher
ScanInterval time.Duration
}
OpenClawTailer watches OpenClaw JSONL logs and emits normalized CogBlocks.
func (*OpenClawTailer) Name ¶
func (t *OpenClawTailer) Name() string
type PiProvider ¶ added in v0.2.0
type PiProvider struct {
// contains filtered or unexported fields
}
PiProvider implements Provider by spawning pi CLI processes.
func NewPiProvider ¶ added in v0.2.0
func NewPiProvider(name string, cfg ProviderConfig, procMgr *ProcessManager) *PiProvider
NewPiProvider creates a PiProvider from a ProviderConfig.
func (*PiProvider) Available ¶ added in v0.2.0
func (p *PiProvider) Available(ctx context.Context) bool
func (*PiProvider) Capabilities ¶ added in v0.2.0
func (p *PiProvider) Capabilities() ProviderCapabilities
func (*PiProvider) Complete ¶ added in v0.2.0
func (p *PiProvider) Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
Complete sends a prompt and waits for the full response.
func (*PiProvider) Name ¶ added in v0.2.0
func (p *PiProvider) Name() string
func (*PiProvider) Stream ¶ added in v0.2.0
func (p *PiProvider) Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
Stream spawns a pi process in JSON mode and returns incremental chunks.
type PredictedChunk ¶
type PredictedChunk struct {
Path string `json:"path"`
SectionTitle string `json:"section_title,omitempty"`
Score float32 `json:"score"`
}
PredictedChunk is a single TRM prediction.
type Process ¶
type Process struct {
// NodeID is the stable kernel node identity persisted across restarts.
NodeID string
// TrustState carries local trust, coherence, and heartbeat metadata.
TrustState TrustState
// contains filtered or unexported fields
}
Process is the always-running cognitive process.
func NewProcess ¶
NewProcess constructs and initialises the process.
func (*Process) AssembleContext ¶
func (p *Process) AssembleContext(query string, messages []ProviderMessage, budget int, opts ...AssembleOption) (*ContextPackage, error)
AssembleContext builds a ContextPackage from the full client request.
It decomposes the incoming messages[], scores conversation history alongside CogDocs, manages eviction when the budget is exceeded, and prepares the context for stability-ordered rendering.
The budget is in approximate tokens (chars/4); pass 0 to use the default (32768). The context and conversation ID are optional and supplied via AssembleOption (use context.Background() / "" when not available). When TRM is loaded and a context is provided, TRM scoring is used for CogDoc ranking.
func (*Process) CurrentCycleID ¶ added in v0.2.0
CurrentCycleID returns the current iteration's cycle ID (may be empty between iterations). Exported so the agent harness and tool dispatch paths can correlate their own trace events with the running kernel cycle.
func (*Process) EmbeddingIndex ¶
func (p *Process) EmbeddingIndex() *EmbeddingIndex
EmbeddingIndex returns the embedding index (nil if not loaded).
func (*Process) Field ¶
func (p *Process) Field() *AttentionalField
Field returns the attentional field (for use by the serve layer).
func (*Process) Fingerprint ¶
Fingerprint returns a stable trust fingerprint for the current process state.
func (*Process) Index ¶
func (p *Process) Index() *CogDocIndex
Index returns the current CogDoc index (may be nil before first consolidation).
func (*Process) LightCones ¶
func (p *Process) LightCones() *LightConeManager
LightCones returns the per-conversation light cone manager.
func (*Process) NodeHealth ¶ added in v0.2.0
func (p *Process) NodeHealth() *NodeHealth
NodeHealth returns the current node health state (for use by the serve layer).
func (*Process) Observer ¶
func (p *Process) Observer() *TrajectoryModel
Observer returns the trajectory model (for use by the HTTP layer).
func (*Process) RecordBlock ¶
RecordBlock writes a CogBlock to the process ledger and returns the ledger ref.
func (*Process) Send ¶
Send delivers an external event to the process loop (non-blocking). Returns false if the channel is full.
func (*Process) SetTRM ¶
func (p *Process) SetTRM(trm *MambaTRM, idx *EmbeddingIndex)
SetTRM installs the TRM model and embedding index (called at startup).
func (*Process) State ¶
func (p *Process) State() ProcessState
State returns the current process state (safe for concurrent reads).
func (*Process) SubmitExternal ¶ added in v0.2.0
SubmitExternal is the canonical entry point for external perturbations (dashboard chat, modality inlets, etc.) into the metabolic cycle. It is a non-blocking alias of Send(): the external channel has bounded capacity and the cycle must never stall on a slow or noisy producer. Callers that need backpressure should observe the returned bool and drop/log accordingly.
func (*Process) TrustSnapshot ¶
func (p *Process) TrustSnapshot() TrustState
TrustSnapshot returns a copy of the current trust metadata.
type ProcessKind ¶
type ProcessKind int
ProcessKind classifies how a process is managed.
const (
	// ProcessForeground is tied to an HTTP request and killed on disconnect.
	ProcessForeground ProcessKind = iota
	// ProcessBackground outlives the request and reports via callback.
	ProcessBackground
	// ProcessAgent runs in a sandboxed Docker container.
	ProcessAgent
)
func (ProcessKind) String ¶
func (k ProcessKind) String() string
type ProcessManager ¶
type ProcessManager struct {
// contains filtered or unexported fields
}
ProcessManager tracks all active Claude Code subprocesses.
func NewProcessManager ¶
func NewProcessManager(cfg ProcessManagerConfig) *ProcessManager
NewProcessManager creates a process manager.
func (*ProcessManager) CanSpawn ¶
func (pm *ProcessManager) CanSpawn(identity string) error
CanSpawn checks whether a new process is allowed under the concurrency limits.
func (*ProcessManager) Finish ¶
func (pm *ProcessManager) Finish(id string)
Finish marks a background process as complete and fires the callback.
func (*ProcessManager) Kill ¶
func (pm *ProcessManager) Kill(id string)
Kill sends SIGTERM to a process, then SIGKILL after 5 seconds.
func (*ProcessManager) KillByIdentity ¶
func (pm *ProcessManager) KillByIdentity(identity string) int
KillByIdentity cancels all foreground processes for a given NodeID. Background processes are NOT killed — they were explicitly requested.
func (*ProcessManager) KillBySource ¶
func (pm *ProcessManager) KillBySource(source string) int
KillBySource cancels all processes from a given source (e.g., when a Discord channel is closed or a client session ends).
func (*ProcessManager) List ¶
func (pm *ProcessManager) List() []ProcessSummary
List returns a snapshot of all tracked processes.
func (*ProcessManager) Remove ¶
func (pm *ProcessManager) Remove(id string)
Remove unregisters a process. Called when a foreground process completes.
func (*ProcessManager) SetOnComplete ¶
func (pm *ProcessManager) SetOnComplete(fn func(*ManagedProcess))
SetOnComplete registers a callback for when background processes finish.
func (*ProcessManager) Shutdown ¶
func (pm *ProcessManager) Shutdown(timeout time.Duration)
Shutdown gracefully terminates all running processes. Sends SIGTERM to all, waits up to timeout, then SIGKILL.
func (*ProcessManager) Stats ¶
func (pm *ProcessManager) Stats() ProcessStats
func (*ProcessManager) Track ¶
func (pm *ProcessManager) Track(cmd *exec.Cmd, opts ManagedProcessOpts) *ManagedProcess
Track registers a new process with the manager. Call before cmd.Start().
type ProcessManagerConfig ¶
type ProcessManagerConfig struct {
MaxGlobal int // 0 = unlimited
MaxPerIdentity int // 0 = unlimited
}
ProcessManagerConfig configures the process manager.
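The limit semantics above (0 = unlimited, checked globally and per identity) can be sketched as follows. This is a hypothetical illustration of the documented CanSpawn behaviour, not the package's internals; the `canSpawn` helper and its count arguments are invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// canSpawn sketches the documented limit check: a zero limit means
// unlimited; otherwise both the global count and the per-identity
// count must stay below their configured maxima.
func canSpawn(maxGlobal, maxPerIdentity, global int, perIdentity map[string]int, identity string) error {
	if maxGlobal > 0 && global >= maxGlobal {
		return errors.New("global process limit reached")
	}
	if maxPerIdentity > 0 && perIdentity[identity] >= maxPerIdentity {
		return errors.New("per-identity process limit reached")
	}
	return nil
}

func main() {
	per := map[string]int{"discord:chan1": 2}
	fmt.Println(canSpawn(10, 2, 3, per, "discord:chan1")) // per-identity limit hit
	fmt.Println(canSpawn(0, 0, 999, per, "anything"))     // 0 = unlimited
}
```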
type ProcessState ¶
type ProcessState int
ProcessState represents the four operational states of the v3 process.
const (
	StateActive        ProcessState = iota // Processing external input
	StateReceptive                         // Idle, waiting
	StateConsolidating                     // Internal maintenance
	StateDormant                           // Minimal activity
)
func (ProcessState) String ¶
func (s ProcessState) String() string
type ProcessStats ¶
type ProcessStats struct {
Total int `json:"total"`
Running int `json:"running"`
Completed int `json:"completed"`
Failed int `json:"failed"`
Cancelled int `json:"cancelled"`
ByKind map[string]int `json:"by_kind"`
BySource map[string]int `json:"by_source"`
}
ProcessStats holds the aggregate process counts returned by Stats.
type ProcessStatus ¶
type ProcessStatus int
ProcessStatus tracks the lifecycle state of a managed process.
const (
	ProcessRunning ProcessStatus = iota
	ProcessCompleted
	ProcessFailed
	ProcessCancelled
	ProcessTimedOut
)
func (ProcessStatus) String ¶
func (s ProcessStatus) String() string
type ProcessSummary ¶
type ProcessSummary struct {
ID string `json:"id"`
Kind string `json:"kind"`
Status string `json:"status"`
Source string `json:"source"`
Identity string `json:"identity,omitempty"`
StartedAt string `json:"started_at"`
Duration string `json:"duration"`
CallbackChannel string `json:"callback_channel,omitempty"`
Error string `json:"error,omitempty"`
}
ProcessSummary is a JSON-friendly snapshot of a managed process.
type Projection ¶
type Projection struct {
// Base is the workspace-relative prefix under the workspace root
// (e.g. ".cog/mem/"). Mutually exclusive with ExtBase.
Base string
// ExtBase is a workspace-root-relative prefix for paths that live
// outside .cog/ (e.g. ".claude/skills/").
ExtBase string
// Pattern controls resolution: "direct" | "directory" | "glob" | "singleton".
Pattern string
// Suffix is appended to the resolved path for "direct" patterns
// (e.g. ".cog.md" for specs).
Suffix string
// GlobPat is a fmt.Sprintf template (one %s) for "glob" patterns.
// E.g. "%s-*.md" matches numbered ADR files.
GlobPat string
}
Projection defines how a cog:// URI type maps to the filesystem.
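The "direct" and "glob" patterns described above can be sketched like this. The trimmed Projection struct and the two helpers are illustrative assumptions; only the field meanings (Base prefix, Suffix for "direct", one-%s fmt.Sprintf template for "glob") come from the documentation.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Projection is a trimmed copy of the documented fields used here.
type Projection struct {
	Base    string
	Pattern string
	Suffix  string
	GlobPat string
}

// resolveDirect sketches the "direct" pattern: Base + name + Suffix.
func resolveDirect(p Projection, name string) string {
	return filepath.Join(p.Base, name+p.Suffix)
}

// globPattern sketches the "glob" pattern: GlobPat is an fmt.Sprintf
// template with one %s, e.g. "%s-*.md" for numbered ADR files.
func globPattern(p Projection, name string) string {
	return filepath.Join(p.Base, fmt.Sprintf(p.GlobPat, name))
}

func main() {
	spec := Projection{Base: ".cog/specs/", Pattern: "direct", Suffix: ".cog.md"}
	adr := Projection{Base: ".cog/adr/", Pattern: "glob", GlobPat: "%s-*.md"}
	fmt.Println(resolveDirect(spec, "router")) // .cog/specs/router.cog.md
	fmt.Println(globPattern(adr, "0042"))      // .cog/adr/0042-*.md
}
```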
type ProprioceptiveEntry ¶
type ProprioceptiveEntry struct {
Timestamp string `json:"timestamp"`
Event string `json:"event,omitempty"`
Provider string `json:"provider,omitempty"`
ToolName string `json:"tool_name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolArgs string `json:"tool_args,omitempty"`
Reason string `json:"reason,omitempty"`
Query string `json:"query"`
Predicted []PredictedChunk `json:"predicted"`
Actual []string `json:"actual"`
Hits int `json:"hits"`
Delta float64 `json:"delta"`
ResponseLen int `json:"response_len"`
}
ProprioceptiveEntry is a single prediction-vs-reality log entry.
func ComputeEntry ¶
func ComputeEntry(query string, predicted []PredictedChunk, response string) ProprioceptiveEntry
ComputeEntry builds a ProprioceptiveEntry from TRM predictions and a response.
type ProprioceptiveLogger ¶
type ProprioceptiveLogger struct {
// contains filtered or unexported fields
}
ProprioceptiveLogger writes proprioceptive entries to a JSONL file.
func NewProprioceptiveLogger ¶
func NewProprioceptiveLogger(logPath string) *ProprioceptiveLogger
NewProprioceptiveLogger creates a logger writing to the given path. The parent directory is created if it does not exist.
func (*ProprioceptiveLogger) Log ¶
func (p *ProprioceptiveLogger) Log(entry ProprioceptiveEntry)
Log appends a proprioceptive entry to the JSONL log.
type Provider ¶
type Provider interface {
// Complete sends a context package and waits for the full response.
Complete(ctx context.Context, req *CompletionRequest) (*CompletionResponse, error)
// Stream sends a request and returns a channel of incremental chunks.
// The channel closes when done or on error. Providers that don't support
// streaming must fall back to Complete and send a single chunk.
Stream(ctx context.Context, req *CompletionRequest) (<-chan StreamChunk, error)
// Name returns the provider identifier (e.g. "ollama", "anthropic").
Name() string
// Available reports whether the provider is ready to serve requests.
// For local providers: checks the model server is running and model loaded.
Available(ctx context.Context) bool
// Capabilities returns what this provider supports.
Capabilities() ProviderCapabilities
// Ping probes the endpoint and returns measured latency.
Ping(ctx context.Context) (time.Duration, error)
}
Provider is the fundamental abstraction for any LLM backend. Anthropic, Ollama, MLX, OpenRouter — all satisfy this interface.
type ProviderCapabilities ¶
type ProviderCapabilities struct {
Capabilities []Capability `json:"capabilities"`
MaxContextTokens int `json:"max_context_tokens"`
MaxOutputTokens int `json:"max_output_tokens"`
ModelsAvailable []string `json:"models_available"`
IsLocal bool `json:"is_local"`
AgenticHarness bool `json:"agentic_harness,omitempty"`
CostPerInputToken float64 `json:"cost_per_input_token"`
CostPerOutputToken float64 `json:"cost_per_output_token"`
}
ProviderCapabilities describes what a provider can do.
func (ProviderCapabilities) HasAllCapabilities ¶
func (pc ProviderCapabilities) HasAllCapabilities(required []Capability) bool
HasAllCapabilities checks if the provider supports all required capabilities.
func (ProviderCapabilities) HasCapability ¶
func (pc ProviderCapabilities) HasCapability(cap Capability) bool
HasCapability checks if the provider supports a specific capability.
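A minimal sketch of what these two checks amount to: membership in the provider's capability list, and an all-required-present check on top of it. The local Capability string type and helper names are assumptions for the example.

```go
package main

import "fmt"

// Capability is assumed here to be a string-like identifier.
type Capability string

// hasCapability reports whether c appears in the provider's list.
func hasCapability(have []Capability, c Capability) bool {
	for _, h := range have {
		if h == c {
			return true
		}
	}
	return false
}

// hasAll reports whether every required capability is present.
func hasAll(have, required []Capability) bool {
	for _, r := range required {
		if !hasCapability(have, r) {
			return false
		}
	}
	return true
}

func main() {
	have := []Capability{"streaming", "tool_use"}
	fmt.Println(hasAll(have, []Capability{"streaming"}))           // true
	fmt.Println(hasAll(have, []Capability{"vision", "streaming"})) // false
}
```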
type ProviderConfig ¶
type ProviderConfig struct {
Type string `yaml:"type,omitempty" json:"type,omitempty"`
APIKeyEnv string `yaml:"api_key_env,omitempty" json:"api_key_env,omitempty"`
Endpoint string `yaml:"endpoint,omitempty" json:"endpoint,omitempty"`
Model string `yaml:"model" json:"model"`
ContextWindow int `yaml:"context_window,omitempty" json:"context_window,omitempty"`
MaxTokens int `yaml:"max_tokens,omitempty" json:"max_tokens,omitempty"`
Timeout int `yaml:"timeout,omitempty" json:"timeout,omitempty"`
Headers map[string]string `yaml:"headers,omitempty" json:"headers,omitempty"`
Options map[string]interface{} `yaml:"options,omitempty" json:"options,omitempty"`
Enabled *bool `yaml:"enabled,omitempty" json:"enabled,omitempty"`
}
ProviderConfig configures a single provider instance.
func (ProviderConfig) IsEnabled ¶
func (pc ProviderConfig) IsEnabled() bool
IsEnabled returns whether the provider is active (default: true).
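The default-true behaviour falls out of the `Enabled *bool` field: an unset (nil) pointer cannot be distinguished from "true" by a plain bool, which is why a pointer is used. A sketch of that check, with a hypothetical standalone helper:

```go
package main

import "fmt"

// isEnabled sketches the documented default-true semantics of an
// Enabled *bool field: an unset (nil) pointer means enabled.
func isEnabled(enabled *bool) bool {
	return enabled == nil || *enabled
}

func main() {
	off := false
	fmt.Println(isEnabled(nil))  // true: unset defaults to enabled
	fmt.Println(isEnabled(&off)) // false: explicitly disabled
}
```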
type ProviderMessage ¶
type ProviderMessage struct {
Role string `json:"role"` // "user", "assistant", "system", "tool"
Content string `json:"content"`
ContentParts []ContentPart `json:"content_parts,omitempty"`
Name string `json:"name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
}
ProviderMessage is a single conversation turn.
type ProviderMeta ¶
type ProviderMeta struct {
Provider string `json:"provider"`
Model string `json:"model"`
Latency time.Duration `json:"latency"`
Region string `json:"region,omitempty"`
Cached bool `json:"cached,omitempty"`
}
ProviderMeta carries provenance for the ledger.
type ProviderSalienceEntry ¶
type ProviderSalienceEntry struct {
ID string `json:"id"`
Salience float64 `json:"salience"`
Zone AttentionalZone `json:"zone"`
}
ProviderSalienceEntry records a single item's salience score.
type ProviderSalienceSnapshot ¶
type ProviderSalienceSnapshot struct {
TopItems []ProviderSalienceEntry `json:"top_items"`
FocalPoint string `json:"focal_point"`
MomentumVector []float64 `json:"momentum_vector,omitempty"`
}
ProviderSalienceSnapshot captures attentional field state at request time.
type ProviderScore ¶
type ProviderScore struct {
Provider string `json:"provider"`
RawScore float64 `json:"raw_score"`
SwapPenalty float64 `json:"swap_penalty"`
AdjustedScore float64 `json:"adjusted_score"`
Available bool `json:"available"`
CapabilitiesMet bool `json:"capabilities_met"`
}
ProviderScore records a single provider's routing score.
type ProvidersConfig ¶
type ProvidersConfig struct {
Providers map[string]ProviderConfig `yaml:"providers" json:"providers"`
Routing RoutingConfig `yaml:"routing" json:"routing"`
}
ProvidersConfig is the top-level configuration from .cog/config/providers.yaml.
type RequestMetadata ¶
type RequestMetadata struct {
RequestID string `json:"request_id"`
ProcessState string `json:"process_state"` // from ProcessState.String()
Priority RequestPriority `json:"priority"`
PreferLocal bool `json:"prefer_local,omitempty"`
PreferProvider string `json:"prefer_provider,omitempty"` // force-route to named provider
MaxCostUSD *float64 `json:"max_cost_usd,omitempty"`
RequiredCapabilities []Capability `json:"required_capabilities,omitempty"`
Source string `json:"source,omitempty"`
SalienceSnapshot *ProviderSalienceSnapshot `json:"salience_snapshot,omitempty"`
}
RequestMetadata carries routing/ledger data that doesn't go to the model.
type RequestPriority ¶
type RequestPriority int
RequestPriority controls routing urgency.
const (
	PriorityLow      RequestPriority = 0
	PriorityNormal   RequestPriority = 1
	PriorityHigh     RequestPriority = 2
	PriorityCritical RequestPriority = 3
)
type Router ¶
type Router interface {
// Route selects the best provider for a request.
Route(ctx context.Context, req *CompletionRequest) (Provider, *RoutingDecision, error)
// RegisterProvider adds a provider to the pool.
RegisterProvider(p Provider)
// DeregisterProvider removes a provider.
DeregisterProvider(name string)
// Stats returns routing statistics.
Stats() RouterStats
}
Router selects which Provider handles a given request. Maps to the externalized gating network from the MoE architecture.
func BuildRouter ¶
func BuildRouter(cfg *Config, opts ...BuildRouterOption) (Router, error)
BuildRouter constructs a Router from workspace configuration. Reads .cog/config/providers.yaml; falls back to a default Ollama config.
type RouterStats ¶
type RouterStats struct {
TotalRequests int64 `json:"total_requests"`
RequestsByProvider map[string]int64 `json:"requests_by_provider"`
ToolCallRejectionsByProvider map[string]int64 `json:"tool_call_rejections_by_provider,omitempty"`
EscalationCount int64 `json:"escalation_count"`
FallbackCount int64 `json:"fallback_count"`
SovereigntyRatio float64 `json:"sovereignty_ratio"`
TotalCostUSD float64 `json:"total_cost_usd"`
TokensByProvider map[string]TokenUsage `json:"tokens_by_provider"`
AvgLatencyByProvider map[string]time.Duration `json:"avg_latency_by_provider"`
}
RouterStats tracks routing patterns for observability.
type RoutingConfig ¶
type RoutingConfig struct {
Default string `yaml:"default" json:"default"`
LocalThreshold float64 `yaml:"local_threshold" json:"local_threshold"`
FallbackChain []string `yaml:"fallback_chain" json:"fallback_chain"`
MaxCostPerDayUSD float64 `yaml:"max_cost_per_day_usd,omitempty" json:"max_cost_per_day_usd,omitempty"`
ProcessStateRouting map[string]string `yaml:"process_state_routing,omitempty" json:"process_state_routing,omitempty"`
}
RoutingConfig controls Router behaviour.
type RoutingDecision ¶
type RoutingDecision struct {
RequestID string `json:"request_id"`
SelectedProvider string `json:"selected_provider"`
Scores []ProviderScore `json:"scores"`
Reason string `json:"reason"`
Escalated bool `json:"escalated"`
FallbackUsed bool `json:"fallback_used"`
FallbackFrom string `json:"fallback_from,omitempty"`
Timestamp time.Time `json:"timestamp"`
LatencyNs int64 `json:"latency_ns"`
}
RoutingDecision records why the router chose a specific provider.
type SalienceConfig ¶
type SalienceConfig struct {
WeightRecency float64
WeightFrequency float64
WeightChurn float64
WeightAuthorship float64
DecayModel string
HalfLife int // days
}
SalienceConfig holds weights and decay parameters for salience computation.
func DefaultSalienceConfig ¶
func DefaultSalienceConfig() *SalienceConfig
DefaultSalienceConfig returns sensible defaults.
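One plausible reading of the HalfLife parameter is exponential half-life decay for the recency component: a file last touched exactly one half-life ago scores 0.5. The formula below is an assumption about the decay model, shown for illustration only.

```go
package main

import (
	"fmt"
	"math"
)

// recencyDecay sketches exponential half-life decay: the score halves
// every halfLife days since the last change.
func recencyDecay(daysAgo, halfLife int) float64 {
	return math.Pow(0.5, float64(daysAgo)/float64(halfLife))
}

func main() {
	fmt.Printf("%.3f\n", recencyDecay(0, 30))  // 1.000
	fmt.Printf("%.3f\n", recencyDecay(30, 30)) // 0.500
	fmt.Printf("%.3f\n", recencyDecay(90, 30)) // 0.125
}
```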
type SalienceScore ¶
type SalienceScore struct {
Recency float64
Frequency float64
Churn float64
Authorship float64
Total float64
CommitCount int
TotalChanges int
UniqueAuthors int
DaysAgo int
}
SalienceScore holds the computed salience breakdown for a file.
func ComputeFileSalience ¶
func ComputeFileSalience(repoPath, filePath string, daysWindow int, cfg *SalienceConfig) (*SalienceScore, error)
ComputeFileSalience computes salience for a single file from its git history. Returns a zero-score result (not nil) if the file has no commits in the window.
NOTE: For batch scoring (many files), use RankFilesBySalience which opens the repo once. This function opens the repo per call and is only suitable for single-file queries or tests.
type ScoreHead ¶
type ScoreHead struct {
W1 [][]float32 // [d_model][2*d_model]
B1 []float32 // [d_model]
W2 [][]float32 // [1][d_model]
B2 []float32 // [1]
DIn int // 2*d_model
DMid int // d_model
}
ScoreHead is the final scoring MLP: Linear(2*d) → GELU → Linear(1).
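The Linear(2*d) → GELU → Linear(1) shape above can be sketched as a plain forward pass. This illustration uses float64 instead of the package's float32 matrices and the tanh-approximation GELU; the tiny dimensions in main are for readability only.

```go
package main

import (
	"fmt"
	"math"
)

// gelu is the standard tanh-approximation GELU activation.
func gelu(x float64) float64 {
	return 0.5 * x * (1 + math.Tanh(math.Sqrt(2/math.Pi)*(x+0.044715*x*x*x)))
}

// forward sketches the scoring MLP: a linear layer, GELU, then a
// single-output linear layer producing the score.
func forward(x []float64, w1 [][]float64, b1 []float64, w2 []float64, b2 float64) float64 {
	hidden := make([]float64, len(w1))
	for i, row := range w1 {
		sum := b1[i]
		for j, w := range row {
			sum += w * x[j]
		}
		hidden[i] = gelu(sum)
	}
	out := b2
	for i, w := range w2 {
		out += w * hidden[i]
	}
	return out
}

func main() {
	// Tiny illustrative dims: d_in = 2, d_mid = 2.
	x := []float64{1, -1}
	w1 := [][]float64{{0.5, 0}, {0, 0.5}}
	score := forward(x, w1, []float64{0, 0}, []float64{1, 1}, 0.1)
	fmt.Printf("%.4f\n", score)
}
```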
type ScoredMessage ¶
type ScoredMessage struct {
Role string
Content string
Tokens int
TurnIndex int // 0 = oldest
RecencyScore float64 // 1.0 = most recent, decays toward 0
RelevanceScore float64 // keyword overlap with current query
CombinedScore float64 // weighted combination
}
ScoredMessage is a conversation turn scored for retention.
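The "weighted combination" can be sketched as a simple linear blend of the two component scores. The 50/50 weighting below is an assumption for illustration; the package's actual weights are not documented here.

```go
package main

import "fmt"

// combinedScore sketches a weighted combination of recency and
// relevance; the caller supplies the weights.
func combinedScore(recency, relevance, wRecency, wRelevance float64) float64 {
	return wRecency*recency + wRelevance*relevance
}

func main() {
	// A recent but off-topic turn under assumed 0.5/0.5 weights.
	fmt.Println(combinedScore(1.0, 0.2, 0.5, 0.5))
}
```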
type Server ¶
type Server struct {
// contains filtered or unexported fields
}
Server wraps the HTTP server and its dependencies.
type ServiceDef ¶ added in v0.2.0
type ServiceDef struct {
Port int `yaml:"port" json:"port"`
Binary string `yaml:"binary,omitempty" json:"binary,omitempty"`
Workdir string `yaml:"workdir,omitempty" json:"workdir,omitempty"`
Venv string `yaml:"venv,omitempty" json:"venv,omitempty"`
Command string `yaml:"command" json:"command"`
Health string `yaml:"health" json:"health"`
Restart string `yaml:"restart" json:"restart"`
Launchd string `yaml:"launchd,omitempty" json:"launchd,omitempty"`
DependsOn []string `yaml:"depends_on" json:"depends_on"`
Consumers []ConsumerEntry `yaml:"consumers,omitempty" json:"consumers,omitempty"`
}
ServiceDef describes a single managed service.
type ServiceHealth ¶ added in v0.2.0
type ServiceHealth struct {
Port int `json:"port"`
Status string `json:"status"` // healthy, degraded, down
At time.Time `json:"probed_at"`
}
ServiceHealth is the probed state of a single service.
type SimpleRouter ¶
type SimpleRouter struct {
// contains filtered or unexported fields
}
SimpleRouter implements Router with rule-based provider selection.
func NewSimpleRouter ¶
func NewSimpleRouter(cfg RoutingConfig) *SimpleRouter
NewSimpleRouter creates an empty router with the given routing config.
func (*SimpleRouter) DeregisterProvider ¶
func (r *SimpleRouter) DeregisterProvider(name string)
DeregisterProvider removes a provider by name.
func (*SimpleRouter) RegisterProvider ¶
func (r *SimpleRouter) RegisterProvider(p Provider)
RegisterProvider adds a provider to the pool.
func (*SimpleRouter) Route ¶
func (r *SimpleRouter) Route(ctx context.Context, req *CompletionRequest) (Provider, *RoutingDecision, error)
Route selects the best available provider for the request.
func (*SimpleRouter) Stats ¶
func (r *SimpleRouter) Stats() RouterStats
Stats returns current routing statistics.
type StreamChunk ¶
type StreamChunk struct {
Delta string `json:"delta,omitempty"`
ToolCallDelta *ToolCallDelta `json:"tool_call_delta,omitempty"`
Done bool `json:"done"`
StopReason string `json:"stop_reason,omitempty"` // e.g. "end_turn", "max_tokens", "tool_use"
Usage *TokenUsage `json:"usage,omitempty"` // populated on final chunk
ProviderMeta *ProviderMeta `json:"provider_meta,omitempty"` // populated on final chunk
Error error `json:"-"`
}
StreamChunk is one piece of a streaming response.
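A consumer of the Stream channel typically accumulates Delta text until a chunk with Done (or a non-nil Error) arrives. The sketch below redeclares a trimmed StreamChunk locally so it is self-contained; the `collect` helper is hypothetical.

```go
package main

import "fmt"

// StreamChunk mirrors the documented fields this sketch needs.
type StreamChunk struct {
	Delta string
	Done  bool
	Error error
}

// collect drains a stream channel, concatenating deltas until the
// final chunk (Done) or an error chunk is seen.
func collect(ch <-chan StreamChunk) (string, error) {
	var out string
	for chunk := range ch {
		if chunk.Error != nil {
			return out, chunk.Error
		}
		out += chunk.Delta
		if chunk.Done {
			break
		}
	}
	return out, nil
}

func main() {
	ch := make(chan StreamChunk, 3)
	ch <- StreamChunk{Delta: "hel"}
	ch <- StreamChunk{Delta: "lo"}
	ch <- StreamChunk{Done: true}
	close(ch)
	text, _ := collect(ch)
	fmt.Println(text) // hello
}
```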
type StreamTailer ¶
type StreamTailer interface {
// Tail starts watching a file/directory for new JSONL lines.
// It sends normalized CogBlocks on the output channel.
// It respects context cancellation for graceful shutdown.
Tail(ctx context.Context, path string, out chan<- CogBlock) error
// Name returns the adapter name (e.g., "claude-code", "openclaw").
Name() string
}
StreamTailer watches an external harness stream and emits normalized blocks.
type StubProvider ¶
type StubProvider struct {
// contains filtered or unexported fields
}
StubProvider is an in-memory Provider for testing.
func NewStubProvider ¶
func NewStubProvider(name, response string) *StubProvider
NewStubProvider creates a StubProvider that returns the given response.
func (*StubProvider) Capabilities ¶
func (s *StubProvider) Capabilities() ProviderCapabilities
func (*StubProvider) Complete ¶
func (s *StubProvider) Complete(_ context.Context, _ *CompletionRequest) (*CompletionResponse, error)
func (*StubProvider) Name ¶
func (s *StubProvider) Name() string
func (*StubProvider) Stream ¶
func (s *StubProvider) Stream(_ context.Context, _ *CompletionRequest) (<-chan StreamChunk, error)
type SyncEnvelope ¶
type SyncEnvelope struct {
Version int `json:"version"`
OriginNodeID string `json:"origin_node_id"`
TargetNodeID string `json:"target_node_id"`
BlobHash string `json:"blob_hash"`
Timestamp string `json:"timestamp"`
Kind string `json:"kind"`
Signature string `json:"signature"`
}
SyncEnvelope describes a bridge envelope dropped into .cog/sync/inbox/.
type SyncEvent ¶
type SyncEvent struct {
Envelope SyncEnvelope
FilePath string
Valid bool
ValidationError string
AlreadyHave bool
}
SyncEvent reports a discovered sync envelope and its structural validation result.
type SyncWatcher ¶
SyncWatcher polls a Syncthing inbox directory for new SyncEnvelope files.
func NewSyncWatcher ¶
func NewSyncWatcher(blobStore *BlobStore, pollInterval time.Duration) *SyncWatcher
NewSyncWatcher creates a polling sync watcher.
type TRMConfig ¶
type TRMConfig struct {
DModel int // embedding dimension (384)
DState int // SSM state dimension (4)
DConv int // convolution kernel width (2)
NLayers int // number of Mamba blocks (2)
Expand int // expansion factor (1)
NProbes int // number of attention probes (4)
DHead int // attention head dimension (128)
NEventType int // number of event types (4)
}
TRMConfig holds the model hyperparameters. Must match the training config.
func DefaultTRMConfig ¶
func DefaultTRMConfig() TRMConfig
DefaultTRMConfig returns the config matching the trained model.
type TailerManager ¶
type TailerManager struct {
// contains filtered or unexported fields
}
TailerManager runs multiple stream tailers and tracks per-tailer stats.
func NewTailerManager ¶
func NewTailerManager(out chan<- CogBlock) *TailerManager
NewTailerManager creates a manager that forwards normalized blocks to out.
func (*TailerManager) Register ¶
func (m *TailerManager) Register(tailer StreamTailer, path string) error
Register adds a tailer and source path to the manager.
func (*TailerManager) Run ¶
func (m *TailerManager) Run(ctx context.Context) error
Run starts all registered tailers and blocks until they stop.
func (*TailerManager) Stats ¶
func (m *TailerManager) Stats() map[string]TailerStats
Stats returns a snapshot of current per-tailer metrics.
type TailerStats ¶
TailerStats captures manager-side ingestion state for a single tailer.
type TokenUsage ¶
type TokenUsage struct {
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
CacheReadTokens int `json:"cache_read_tokens,omitempty"`
CacheWriteTokens int `json:"cache_write_tokens,omitempty"`
}
TokenUsage tracks token consumption for cost accounting.
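Combined with the per-token prices in ProviderCapabilities, cost accounting reduces to a dot product of counts and prices. The helper and prices below are illustrative assumptions; cache read/write token pricing is omitted.

```go
package main

import "fmt"

// costUSD sketches per-request cost from token counts and per-token
// prices (as in ProviderCapabilities.CostPerInputToken/CostPerOutputToken).
func costUSD(inputTokens, outputTokens int, costPerInput, costPerOutput float64) float64 {
	return float64(inputTokens)*costPerInput + float64(outputTokens)*costPerOutput
}

func main() {
	// Hypothetical prices: $3 per 1M input tokens, $15 per 1M output tokens.
	fmt.Printf("$%.4f\n", costUSD(12000, 800, 3e-6, 15e-6))
}
```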
type ToolCall ¶
type ToolCall struct {
ID string `json:"id"`
Name string `json:"name"`
Arguments string `json:"arguments"`
}
ToolCall is a model's request to invoke a tool.
type ToolCallDelta ¶
type ToolCallDelta struct {
Index int `json:"index"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
ArgsDelta string `json:"args_delta,omitempty"`
}
ToolCallDelta carries incremental streaming data for a tool call.
type ToolDefinition ¶
type ToolDefinition struct {
Name string `json:"name"`
Description string `json:"description"`
InputSchema map[string]interface{} `json:"input_schema"`
}
ToolDefinition describes an MCP tool the model may invoke.
type TrajectoryModel ¶
type TrajectoryModel struct {
// contains filtered or unexported fields
}
TrajectoryModel tracks attention momentum and generates predictions. It is the "model" in the trefoil — built from observations of the field, generating anticipations that act back on the field.
The model is safe for concurrent reads (Stats, Momentum) and periodic writes (Update, called from the single consolidation goroutine).
func NewTrajectoryModel ¶
func NewTrajectoryModel() *TrajectoryModel
NewTrajectoryModel constructs an empty, uninitialized model.
func (*TrajectoryModel) LastPrediction ¶
func (m *TrajectoryModel) LastPrediction() []string
LastPrediction returns a copy of the most recent prediction set.
func (*TrajectoryModel) Momentum ¶
func (m *TrajectoryModel) Momentum() map[string]float64
Momentum returns a copy of the current momentum map. Safe for concurrent reads (e.g. from the HTTP handler goroutine).
func (*TrajectoryModel) Stats ¶
func (m *TrajectoryModel) Stats() (cycles int64, meanError float64)
Stats returns the total cycle count and mean prediction error.
func (*TrajectoryModel) Update ¶
func (m *TrajectoryModel) Update(attended []string, fieldScores map[string]float64) ObserverUpdate
Update feeds a new cycle's observations into the model. attended is the list of filesystem paths observed in the attention log since the last tick. fieldScores is the current salience map. Update is NOT safe for concurrent calls — it is called only from the single consolidation goroutine in process.go.
type TrustContext ¶
type TrustContext = cogblock.TrustContext
type TrustState ¶
type TrustState struct {
LocalScore float64 `json:"local_score"`
LastHeartbeatHash string `json:"last_heartbeat_hash,omitempty"`
LastHeartbeatAt time.Time `json:"last_heartbeat_at,omitempty"`
CoherenceFingerprint string `json:"coherence_fingerprint,omitempty"`
}
TrustState tracks kernel-local identity and coherence trust metadata.
type URIResolution ¶
type URIResolution struct {
// Path is the absolute filesystem path.
Path string
// Fragment is the section anchor stripped from the URI (empty if none).
Fragment string
}
URIResolution is the result of resolving a cog:// URI to the filesystem.
func ResolveURI ¶
func ResolveURI(workspaceRoot, uri string) (*URIResolution, error)
ResolveURI resolves a cog:// URI to an absolute filesystem path. The #fragment part (section anchor) is separated and returned in URIResolution.Fragment without modifying the path resolution.
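The fragment handling described above amounts to splitting on the first '#' before any path resolution happens. A self-contained sketch (the URI in main is a hypothetical example, not a documented scheme path):

```go
package main

import (
	"fmt"
	"strings"
)

// splitFragment sketches the documented fragment handling: the
// #section anchor is stripped from the URI and returned separately,
// leaving path resolution untouched.
func splitFragment(uri string) (path, fragment string) {
	if i := strings.IndexByte(uri, '#'); i >= 0 {
		return uri[:i], uri[i+1:]
	}
	return uri, ""
}

func main() {
	p, f := splitFragment("cog://spec/router#routing-rules")
	fmt.Println(p) // cog://spec/router
	fmt.Println(f) // routing-rules
}
```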
type ValidationResult ¶
type ValidationResult struct {
Pass bool `json:"pass"`
Layer string `json:"layer"`
Diagnostic *Diagnostic `json:"diagnostic,omitempty"`
Timestamp string `json:"timestamp"`
}
ValidationResult is the outcome of a single validation check.
Source Files
¶
- benchmark.go
- blobs_cmd.go
- blobstore.go
- canvas_embed.go
- chat.go
- chunk.go
- cli.go
- cogblock.go
- cogblock_ledger.go
- cogblock_normalize.go
- coherence.go
- config.go
- consolidate.go
- constellation_bridge.go
- context_assembly.go
- context_blocks.go
- context_frame.go
- daemon_lifecycle.go
- dashboard_embed.go
- debug.go
- defaults.go
- docs_generate.go
- experiment.go
- field.go
- gate.go
- index.go
- init.go
- ledger.go
- memory.go
- node_cmd.go
- node_manifest.go
- node_probe.go
- nucleus.go
- observer.go
- process.go
- procmgr.go
- proprioceptive.go
- provider.go
- provider_anthropic.go
- provider_claudecode.go
- provider_codex.go
- provider_ollama.go
- provider_openai.go
- provider_pi.go
- provider_stub.go
- router.go
- salience.go
- serve.go
- serve_anthropic.go
- serve_attention.go
- serve_blocks.go
- serve_compat.go
- serve_foveated.go
- serve_mcp_stub.go
- stream_tailer.go
- sync_watcher.go
- tailer_claudecode.go
- tailer_openclaw.go
- telemetry.go
- transition_hooks.go
- trm.go
- trm_context.go
- trm_index.go
- trm_lightcone.go
- uri.go
- uri_resolve.go
- uri_v2_stub.go