Documentation ¶
Overview ¶
Package brain provides in-process access to the Thinking Brain. Previously a separate HTTP sidecar (synapses-intelligence), the brain is now embedded directly so no external process or port is required.
All public methods are fail-silent: errors are silently discarded so that brain failures never degrade the MCP hot path. The graph-only path always works.
Package brain — model_manager.go provides RAM-aware on-demand model loading for the Ollama backend (Sprint 17 #3).
ModelManager is called by the Scheduler's drain goroutine before dispatching P1/P2 tasks. It checks available RAM, optionally pre-loads the model, and can downgrade a 4B model to 2B when memory is constrained.
Integration path (all background):
Scheduler.runEligible → ModelManager.EnsureModel(ctx) → returns model name to use, or "" to defer this drain cycle
P0 (user-waiting) tasks bypass the scheduler entirely and rely on Scheduler.ShouldDegrade() for their own gate.
Package brain — pulse.go provides cross-platform system health monitoring.
SystemPulse samples RAM and CPU every 10 seconds and classifies system health into three levels (Green/Yellow/Red) to guide work scheduling. Ollama model residency is polled separately every 30 seconds via /api/ps.
Usage:
p := NewSystemPulse()
p.Start()
defer p.Stop()
state := p.Current()
Package brain — scheduler.go provides priority-aware dispatch for brain tasks.
The Scheduler routes background work (P1/P2) through a bounded, deduped queue executed by a single drain goroutine. This prevents concurrent Ollama requests and ensures low-priority background work (file-save ingestion, archivist) does not contend with user-waiting operations.
Priority model:
P0 — NOW: user is waiting (enrich, guardian, HyDE).
Bypass the scheduler entirely. Use ShouldDegrade() to skip the LLM call
when system is Red or Yellow+no-model.
P1 — SOON: background tasks that should complete this session (archivist, navigator).
Deferred up to 5 min under Yellow/Red health; degraded/dropped after TTL.
P2 — IDLE: best-effort work that piggybacks on model residency (ingest, bulk).
Deferred up to 15 min under Yellow/Red; silently dropped after TTL.
Queue invariants:
- Bounded: at most 100 items. When full, oldest P2 task is evicted first.
- Dedup: same key → keep the latest fn only (e.g. file saved 5× → 1 ingest).
- TTL: P1 tasks expire after 5 min, P2 after 15 min.
- Serial: the single drain goroutine executes one task at a time.
Lifecycle: NewScheduler → Start → (Submit calls) → Stop. If pulse is nil, Submit runs fn immediately in a new goroutine (NullBrain / test path).
Package brain provides the public API types for synapses-intelligence. Synapses imports this package to integrate the Thinking Brain.
Index ¶
- type ADR
- type ADRRequest
- type AgentStatus
- type Brain
- type BrainStatsProvider
- type ClaimInput
- type Client
- func (c *Client) Available() bool
- func (c *Client) BrainHealth() map[string]interface{}
- func (c *Client) BuildContextPacket(ctx context.Context, req ContextPacketRequest) *ContextPacket
- func (c *Client) Close()
- func (c *Client) ExplainViolation(ctx context.Context, req ViolationRequest) (string, string)
- func (c *Client) Generate(ctx context.Context, prompt string) (string, error)
- func (c *Client) GenerateHypothetical(ctx context.Context, query string) string
- func (c *Client) GetADR(_ context.Context, id string) (*ADR, error)
- func (c *Client) GetADRs(_ context.Context, fileFilter string) ([]ADR, error)
- func (c *Client) GetSummary(_ context.Context, nodeID string) string
- func (c *Client) HealthCheck(_ context.Context) (string, error)
- func (c *Client) Ingest(_ context.Context, req IngestRequest)
- func (c *Client) ListInstalledModels(ctx context.Context) []string
- func (c *Client) LogDecision(ctx context.Context, req DecisionRequest)
- func (c *Client) Memorize(ctx context.Context, req archivist.MemorizeRequest) (archivist.MemorizeResponse, error)
- func (c *Client) Prune(ctx context.Context, content string) (string, error)
- func (c *Client) QueryDecisions(ctx context.Context, entityName string, limit int) ([]DecisionLogEntry, error)
- func (c *Client) ScheduleNLClassification(req NLClassifyRequest, applyFn func([]NLClassifyResult))
- func (c *Client) SetPhase(_ context.Context, req SetPhaseRequest) (*SDLCConfig, error)
- func (c *Client) SetQualityMode(_ context.Context, mode QualityMode) (*SDLCConfig, error)
- func (c *Client) Summary(projectID, nodeID string) string
- func (c *Client) SystemUnderRAMPressure() bool
- func (c *Client) UpsertADR(_ context.Context, req ADRRequest) (*ADR, error)
- type ConstraintItem
- type ContextPacket
- type ContextPacketRequest
- type CoordinateRequest
- type CoordinateResponse
- type DecisionLogEntry
- type DecisionRequest
- type EnrichRequest
- type EnrichResponse
- type HealthLevel
- type IngestRequest
- type IngestResponse
- type ModelManager
- type NLCandidate
- type NLClassifyRequest
- type NLClassifyResult
- type NullBrain
- func (n *NullBrain) AllADRs() ([]ADR, error)
- func (n *NullBrain) Available() bool
- func (n *NullBrain) BuildContextPacket(_ context.Context, _ ContextPacketRequest) (*ContextPacket, error)
- func (n *NullBrain) Coordinate(_ context.Context, _ CoordinateRequest) (CoordinateResponse, error)
- func (n *NullBrain) Enrich(_ context.Context, _ EnrichRequest) (EnrichResponse, error)
- func (n *NullBrain) EnsureModel(_ context.Context, _ io.Writer) error
- func (n *NullBrain) ExplainViolation(_ context.Context, _ ViolationRequest) (ViolationResponse, error)
- func (n *NullBrain) Generate(_ context.Context, _ string) (string, error)
- func (n *NullBrain) GetADR(_ string) (ADR, error)
- func (n *NullBrain) GetADRsForFile(_ string, _ int) ([]ADR, error)
- func (n *NullBrain) GetPatterns(_ string, _ int) []PatternHint
- func (n *NullBrain) GetSDLCConfig() SDLCConfig
- func (n *NullBrain) Ingest(_ context.Context, req IngestRequest) (IngestResponse, error)
- func (n *NullBrain) LogDecision(_ context.Context, _ DecisionRequest) error
- func (n *NullBrain) Memorize(_ context.Context, _ archivist.MemorizeRequest) (archivist.MemorizeResponse, error)
- func (n *NullBrain) ModelName() string
- func (n *NullBrain) Prune(_ context.Context, content string) (string, error)
- func (n *NullBrain) QueryDecisions(_ context.Context, _ string, _ int) ([]DecisionLogEntry, error)
- func (n *NullBrain) SetQualityMode(_ QualityMode, _ string) error
- func (n *NullBrain) SetSDLCPhase(_ SDLCPhase, _ string) error
- func (n *NullBrain) Summary(_, _ string) string
- func (n *NullBrain) UpsertADR(_ ADRRequest) error
- type PatternHint
- type QualityGate
- type QualityMode
- type RuleInput
- type SDLCConfig
- type SDLCPhase
- type Scheduler
- type SetPhaseRequest
- type SnapshotInput
- type SynapsesSnapshotInput
- type SystemPulse
- type SystemState
- type TaskPriority
- type TierState
- type TierStatusProvider
- type ViolationExplanation
- type ViolationRequest
- type ViolationResponse
- type WorkClaim
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ADR ¶
type ADR struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"` // proposed | accepted | deprecated | superseded
ContextText string `json:"context,omitempty"`
Decision string `json:"decision"`
Consequences string `json:"consequences,omitempty"`
LinkedFiles []string `json:"linked_files,omitempty"` // file path glob patterns
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
}
ADR is an Architectural Decision Record — a persistent cold-memory entry for a significant design choice. ADRs are injected into get_context output when their linked_files patterns match the queried entity's file.
type ADRRequest ¶
type ADRRequest struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"` // proposed | accepted | deprecated | superseded
ContextText string `json:"context,omitempty"`
Decision string `json:"decision"`
Consequences string `json:"consequences,omitempty"`
LinkedFiles []string `json:"linked_files,omitempty"`
}
ADRRequest is the input to UpsertADR.
type AgentStatus ¶
type AgentStatus struct {
AgentID string `json:"agent_id"`
Scope string `json:"scope"`
ScopeType string `json:"scope_type"`
// ExpiresIn is the seconds until this claim expires (0 = unknown).
ExpiresIn int `json:"expires_in_seconds,omitempty"`
}
AgentStatus is a compact view of another agent's current work claim.
type Brain ¶
type Brain interface {
// Ingest summarizes a changed code snippet and persists it in brain.sqlite.
// Called on file-save events for changed functions/methods/structs.
// Returns immediately if LLM is unavailable.
Ingest(ctx context.Context, req IngestRequest) (IngestResponse, error)
// Enrich adds semantic summaries and a 2-sentence insight to a carved context subgraph.
// Summaries are loaded from brain.sqlite (fast path, no LLM call).
// Insight is generated by the LLM only if Enrich feature is enabled.
Enrich(ctx context.Context, req EnrichRequest) (EnrichResponse, error)
// ExplainViolation generates a plain-English explanation and fix for a rule violation.
// Results are cached in brain.sqlite to avoid repeated LLM calls for the same violation.
ExplainViolation(ctx context.Context, req ViolationRequest) (ViolationResponse, error)
// Coordinate suggests work distribution when agents conflict on a scope.
Coordinate(ctx context.Context, req CoordinateRequest) (CoordinateResponse, error)
// Summary returns the stored semantic summary for a node ID.
// Returns "" if no summary has been ingested for this node.
Summary(projectID, nodeID string) string
// Available returns true if the configured LLM backend is reachable.
Available() bool
// ModelName returns the configured LLM model tag.
ModelName() string
// EnsureModel checks if the configured model is present locally; if not,
// it pulls it (Ollama registry for ollama, HuggingFace GGUF for local).
// Returns nil when the model is ready, an error if the download fails.
EnsureModel(ctx context.Context, w io.Writer) error
// BuildContextPacket assembles a structured Context Packet for an agent.
// Returns nil (with no error) when the Brain is unavailable — callers fall
// back to raw Synapses context unchanged.
BuildContextPacket(ctx context.Context, req ContextPacketRequest) (*ContextPacket, error)
// LogDecision records an agent's completed work and updates co-occurrence
// patterns in brain.sqlite. Non-fatal; errors are returned but do not
// block the calling agent.
LogDecision(ctx context.Context, req DecisionRequest) error
// SetSDLCPhase persists the project's current SDLC phase.
SetSDLCPhase(phase SDLCPhase, agentID string) error
// SetQualityMode persists the project's current quality mode.
SetQualityMode(mode QualityMode, agentID string) error
// GetSDLCConfig returns the current SDLC config.
GetSDLCConfig() SDLCConfig
// GetPatterns returns learned co-occurrence patterns sorted by confidence.
// If trigger is non-empty, only patterns with that trigger are returned.
// limit caps the number of results (0 = default of 20).
GetPatterns(trigger string, limit int) []PatternHint
// Prune strips boilerplate (navigation, ads, footers) from raw web page text
// using the Tier 0 (0.8B) model. Returns cleaned technical content.
// Falls back to returning the original content if the LLM is unavailable.
Prune(ctx context.Context, content string) (string, error)
// Memorize synthesizes a session transcript into persistent memory entries
// and code annotations. Powered by the Archivist fine-tuned model (T2, cold standby).
// Returns empty response (no error) when the Archivist LLM is unavailable.
Memorize(ctx context.Context, req archivist.MemorizeRequest) (archivist.MemorizeResponse, error)
// Generate sends a raw prompt to the brain's primary LLM and returns the
// response. Used for one-off LLM calls (e.g., brain-enhanced drift summaries)
// that don't fit into the ingest/enrich/guardian pipeline.
// Returns ("", err) if the LLM is unavailable.
Generate(ctx context.Context, prompt string) (string, error)
// UpsertADR creates or updates an Architectural Decision Record.
UpsertADR(req ADRRequest) error
// GetADR returns the ADR with the given ID.
GetADR(id string) (ADR, error)
// AllADRs returns all ADRs ordered by updated_at descending.
AllADRs() ([]ADR, error)
// GetADRsForFile returns accepted ADRs whose linked_files patterns match the given file path.
GetADRsForFile(filePath string, limit int) ([]ADR, error)
// QueryDecisions returns up to limit decision log entries, optionally
// filtered by entityName (empty = all recent), ordered by created_at DESC.
QueryDecisions(ctx context.Context, entityName string, limit int) ([]DecisionLogEntry, error)
}
Brain is the public interface for the Thinking Brain. All methods are safe for concurrent use. Use New() to create a configured instance. Use NullBrain when the brain is disabled — all methods return zero values.
func New ¶
func New(cfg config.BrainConfig) Brain
New creates a fully-configured Brain from cfg. Returns NullBrain if cfg.Enabled is false. Returns NullBrain (with logged warning) if the store cannot be opened.
type BrainStatsProvider ¶
type BrainStatsProvider interface {
BrainStats() map[string]interface{}
}
BrainStatsProvider exposes cumulative telemetry counters for health dashboards. Type-assert Brain to this interface to access stats.
type ClaimInput ¶
type ClaimInput struct {
AgentID string `json:"agent_id"`
Scope string `json:"scope"`
ScopeType string `json:"scope_type"`
ExpiresAt string `json:"expires_at,omitempty"` // RFC3339
}
ClaimInput is a single work claim from another agent.
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client wraps the in-process Brain implementation. It exposes the same method signatures as the former HTTP client so all callers compile without changes. Create with NewInProcess; always non-nil (uses NullBrain on failure).
Background scheduling: when the brain is enabled, Client creates a SystemPulse, a ModelManager, and a Scheduler. Low-priority background tasks (Ingest) are submitted to the Scheduler as P2 tasks and executed by the drain goroutine only when system health is Green AND the ModelManager confirms a model can be loaded. High-priority P0 tasks (BuildContextPacket, ExplainViolation) check ShouldDegrade() before invoking the LLM to fast-fail under resource pressure.
func NewClient (deprecated) ¶
func NewInProcess ¶
func NewInProcess(cfg *brainconfig.BrainConfig) *Client
NewInProcess creates a Client backed by an in-process Brain. If cfg is nil or cfg.Enabled is false, returns a Client wrapping NullBrain (all methods return zero values). Never returns nil.
When enabled, NewInProcess starts a SystemPulse (health monitor) and a Scheduler (priority task queue). Both are stopped when Close() is called.
func (*Client) Available ¶
Available reports whether the brain LLM backend is accessible. Implements the federation.BrainSummaryProvider interface.
func (*Client) BrainHealth ¶
BrainHealth returns structured per-tier health data for session_init. Returns nil if the underlying Brain does not implement BrainStatsProvider (e.g. NullBrain when brain is disabled).
func (*Client) BuildContextPacket ¶
func (c *Client) BuildContextPacket(ctx context.Context, req ContextPacketRequest) *ContextPacket
BuildContextPacket builds and returns an enriched context packet.
Returns nil when:
- The brain is unavailable or returns an error.
- System health is Red, or Yellow with no model loaded (ShouldDegrade). Callers fall back to raw Synapses context unchanged.
func (*Client) Close ¶
func (c *Client) Close()
Close shuts down the in-process brain, scheduler, and system pulse, releasing all associated resources.
func (*Client) ExplainViolation ¶
ExplainViolation returns (explanation, fix) for an architecture violation.
Returns ("", "") when:
- The brain is unavailable.
- System health warrants degradation (ShouldDegrade returns true).
func (*Client) Generate ¶
Generate sends a prompt to the brain's LLM and returns the raw response. Returns ("", nil) if brain is unavailable or system health warrants degradation. Used for brain-enhanced drift summaries in the federation resolver.
func (*Client) GenerateHypothetical ¶
GenerateHypothetical generates a hypothetical document for HyDE-based retrieval.
func (*Client) GetSummary ¶
GetSummary returns the cached summary for nodeID, or "" if not yet summarized.
func (*Client) HealthCheck ¶
HealthCheck returns ("ok", nil) when the brain is available, or an error when not.
func (*Client) Ingest ¶
func (c *Client) Ingest(_ context.Context, req IngestRequest)
Ingest submits a code node for semantic summarization.
The request is enqueued as a P2 (IDLE priority) task via the Scheduler and executed by the background drain goroutine when system health is Green. Under Yellow or Red health, the task is deferred up to 15 minutes.
The caller's ctx is intentionally not forwarded to the queued fn — the context may expire before the task is eligible to run. The queued fn uses a fresh background context so the LLM call succeeds when the drain goroutine fires.
func (*Client) ListInstalledModels ¶
ListInstalledModels returns the names of all Ollama models installed locally. Returns nil when Ollama is not configured or the query fails.
func (*Client) LogDecision ¶
func (c *Client) LogDecision(ctx context.Context, req DecisionRequest)
LogDecision records a reasoning decision. Fire-and-forget.
func (*Client) Memorize ¶
func (c *Client) Memorize(ctx context.Context, req archivist.MemorizeRequest) (archivist.MemorizeResponse, error)
Memorize synthesizes a session transcript into persistent memory entries. Returns empty response (no error) when the Archivist LLM is unavailable.
func (*Client) Prune ¶
Prune uses the Tier 0 LLM to extract core technical content from raw text (e.g. web pages, over-budget context packets), discarding boilerplate. Returns the original content unchanged if brain unavailable.
func (*Client) QueryDecisions ¶
func (c *Client) QueryDecisions(ctx context.Context, entityName string, limit int) ([]DecisionLogEntry, error)
QueryDecisions returns up to limit decision log entries, optionally filtered by entityName (empty string = all), ordered by created_at DESC.
func (*Client) ScheduleNLClassification ¶
func (c *Client) ScheduleNLClassification(req NLClassifyRequest, applyFn func([]NLClassifyResult))
ScheduleNLClassification enqueues a P1 brain task that classifies the entity type of each NLCandidate using the LLM and calls applyFn with the results. applyFn is called from within the scheduler goroutine; it must be safe to run concurrently with graph reads, but not with concurrent graph writes (the watcher's reparseMu serialises graph mutations).
No-op when:
- The candidate list is empty.
- The brain is unavailable (NullBrain).
- System health warrants P1 deferral (task is queued; applyFn runs later).
The caller's context is NOT forwarded to the queued task — it may expire before the P1 scheduler fires. A fresh background context is used instead.
func (*Client) SetPhase ¶
func (c *Client) SetPhase(_ context.Context, req SetPhaseRequest) (*SDLCConfig, error)
SetPhase updates the active SDLC phase. Returns the updated SDLCConfig.
func (*Client) SetQualityMode ¶
func (c *Client) SetQualityMode(_ context.Context, mode QualityMode) (*SDLCConfig, error)
SetQualityMode updates the active quality mode. Returns the updated SDLCConfig.
func (*Client) Summary ¶
Summary returns the brain-generated summary for a node, scoped by projectID. Implements the federation.BrainSummaryProvider interface.
func (*Client) SystemUnderRAMPressure ¶
SystemUnderRAMPressure returns true when the system health is Yellow or Red — indicating that available RAM is below 3 GB (Yellow) or 1.5 GB (Red). The embedding background pass should be deferred when this returns true to avoid OOM during concurrent indexing + embedding on memory-constrained machines.
Returns false when the pulse is nil (NullBrain / brain disabled) so that the embed pass always runs when the brain is not configured.
type ConstraintItem ¶
type ConstraintItem struct {
RuleID string `json:"rule_id"`
Severity string `json:"severity"` // "error" | "warning"
Description string `json:"description"`
// Hint is a cached fix suggestion (from violation_cache or derived from description).
Hint string `json:"hint,omitempty"`
}
ConstraintItem is a single architectural rule the agent must respect.
type ContextPacket ¶
type ContextPacket struct {
// Header
AgentID string `json:"agent_id,omitempty"`
EntityName string `json:"entity_name"`
EntityType string `json:"entity_type,omitempty"`
GeneratedAt string `json:"generated_at"`
Phase SDLCPhase `json:"phase"`
QualityMode QualityMode `json:"quality_mode"`
// Section 1: Semantic Focus (fast path — SQLite only, no LLM)
// 1-sentence intent summary for the root entity.
RootSummary string `json:"root_summary,omitempty"`
// Summaries for key dependencies (entityName → summary). Replaces raw code.
DependencySummaries map[string]string `json:"dependency_summaries,omitempty"`
// Section 2: Architectural Insight (LLM path — optional, 2-3s)
Insight string `json:"insight,omitempty"`
Concerns []string `json:"concerns,omitempty"`
// Section 3: Architectural Constraints
ActiveConstraints []ConstraintItem `json:"active_constraints,omitempty"`
// Section 4: Team Coordination
TeamStatus []AgentStatus `json:"team_status,omitempty"`
// Section 5: Quality Gate — concrete checklist for current phase+mode
QualityGate QualityGate `json:"quality_gate"`
// Section 6: Learned Patterns ("when editing X, also check Y")
PatternHints []PatternHint `json:"pattern_hints,omitempty"`
// Section 7: Phase Guidance — what the agent should do next
PhaseGuidance string `json:"phase_guidance,omitempty"`
// LLMUsed indicates whether a live LLM call was made during packet assembly.
// False means all data came from brain.sqlite (sub-millisecond path).
LLMUsed bool `json:"llm_used,omitempty"`
// PacketQuality is a 0.0–1.0 heuristic reflecting how complete this packet is.
// 0.0 = no summaries ingested yet; 0.5 = summaries present, no insight; 1.0 = full.
// Agents can use this to decide whether to request a follow-up LLM enrichment pass.
PacketQuality float64 `json:"packet_quality"`
// GraphWarnings are actionable warnings derived from graph topology.
// Examples: "High blast radius (12 callers)", "No tests found for this file".
// These supplement the LLM Insight with deterministic, always-available guidance.
GraphWarnings []string `json:"graph_warnings,omitempty"`
// ComplexityScore is a dimensionless topology-derived risk indicator:
// (fanIn + fanOut) * (1 + fanOut/10.0). Always populated when the enricher ran,
// regardless of Ollama availability. 0.0 = isolated/leaf node.
ComplexityScore float64 `json:"complexity_score,omitempty"`
// DeterministicPath is true when Phase and ComplexityScore were derived from
// graph topology without an LLM call. False only when the enricher was not invoked.
DeterministicPath bool `json:"deterministic_path,omitempty"`
}
ContextPacket is the central output of the Brain's context builder. It is a purpose-built, structured document assembled for a specific agent and phase — replacing raw graph nodes with semantic summaries and actionable guidance.
A nil ContextPacket means the Brain is unavailable; callers use raw context as fallback.
type ContextPacketRequest ¶
type ContextPacketRequest struct {
AgentID string `json:"agent_id,omitempty"`
ProjectID string `json:"project_id,omitempty"`
Snapshot SynapsesSnapshotInput `json:"snapshot"`
Phase SDLCPhase `json:"phase,omitempty"` // "" = use stored project phase
QualityMode QualityMode `json:"quality_mode,omitempty"` // "" = use stored project mode
EnableLLM bool `json:"enable_llm"` // true = allow LLM insight (~2s)
}
ContextPacketRequest is the input to Brain.BuildContextPacket(). All fields are optional — empty Phase/QualityMode fall back to the stored project config.
type CoordinateRequest ¶
type CoordinateRequest struct {
// NewAgentID is the agent trying to claim work.
NewAgentID string `json:"new_agent_id"`
// NewScope is the scope the new agent wants to claim.
NewScope string `json:"new_scope"`
// ConflictingClaims are existing claims that overlap with NewScope.
ConflictingClaims []WorkClaim `json:"conflicting_claims"`
}
CoordinateRequest describes an agent registration that conflicts with existing claims.
type CoordinateResponse ¶
type CoordinateResponse struct {
// Suggestion is a plain-English recommendation for the new agent.
Suggestion string `json:"suggestion"`
// AlternativeScope is a concrete non-conflicting scope the new agent could claim instead.
AlternativeScope string `json:"alternative_scope"`
// Degraded is true when the orchestrate tier's circuit tripped and a
// lower-tier fallback model was used for the coordination suggestion.
Degraded bool `json:"degraded,omitempty"`
}
CoordinateResponse suggests how to distribute work to avoid the conflict.
type DecisionLogEntry ¶
type DecisionLogEntry struct {
ID string `json:"id"`
AgentID string `json:"agent_id"`
Phase string `json:"phase"`
EntityName string `json:"entity_name"`
Action string `json:"action"`
RelatedEntities []string `json:"related_entities,omitempty"`
Outcome string `json:"outcome"`
Notes string `json:"notes,omitempty"`
CreatedAt string `json:"created_at"`
}
DecisionLogEntry is a single row from the brain's decision_log table.
type DecisionRequest ¶
type DecisionRequest struct {
AgentID string `json:"agent_id"`
Phase string `json:"phase"`
EntityName string `json:"entity_name"`
Action string `json:"action"` // "edit"|"test"|"review"|"fix_violation"
RelatedEntities []string `json:"related_entities"`
Outcome string `json:"outcome"` // "success"|"violation"|"reverted"|""
Notes string `json:"notes"`
}
DecisionRequest feeds the Brain's co-occurrence learning loop. Agents call LogDecision after completing work on an entity.
type EnrichRequest ¶
type EnrichRequest struct {
// ProjectID scopes summary lookups to a specific project.
ProjectID string `json:"project_id,omitempty"`
// RootID is the graph node ID of the queried entity.
RootID string `json:"root_id"`
// RootName is the entity name (e.g., "AuthService").
RootName string `json:"root_name"`
// RootType is the node type (e.g., "struct").
RootType string `json:"root_type"`
// AllNodeIDs contains every node ID in the carved subgraph, for summary lookup.
AllNodeIDs []string `json:"all_node_ids"`
// CalleeNames are names of entities the root calls directly.
CalleeNames []string `json:"callee_names"`
// CallerNames are names of entities that call the root.
CallerNames []string `json:"caller_names"`
// RelatedNames are other nodes in the subgraph.
RelatedNames []string `json:"related_names"`
// TaskContext is optional context from a linked task (from task_id).
TaskContext string `json:"task_context,omitempty"`
// RootFile is the file path of the root entity; used for SDLC phase inference
// and domain focus in the enricher's deterministic pass.
RootFile string `json:"root_file,omitempty"`
// FanIn is the total caller count (may exceed len(CallerNames) when capped).
// Used for the deterministic complexity score calculation.
FanIn int `json:"fan_in,omitempty"`
}
EnrichRequest carries the carved subgraph data from a get_context call.
type EnrichResponse ¶
type EnrichResponse struct {
// Insight is a 2-sentence analysis of the entity's architectural role.
Insight string `json:"insight"`
// Concerns are specific observations (e.g., "handles auth tokens", "rate limit boundary").
Concerns []string `json:"concerns"`
// Summaries maps nodeID → 1-sentence summary for nodes that have been ingested.
// These are loaded from brain.sqlite (no LLM call needed — fast lookup).
Summaries map[string]string `json:"summaries"`
// LLMUsed is true when the LLM was called to generate the Insight field.
// False means Insight is empty (LLM unavailable, feature disabled, or timed out).
LLMUsed bool `json:"llm_used"`
// Degraded is true when the primary enrich tier's circuit tripped and a
// lower-tier fallback model was used. Summaries are still present; Insight
// may be weaker. Callers can use this to decide whether to re-request later.
Degraded bool `json:"degraded,omitempty"`
}
EnrichResponse is added to the get_context response.
type HealthLevel ¶
type HealthLevel int
HealthLevel classifies the current system resource state.
const (
	// HealthGreen indicates ample resources: RAM > 3 GB free and CPU < 0.7.
	// All work can proceed normally.
	HealthGreen HealthLevel = iota
	// HealthYellow indicates moderate pressure: RAM 1.5–3 GB free or CPU 0.7–0.9.
	// Prefer P0 (critical) work; defer lower-priority tasks.
	HealthYellow
	// HealthRed indicates resource exhaustion: RAM < 1.5 GB free or CPU > 0.9.
	// Degrade all work; shed load where possible.
	HealthRed
)
func (HealthLevel) String ¶
func (h HealthLevel) String() string
String returns a human-readable label for the health level.
type IngestRequest ¶
type IngestRequest struct {
// ProjectID scopes summaries to a specific project, preventing collisions
// when multiple projects have identically-named entities (e.g., "auth.Login").
ProjectID string `json:"project_id,omitempty"`
// NodeID is the stable graph node identifier (e.g., "pkg:func:AuthService.Validate").
NodeID string `json:"node_id"`
// NodeName is the short name of the entity (e.g., "Validate").
NodeName string `json:"node_name"`
// NodeType is one of: "function", "method", "struct", "interface", "variable".
NodeType string `json:"node_type"`
// Package is the Go/language package name.
Package string `json:"package"`
// Code is the source snippet — capped at 500 chars to keep prompts small.
Code string `json:"code"`
}
IngestRequest carries a code snippet for semantic summarization. Called on file-save events for changed functions/methods/structs.
type IngestResponse ¶
type IngestResponse struct {
NodeID string `json:"node_id"`
Summary string `json:"summary"` // 1-sentence intent summary
Tags []string `json:"tags,omitempty"` // 1-3 domain labels, e.g. ["auth","http"]
}
IngestResponse is returned after summarization.
type ModelManager ¶
type ModelManager struct {
// contains filtered or unexported fields
}
ModelManager decides which model to use for background LLM tasks and pre-loads it when there is sufficient free RAM. It is safe for concurrent use.
Decision logic in EnsureModel:
- Model already loaded in Ollama → use it at no extra RAM cost.
- Sufficient RAM for preferred model → warm it up, return preferred.
- Preferred is 4B, only 2B fits → warm up 2B fallback, return fallback name.
NOTE: Sprint 17 #4 (fallback chains) will make OllamaClients use the returned fallback name to route inference to the 2B tier. Until then, EnsureModel's return value is used as a go/no-go signal only; actual inference uses whichever model the OllamaClients are configured with (primary). The warmup pre-positions the fallback model so Ollama can swap quickly once #4 wires the routing.
- Insufficient RAM for any model → return "" (drain cycle deferred).
When pulse is nil (NullBrain / testing) all RAM checks are skipped and EnsureModel returns the primary model unconditionally.
func NewModelManager ¶
func NewModelManager(pulse *SystemPulse, cfg brainconfig.BrainConfig) *ModelManager
NewModelManager creates a ModelManager configured for cfg.
pulse must be the same *SystemPulse passed to the Scheduler so that both share the same system-state snapshot. Pass nil to disable RAM gating (useful in unit tests without a live system monitor).
func (*ModelManager) EnsureModel ¶
func (m *ModelManager) EnsureModel(ctx context.Context) string
EnsureModel returns the Ollama model name to use for the next drain cycle, or "" if no model can be loaded given current RAM constraints.
When the returned model is non-empty, the caller may proceed with LLM tasks. When the returned model is "", the caller should skip this drain cycle and retry on the next tick; tasks remain in the deferred queue.
type NLCandidate ¶
type NLCandidate struct {
// Name is the normalised candidate name (lowercase, trimmed).
Name string
// Context is up to 200 chars of surrounding text for classification.
Context string
// NodeID is the existing graph NodeID of the knowledge node to update.
// Empty means no node was created in Tier 0/1 (should not happen in practice).
NodeID string
}
NLCandidate is a single entity candidate for Tier 2 LLM classification. It carries the candidate name and its surrounding context sentence so the LLM can infer entity type and relationship type from minimal context.
type NLClassifyRequest ¶
type NLClassifyRequest struct {
// FilePath is the source markdown file, used as the scheduler dedup key.
FilePath string
// Candidates are the unresolved entity candidates from Tier 0/1 extraction.
Candidates []NLCandidate
}
NLClassifyRequest bundles the file path and candidates for a single ScheduleNLClassification call. Used as a P1 scheduler task key.
type NLClassifyResult ¶
type NLClassifyResult struct {
// NodeID matches the input NLCandidate.NodeID.
NodeID string
// NodeType is one of: concept | entity | artifact | decision.
// Empty means the LLM returned an unrecognised value; caller keeps the default.
NodeType string
// Description is a one-sentence LLM-generated summary of the entity.
// Empty when the LLM is unavailable or returns an invalid response.
Description string
}
NLClassifyResult is the LLM's classification of a single NLCandidate.
type NullBrain ¶
type NullBrain struct{}
NullBrain is a no-op Brain implementation used when the brain is disabled or when the LLM backend is unavailable. All methods return zero values without errors, so callers never need to guard against a nil Brain.
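A sketch of the pattern NullBrain enables: callers hold a non-nil interface value and never branch on availability. The Brain interface here is abbreviated to a single method for illustration.

```go
package main

import "fmt"

// PatternHint as documented in this package.
type PatternHint struct {
	Trigger    string
	CoChange   string
	Reason     string
	Confidence float64
}

// Abbreviated one-method view of the Brain interface, for illustration.
type Brain interface {
	GetPatterns(file string, limit int) []PatternHint
}

// NullBrain returns zero values so callers never guard against nil.
type NullBrain struct{}

func (n *NullBrain) GetPatterns(string, int) []PatternHint { return nil }

func main() {
	var b Brain = &NullBrain{} // brain disabled: still a usable, non-nil value
	hints := b.GetPatterns("auth.go", 3)
	fmt.Println(len(hints)) // 0 — the graph-only path proceeds without hints
}
```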
func (*NullBrain) BuildContextPacket ¶
func (n *NullBrain) BuildContextPacket(_ context.Context, _ ContextPacketRequest) (*ContextPacket, error)
BuildContextPacket returns nil — the caller should fall back to raw Synapses context.
func (*NullBrain) Coordinate ¶
func (n *NullBrain) Coordinate(_ context.Context, _ CoordinateRequest) (CoordinateResponse, error)
Coordinate is a no-op implementation.
func (*NullBrain) Enrich ¶
func (n *NullBrain) Enrich(_ context.Context, _ EnrichRequest) (EnrichResponse, error)
Enrich is a no-op implementation.
func (*NullBrain) EnsureModel ¶
EnsureModel is a no-op implementation.
func (*NullBrain) ExplainViolation ¶
func (n *NullBrain) ExplainViolation(_ context.Context, _ ViolationRequest) (ViolationResponse, error)
ExplainViolation is a no-op implementation.
func (*NullBrain) GetADRsForFile ¶
GetADRsForFile returns nil — no brain is configured.
func (*NullBrain) GetPatterns ¶
func (n *NullBrain) GetPatterns(_ string, _ int) []PatternHint
GetPatterns returns nil — no patterns are stored when brain is disabled.
func (*NullBrain) GetSDLCConfig ¶
func (n *NullBrain) GetSDLCConfig() SDLCConfig
GetSDLCConfig returns safe development-phase defaults.
func (*NullBrain) Ingest ¶
func (n *NullBrain) Ingest(_ context.Context, req IngestRequest) (IngestResponse, error)
Ingest is a no-op implementation.
func (*NullBrain) LogDecision ¶
func (n *NullBrain) LogDecision(_ context.Context, _ DecisionRequest) error
LogDecision is a no-op implementation.
func (*NullBrain) Memorize ¶
func (n *NullBrain) Memorize(_ context.Context, _ archivist.MemorizeRequest) (archivist.MemorizeResponse, error)
Memorize is a no-op implementation.
func (*NullBrain) QueryDecisions ¶
QueryDecisions returns nil — no brain is configured.
func (*NullBrain) SetQualityMode ¶
func (n *NullBrain) SetQualityMode(_ QualityMode, _ string) error
SetQualityMode is a no-op implementation.
func (*NullBrain) SetSDLCPhase ¶
SetSDLCPhase is a no-op implementation.
func (*NullBrain) UpsertADR ¶
func (n *NullBrain) UpsertADR(_ ADRRequest) error
UpsertADR is a no-op implementation.
type PatternHint ¶
type PatternHint struct {
Trigger string `json:"trigger"`
CoChange string `json:"co_change"`
Reason string `json:"reason,omitempty"`
Confidence float64 `json:"confidence"`
}
PatternHint is a learned co-occurrence: "when editing X, also check Y".
type QualityGate ¶
type QualityGate struct {
RequireTests bool `json:"require_tests"`
RequireDocs bool `json:"require_docs"`
RequirePRCheck bool `json:"require_pr_check"` // enterprise only
Checklist []string `json:"checklist"`
}
QualityGate lists concrete requirements for the current phase+mode combination. The agent treats Checklist as an ordered to-do list before declaring work done.
type QualityMode ¶
type QualityMode string
QualityMode controls how strict the quality gate is for an agent's work.
const (
	QualityQuick      QualityMode = "quick"      // prototype: just make it work
	QualityStandard   QualityMode = "standard"   // default: unit tests required
	QualityEnterprise QualityMode = "enterprise" // full: tests + docs + PR checklist
)
Quality mode constants.
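The constant comments map directly onto QualityGate requirements. A sketch of that mapping — the real gates are produced by the sdlc package; this only restates the comments above, and gateFor is an illustrative helper name:

```go
package main

import "fmt"

type QualityMode string

const (
	QualityQuick      QualityMode = "quick"
	QualityStandard   QualityMode = "standard"
	QualityEnterprise QualityMode = "enterprise"
)

// QualityGate as documented in this package (Checklist omitted here).
type QualityGate struct {
	RequireTests   bool
	RequireDocs    bool
	RequirePRCheck bool
}

// gateFor restates the constant comments: quick requires nothing,
// standard requires tests, enterprise requires tests + docs + PR checklist.
func gateFor(m QualityMode) QualityGate {
	switch m {
	case QualityEnterprise:
		return QualityGate{RequireTests: true, RequireDocs: true, RequirePRCheck: true}
	case QualityStandard:
		return QualityGate{RequireTests: true}
	default: // quick
		return QualityGate{}
	}
}

func main() {
	fmt.Println(gateFor(QualityStandard).RequireTests) // true
	fmt.Println(gateFor(QualityQuick).RequireTests)    // false
}
```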
type RuleInput ¶
type RuleInput struct {
RuleID string `json:"rule_id"`
Severity string `json:"severity"`
Description string `json:"description"`
}
RuleInput is a single architectural rule reference.
type SDLCConfig ¶
type SDLCConfig struct {
Phase SDLCPhase `json:"phase"`
QualityMode QualityMode `json:"quality_mode"`
UpdatedAt string `json:"updated_at"`
UpdatedBy string `json:"updated_by,omitempty"`
}
SDLCConfig is the current project SDLC state (returned by SetSDLCPhase / SetQualityMode).
type SDLCPhase ¶
type SDLCPhase string
SDLCPhase identifies the current stage in the software development lifecycle.
type Scheduler ¶
type Scheduler struct {
// contains filtered or unexported fields
}
Scheduler routes brain tasks by priority and system health.
P0 tasks (user-waiting) bypass the scheduler — callers use ShouldDegrade() to decide whether to skip the LLM call and return a heuristic response.
P1 and P2 tasks go through Submit() and are executed serially by the internal drain goroutine. Only one task runs at a time — no concurrent Ollama requests.
When a ModelManager is attached via WithModelManager, the drain goroutine calls EnsureModel before executing each batch. If EnsureModel returns "" (insufficient RAM to load any model), the batch is skipped and retried on the next tick. Tasks remain in the deferred queue — they are not dropped.
Lifecycle: NewScheduler → (optional WithModelManager) → Start → (Submit / ShouldDegrade calls) → Stop.
func NewScheduler ¶
func NewScheduler(pulse *SystemPulse) *Scheduler
NewScheduler creates a Scheduler. Call Start() before submitting tasks.
If pulse is nil, Submit() runs fn immediately in a new goroutine (useful for NullBrain / testing paths where system monitoring is not available).
func (*Scheduler) QueueSize ¶
QueueSize returns the current number of pending deferred tasks. Intended for observability and testing.
func (*Scheduler) ShouldDegrade ¶
ShouldDegrade reports whether the current system state warrants skipping an LLM call for a P0 (user-waiting) task.
Returns true when:
- Health is Red (RAM < 1.5 GB or CPU > 0.9): loading a model would risk OOM.
- Health is Yellow AND no model is currently loaded: loading would consume RAM that is already under pressure.
When ShouldDegrade returns true, callers should return a heuristic/template response immediately rather than issuing an Ollama request.
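The two conditions reduce to a small predicate. The HealthLevel constant names below follow the Green/Yellow/Red levels described in this package but are written out here as an illustrative local type:

```go
package main

import "fmt"

type HealthLevel int

const (
	HealthGreen HealthLevel = iota
	HealthYellow
	HealthRed
)

// shouldDegrade restates the documented rule: degrade when Red, or when
// Yellow with no model resident (loading one would add RAM pressure).
func shouldDegrade(health HealthLevel, loadedModel string) bool {
	return health == HealthRed || (health == HealthYellow && loadedModel == "")
}

func main() {
	fmt.Println(shouldDegrade(HealthYellow, ""))         // true: Yellow, nothing loaded
	fmt.Println(shouldDegrade(HealthYellow, "4b-model")) // false: model already resident
}
```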
func (*Scheduler) Start ¶
func (s *Scheduler) Start()
Start launches the background drain goroutine. Safe to call multiple times — subsequent calls after the first are no-ops (sync.Once).
func (*Scheduler) Stop ¶
func (s *Scheduler) Stop()
Stop signals the drain goroutine to exit and waits for it to finish. Any tasks still in the deferred queue at shutdown time are silently dropped. Safe to call multiple times and safe to call without a prior Start().
func (*Scheduler) Submit ¶
func (s *Scheduler) Submit(key string, priority TaskPriority, fn func())
Submit enqueues a P1 or P2 task for deferred execution by the drain goroutine.
If pulse is nil (no system monitoring), fn is run immediately in a new goroutine — preserving the fire-and-forget contract for NullBrain / test paths.
Submit never blocks. It always returns immediately, regardless of queue state. If the queue is full and cannot evict, the task is silently dropped. If Stop() has already been called, the task is silently dropped (drain goroutine has exited; enqueuing would orphan the task permanently).
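A sketch of the queue contract Submit describes — dedupe by key, never block, and evict a P2 task before dropping new work when full. The real queue is unexported; this slice-based model is illustrative only.

```go
package main

import "fmt"

type TaskPriority int

const (
	PriorityP1 TaskPriority = 1
	PriorityP2 TaskPriority = 2
)

type task struct {
	key      string
	priority TaskPriority
}

// submit models the documented contract: dedupe by key, never block,
// and when the queue is full evict a P2 task before dropping the new one.
func submit(q []task, capacity int, t task) []task {
	for i, existing := range q {
		if existing.key == t.key { // dedupe: replace the pending entry
			q[i] = t
			return q
		}
	}
	if len(q) >= capacity {
		for i, existing := range q {
			if existing.priority == PriorityP2 { // P2 tasks are evicted first
				q = append(q[:i], q[i+1:]...)
				return append(q, t)
			}
		}
		return q // full of P1 work: silently drop the new task
	}
	return append(q, t)
}

func main() {
	q := []task{{"ingest:a.md", PriorityP2}}
	q = submit(q, 1, task{"archivist", PriorityP1}) // full: the P2 entry is evicted
	fmt.Println(len(q), q[0].key)
}
```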
func (*Scheduler) WithModelManager ¶
func (s *Scheduler) WithModelManager(mgr *ModelManager) *Scheduler
WithModelManager attaches a ModelManager to the Scheduler. The drain goroutine will call ModelManager.EnsureModel before executing each eligible task batch, skipping the batch when no model can be loaded.
Call before Start() to avoid a data race. Returns s for optional chaining:
sched := NewScheduler(pulse).WithModelManager(mgr)
sched.Start()
type SetPhaseRequest ¶
type SetPhaseRequest struct {
Phase string `json:"phase"`
}
SetPhaseRequest is the payload for setting the SDLC phase.
type SnapshotInput ¶
type SnapshotInput = SynapsesSnapshotInput
SnapshotInput is a backward-compat alias for SynapsesSnapshotInput.
type SynapsesSnapshotInput ¶
type SynapsesSnapshotInput struct {
RootNodeID string `json:"root_node_id,omitempty"`
RootName string `json:"root_name"`
RootType string `json:"root_type,omitempty"`
RootFile string `json:"root_file,omitempty"` // used for constraint hint lookups
CalleeNames []string `json:"callee_names,omitempty"` // what root calls directly
CallerNames []string `json:"caller_names,omitempty"` // what calls root directly
RelatedNames []string `json:"related_names,omitempty"` // transitive neighbours
ApplicableRules []RuleInput `json:"applicable_rules,omitempty"` // rules whose pattern matches RootFile
ActiveClaims []ClaimInput `json:"active_claims,omitempty"` // work claims from other agents
TaskContext string `json:"task_context,omitempty"`
TaskID string `json:"task_id,omitempty"`
HasTests bool `json:"has_tests"` // whether *_test.go exists for root file
FanIn int `json:"fan_in"` // total caller count (may exceed len(CallerNames))
RootDoc string `json:"root_doc,omitempty"` // AST doc comment; fallback summary when brain.sqlite has no summary
}
SynapsesSnapshotInput carries the raw structural data from a Synapses get_context call. Synapses (or the HTTP caller) populates this; the Brain uses it to build the packet.
type SystemPulse ¶
type SystemPulse struct {
// contains filtered or unexported fields
}
SystemPulse samples system resources on a background goroutine and exposes the latest snapshot via Current(). It is safe for concurrent use.
Lifecycle: NewSystemPulse → (optional WithOllamaURL) → Start → (use Current) → Stop. SystemPulse is NOT restartable: once Stop() is called, Start() is a no-op and the pulse remains stopped. Create a new instance to restart.
func NewSystemPulse ¶
func NewSystemPulse() *SystemPulse
NewSystemPulse creates a new SystemPulse. Call Start() to begin sampling. The pulse is ready for use immediately; Current() returns a zero-value SystemState until Start() is called.
func (*SystemPulse) Current ¶
func (p *SystemPulse) Current() SystemState
Current returns a copy of the most recent SystemState snapshot. Safe for concurrent use; never blocks. Returns a zero-value SystemState (SampledAt.IsZero() == true) if called before Start().
func (*SystemPulse) Start ¶
func (p *SystemPulse) Start()
Start launches the background sampling goroutine. It is safe to call multiple times — subsequent calls after the first are no-ops (sync.Once). A single initial sample is taken synchronously before the goroutine starts so that Current() always returns a non-zero SampledAt after Start() returns.
func (*SystemPulse) Stop ¶
func (p *SystemPulse) Stop()
Stop signals the background goroutine to exit and waits for it to finish. It is safe to call multiple times and safe to call without a prior Start() — in both cases it returns promptly without blocking.
func (*SystemPulse) WithOllamaURL ¶
func (p *SystemPulse) WithOllamaURL(url string) *SystemPulse
WithOllamaURL overrides the Ollama /api/ps URL used for model residency polling. Call before Start(). The url must be the full URL including path, e.g. "http://gpu-server:11434/api/ps".
This is necessary when Ollama runs on a non-default host or port so that OllamaModelLoaded in SystemState reflects the correct Ollama instance.
type SystemState ¶
type SystemState struct {
// AvailableRAM is the amount of RAM free for allocation, in bytes.
AvailableRAM int64
// CPULoadNorm is the normalised 1-minute CPU load average in [0.0, 1.0].
// Computed as load1 / numCPU, clamped to 1.0.
CPULoadNorm float64
// OllamaModelLoaded is the name of the model currently resident in Ollama,
// or "" if Ollama is not running or no model is loaded.
OllamaModelLoaded string
// Health is the derived health classification for the current state.
Health HealthLevel
// SampledAt is the wall-clock time when this state was last updated.
SampledAt time.Time
}
SystemState is a snapshot of system resource availability. All fields are safe to read without a lock (returned by value from Current()).
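CPULoadNorm's documented formula (load1 / numCPU, clamped to 1.0) is simple enough to restate directly:

```go
package main

import "fmt"

// cpuLoadNorm computes the normalised 1-minute load average exactly as
// documented for SystemState.CPULoadNorm: load1 / numCPU, clamped to 1.0.
func cpuLoadNorm(load1 float64, numCPU int) float64 {
	v := load1 / float64(numCPU)
	if v > 1.0 {
		v = 1.0
	}
	return v
}

func main() {
	fmt.Println(cpuLoadNorm(4.0, 8))  // lightly loaded 8-core box: 0.5
	fmt.Println(cpuLoadNorm(20.0, 8)) // oversubscribed: clamped to 1
}
```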
type TaskPriority ¶
type TaskPriority int
TaskPriority classifies urgency of a brain task relative to system load.
const (
	// PriorityP0 is reserved for user-waiting tasks (enrich, guardian, HyDE).
	// P0 tasks do NOT go through the scheduler queue — they call the brain directly.
	// Use ShouldDegrade() to decide whether to skip the LLM call when under pressure.
	PriorityP0 TaskPriority = 0

	// PriorityP1 tasks run when health is Green, or are deferred up to 5 min
	// under Yellow/Red. Examples: archivist (session end), navigator background.
	PriorityP1 TaskPriority = 1

	// PriorityP2 tasks run when health is Green, or are deferred up to 15 min
	// under Yellow/Red. Examples: ingest (file save), bulk descriptions.
	// P2 tasks are evicted first when the queue is full.
	PriorityP2 TaskPriority = 2
)
type TierState ¶
type TierState struct {
Open bool `json:"open"`
Failures int `json:"failures"`
CooldownRemaining float64 `json:"cooldown_remaining_s"`
}
TierState describes the current circuit-breaker state for one tier.
type TierStatusProvider ¶
TierStatusProvider is implemented by the production Brain to expose per-tier circuit-breaker status for the /v1/health/tiers endpoint. NullBrain does not implement this interface; callers should type-assert.
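A sketch of the type assertion the doc comment recommends. The method name and signature shown here (TierStatus returning map[string]TierState) are assumptions for illustration only — the real interface's method set is not reproduced in this documentation:

```go
package main

import "fmt"

// TierState as documented in this package.
type TierState struct {
	Open              bool
	Failures          int
	CooldownRemaining float64
}

// Hypothetical shape of TierStatusProvider, for illustration only.
type TierStatusProvider interface {
	TierStatus() map[string]TierState
}

type NullBrain struct{} // does not implement TierStatusProvider

// tiersOf type-asserts as the docs advise, returning nil for NullBrain
// (or any other non-implementer).
func tiersOf(b any) map[string]TierState {
	if tp, ok := b.(TierStatusProvider); ok {
		return tp.TierStatus()
	}
	return nil
}

func main() {
	fmt.Println(tiersOf(&NullBrain{}) == nil) // true: no tier data when disabled
}
```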
type ViolationExplanation ¶
type ViolationExplanation = ViolationResponse
ViolationExplanation is a backward-compat alias for ViolationResponse.
type ViolationRequest ¶
type ViolationRequest struct {
// RuleID is the unique rule identifier from synapses.json.
RuleID string `json:"rule_id"`
// RuleSeverity is "error" or "warning".
RuleSeverity string `json:"rule_severity"`
// Description is the rule's human-readable description.
Description string `json:"description"`
// SourceFile is the file that triggered the violation.
SourceFile string `json:"source_file"`
// TargetName is the entity name being imported/called in violation of the rule.
TargetName string `json:"target_name"`
}
ViolationRequest carries a single architectural rule violation.
type ViolationResponse ¶
type ViolationResponse struct {
// Explanation describes the violation in plain language.
Explanation string `json:"explanation"`
// Fix is a concrete, actionable suggestion to resolve the violation.
Fix string `json:"fix"`
// Degraded is true when the guardian tier's circuit tripped and a
// lower-tier fallback model was used for the explanation.
Degraded bool `json:"degraded,omitempty"`
}
ViolationResponse is a plain-English explanation with an actionable fix.
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| archivist | Package archivist synthesizes agent session transcripts into persistent memory entries and code annotations. |
| config | Package config provides BrainConfig loading and defaults for synapses-intelligence. |
| contextbuilder | Package contextbuilder assembles a structured Context Packet from a Synapses graph snapshot, SDLC phase/mode, and the Brain's learned data. |
| enricher | Package enricher implements the Context Enricher — Feature 2 of synapses-intelligence. |
| guardian | Package guardian implements the Rule Guardian — Feature 3 of synapses-intelligence. |
| ingestor | Package ingestor implements the Semantic Ingestor — Feature 1 of synapses-intelligence. |
| llm | Package llm provides the LLM client abstraction for synapses-intelligence. |
| orchestrator | Package orchestrator implements the Task Orchestrator — Feature 4 of synapses-intelligence. |
| pruner | Package pruner strips boilerplate from web content using the Tier 0 (0.8B) model. |
| sdlc | Package sdlc provides SDLC phase awareness and quality mode profiles. |
| store | Package store manages the brain's own SQLite database. |