Documentation ¶
Overview ¶
Package planner is the home of the two-axis work planning and dispatch subsystem (gm-s47n, docs/design/work-planning.md).
This package will eventually own:
- Layer 1 read API (OperationalContext) — gm-s47n.2.5
- Layer 3 scorers (Conflicts, Affinity) — gm-s47n.4
- Layer 4 session-health telemetry — gm-s47n.5
- Layer 5 planner UX entry points — gm-s47n.6
- Source analysis scheduling — gm-s47n.9
gm-s47n.2.1 ships the data shapes only — no behavior. Concrete triggers and decay are split across gm-s47n.2.{2,3,4,5}.
Why a single package ¶
The planner reads from existing core structs (AgentRef, Session, Workspace, Assignment in internal/core/orchestration.go), the new session_profiles table defined here, and the SourceAnalysis abstraction in internal/sourceanalysis. None of those packages should depend on planner — the dependency arrow points one way: planner → core / sourceanalysis. This keeps the planner replaceable without rippling through the data layer.
Index ¶
- Constants
- Variables
- func AgeFileProfile(in map[string]float64, halfLife int) map[string]float64
- func AgeProfile(in map[ConceptTag]float64, halfLife int) map[ConceptTag]float64
- func ConceptDrift(lifetime, lastN map[ConceptTag]float64) float64
- func DecayConcepts(events []EventContribution, halfLife int) map[ConceptTag]float64
- func DecayFiles(events []EventContribution, halfLife int) map[string]float64
- func DecayWeight(eventsSinceMostRecent int, halfLife int) float64
- func LastNConceptVector(ctx context.Context, lastBeads []core.WorkItemID, lookup BeadConceptLookup) (map[ConceptTag]float64, error)
- func MergeContribution(profile map[ConceptTag]float64, tags []ConceptTag, weight float64) map[ConceptTag]float64
- func MergeFileContribution(profile map[string]float64, paths []string, weight float64) map[string]float64
- func SavePolicyFile(path string, p DispatchPolicy) error
- type AffinityBeadInputs
- type AffinityScores
- type AffinityWeights
- type AgentLookup
- type AnalysisScopeStatus
- type AssignmentLookup
- type BeadConceptLookup
- type BeadTarget
- type ClaimEvent
- type CompletionEvent
- type ConceptTag
- type DispatchGate
- func (g *DispatchGate) AllowDispatch(sessionID string, now time.Time) (bool, string)
- func (g *DispatchGate) Inflight() int
- func (g *DispatchGate) Policy() DispatchPolicy
- func (g *DispatchGate) RecordCompletion(sessionID string)
- func (g *DispatchGate) RecordDispatch(sessionID string, now time.Time)
- func (g *DispatchGate) SetEnabled(on bool)
- func (g *DispatchGate) SetPolicy(p DispatchPolicy)
- type DispatchPolicy
- type EventContribution
- type GitScopeStatus
- type OperationalContext
- type OperationalContextReaders
- type ProfileLookup
- type ProfileStore
- func (s *ProfileStore) EnsureSchema(ctx context.Context) error
- func (s *ProfileStore) GetProfile(ctx context.Context, sessionID string) (*SessionProfile, error)
- func (s *ProfileStore) MultiGet(ctx context.Context, sessionIDs []string) ([]*SessionProfile, error)
- func (s *ProfileStore) RecordClaim(ctx context.Context, ev ClaimEvent) (*SessionProfile, error)
- func (s *ProfileStore) RecordCompletion(ctx context.Context, ev CompletionEvent) (*SessionProfile, error)
- func (s *ProfileStore) UpsertProfile(ctx context.Context, p *SessionProfile) error
- type ProfileWriter
- type RecycleDecision
- type RecycleDrivers
- type RecycleInputs
- type RecycleReason
- type ScopeStatus
- type SemanticBeadInputs
- type SemanticConflict
- type SessionHealth
- type SessionLookup
- type SessionProfile
- type WorkspaceCollision
- type WorkspaceLookup
Constants ¶
const (
	RecyclePressureThreshold         = 0.85
	RecycleDriftThreshold            = 0.70
	RecycleConceptOverlapMaxForDrift = 0.30
	RecycleTimeOnTaskHours           = 4
)
Recycle thresholds — spec §5.5. Constants so callers + tests reference the canonical values without re-typing them.
const DefaultDecayHalfLife = 5
DefaultDecayHalfLife is the number of completed bead events over which a concept or file weight halves. Half-life is in EVENTS, not wall time, per spec §4 Layer 1 — an idle session must not lose its priming overnight.
const DefaultMaxConcurrent = 4
DefaultMaxConcurrent is a fleet-wide cap on auto-dispatched sessions in flight. Holds the planner's blast radius even when the per-session rate limit is generous.
const DefaultMinIntervalPerSession = 5 * time.Minute
DefaultMinIntervalPerSession matches the spec example (§11). Operators tune via the policy file or the CLI.
const LastBeadsRingSize = 5
LastBeadsRingSize is the default size of the SessionProfile.LastBeads ring buffer. Five is large enough to compute "what did this session just touch?" recency scores without paying for the full session history every dispatch.
Variables ¶
var DefaultAffinityWeights = AffinityWeights{
ConceptOverlap: 0.30,
FileFamiliarity: 0.20,
WorkspaceMatch: 0.20,
Recency: 0.15,
Headroom: 0.15,
}
DefaultAffinityWeights — the spec defaults. Matches §5.4.
var ErrSessionLookupRequired = errors.New(
"planner.ReadOperationalContext: Sessions reader is required",
)
ErrSessionLookupRequired is returned when the Sessions reader is nil — every other reader is optional, but without a Session there is nothing to populate.
var ErrSessionNotFound = errors.New("planner: session not found")
ErrSessionNotFound is returned when SessionLookup.FindSession returns (nil, nil) for the requested id. Distinguishes "lookup succeeded, no row" from "lookup errored".
var SchemaSQL string
SchemaSQL is the DDL for the new session_profiles table — see schema.sql for the column-by-column rationale and gm-s47n.2.1.
Embedded so the migration runner (gm-s47n.2.3) can apply it alongside the existing core tables without dragging filesystem access through the planner package boundary.
Functions ¶
func AgeFileProfile ¶
func AgeFileProfile(in map[string]float64, halfLife int) map[string]float64
AgeFileProfile is the string-keyed twin of AgeProfile, used for the SessionProfile.Files map.
func AgeProfile ¶
func AgeProfile(in map[ConceptTag]float64, halfLife int) map[ConceptTag]float64
AgeProfile rescales an existing decayed profile as if every stored weight referred to one additional event back. Used by the write hooks (.2.3) when a new bead lands without changing the prior events: each existing weight gets multiplied by 0.5^(1/h), then the new bead's contribution is added on top.
Pure: returns a new map; the input is not mutated. nil input returns nil — callers can chain without nil-checking.
halfLife <= 0 falls back to DefaultDecayHalfLife.
func ConceptDrift ¶
func ConceptDrift(lifetime, lastN map[ConceptTag]float64) float64
ConceptDrift is the cosine distance between two concept vectors. Range [0, 1] for non-negative weight maps:
0 — vectors point the same direction (same topic mix)
1 — vectors are orthogonal (completely different concepts)
Empty / zero-magnitude vectors return 0 (no drift detectable from missing data — a session with no recent concept activity is not "drifting", it's idle).
Pure function. The math is identical to the standard cosine similarity ↔ distance pair, restricted to non-negative weights. Floating-point clamp guards against tiny excursions outside [0, 1] when norms are nearly identical.
func DecayConcepts ¶
func DecayConcepts(events []EventContribution, halfLife int) map[ConceptTag]float64
DecayConcepts walks the event stream oldest-to-newest and returns the summed concept profile S(t). Stream order matters: events[0] is the oldest, events[len-1] is the most recent (the "now" reference for the n - i computation).
halfLife <= 0 falls back to DefaultDecayHalfLife.
Returns an empty (non-nil) map when there are no events so callers can range over the result without nil-checking.
func DecayFiles ¶
func DecayFiles(events []EventContribution, halfLife int) map[string]float64
DecayFiles is the file-axis twin of DecayConcepts.
func DecayWeight ¶
func DecayWeight(eventsSinceMostRecent int, halfLife int) float64
DecayWeight returns 0.5 ^ ((n - i) / h) — the per-event factor for an event i positions back from the most-recent event in a stream of n events with half-life h.
eventsSinceMostRecent = n - i
0 → most recent event (weight 1.0)
h → exactly half the most recent (weight 0.5)
halfLife <= 0 falls back to DefaultDecayHalfLife so callers passing a sentinel zero don't get a division-by-zero. Negative eventsSinceMostRecent is clamped to 0 — "future" events are treated as right-now.
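A minimal standalone sketch of the formula and both fallbacks (not the package's code — the constant mirrors the documented DefaultDecayHalfLife of 5):

```go
package main

import (
	"fmt"
	"math"
)

const defaultDecayHalfLife = 5 // mirrors the package's DefaultDecayHalfLife

// decayWeight sketches 0.5^((n-i)/h) with the documented fallbacks:
// halfLife <= 0 uses the default, a negative distance clamps to 0.
func decayWeight(eventsSinceMostRecent, halfLife int) float64 {
	if halfLife <= 0 {
		halfLife = defaultDecayHalfLife
	}
	if eventsSinceMostRecent < 0 {
		eventsSinceMostRecent = 0 // "future" events are right-now
	}
	return math.Pow(0.5, float64(eventsSinceMostRecent)/float64(halfLife))
}

func main() {
	fmt.Println(decayWeight(0, 5)) // 1.0: the most recent event
	fmt.Println(decayWeight(5, 5)) // 0.5: exactly one half-life back
	fmt.Println(decayWeight(3, 0)) // halfLife 0 falls back to the default
}
```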
func LastNConceptVector ¶
func LastNConceptVector(ctx context.Context, lastBeads []core.WorkItemID, lookup BeadConceptLookup) (map[ConceptTag]float64, error)
LastNConceptVector aggregates the concept profiles of the last-N completed beads into a single vector. Each bead's concept map is summed key-wise; no decay is applied here — SessionProfile.Concepts already carries the recency-decayed lifetime view, and the drift calculation needs the un-decayed last-N snapshot to compare against.
Per-bead lookup errors are skipped (one missing bead shouldn't poison the whole snapshot). Returning an empty map + nil err is the contract for "lookup ran but had nothing to add" — the caller treats that as zero drift.
func MergeContribution ¶
func MergeContribution(profile map[ConceptTag]float64, tags []ConceptTag, weight float64) map[ConceptTag]float64
MergeContribution adds a bead's concept tags onto a previously aged profile at full weight (the new event sits at i = n, so weight = 0.5^(0/h) = 1.0). Returns a new map; input not mutated.
Use AgeProfile + MergeContribution together when a new event lands: age the existing profile, then merge the new event in at full weight. The result equals what DecayConcepts would have produced if we'd recomputed from the full event stream — but without keeping the stream around (the SessionProfile only stores the summed weights, not the per-event history).
func MergeFileContribution ¶
func MergeFileContribution(profile map[string]float64, paths []string, weight float64) map[string]float64
MergeFileContribution is the string-keyed twin.
func SavePolicyFile ¶
func SavePolicyFile(path string, p DispatchPolicy) error
SavePolicyFile writes the policy as JSON to path. Creates parent directories as needed (0o755). The file itself is 0o644 — the kill switch is not a secret.
Types ¶
type AffinityBeadInputs ¶
type AffinityBeadInputs struct {
BeadID string
Concepts []ConceptTag
Targets []string // repo-relative paths
Repositories []string
	Branch           string       // optional; "" = no branch preference
	LastBeadConcepts []ConceptTag // concept set on the session's most recent completed bead
}
AffinityBeadInputs is the bead's contribution to the affinity score: id, concept tags, target files, repo affinity, and concept-recency lookup. Callers derive these from the bead's stored fields plus rig conventions.
Repositories is the set of repos the bead may touch. Affinity takes the max workspace-match over this set.
LastBeadConcepts is the concept set on the session's most recent completed bead — supplied by the caller so the affinity scorer doesn't need a second SessionProfile read. Empty when the session has no history yet.
type AffinityScores ¶
type AffinityScores struct {
ConceptOverlap float64 `json:"concept_overlap"`
FileFamiliarity float64 `json:"file_familiarity"`
WorkspaceMatch float64 `json:"workspace_match"`
Recency float64 `json:"recency"`
Headroom float64 `json:"headroom"`
// Combined is the weighted sum. Tracked separately so the
// explanation surface always carries the same number it
// rendered the breakdown for — never re-derive on the consumer
// side.
Combined float64 `json:"combined"`
}
AffinityScores is the per-sub-score breakdown. Every field in [0, 1]. Explanation surfaces consume this directly.
func Affinity ¶
func Affinity(bead AffinityBeadInputs, ctx OperationalContext, weights *AffinityWeights) AffinityScores
Affinity computes the five sub-scores and the weighted combined score for (bead, ctx). When weights is nil, DefaultAffinityWeights applies.
ctx may be a partial OperationalContext: nil Profile / Health / Workspace are tolerated as "the session has no signal here yet" and the corresponding sub-scores fall to 0. The combined score degrades smoothly — a fresh session with no profile gets a low affinity for every bead, which is the right default (the planner should prefer warm sessions when one is available).
The function is pure and synchronous; safe for concurrent use.
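The combined-score arithmetic is a plain weighted sum. A standalone sketch using the documented spec-default weights (the struct and field names here are illustrative, not the package's exported types):

```go
package main

import "fmt"

// scores / weights mirror the five documented sub-scores, all in [0, 1].
type scores struct{ concept, file, workspace, recency, headroom float64 }
type weights struct{ concept, file, workspace, recency, headroom float64 }

// defaults mirrors DefaultAffinityWeights (spec §5.4); the weights sum to 1.0
// so the combined score stays in [0, 1].
var defaults = weights{0.30, 0.20, 0.20, 0.15, 0.15}

func combined(s scores, w weights) float64 {
	return s.concept*w.concept + s.file*w.file + s.workspace*w.workspace +
		s.recency*w.recency + s.headroom*w.headroom
}

func main() {
	// A warm session: strong concept overlap, decent file familiarity.
	warm := scores{concept: 0.9, file: 0.7, workspace: 1.0, recency: 0.8, headroom: 0.5}
	// A fresh session with no profile: every sub-score falls to 0.
	fresh := scores{}
	fmt.Println(combined(warm, defaults) > combined(fresh, defaults)) // true
}
```

This also makes the degradation behavior concrete: a nil Profile zeroes its sub-scores, and the weighted sum simply shrinks rather than erroring.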
type AffinityWeights ¶
type AffinityWeights struct {
ConceptOverlap float64 `json:"concept_overlap"`
FileFamiliarity float64 `json:"file_familiarity"`
WorkspaceMatch float64 `json:"workspace_match"`
Recency float64 `json:"recency"`
Headroom float64 `json:"headroom"`
}
AffinityWeights is the per-sub-score weighting. Sum SHOULD equal 1.0 for the combined score to stay in [0, 1] — the function does not normalise; callers that experiment with weights are responsible for keeping them sane.
DefaultAffinityWeights matches spec §5.4. Operators tuning weights at runtime construct their own.
type AgentLookup ¶
type AgentLookup interface {
ReadAgent(ctx context.Context, id core.AgentID) (*core.AgentRef, error)
}
AgentLookup mirrors core.OrchestrationPlane.ReadAgent.
type AnalysisScopeStatus ¶
type AnalysisScopeStatus struct {
Backend string `json:"backend,omitempty"`
State string `json:"state"`
IndexedAt time.Time `json:"indexed_at,omitempty"`
IndexedCommit string `json:"indexed_commit,omitempty"`
HeadSHA string `json:"head_sha,omitempty"`
Reason string `json:"reason,omitempty"`
}
AnalysisScopeStatus summarises source-analysis freshness for a scope. State is one of: current, stale, missing, unavailable.
type AssignmentLookup ¶
type AssignmentLookup interface {
FindAssignment(ctx context.Context, assignmentID string) (*core.Assignment, error)
}
AssignmentLookup is the missing piece: there is no GetAssignment on core.OrchestrationPlane today. Adaptors that index assignments internally implement this directly; the rest can derive it from ObservedState() and filter (a future helper). Either way, an implementation lives outside this package — the planner just declares what it needs.
type BeadConceptLookup ¶
type BeadConceptLookup interface {
// BeadConcepts returns the concept-tag → weight map associated
// with the given completed bead. Returning a nil/empty map +
// nil error is "no concepts known for this bead" — the planner
// treats it as a non-error skip.
BeadConcepts(ctx context.Context, beadID core.WorkItemID) (map[ConceptTag]float64, error)
}
BeadConceptLookup retrieves the concept profile for a previously completed bead. The concrete implementation lands with gm-s47n.1 (WorkItem.concepts enrichment); callers that don't have it yet pass nil and accept ConceptDrift staying at zero.
One method, no surface — keeps the interface narrow so any concept source (a dolt table, an external classifier, a static fixture in tests) can plug in.
type BeadTarget ¶
BeadTarget is the planner-derived operational location for a ready bead — what (repo, branch, worktree_path) the bead would land in if dispatched. Callers derive it from the bead's repository / branch fields plus rig conventions; this package stays agnostic about the derivation so adaptors that route differently (one worktree per bead vs one per branch) can both feed in.
WorktreePath is optional — leave empty when dispatch hasn't materialised a worktree yet. The collision check still fires on (repo, branch) alone in that case.
type ClaimEvent ¶
type ClaimEvent struct {
SessionID string
AssignmentID string
AgentID core.AgentID
BeadID core.WorkItemID
// Concepts / Files come from the bead's declared targets +
// concepts (gm-s47n.1 once it lands; provider's structured
// extras until then).
Concepts []ConceptTag
Files []string
// HalfLife overrides DefaultDecayHalfLife per call so a future
// per-rig setting can flow through without changing the API.
HalfLife int
// Now is injected for deterministic test runs; nil → time.Now.
Now func() time.Time
}
ClaimEvent is the input to RecordClaim — what the adaptor knows at bead-claim time. SessionID + BeadID are required; everything else is optional and absent values just contribute nothing.
type CompletionEvent ¶
type CompletionEvent struct {
SessionID string
AssignmentID string
AgentID core.AgentID
BeadID core.WorkItemID
ActualConcepts []ConceptTag
ActualFiles []string
// TokensUsed / ContextWindowMax are optional snapshots the
// adaptor may have observed at completion time (e.g. via the
// agent's telemetry hook). Zero values leave the existing
// profile fields untouched.
TokensUsed int
ContextWindowMax int
HalfLife int
Now func() time.Time
}
CompletionEvent is the input to RecordCompletion. ActualConcepts / ActualFiles come from the retrospective (.8.1) when available; callers without retrospective output pass declared values and take the small inflation hit described in the file header.
type ConceptTag ¶
type ConceptTag string
ConceptTag is a short tag from the controlled concept vocabulary (docs/design/work-planning.md §6). Free string at this layer; vocabulary governance is gm-s47n.7.
type DispatchGate ¶
type DispatchGate struct {
// contains filtered or unexported fields
}
DispatchGate is the runtime guard the auto-dispatch loop calls. Owns mutable per-session timestamps + the inflight counter.
Concurrency: every method holds a sync.Mutex for the duration of its read/write. Auto-dispatch is fundamentally serial (one decision at a time per loop tick), so the mutex isn't a hot path; correctness over micro-optimisation.
func NewDispatchGate ¶
func NewDispatchGate(policy DispatchPolicy) *DispatchGate
NewDispatchGate constructs a gate seeded with the given policy. Pass DispatchPolicy{} for the all-defaults flavour (Enabled=false, 5-minute interval, 4 concurrent).
func (*DispatchGate) AllowDispatch ¶
func (g *DispatchGate) AllowDispatch(sessionID string, now time.Time) (bool, string)
AllowDispatch reports whether the daemon may dispatch against sessionID right now. Returns (true, "") when the dispatch is permitted; (false, reason) when blocked. The reason string is safe to log + surface to the operator and follows a stable vocabulary so future tooling can pattern-match on it without regex'ing free text:
"auto-dispatch disabled" "max-concurrent reached" "per-session rate limit"
Time is injected so tests are deterministic.
func (*DispatchGate) Inflight ¶
func (g *DispatchGate) Inflight() int
Inflight reports the current count of in-flight auto-dispatches. Read-only; useful for observability + the CLI's status command.
func (*DispatchGate) Policy ¶
func (g *DispatchGate) Policy() DispatchPolicy
Policy returns the current policy snapshot. The returned value is a copy — callers may inspect freely without affecting the gate's internal state.
func (*DispatchGate) RecordCompletion ¶
func (g *DispatchGate) RecordCompletion(sessionID string)
RecordCompletion decrements the inflight counter. Safe to call when no prior RecordDispatch was issued — clamps at zero so a double-completion event doesn't push the counter negative.
func (*DispatchGate) RecordDispatch ¶
func (g *DispatchGate) RecordDispatch(sessionID string, now time.Time)
RecordDispatch stamps the dispatch decision so subsequent AllowDispatch calls see it. Increments the inflight counter. Pair with RecordCompletion when the session's bead finishes.
func (*DispatchGate) SetEnabled ¶
func (g *DispatchGate) SetEnabled(on bool)
SetEnabled is the runtime kill switch. Pulled out as its own method because the operator-facing CLI needs a one-call toggle without round-tripping the whole policy. Pair-symmetric with the spec wording: "the kill-switch is one command away."
func (*DispatchGate) SetPolicy ¶
func (g *DispatchGate) SetPolicy(p DispatchPolicy)
SetPolicy replaces the policy wholesale. Common operator path: load from disk, mutate one field, write back.
type DispatchPolicy ¶
type DispatchPolicy struct {
// Enabled is the master kill switch. false → AllowDispatch
// always returns ("no", "auto-dispatch disabled"). Default
// false (opt-in per spec §4 Layer 5).
Enabled bool `json:"enabled"`
// MinIntervalPerSession is the minimum wall-clock gap between
// successive auto-dispatches against the same session id.
// Zero falls back to DefaultMinIntervalPerSession.
MinIntervalPerSession time.Duration `json:"min_interval_per_session"`
// MaxConcurrent caps the number of auto-dispatches the gate
// will allow to be in flight at any instant. Zero falls back
// to DefaultMaxConcurrent.
MaxConcurrent int `json:"max_concurrent"`
}
DispatchPolicy is the data that gates the auto-dispatch daemon. Mutable by the operator at runtime; persisted via PolicyStore.
Wire shape on disk: JSON. Stable field tags so a rig file written by gemba 1.0 keeps loading on gemba 2.0.
func LoadPolicyFile ¶
func LoadPolicyFile(path string) (DispatchPolicy, error)
LoadPolicyFile reads a DispatchPolicy from a JSON file. A missing file returns (DispatchPolicy{}, nil) — the zero value with Enabled=false, which matches the "opt-in, default safe" contract. Other errors propagate.
type EventContribution ¶
type EventContribution struct {
Concepts []ConceptTag
Files []string
Weight float64
}
EventContribution is one bead-completion's contribution to the session profile. Both maps are caller-supplied and not mutated.
The Weight field is the *intrinsic* weight of the contribution — usually 1.0 (the bead happened) but DecayConcepts respects the caller's choice in case a future scorer wants to down-weight a canceled bead, an emergency rollback, or a bead the operator explicitly excluded from priming.
type GitScopeStatus ¶
type GitScopeStatus struct {
State string `json:"state"`
ChangedFiles int `json:"changed_files,omitempty"`
Ahead int `json:"ahead,omitempty"`
Behind int `json:"behind,omitempty"`
HeadSHA string `json:"head_sha,omitempty"`
Upstream string `json:"upstream,omitempty"`
Reason string `json:"reason,omitempty"`
}
GitScopeStatus summarises the repository state for a scope's worktree. State is one of: clean, dirty, unavailable.
type OperationalContext ¶
type OperationalContext struct {
Agent *core.AgentRef `json:"agent,omitempty"`
Session *core.Session `json:"session,omitempty"`
Workspace *core.Workspace `json:"workspace,omitempty"`
Assignment *core.Assignment `json:"assignment,omitempty"`
Profile *SessionProfile `json:"profile,omitempty"`
Health *SessionHealth `json:"health,omitempty"`
ScopeStatus *ScopeStatus `json:"scope_status,omitempty"`
}
OperationalContext is the single read shape the planner returns from `OperationalContext(session_id)` (gm-s47n.2.5). Scorers, coach UI, and auto-dispatch all consume this same struct.
One pointer per join — nil means "not yet provisioned" (e.g. a session may exist without an Assignment in transient states; the planner handles that case explicitly rather than treating it as an error).
Wire shape: stable JSON, every field omitempty so a partial snapshot serialises cleanly when the planner wants to surface a degraded view.
func ReadOperationalContext ¶
func ReadOperationalContext(ctx context.Context, sessionID string, r OperationalContextReaders) (*OperationalContext, error)
ReadOperationalContext is the join point. Resolution order:
- Session (required; the rest is meaningless without it).
- Agent (from session.AgentID via Agents.ReadAgent).
- Assignment (from session.AssignmentID).
- Workspace (from assignment.WorkspaceID; nil if no assignment or the assignment doesn't carry a workspace yet).
- Profile (from session.ID).
- Health (derived from session + profile snapshots).
Errors from steps 2..5 are SWALLOWED — the partial context is returned with the unresolved component nil. The Sessions step is the only one whose error propagates; without a Session, callers can't do anything useful with the result anyway. This matches the spec §4 Layer 1 "graceful degradation" requirement: a planner must keep working even if one of its data sources is degraded.
type OperationalContextReaders ¶
type OperationalContextReaders struct {
Sessions SessionLookup
Agents AgentLookup
Assignments AssignmentLookup
Workspaces WorkspaceLookup
Profiles ProfileLookup
// BeadConcepts is the optional gm-s47n.5.1 concept-drift input.
// When wired, ComputeHealth uses it to derive the last-N concept
// vector from profile.LastBeads and compute concept_drift; nil
// leaves drift at zero. The concrete impl ships with gm-s47n.1
// WorkItem-concept enrichment.
BeadConcepts BeadConceptLookup
Now func() time.Time
}
OperationalContextReaders bundles the dependencies for ReadOperationalContext. Any nil reader means "skip that component" — the resulting OperationalContext leaves the corresponding pointer nil. Callers compose only the readers they have wired today.
Now is injected so SessionHealth.TimeOnTask is deterministic in tests; production wires time.Now.
type ProfileLookup ¶
type ProfileLookup interface {
GetProfile(ctx context.Context, sessionID string) (*SessionProfile, error)
}
ProfileLookup reads SessionProfile rows from the dolt store. The concrete implementation lives in gm-s47n.2.4 (Profile read API); declaring the interface here lets gm-s47n.2.5 ship before .2.4 without a circular dependency.
type ProfileStore ¶
type ProfileStore struct {
// contains filtered or unexported fields
}
ProfileStore is a *sql.DB-backed reader for the session_profiles dolt table. Concurrency: every method is safe for concurrent use because the underlying *sql.DB pool handles pooling + locking.
Construct with NewProfileStore(db). The store does NOT own the *sql.DB — the caller is responsible for Close. This matches every other store in the codebase (internal/adapter/dolt) and keeps the scoring loop free to reuse a single pool across multiple stores.
func NewProfileStore ¶
func NewProfileStore(db *sql.DB) *ProfileStore
NewProfileStore wraps the given *sql.DB. Returns nil when db is nil so callers building optional readers can `if store != nil` without an extra check.
func (*ProfileStore) EnsureSchema ¶
func (s *ProfileStore) EnsureSchema(ctx context.Context) error
EnsureSchema creates the session_profiles table if it doesn't already exist. Idempotent — safe to call from server boot. Uses the embedded SchemaSQL from gm-s47n.2.1.
func (*ProfileStore) GetProfile ¶
func (s *ProfileStore) GetProfile(ctx context.Context, sessionID string) (*SessionProfile, error)
GetProfile reads the profile row for sessionID. Returns (nil, nil) when no row exists — matches the SessionLookup "absent ≠ error" convention so the operational-context join degrades gracefully when a session predates its profile.
Errors during JSON decode of the concepts/files/last_beads columns get wrapped in a typed adaptor error (KindAdaptorDegraded) so the planner can distinguish "schema is right but a row's payload is malformed" from a transport fault. Either way, the operational-context aggregator at .2.5 swallows the error and leaves Profile nil — graceful degradation per spec §4 Layer 1.
func (*ProfileStore) MultiGet ¶
func (s *ProfileStore) MultiGet(ctx context.Context, sessionIDs []string) ([]*SessionProfile, error)
MultiGet reads several profiles in one SQL round-trip. The result preserves caller order and returns nil entries for sessions with no row (mirrors the GetProfile "absent ≠ error" contract). Useful for the coach grid render path which fans out per-session profile reads.
func (*ProfileStore) RecordClaim ¶
func (s *ProfileStore) RecordClaim(ctx context.Context, ev ClaimEvent) (*SessionProfile, error)
RecordClaim ages the session's existing profile by one event step and merges the bead's declared concepts/files at full weight. Returns the resulting profile.
Idempotency: this method does NOT check whether the same bead id was already claimed. Callers that re-deliver a claim event (network retry, double-submit, replay) get the contribution merged twice. The adaptor-level wiring is responsible for deduping at the event boundary; the planner's contract is "every call mutates the profile", consistent with the rest of the gm-s47n family.
func (*ProfileStore) RecordCompletion ¶
func (s *ProfileStore) RecordCompletion(ctx context.Context, ev CompletionEvent) (*SessionProfile, error)
RecordCompletion handles the bead-finished event. Same age + merge as RecordClaim plus:
- bead id appended to last_beads ring (capped at LastBeadsRingSize)
- tokens_used / context_window_max overwritten when the event supplies them (zero leaves the existing snapshot intact)
- context_pct recomputed when both numerator and denominator are observable
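The last_beads cap can be sketched in a few lines — append the finished bead id, keep only the most recent entries (the constant mirrors the documented LastBeadsRingSize; the helper name is illustrative):

```go
package main

import "fmt"

const lastBeadsRingSize = 5 // mirrors the package's LastBeadsRingSize

// appendCapped appends a finished bead id and drops the oldest
// entries beyond the ring size.
func appendCapped(ring []string, beadID string) []string {
	ring = append(ring, beadID)
	if len(ring) > lastBeadsRingSize {
		ring = ring[len(ring)-lastBeadsRingSize:]
	}
	return ring
}

func main() {
	var ring []string
	for _, id := range []string{"b1", "b2", "b3", "b4", "b5", "b6", "b7"} {
		ring = appendCapped(ring, id)
	}
	fmt.Println(ring) // [b3 b4 b5 b6 b7]
}
```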
func (*ProfileStore) UpsertProfile ¶
func (s *ProfileStore) UpsertProfile(ctx context.Context, p *SessionProfile) error
UpsertProfile writes or replaces the row for p.SessionID. Upsert semantics via INSERT ... ON DUPLICATE KEY UPDATE — works against dolt's MySQL surface. Encodes concepts / files / last_beads as JSON columns matching schema.sql.
type ProfileWriter ¶
type ProfileWriter interface {
RecordClaim(ctx context.Context, ev ClaimEvent) (*SessionProfile, error)
RecordCompletion(ctx context.Context, ev CompletionEvent) (*SessionProfile, error)
UpsertProfile(ctx context.Context, p *SessionProfile) error
}
ProfileWriter is the narrow interface adaptor-side hooks code against. *ProfileStore satisfies it; tests substitute a fake.
RecordClaim and RecordCompletion both return the resulting SessionProfile so the caller can publish it (e.g. emit a session.profile.updated event in a future bead) without an extra read.
type RecycleDecision ¶
type RecycleDecision struct {
Recycle bool `json:"recycle"`
Reason RecycleReason `json:"reason"`
DriverScores RecycleDrivers `json:"drivers"`
}
RecycleDecision is the pure-function output. Recycle is the operator-facing answer (true = recycle the session, false = keep it); Reason names which rule fired (or "ok" when none did). DriverScores carries the numbers behind the call so coach-mode UI / audit logs can show the operator exactly why the planner chose what it chose.
func ShouldRecycle ¶
func ShouldRecycle(in RecycleInputs) RecycleDecision
ShouldRecycle returns the structured decision for the given inputs. Pure: same inputs always produce the same output. Safe for concurrent use.
The function never panics on missing data — it leans toward "keep the session" whenever a driver is unavailable, since the recycle decision discards a real warm context. Operators tune thresholds via per-rig settings; the v1 constants live in this file and match spec §5.5.
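Rule 1 ("context pressure high, incoming affinity below the ready-set median") can be sketched standalone. This illustrative sketch uses the documented RecyclePressureThreshold and the lean-toward-keep behavior on missing data; the function names are not the package's:

```go
package main

import (
	"fmt"
	"sort"
)

const recyclePressureThreshold = 0.85 // mirrors the spec §5.5 constant

func median(xs []float64) float64 {
	s := append([]float64(nil), xs...) // don't mutate the caller's slice
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// pressureRuleFires sketches rule 1: hot context AND the incoming
// bead scores below the median of the ready set. An empty ready set
// means the rule cannot fire — missing data keeps the session.
func pressureRuleFires(pressure, incoming float64, ready []float64) bool {
	if len(ready) == 0 {
		return false
	}
	return pressure > recyclePressureThreshold && incoming < median(ready)
}

func main() {
	ready := []float64{0.2, 0.5, 0.8} // median 0.5
	fmt.Println(pressureRuleFires(0.91, 0.3, ready)) // true: hot + below median
	fmt.Println(pressureRuleFires(0.91, 0.7, ready)) // false: bead beats the median
	fmt.Println(pressureRuleFires(0.91, 0.3, nil))   // false: keep on missing data
}
```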
type RecycleDrivers ¶
type RecycleDrivers struct {
ContextPressure float64 `json:"context_pressure"`
ConceptDrift float64 `json:"concept_drift"`
TimeOnTaskHours float64 `json:"time_on_task_hours"`
IncomingAffinity float64 `json:"incoming_affinity"`
MedianAffinity float64 `json:"median_affinity"`
ConceptOverlap float64 `json:"concept_overlap"`
}
RecycleDrivers is the numeric snapshot used for the decision — kept on the result struct so the operator-facing surface can render "pressure 0.91 (rule fires when > 0.85)" without recomputing.
type RecycleInputs ¶
type RecycleInputs struct {
// Health is the SessionHealth snapshot from gm-s47n.5.1. nil
// → all driver thresholds skip.
Health *SessionHealth
// IncomingBead is the candidate bead the planner is about to
// dispatch against this session. Empty Concepts means rule 2
// can't compute the overlap; rule 3 falls back to "this is
// the start of a new area" if the session profile has any
// concepts at all (the bead's empty list cannot intersect).
IncomingBead AffinityBeadInputs
// IncomingAffinityScores are the precomputed Affinity scores
// for (IncomingBead, this session). Saves the caller the
// recompute when ShouldRecycle runs alongside the affinity
// pass. Combined drives rule 1.
IncomingAffinityScores AffinityScores
// ReadySetAffinities is the combined-affinity score of every
// ready bead against this session. Used to compute the median
// for rule 1. Empty slice → rule 1 cannot fire.
ReadySetAffinities []float64
// SessionConcepts is the lifetime concept profile. Used to
// compute the new-area / overlap drivers for rules 2 + 3.
// nil → rules 2 and 3 fall back to the safest answer
// (no concepts known, so the bead is "all new" — defended
// below in evaluateConceptOverlap).
SessionConcepts map[ConceptTag]float64
}
RecycleInputs bundles everything ShouldRecycle needs. Each field is optional; missing data leans toward "keep the session" — the planner should never recycle on incomplete signal because the cold-start cost is real and the worst observable outcome is just a slightly-warm session taking a slightly-mismatched bead.
type RecycleReason ¶
type RecycleReason string
RecycleReason is the structured why behind a recycle decision. One value per spec rule plus "ok" when no rule fires.
Stable wire shape: emitted to logs + the dispatch decision audit trail (gm-s47n.6.2). Don't rename without bumping the audit-log version.
const (
	RecycleOK                  RecycleReason = "ok"
	RecyclePressureBelowMedian RecycleReason = "context_pressure_high_affinity_below_median"
	RecycleDriftAndMismatch    RecycleReason = "concept_drift_high_overlap_low"
	RecycleTimeOnTaskNewArea   RecycleReason = "time_on_task_new_concept_area"
)
type ScopeStatus ¶
type ScopeStatus struct {
Git *GitScopeStatus `json:"git,omitempty"`
Analysis *AnalysisScopeStatus `json:"analysis,omitempty"`
}
ScopeStatus is an operator-facing snapshot of the execution scope backing a session. It answers the questions a user needs before trusting a worktree: is Git clean, is the branch synchronized, and is source analysis current for the checked-out commit?
type SemanticBeadInputs ¶
type SemanticBeadInputs struct {
BeadID string
Targets []sourceanalysis.Target
}
SemanticBeadInputs is one bead's contribution to the semantic graph: its id plus the file targets the planner derived. Each target's Repository is required so cross-repo dependents don't accidentally collide (an "auth.go" in repo A is a different file from "auth.go" in repo B; the SourceAnalysis backend respects this via sourceanalysis.Target.Repository).
Callers build this from WorkItem.targets[] after expanding any glob patterns to concrete files. Glob expansion lives in the .4.3 composer; this type carries already-resolved paths.
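A sketch of what a composer might do before handing inputs to the planner: resolve glob patterns to concrete files. The types here are local stand-ins for planner.SemanticBeadInputs and sourceanalysis.Target, and `expandBead` is a hypothetical helper, not the .4.3 composer's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Target stands in for sourceanalysis.Target; field names follow the docs.
type Target struct {
	Repository string
	Path       string
}

// SemanticBeadInputs mirrors the struct above.
type SemanticBeadInputs struct {
	BeadID  string
	Targets []Target
}

// expandBead resolves each glob pattern to concrete paths so the planner
// never sees a wildcard; a bad pattern surfaces as an error.
func expandBead(beadID, repo string, patterns []string) (SemanticBeadInputs, error) {
	in := SemanticBeadInputs{BeadID: beadID}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			return SemanticBeadInputs{}, err
		}
		for _, m := range matches {
			in.Targets = append(in.Targets, Target{Repository: repo, Path: m})
		}
	}
	return in, nil
}

func main() {
	in, _ := expandBead("bead-1", "repo-a", []string{"*.go"})
	fmt.Println(in.BeadID, len(in.Targets))
}
```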
type SemanticConflict ¶
type SemanticConflict struct {
A string `json:"a"`
B string `json:"b"`
Reason string `json:"reason"`
}
SemanticConflict is a single edge in the semantic-conflict graph. A and B are bead ids in stable lexicographic order so two equal pairs always produce the same edge (the .4.3 composer uses this for de-dup against the file-overlap edge set).
Reason names which bead's changes touch which file the other bead targets. Surface text only — never parsed.
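The stable lexicographic ordering can be enforced with a tiny normalizer; `newEdge` here is a hypothetical constructor, not this package's API:

```go
package main

import "fmt"

// SemanticConflict mirrors the struct above; all-string fields make it
// comparable, so de-dup downstream is plain struct equality or a map key.
type SemanticConflict struct {
	A, B   string
	Reason string
}

// newEdge swaps the pair into lexicographic order so (x, y) and (y, x)
// always produce the identical edge.
func newEdge(x, y, reason string) SemanticConflict {
	if y < x {
		x, y = y, x
	}
	return SemanticConflict{A: x, B: y, Reason: reason}
}

func main() {
	e1 := newEdge("bead-b", "bead-a", "r")
	e2 := newEdge("bead-a", "bead-b", "r")
	fmt.Println(e1 == e2) // true: equal pairs yield the same edge
}
```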
func SemanticConflicts ¶
func SemanticConflicts(
	ctx context.Context,
	beads []SemanticBeadInputs,
	sa sourceanalysis.SourceAnalysis,
) ([]SemanticConflict, error)
SemanticConflicts returns every semantic-conflict edge induced by the bead set. Output is sorted by (A, B) so equal inputs always produce equal outputs — important for change-detection up the stack and for de-dup against the other conflict graphs.
When SourceAnalysis is unavailable (errors.Is(err, sourceanalysis.ErrUnavailable)), returns (nil, ErrUnavailable). The .4.3 composer treats this as a soft skip: log + proceed with file + workspace conflict only.
Other errors propagate as-is — they indicate a real failure in the SourceAnalysis backend (process crash, malformed query) that the operator needs to see.
Implementation note: each unique (Repository, Path) pair across the bead set is queried at most once; results are cached for the duration of the call. Beads that share target files (a common case when multiple beads are scoped to the same module) pay the SA round-trip just once.
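The per-call cache described above can be sketched as a map keyed on a comparable (Repository, Path) struct; `queryOnce` and `fileKey` are illustrative names, not the package's internals:

```go
package main

import "fmt"

// fileKey identifies one unique target across repos. Using a comparable
// struct as the map key is what keeps "auth.go" in repo A distinct from
// "auth.go" in repo B.
type fileKey struct {
	Repo, Path string
}

// queryOnce runs the expensive lookup (stubbed via query) at most once per
// (Repository, Path) pair for the lifetime of the cache map.
func queryOnce(cache map[fileKey][]string, k fileKey, query func(fileKey) []string) []string {
	if deps, ok := cache[k]; ok {
		return deps
	}
	deps := query(k)
	cache[k] = deps
	return deps
}

func main() {
	calls := 0
	query := func(k fileKey) []string {
		calls++ // one round-trip to the (stubbed) SourceAnalysis backend
		return []string{k.Path + ":dependent"}
	}
	cache := map[fileKey][]string{}
	k := fileKey{Repo: "repo-a", Path: "auth.go"}
	queryOnce(cache, k, query)
	queryOnce(cache, k, query) // served from cache
	fmt.Println(calls) // 1
}
```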
type SessionHealth ¶
type SessionHealth struct {
// ContextPressure = TokensUsed / ContextWindowMax. Zero when
// ContextWindowMax is not yet observed (session pre-telemetry).
ContextPressure float64 `json:"context_pressure"`
// ConceptDrift = cosine distance between the session's last-N
// concept vector and the lifetime concept vector. Range [0, 1];
// 0 means "still on the same topic", 1 means "completely shifted".
ConceptDrift float64 `json:"concept_drift"`
// TimeOnTask is wall clock since core.Session.StartedAt.
TimeOnTask time.Duration `json:"time_on_task_ns"`
}
SessionHealth is the planner-derived liveness snapshot for a session — the three numbers spec §4 Layer 4 names. NOT persisted as a table; computed on each read from the SessionProfile plus the live core.Session record. Lives here so the OperationalContext can carry a single, stable shape.
Thresholds (warn / strong-warn) live with the planner (Layer 4 implementation, gm-s47n.5) — this struct is just the numbers.
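The ContextPressure field's divide-by-zero guard (zero when ContextWindowMax is not yet observed) reduces to a few lines; `contextPressure` is an illustrative helper, not the package's code:

```go
package main

import "fmt"

// contextPressure computes the documented ratio TokensUsed /
// ContextWindowMax, returning zero when the window has not yet been
// observed (session pre-telemetry) instead of dividing by zero.
func contextPressure(tokensUsed, contextWindowMax int) float64 {
	if contextWindowMax == 0 {
		return 0
	}
	return float64(tokensUsed) / float64(contextWindowMax)
}

func main() {
	fmt.Println(contextPressure(91_000, 100_000)) // 0.91
	fmt.Println(contextPressure(5_000, 0))        // 0: window not yet observed
}
```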
func ComputeHealth ¶
func ComputeHealth(
	ctx context.Context,
	sess *core.Session,
	profile *SessionProfile,
	concepts BeadConceptLookup,
	now func() time.Time,
) (*SessionHealth, error)
ComputeHealth produces the SessionHealth snapshot per gm-s47n.5.1. All three numbers are derived together so callers (the CLI in .5.2, the SPA in .5.3, the operational-context aggregator in .5.4) get a coherent snapshot from one call instead of stitching it themselves.
concepts is optional. When nil OR profile.LastBeads is empty, ConceptDrift stays at zero (no last-N vector available); the rest of the snapshot still populates so callers can render a degraded-but-useful card.
now is injected for deterministic tests; production wires time.Now. nil falls through to time.Now.
Returns (nil, nil) when sess is nil — preserves the "Health follows Session" invariant operational_context.go relies on. A non-nil sess always yields a non-nil SessionHealth: at minimum TimeOnTask is meaningful from a session alone, and the other fields have valid zero defaults.
type SessionLookup ¶
type SessionLookup interface {
// FindSession returns the session by id. Implementations should
// return (nil, nil) for "not found" so callers can distinguish
// transient errors from genuine absence; an error here is treated
// as fatal by ReadOperationalContext.
FindSession(ctx context.Context, sessionID string) (*core.Session, error)
}
SessionLookup is the per-session subset of core.OrchestrationPlane the planner reads. Narrow interface so a fake test fixture is one screen of code.
Why not the full OrchestrationPlane: the bead's read path needs exactly one method. Coupling the planner to the whole interface would force every test to stub two dozen methods it doesn't touch.
type SessionProfile ¶
type SessionProfile struct {
// SessionID matches core.Session.ID — primary key in the dolt
// table.
SessionID string `json:"session_id"`
// AssignmentID denormalised from core.Assignment so the planner
// can answer "what is this session currently on?" in one read.
AssignmentID string `json:"assignment_id"`
// AgentID denormalised from core.AgentRef.ID so per-agent
// rollups don't require a second lookup.
AgentID core.AgentID `json:"agent_id"`
// Concepts is the recency-decayed concept profile: tag → weight.
// JSON object on the wire; dolt column is JSON.
Concepts map[ConceptTag]float64 `json:"concepts,omitempty"`
// Files is the recency-decayed file-touch profile: repo-relative
// path → weight. Same shape as Concepts.
Files map[string]float64 `json:"files,omitempty"`
// TokensUsed is the most recent observed token usage on the
// session's underlying LLM context. Surfaced via the agent's
// telemetry hook (gm-s47n.2.3).
TokensUsed int `json:"tokens_used,omitempty"`
// ContextWindowMax is the agent's hard context window. Stored
// alongside TokensUsed so a single read computes
// `tokens_used / context_window_max` without a separate manifest
// lookup. May be zero on first observation.
ContextWindowMax int `json:"context_window_max,omitempty"`
// ContextPct is the precomputed `TokensUsed / ContextWindowMax`
// snapshot — derived but persisted so the SPA / planner share the
// same number without re-deriving. Always in [0, 1] when
// ContextWindowMax is non-zero; zero when not yet observed.
ContextPct float64 `json:"context_pct,omitempty"`
// LastBeads is the ring buffer of the last N completed bead ids
// (newest last). Capped at LastBeadsRingSize. The retrospective
// (gm-s47n.8) reads it to detect "session just shipped X" cases.
LastBeads []core.WorkItemID `json:"last_beads,omitempty"`
// LastActivityAt is the timestamp of the most recent BEAD-EVENT
// boundary (claim or completion) on this session — distinct from
// core.Session.LastHeartbeat which updates on every health ping.
// "Activity" here means "the session moved the planner's world".
LastActivityAt time.Time `json:"last_activity_at"`
// CreatedAt / UpdatedAt — table audit columns. The planner reads
// UpdatedAt to break ties when two sessions have the same
// affinity (newer profile wins).
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
SessionProfile is the per-session warm-context snapshot the planner consults to compute affinity (spec §5.4).
It lives in dolt (table: session_profiles) keyed by SessionID. The schema joins to:
- core.Session via SessionID
- core.Assignment via AssignmentID (denormalised here so the planner doesn't need a second join just to find the bead the session is on)
Decay applies to Concepts and Files: each entry's value is the recency-decayed weight summed over completed beads (spec §5.1).
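One plausible shape for that recency decay, matching the DecayWeight(eventsSinceMostRecent, halfLife) signature in the index, is a half-life curve; this is a sketch of the general technique, and the concrete curve shipped in gm-s47n.2.{2,3,4,5} may differ:

```go
package main

import (
	"fmt"
	"math"
)

// decayWeight halves a contribution's weight every halfLife bead events:
// weight = 0.5^(n / halfLife). A non-positive halfLife is treated as a
// degenerate "no decay" configuration.
func decayWeight(eventsSinceMostRecent, halfLife int) float64 {
	if halfLife <= 0 {
		return 1
	}
	return math.Pow(0.5, float64(eventsSinceMostRecent)/float64(halfLife))
}

func main() {
	fmt.Println(decayWeight(0, 4)) // 1: the most recent event carries full weight
	fmt.Println(decayWeight(4, 4)) // 0.5: one half-life later
	fmt.Println(decayWeight(8, 4)) // 0.25: two half-lives later
}
```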
type WorkspaceCollision ¶
type WorkspaceCollision struct {
// A and B are bead ids. For bead↔live edges, B is the bead
// being blocked by the live session and A is left empty —
// callers branch on LiveSessionID being non-empty rather than
// on string sentinels.
A string `json:"a,omitempty"`
B string `json:"b"`
// Reason is a one-line human explanation: "same repo+branch",
// "same worktree_path", "live session in branch X", etc. The
// .4.3 composer surfaces this verbatim in the coach UI; never
// parse it back.
Reason string `json:"reason"`
// LiveSessionID is non-empty when this edge is a bead↔live
// collision. The corresponding live OperationalContext can be
// looked up via the planner's session resolver.
LiveSessionID string `json:"live_session_id,omitempty"`
}
WorkspaceCollision is a single edge in the workspace-conflict graph. For bead↔bead collisions, A and B are both bead ids. For bead↔live collisions, A is empty, B is the blocked bead, and LiveSessionID names the live session — there are no string sentinels; callers branch on LiveSessionID being non-empty.
func WorkspaceCollisions ¶
func WorkspaceCollisions(beads []BeadTarget, live []OperationalContext) []WorkspaceCollision
WorkspaceCollisions returns every workspace-conflict edge induced by the bead set plus the live operational contexts. Output is sorted by (LiveSessionID, A, B) so equal inputs always produce equal outputs — important for change-detection up the stack.
Two beads (or a bead + live ctx) collide when ANY of the following conditions hold:
 1. Same Repository AND same Branch (both non-empty on each side). An empty Repository or Branch means "the planner couldn't infer the routing target" — those entries are silently skipped rather than treated as wildcard matches, so an under-specified bead doesn't collide with everything.
 2. Same canonicalised WorktreePath (both non-empty). Comparison uses filepath.Clean so /a/b and /a/b/ never miss; symlinks are NOT resolved here — adaptors that need symlink-aware comparison pre-canonicalise their inputs.
Both relations are checked; either one alone is enough to emit an edge. Reason names which one fired (when both, "same repo+branch + same worktree_path").
`live` carries the currently-active sessions. The function only reads .Session.ID + .Workspace; passing a partial OperationalContext (e.g. with nil Profile or Health) is fine.
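The two relations reduce to a small predicate; `routing` and `collide` here are illustrative stand-ins for the real bead/workspace shapes, not this package's API:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// routing is the subset of fields the collision rules read.
type routing struct {
	Repository, Branch, WorktreePath string
}

// collide checks the two documented relations: same repo+branch, or same
// canonicalised worktree path. Empty fields are skipped rather than
// treated as wildcards, so under-specified entries never match everything.
func collide(a, b routing) (bool, string) {
	sameBranch := a.Repository != "" && a.Branch != "" &&
		a.Repository == b.Repository && a.Branch == b.Branch
	samePath := a.WorktreePath != "" && b.WorktreePath != "" &&
		filepath.Clean(a.WorktreePath) == filepath.Clean(b.WorktreePath)
	switch {
	case sameBranch && samePath:
		return true, "same repo+branch + same worktree_path"
	case sameBranch:
		return true, "same repo+branch"
	case samePath:
		return true, "same worktree_path"
	}
	return false, ""
}

func main() {
	a := routing{Repository: "r", Branch: "main", WorktreePath: "/a/b/"}
	b := routing{Repository: "r", Branch: "main", WorktreePath: "/a/b"}
	ok, reason := collide(a, b)
	fmt.Println(ok, reason) // true same repo+branch + same worktree_path
}
```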
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| autodispatch | Package autodispatch implements the auto-dispatch daemon (gm-s47n.6.3, work-planning.md §4 Layer 5.2). |
| claims | Package claims implements the per-rig owner-claim cross-check (gm-v5z2.6, work-planning.md §4 Layer 5.1 inputs + §4 Layer 5.2 gate 2). |
| conflicts | Package conflicts is the bead-set conflict scorer (gm-s47n.4.3). |
| dispatch | Package dispatch persists dispatch decisions — the moment a coach (or, later, the auto-dispatch daemon) picks a bead for a session (gm-s47n.6.2, work-planning.md §5 Layer 5). |
| intent | Package intent implements the operator-pinned session focus directive (gm-v5z2.3, work-planning.md §4 Layer 1.3). |
| retro | Package retro implements the turn-retrospective comparator (gm-s47n.8.1, work-planning.md §7). |
| runway | Package runway implements the per-session runway estimator (gm-v5z2.4, work-planning.md §4 Layer 5.1). |
| scanscheduler | Package scanscheduler owns scheduling re-indexing of the source analysis tool (gm-s47n.9, work-planning.md §8). |
| scoring | Package scoring implements the WP2.0 score components Selection (gm-v5z2.7) composes: |
| selection | Package selection implements the WP2.0 Layer 5 Selection step (gm-v5z2.7, work-planning.md §4 Layer 5). |
| semantic | Package semantic implements the gm-s47n.4.2 SemanticDetector for the conflicts package: two beads conflict semantically when one bead's targets are dependents of the other bead's targets in the SourceAnalysis index. |
| sizecalibration | Package sizecalibration implements the bead-size calibration loop (gm-v5z2.8 part b, work-planning.md §7.6). |
| targets | Package targets implements the target-overlap glob algorithm (gm-s47n.4.1) — given two WorkItems' target glob sets, decides whether they could touch the same concrete file path. |