sqlite

package v0.5.0
Published: Mar 25, 2026 License: Apache-2.0 Imports: 21 Imported by: 0

Documentation

Overview

Package sqlite contains SQLite repository implementations for LiDAR domain types.

All database read/write operations for tracks, observations, scenes, evaluations, and analysis runs belong here rather than in the domain layer packages (L3-L6). This keeps domain logic free of SQL noise and makes it easier to swap storage backends for testing.

Track Snapshot Pattern

Two tables store track measurement data with different lifecycles:

  • lidar_tracks — L5 live tracking buffer, pruned after ~5 minutes. PK: track_id. Serves the real-time TrackAPI endpoint.
  • lidar_run_tracks — L8 immutable snapshots tied to analysis runs. PK: (run_id, track_id). Serves run comparison, labelling, and sweeps.

The 15 shared measurement columns (sensor_id through classification_model) are intentionally duplicated in both tables: live tracks are ephemeral, run-track snapshots are permanent. Go-layer DRY is enforced through:

  • l5tracks.TrackMeasurement — embedded struct in both TrackedObject and RunTrack
  • track_measurement_sql.go — shared SQL column list, scan helpers, and insert args
  • lidar_all_tracks VIEW — UNION ALL for ad-hoc cross-table queries
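The embedding pattern behind this DRY setup can be sketched as follows. Field names here are illustrative stand-ins, not the package's actual 15 measurement columns:

```go
package main

import "fmt"

// TrackMeasurement holds the measurement columns shared by both tables.
// The real struct lives in l5tracks; these fields are placeholders.
type TrackMeasurement struct {
	SensorID string
	X, Y     float64
	SpeedMps float64
}

// TrackedObject models a live row in lidar_tracks (ephemeral, PK: track_id).
type TrackedObject struct {
	TrackID          string
	TrackMeasurement // embedded: one Go definition of the shared columns
}

// RunTrack models an immutable row in lidar_run_tracks (PK: run_id, track_id).
type RunTrack struct {
	RunID            string
	TrackID          string
	TrackMeasurement // same embedded struct keeps both types in sync
}

func main() {
	live := TrackedObject{
		TrackID:          "t1",
		TrackMeasurement: TrackMeasurement{SensorID: "s1", X: 1, Y: 2, SpeedMps: 3.5},
	}
	// Snapshotting a live track copies the shared measurement block wholesale.
	snap := RunTrack{RunID: "r1", TrackID: live.TrackID, TrackMeasurement: live.TrackMeasurement}
	fmt.Println(snap.SensorID, snap.SpeedMps) // fields promoted from the embedded struct
}
```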

See docs/lidar/architecture/lidar-data-layer-model.md §L5/L8 for the full layer context.

See docs/lidar/architecture/lidar-layer-alignment-refactor-review.md §2 for the design rationale.

Index

Constants

View Source
const (
	TrackTentative = l5tracks.TrackTentative
	TrackConfirmed = l5tracks.TrackConfirmed
	TrackDeleted   = l5tracks.TrackDeleted
)

Constants re-exported from l5tracks for track lifecycle states.

Variables

View Source
var ErrNotFound = sql.ErrNoRows

ErrNotFound is returned when a queried record does not exist. Callers outside the storage layer should check against this sentinel instead of importing database/sql for sql.ErrNoRows.

View Source
var HungarianAssign = l5tracks.HungarianAssign

HungarianAssign is re-exported from l5tracks and performs Hungarian-algorithm assignment.

Functions

func ClearRuns

func ClearRuns(db DBClient, sensorID string) error

ClearRuns removes all analysis runs and their associated run tracks for a sensor. This is intended for development/debug resets and should not be exposed in production without auth. The CASCADE foreign key on lidar_run_tracks will automatically delete associated run track records.

func ClearTracks

func ClearTracks(db DBClient, sensorID string) error

ClearTracks removes all tracks, observations, and clusters for a sensor. This is intended for development/debug resets and should not be exposed in production without auth.

func DeleteRun

func DeleteRun(db DBClient, runID string) error

DeleteRun removes a specific analysis run and its associated run tracks. The CASCADE foreign key on lidar_run_tracks will automatically delete associated run track records.

func InsertCluster

func InsertCluster(db DBClient, cluster *WorldCluster) (int64, error)

InsertCluster inserts a cluster into the database and returns its ID.

func InsertTrack

func InsertTrack(exec Executor, track *TrackedObject, frameID string) error

InsertTrack inserts a new track into the database.

func InsertTrackObservation

func InsertTrackObservation(exec Executor, obs *TrackObservation) error

InsertTrackObservation inserts a track observation into the database.

func PruneDeletedTracks

func PruneDeletedTracks(db DBClient, sensorID string, ttl time.Duration) (int64, error)

PruneDeletedTracks removes tracks in the 'deleted' state (and their observations) whose last update is older than the supplied TTL. This prevents the database from growing without bound as the tracker continuously creates and deletes short-lived spurious tracks. Returns the number of tracks pruned and any error encountered.

func RegisterAnalysisRunManager

func RegisterAnalysisRunManager(sensorID string, manager *AnalysisRunManager)

RegisterAnalysisRunManager registers a manager for a sensor ID.

func SetLogWriters

func SetLogWriters(ops, diag, trace io.Writer)

SetLogWriters configures the three logging streams for the sqlite package. Pass nil for any writer to disable that stream.

func UpdateTrack

func UpdateTrack(db DBClient, track *TrackedObject) error

UpdateTrack updates an existing track in the database.

Types

type AnalysisRun

type AnalysisRun struct {
	RunID               string          `json:"run_id"`
	CreatedAt           time.Time       `json:"created_at"`
	CompletedAt         *time.Time      `json:"completed_at,omitempty"`
	SourceType          string          `json:"source_type"` // "pcap" or "live"
	SourcePath          string          `json:"source_path,omitempty"`
	SensorID            string          `json:"sensor_id"`
	RunConfigID         string          `json:"run_config_id,omitempty"`
	RequestedParamSetID string          `json:"requested_param_set_id,omitempty"`
	ParamSetID          string          `json:"param_set_id,omitempty"`
	ConfigHash          string          `json:"config_hash,omitempty"`
	ParamsHash          string          `json:"params_hash,omitempty"`
	SchemaVersion       string          `json:"schema_version,omitempty"`
	ParamSetType        string          `json:"param_set_type,omitempty"`
	BuildVersion        string          `json:"build_version,omitempty"`
	BuildGitSHA         string          `json:"build_git_sha,omitempty"`
	ReplayCaseID        string          `json:"replay_case_id,omitempty"`
	StatisticsJSON      json.RawMessage `json:"statistics_json,omitempty"`
	ExecutionConfig     json.RawMessage `json:"execution_config,omitempty"`
	FrameStartNs        *int64          `json:"frame_start_ns,omitempty"`
	FrameEndNs          *int64          `json:"frame_end_ns,omitempty"`
	DurationSecs        float64         `json:"duration_secs"`
	TotalFrames         int             `json:"total_frames"`
	TotalClusters       int             `json:"total_clusters"`
	TotalTracks         int             `json:"total_tracks"`
	ConfirmedTracks     int             `json:"confirmed_tracks"`
	ProcessingTimeMs    int64           `json:"processing_time_ms"`
	Status              string          `json:"status"` // "running", "completed", "failed"
	ErrorMessage        string          `json:"error_message,omitempty"`
	ParentRunID         string          `json:"parent_run_id,omitempty"`
	Notes               string          `json:"notes,omitempty"`
	VRLogPath           string          `json:"vrlog_path,omitempty"` // Path to VRLOG recording for replay

	// Derived fields (not persisted in DB, computed on retrieval)
	ReplayCaseName string          `json:"replay_case_name,omitempty"` // Derived from SourcePath filename
	LabelRollup    *RunLabelRollup `json:"label_rollup,omitempty"`     // Derived from run-track labels
}

AnalysisRun represents a complete analysis session with immutable run-config provenance.

func (*AnalysisRun) PopulateReplayCaseName

func (r *AnalysisRun) PopulateReplayCaseName()

PopulateReplayCaseName sets ReplayCaseName from SourcePath by extracting the base filename without extension. E.g. "/data/kirk1.pcap" → "kirk1".

type AnalysisRunManager

type AnalysisRunManager struct {
	// contains filtered or unexported fields
}

AnalysisRunManager coordinates analysis run lifecycle and track collection. It is safe for concurrent use and provides hooks for the tracking pipeline.

func GetAnalysisRunManager

func GetAnalysisRunManager(sensorID string) *AnalysisRunManager

GetAnalysisRunManager retrieves the manager for a sensor ID.

func NewAnalysisRunManager

func NewAnalysisRunManager(db DBClient, sensorID string) *AnalysisRunManager

NewAnalysisRunManager creates a new manager for tracking analysis runs.

func NewAnalysisRunManagerDI

func NewAnalysisRunManagerDI(db DBClient, sensorID string) *AnalysisRunManager

NewAnalysisRunManagerDI creates a new manager without registering it in the global registry. Prefer this constructor when wiring dependencies explicitly via pipeline.SensorRuntime.

func (*AnalysisRunManager) CompleteRun

func (m *AnalysisRunManager) CompleteRun() error

CompleteRun finalizes the current analysis run with statistics.

func (*AnalysisRunManager) CurrentRunID

func (m *AnalysisRunManager) CurrentRunID() string

CurrentRunID returns the current run ID, or empty string if no run is active.

func (*AnalysisRunManager) FailRun

func (m *AnalysisRunManager) FailRun(errMsg string) error

FailRun marks the current run as failed with an error message.

func (*AnalysisRunManager) IsRunActive

func (m *AnalysisRunManager) IsRunActive() bool

IsRunActive reports whether an analysis run is currently active.

func (*AnalysisRunManager) RecordClusters

func (m *AnalysisRunManager) RecordClusters(count int)

RecordClusters increments the cluster count for the current run.

func (*AnalysisRunManager) RecordFrame

func (m *AnalysisRunManager) RecordFrame(timestampNs int64)

RecordFrame increments the frame count and tracks the frame timestamp. The timestampNs is the data timestamp (e.g. PCAP packet time), not wall-clock.

func (*AnalysisRunManager) RecordTrack

func (m *AnalysisRunManager) RecordTrack(track *TrackedObject) bool

RecordTrack records a track for the current analysis run. This inserts a RunTrack record and returns true if this is a new track.

func (*AnalysisRunManager) StartRun

func (m *AnalysisRunManager) StartRun(sourcePath string, _ RunParams) (string, error)

StartRun begins a new analysis run for PCAP processing. It returns the run ID that can be used for track association.

func (*AnalysisRunManager) StartRunWithConfig

func (m *AnalysisRunManager) StartRunWithConfig(opts AnalysisRunStartOptions) (string, error)

StartRunWithConfig begins a new analysis run backed by immutable run-config provenance. It records the exact effective config and optional launch intent before execution starts.

type AnalysisRunStartOptions

type AnalysisRunStartOptions struct {
	PreferredRunID      string
	SourceType          string
	SourcePath          string
	SensorID            string
	ParentRunID         string
	ReplayCaseID        string
	RequestedParamSetID string
	RequestedParamsJSON json.RawMessage
	EffectiveConfig     *cfgpkg.TuningConfig
}

AnalysisRunStartOptions captures immutable run-config provenance for an analysis replay.

type AnalysisRunStore

type AnalysisRunStore struct {
	// contains filtered or unexported fields
}

AnalysisRunStore provides persistence for analysis runs.

func NewAnalysisRunStore

func NewAnalysisRunStore(db DBClient) *AnalysisRunStore

NewAnalysisRunStore creates a new AnalysisRunStore.

func (*AnalysisRunStore) CompleteRun

func (s *AnalysisRunStore) CompleteRun(runID string, stats *AnalysisStats) error

CompleteRun marks a run as completed with final statistics.

func (*AnalysisRunStore) GetLabelingProgress

func (s *AnalysisRunStore) GetLabelingProgress(runID string) (total, labeled int, byClass map[string]int, err error)

GetLabelingProgress returns labeling statistics for a run.

func (*AnalysisRunStore) GetLabelingProgressWithRollup

func (s *AnalysisRunStore) GetLabelingProgressWithRollup(runID string) (total, labeled int, byClass map[string]int, rollup *RunLabelRollup, err error)

GetLabelingProgressWithRollup returns labeling statistics plus the current mutually-exclusive rollup for a run.

func (*AnalysisRunStore) GetRun

func (s *AnalysisRunStore) GetRun(runID string) (*AnalysisRun, error)

GetRun retrieves an analysis run by ID.

func (*AnalysisRunStore) GetRunLabelRollup

func (s *AnalysisRunStore) GetRunLabelRollup(runID string) (*RunLabelRollup, error)

GetRunLabelRollup returns the current human labelling state for one run.

func (*AnalysisRunStore) GetRunTrack

func (s *AnalysisRunStore) GetRunTrack(runID, trackID string) (*RunTrack, error)

GetRunTrack retrieves a single track for an analysis run.

func (*AnalysisRunStore) GetRunTracks

func (s *AnalysisRunStore) GetRunTracks(runID string) ([]*RunTrack, error)

GetRunTracks retrieves all tracks for an analysis run.

func (*AnalysisRunStore) GetUnlabeledTracks

func (s *AnalysisRunStore) GetUnlabeledTracks(runID string, limit int) ([]*RunTrack, error)

GetUnlabeledTracks returns tracks that need labeling.

func (*AnalysisRunStore) InsertRun

func (s *AnalysisRunStore) InsertRun(run *AnalysisRun) error

InsertRun creates a new analysis run.

func (*AnalysisRunStore) InsertRunTrack

func (s *AnalysisRunStore) InsertRunTrack(track *RunTrack) error

InsertRunTrack inserts a track for an analysis run. Uses retry logic to handle SQLITE_BUSY errors from concurrent writes.

func (*AnalysisRunStore) ListRuns

func (s *AnalysisRunStore) ListRuns(limit int) ([]*AnalysisRun, error)

ListRuns retrieves recent analysis runs.

func (*AnalysisRunStore) UpdateRunStatus

func (s *AnalysisRunStore) UpdateRunStatus(runID, status, errorMsg string) error

UpdateRunStatus updates the status of an analysis run.

func (*AnalysisRunStore) UpdateRunVRLogPath

func (s *AnalysisRunStore) UpdateRunVRLogPath(runID, vrlogPath string) error

UpdateRunVRLogPath updates the vrlog_path of an analysis run.

func (*AnalysisRunStore) UpdateTrackLabel

func (s *AnalysisRunStore) UpdateTrackLabel(runID, trackID, userLabel, qualityLabel string, confidence float32, labelerID, labelSource string) error

UpdateTrackLabel updates the user label and quality label for a track. Both userLabel and qualityLabel may be empty strings, which are stored as NULL in the database. Values are trimmed and canonicalised before storage. This function does NOT validate enum values: it accepts any string and stores it as-is. Callers (e.g. API handlers) should validate labels first using ValidateUserLabel() and ValidateQualityLabel() from the api package.
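The empty-string-to-NULL convention is easy to get wrong at the SQL boundary. A sketch of one way to implement it with database/sql's NullString (the helper name is hypothetical, not from the package):

```go
package main

import (
	"database/sql"
	"fmt"
	"strings"
)

// nullIfEmpty trims a label and maps "" to SQL NULL, matching the
// store's convention that an empty label clears the column.
func nullIfEmpty(s string) sql.NullString {
	s = strings.TrimSpace(s)
	return sql.NullString{String: s, Valid: s != ""}
}

func main() {
	for _, in := range []string{"  pedestrian ", ""} {
		v := nullIfEmpty(in)
		if v.Valid {
			fmt.Printf("store %q\n", v.String)
		} else {
			fmt.Println("store NULL")
		}
	}
}
```

Passing the sql.NullString directly as a query argument writes NULL when Valid is false, so no per-column conditional SQL is needed.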

func (*AnalysisRunStore) UpdateTrackQualityFlags

func (s *AnalysisRunStore) UpdateTrackQualityFlags(runID, trackID string, isSplit, isMerge bool, linkedIDs []string) error

UpdateTrackQualityFlags updates the split/merge flags for a track.

type AnalysisStats

type AnalysisStats struct {
	DurationSecs     float64
	TotalFrames      int
	TotalClusters    int
	TotalTracks      int
	ConfirmedTracks  int
	ProcessingTimeMs int64
	CompletedAt      time.Time
	FrameStartNs     int64
	FrameEndNs       int64
}

AnalysisStats holds statistics for a completed analysis run.

type BackgroundParams

type BackgroundParams = l3grid.BackgroundParams

BackgroundParams contains parameters for background subtraction from L3 (grid layer).

type BackgroundParamsExport

type BackgroundParamsExport struct {
	BackgroundUpdateFraction       float32 `json:"background_update_fraction"`
	ClosenessSensitivityMultiplier float32 `json:"closeness_sensitivity_multiplier"`
	SafetyMarginMeters             float32 `json:"safety_margin_meters"`
	NeighborConfirmationCount      int     `json:"neighbor_confirmation_count"`
	NoiseRelativeFraction          float32 `json:"noise_relative_fraction"`
	SeedFromFirstObservation       bool    `json:"seed_from_first_observation"`
	FreezeDurationNanos            int64   `json:"freeze_duration_nanos"`
}

BackgroundParamsExport is the JSON-serializable background params.

func FromBackgroundParams

func FromBackgroundParams(p BackgroundParams) BackgroundParamsExport

FromBackgroundParams creates export params from BackgroundParams.

type ClassificationParamsExport

type ClassificationParamsExport struct {
	ModelType  string                 `json:"model_type"`
	Thresholds map[string]interface{} `json:"thresholds,omitempty"`
}

ClassificationParamsExport is the JSON-serializable classification params.

type ClusteringParamsExport

type ClusteringParamsExport struct {
	Eps      float64 `json:"eps"`
	MinPts   int     `json:"min_pts"`
	CellSize float64 `json:"cell_size,omitempty"`
}

ClusteringParamsExport is the JSON-serializable clustering params.

func FromDBSCANParams

func FromDBSCANParams(p DBSCANParams) ClusteringParamsExport

FromDBSCANParams creates export params from DBSCANParams.

type DBClient

type DBClient interface {
	Exec(query string, args ...any) (sql.Result, error)
	Query(query string, args ...any) (*sql.Rows, error)
	QueryRow(query string, args ...any) *sql.Row
	Begin() (*sql.Tx, error)
}

DBClient is the minimal query/exec surface shared by *sql.DB and *db.DB. It standardises store constructors on behaviour rather than a concrete type.

type DBSCANParams

type DBSCANParams = l4perception.DBSCANParams

DBSCANParams contains parameters for DBSCAN clustering from L4 (perception layer).

type Evaluation

type Evaluation struct {
	EvaluationID        string  `json:"evaluation_id"`
	ReplayCaseID        string  `json:"replay_case_id"`
	ReferenceRunID      string  `json:"reference_run_id"`
	CandidateRunID      string  `json:"candidate_run_id"`
	DetectionRate       float64 `json:"detection_rate"`
	Fragmentation       float64 `json:"fragmentation"`
	FalsePositiveRate   float64 `json:"false_positive_rate"`
	VelocityCoverage    float64 `json:"velocity_coverage"`
	QualityPremium      float64 `json:"quality_premium"`
	TruncationRate      float64 `json:"truncation_rate"`
	VelocityNoiseRate   float64 `json:"velocity_noise_rate"`
	StoppedRecoveryRate float64 `json:"stopped_recovery_rate"`
	CompositeScore      float64 `json:"composite_score"`
	MatchedCount        int     `json:"matched_count"`
	ReferenceCount      int     `json:"reference_count"`
	CandidateCount      int     `json:"candidate_count"`
	CreatedAt           int64   `json:"created_at"`
}

Evaluation represents a persisted ground truth evaluation result comparing a candidate analysis run against a reference run for a given scene.

type EvaluationStore

type EvaluationStore struct {
	// contains filtered or unexported fields
}

EvaluationStore provides persistence for ground truth evaluation results.

func NewEvaluationStore

func NewEvaluationStore(db DBClient) *EvaluationStore

NewEvaluationStore creates a new EvaluationStore.

func (*EvaluationStore) Delete

func (s *EvaluationStore) Delete(evaluationID string) error

Delete removes an evaluation by ID.

func (*EvaluationStore) Get

func (s *EvaluationStore) Get(evaluationID string) (*Evaluation, error)

Get returns a single evaluation by ID.

func (*EvaluationStore) Insert

func (s *EvaluationStore) Insert(eval *Evaluation) error

Insert persists a new evaluation result. If EvaluationID is empty, a UUID is generated.

func (*EvaluationStore) ListByScene

func (s *EvaluationStore) ListByScene(sceneID string) ([]*Evaluation, error)

ListByScene returns all evaluations for a given scene, ordered by creation time descending.

type Executor

type Executor interface {
	Exec(query string, args ...any) (sql.Result, error)
}

Executor is satisfied by both *sql.DB and *sql.Tx, allowing callers to pass either directly or inside a batching transaction.

type ImmutableRunConfigBackfillResult

type ImmutableRunConfigBackfillResult struct {
	RunsSeen           int
	RunsUpdated        int
	RunsSkipped        int
	ReplayCasesSeen    int
	ReplayCasesUpdated int
	ReplayCasesSkipped int
}

ImmutableRunConfigBackfillResult holds backfill counters.

func BackfillImmutableRunConfigReferences

func BackfillImmutableRunConfigReferences(db DBClient, _ bool) (*ImmutableRunConfigBackfillResult, error)

BackfillImmutableRunConfigReferences is a no-op after migration 000036 dropped the legacy params_json and optimal_params_json columns that the backfill previously read from.

type LabelFilter

type LabelFilter struct {
	TrackID          string
	ClassLabel       string
	StartTimestampNs string
	EndTimestampNs   string
	Limit            int
}

LabelFilter describes optional list filters for lidar labels. Timestamp fields stay as strings so HTTP callers can preserve existing SQLite coercion semantics for query parameters.

type LabelStore

type LabelStore struct {
	// contains filtered or unexported fields
}

LabelStore provides CRUD and export access for lidar_track_annotations.

func NewLabelStore

func NewLabelStore(db DBClient) *LabelStore

NewLabelStore creates a new LabelStore.

func (*LabelStore) CreateLabel

func (s *LabelStore) CreateLabel(label *LidarLabel) error

CreateLabel inserts a new manual label.

func (*LabelStore) DeleteLabel

func (s *LabelStore) DeleteLabel(labelID string) error

DeleteLabel removes a label by ID.

func (*LabelStore) ExportLabels

func (s *LabelStore) ExportLabels() ([]LidarLabel, error)

ExportLabels returns all labels ordered oldest-first for stable export output.

func (*LabelStore) GetLabel

func (s *LabelStore) GetLabel(labelID string) (*LidarLabel, error)

GetLabel returns a label by ID.

func (*LabelStore) ListLabels

func (s *LabelStore) ListLabels(filter LabelFilter) ([]LidarLabel, error)

ListLabels returns labels filtered by the supplied fields, ordered newest-first.

func (*LabelStore) UpdateLabel

func (s *LabelStore) UpdateLabel(labelID string, updates *LidarLabel) error

UpdateLabel updates explicitly provided fields on a label.

type LidarLabel

type LidarLabel struct {
	LabelID          string   `json:"label_id"`
	TrackID          string   `json:"track_id"`
	ClassLabel       string   `json:"class_label"`
	StartTimestampNs int64    `json:"start_timestamp_ns"`
	EndTimestampNs   *int64   `json:"end_timestamp_ns,omitempty"`
	Confidence       *float32 `json:"confidence,omitempty"`
	CreatedBy        *string  `json:"created_by,omitempty"`
	CreatedAtNs      int64    `json:"created_at_ns"`
	UpdatedAtNs      *int64   `json:"updated_at_ns,omitempty"`
	Notes            *string  `json:"notes,omitempty"`
	ReplayCaseID     *string  `json:"replay_case_id,omitempty"`
	SourceFile       *string  `json:"source_file,omitempty"`
}

LidarLabel represents a manual label applied to a track for training or validation.

type MissedRegion

type MissedRegion struct {
	RegionID      string  `json:"region_id"`
	RunID         string  `json:"run_id"`
	CenterX       float64 `json:"center_x"`
	CenterY       float64 `json:"center_y"`
	RadiusM       float64 `json:"radius_m"`
	TimeStartNs   int64   `json:"time_start_ns"`
	TimeEndNs     int64   `json:"time_end_ns"`
	ExpectedLabel string  `json:"expected_label"`
	LabelerID     string  `json:"labeler_id,omitempty"`
	LabeledAt     *int64  `json:"labeled_at,omitempty"`
	Notes         string  `json:"notes,omitempty"`
}

MissedRegion represents an area where an object should have been tracked but was not detected by the tracker. Used for ground truth evaluation.

type MissedRegionStore

type MissedRegionStore struct {
	// contains filtered or unexported fields
}

MissedRegionStore provides persistence for missed region annotations.

func NewMissedRegionStore

func NewMissedRegionStore(db DBClient) *MissedRegionStore

NewMissedRegionStore creates a new MissedRegionStore.

func (*MissedRegionStore) Delete

func (s *MissedRegionStore) Delete(regionID string) error

Delete removes a missed region by ID.

func (*MissedRegionStore) Insert

func (s *MissedRegionStore) Insert(region *MissedRegion) error

Insert creates a new missed region in the database. If region.RegionID is empty, a new UUID is generated.

func (*MissedRegionStore) ListByRun

func (s *MissedRegionStore) ListByRun(runID string) ([]*MissedRegion, error)

ListByRun returns all missed regions for a given run.

type ReplayCase

type ReplayCase struct {
	ReplayCaseID             string          `json:"replay_case_id"`
	SensorID                 string          `json:"sensor_id"`
	PCAPFile                 string          `json:"pcap_file"`
	PCAPStartSecs            *float64        `json:"pcap_start_secs,omitempty"`
	PCAPDurationSecs         *float64        `json:"pcap_duration_secs,omitempty"`
	Description              string          `json:"description,omitempty"`
	ReferenceRunID           string          `json:"reference_run_id,omitempty"`
	OptimalParamsJSON        json.RawMessage `json:"optimal_params_json,omitempty"`
	RecommendedParamSetID    string          `json:"recommended_param_set_id,omitempty"`
	RecommendedParamsHash    string          `json:"recommended_params_hash,omitempty"`
	RecommendedSchemaVersion string          `json:"recommended_schema_version,omitempty"`
	RecommendedParamSetType  string          `json:"recommended_param_set_type,omitempty"`
	RecommendedParams        json.RawMessage `json:"recommended_params,omitempty"`
	CreatedAtNs              int64           `json:"created_at_ns"`
	UpdatedAtNs              *int64          `json:"updated_at_ns,omitempty"`
}

ReplayCase represents a LiDAR evaluation replay case tying a PCAP to a sensor and parameters. A replay case is a specific environment captured in a PCAP with optional reference ground truth.

type ReplayCaseStore

type ReplayCaseStore struct {
	// contains filtered or unexported fields
}

ReplayCaseStore provides persistence for LiDAR evaluation replay cases.

func NewReplayCaseStore

func NewReplayCaseStore(db DBClient) *ReplayCaseStore

NewReplayCaseStore creates a new ReplayCaseStore.

func (*ReplayCaseStore) DeleteScene

func (s *ReplayCaseStore) DeleteScene(sceneID string) error

DeleteScene deletes a replay case by ID.

func (*ReplayCaseStore) GetScene

func (s *ReplayCaseStore) GetScene(sceneID string) (*ReplayCase, error)

GetScene retrieves a replay case by ID.

func (*ReplayCaseStore) InsertScene

func (s *ReplayCaseStore) InsertScene(scene *ReplayCase) error

InsertScene creates a new replay case in the database. If scene.ReplayCaseID is empty, a new UUID is generated.

func (*ReplayCaseStore) ListScenes

func (s *ReplayCaseStore) ListScenes(sensorID string) ([]*ReplayCase, error)

ListScenes retrieves all replay cases, optionally filtered by sensor_id.

func (*ReplayCaseStore) SetOptimalParams

func (s *ReplayCaseStore) SetOptimalParams(sceneID string, paramsJSON json.RawMessage) error

SetOptimalParams sets the optimal parameters JSON for a replay case.

func (*ReplayCaseStore) SetReferenceRun

func (s *ReplayCaseStore) SetReferenceRun(sceneID, runID string) error

SetReferenceRun sets the reference run ID for a replay case.

func (*ReplayCaseStore) UpdateScene

func (s *ReplayCaseStore) UpdateScene(scene *ReplayCase) error

UpdateScene updates an existing replay case's mutable fields for the given scene ID. Updates description, reference_run_id, pcap_start_secs, and pcap_duration_secs; empty strings are stored as NULL, which clears those fields.

type RunComparison

type RunComparison = l8analytics.RunComparison

RunComparison shows differences between two analysis runs. Canonical type is in l8analytics.

func CompareRuns

func CompareRuns(store *AnalysisRunStore, run1ID, run2ID string) (*RunComparison, error)

CompareRuns compares two analysis runs by matching their tracks using temporal IoU and spatial proximity. It populates RunComparison with matched tracks, split candidates, merge candidates, and tracks unique to each run.
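The temporal-IoU primitive behind track matching can be sketched as below. This is an assumed formulation for illustration; the package's exact formula and thresholds may differ:

```go
package main

import "fmt"

// temporalIoU computes intersection-over-union of two time intervals
// [aStart, aEnd] and [bStart, bEnd], in nanoseconds. Disjoint intervals
// score 0; identical intervals score 1.
func temporalIoU(aStart, aEnd, bStart, bEnd int64) float64 {
	inter := min64(aEnd, bEnd) - max64(aStart, bStart)
	if inter <= 0 {
		return 0
	}
	union := max64(aEnd, bEnd) - min64(aStart, bStart)
	return float64(inter) / float64(union)
}

func min64(a, b int64) int64 {
	if a < b {
		return a
	}
	return b
}

func max64(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Two tracks overlapping for 5s of a combined 15s span: IoU = 5/15.
	fmt.Printf("%.3f\n", temporalIoU(0, 10_000_000_000, 5_000_000_000, 15_000_000_000))
}
```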

type RunLabelRollup

type RunLabelRollup struct {
	Total      int `json:"total"`
	Classified int `json:"classified"`
	TaggedOnly int `json:"tagged_only"`
	Unlabelled int `json:"unlabelled"`
}

RunLabelRollup summarises the current human labelling state for a run. Counts are mutually exclusive and always sum to Total.

func (*RunLabelRollup) LabelledCount

func (r *RunLabelRollup) LabelledCount() int

LabelledCount returns tracks with any human-applied label state.

type RunParams

type RunParams struct {
	Version        string                     `json:"version"`
	Background     BackgroundParamsExport     `json:"background"`
	Clustering     ClusteringParamsExport     `json:"clustering"`
	Tracking       TrackingParamsExport       `json:"tracking"`
	Classification ClassificationParamsExport `json:"classification,omitempty"`
}

RunParams captures all configurable parameters for reproducibility.

func DefaultRunParams

func DefaultRunParams() RunParams

DefaultRunParams returns run parameters loaded from the canonical tuning defaults file (config/tuning.defaults.json). It panics if the file cannot be found, so it is intended for tests and tools only.

func ParseRunParams

func ParseRunParams(data json.RawMessage) (*RunParams, error)

ParseRunParams deserializes RunParams from JSON.

func RunParamsFromTuning

func RunParamsFromTuning(cfg *config.TuningConfig) RunParams

RunParamsFromTuning builds RunParams from a loaded TuningConfig. Use this in production code where the TuningConfig is already loaded.

func (*RunParams) ToJSON

func (p *RunParams) ToJSON() (json.RawMessage, error)

ToJSON serializes RunParams to JSON.

type RunTrack

type RunTrack struct {
	RunID   string `json:"run_id"`
	TrackID string `json:"track_id"`

	// Shared measurement fields (same 15 columns as lidar_tracks)
	TrackMeasurement

	// User labels (for ML training)
	UserLabel       string  `json:"user_label,omitempty"`
	LabelConfidence float32 `json:"label_confidence,omitempty"`
	LabelerID       string  `json:"labeler_id,omitempty"`
	LabeledAt       int64   `json:"labeled_at,omitempty"`
	QualityLabel    string  `json:"quality_label,omitempty"`
	LabelSource     string  `json:"label_source,omitempty"` // human_manual, carried_over, auto_suggested

	// Track quality flags
	IsSplitCandidate bool     `json:"is_split_candidate,omitempty"`
	IsMergeCandidate bool     `json:"is_merge_candidate,omitempty"`
	LinkedTrackIDs   []string `json:"linked_track_ids,omitempty"`
}

RunTrack represents a track within a specific analysis run. This extends TrackedObject with run-specific fields like user labels.

func RunTrackFromTrackedObject

func RunTrackFromTrackedObject(runID string, t *TrackedObject) *RunTrack

RunTrackFromTrackedObject creates a RunTrack from a TrackedObject. The shared TrackMeasurement is copied directly.

type SQLDB

type SQLDB = sql.DB

SQLDB is a type alias for sql.DB, exported so that packages outside the storage layer can reference the database connection type without importing database/sql directly. This keeps the database/sql import boundary narrow: only internal/db and the internal/lidar/storage tree should import it.

type SQLTx

type SQLTx = sql.Tx

SQLTx is a type alias for sql.Tx, exported so that packages outside the storage layer can reference the transaction type without importing database/sql directly.

type SuspendedSweepInfo

type SuspendedSweepInfo struct {
	SweepID         string    `json:"sweep_id"`
	SensorID        string    `json:"sensor_id"`
	CheckpointRound int       `json:"checkpoint_round"`
	StartedAt       time.Time `json:"started_at"`
}

SuspendedSweepInfo is a lightweight summary of a suspended sweep for the resume UI. It omits large JSON blobs to keep the response compact.

type SweepRecord

type SweepRecord struct {
	ID                        int64           `json:"id"`
	SweepID                   string          `json:"sweep_id"`
	SensorID                  string          `json:"sensor_id"`
	Mode                      string          `json:"mode"`
	Status                    string          `json:"status"`
	Request                   json.RawMessage `json:"request"`
	Results                   json.RawMessage `json:"results,omitempty"`
	Charts                    json.RawMessage `json:"charts,omitempty"`
	Recommendation            json.RawMessage `json:"recommendation,omitempty"`
	RoundResults              json.RawMessage `json:"round_results,omitempty"`
	Error                     string          `json:"error,omitempty"`
	StartedAt                 time.Time       `json:"started_at"`
	CompletedAt               *time.Time      `json:"completed_at,omitempty"`
	ObjectiveName             string          `json:"objective_name,omitempty"`
	ObjectiveVersion          string          `json:"objective_version,omitempty"`
	TransformPipelineName     string          `json:"transform_pipeline_name,omitempty"`
	TransformPipelineVersion  string          `json:"transform_pipeline_version,omitempty"`
	ScoreComponents           json.RawMessage `json:"score_components,omitempty"`
	RecommendationExplanation json.RawMessage `json:"recommendation_explanation,omitempty"`
	LabelProvenanceSummary    json.RawMessage `json:"label_provenance_summary,omitempty"`
}

SweepRecord represents a persisted sweep or auto-tune run.

type SweepStore

type SweepStore struct {
	// contains filtered or unexported fields
}

SweepStore provides persistence for sweep and auto-tune results.

func NewSweepStore

func NewSweepStore(db DBClient) *SweepStore

NewSweepStore creates a new SweepStore.

func (*SweepStore) DeleteSweep

func (s *SweepStore) DeleteSweep(sweepID string) error

DeleteSweep removes a sweep record.

func (*SweepStore) GetSuspendedSweep

func (s *SweepStore) GetSuspendedSweep() (string, int, error)

GetSuspendedSweep implements sweep.SweepPersister. It returns the most recent suspended sweep's ID and checkpoint round, or ("", 0, nil) when none exists.
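Because the "nothing to resume" case is signalled by an empty ID rather than an error, callers must handle three outcomes. A minimal sketch of that caller pattern, using a hypothetical stub in place of a real *SweepStore (only the GetSuspendedSweep signature is taken from this package):

```go
package main

import "fmt"

// suspendedSweepSource mirrors the GetSuspendedSweep signature; the stub
// below stands in for a real *SweepStore and is purely illustrative.
type suspendedSweepSource interface {
	GetSuspendedSweep() (string, int, error)
}

type stubStore struct {
	id    string
	round int
}

func (s stubStore) GetSuspendedSweep() (string, int, error) { return s.id, s.round, nil }

// resumeIfSuspended shows the three-way outcome callers must handle:
// an error, no suspended sweep (empty ID), or a sweep to resume.
func resumeIfSuspended(src suspendedSweepSource) (string, error) {
	id, round, err := src.GetSuspendedSweep()
	if err != nil {
		return "", err
	}
	if id == "" { // ("", 0, nil) means nothing to resume — not an error
		return "no suspended sweep", nil
	}
	return fmt.Sprintf("resume %s from round %d", id, round), nil
}

func main() {
	msg, _ := resumeIfSuspended(stubStore{})
	fmt.Println(msg)
	msg, _ = resumeIfSuspended(stubStore{id: "sweep-42", round: 3})
	fmt.Println(msg)
}
```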

func (*SweepStore) GetSuspendedSweepInfo

func (s *SweepStore) GetSuspendedSweepInfo() (*SuspendedSweepInfo, error)

GetSuspendedSweepInfo returns full suspended sweep info for the HTTP handler. Returns nil when no suspended sweep exists.

func (*SweepStore) GetSweep

func (s *SweepStore) GetSweep(sweepID string) (*SweepRecord, error)

GetSweep returns a single sweep record by ID.

func (*SweepStore) InsertSweep

func (s *SweepStore) InsertSweep(record SweepRecord) error

InsertSweep creates a new sweep record when a sweep starts.

func (*SweepStore) ListSweeps

func (s *SweepStore) ListSweeps(sensorID string, limit int) ([]SweepSummary, error)

ListSweeps returns recent sweeps for a sensor, ordered by most recent first. The results omit large JSON blobs (results, charts, etc.) for performance.

func (*SweepStore) LoadSweepCheckpoint

func (s *SweepStore) LoadSweepCheckpoint(sweepID string) (round int, bounds, results, request json.RawMessage, err error)

LoadSweepCheckpoint loads a checkpoint for resuming a suspended auto-tune.

func (*SweepStore) RecoverOrphanedSweeps

func (s *SweepStore) RecoverOrphanedSweeps() (int64, error)

RecoverOrphanedSweeps marks any "running" sweeps as "failed" on startup. This handles the case where the server was restarted mid-sweep, leaving database records stuck in "running" status with no in-memory counterpart.

func (*SweepStore) SaveSweepCheckpoint

func (s *SweepStore) SaveSweepCheckpoint(sweepID string, round int, bounds, results, request json.RawMessage) error

SaveSweepCheckpoint persists a mid-run checkpoint so a suspended auto-tune can be resumed.

func (*SweepStore) SaveSweepComplete

func (s *SweepStore) SaveSweepComplete(sweepID, status string, results, recommendation, roundResults json.RawMessage, completedAt time.Time, errMsg string, scoreComponents, recommendationExplanation, labelProvenanceSummary json.RawMessage, transformPipelineName, transformPipelineVersion string) error

SaveSweepComplete implements sweep.SweepPersister for the Runner/AutoTuner integration.

func (*SweepStore) SaveSweepStart

func (s *SweepStore) SaveSweepStart(sweepID, sensorID, mode string, request json.RawMessage, startedAt time.Time, objectiveName, objectiveVersion string) error

SaveSweepStart implements sweep.SweepPersister for the Runner/AutoTuner integration.

func (*SweepStore) UpdateSweepCharts

func (s *SweepStore) UpdateSweepCharts(sweepID string, charts json.RawMessage) error

UpdateSweepCharts saves chart configuration for a sweep.

func (*SweepStore) UpdateSweepResults

func (s *SweepStore) UpdateSweepResults(sweepID, status string, results, recommendation, roundResults json.RawMessage, completedAt *time.Time, errMsg string, scoreComponents, recommendationExplanation, labelProvenanceSummary json.RawMessage, transformPipelineName, transformPipelineVersion string) error

UpdateSweepResults updates a sweep record with results on completion or error.

type SweepSummary

type SweepSummary struct {
	ID          int64      `json:"id"`
	SweepID     string     `json:"sweep_id"`
	SensorID    string     `json:"sensor_id"`
	Mode        string     `json:"mode"`
	Status      string     `json:"status"`
	Error       string     `json:"error,omitempty"`
	StartedAt   time.Time  `json:"started_at"`
	CompletedAt *time.Time `json:"completed_at,omitempty"`
}

SweepSummary is a lightweight version of SweepRecord for list views (omits large JSON blobs).

type TrackMatch

type TrackMatch = l8analytics.TrackMatch

TrackMatch represents a matched track between two runs.

type TrackMeasurement

type TrackMeasurement = l5tracks.TrackMeasurement

TrackMeasurement contains the measurement fields shared between TrackedObject (live tracks) and RunTrack (analysis snapshots).
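The Track Snapshot Pattern described in the overview hinges on this embedding: both the live and snapshot types share one measurement struct, so copying a snapshot is a single assignment. A cut-down sketch with simplified local stand-ins (the real TrackMeasurement carries all 15 shared columns):

```go
package main

import "fmt"

// measurement is a cut-down stand-in for l5tracks.TrackMeasurement
// (the real struct carries all 15 shared columns).
type measurement struct {
	SensorID string
	SpeedMps float32
}

// liveTrack mirrors TrackedObject: ephemeral, keyed by track ID.
type liveTrack struct {
	TrackID     string
	measurement // embedded: shared fields defined once
}

// runTrack mirrors RunTrack: an immutable snapshot keyed by (run, track).
type runTrack struct {
	RunID       string
	TrackID     string
	measurement
}

// snapshot copies every shared measurement field in one assignment,
// which is the point of the embedded-struct pattern.
func snapshot(runID string, t liveTrack) runTrack {
	return runTrack{RunID: runID, TrackID: t.TrackID, measurement: t.measurement}
}

func main() {
	live := liveTrack{TrackID: "t1", measurement: measurement{SensorID: "s1", SpeedMps: 4.2}}
	snap := snapshot("run-1", live)
	fmt.Println(snap.SensorID, snap.SpeedMps)
}
```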

type TrackMerge

type TrackMerge = l8analytics.TrackMerge

TrackMerge represents a suspected track merge between runs.

type TrackObservation

type TrackObservation struct {
	TrackID     string
	TSUnixNanos int64
	FrameID     string

	// Position (world frame)
	X, Y, Z float32

	// Velocity (world frame)
	VelocityX, VelocityY float32
	SpeedMps             float32
	HeadingRad           float32

	// Shape
	BoundingBoxLength float32
	BoundingBoxWidth  float32
	BoundingBoxHeight float32
	HeightP95         float32
	IntensityMean     float32
}

TrackObservation represents a single observation of a track at a point in time.
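SpeedMps and HeadingRad sit alongside the world-frame velocity components. If they are derived from VelocityX/VelocityY in the usual way — an assumption for illustration, not something this package documents — the relationship looks like:

```go
package main

import (
	"fmt"
	"math"
)

// fillKinematics derives SpeedMps and HeadingRad from the world-frame
// velocity components. This relationship is assumed for illustration;
// the tracker may populate these fields differently.
func fillKinematics(vx, vy float32) (speedMps, headingRad float32) {
	speedMps = float32(math.Hypot(float64(vx), float64(vy)))   // magnitude of velocity
	headingRad = float32(math.Atan2(float64(vy), float64(vx))) // direction in radians
	return
}

func main() {
	speed, heading := fillKinematics(3, 4)
	fmt.Printf("%.1f m/s at %.2f rad\n", speed, heading)
}
```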

func GetTrackObservations

func GetTrackObservations(db DBClient, trackID string, limit int) ([]*TrackObservation, error)

GetTrackObservations retrieves observations for a track.

func GetTrackObservationsInRange

func GetTrackObservationsInRange(db DBClient, sensorID string, startNanos, endNanos int64, limit int, trackID string) ([]*TrackObservation, error)

GetTrackObservationsInRange returns observations for a sensor within a time window (inclusive at both ends). It joins against the tracks table to scope results by sensor.

type TrackPoint

type TrackPoint = l5tracks.TrackPoint

TrackPoint represents a single point associated with a track.

type TrackSplit

type TrackSplit = l8analytics.TrackSplit

TrackSplit represents a suspected track split between runs.

type TrackState

type TrackState = l5tracks.TrackState

TrackState represents the Kalman filter state of a tracked object.

type TrackStore

type TrackStore interface {
	InsertCluster(cluster *WorldCluster) (int64, error)
	InsertTrack(track *TrackedObject, frameID string) error
	UpdateTrack(track *TrackedObject) error
	InsertTrackObservation(obs *TrackObservation) error
	ClearTracks(sensorID string) error
	GetTrack(trackID string) (*TrackedObject, error)
	GetActiveTracks(sensorID string, state string) ([]*TrackedObject, error)
	GetTracksInRange(sensorID string, state string, startNanos, endNanos int64, limit int) ([]*TrackedObject, error)
	GetTrackObservations(trackID string, limit int) ([]*TrackObservation, error)
	GetTrackObservationsInRange(sensorID string, startNanos, endNanos int64, limit int, trackID string) ([]*TrackObservation, error)
	GetRecentClusters(sensorID string, startNanos, endNanos int64, limit int) ([]*WorldCluster, error)
}

TrackStore defines the interface for track persistence operations.
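Consumers rarely need all of TrackStore. One way to keep them testable is to depend on a locally declared narrow interface that any TrackStore implementation satisfies automatically. A sketch under that assumption, with simplified local stand-ins for the package's types:

```go
package main

import "fmt"

// observation is a simplified local stand-in for TrackObservation.
type observation struct {
	TrackID  string
	SpeedMps float32
}

// observationReader is a narrowed view of TrackStore: a consumer that
// only reads observations can depend on this one method, and any full
// TrackStore implementation satisfies it automatically.
type observationReader interface {
	GetTrackObservations(trackID string, limit int) ([]*observation, error)
}

// fakeStore is an in-memory implementation for testing the consumer.
type fakeStore struct{ obs []*observation }

func (f fakeStore) GetTrackObservations(trackID string, limit int) ([]*observation, error) {
	out := []*observation{}
	for _, o := range f.obs {
		if o.TrackID == trackID && len(out) < limit {
			out = append(out, o)
		}
	}
	return out, nil
}

// meanSpeed is an example consumer written against the narrow interface.
func meanSpeed(r observationReader, trackID string) (float32, error) {
	obs, err := r.GetTrackObservations(trackID, 100)
	if err != nil || len(obs) == 0 {
		return 0, err
	}
	var sum float32
	for _, o := range obs {
		sum += o.SpeedMps
	}
	return sum / float32(len(obs)), nil
}

func main() {
	store := fakeStore{obs: []*observation{
		{TrackID: "t1", SpeedMps: 4}, {TrackID: "t1", SpeedMps: 6}, {TrackID: "t2", SpeedMps: 9},
	}}
	m, _ := meanSpeed(store, "t1")
	fmt.Println(m) // mean of 4 and 6
}
```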

type TrackedObject

type TrackedObject = l5tracks.TrackedObject

TrackedObject represents a tracked object/vehicle from L5 (tracking layer).

func GetActiveTracks

func GetActiveTracks(db DBClient, sensorID string, state string) ([]*TrackedObject, error)

GetActiveTracks retrieves active tracks from the database. If state is empty, returns all non-deleted tracks.

func GetTracksInRange

func GetTracksInRange(db DBClient, sensorID string, state string, startNanos, endNanos int64, limit int) ([]*TrackedObject, error)

GetTracksInRange retrieves tracks whose lifespan overlaps the given time window (nanoseconds). A track is included if its start is on/before endNanos and its end (or start when end is NULL) is on/after startNanos. Deleted tracks are excluded by default unless state explicitly requests them.
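The inclusion rule above can be stated as a predicate. The actual filter runs in SQL, but this mirrors the documented logic, with a *int64 modelling the NULL end time of a still-active track:

```go
package main

import "fmt"

// overlaps implements the documented inclusion rule for GetTracksInRange:
// a track is included when its start is on/before endNanos and its end
// (or its start, when the end is still NULL) is on/after startNanos.
func overlaps(startNanos, endNanos, trackStart int64, trackEnd *int64) bool {
	effectiveEnd := trackStart // NULL end: fall back to the start time
	if trackEnd != nil {
		effectiveEnd = *trackEnd
	}
	return trackStart <= endNanos && effectiveEnd >= startNanos
}

func main() {
	end := int64(150)
	fmt.Println(overlaps(100, 200, 50, &end)) // finished inside the window: included
	fmt.Println(overlaps(100, 200, 50, nil))  // open-ended, started before window: excluded
	fmt.Println(overlaps(100, 200, 120, nil)) // open-ended, started inside window: included
}
```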

type TrackerConfig

type TrackerConfig = l5tracks.TrackerConfig

TrackerConfig contains configuration for the object tracker from L5 (tracking layer).

type TrackingParamsExport

type TrackingParamsExport struct {
	MaxTracks               int           `json:"max_tracks"`
	MaxMisses               int           `json:"max_misses"`
	HitsToConfirm           int           `json:"hits_to_confirm"`
	GatingDistanceSquared   float32       `json:"gating_distance_squared"`
	ProcessNoisePos         float32       `json:"process_noise_pos"`
	ProcessNoiseVel         float32       `json:"process_noise_vel"`
	MeasurementNoise        float32       `json:"measurement_noise"`
	DeletedTrackGracePeriod time.Duration `json:"deleted_track_grace_period_nanos"`
}

TrackingParamsExport is the JSON-serializable form of the tracking parameters.

func FromTrackerConfig

func FromTrackerConfig(c TrackerConfig) TrackingParamsExport

FromTrackerConfig creates export params from TrackerConfig.

type WorldCluster

type WorldCluster = l4perception.WorldCluster

WorldCluster represents a cluster of points in world coordinates from L4 (perception layer).

func GetRecentClusters

func GetRecentClusters(db DBClient, sensorID string, startNanos, endNanos int64, limit int) ([]*WorldCluster, error)

GetRecentClusters retrieves recent clusters from the database.

type WorldPoint

type WorldPoint = l4perception.WorldPoint

WorldPoint represents a single point in world coordinates from L4 (perception layer).
