sqlite

package
Published: Mar 6, 2026 License: MIT Imports: 31 Imported by: 0

Documentation

Overview

Package sqlite provides the blocked_issues_cache optimization for GetReadyWork performance.

Performance Impact

GetReadyWork originally used a recursive CTE to compute blocked issues on every query, taking ~752ms on a 10K issue database. With the cache, queries complete in ~29ms (25x speedup) by using a simple NOT EXISTS check against the materialized cache table.
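The cached check reduces to a single anti-join. The sketch below assumes an issues table with id and status columns (only blocked_issues_cache and its issue_id column are given in this doc); it is illustrative, not the package's actual query:

```go
package main

// readyWorkSQL sketches the shape of the cached readiness check: instead of
// re-running a recursive CTE per query, GetReadyWork can filter open issues
// with one NOT EXISTS probe against the materialized cache table.
// The issues table and its columns are assumptions for illustration.
const readyWorkSQL = `
SELECT i.id
FROM issues i
WHERE i.status = 'open'
  AND NOT EXISTS (
      SELECT 1
      FROM blocked_issues_cache b
      WHERE b.issue_id = i.id
  )`
```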

Cache Architecture

The blocked_issues_cache table stores issue_id values for all issues that are currently blocked. An issue is blocked if:

  • It has a 'blocks' dependency on an open/in_progress/blocked issue (direct blocking)
  • It has a 'blocks' dependency on an external:* reference (cross-project blocking, bd-om4a)
  • It has a 'conditional-blocks' dependency where the blocker hasn't failed (bd-kzda)
  • It has a 'waits-for' dependency on a spawner with unclosed children (bd-xo1o.2)
  • Its parent is blocked and it's connected via 'parent-child' dependency (transitive blocking)

WaitsFor gates (bd-xo1o.2): B waits for spawner A's dynamically-spawned children. Gate types: "all-children" (default, blocked until ALL close) or "any-children" (blocked until ANY closes).

Conditional blocks (bd-kzda): B runs only if A fails. B is blocked until A is closed with a failure close reason (failed, rejected, wontfix, canceled, abandoned, etc.). If A succeeds (closed without failure), B stays blocked.
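The failure-reason rule can be sketched as a predicate. The reason set below is copied from the list above and may be incomplete (the doc says "etc."), and the "closed" status string is an assumption:

```go
package main

// failureReasons mirrors the failure close reasons listed above. The doc
// says "etc.", so the real set may be larger; treat this as a sketch.
var failureReasons = map[string]bool{
	"failed":    true,
	"rejected":  true,
	"wontfix":   true,
	"canceled":  true,
	"abandoned": true,
}

// conditionalBlockUnblocks reports whether blocker A's state unblocks a
// 'conditional-blocks' dependent B: A must be closed, and closed with a
// failure reason. The "closed" status string is an assumption.
func conditionalBlockUnblocks(status, closeReason string) bool {
	return status == "closed" && failureReasons[closeReason]
}
```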

The cache is maintained automatically by invalidating and rebuilding whenever:

  • A 'blocks', 'conditional-blocks', 'waits-for', or 'parent-child' dependency is added or removed
  • Any issue's status changes (affects whether it blocks others)
  • An issue is closed (closed issues don't block others; conditional-blocks checks close_reason)

Related and discovered-from dependencies do NOT trigger cache invalidation since they don't affect blocking semantics.
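The invalidation rule amounts to a filter on dependency type; a minimal sketch, using the type names quoted in this doc:

```go
package main

// triggersInvalidation reports whether a change to a dependency of the
// given type should invalidate and rebuild the blocked cache, per the
// rules above.
func triggersInvalidation(depType string) bool {
	switch depType {
	case "blocks", "conditional-blocks", "waits-for", "parent-child":
		return true
	default:
		// e.g. "related", "discovered-from": no blocking semantics,
		// so no rebuild.
		return false
	}
}
```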

Cache Invalidation Strategy

On any triggering change, the entire cache is rebuilt from scratch (DELETE + INSERT). This full-rebuild approach is chosen because:

  • Rebuild is fast (<50ms even on 10K databases) due to optimized CTE logic
  • Simpler implementation than incremental updates
  • Dependency changes are rare compared to reads
  • Guarantees consistency - no risk of partial/stale updates

The rebuild happens within the same transaction as the triggering change, ensuring atomicity and consistency. The cache can never be in an inconsistent state visible to queries.

Transaction Safety

All cache operations support both transaction and direct database execution:

  • rebuildBlockedCache accepts optional *sql.Tx parameter
  • If tx != nil, uses transaction; otherwise uses direct db connection
  • Cache invalidation during CreateIssue/UpdateIssue/AddDependency happens in their tx
  • Ensures cache is always consistent with the database state

Performance Characteristics

Query performance (GetReadyWork):

  • Before cache: ~752ms (recursive CTE on 10K issues)
  • With cache: ~29ms (NOT EXISTS check)
  • Speedup: 25x

Write overhead:

  • Cache rebuild: <50ms (full DELETE + INSERT)
  • Only triggered on dependency/status changes (rare operations)
  • Trade-off: slower writes for much faster reads

Edge Cases Handled

1. Parent-child transitive blocking:

  • Children of blocked parents are automatically marked as blocked
  • Propagates through arbitrary depth hierarchies (limited to depth 50)

2. Multiple blockers:

  • Issue blocked by multiple open issues stays blocked until all are closed
  • DISTINCT in CTE ensures issue appears once in cache

3. Status changes:

  • Closing a blocker removes all blocked descendants from cache
  • Reopening a blocker adds them back

4. Dependency removal:

  • Removing last blocker unblocks the issue
  • Removing parent-child link unblocks orphaned subtree

5. Foreign key cascades:

  • Cache entries automatically deleted when issue is deleted (ON DELETE CASCADE)
  • No manual cleanup needed
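A plausible definition for the cache table, consistent with the cascade behavior above — the issues table name and its id column are assumptions, since this doc only guarantees an issue_id column cleaned up via ON DELETE CASCADE:

```go
package main

// cacheSchemaSQL is a sketch of how blocked_issues_cache could be declared
// so that cache rows vanish with their issue. The referenced issues(id)
// column is an assumption for illustration.
const cacheSchemaSQL = `
CREATE TABLE IF NOT EXISTS blocked_issues_cache (
    issue_id TEXT PRIMARY KEY
        REFERENCES issues(id) ON DELETE CASCADE
)`
```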

Future Optimizations

If rebuild becomes a bottleneck in very large databases (>100K issues):

  • Consider incremental updates for specific dependency types
  • Add indexes to dependencies table for CTE performance
  • Implement dirty tracking to avoid rebuilds when cache is unchanged

However, current performance is excellent for realistic workloads.

Package sqlite implements dependency management for the SQLite storage backend.

Package sqlite implements dirty issue tracking for incremental JSONL export.

Package sqlite provides external dependency resolution for cross-project blocking.

External dependencies use the format external:<project>:<capability>. They are satisfied when:

  • The project is configured in external_projects config
  • The project's beads database has a closed issue with provides:<capability> label

Resolution happens lazily at query time (GetReadyWork) rather than during cache rebuild, to keep cache rebuilds fast and avoid holding multiple DB connections.
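The reference format can be parsed with a simple split; this is a sketch of the documented format, not the package's actual parser, and the project/capability names used to exercise it are made up:

```go
package main

import "strings"

// parseExternalRef splits a reference of the documented form
// external:<project>:<capability>. Only the first two separators split,
// so a capability containing ':' stays intact.
func parseExternalRef(ref string) (project, capability string, ok bool) {
	parts := strings.SplitN(ref, ":", 3)
	if len(parts) != 3 || parts[0] != "external" || parts[1] == "" || parts[2] == "" {
		return "", "", false
	}
	return parts[1], parts[2], true
}
```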

Package sqlite implements the storage interface using SQLite.

Package sqlite - migration safety invariants

Package sqlite - database migrations

Package sqlite implements multi-repo hydration for the SQLite storage backend.

Package sqlite implements multi-repo export for the SQLite storage backend.

Package sqlite - schema compatibility probing

Package sqlite implements the storage interface using SQLite.

This package has been split into focused files for better maintainability:

Core storage components:

  • store.go: SQLiteStorage struct, New() constructor, initialization logic, and database utility methods (Close, Path, IsClosed, UnderlyingDB, etc.)
  • queries.go: Issue CRUD operations including CreateIssue, GetIssue, UpdateIssue, DeleteIssue, DeleteIssues, SearchIssues
  • config.go: Configuration and metadata management (SetConfig, GetConfig, SetMetadata, GetMetadata, OrphanHandling)
  • comments.go: Comment operations (AddIssueComment, GetIssueComments)

Supporting components:

  • schema.go: Database schema definitions
  • migrations.go: Schema migration logic
  • dependencies.go: Dependency management (AddDependency, RemoveDependency, etc.)
  • labels.go: Label operations
  • events.go: Event tracking
  • dirty.go: Dirty issue tracking for incremental exports
  • batch_ops.go: Batch operations for bulk imports
  • hash_ids.go: Hash-based ID generation
  • validators.go: Input validation functions
  • util.go: Utility functions

Historical notes (bd-0a43): Prior to this refactoring, sqlite.go was 1050+ lines containing all storage logic. The monolithic structure made it difficult to navigate and understand specific functionality. This split maintains all existing functionality while improving code organization and discoverability.

Index

Constants

View Source
const CustomStatusConfigKey = "status.custom"

CustomStatusConfigKey is the config key for custom status states

View Source
const CustomTypeConfigKey = "types.custom"

CustomTypeConfigKey is the config key for custom issue types

Variables

View Source
var (
	// ErrNotFound indicates the requested resource was not found in the database
	ErrNotFound = errors.New("not found")

	// ErrInvalidID indicates an ID format or validation error
	ErrInvalidID = errors.New("invalid ID")

	// ErrConflict indicates a unique constraint violation or conflicting state
	ErrConflict = errors.New("conflict")

	// ErrCycle indicates a dependency cycle would be created
	ErrCycle = errors.New("dependency cycle detected")
)

Sentinel errors for common database conditions

View Source
var ErrSchemaIncompatible = fmt.Errorf("database schema is incompatible")

ErrSchemaIncompatible is returned when the database schema is incompatible with the current version

Functions

func CheckExternalDeps

func CheckExternalDeps(ctx context.Context, refs []string) map[string]*ExternalDepStatus

CheckExternalDeps checks multiple external dependencies with batching optimization. Groups refs by project and opens each external DB only once, checking all capabilities for that project in a single query. This avoids O(N) DB opens when multiple issues depend on the same external project. Returns a map of ref -> status.

func CleanOrphanedRefs

func CleanOrphanedRefs(db *sql.DB) (deps int, labels int, err error)

CleanOrphanedRefs removes orphaned dependencies and labels that reference non-existent issues. This runs BEFORE migrations to prevent a chicken-and-egg problem:

  1. bd doctor --fix tries to open the database
  2. Opening triggers migrations with invariant checks
  3. An invariant check fails due to orphaned refs from a prior tombstone deletion
  4. The fix never runs because the database won't open

Returns counts of cleaned items for logging.

func EnsureIDs

func EnsureIDs(ctx context.Context, conn *sql.Conn, prefix string, issues []*types.Issue, actor string, orphanHandling OrphanHandling, skipPrefixValidation bool) error

EnsureIDs generates or validates IDs for issues. For issues with empty IDs, it generates unique hash-based IDs. For issues with existing IDs, it validates that they match the prefix and that the parent exists (if hierarchical). For hierarchical IDs with missing parents, behavior depends on the orphanHandling mode. When skipPrefixValidation is true, existing IDs are not validated against the prefix (used during import).

func GenerateBatchIssueIDs

func GenerateBatchIssueIDs(ctx context.Context, conn *sql.Conn, prefix string, issues []*types.Issue, actor string, usedIDs map[string]bool) error

GenerateBatchIssueIDs generates unique IDs for multiple issues in a single batch. Tracks used IDs to prevent intra-batch collisions.

func GenerateIssueID

func GenerateIssueID(ctx context.Context, conn *sql.Conn, prefix string, issue *types.Issue, actor string) (string, error)

GenerateIssueID generates a unique hash-based ID for an issue. Uses adaptive length based on database size and tries multiple nonces on collision.

func GetAdaptiveIDLength

func GetAdaptiveIDLength(ctx context.Context, conn *sql.Conn, prefix string) (int, error)

GetAdaptiveIDLength returns the appropriate hash length based on database size

func GetInvariantNames

func GetInvariantNames() []string

GetInvariantNames returns the names of all registered invariants (for testing/inspection)

func GetUnsatisfiedExternalDeps

func GetUnsatisfiedExternalDeps(ctx context.Context, refs []string) []string

GetUnsatisfiedExternalDeps returns external dependencies that are not satisfied.

func IsBusyError

func IsBusyError(err error) bool

IsBusyError checks if an error is a database busy/locked error

func IsConflict

func IsConflict(err error) bool

IsConflict checks if an error is or wraps ErrConflict

func IsCycle

func IsCycle(err error) bool

IsCycle checks if an error is or wraps ErrCycle

func IsForeignKeyConstraintError

func IsForeignKeyConstraintError(err error) bool

IsForeignKeyConstraintError checks if an error is a FOREIGN KEY constraint violation. This can occur when importing issues that reference deleted issues (e.g., after a merge).

func IsHierarchicalID

func IsHierarchicalID(id string) (isHierarchical bool, parentID string)

IsHierarchicalID checks if an issue ID is hierarchical (has a parent). Hierarchical IDs have the format {parentID}.{N} where N is a numeric child suffix. Returns true and the parent ID if hierarchical, false and empty string otherwise.

This correctly handles prefixes that contain dots (e.g., "my.project-abc123" is NOT hierarchical, but "my.project-abc123.1" IS hierarchical with parent "my.project-abc123").

The key insight is that hierarchical IDs always end with .{digits} where the digits represent the child number (1, 2, 3, etc.).
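That rule can be restated as a short sketch (the package exports IsHierarchicalID for real use):

```go
package main

import "strings"

// isHierarchical restates the documented rule: an ID is hierarchical iff
// it ends with ".<digits>"; the parent is everything before that final
// dot, so dots in the prefix itself don't count.
func isHierarchical(id string) (bool, string) {
	i := strings.LastIndex(id, ".")
	if i <= 0 || i == len(id)-1 {
		return false, ""
	}
	for _, r := range id[i+1:] {
		if r < '0' || r > '9' {
			return false, ""
		}
	}
	return true, id[:i]
}
```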

func IsNotFound

func IsNotFound(err error) bool

IsNotFound checks if an error is or wraps ErrNotFound

func IsUniqueConstraintError

func IsUniqueConstraintError(err error) bool

IsUniqueConstraintError checks if an error is a UNIQUE constraint violation

func ParseHierarchicalID

func ParseHierarchicalID(id string) (parentID string, childNum int, ok bool)

ParseHierarchicalID extracts the parent ID and child number from a hierarchical ID. Returns (parentID, childNum, true) for hierarchical IDs like "bd-abc.1" -> ("bd-abc", 1, true). Returns ("", 0, false) for non-hierarchical IDs. (GH#728 fix)

func RunMigrations

func RunMigrations(db *sql.DB) error

RunMigrations executes all registered migrations in order with invariant checking. Uses EXCLUSIVE transaction to prevent race conditions when multiple processes open the database simultaneously (GH#720).

func ValidateIssueIDPrefix

func ValidateIssueIDPrefix(id, prefix string) error

ValidateIssueIDPrefix validates that an issue ID matches the configured prefix. Supports both top-level (bd-a3f8e9) and hierarchical (bd-a3f8e9.1) IDs.

Types

type AdaptiveIDConfig

type AdaptiveIDConfig struct {
	// MaxCollisionProbability is the threshold at which we scale up ID length (e.g., 0.25 = 25%)
	MaxCollisionProbability float64

	// MinLength is the minimum hash length to use (default 3)
	MinLength int

	// MaxLength is the maximum hash length to use (default 8)
	MaxLength int
}

AdaptiveIDConfig holds configuration for adaptive ID length scaling

func DefaultAdaptiveConfig

func DefaultAdaptiveConfig() AdaptiveIDConfig

DefaultAdaptiveConfig returns sensible defaults for base36 encoding. With base36 (0-9, a-z), we can use shorter IDs than hex:

3 chars: ~46K namespace, good for up to ~160 issues (25% collision prob)
4 chars: ~1.7M namespace, good for up to ~980 issues
5 chars: ~60M namespace, good for up to ~5.9K issues
6 chars: ~2.2B namespace, good for up to ~35K issues
7 chars: ~78B namespace, good for up to ~212K issues
8 chars: ~2.8T namespace, good for up to ~1M+ issues
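The per-length thresholds above follow a standard birthday-bound estimate; a sketch under that assumption (the package's exact scaling formula is not shown here):

```go
package main

import "math"

// collisionProbability approximates, via the birthday bound, the chance
// that n random IDs drawn from a namespace of the given size include at
// least one collision: p ≈ 1 - exp(-n(n-1)/(2·size)). This reproduces the
// scaling rule behind the table above.
func collisionProbability(n, size float64) float64 {
	return 1 - math.Exp(-n*(n-1)/(2*size))
}

// base36Namespace returns 36^length, the ID space for a given hash length.
func base36Namespace(length int) float64 {
	return math.Pow(36, float64(length))
}
```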

type BatchCreateOptions

type BatchCreateOptions struct {
	OrphanHandling       OrphanHandling // How to handle missing parent issues
	SkipPrefixValidation bool           // Skip prefix validation for existing IDs (used during import)
}

BatchCreateOptions contains options for batch issue creation

type CollisionDetail

type CollisionDetail struct {
	ID                string       // The issue ID that collided
	IncomingIssue     *types.Issue // The issue from the import file
	ExistingIssue     *types.Issue // The issue currently in the database
	ConflictingFields []string     // List of field names that differ
	RemapIncoming     bool         // If true, remap incoming; if false, remap existing
}

CollisionDetail provides detailed information about a collision

type CollisionResult

type CollisionResult struct {
	ExactMatches []string           // IDs that match exactly (idempotent import)
	Collisions   []*CollisionDetail // Issues with same ID but different content
	NewIssues    []string           // IDs that don't exist in DB yet
	Renames      []*RenameDetail    // Issues with same content but different ID (renames)
}

CollisionResult categorizes incoming issues by their relationship to existing DB state

func DetectCollisions

func DetectCollisions(ctx context.Context, s *SQLiteStorage, incomingIssues []*types.Issue) (*CollisionResult, error)

DetectCollisions compares incoming JSONL issues against DB state It distinguishes between:

  1. Exact match (idempotent) - ID and content are identical
  2. ID match but different content (collision/update) - same ID, different fields
  3. New issue - ID doesn't exist in DB
  4. External ref match - Different ID but same external_ref (update from external system)

When an incoming issue has an external_ref, we match by external_ref first, then by ID. This enables re-syncing from external systems (Jira, GitHub, Linear).

Returns a CollisionResult categorizing all incoming issues.

type CompactionCandidate

type CompactionCandidate struct {
	IssueID        string
	ClosedAt       time.Time
	OriginalSize   int
	EstimatedSize  int
	DependentCount int
}

CompactionCandidate represents an issue eligible for compaction

type DeleteIssuesResult

type DeleteIssuesResult struct {
	DeletedCount      int
	DependenciesCount int
	LabelsCount       int
	EventsCount       int
	OrphanedIssues    []string
}

DeleteIssuesResult contains statistics about a batch deletion operation

type ExternalDepStatus

type ExternalDepStatus struct {
	Ref        string // The full external reference (external:project:capability)
	Project    string // Parsed project name
	Capability string // Parsed capability name
	Satisfied  bool   // Whether the dependency is satisfied
	Reason     string // Human-readable reason if not satisfied
}

ExternalDepStatus represents whether an external dependency is satisfied

func CheckExternalDep

func CheckExternalDep(ctx context.Context, ref string) *ExternalDepStatus

CheckExternalDep checks if a single external dependency is satisfied. Returns status information about the dependency.

type FreshnessChecker

type FreshnessChecker struct {
	// contains filtered or unexported fields
}

FreshnessChecker monitors the database file for external modifications. It detects when the database file has been replaced (e.g., by git merge) and triggers a reconnection to ensure fresh data is visible.

This addresses the issue where the daemon's long-lived SQLite connection becomes stale after external file replacement (not just in-place writes).

func NewFreshnessChecker

func NewFreshnessChecker(dbPath string, onStale func() error) *FreshnessChecker

NewFreshnessChecker creates a new freshness checker for the given database path. The onStale callback is called when file replacement is detected.

func (*FreshnessChecker) Check

func (fc *FreshnessChecker) Check() bool

Check examines the database file for changes and triggers reconnection if needed. Returns true if the file was replaced and reconnection was triggered. This method is safe for concurrent use.

func (*FreshnessChecker) DebugState

func (fc *FreshnessChecker) DebugState() (inode uint64, mtime time.Time, size int64)

DebugState returns the current tracked state for testing/debugging.

func (*FreshnessChecker) Disable

func (fc *FreshnessChecker) Disable()

Disable disables freshness checking.

func (*FreshnessChecker) Enable

func (fc *FreshnessChecker) Enable()

Enable enables freshness checking.

func (*FreshnessChecker) IsEnabled

func (fc *FreshnessChecker) IsEnabled() bool

IsEnabled returns whether freshness checking is enabled.

func (*FreshnessChecker) UpdateState

func (fc *FreshnessChecker) UpdateState()

UpdateState updates the tracked file state after a successful reconnection. Call this after reopening the database to establish a new baseline.

type Migration

type Migration struct {
	Name string
	Func func(*sql.DB) error
}

Migration represents a single database migration

type MigrationInfo

type MigrationInfo struct {
	Name        string `json:"name"`
	Description string `json:"description"`
}

MigrationInfo contains metadata about a migration for inspection

func ListMigrations

func ListMigrations() []MigrationInfo

ListMigrations returns a list of all registered migrations with descriptions. Note: this returns ALL registered migrations, not just pending ones (all are idempotent).

type MigrationInvariant

type MigrationInvariant struct {
	Name        string
	Description string
	Check       func(*sql.DB, *Snapshot) error
}

MigrationInvariant represents a database invariant that must hold after migrations

type OrphanHandling

type OrphanHandling string

OrphanHandling defines how to handle orphan issues during import

const (
	OrphanStrict    OrphanHandling = "strict"    // Reject imports with orphans
	OrphanResurrect OrphanHandling = "resurrect" // Auto-resurrect parents from JSONL
	OrphanSkip      OrphanHandling = "skip"      // Skip orphans silently
	OrphanAllow     OrphanHandling = "allow"     // Allow orphans (default)
)

type RenameDetail

type RenameDetail struct {
	OldID string       // ID in database (to be deleted)
	NewID string       // ID in incoming (to be created)
	Issue *types.Issue // The issue with new ID
}

RenameDetail captures a rename/remap detected during collision detection

type SQLiteStorage

type SQLiteStorage struct {
	// contains filtered or unexported fields
}

SQLiteStorage implements the Storage interface using SQLite

func New

func New(ctx context.Context, path string) (*SQLiteStorage, error)

New creates a new SQLite storage backend with default 30s busy timeout

func NewReadOnly

func NewReadOnly(ctx context.Context, path string) (*SQLiteStorage, error)

NewReadOnly opens an existing database in read-only mode. This prevents any modification to the database file, including:

  • WAL journal mode changes
  • Schema/migration updates
  • WAL checkpointing on close

Use this for read-only commands (list, ready, show, stats, etc.) to avoid triggering file watchers. See GH#804.

Returns an error if the database doesn't exist (unlike New which creates it).

func NewReadOnlyWithTimeout

func NewReadOnlyWithTimeout(ctx context.Context, path string, busyTimeout time.Duration) (*SQLiteStorage, error)

NewReadOnlyWithTimeout opens an existing database in read-only mode with configurable timeout.

func NewWithTimeout

func NewWithTimeout(ctx context.Context, path string, busyTimeout time.Duration) (*SQLiteStorage, error)

NewWithTimeout creates a new SQLite storage backend with configurable busy timeout. A timeout of 0 means fail immediately if the database is locked.

func (*SQLiteStorage) AddComment

func (s *SQLiteStorage) AddComment(ctx context.Context, issueID, actor, comment string) error

AddComment adds a comment to an issue

func (*SQLiteStorage) AddDependency

func (s *SQLiteStorage) AddDependency(ctx context.Context, dep *types.Dependency, actor string) error

AddDependency adds a dependency between issues with cycle prevention

func (*SQLiteStorage) AddIssueComment

func (s *SQLiteStorage) AddIssueComment(ctx context.Context, issueID, author, text string) (*types.Comment, error)

AddIssueComment adds a comment to an issue

func (*SQLiteStorage) AddLabel

func (s *SQLiteStorage) AddLabel(ctx context.Context, issueID, label, actor string) error

AddLabel adds a label to an issue

func (*SQLiteStorage) ApplyCompaction

func (s *SQLiteStorage) ApplyCompaction(ctx context.Context, issueID string, level int, originalSize int, compressedSize int, commitHash string) error

ApplyCompaction updates the compaction metadata for an issue after successfully compacting it. This sets compaction_level, compacted_at, compacted_at_commit, and original_size fields.

func (*SQLiteStorage) BeginTx

func (s *SQLiteStorage) BeginTx(ctx context.Context) (*sql.Tx, error)

BeginTx starts a new database transaction. This is used by commands that need to perform multiple operations atomically.

func (*SQLiteStorage) CheckEligibility

func (s *SQLiteStorage) CheckEligibility(ctx context.Context, issueID string, tier int) (bool, string, error)

CheckEligibility checks if a specific issue is eligible for compaction at the given tier. Returns (eligible, reason, error). If not eligible, reason explains why.

func (*SQLiteStorage) CheckpointWAL

func (s *SQLiteStorage) CheckpointWAL(ctx context.Context) error

CheckpointWAL checkpoints the WAL file to flush changes to the main database file. In WAL mode, writes go to the -wal file, leaving the main .db file untouched. Checkpointing:

  • Ensures data persistence by flushing the WAL to the main database
  • Reduces WAL file size
  • Makes the database safe for backup/copy operations

func (*SQLiteStorage) ClearAllExportHashes

func (s *SQLiteStorage) ClearAllExportHashes(ctx context.Context) error

ClearAllExportHashes removes all export hashes from the database. This is primarily used for test isolation to force re-export of issues.

func (*SQLiteStorage) ClearDirtyIssuesByID

func (s *SQLiteStorage) ClearDirtyIssuesByID(ctx context.Context, issueIDs []string) error

ClearDirtyIssuesByID removes specific issue IDs from the dirty_issues table. This avoids race conditions by only clearing issues that were actually exported.

func (*SQLiteStorage) ClearRepoMtime

func (s *SQLiteStorage) ClearRepoMtime(ctx context.Context, repoPath string) error

ClearRepoMtime removes the mtime cache entry for a repository. This is used when a repo is removed from the multi-repo configuration.

func (*SQLiteStorage) Close

func (s *SQLiteStorage) Close() error

Close closes the database connection. For read-write connections, it checkpoints the WAL to ensure all writes are flushed to the main database file. For read-only connections (GH#804), it skips checkpointing to avoid file modifications.

func (*SQLiteStorage) CloseIssue

func (s *SQLiteStorage) CloseIssue(ctx context.Context, id string, reason string, actor string, session string) error

CloseIssue closes an issue with a reason. The session parameter tracks which Claude Code session closed the issue (can be empty).

func (*SQLiteStorage) CreateIssue

func (s *SQLiteStorage) CreateIssue(ctx context.Context, issue *types.Issue, actor string) error

CreateIssue creates a new issue

func (*SQLiteStorage) CreateIssues

func (s *SQLiteStorage) CreateIssues(ctx context.Context, issues []*types.Issue, actor string) error

CreateIssues creates multiple issues atomically in a single transaction. This method is optimized for bulk issue creation and provides significant performance improvements over calling CreateIssue in a loop:

  • Single database connection and transaction
  • Atomic ID range reservation (one counter update for N IDs)
  • All-or-nothing semantics (rolls back on any error)
  • 5-15x faster than sequential CreateIssue calls

All issues are validated before any database changes occur. If any issue fails validation, the entire batch is rejected.

ID Assignment:

  • Issues with empty ID get auto-generated IDs from a reserved range
  • Issues with explicit IDs use those IDs (caller must ensure uniqueness)
  • Mix of explicit and auto-generated IDs is supported

Timestamps:

  • All issues in the batch receive identical created_at/updated_at timestamps
  • This reflects that they were created as a single atomic operation

Usage:

// Bulk import from external source
issues := []*types.Issue{...}
if err := store.CreateIssues(ctx, issues, "import"); err != nil {
    return err
}


Performance:

  • 100 issues: ~30ms (vs ~900ms with CreateIssue loop)
  • 1000 issues: ~950ms (vs estimated 9s with CreateIssue loop)

When to use:

  • Bulk imports from external systems (use CreateIssues)
  • Creating multiple related issues at once (use CreateIssues)
  • Single issue creation (use CreateIssue for simplicity)
  • Interactive user operations (use CreateIssue)

func (*SQLiteStorage) CreateIssuesWithFullOptions

func (s *SQLiteStorage) CreateIssuesWithFullOptions(ctx context.Context, issues []*types.Issue, actor string, opts BatchCreateOptions) error

CreateIssuesWithFullOptions creates multiple issues with full options control

func (*SQLiteStorage) CreateIssuesWithOptions

func (s *SQLiteStorage) CreateIssuesWithOptions(ctx context.Context, issues []*types.Issue, actor string, orphanHandling OrphanHandling) error

CreateIssuesWithOptions creates multiple issues with configurable orphan handling

func (*SQLiteStorage) CreateTombstone

func (s *SQLiteStorage) CreateTombstone(ctx context.Context, id string, actor string, reason string) error

CreateTombstone converts an existing issue to a tombstone record. This is a soft-delete that preserves the issue in the database with status="tombstone". The issue will still appear in exports but be excluded from normal queries. Dependencies must be removed separately before calling this method.

func (*SQLiteStorage) DeleteConfig

func (s *SQLiteStorage) DeleteConfig(ctx context.Context, key string) error

DeleteConfig deletes a configuration value

func (*SQLiteStorage) DeleteIssue

func (s *SQLiteStorage) DeleteIssue(ctx context.Context, id string) error

DeleteIssue permanently removes an issue from the database

func (*SQLiteStorage) DeleteIssues

func (s *SQLiteStorage) DeleteIssues(ctx context.Context, ids []string, cascade bool, force bool, dryRun bool) (*DeleteIssuesResult, error)

DeleteIssues deletes multiple issues in a single transaction. If cascade is true, it recursively deletes dependents. If cascade is false but force is true, it deletes the issues and orphans their dependents. If cascade and force are both false, it returns an error if any issue has dependents. If dryRun is true, it only computes statistics without deleting.

func (*SQLiteStorage) DeleteIssuesBySourceRepo

func (s *SQLiteStorage) DeleteIssuesBySourceRepo(ctx context.Context, sourceRepo string) (int, error)

DeleteIssuesBySourceRepo permanently removes all issues from a specific source repository. This is used when a repo is removed from the multi-repo configuration. It also cleans up related data: dependencies, labels, comments, events, and dirty markers. Returns the number of issues deleted.

func (*SQLiteStorage) DetectCycles

func (s *SQLiteStorage) DetectCycles(ctx context.Context) ([][]*types.Issue, error)

DetectCycles finds circular dependencies and returns the actual cycle paths. Uses O(V+E) DFS with shared visited set instead of O(2^n) SQL path enumeration. Note: relates-to dependencies are excluded because they are intentionally bidirectional ("see also" relationships) and do not represent problematic cycles.

func (*SQLiteStorage) DisableFreshnessChecking

func (s *SQLiteStorage) DisableFreshnessChecking()

DisableFreshnessChecking disables external modification detection.

func (*SQLiteStorage) EnableFreshnessChecking

func (s *SQLiteStorage) EnableFreshnessChecking()

EnableFreshnessChecking enables detection of external database file modifications. This is used by the daemon to detect when the database file has been replaced (e.g., by git merge) and automatically reconnect.

When enabled, read operations will check if the database file has been replaced and trigger a reconnection if necessary. This adds minimal overhead (~1ms per check) but ensures the daemon always sees the latest data.

func (*SQLiteStorage) ExecInTransaction

func (s *SQLiteStorage) ExecInTransaction(ctx context.Context, fn func(*sql.Tx) error) error

ExecInTransaction is deprecated. Use withTx instead.

func (*SQLiteStorage) ExportToMultiRepo

func (s *SQLiteStorage) ExportToMultiRepo(ctx context.Context) (map[string]int, error)

ExportToMultiRepo writes issues to their respective JSONL files based on source_repo. Issues are grouped by source_repo and written atomically to each repository. Returns a map of repo path -> exported issue count. Returns nil with no error if not in multi-repo mode (backward compatibility).

func (*SQLiteStorage) GetAllConfig

func (s *SQLiteStorage) GetAllConfig(ctx context.Context) (map[string]string, error)

GetAllConfig gets all configuration key-value pairs

func (*SQLiteStorage) GetAllDependencyRecords

func (s *SQLiteStorage) GetAllDependencyRecords(ctx context.Context) (map[string][]*types.Dependency, error)

GetAllDependencyRecords returns all dependency records grouped by issue ID. This is optimized for bulk export operations to avoid N+1 queries.

func (*SQLiteStorage) GetBlockedIssueIDs

func (s *SQLiteStorage) GetBlockedIssueIDs(ctx context.Context) ([]string, error)

GetBlockedIssueIDs returns all issue IDs currently in the blocked cache

func (*SQLiteStorage) GetBlockedIssues

func (s *SQLiteStorage) GetBlockedIssues(ctx context.Context, filter types.WorkFilter) ([]*types.BlockedIssue, error)

GetBlockedIssues returns issues that are blocked by dependencies or have status=blocked. Pinned issues are excluded from the output. External: references are included in the blocked_by list.

func (*SQLiteStorage) GetCloseReason

func (s *SQLiteStorage) GetCloseReason(ctx context.Context, issueID string) (string, error)

GetCloseReason retrieves the close reason from the most recent closed event for an issue

func (*SQLiteStorage) GetCloseReasonsForIssues

func (s *SQLiteStorage) GetCloseReasonsForIssues(ctx context.Context, issueIDs []string) (map[string]string, error)

GetCloseReasonsForIssues retrieves close reasons for multiple issues in a single query

func (*SQLiteStorage) GetCommentsForIssues

func (s *SQLiteStorage) GetCommentsForIssues(ctx context.Context, issueIDs []string) (map[string][]*types.Comment, error)

GetCommentsForIssues fetches comments for multiple issues in a single query. Returns a map of issue_id -> []*Comment.

func (*SQLiteStorage) GetConfig

func (s *SQLiteStorage) GetConfig(ctx context.Context, key string) (string, error)

GetConfig gets a configuration value

func (*SQLiteStorage) GetCustomStatuses

func (s *SQLiteStorage) GetCustomStatuses(ctx context.Context) ([]string, error)

GetCustomStatuses retrieves the list of custom status states from config. Custom statuses are stored as comma-separated values in the "status.custom" config key. Returns an empty slice if no custom statuses are configured.
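The comma-separated parsing can be sketched as below. parseCustomList is a hypothetical helper (the real method also reads the value from the config table first); it returns an empty, non-nil slice when the key is unset, matching the documented behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCustomList splits a comma-separated config value such as
// "status.custom" or "types.custom" into a trimmed slice, returning an
// empty slice when the value is blank.
func parseCustomList(value string) []string {
	if strings.TrimSpace(value) == "" {
		return []string{}
	}
	parts := strings.Split(value, ",")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		if p = strings.TrimSpace(p); p != "" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(parseCustomList("triage, needs-review,deferred")) // [triage needs-review deferred]
	fmt.Println(len(parseCustomList("")))                         // 0
}
```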

func (*SQLiteStorage) GetCustomTypes

func (s *SQLiteStorage) GetCustomTypes(ctx context.Context) ([]string, error)

GetCustomTypes retrieves the list of custom issue types from config. Custom types are stored as comma-separated values in the "types.custom" config key. Returns an empty slice if no custom types are configured.

func (*SQLiteStorage) GetDependencies

func (s *SQLiteStorage) GetDependencies(ctx context.Context, issueID string) ([]*types.Issue, error)

GetDependencies returns issues that this issue depends on

func (*SQLiteStorage) GetDependenciesWithMetadata

func (s *SQLiteStorage) GetDependenciesWithMetadata(ctx context.Context, issueID string) ([]*types.IssueWithDependencyMetadata, error)

GetDependenciesWithMetadata returns issues that this issue depends on, including dependency type

func (*SQLiteStorage) GetDependencyCounts

func (s *SQLiteStorage) GetDependencyCounts(ctx context.Context, issueIDs []string) (map[string]*types.DependencyCounts, error)

GetDependencyCounts returns dependency and dependent counts for multiple issues in a single query

func (*SQLiteStorage) GetDependencyRecords

func (s *SQLiteStorage) GetDependencyRecords(ctx context.Context, issueID string) ([]*types.Dependency, error)

GetDependencyRecords returns raw dependency records for an issue

func (*SQLiteStorage) GetDependencyTree

func (s *SQLiteStorage) GetDependencyTree(ctx context.Context, issueID string, maxDepth int, showAllPaths bool, reverse bool) ([]*types.TreeNode, error)

GetDependencyTree returns the full dependency tree with optional deduplication. When showAllPaths is false (the default), nodes appearing via multiple paths (diamond dependencies) appear only once, at their shallowest depth in the tree. When showAllPaths is true, all paths are shown, with duplicate nodes at different depths. When reverse is true, shows the dependent tree (what was discovered from this) instead of the dependency tree (what blocks this).
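The showAllPaths=false behavior amounts to a breadth-first walk that keeps only the first (shallowest) occurrence of each node. A sketch under assumptions — the real method works against SQL, and treeNode here is a stand-in for types.TreeNode:

```go
package main

import "fmt"

type treeNode struct {
	ID    string
	Depth int
}

// flattenShallowest walks the graph breadth-first, so a node reachable
// via multiple paths (a diamond) is emitted once, at its shallowest depth.
func flattenShallowest(graph map[string][]string, root string, maxDepth int) []treeNode {
	seen := map[string]bool{root: true}
	queue := []treeNode{{root, 0}}
	var out []treeNode
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		out = append(out, n)
		if n.Depth == maxDepth {
			continue
		}
		for _, dep := range graph[n.ID] {
			if !seen[dep] {
				seen[dep] = true
				queue = append(queue, treeNode{dep, n.Depth + 1})
			}
		}
	}
	return out
}

func main() {
	// Diamond: root depends on a and b, both of which depend on shared.
	graph := map[string][]string{
		"root":   {"a", "b"},
		"a":      {"shared"},
		"b":      {"shared"},
		"shared": nil,
	}
	for _, n := range flattenShallowest(graph, "root", 10) {
		fmt.Println(n.Depth, n.ID) // shared appears once, at depth 2
	}
}
```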

func (*SQLiteStorage) GetDependents

func (s *SQLiteStorage) GetDependents(ctx context.Context, issueID string) ([]*types.Issue, error)

GetDependents returns issues that depend on this issue

func (*SQLiteStorage) GetDependentsWithMetadata

func (s *SQLiteStorage) GetDependentsWithMetadata(ctx context.Context, issueID string) ([]*types.IssueWithDependencyMetadata, error)

GetDependentsWithMetadata returns issues that depend on this issue, including dependency type

func (*SQLiteStorage) GetDirtyIssueCount

func (s *SQLiteStorage) GetDirtyIssueCount(ctx context.Context) (int, error)

GetDirtyIssueCount returns the count of dirty issues (for monitoring/debugging)

func (*SQLiteStorage) GetDirtyIssueHash

func (s *SQLiteStorage) GetDirtyIssueHash(ctx context.Context, issueID string) (string, error)

GetDirtyIssueHash returns the stored content hash for a dirty issue, if it exists

func (*SQLiteStorage) GetDirtyIssues

func (s *SQLiteStorage) GetDirtyIssues(ctx context.Context) ([]string, error)

GetDirtyIssues returns the list of issue IDs that need to be exported

func (*SQLiteStorage) GetEpicsEligibleForClosure

func (s *SQLiteStorage) GetEpicsEligibleForClosure(ctx context.Context) ([]*types.EpicStatus, error)

GetEpicsEligibleForClosure returns all epics with their completion status

func (*SQLiteStorage) GetEvents

func (s *SQLiteStorage) GetEvents(ctx context.Context, issueID string, limit int) ([]*types.Event, error)

GetEvents returns the event history for an issue

func (*SQLiteStorage) GetExportHash

func (s *SQLiteStorage) GetExportHash(ctx context.Context, issueID string) (string, error)

GetExportHash retrieves the content hash of the last export for an issue. Returns empty string if no hash is stored (first export).

func (*SQLiteStorage) GetIssue

func (s *SQLiteStorage) GetIssue(ctx context.Context, id string) (*types.Issue, error)

GetIssue retrieves an issue by ID

func (*SQLiteStorage) GetIssueByExternalRef

func (s *SQLiteStorage) GetIssueByExternalRef(ctx context.Context, externalRef string) (*types.Issue, error)

GetIssueByExternalRef retrieves an issue by external reference

func (*SQLiteStorage) GetIssueComments

func (s *SQLiteStorage) GetIssueComments(ctx context.Context, issueID string) ([]*types.Comment, error)

GetIssueComments retrieves all comments for an issue

func (*SQLiteStorage) GetIssuesByLabel

func (s *SQLiteStorage) GetIssuesByLabel(ctx context.Context, label string) ([]*types.Issue, error)

GetIssuesByLabel returns issues with a specific label

func (*SQLiteStorage) GetJSONLFileHash

func (s *SQLiteStorage) GetJSONLFileHash(ctx context.Context) (string, error)

GetJSONLFileHash retrieves the stored hash of the JSONL file. Returns empty string if no hash is stored (bd-160).

func (*SQLiteStorage) GetLabels

func (s *SQLiteStorage) GetLabels(ctx context.Context, issueID string) ([]string, error)

GetLabels returns all labels for an issue

func (*SQLiteStorage) GetLabelsForIssues

func (s *SQLiteStorage) GetLabelsForIssues(ctx context.Context, issueIDs []string) (map[string][]string, error)

GetLabelsForIssues fetches labels for multiple issues in a single query. Returns a map of issue_id -> []labels.

func (*SQLiteStorage) GetMetadata

func (s *SQLiteStorage) GetMetadata(ctx context.Context, key string) (string, error)

GetMetadata gets a metadata value (for internal state like import hashes)

func (*SQLiteStorage) GetMoleculeProgress

func (s *SQLiteStorage) GetMoleculeProgress(ctx context.Context, moleculeID string) (*types.MoleculeProgressStats, error)

GetMoleculeProgress returns efficient progress stats for a molecule. Uses indexed queries on dependencies table instead of loading all steps.

func (*SQLiteStorage) GetNewlyUnblockedByClose

func (s *SQLiteStorage) GetNewlyUnblockedByClose(ctx context.Context, closedIssueID string) ([]*types.Issue, error)

GetNewlyUnblockedByClose returns issues that became unblocked when the given issue was closed. This is used by the --suggest-next flag on bd close to show what work is now available. An issue is "newly unblocked" if:

  • It had a 'blocks' dependency on the closed issue
  • It is now unblocked (not in blocked_issues_cache)
  • It has status open or in_progress (ready to work on)

The cache is already rebuilt by CloseIssue before this is called, so we just need to find dependents that are no longer blocked.

func (*SQLiteStorage) GetNextChildID

func (s *SQLiteStorage) GetNextChildID(ctx context.Context, parentID string) (string, error)

GetNextChildID generates the next hierarchical child ID for a given parent. Returns a formatted ID of the form parentID.{counter} (e.g., bd-a3f8e9.1 or bd-a3f8e9.1.5). Works at any depth (max 3 levels).
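The ID shape can be illustrated with a pure-function sketch. nextChildID is hypothetical — the real method reads the counter from the database rather than scanning existing IDs:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextChildID computes the next parentID.{counter} ID given the parent's
// existing children, skipping grandchildren (IDs with a further dot).
func nextChildID(parentID string, existing []string) string {
	max := 0
	for _, id := range existing {
		suffix, ok := strings.CutPrefix(id, parentID+".")
		if !ok || strings.Contains(suffix, ".") {
			continue // not a direct child of parentID
		}
		if n, err := strconv.Atoi(suffix); err == nil && n > max {
			max = n
		}
	}
	return fmt.Sprintf("%s.%d", parentID, max+1)
}

func main() {
	children := []string{"bd-a3f8e9.1", "bd-a3f8e9.3", "bd-a3f8e9.1.5"}
	fmt.Println(nextChildID("bd-a3f8e9", children)) // bd-a3f8e9.4
}
```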

func (*SQLiteStorage) GetOrphanHandling

func (s *SQLiteStorage) GetOrphanHandling(ctx context.Context) OrphanHandling

GetOrphanHandling gets the import.orphan_handling config value. Returns OrphanAllow (the default) if the value is not set or is invalid.

func (*SQLiteStorage) GetReadyWork

func (s *SQLiteStorage) GetReadyWork(ctx context.Context, filter types.WorkFilter) ([]*types.Issue, error)

GetReadyWork returns issues with no open blockers. By default, shows both 'open' and 'in_progress' issues so epics/tasks ready to close are visible. Excludes pinned issues, which are persistent anchors, not actionable work.

func (*SQLiteStorage) GetStaleIssues

func (s *SQLiteStorage) GetStaleIssues(ctx context.Context, filter types.StaleFilter) ([]*types.Issue, error)

GetStaleIssues returns issues that haven't been updated recently

func (*SQLiteStorage) GetStatistics

func (s *SQLiteStorage) GetStatistics(ctx context.Context) (*types.Statistics, error)

GetStatistics returns aggregate statistics

func (*SQLiteStorage) GetTier1Candidates

func (s *SQLiteStorage) GetTier1Candidates(ctx context.Context) ([]*CompactionCandidate, error)

GetTier1Candidates returns issues eligible for Tier 1 compaction. Criteria:

  • Status = closed
  • Closed for at least compact_tier1_days
  • No open dependents within compact_tier1_dep_levels depth
  • Not already compacted (compaction_level = 0)

func (*SQLiteStorage) GetTier2Candidates

func (s *SQLiteStorage) GetTier2Candidates(ctx context.Context) ([]*CompactionCandidate, error)

GetTier2Candidates returns issues eligible for Tier 2 compaction. Criteria:

  • Status = closed
  • Closed for at least compact_tier2_days
  • No open dependents within compact_tier2_dep_levels depth
  • Already at compaction_level = 1
  • Either has many commits (compact_tier2_commits) or many dependent issues

func (*SQLiteStorage) HydrateFromMultiRepo

func (s *SQLiteStorage) HydrateFromMultiRepo(ctx context.Context) (map[string]int, error)

HydrateFromMultiRepo loads issues from all configured repositories into the database. Uses mtime caching to skip unchanged JSONL files for performance. Returns the number of issues imported from each repo.

func (*SQLiteStorage) ImportIssueComment

func (s *SQLiteStorage) ImportIssueComment(ctx context.Context, issueID, author, text string, createdAt string) (*types.Comment, error)

ImportIssueComment adds a comment during import, preserving the original timestamp. Unlike AddIssueComment which uses CURRENT_TIMESTAMP, this method uses the provided createdAt time from the JSONL file. This prevents timestamp drift during sync cycles. GH#735: Comment created_at timestamps were being overwritten with current time during import.

func (*SQLiteStorage) IsBlocked

func (s *SQLiteStorage) IsBlocked(ctx context.Context, issueID string) (bool, []string, error)

IsBlocked checks if an issue is blocked by open dependencies (GH#962). Returns true if the issue is in the blocked_issues_cache, along with a list of issue IDs that are blocking it. This is used to prevent closing issues that still have open blockers.

func (*SQLiteStorage) IsClosed

func (s *SQLiteStorage) IsClosed() bool

IsClosed returns true if Close() has been called on this storage

func (*SQLiteStorage) MarkIssueDirty

func (s *SQLiteStorage) MarkIssueDirty(ctx context.Context, issueID string) error

MarkIssueDirty marks an issue as dirty (needs to be exported to JSONL). This should be called whenever an issue is created, updated, or has dependencies changed.

func (*SQLiteStorage) MarkIssuesDirty

func (s *SQLiteStorage) MarkIssuesDirty(ctx context.Context, issueIDs []string) error

MarkIssuesDirty marks multiple issues as dirty in a single transaction. More efficient when marking multiple issues (e.g., both sides of a dependency).

func (*SQLiteStorage) Path

func (s *SQLiteStorage) Path() string

Path returns the absolute path to the database file

func (*SQLiteStorage) QueryContext

func (s *SQLiteStorage) QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)

QueryContext exposes the underlying database QueryContext method for advanced queries

func (*SQLiteStorage) RemoveDependency

func (s *SQLiteStorage) RemoveDependency(ctx context.Context, issueID, dependsOnID string, actor string) error

RemoveDependency removes a dependency

func (*SQLiteStorage) RemoveLabel

func (s *SQLiteStorage) RemoveLabel(ctx context.Context, issueID, label, actor string) error

RemoveLabel removes a label from an issue

func (*SQLiteStorage) RenameCounterPrefix

func (s *SQLiteStorage) RenameCounterPrefix(ctx context.Context, oldPrefix, newPrefix string) error

RenameCounterPrefix is a no-op with hash-based IDs. Kept for backward compatibility with the rename-prefix command.

func (*SQLiteStorage) RenameDependencyPrefix

func (s *SQLiteStorage) RenameDependencyPrefix(ctx context.Context, oldPrefix, newPrefix string) error

RenameDependencyPrefix updates the prefix in all dependency records. GH#630: This was previously a no-op, causing dependencies to break after rename-prefix.

func (*SQLiteStorage) ResetCounter

func (s *SQLiteStorage) ResetCounter(ctx context.Context, prefix string) error

ResetCounter is a no-op with hash-based IDs. Kept for backward compatibility.

func (*SQLiteStorage) RunInTransaction

func (s *SQLiteStorage) RunInTransaction(ctx context.Context, fn func(tx storage.Transaction) error) error

RunInTransaction executes a function within a database transaction.

The transaction uses BEGIN IMMEDIATE to acquire a write lock early, preventing deadlocks when multiple goroutines compete for the same lock.

Transaction lifecycle:

  1. Acquire dedicated connection from pool
  2. Begin IMMEDIATE transaction with retry on SQLITE_BUSY
  3. Execute user function with Transaction interface
  4. On success: COMMIT
  5. On error or panic: ROLLBACK

Panic safety: If the callback panics, the transaction is rolled back and the panic is re-raised to the caller.

func (*SQLiteStorage) SearchIssues

func (s *SQLiteStorage) SearchIssues(ctx context.Context, query string, filter types.IssueFilter) ([]*types.Issue, error)

SearchIssues finds issues matching query and filters

func (*SQLiteStorage) SetConfig

func (s *SQLiteStorage) SetConfig(ctx context.Context, key, value string) error

SetConfig sets a configuration value

func (*SQLiteStorage) SetExportHash

func (s *SQLiteStorage) SetExportHash(ctx context.Context, issueID, contentHash string) error

SetExportHash stores the content hash of an issue after successful export.

func (*SQLiteStorage) SetJSONLFileHash

func (s *SQLiteStorage) SetJSONLFileHash(ctx context.Context, fileHash string) error

SetJSONLFileHash stores the hash of the JSONL file after export (bd-160).

func (*SQLiteStorage) SetMetadata

func (s *SQLiteStorage) SetMetadata(ctx context.Context, key, value string) error

SetMetadata sets a metadata value (for internal state like import hashes)

func (*SQLiteStorage) TryResurrectParent

func (s *SQLiteStorage) TryResurrectParent(ctx context.Context, parentID string) (bool, error)

TryResurrectParent attempts to resurrect a deleted parent issue from JSONL history. If the parent is found in the JSONL file, it creates a tombstone issue (status=closed) to preserve referential integrity for hierarchical children.

This function is called during import when a child issue references a missing parent.

Returns:

  • true if parent was successfully resurrected or already exists
  • false if parent was not found in JSONL history
  • error if resurrection failed for any other reason

func (*SQLiteStorage) TryResurrectParentChain

func (s *SQLiteStorage) TryResurrectParentChain(ctx context.Context, childID string) (bool, error)

TryResurrectParentChain recursively resurrects all missing parents in a hierarchical ID chain. For example, if resurrecting "bd-abc.1.2", this ensures both "bd-abc" and "bd-abc.1" exist.

Returns:

  • true if entire chain was successfully resurrected or already exists
  • false if any parent in the chain was not found in JSONL history
  • error if resurrection failed for any other reason
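The ancestor chain implied by a hierarchical ID can be derived by trimming dotted segments, shallowest first. parentChain is a hypothetical helper for illustration, not the package's API:

```go
package main

import (
	"fmt"
	"strings"
)

// parentChain returns every ancestor ID implied by a hierarchical child
// ID, shallowest first: "bd-abc.1.2" -> ["bd-abc", "bd-abc.1"].
func parentChain(childID string) []string {
	parts := strings.Split(childID, ".")
	var chain []string
	for i := 1; i < len(parts); i++ {
		chain = append(chain, strings.Join(parts[:i], "."))
	}
	return chain
}

func main() {
	fmt.Println(parentChain("bd-abc.1.2")) // [bd-abc bd-abc.1]
	fmt.Println(parentChain("bd-abc"))     // []
}
```

Resurrecting each ancestor in this order guarantees that every parent exists before its own children are processed.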

func (*SQLiteStorage) UnderlyingConn

func (s *SQLiteStorage) UnderlyingConn(ctx context.Context) (*sql.Conn, error)

UnderlyingConn returns a single connection from the pool for scoped use.

This provides a connection with explicit lifetime boundaries, useful for:

  • One-time DDL operations (CREATE TABLE, ALTER TABLE)
  • Migration scripts that need transaction control
  • Operations that benefit from connection-level state

IMPORTANT: The caller MUST close the connection when done:

conn, err := storage.UnderlyingConn(ctx)
if err != nil {
    return err
}
defer conn.Close()

For general queries and transactions, prefer UnderlyingDB() which manages the connection pool automatically.

EXAMPLE (extension table migration):

conn, err := storage.UnderlyingConn(ctx)
if err != nil {
    return err
}
defer conn.Close()

_, err = conn.ExecContext(ctx, `
    CREATE TABLE IF NOT EXISTS vc_executions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        issue_id TEXT NOT NULL,
        FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
    )
`)

func (*SQLiteStorage) UnderlyingDB

func (s *SQLiteStorage) UnderlyingDB() *sql.DB

UnderlyingDB returns the underlying *sql.DB connection for extensions.

This allows extensions (like VC) to create their own tables in the same database while leveraging the existing connection pool and schema. The returned *sql.DB is safe for concurrent use and shares the same transaction isolation and locking behavior as the core storage operations.

IMPORTANT SAFETY RULES:

1. DO NOT call Close() on the returned *sql.DB

  • The SQLiteStorage owns the connection lifecycle
  • Closing it will break all storage operations
  • Use storage.Close() to close the database

2. DO NOT modify connection pool settings

  • Avoid SetMaxOpenConns, SetMaxIdleConns, SetConnMaxLifetime, etc.
  • The storage has already configured these for optimal performance

3. DO NOT change SQLite PRAGMAs

  • The database is configured with WAL mode, foreign keys, and busy timeout
  • Changing these (e.g., journal_mode, synchronous, locking_mode) can cause corruption

4. Expect errors after storage.Close()

  • Check storage.IsClosed() before long-running operations if needed
  • Pass contexts with timeouts to prevent hanging on closed connections

5. Keep write transactions SHORT

  • SQLite has a single-writer lock even in WAL mode
  • Long-running write transactions will block core storage operations
  • Use read transactions (BEGIN DEFERRED) when possible

GOOD PRACTICES:

  • Create extension tables with FOREIGN KEY constraints to maintain referential integrity
  • Use the same DATETIME format (RFC3339 / ISO8601) for consistency
  • Leverage SQLite indexes for query performance
  • Test with the -race flag to catch concurrency issues

EXAMPLE (creating a VC extension table):

db := storage.UnderlyingDB()
_, err := db.Exec(`
    CREATE TABLE IF NOT EXISTS vc_executions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        issue_id TEXT NOT NULL,
        status TEXT NOT NULL,
        created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
    );
    CREATE INDEX IF NOT EXISTS idx_vc_executions_issue ON vc_executions(issue_id);
`)

func (*SQLiteStorage) UpdateIssue

func (s *SQLiteStorage) UpdateIssue(ctx context.Context, id string, updates map[string]interface{}, actor string) error

UpdateIssue updates fields on an issue

func (*SQLiteStorage) UpdateIssueID

func (s *SQLiteStorage) UpdateIssueID(ctx context.Context, oldID, newID string, issue *types.Issue, actor string) error

UpdateIssueID updates an issue ID and all its text fields in a single transaction

type SchemaProbeResult

type SchemaProbeResult struct {
	Compatible     bool
	MissingTables  []string
	MissingColumns map[string][]string // table -> missing columns
	ErrorMessage   string
}

SchemaProbeResult contains the results of a schema compatibility check

type Snapshot

type Snapshot struct {
	IssueCount      int
	ConfigKeys      []string
	DependencyCount int
	LabelCount      int
}

Snapshot captures database state before migrations for validation
