den

package module
v0.11.2
Published: May 3, 2026 License: MIT Imports: 21 Imported by: 6

README

Den

Go gophers organizing documents in their den
"Every burrow needs a den — a place to store what matters and find it again when you need it."


An ODM for Go with two storage backends — SQLite and PostgreSQL. Same API, your choice of engine.

Each Go struct you register is a document, stored as a JSONB row in a SQL table that Den calls a collection. The SQL schema is one table per type with a JSONB data column plus a small set of secondary indexes Den maintains for you. You query collections with a fluent builder, relate them with typed links, and run it all in transactions. The SQLite backend compiles into your binary with no external dependencies. The PostgreSQL backend connects to your existing database. Switch between them by changing one line.

[!NOTE] Den is a document store, not a relational database. It does not support SQL, JOINs, or schema migrations in the traditional sense. If you need relational modeling, use Bun or GORM instead.

Features

  • Two backends, one API — SQLite (embedded, pure Go, no CGO) and PostgreSQL (server-based, JSONB + GIN indexes)
  • Chainable QuerySet — NewQuery[T](db).Where(...).Sort(...).Limit(n).All(ctx) with lazy evaluation
  • Range iteration — Iter() returns iter.Seq2[*T, error] for memory-efficient streaming with Go's range
  • Typed relations — Link[T] for one-to-one, []Link[T] for one-to-many, with cascade write/delete and eager/lazy fetch
  • Back-references — BackLinks[T] finds all documents referencing a given target
  • Native aggregation — Avg, Sum, Min, Max pushed down to SQL; GroupBy and Project for analytics
  • Full-text search — FTS5 for SQLite, tsvector for PostgreSQL, same Search() API
  • Lifecycle hooks — BeforeInsert, AfterUpdate, Validate, and more — interfaces on your struct, no registration
  • Change tracking — opt-in via Tracked: IsChanged, GetChanges, Revert with byte-level snapshots
  • Soft delete — embed SoftDelete alongside Base, automatic query filtering, HardDelete for permanent removal
  • Attachments & storage — embed Attachment, install a den.Storage backend once, let the hard-delete cascade clean bytes automatically
  • Optimistic concurrency — revision-based conflict detection with ErrRevisionConflict
  • Transactions — RunInTransaction with panic-safe rollback
  • Migrations — registry-based, each migration runs atomically in a transaction
  • Struct tag validation — optional validate:"required,email" tags via go-playground/validator, enabled with validate.WithValidation()
  • Expression indexes — den:"index", den:"unique", nullable unique for pointer fields

Quick Start

mkdir myapp && cd myapp
go mod init myapp
go get github.com/oliverandrich/den@latest

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/oliverandrich/den"
    _ "github.com/oliverandrich/den/backend/sqlite" // register sqlite:// scheme
    "github.com/oliverandrich/den/document"
    "github.com/oliverandrich/den/where"
)

type Product struct {
    document.Base
    Name  string  `json:"name"  den:"index"`
    Price float64 `json:"price" den:"index"`
}

func main() {
    ctx := context.Background()

    // Open a SQLite database
    db, err := den.OpenURL(ctx, "sqlite:///products.db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Register document types (creates tables and indexes)
    if err := den.Register(ctx, db, &Product{}); err != nil {
        log.Fatal(err)
    }

    // Insert
    p := &Product{Name: "Widget", Price: 9.99}
    if err := den.Insert(ctx, db, p); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Inserted: %s (ID: %s)\n", p.Name, p.ID)

    // Query
    products, err := den.NewQuery[Product](db,
        where.Field("price").Lt(20.0),
    ).Sort("name", den.Asc).All(ctx)
    if err != nil {
        log.Fatal(err)
    }
    for _, prod := range products {
        fmt.Printf("  %s — $%.2f\n", prod.Name, prod.Price)
    }

    // Iterate (streaming, memory-efficient)
    for doc, err := range den.NewQuery[Product](db).Iter(ctx) {
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("  %s\n", doc.Name)
    }
}

To use PostgreSQL instead, change the DSN and the import:

import _ "github.com/oliverandrich/den/backend/postgres" // instead of sqlite

db, err := den.OpenURL(ctx, "postgres://user:pass@localhost/mydb")

Architecture

den/
├── den.go, crud.go, queryset.go    Core API: Open, CRUD, QuerySet
├── iter.go                         Iter() — iter.Seq2 for range loops
├── aggregate.go                    Avg, Sum, Min, Max, GroupBy, Project
├── link.go, backlinks.go           Link[T] relations, BackLinks
├── search.go                       Full-text search (FTSProvider)
├── track.go                        Change tracking: IsChanged, GetChanges
├── soft_delete.go                  Soft delete, HardDelete
├── hooks.go                        Lifecycle hook interfaces
├── revision.go                     Optimistic concurrency
├── tx.go                           Transactions
├── storage.go                      Storage interface
├── storage/                        Storage backend registry + OpenURL
├── storage/file/                   Local filesystem backend (file:// scheme)
├── document/                       Base + composable SoftDelete, Tracked, Attachment embeds
├── where/                          Query condition builders
├── backend/
│   ├── sqlite/                     SQLite backend (pure Go, no CGO)
│   └── postgres/                   PostgreSQL backend (pgx)
├── validate/                       Optional struct tag validation
├── migrate/                        Migration framework
└── dentest/                        Test helpers

Backend Interface

Both backends implement the same Backend interface. The ReadWriter subset is shared between backends and transactions, so CRUD code works identically inside and outside transactions.

type ReadWriter interface {
    Get(ctx context.Context, collection, id string) ([]byte, error)
    Put(ctx context.Context, collection, id string, data []byte) error
    Delete(ctx context.Context, collection, id string) error
    Query(ctx context.Context, collection string, q *Query) (Iterator, error)
    Count(ctx context.Context, collection string, q *Query) (int64, error)
    Exists(ctx context.Context, collection string, q *Query) (bool, error)
    Aggregate(ctx context.Context, collection string, op AggregateOp, field string, q *Query) (*float64, error)
}

Document Types

Every document embeds document.Base — the required anchor that provides ID, CreatedAt, UpdatedAt, Rev. Opt-in features are available as separate composable embeds:

Embed Purpose
document.Base Required. Provides ID, CreatedAt, UpdatedAt, Rev
document.SoftDelete Adds DeletedAt and IsDeleted() for non-destructive deletion
document.Tracked Adds byte-snapshot machinery for IsChanged, GetChanges, Revert
document.Attachment Adds StoragePath, Mime, Size, SHA256 — file reference paired with a den.Storage backend

Compose freely: struct { document.Base; document.SoftDelete; document.Tracked; document.Attachment; ... }.

Query Operators

where.Field("price").Gt(10)           // comparison
where.Field("status").In("a", "b")    // set membership
where.Field("tags").Contains("go")    // array contains
where.Field("email").IsNil()          // null check
where.Field("name").RegExp("^W")      // regular expression
where.And(cond1, cond2)               // logical combinators
where.Field("addr.city").Eq("Berlin") // nested fields (dot notation)
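
These conditions compose. A hedged sketch combining them in one query, using the Product type from Quick Start plus an assumed "tags" array field:

```go
// Cheap products tagged "go", cheapest first. "tags" is an assumed
// field, not part of the Quick Start struct.
cheap, err := den.NewQuery[Product](db,
	where.And(
		where.Field("price").Lt(20.0),
		where.Field("tags").Contains("go"),
	),
).Sort("price", den.Asc).All(ctx)
if err != nil {
	log.Fatal(err)
}
for _, p := range cheap {
	fmt.Printf("%s — $%.2f\n", p.Name, p.Price)
}
```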

Validation

Den supports automatic struct tag validation via go-playground/validator. Enable it as an option when opening the database:

import "github.com/oliverandrich/den/validate"

db, err := den.OpenURL(ctx, "sqlite:///data.db", validate.WithValidation())

Then add validate tags to your document structs:

type User struct {
    document.Base
    Name  string `json:"name"  den:"unique" validate:"required,min=3,max=50"`
    Email string `json:"email" den:"unique" validate:"required,email"`
    Age   int    `json:"age"                validate:"gte=0,lte=130"`
}

Validation runs automatically before every insert and update. Errors wrap den.ErrValidation and can be inspected for field-level detail:

err := den.Insert(ctx, db, &User{Name: "ab"})
if errors.Is(err, den.ErrValidation) {
    var ve *validate.Errors
    if errors.As(err, &ve) {
        for _, fe := range ve.Fields {
            fmt.Printf("%s failed on %s\n", fe.Field, fe.Tag)
        }
    }
}

Tag validation and the Validator interface coexist — tag validation runs first (structural rules), then Validate() (business logic). Without validate.WithValidation(), no tag validation occurs (fully backward compatible).

Testing

Den provides a dentest package for test setup:

func TestMyFeature(t *testing.T) {
    db := dentest.MustOpen(t, &Product{}, &Category{})
    // File-backed SQLite in t.TempDir(), auto-closed via t.Cleanup
}

For PostgreSQL tests:

func TestMyFeature(t *testing.T) {
    db := dentest.MustOpenPostgres(t, "postgres://localhost/test", &Product{})
}

Benchmarks

Measured on an Apple M4 Pro (14 cores), Go 1.25, PostgreSQL 17 on localhost. The fixture is a ~1 KB article document (title, body, status, category, tags, price, indexed timestamp, embedded author link, metadata map) — closer to a real blog or catalog entry than a minimal struct.

Reproduce locally with just bench-readme. Numbers exclude connection-setup overhead (the bench helper opens the DB once and reuses it).

Serial workloads

Single-goroutine latency per operation. Lower is better.

Scenario SQLite Postgres SQLite allocs Postgres allocs
Insert (single) 148.2 µs 186.6 µs 31 29
InsertMany (100) 9.98 ms 13.91 ms 3411 2916
InsertMany (1000) 91.67 ms 142.16 ms 34021 29064
FindByID 5.1 µs 37.4 µs 42 31
FindByIDs (10) 266.6 µs 930.5 µs 343 328
Query + Sort + Limit(10) 730.5 µs 2.12 ms 328 291
Query + Sort + Limit(100) 1.50 ms 3.83 ms 2941 2544
Iter (1000 rows) 3.07 ms 2.93 ms 29050 25036
Count(filter) 25.5 µs 805.0 µs 29 31
Sum(filter) 177.6 µs 1.02 ms 35 41
FTS Search 885.9 µs 2.04 ms 603 513
WithFetchLinks (20 rows) 78.4 µs 632.9 µs 658 570
Update (single) 119.0 µs 307.8 µs 62 49
QuerySet.Update (100) 9.01 ms 18.35 ms 5247 4341
RunInTransaction 159.8 µs 299.5 µs 78 55

Concurrent workloads

b.RunParallel with Go's default GOMAXPROCS. Higher ops/sec is better. SQLite serializes writers by design (BEGIN IMMEDIATE), so write-heavy numbers plateau at single-writer speed; PostgreSQL's MVCC scales writes across connections.

Scenario SQLite Postgres
FindByID 70.1k ops/s 82.9k ops/s
Insert (single) 6.3k ops/s 23.4k ops/s
Mixed reads/writes 80/20 27.2k ops/s 63.3k ops/s
Queue consumer (SkipLocked) 23.9k ops/s 19.4k ops/s

Development

Den uses just as command runner:

just setup      # Check that all required dev tools are installed
just test       # Run all tests (SQLite only)
just test-all   # Run all tests including PostgreSQL
just lint       # Run golangci-lint
just fmt        # Format all Go files
just coverage   # Run tests with coverage report
just vuln       # Run vulnerability check
just tidy       # Tidy module dependencies
just beans      # List active beans (issue tracker)

Requires Go 1.25+. Run just setup to verify your dev environment.

Dependencies

Dependency Purpose
github.com/oklog/ulid/v2 ULID-based document IDs
github.com/goccy/go-json Fast JSON encoding
modernc.org/sqlite SQLite backend (pure Go, no CGO)
github.com/jackc/pgx/v5 PostgreSQL backend
github.com/go-playground/validator/v10 Struct tag validation (optional, via den/validate)

License

Den is licensed under the MIT License.

The Go Gopher was originally designed by Renee French and is licensed under CC BY 4.0.

Documentation

Constants

const (
	// FieldID is the document.Base.ID JSON field name. Maps to a
	// 26-character ULID string; sortable chronologically.
	FieldID = "_id"

	// FieldCreatedAt is the document.Base.CreatedAt JSON field name.
	// Set on Insert, never touched afterwards.
	FieldCreatedAt = "_created_at"

	// FieldUpdatedAt is the document.Base.UpdatedAt JSON field name.
	// Refreshed by Insert and Update.
	FieldUpdatedAt = "_updated_at"

	// FieldRev is the document.Base.Rev JSON field name. Present
	// only when the type opts into revision tracking via
	// DenSettings().UseRevision; absent (omitempty) otherwise.
	FieldRev = "_rev"

	// FieldDeletedAt is the document.SoftDelete.DeletedAt JSON field
	// name. Available only on types that embed document.SoftDelete.
	// Default queries auto-filter rows where this is non-nil; opt
	// back in via QuerySet.IncludeDeleted or den.IncludeDeleted as
	// a CRUDOption.
	FieldDeletedAt = "_deleted_at"

	// FieldDeletedBy is the document.SoftDelete.DeletedBy JSON field
	// name. Optional audit value populated via the SoftDeleteBy
	// CRUDOption on the soft-delete path.
	FieldDeletedBy = "_deleted_by"

	// FieldDeleteReason is the document.SoftDelete.DeleteReason JSON
	// field name. Optional audit value populated via the
	// SoftDeleteReason CRUDOption on the soft-delete path.
	FieldDeleteReason = "_delete_reason"
)

Reserved JSON field names that Den's standard embeds (document.Base and document.SoftDelete) install on every registered type. The underscore prefix namespaces these away from user-defined fields and matches the MongoDB convention.

Use the constants whenever you need the JSON name in code that takes a string — `where.Field`, `Sort`, `SetFields`, `After` / `Before`, `Project`'s `den:"from:..."` tag — so a refactor stays compile-safe instead of relying on string literals scattered across the codebase.
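
For instance, a minimal sketch of a "newest first" query written against the constants rather than raw strings (den.Desc is assumed by symmetry with den.Asc from Quick Start):

```go
// Newest ten products. den.FieldCreatedAt replaces the raw
// "_created_at" literal, so a field rename fails at compile time.
newest, err := den.NewQuery[Product](db).
	Sort(den.FieldCreatedAt, den.Desc). // den.Desc assumed to mirror den.Asc
	Limit(10).
	All(ctx)
if err != nil {
	log.Fatal(err)
}
for _, p := range newest {
	fmt.Println(p.Name)
}
```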

The Go-side struct fields (Base.ID, Base.CreatedAt, …) keep their natural names; only the JSON tag (and therefore the SQL column access path) uses the underscore form. The stored documents depend on these names, so renaming a constant would be a breaking storage change, not just a source-level rename.

Variables

var (
	ErrNotFound          = errors.New("den: document not found")
	ErrMultipleMatches   = errors.New("den: more than one document matched")
	ErrDuplicate         = errors.New("den: duplicate key")
	ErrRevisionConflict  = errors.New("den: revision conflict")
	ErrNotRegistered     = errors.New("den: document type not registered")
	ErrValidation        = errors.New("den: validation failed")
	ErrTransactionFailed = errors.New("den: transaction failed")
	ErrNoSnapshot        = errors.New("den: no snapshot — document was never loaded from database")
	ErrMigrationFailed   = errors.New("den: migration failed")
	ErrLocked            = errors.New("den: row is locked by another transaction")
	ErrDeadlock          = errors.New("den: deadlock detected")
	ErrSerialization     = errors.New("den: serialization failure")
	ErrFTSNotSupported   = errors.New("den: backend does not support full-text search")
	// ErrLockRequiresTransaction is returned when a terminal method runs on a
	// QuerySet whose ForUpdate was set but whose scope is a *DB. Row locking
	// is only meaningful inside a transaction because the lock is released
	// when the enclosing statement commits.
	ErrLockRequiresTransaction = errors.New("den: ForUpdate requires a transaction scope (*Tx)")
	// ErrIncompatibleScope is returned when a CRUDOption demands a scope the
	// caller did not provide (e.g. ContinueOnError requires *DB because the
	// caller's transaction cannot be split into per-document transactions).
	ErrIncompatibleScope = errors.New("den: option not compatible with the provided scope")
	// ErrIncompatibleOptions is returned when two mutually-exclusive
	// CRUDOptions are passed together.
	ErrIncompatibleOptions = errors.New("den: incompatible options combined")
	// ErrIncompatiblePagination is returned by terminal QuerySet methods when
	// the caller mixed cursor pagination (After/Before) with offset pagination
	// (Skip). The two styles have no defined interaction — pick one.
	ErrIncompatiblePagination = errors.New("den: cursor pagination (After/Before) cannot be combined with offset pagination (Skip)")
	// ErrUnsupportedScheme is returned by OpenURL when no backend opener is
	// registered for the DSN's scheme — typically because the caller forgot
	// the side-effect import (e.g. `_ "github.com/oliverandrich/den/backend/sqlite"`).
	// Wrapped with the actual scheme via fmt.Errorf so callers can use
	// errors.Is to detect this case without scraping error strings.
	ErrUnsupportedScheme = errors.New("den: unsupported database scheme")
)
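
These are sentinel errors, so errors.Is is the intended detection path. A minimal sketch against the Quick Start types:

```go
// Distinguish "not found" from a real failure.
p, err := den.FindByID[Product](ctx, db, id)
switch {
case errors.Is(err, den.ErrNotFound):
	fmt.Println("no such product")
case err != nil:
	log.Fatal(err)
default:
	fmt.Println("found:", p.Name)
}
```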

Functions

func AdvisoryLock added in v0.8.0

func AdvisoryLock(ctx context.Context, tx *Tx, key int64) error

AdvisoryLock acquires an application-defined lock on key that persists until the transaction commits or rolls back. Concurrent transactions attempting to lock the same key block until the holder ends. See the Transaction interface for backend-specific behavior.
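
A minimal sketch, with an arbitrary application-chosen key:

```go
// Serialize a periodic job across processes: whichever transaction
// acquires key 4217 first proceeds; the others block until it ends.
err := den.RunInTransaction(ctx, db, func(tx *den.Tx) error {
	if err := den.AdvisoryLock(ctx, tx, 4217); err != nil { // 4217 is arbitrary
		return err
	}
	// ... critical section; the lock is released on commit or rollback
	return nil
})
```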

func BackLinks

func BackLinks[T any](ctx context.Context, s Scope, linkField string, targetID string, opts ...CRUDOption) ([]*T, error)

BackLinks finds all documents of type T that reference the given target ID through the specified link field. For example, BackLinks[House](ctx, db, "door", doorID) returns all Houses whose "door" link points to doorID. The scope parameter accepts either a *DB or a *Tx.

linkField is the JSON tag on the holder's link field. Renaming the JSON tag silently breaks every BackLinks call against this collection. Prefer BackLinksField when the holder has exactly one Link[T] field for the target type — it's compile-checked on H and T and immune to JSON-tag renames. Use this string form to disambiguate when multiple Link[T] fields point at the same target type.

func BackLinksField added in v0.11.0

func BackLinksField[H any, T any](ctx context.Context, s Scope, targetID string, opts ...CRUDOption) ([]*H, error)

BackLinksField is the typed alternative to BackLinks: it identifies the link relationship through the Go type parameters (H = the holder, T = the target) instead of a string field name. Internally the holder struct is walked once to find the unique Link[T] field; its JSON tag is then used for the underlying query.

houses, err := den.BackLinksField[House, Door](ctx, db, doorID)

JSON-tag renames on the holder's link field are caught the next time BackLinksField runs, not silently ignored.

It returns a clear error in the cases the typed lookup deliberately rejects: when the holder has no Link[T] field at all, when it has more than one (use string-based BackLinks to disambiguate), or when the only matching fields are []Link[T] slices (use a manual where.Field(...).Contains(targetID) query — Eq doesn't match against array contents).

func Collections

func Collections(db *DB) []string

Collections returns the names of all registered collections in sorted order.

func Delete

func Delete[T any](ctx context.Context, s Scope, document *T, opts ...CRUDOption) error

Delete removes a document from the database. Options: WithLinkRule to cascade deletes to linked documents.

func DeleteMany

func DeleteMany[T any](ctx context.Context, s Scope, conditions []where.Condition, opts ...CRUDOption) (int64, error)

DeleteMany deletes all documents matching the given conditions. Returns the number of deleted documents.

When scope is a *DB, all deletes run in one new transaction; when scope is a *Tx, the deletes run inline in the caller's transaction.

func FetchAllLinks

func FetchAllLinks[T any](ctx context.Context, s Scope, doc *T) error

FetchAllLinks resolves all link fields on a document. See FetchLink for the scope semantics. The eager / lazy tag on each Link field is ignored here — calling FetchAllLinks is itself the explicit ask for full hydration.

func FetchLink

func FetchLink[T any](ctx context.Context, s Scope, doc *T, fieldName string) error

FetchLink resolves a single named link field on a document. The scope parameter accepts either a *DB (read from the backend directly) or a *Tx (read from the enclosing transaction).

The fieldName is the JSON tag on the parent's link field. Renaming the JSON tag silently breaks every FetchLink call against this collection. Prefer FetchLinkField when you can pass the typed link pointer directly — it's compile-checked.

func FetchLinkField added in v0.11.0

func FetchLinkField[T any](ctx context.Context, s Scope, link *Link[T]) error

FetchLinkField resolves the link by typed pointer instead of a stringly-named field on the parent. Use it when you have the Link[T] in hand directly — refactor-safe and immune to JSON-tag renames on the parent struct.

No-op when the link's ID is empty (cascade-write input) or when Loaded is already true (idempotent — matches FetchLink).

func FindByID

func FindByID[T any](ctx context.Context, s Scope, id string, opts ...CRUDOption) (*T, error)

FindByID retrieves a document by its ID.

`den:"eager"`-tagged link fields on T are hydrated by default; pass WithoutFetchLinks to suppress hydration.

func FindByIDs

func FindByIDs[T any](ctx context.Context, s Scope, ids []string, opts ...CRUDOption) ([]*T, error)

FindByIDs retrieves multiple documents by their IDs in a single query. Missing IDs are silently skipped. Order is not guaranteed.

`den:"eager"`-tagged link fields on T are batch-resolved by default; pass WithoutFetchLinks to suppress hydration.

func FindOneAndUpdate

func FindOneAndUpdate[T any](ctx context.Context, s Scope, fields SetFields, conditions []where.Condition, opts ...CRUDOption) (*T, error)

FindOneAndUpdate atomically finds the single matching document, applies the field updates, and returns the modified document. The find and replace are wrapped in a transaction for atomicity.

Returns ErrNotFound if no document matches and ErrMultipleMatches if more than one matches — the conditions must identify the document uniquely.

Field names in fields are validated against the registered struct before the write transaction opens; an unknown name aborts the call without touching storage. Mirrors QuerySet.Update's pre-tx validation contract.

When scope is a *DB, a new transaction is opened; when scope is a *Tx, the operation runs inline in the caller's transaction.

Pass IncludeDeleted to consider soft-deleted documents in the match.
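
A minimal sketch, assuming SetFields keys are the JSON field names:

```go
// Atomically reprice the uniquely-named "Widget" and get the result back.
updated, err := den.FindOneAndUpdate[Product](ctx, db,
	den.SetFields{"price": 12.99}, // key assumed to be the JSON name
	[]where.Condition{where.Field("name").Eq("Widget")},
)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%s now costs $%.2f\n", updated.Name, updated.Price)
```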

func FindOneAndUpsert added in v0.11.0

func FindOneAndUpsert[T any](
	ctx context.Context,
	s Scope,
	defaults *T,
	fields SetFields,
	conditions []where.Condition,
	opts ...CRUDOption,
) (*T, bool, error)

FindOneAndUpsert atomically finds the single document matching conditions and applies fields, or inserts a new document built from defaults with fields applied on top. The third return value reports which path ran: true means a new document was inserted, false means an existing one was updated.

Conditions must identify the document uniquely: ErrMultipleMatches is returned if more than one document matches. The match-and-write happen in a single transaction so the upsert is atomic against itself; concurrent upserts on the same missing row rely on a unique constraint to fail one of the inserters with ErrDuplicate — there is no internal retry, and no row lock is taken on the lookup (an absent row cannot be locked).

On the miss path the defaults pointer is mutated by Insert (ID, CreatedAt, UpdatedAt are populated) and returned as the result. Callers reusing a shared defaults template across upserts should pass a fresh value each call — a stale ID would otherwise be carried into the next Insert.

Hooks follow the standard Insert / Update order. Exactly one path runs:

  • Hit: BeforeUpdate → BeforeSave → tag-validation → Validate → write → AfterUpdate → AfterSave
  • Miss: BeforeInsert → BeforeSave → tag-validation → Validate → write → AfterInsert → AfterSave

Soft-deleted matches are ignored by default — pass IncludeDeleted to have them satisfy the lookup. DeletedAt is left as-is when an existing soft-deleted document is updated; clear it explicitly via fields if the caller wants to resurrect.

Field names in fields are validated against the registered struct before the write transaction opens; an unknown name aborts the call without touching storage. Mirrors QuerySet.Update's pre-tx validation contract.

When scope is a *DB, a new transaction is opened; when scope is a *Tx, the operation runs inline in the caller's transaction.
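
A minimal sketch covering both paths, using the Quick Start types:

```go
// Upsert keyed on a unique name: defaults are inserted on a miss,
// fields are applied on top either way.
doc, inserted, err := den.FindOneAndUpsert[Product](ctx, db,
	&Product{Name: "Widget", Price: 9.99},               // miss-path defaults
	den.SetFields{"price": 11.49},                       // applied on both paths
	[]where.Condition{where.Field("name").Eq("Widget")},
)
if err != nil {
	log.Fatal(err)
}
fmt.Println("inserted:", inserted, "price:", doc.Price)
```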

func FindOrCreate added in v0.11.0

func FindOrCreate[T any](
	ctx context.Context,
	s Scope,
	defaults *T,
	conditions []where.Condition,
	opts ...CRUDOption,
) (*T, bool, error)

FindOrCreate is the find-or-create-with-defaults shorthand: it returns the existing document if conditions match exactly one row, otherwise inserts `defaults` as a new row. Existing rows are never modified.

Equivalent to `FindOneAndUpsert(ctx, s, defaults, SetFields{}, conditions, opts...)` — same atomicity, same hook firing rules, same `ErrMultipleMatches` on non-unique conditions, same `(doc, inserted, err)` return shape. Reach for it when the typical "fetch this row by unique key, create with these defaults if missing, leave the rest alone" pattern doesn't need the post-find field updates FindOneAndUpsert can apply.

func GetChanges

func GetChanges[T any](db *DB, doc *T) (map[string]FieldChange, error)

GetChanges returns a map of field names to their before/after values for all fields that changed since the document was loaded. Returns nil if nothing changed or no snapshot exists.
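
A minimal sketch, assuming Product additionally embeds document.Tracked:

```go
p, err := den.FindByID[Product](ctx, db, id) // snapshot taken at load
if err != nil {
	log.Fatal(err)
}
p.Price = 19.99

dirty, _ := den.IsChanged(db, p)    // reports the mutation above
changes, _ := den.GetChanges(db, p) // before/after per changed field
fmt.Println(dirty, changes)

_ = den.Revert(db, p) // in-memory restore to the loaded state
```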

func Insert

func Insert[T any](ctx context.Context, s Scope, document *T, opts ...CRUDOption) error

Insert adds a new document to the database. If the document's ID is empty, a new ULID is generated. Options: WithLinkRule to cascade writes to linked documents.

The scope parameter accepts either a *DB (operating outside a transaction) or a *Tx (operating inside RunInTransaction). See the Scope interface.

func InsertMany

func InsertMany[T any](ctx context.Context, s Scope, documents []*T, opts ...CRUDOption) error

InsertMany inserts multiple documents in a single transaction.

When scope is a *DB, a new transaction is opened for the batch; when scope is a *Tx, the inserts run inline in the caller's transaction.

See PreValidate and ContinueOnError for the available behavioral options.

func IsChanged

func IsChanged[T any](db *DB, doc *T) (bool, error)

IsChanged reports whether the document has changed since it was loaded. Returns false if the document has no snapshot (never loaded or not Trackable).

func LockByID added in v0.8.0

func LockByID[T any](ctx context.Context, tx *Tx, id string, opts ...LockOption) (*T, error)

LockByID retrieves a document by ID and acquires a row-level lock that persists for the lifetime of the transaction. Without options, concurrent transactions attempting to lock the same row block until this transaction commits or rolls back. Pass SkipLocked or NoWait to change that behavior.

On PostgreSQL this maps to SELECT ... FOR UPDATE; on SQLite it is a no-op because IMMEDIATE transactions already serialize writers.

The *Tx parameter enforces transaction scope at compile time — a lock outside a transaction releases immediately and would be meaningless. Returns ErrNotFound if the document does not exist. Returns ErrLocked when NoWait is set and the row is held by another transaction.
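
A sketch of the queue-consumer pattern. Two assumptions here: that SkipLocked is passed as a LockOption value, and that a row held by another worker surfaces as ErrNotFound when skipped — neither is spelled out above.

```go
// Claim one job row without blocking on rows other workers hold.
err := den.RunInTransaction(ctx, db, func(tx *den.Tx) error {
	job, err := den.LockByID[Job](ctx, tx, jobID, den.SkipLocked)
	if errors.Is(err, den.ErrNotFound) {
		return nil // gone, or held by another worker (assumption)
	}
	if err != nil {
		return err
	}
	// ... process job, then persist the result in the same transaction
	return den.Update(ctx, tx, job)
})
```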

func NewID added in v0.4.0

func NewID() string

NewID generates a new ULID string. ULIDs are lexicographically sortable and timestamp-ordered. Use this for document IDs, worker IDs, or any unique identifier.

func Refresh

func Refresh[T any](ctx context.Context, s Scope, document *T, opts ...CRUDOption) error

Refresh re-reads a document from the database by its ID, overwriting all fields on the provided struct.

`den:"eager"`-tagged link fields on T are hydrated by default; pass WithoutFetchLinks to suppress hydration.

func Register

func Register(ctx context.Context, db *DB, types ...any) error

Register analyzes the given document types and registers their collections with the database. Must be called before any CRUD operations.

func RegisterBackend added in v0.2.0

func RegisterBackend(scheme string, opener func(ctx context.Context, dsn string) (Backend, error))

RegisterBackend registers a backend opener for a URL scheme. The opener receives the context supplied to OpenURL so that expensive setup work (dialing, metadata table creation) can honor deadlines and cancellation. Called by backend packages in their init() functions.

The scheme is normalized to lowercase so registration and lookup stay case-insensitive, matching URL-scheme semantics: "sqlite", "SQLite", and "SQLITE" all address the same backend.

Panics if scheme is empty, opener is nil, or a different opener is already registered for scheme — mirrors storage.Register semantics. Duplicate registrations surface mis-wiring (two backend packages claiming the same scheme, a replace-directive fork, or a manual call after a side-effect import) at process startup instead of at first lookup.

func Revert added in v0.8.0

func Revert[T any](db *DB, doc *T) error

Revert restores the document to its state at load time by decoding the stored snapshot back over its fields. Returns ErrNoSnapshot if the document was never loaded from the database or does not embed document.Tracked.

Named Revert rather than Rollback to avoid name collision with the backend transaction's Rollback method — this operation is purely an in-memory restore against the document snapshot and has nothing to do with transactions.

func RunInTransaction

func RunInTransaction(ctx context.Context, db *DB, fn func(tx *Tx) error) error

RunInTransaction executes fn within a transaction. If fn returns nil, the transaction is committed. If fn returns an error, the transaction is rolled back.

The *Tx passed to fn does not itself carry the context; entry points inside fn take ctx explicitly. Use the ctx closed over from the caller.
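
A minimal sketch of that shape, using the Quick Start types:

```go
// ctx is closed over from the caller; tx only scopes the writes.
err := den.RunInTransaction(ctx, db, func(tx *den.Tx) error {
	p := &Product{Name: "Gadget", Price: 4.99}
	if err := den.Insert(ctx, tx, p); err != nil {
		return err // a non-nil return rolls the transaction back
	}
	p.Price = 5.99
	return den.Update(ctx, tx, p) // a nil return commits both writes
})
```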

func Save added in v0.11.0

func Save[T any](ctx context.Context, s Scope, document *T, opts ...CRUDOption) error

Save inserts the document if its ID is empty, otherwise updates it. Convenience helper for the common "I have a *T; persist it" case where the caller does not want to think about whether the row exists yet.

Trade-off vs explicit Insert / Update: Save loses control over the branch — if a stale-rev Update would have failed with ErrRevisionConflict, an empty-ID Save instead silently routes to Insert. Use the explicit calls when conflict semantics matter.

Options pass through to whichever underlying call runs. Hooks fire on exactly one path (Insert hooks on the new-doc branch, Update hooks on the existing-doc branch).

func Update

func Update[T any](ctx context.Context, s Scope, document *T, opts ...CRUDOption) error

Update updates an existing document in the database. Options: WithLinkRule to cascade writes, IgnoreRevision to skip conflict check.

func UpdateMany added in v0.11.0

func UpdateMany[T any](ctx context.Context, s Scope, conditions []where.Condition, fields SetFields) (int64, error)

UpdateMany applies fields to every document matching conditions. Returns the number of modified documents.

Top-level shorthand for `NewQuery[T](s, conditions...).Update(ctx, fields)` — discoverable next to DeleteMany / Insert / Update instead of buried in the QuerySet chain. All semantics (per-row hooks, fail-fast on error, SetFields key validation, transaction wrapping) come from QuerySet.Update; see QuerySet.Update for the full contract.

Types

type AfterDeleter

type AfterDeleter interface {
	AfterDelete(ctx context.Context) error
}

AfterDeleter fires after any deletion completes — both soft and hard. See BeforeDeleter for the full hook ordering on each path.

type AfterInserter

type AfterInserter interface {
	AfterInsert(ctx context.Context) error
}

type AfterSaver

type AfterSaver interface {
	AfterSave(ctx context.Context) error
}

type AfterSoftDeleter added in v0.11.0

type AfterSoftDeleter interface {
	AfterSoftDelete(ctx context.Context) error
}

AfterSoftDeleter fires only on the soft-delete path — after the write, before AfterDelete. HardDelete() bypasses this hook. See BeforeDeleter for the full hook ordering.

type AfterUpdater

type AfterUpdater interface {
	AfterUpdate(ctx context.Context) error
}

type AggregateOp

type AggregateOp string

AggregateOp identifies a SQL aggregate function.

const (
	OpSum   AggregateOp = "SUM"
	OpAvg   AggregateOp = "AVG"
	OpMin   AggregateOp = "MIN"
	OpMax   AggregateOp = "MAX"
	OpCount AggregateOp = "COUNT"
)

type Backend

type Backend interface {
	Get(ctx context.Context, collection, id string) ([]byte, error)
	Put(ctx context.Context, collection, id string, data []byte) error
	Delete(ctx context.Context, collection, id string) error

	Query(ctx context.Context, collection string, q *Query) (Iterator, error)
	Count(ctx context.Context, collection string, q *Query) (int64, error)
	Exists(ctx context.Context, collection string, q *Query) (bool, error)
	Aggregate(ctx context.Context, collection string, op AggregateOp, field string, q *Query) (*float64, error)
	GroupBy(ctx context.Context, collection string, groupFields []string, aggs []GroupByAgg, q *Query) ([]GroupByRow, error)

	EnsureIndex(ctx context.Context, collection string, idx IndexDefinition) error
	DropIndex(ctx context.Context, collection string, name string) error
	ListRecordedIndexes(ctx context.Context, collection string) ([]RecordedIndex, error)

	EnsureCollection(ctx context.Context, name string, meta CollectionMeta) error
	DropCollection(ctx context.Context, name string) error

	Begin(ctx context.Context) (Transaction, error)

	Encoder() Encoder

	Ping(ctx context.Context) error
	Close() error
}

Backend defines the contract that all storage engines must implement.

type BeforeDeleter

type BeforeDeleter interface {
	BeforeDelete(ctx context.Context) error
}

BeforeDeleter fires before any deletion — both soft and hard. The hook runs before the soft-delete flip OR the physical row removal, whichever the call resolves to. Use BeforeSoftDeleter for soft-only logic.

Ordering on the soft path: BeforeDelete → BeforeSoftDelete → [write] → AfterSoftDelete → AfterDelete.

Ordering on the hard path (HardDelete() option, or no SoftDelete embed): BeforeDelete → [write] → AfterDelete; the soft-only hooks are skipped.

type BeforeInserter

type BeforeInserter interface {
	BeforeInsert(ctx context.Context) error
}

type BeforeSaver

type BeforeSaver interface {
	BeforeSave(ctx context.Context) error
}

type BeforeSoftDeleter added in v0.11.0

type BeforeSoftDeleter interface {
	BeforeSoftDelete(ctx context.Context) error
}

BeforeSoftDeleter fires only on the soft-delete path — after BeforeDelete, before the write. HardDelete() bypasses this hook. Use it for audit-log side effects that should not fire on permanent deletion.

Full ordering: BeforeDelete → BeforeSoftDelete → [write] → AfterSoftDelete → AfterDelete.

type BeforeUpdater

type BeforeUpdater interface {
	BeforeUpdate(ctx context.Context) error
}

type CRUDOption

type CRUDOption func(*crudOpts)

CRUDOption configures CRUD operations.

func ContinueOnError added in v0.11.0

func ContinueOnError() CRUDOption

ContinueOnError makes InsertMany write each document in its own short-lived transaction instead of failing the whole batch on the first error. The returned error (if any) is an *InsertManyError listing per-document failures by input index.

Loses cross-document atomicity — successful inserts are committed even when later ones fail. Returns ErrIncompatibleScope when called with a *Tx scope; returns ErrIncompatibleOptions when combined with PreValidate (each doc gets its own transaction, so a global pre-pass would leave the per-doc guarantee ill-defined).

func HardDelete

func HardDelete() CRUDOption

HardDelete returns a CRUDOption that makes Delete permanently remove a document from storage, bypassing soft-delete. Hooks and link cascade are still applied. Compose with other CRUDOptions such as WithLinkRule:

den.Delete(ctx, db, doc, den.HardDelete())
den.Delete(ctx, db, doc, den.HardDelete(), den.WithLinkRule(den.LinkDelete))

func IgnoreRevision

func IgnoreRevision() CRUDOption

IgnoreRevision returns a CRUDOption that skips revision checking.

func IncludeDeleted added in v0.11.0

func IncludeDeleted() CRUDOption

IncludeDeleted returns a CRUDOption that makes lookup-style operations consider soft-deleted documents. Currently honored by FindOneAndUpdate and FindOneAndUpsert: without it, soft-deleted matches are skipped (Upsert then inserts a fresh document); with it, the soft-deleted document is updated in place and DeletedAt is left untouched.

Mirrors the QuerySet.IncludeDeleted modifier so the same name covers both query-driven reads and CRUD-style lookups.

func MaxRecordedFailures added in v0.11.0

func MaxRecordedFailures(n int) CRUDOption

MaxRecordedFailures caps how many per-document failures InsertMany records in the returned *InsertManyError when ContinueOnError is in effect. The error's TotalFailures field always reports the uncapped count, and Truncated flags that the list was sampled.

Passing 0 disables the cap (records every failure). The default when the option is not passed is a modest cap (currently 100) so that runaway batches do not quietly allocate unbounded memory.

Returns ErrIncompatibleOptions when used without ContinueOnError — without ContinueOnError, InsertMany is fail-fast and never constructs an *InsertManyError, so the cap would be a silent no-op.

func PreValidate added in v0.11.0

func PreValidate() CRUDOption

PreValidate makes InsertMany run the full insert hook + validation chain on every document before opening the write transaction. If any document fails, no writes are attempted.

BeforeInsert / BeforeSave / Validate fire exactly once per document; the pre-pass caches the encoded bytes and the in-transaction commit only runs Put + AfterInsert / AfterSave. (When combined with WithLinkRule(LinkWrite), cascade must run inside the tx so the hook chain runs again there — the optimization does not apply to that combination.)

func SoftDeleteBy added in v0.11.0

func SoftDeleteBy(actor string) CRUDOption

SoftDeleteBy returns a CRUDOption that records an actor identifier (user ID, service name, etc.) on the document's DeletedBy field during a soft-delete. Silently ignored on the hard-delete path or on documents that do not embed document.SoftDelete — there is nowhere to store the value.

func SoftDeleteReason added in v0.11.0

func SoftDeleteReason(reason string) CRUDOption

SoftDeleteReason returns a CRUDOption that records a free-form reason on the document's DeleteReason field during a soft-delete. Silently ignored on the hard-delete path or on documents that do not embed document.SoftDelete.

func WithLinkRule

func WithLinkRule(rule LinkRule) CRUDOption

WithLinkRule sets the link cascading rule for Insert/Update/Delete.

func WithoutFetchLinks

func WithoutFetchLinks() CRUDOption

WithoutFetchLinks suppresses link hydration on a CRUD-style read, including fields tagged `den:"eager"`. Mirrors the QuerySet modifier of the same name; honored by FindByID, FindByIDs, Refresh, BackLinks, BackLinksField, FindOneAndUpdate, FindOneAndUpsert, and FindOrCreate. On a type with no eager-tagged links it's a no-op.

type CollectionMeta

type CollectionMeta struct {
	Name              string
	Fields            []FieldMeta
	Indexes           []IndexDefinition
	HasSoftDelete     bool
	HasRevision       bool
	HasChangeTracking bool
}

CollectionMeta holds structural metadata for a registered collection.

HasSoftDelete is derived from the document struct: true when the type embeds document.SoftDelete (detected structurally via the `_deleted_at` JSON field). HasRevision reflects DenSettings.UseRevision — a runtime flag on the collection, not a structural property. HasChangeTracking is true when the type implements document.Trackable (typically by embedding document.Tracked); since the snapshot lives only in memory it has no persistence impact, but tooling that walks Meta can use the flag to know which collections expose IsChanged / GetChanges / Revert.

func Meta

func Meta[T any](db *DB) (CollectionMeta, error)

Meta returns the collection metadata for the given document type.

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB is the main entry point for Den operations. It wraps a Backend and holds the collection registry.

func Open

func Open(ctx context.Context, backend Backend, opts ...Option) (*DB, error)

Open creates a new DB using the given backend directly. The context governs any registration work triggered by WithTypes (collection table creation, index provisioning); callers with long-running startup work can pass a timeout or cancellable context to abort it cleanly.

Use OpenURL for URL-based opening with automatic backend selection.

func OpenURL added in v0.2.0

func OpenURL(ctx context.Context, dsn string, opts ...Option) (*DB, error)

OpenURL opens a database connection using a URL-style DSN. The context governs the backend's connection setup (metadata table creation, server version checks) and any registration work triggered by WithTypes.

Supported schemes depend on which backend packages are imported:

  • sqlite:///path/to/db — import _ "github.com/oliverandrich/den/backend/sqlite"
  • sqlite://:memory: — SQLite in-memory database
  • postgres://user:pass@host:5432/db — import _ "github.com/oliverandrich/den/backend/postgres"
  • postgresql://user:pass@host/db — PostgreSQL (alias)

Backend packages register themselves automatically via init().
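The scheme-to-backend dispatch follows the standard init()-registration pattern. A self-contained sketch (the registry, register helper, and backendFor function below are hypothetical stand-ins, not Den's internals):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// backends maps a URL scheme to a backend name. In Den, backend
// packages populate a registry like this from their init() functions,
// which run as a side effect of the blank imports shown above.
var backends = map[string]string{}

func register(scheme, name string) { backends[scheme] = name }

func init() {
	register("sqlite", "sqlite")
	register("postgres", "postgres")
	register("postgresql", "postgres") // alias
}

// backendFor resolves a DSN to its registered backend by scheme.
func backendFor(dsn string) (string, error) {
	u, err := url.Parse(dsn)
	if err != nil {
		return "", err
	}
	name, ok := backends[strings.ToLower(u.Scheme)]
	if !ok {
		return "", fmt.Errorf("no backend registered for scheme %q", u.Scheme)
	}
	return name, nil
}

func main() {
	for _, dsn := range []string{
		"sqlite:///path/to/db",
		"postgresql://user:pass@host/db",
		"mysql://nope",
	} {
		name, err := backendFor(dsn)
		fmt.Println(dsn, "->", name, err)
	}
}
```

Forgetting the blank import is therefore a runtime error ("no backend registered"), not a compile-time one.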

func (*DB) Backend

func (db *DB) Backend() Backend

Backend returns the underlying backend. Useful for advanced use cases or backend-specific type assertions.

func (*DB) Close

func (db *DB) Close() error

Close closes the database and its underlying backend.

func (*DB) Ping

func (db *DB) Ping(ctx context.Context) error

Ping verifies the backend is reachable and operational.

func (*DB) Storage added in v0.9.0

func (db *DB) Storage() Storage

Storage returns the Storage configured on db, or nil if none was installed. Application code that owns the upload flow (web handlers, CLI importers) calls Store directly via this accessor.

type DanglingLinkError added in v0.11.0

type DanglingLinkError struct {
	Collection string
	ID         string
}

DanglingLinkError describes a Link[T] whose ID does not resolve to any row in the target collection. Returned by the batched link-resolver when a parent references a target that was deleted or never existed. Wraps ErrNotFound so callers can keep the simple `errors.Is(err, ErrNotFound)` check, but also exposes Collection and ID for callers that need to surface "which link broke" without parsing the error message.

func (*DanglingLinkError) Error added in v0.11.0

func (e *DanglingLinkError) Error() string

func (*DanglingLinkError) Unwrap added in v0.11.0

func (e *DanglingLinkError) Unwrap() error

type DenSettable

type DenSettable interface {
	DenSettings() Settings
}

DenSettable is implemented by document types that provide custom settings.

type DropStaleOption added in v0.8.0

type DropStaleOption func(*dropStaleConfig)

DropStaleOption configures DropStaleIndexes.

func DryRun added in v0.8.0

func DryRun() DropStaleOption

DryRun causes DropStaleIndexes to report the indexes that would be dropped without actually dropping them.

type DropStaleResult added in v0.8.0

type DropStaleResult struct {
	Dropped []StaleIndex
	Kept    []StaleIndex
}

DropStaleResult summarizes a DropStaleIndexes call. Dropped contains the indexes that were (or would be, under DryRun) removed. Kept contains indexes that are still referenced by a current IndexDefinition.

func DropStaleIndexes added in v0.8.0

func DropStaleIndexes(ctx context.Context, db *DB, opts ...DropStaleOption) (DropStaleResult, error)

DropStaleIndexes removes indexes previously created by Register() that no longer correspond to a registered IndexDefinition. Backend-managed objects (for example the PostgreSQL GIN index, or the FTS triggers and tables) are not tracked and therefore cannot be dropped by this function.

Typically invoked from a migration or deployment script after a struct has changed. Pass DryRun() to inspect what would be dropped without making changes.

type Encoder

type Encoder interface {
	Encode(v any) ([]byte, error)
	Decode(data []byte, v any) error
}

Encoder serializes and deserializes documents for a specific backend. Each backend provides its own implementation.

type FTSProvider

type FTSProvider interface {
	FTSSearcher
	EnsureFTS(ctx context.Context, collection string, fields []string) error
}

FTSProvider extends FTSSearcher with the registration-time setup hook. Backends implement the full interface; transactions implement only FTSSearcher because index/trigger creation is a one-time setup operation that does not belong on a transactional path.

type FTSSearcher added in v0.11.0

type FTSSearcher interface {
	Search(ctx context.Context, collection string, query string, q *Query) (Iterator, error)
}

FTSSearcher is the read-side full-text search contract. Both backends and transactions implement it so QuerySet.Search honors the caller's scope: `NewQuery[T](db).Search(...)` reads committed state, while `NewQuery[T](tx).Search(...)` sees the tx's uncommitted writes (the FTS index is updated in-tx by triggers on SQLite and by tsvector + GIN under MVCC on PostgreSQL).

type FieldChange

type FieldChange struct {
	Before any
	After  any
}

FieldChange holds the before and after values for a changed field.

type FieldMeta

type FieldMeta struct {
	Name      string
	GoName    string
	Type      string
	Indexed   bool
	Unique    bool
	FTS       bool
	IsPointer bool
}

FieldMeta describes a single field within a collection.

type GroupByAgg added in v0.7.0

type GroupByAgg struct {
	Op    AggregateOp
	Field string // source field (ignored for OpCount)
}

GroupByAgg describes a single aggregate expression in a GROUP BY query.

type GroupByBuilder

type GroupByBuilder[T any] struct {
	// contains filtered or unexported fields
}

GroupByBuilder allows specifying group-by fields. The builder is typically obtained from QuerySet.GroupBy.

func (GroupByBuilder[T]) Into

func (gb GroupByBuilder[T]) Into(ctx context.Context, target any) error

Into executes the group-by aggregation and maps results into the target slice. The query is pushed down to the database as a SQL GROUP BY statement.

func (GroupByBuilder[T]) OrderByAgg added in v0.11.0

func (gb GroupByBuilder[T]) OrderByAgg(op AggregateOp, field string, dir SortDirection) GroupByBuilder[T]

OrderByAgg appends an ORDER BY entry that sorts grouped results by an aggregate expression. Op selects the aggregate column; field names its source field (ignored for OpCount, which sorts by COUNT(*)). Multiple calls define tie-breakers in the order they were added.

To order by a group key, use the ordinary QuerySet.Sort chain on the underlying query set — Sort fields that match a group key translate to ORDER BY the group-key expression. Sort fields that are neither a group key nor an aggregate cause Into to return an error.

type GroupByRow added in v0.7.0

type GroupByRow struct {
	Keys   []string  // group key values (text representation), one per group field
	Values []float64 // aggregate values, matching GroupByAgg order
}

GroupByRow holds one result row from a GROUP BY query. Keys holds one entry per field passed to GroupBy (in the same order); Values holds one entry per GroupByAgg in the order they were requested.

type GroupBySortEntry added in v0.11.0

type GroupBySortEntry struct {
	Op    AggregateOp
	Field string
	Dir   SortDirection
}

GroupBySortEntry describes an ORDER BY entry over an aggregate expression inside a GROUP BY query. Op selects which aggregate column to order by; Field names the aggregate's source field (ignored for OpCount).

type IndexDefinition

type IndexDefinition struct {
	Name   string
	Fields []string
	Unique bool
}

IndexDefinition describes a secondary index on a collection.

type InsertFailure added in v0.11.0

type InsertFailure struct {
	Index int
	Err   error
}

InsertFailure pairs a failed document's position in the input slice with the underlying error. Used by InsertManyError.

type InsertManyError added in v0.11.0

type InsertManyError struct {
	Failures      []InsertFailure
	Truncated     bool
	TotalFailures int
}

InsertManyError aggregates per-document failures from InsertMany when the ContinueOnError option is set. Failures are listed in input order.

Failures may be shorter than TotalFailures when the caller caps the recorded list via MaxRecordedFailures; Truncated signals that case so callers can distinguish "exhaustive list" from "first-N sample". Error() and errors.Is/As both respect the cap: only the recorded Failures are walked.

errors.Is matches any sentinel wrapped by any recorded failure; errors.As on a per-failure error returns the wrapped concrete type.

func (*InsertManyError) Error added in v0.11.0

func (e *InsertManyError) Error() string

func (*InsertManyError) Unwrap added in v0.11.0

func (e *InsertManyError) Unwrap() []error

Unwrap returns the wrapped errors so errors.Is and errors.As traverse every recorded failure. A fresh slice is allocated on each call so callers that mutate Failures see consistent unwrap output afterward — the previous sync.Once cache made the slice silently stale on mutation. The cost is one O(len(Failures)) allocation per call, sub-microsecond at the default MaxRecordedFailures cap of 100.

When Truncated is true, only the recorded Failures are unwrapped; elided failures are not reachable via the errors tree. This is intentional — a sampled error should not silently appear exhaustive.

type Iterator

type Iterator interface {
	Next() bool
	Bytes() []byte
	ID() string
	Err() error
	Close() error
}

Iterator provides sequential access to query results.

type Link[T any] struct {
	ID     string
	Value  *T
	Loaded bool
}

Link represents a reference to a document in another collection. Only the ID is persisted; Value is populated on fetch.

func NewLink[T any](doc *T) Link[T]

NewLink creates a Link from an existing document, extracting its ID from the embedded document.Base.

The doc must contain a document.Base anywhere in its struct tree — directly embedded (the standard pattern), embedded via a wrapper, or even as a named field. NewLink panics if no document.Base is found, because a Link without an ID is silently broken downstream and always indicates a programmer error.

An empty Base.ID (i.e. the doc has not been inserted yet) is fine and expected on the LinkWrite cascade path — the cascaded Insert will populate the ID and propagate it back into the parent's Link.

func (Link[T]) IsLoaded

func (l Link[T]) IsLoaded() bool

IsLoaded reports whether the linked document has been fetched.

func (Link[T]) MarshalJSON

func (l Link[T]) MarshalJSON() ([]byte, error)

MarshalJSON serializes the link as a JSON string (the ID).

func (*Link[T]) UnmarshalJSON

func (l *Link[T]) UnmarshalJSON(data []byte) error

UnmarshalJSON deserializes a JSON string into the link.

type LinkRule

type LinkRule int

LinkRule controls cascading behavior for write and delete operations.

LinkDelete cascades a Delete to the immediate link targets only — it does not recurse into the targets' own links. Callers that need transitive cleanup must walk the graph themselves. This keeps a mis-configured delete from wiping an unbounded subgraph.

const (
	LinkIgnore LinkRule = iota // no cascading (zero value)
	LinkWrite                  // cascade writes to linked documents
	LinkDelete                 // cascade deletes to immediate link targets
)

type LockMode added in v0.8.0

type LockMode int

LockMode selects the row-locking behavior used by GetForUpdate.

const (
	// LockDefault acquires the lock and blocks if another transaction holds it.
	LockDefault LockMode = iota
	// LockSkipLocked returns no row (ErrNotFound) if another transaction
	// already holds the lock. Mapped to FOR UPDATE SKIP LOCKED on PostgreSQL.
	LockSkipLocked
	// LockNoWait returns ErrLocked immediately if another transaction
	// already holds the lock. Mapped to FOR UPDATE NOWAIT on PostgreSQL.
	LockNoWait
)

type LockOption added in v0.8.0

type LockOption func(*lockConfig)

LockOption configures LockByID and TxQuerySet.ForUpdate.

func NoWait added in v0.8.0

func NoWait() LockOption

NoWait makes LockByID return ErrLocked immediately if another transaction already holds the row lock, instead of blocking. Maps to PostgreSQL's FOR UPDATE NOWAIT. Useful when the caller wants to decide between retry, abort, or an alternative code path. On SQLite this option is a no-op.

Passing both SkipLocked and NoWait returns an error — they are mutually exclusive in PostgreSQL.

func SkipLocked added in v0.8.0

func SkipLocked() LockOption

SkipLocked makes LockByID return ErrNotFound immediately if another transaction already holds the row lock, instead of blocking. Maps to PostgreSQL's FOR UPDATE SKIP LOCKED. Useful for queue-consumer patterns where each worker should claim a different row without contending. On SQLite this option is a no-op.

Because PostgreSQL returns zero rows for both "locked by another tx" and "row does not exist", the caller cannot distinguish these cases via the error alone.

Passing both SkipLocked and NoWait returns an error — they are mutually exclusive in PostgreSQL.
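The queue-consumer pattern SKIP LOCKED enables can be simulated locally: each worker scans the rows and claims the first one nobody else holds, skipping locked rows instead of blocking. In this sketch sync.Mutex.TryLock stands in for FOR UPDATE SKIP LOCKED; none of it is Den API:

```go
package main

import (
	"fmt"
	"sync"
)

// row pairs a job ID with its "row lock".
type row struct {
	id string
	mu sync.Mutex
}

// claimNext returns the first unlocked row's ID, or "" when every row
// is held by another worker — the analog of ErrNotFound under SkipLocked.
func claimNext(rows []*row) string {
	for _, r := range rows {
		if r.mu.TryLock() { // skip rows another worker already holds
			return r.id
		}
	}
	return ""
}

func main() {
	rows := []*row{{id: "job-1"}, {id: "job-2"}, {id: "job-3"}}
	fmt.Println(claimNext(rows)) // job-1
	fmt.Println(claimNext(rows)) // job-2 (job-1 is still held)
	fmt.Println(claimNext(rows)) // job-3
	fmt.Println(claimNext(rows) == "") // true: everything is claimed
}
```

As the paragraph above notes, the empty result is ambiguous: "all rows locked" and "no rows exist" look identical to the caller.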

type Option

type Option func(*DB)

Option configures a DB during Open.

func WithStorage added in v0.9.0

func WithStorage(s Storage) Option

WithStorage installs a Storage on the DB. Storage is DB-scoped — all document types that embed or contain document.Attachment use the same backend. Install at Open:

fs, err := file.New("./uploads", "/media")
// handle err
db, err := den.OpenURL(ctx, dsn, den.WithStorage(fs))

Without a Storage, Den refuses to hard-delete documents that carry attachments — orphan bytes are worse than a clear error.

func WithTagValidator added in v0.8.0

func WithTagValidator(fn func(any) error) Option

WithTagValidator returns an Option that installs a function for validating documents by their struct tags. The function is invoked before insert and update operations; any error it returns is wrapped with ErrValidation.

The option composes with WithTypes and WithValidation from the validate package and is applied at Open, so validation is set once up-front and not racy against concurrent Register calls.

func WithTypes added in v0.8.0

func WithTypes(types ...any) Option

WithTypes queues document types to be registered at the end of Open. Equivalent to calling Register(ctx, db, types...) after Open returns, where ctx is the same context passed to Open / OpenURL — there is no silent context.Background() substitution. Lets the whole setup read as a single expression:

db, err := den.OpenURL(ctx, dsn, den.WithTypes(&Note{}, &Tag{}))

Registration runs after every other Option has been applied, so a Validator installed via WithTagValidator is in place before the queued types are validated. Any registration error aborts Open and is returned as Open's error.

type Query

type Query struct {
	Collection string
	Conditions []where.Condition
	SortFields []SortEntry
	LimitN     int // 0 = no limit
	SkipN      int // 0 = no skip
	AfterID    string
	BeforeID   string
	// Lock requests a row-level lock on every matching row (PostgreSQL
	// only; SQLite ignores it because IMMEDIATE tx already serializes
	// writers). nil means no lock; a non-nil pointer's value selects the
	// lock mode. The pointer form rules out the previously-possible
	// invalid pair of (ForUpdate=false, LockMode!=LockDefault).
	Lock *LockMode

	// GroupBySort carries ORDER BY entries that target aggregates in a
	// GROUP BY query. SortFields are used for group-key ordering; aggregate
	// ordering needs the (Op, Field) tuple because no source-field name
	// identifies a synthetic aggregate column. Only consumed by GroupBy
	// paths — other terminals ignore this slice.
	GroupBySort []GroupBySortEntry
}

Query represents an abstract query that backends translate into their native query mechanism.

type QuerySet

type QuerySet[T any] struct {
	// contains filtered or unexported fields
}

QuerySet is a lazy, immutable query builder. Chain methods return copies; the query is only executed when a terminal method (All, First, Count, etc.) is called.

QuerySet binds to a Scope — either a *DB (operating outside a transaction) or a *Tx (operating inside RunInTransaction). Row-level locking via ForUpdate is only valid on a *Tx scope; calling it on a *DB-bound QuerySet defers an error that surfaces on the terminal method.

The zero value is not usable — always obtain a QuerySet via NewQuery. Calling terminal methods on a zero-value QuerySet panics because the scope reference is nil.

func NewQuery

func NewQuery[T any](scope Scope, conditions ...where.Condition) QuerySet[T]

NewQuery creates a new QuerySet bound to the given scope. Conditions can optionally be passed directly. The context is supplied later when a terminal method (All, First, Iter, …) runs, so the same QuerySet can be executed against different contexts.

Pass a *DB for queries outside a transaction, or a *Tx from within a RunInTransaction closure for a query that sees the transaction's view of the data. Use ForUpdate only on a *Tx-bound QuerySet.

func (QuerySet[T]) After

func (qs QuerySet[T]) After(id string) QuerySet[T]

After sets the cursor for forward pagination.

Cannot be combined with Skip (offset pagination) — terminal methods return ErrIncompatiblePagination when both styles are set.

func (QuerySet[T]) All

func (qs QuerySet[T]) All(ctx context.Context) ([]*T, error)

All executes the query and returns all matching documents.

With WithFetchLinks enabled, All drains the result set first and then resolves every link field in batched IN-queries (one per target type per nesting level) instead of the per-row Get that streaming .Iter() does. For N parents sharing a small set of linked targets this collapses N round-trips into one — at the cost of buffering the full result set, which is already implicit in .All()'s contract. Callers who need true streaming with link resolution should keep using .Iter().

func (QuerySet[T]) AllWithCount

func (qs QuerySet[T]) AllWithCount(ctx context.Context) ([]*T, int64, error)

AllWithCount returns matching documents and the total unpaginated count.

When the QuerySet is bound to a *DB, the count+query run in a read transaction for consistency. When bound to a *Tx, they run through the existing transaction and no nested tx is opened.

func (QuerySet[T]) Avg

func (qs QuerySet[T]) Avg(ctx context.Context, field string) (float64, error)

Avg returns the average of the given field across matching documents.

Scalar aggregates ignore Limit, Skip, Sort, After, and Before — they always operate on the full WHERE-filtered set.

func (QuerySet[T]) Before

func (qs QuerySet[T]) Before(id string) QuerySet[T]

Before sets the cursor for backward pagination.

Cannot be combined with Skip (offset pagination) — terminal methods return ErrIncompatiblePagination when both styles are set.

func (QuerySet[T]) Count

func (qs QuerySet[T]) Count(ctx context.Context) (int64, error)

Count returns the number of matching documents.

Limit, Skip, and Sort are ignored — Count always operates on the full WHERE-filtered set. After / Before cursor modifiers are honored.

func (QuerySet[T]) Exists

func (qs QuerySet[T]) Exists(ctx context.Context) (bool, error)

Exists returns true if at least one document matches.

Limit, Skip, and Sort are ignored — the backend emits its own LIMIT 1 internally. After / Before cursor modifiers are honored.

func (QuerySet[T]) First

func (qs QuerySet[T]) First(ctx context.Context) (*T, error)

First returns the first matching document. Returns ErrNotFound if none match.

func (QuerySet[T]) ForUpdate added in v0.8.0

func (qs QuerySet[T]) ForUpdate(opts ...LockOption) QuerySet[T]

ForUpdate acquires a row-level lock on every matching document, held until the enclosing transaction commits or rolls back. Only valid on a QuerySet bound to a *Tx — on a *DB-bound QuerySet the call is accepted but the terminal method will return ErrLockRequiresTransaction.

Pass SkipLocked to omit locked rows from the result set (queue-consumer pattern) or NoWait to fail immediately with ErrLocked when a row is held by another transaction. On SQLite these options are no-ops because IMMEDIATE transactions already serialize writers.

Passing both SkipLocked and NoWait is a programmer error (PG allows only one); ForUpdate captures the error on the query set and surfaces it when a terminal method runs.

func (QuerySet[T]) GroupBy

func (qs QuerySet[T]) GroupBy(fields ...string) GroupByBuilder[T]

GroupBy starts a group-by aggregation on one or more fields.

The target struct passed to Into must carry one field tagged `den:"group_key:N"` for each field listed here, with N running 0..len(fields)-1. The legacy unindexed `den:"group_key"` is accepted when exactly one field is requested and is treated as slot 0; mixing the unindexed form with positional tags returns an error.

func (QuerySet[T]) IncludeDeleted

func (qs QuerySet[T]) IncludeDeleted() QuerySet[T]

IncludeDeleted includes soft-deleted documents in the results.

func (QuerySet[T]) Iter

func (qs QuerySet[T]) Iter(ctx context.Context) iter.Seq2[*T, error]

Iter returns an iterator over matching documents for use with range. Documents are streamed one at a time via the backend's Iterator, not collected in memory.

for doc, err := range den.NewQuery[Product](db).Iter(ctx) {
    if err != nil { return err }
    fmt.Println(doc.Name)
}

Cancelling ctx stops the iteration: the per-row prologue checks ctx.Err() and surfaces it through the seq2 error path, so at most one further document is yielded to the consumer after cancellation. With WithFetchLinks, an in-flight link fetch may still complete its current backend round-trip before the next prologue check fires; the link resolver passes ctx through, so the round-trip after that observes the cancellation.

func (QuerySet[T]) Limit

func (qs QuerySet[T]) Limit(n int) QuerySet[T]

Limit sets the maximum number of results.

Honored by the same row-returning terminals as Sort: All, AllWithCount (data slice only; the count path runs unpaginated), First (which rewrites Limit to 1 internally), Iter, Search, Update, and Project, plus GroupBy.Into (where it caps the number of group rows returned). Ignored by Count, Exists, and scalar aggregates — those always operate on the full WHERE-filtered set.

func (QuerySet[T]) Max

func (qs QuerySet[T]) Max(ctx context.Context, field string) (float64, error)

Max returns the maximum value of the given field across matching documents. See Avg for the modifier-applicability rules.

func (QuerySet[T]) Min

func (qs QuerySet[T]) Min(ctx context.Context, field string) (float64, error)

Min returns the minimum value of the given field across matching documents. See Avg for the modifier-applicability rules.

func (QuerySet[T]) Project

func (qs QuerySet[T]) Project(ctx context.Context, target any) error

Project executes the query and decodes results into the projection type. Target must be a pointer to a slice of structs with json/den tags.

func (QuerySet[T]) Search

func (qs QuerySet[T]) Search(ctx context.Context, queryText string) ([]*T, error)

Search performs a full-text search on the QuerySet, honoring the QuerySet's scope: a tx-bound QuerySet sees the tx's uncommitted writes and rolls them back together with the rest of the tx, just like every other Den read. A *DB-bound QuerySet reads committed state.

Returns ErrFTSNotSupported when the underlying scope does not implement FTSSearcher — either the backend has no FTS support, or the scope is a transaction on a backend whose tx side does not (no current backend has this asymmetry, but the contract leaves room for one).

func (QuerySet[T]) Skip

func (qs QuerySet[T]) Skip(n int) QuerySet[T]

Skip sets the number of results to skip (offset pagination).

Honored by the same terminals as Limit (including GroupBy.Into). Ignored by Count, Exists, and scalar aggregates.

Cannot be combined with After or Before (cursor pagination) — terminal methods return ErrIncompatiblePagination when both styles are set.
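The incompatibility rule is mechanical: offset and cursor pagination cannot be mixed. A self-contained sketch of the check (the local query struct borrows the exported Query field names; errIncompatiblePagination stands in for the package sentinel):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for den.ErrIncompatiblePagination.
var errIncompatiblePagination = errors.New("cannot combine Skip with After/Before")

// query mirrors the relevant exported Query fields.
type query struct {
	SkipN    int
	AfterID  string
	BeforeID string
}

// validatePagination sketches the documented rule: Skip (offset) and
// After/Before (cursor) are mutually exclusive.
func validatePagination(q query) error {
	if q.SkipN > 0 && (q.AfterID != "" || q.BeforeID != "") {
		return errIncompatiblePagination
	}
	return nil
}

func main() {
	fmt.Println(validatePagination(query{SkipN: 10}))                 // <nil>
	fmt.Println(validatePagination(query{AfterID: "abc"}))            // <nil>
	fmt.Println(validatePagination(query{SkipN: 10, AfterID: "abc"})) // the error
}
```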

func (QuerySet[T]) Sort

func (qs QuerySet[T]) Sort(field string, dir SortDirection) QuerySet[T]

Sort adds a sort criterion. Multiple calls define tie-breakers.

Honored by terminals that return ordered rows: All, AllWithCount, First, Iter, Search, Update, and Project. On GroupBy.Into, Sort is honored when the referenced field matches a group key; a non-key field returns an error — use GroupByBuilder.OrderByAgg for aggregate ordering. Ignored by Count, Exists, and the scalar aggregates (Avg / Sum / Min / Max) — those operate on unordered sets where sort order has no effect on the result.
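The tie-breaker semantics of repeated Sort calls can be illustrated with a stable in-memory sort over accumulated SortEntry values — a sketch of the ordering contract, not how Den actually builds its ORDER BY clause (the real work is pushed down to SQL):

```go
package main

import (
	"fmt"
	"sort"
)

// Local reproductions of the package's sort types.
type SortDirection int

const (
	Asc SortDirection = iota
	Desc
)

type SortEntry struct {
	Field string
	Dir   SortDirection
}

type doc map[string]int

// applySort orders docs by the entries in order: the first entry is
// the primary key, later entries break ties — the semantics that
// multiple Sort calls accumulate.
func applySort(docs []doc, entries []SortEntry) {
	sort.SliceStable(docs, func(i, j int) bool {
		for _, e := range entries {
			a, b := docs[i][e.Field], docs[j][e.Field]
			if a == b {
				continue // tie: fall through to the next entry
			}
			if e.Dir == Desc {
				return a > b
			}
			return a < b
		}
		return false
	})
}

func main() {
	docs := []doc{{"age": 30, "score": 2}, {"age": 30, "score": 9}, {"age": 25, "score": 5}}
	applySort(docs, []SortEntry{{"age", Asc}, {"score", Desc}})
	fmt.Println(docs) // age ascending; equal ages ordered by score descending
}
```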

func (QuerySet[T]) Sum

func (qs QuerySet[T]) Sum(ctx context.Context, field string) (float64, error)

Sum returns the sum of the given field across matching documents. See Avg for the modifier-applicability rules.

func (QuerySet[T]) Update

func (qs QuerySet[T]) Update(ctx context.Context, fields SetFields) (int64, error)

Update applies field updates to every matching document. Returns the number of updated documents.

When bound to a *DB, the scan + writes run in a new transaction so the batch is atomic. When bound to a *Tx, they run inline in the caller's transaction — a per-row failure rolls back the caller's transaction too.

Update is fail-fast: any per-row error (BeforeUpdate hook, validation, revision conflict, backend write) stops the loop, rolls back the transaction, and returns (0, err). There is no partial commit; no AfterUpdate / AfterSave hooks fire for rows that would have come after the failure.

Field names in fields (as they appear in the `json` struct tag) are validated against the registered struct before the write transaction opens — an unknown name returns immediately without opening the tx. Callers that want to validate field names at application start can iterate Meta[T].Fields.

WithFetchLinks and WithNestingDepth have no effect on Update. The loaded docs are loop-local and discarded after the per-row write, so resolving links would only be visible to BeforeUpdate / Validate hooks — Update keeps that path lean and Link.Value remains unresolved (nil). Hooks that need linked data should call FetchLink or FetchAllLinks themselves.

func (QuerySet[T]) Where

func (qs QuerySet[T]) Where(conditions ...where.Condition) QuerySet[T]

Where adds filter conditions. Multiple calls are ANDed.

func (qs QuerySet[T]) WithFetchLinks() QuerySet[T]

WithFetchLinks hydrates every Link[T] field on the returned documents, regardless of whether the field is tagged with `den:"eager"`.

Honored only by terminals that return *T values: All, AllWithCount, First, Iter, and Search. Every other terminal — counts, aggregates, projections, GroupBy.Into, and bulk Update — ignores it because it has no documents to attach the resolved links to. See Update's godoc for the hook-visibility caveat that follows from this rule.

func (QuerySet[T]) WithNestingDepth

func (qs QuerySet[T]) WithNestingDepth(depth int) QuerySet[T]

WithNestingDepth caps recursive link resolution. Meaningful for any query that hydrates links — `den:"eager"`-tagged fields under the default mode, or every link field under WithFetchLinks. Honored by the batched terminals (All, AllWithCount, Search) which actually recurse; ignored by terminals that don't return *T values and by the per-row Iter path (which is single-level by construction — streaming can't recurse without buffering).
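The depth cap can be sketched with an in-memory recursive resolver that stops when the budget reaches zero. The node type is a hypothetical stand-in for a document with a Link field, not Den's Link[T]:

```go
package main

import "fmt"

// node models a document holding a link to another document by ID
// (hypothetical in-memory stand-in for Link[T]).
type node struct {
	ID     string
	LinkID string
	Link   *node // nil until resolved
}

// resolve hydrates links recursively, stopping when depth reaches
// zero — the cap WithNestingDepth places on link resolution.
func resolve(store map[string]*node, id string, depth int) *node {
	n, ok := store[id]
	if !ok {
		return nil
	}
	cp := *n // copy so resolution doesn't mutate the store
	if depth > 0 && cp.LinkID != "" {
		cp.Link = resolve(store, cp.LinkID, depth-1)
	}
	return &cp
}

// depthOf counts how many levels of links were actually hydrated.
func depthOf(n *node) int {
	d := 0
	for n != nil && n.Link != nil {
		d++
		n = n.Link
	}
	return d
}

func main() {
	store := map[string]*node{
		"a": {ID: "a", LinkID: "b"},
		"b": {ID: "b", LinkID: "c"},
		"c": {ID: "c"},
	}
	fmt.Println(depthOf(resolve(store, "a", 1))) // capped at one level
}
```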

func (qs QuerySet[T]) WithoutFetchLinks() QuerySet[T]

WithoutFetchLinks suppresses link hydration on this query, including fields tagged `den:"eager"`. Use it when the eager tags would otherwise pay a per-link round-trip cost the caller does not need (bulk export, IDs-only sweep, count-by-link). Returned `Link[T]` values carry their ID but `Value` stays `nil`.

type ReadWriter

type ReadWriter interface {
	Get(ctx context.Context, collection, id string) ([]byte, error)
	Put(ctx context.Context, collection, id string, data []byte) error
	Delete(ctx context.Context, collection, id string) error
	Query(ctx context.Context, collection string, q *Query) (Iterator, error)
	Count(ctx context.Context, collection string, q *Query) (int64, error)
	Exists(ctx context.Context, collection string, q *Query) (bool, error)
	Aggregate(ctx context.Context, collection string, op AggregateOp, field string, q *Query) (*float64, error)
	GroupBy(ctx context.Context, collection string, groupFields []string, aggs []GroupByAgg, q *Query) ([]GroupByRow, error)
}

ReadWriter is the common interface for both Backend and Transaction, providing the core read and write operations that every data path needs.

type RecordedIndex added in v0.8.0

type RecordedIndex struct {
	Name   string
	Fields []string
	Unique bool
}

RecordedIndex describes a secondary index that was previously created by Den and is tracked in the backend's metadata table. Managed indexes (such as the PostgreSQL GIN index or FTS auxiliary objects) are not recorded.

type Scope added in v0.8.0

type Scope interface {
	// contains filtered or unexported methods
}

Scope is the common parameter type for every CRUD entry point that works both outside and inside a transaction. It is sealed to *DB and *Tx — the gateway methods are unexported so external types cannot implement it, and callers can only obtain a Scope by passing one of the two concrete types.

The idiom mirrors the implicit DBTX pattern used around database/sql (where *sql.DB and *sql.Tx share the query surface) but is explicit here so the compiler can document and enforce which operations accept either.
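The sealing idiom is plain Go: an unexported method on the interface means no type outside the package can satisfy it. A minimal sketch of the pattern (names are illustrative; Den's gateway methods are not literally called `sealed`):

```go
package main

import "fmt"

// scope is sealed by its unexported method: external packages
// cannot implement it, so only the two concrete types below qualify.
type scope interface {
	sealed()
}

type DB struct{}
type Tx struct{}

func (*DB) sealed() {}
func (*Tx) sealed() {}

// findByID accepts either concrete type — the implicit DBTX pattern
// from database/sql made explicit, as the godoc describes.
func findByID(s scope, id string) string {
	switch s.(type) {
	case *DB:
		return "db:" + id
	case *Tx:
		return "tx:" + id
	}
	return ""
}

func main() {
	fmt.Println(findByID(&DB{}, "42"), findByID(&Tx{}, "42"))
}
```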

type SeekableStorage added in v0.11.2

type SeekableStorage interface {
	Storage
	OpenSeekable(ctx context.Context, a document.Attachment) (io.ReadSeekCloser, error)
}

SeekableStorage is an optional Storage capability: backends whose stored bytes support cheap random access (typically local filesystems) may implement OpenSeekable in addition to Open. Callers that need Range or conditional-GET support (e.g. http.ServeContent) type-assert and use the seekable handle when available, falling back to plain Open otherwise. Backends where Seek is technically possible but expensive (e.g. S3, where each Seek triggers a fresh HTTP GET) should leave this unimplemented; remote-storage Range support belongs at the URL layer (pre-signed URLs) rather than being smuggled through Open.
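The caller-side type-assert-and-fall-back pattern looks like this. The interfaces are simplified stand-ins (the real ones take a context and a document.Attachment):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// Minimal stand-ins for the two storage capabilities.
type storage interface {
	Open(path string) (io.ReadCloser, error)
}

type seekableStorage interface {
	storage
	OpenSeekable(path string) (io.ReadSeekCloser, error)
}

// open prefers the seekable handle when the backend offers one and
// falls back to plain Open — the pattern the godoc recommends for
// serving Range requests via http.ServeContent.
func open(s storage, path string) (io.ReadCloser, bool, error) {
	if ss, ok := s.(seekableStorage); ok {
		r, err := ss.OpenSeekable(path)
		return r, true, err
	}
	r, err := s.Open(path)
	return r, false, err
}

// fsStore implements only Open (think: remote storage).
type fsStore struct{}

func (fsStore) Open(path string) (io.ReadCloser, error) {
	return io.NopCloser(strings.NewReader("bytes")), nil
}

// localStore additionally implements OpenSeekable (think: local disk).
type rsc struct{ *strings.Reader }

func (rsc) Close() error { return nil }

type localStore struct{ fsStore }

func (localStore) OpenSeekable(path string) (io.ReadSeekCloser, error) {
	return rsc{strings.NewReader("bytes")}, nil
}

func main() {
	_, seekable, _ := open(localStore{}, "p")
	fmt.Println(seekable)
}
```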

type SetFields

type SetFields map[string]any

SetFields is a map of field names (as they appear in the `json` struct tag) to new values, used for partial updates by FindOneAndUpdate, FindOneAndUpsert, and QuerySet.Update.

Names are validated against the registered struct before the write transaction opens; an unknown name aborts the call without touching storage. Callers that want to validate names at application start can iterate Meta[T].Fields and compare against a known set.
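The pre-flight name check can be sketched as a lookup against a registered field set — here a hard-coded map standing in for Meta[T].Fields:

```go
package main

import "fmt"

type SetFields map[string]any

// validateFields sketches the documented pre-flight check: every key
// must match a registered field name or the call aborts before any
// transaction opens.
func validateFields(fields SetFields, known map[string]bool) error {
	for name := range fields {
		if !known[name] {
			return fmt.Errorf("den: unknown field %q", name)
		}
	}
	return nil
}

func main() {
	known := map[string]bool{"name": true, "age": true} // stand-in for Meta[T].Fields
	fmt.Println(validateFields(SetFields{"name": "Ada"}, known))
	fmt.Println(validateFields(SetFields{"nme": "Ada"}, known)) // typo caught up front
}
```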

type Settings

type Settings struct {
	CollectionName string
	UseRevision    bool
	Indexes        []IndexDefinition
}

Settings configures per-collection behavior.

type SortDirection

type SortDirection int

SortDirection specifies ascending or descending sort order.

const (
	Asc SortDirection = iota
	Desc
)

type SortEntry

type SortEntry struct {
	Field string
	Dir   SortDirection
}

SortEntry defines a single sort criterion.

type StaleIndex added in v0.8.0

type StaleIndex struct {
	Collection string
	Name       string
	Fields     []string
	Unique     bool
}

StaleIndex identifies an index inspected by DropStaleIndexes.

type Storage added in v0.9.0

type Storage interface {
	// Store copies r into the backing store, computes a content hash, and
	// returns a populated Attachment ready to be assigned onto a document
	// before Insert. ext is appended to the generated StoragePath (e.g.
	// ".jpg") — callers derive it from the original filename after any
	// MIME or extension validation. mime annotates the returned
	// Attachment; it is not verified against the content by Storage.
	//
	// Implementations MUST be content-addressed enough that two calls
	// with identical bytes resolve to the same StoragePath; Den relies on
	// that for deduplication via unique indexes on StoragePath.
	Store(ctx context.Context, r io.Reader, ext, mime string) (document.Attachment, error)

	// Open returns a reader for the bytes previously stored under a.StoragePath.
	Open(ctx context.Context, a document.Attachment) (io.ReadCloser, error)

	// Delete removes the bytes at a.StoragePath. Implementations SHOULD
	// treat a missing path as success — cleanup is orchestrated against
	// the document lifecycle and a missing file is the expected terminal
	// state.
	Delete(ctx context.Context, a document.Attachment) error

	// URL returns a URL path (starts with "/") at which a is served.
	// The caller prefixes scheme+host as needed. Remote storages may
	// return an absolute URL instead.
	URL(a document.Attachment) string
}

Storage abstracts the backing byte store for document.Attachment fields. Implementations map logical paths to byte streams; they carry no knowledge of Den's document metadata (which lives in the backend).

Implementations must be safe for concurrent use.

Backends with random access (local filesystem) should also implement SeekableStorage so callers can serve Range requests directly via http.ServeContent.

type Transaction

type Transaction interface {
	ReadWriter
	Commit() error
	Rollback() error

	// GetForUpdate reads a document and acquires a row-level lock that
	// persists until the transaction commits or rolls back. On PostgreSQL
	// this maps to SELECT ... FOR UPDATE, optionally with SKIP LOCKED or
	// NOWAIT. On SQLite it is a no-op because IMMEDIATE transactions
	// already serialize writers; the mode parameter is ignored.
	GetForUpdate(ctx context.Context, collection, id string, mode LockMode) ([]byte, error)

	// AdvisoryLock acquires an application-defined lock identified by key
	// that persists until the transaction commits or rolls back. Concurrent
	// transactions attempting to acquire the same key block until the holder
	// ends. Unlike GetForUpdate this does not require a row to exist, so it
	// is suitable for bootstrap paths like coordinating concurrent migration
	// starters before any state row has been written.
	//
	// On PostgreSQL this maps to pg_advisory_xact_lock. On SQLite it is a
	// no-op because IMMEDIATE transactions already serialize writers on the
	// whole database.
	AdvisoryLock(ctx context.Context, key int64) error
}

Transaction provides CRUD operations within a transaction boundary.
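The AdvisoryLock contract — one holder per key, later acquirers block until the holder ends — can be sketched in memory with a per-key mutex map. This is a toy: the real PostgreSQL implementation uses pg_advisory_xact_lock and releases automatically at commit/rollback, whereas this sketch releases explicitly:

```go
package main

import (
	"fmt"
	"sync"
)

// advisoryLocks is an in-memory sketch of the AdvisoryLock contract:
// one holder per int64 key, later acquirers block until release.
type advisoryLocks struct {
	mu    sync.Mutex
	locks map[int64]*sync.Mutex
}

func newAdvisoryLocks() *advisoryLocks {
	return &advisoryLocks{locks: make(map[int64]*sync.Mutex)}
}

// acquire blocks until the key is free and returns a release func
// (standing in for the commit/rollback that ends the real lock).
func (a *advisoryLocks) acquire(key int64) func() {
	a.mu.Lock()
	l, ok := a.locks[key]
	if !ok {
		l = &sync.Mutex{}
		a.locks[key] = l
	}
	a.mu.Unlock()
	l.Lock() // blocks while another holder owns this key
	return l.Unlock
}

func main() {
	a := newAdvisoryLocks()
	release := a.acquire(42)
	done := make(chan struct{})
	go func() {
		r := a.acquire(42) // blocks until release() below
		r()
		close(done)
	}()
	release()
	<-done
	fmt.Println("second acquirer ran after the first released")
}
```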

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx wraps a backend Transaction for use in RunInTransaction.

The zero value is not usable — construct a Tx only indirectly by passing a closure to RunInTransaction. Calling transaction-scoped functions on a zero-value Tx panics.

func (*Tx) Transaction added in v0.8.0

func (t *Tx) Transaction() Transaction

Transaction returns the underlying backend Transaction so infrastructure code can issue raw Get / Put / Delete calls on unregistered collections. This is a low-level escape hatch — normal code should use Insert, Update, Delete, FindByID, NewQuery, and friends, all of which honor the registry, encoding, validation, and hook contracts. The only legitimate consumer today is den/migrate (the migration-log collection is deliberately not registered with Den).

Mirrors DB.Backend() in spirit: both are low-level accessors you reach for only when the high-level API does not cover the case.

type Validator

type Validator interface {
	Validate(ctx context.Context) error
}

Validator is the custom-validation hook. Implement it on a document to enforce invariants beyond what struct tag validation can express. Returning an error rolls back the surrounding Insert / Update without touching storage.

The passed ctx is the same one threaded through the surrounding Insert / Update call — use it for cancellation, deadlines, DB lookups inside the validator, outbound HTTP calls that need to participate in the request, or tracing spans. Matches the signature of every other Den hook.

Directories

Path Synopsis
backend
Package dentest provides test helpers for opening a Den database in a temporary directory (SQLite) or against a reachable PostgreSQL instance.
Package id provides ULID-based unique identifier generation.
Package storage defines the Storage-backend registry used by OpenURL to construct a den.Storage from a URL-style DSN.
file
Package file provides a local-filesystem Storage backend for Den.
s3
Package s3 is the S3 (and S3-compatible, e.g.
