lock

package
v0.10.0-beta.7
Published: Apr 28, 2026 License: MIT Imports: 6 Imported by: 0

Documentation

Overview

Package lock provides a thin wrapper around OS-level advisory file locking for coordinating which noema process on a given cortex directory runs background work (consolidator agent, eligibility loop, watchdog, filesystem watcher).

The whole-process problem this solves: `noema serve` can be invoked concurrently against the same cortex via different transports — a long-lived `--transport http` systemd service plus any number of short-lived `--transport stdio` subprocesses spawned by MCP clients (Claude Code, Hermes plugins, etc.). All of them open the same SQLite DB in WAL mode, all of them load the same cortex_id, and all of them previously started their own consolidator agent + eligibility loop + watchdog + watcher. The result was duplicate event emissions, rank churn in federation_state, and consolidation_claim/fail noise from short-lived sessions whose schedulers fired briefly and were then killed.

OS-level advisory locks are the right primitive: locks are bound to the open file (descriptor on Unix, handle on Windows), the kernel releases them automatically on process exit (any path — clean, SIGTERM, SIGKILL, panic), and there's no stale-lock-file problem to clean up. We use exclusive non-blocking semantics so the loser observes contention immediately and falls back to MCP-only mode rather than blocking startup.

Cross-platform implementation lives in lock_unix.go (flock(2)) and lock_windows.go (LockFileEx). The public API in this file is the same on both.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func RuntimePath

func RuntimePath(cortexID string) (string, error)

RuntimePath returns the per-cortex background-work lock path in a runtime/temp location that user-data sync layers (iCloud Drive, Dropbox, Syncthing, OneDrive) won't replicate. Keyed on the cortex's stable ULID so two cortexes with different IDs but the same display name don't collide on shared hosts.

Why not <cortex>/db/background.lock (the previous location): when the cortex directory is inside a sync layer, the lock file gets replicated to other devices where it has no semantic meaning, and the sync layer's "replace on sync" can unlink the inode our flock is bound to and create a fresh inode at the same path — leaving us holding a flock on an orphaned inode while a new noema process successfully acquires its own flock on the new file. Putting the lock outside the cortex dir removes both problems uniformly, regardless of which sync layer the user has configured.

Cross-host coordination is intentionally given up: kernel flocks are per-host and never propagated across machines anyway, so two hosts mounting the same cortex via a sync layer were always going to each acquire their own local flock. Federation HTTP is the designed cross-host coordination mechanism, not flock.

Path resolution prefers $XDG_RUNTIME_DIR (Linux convention, tmpfs-backed when set), else os.TempDir() (which resolves correctly on macOS and Windows: per-user temp under /var/folders on macOS, %TEMP% on Windows).

Creates the parent directory with 0700 permissions if missing.

Types

type Lock

type Lock struct {
	// contains filtered or unexported fields
}

Lock represents an acquired exclusive file lock. The kernel releases the underlying lock when the file descriptor / handle is closed (or when the process exits), so a process that crashes without calling Release does not strand the lock for future invocations — that property is load-bearing and is the whole reason an OS-level lock was chosen over a PID-file or sentinel-presence scheme.

func TryAcquire

func TryAcquire(path string) (*Lock, bool, error)

TryAcquire attempts to acquire an exclusive non-blocking lock on path. The file is created if it doesn't exist. Three return shapes:

  • (lock, true, nil): we hold the lock; caller is responsible for calling Release at shutdown (the kernel also releases on exit).
  • (nil, false, nil): another process holds the lock; caller should proceed without acquiring resources gated on lock ownership.
  • (nil, false, err): something went wrong before we could ask the kernel for a lock (e.g., parent directory missing, permission denied). Treat as a hard error — don't silently fall through to "no lock" because that masks misconfiguration.

path should come from RuntimePath, which places the lock in a runtime/temp location outside the cortex directory so it is never replicated by sync layers (see RuntimePath for why the old `<cortex>/db/` location was abandoned).

func (*Lock) Release

func (l *Lock) Release() error

Release explicitly drops the lock and closes the file descriptor / handle. Safe to call multiple times; the second and subsequent calls are no-ops. Always pair every successful TryAcquire with a deferred Release for clarity, even though the kernel would auto-release on process exit. Safe to call on a nil receiver, so callers can `defer l.Release()` without an `if l != nil` guard.
