storage

package
v0.23.2
Published: May 14, 2026 License: Apache-2.0 Imports: 15 Imported by: 0

Documentation

Index

Constants

const (
	// EventTypeFilePromoted is published when a previously-pending upload
	// claim has been adopted by a business transaction (Files.OnCreate or
	// the new-side of Files.OnUpdate). One event per consumed claim.
	EventTypeFilePromoted = "vef.storage.file.promoted"
	// EventTypeFileDeleted is published when the delete worker has
	// successfully removed an object from the backend. One event per
	// pending-delete row drained.
	EventTypeFileDeleted = "vef.storage.file.deleted"
	// EventTypeDeleteDeadLetter is published when the delete worker has
	// exhausted retries for a pending-delete row. Operations should consume
	// this event to investigate; the row is parked, not removed.
	EventTypeDeleteDeadLetter = "vef.storage.delete.dead_letter"
)

Storage event topics. Subscribers should match on the constant rather than the literal string to stay forward-compatible.
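A minimal sketch of the recommended subscriber pattern: match on the constants, never the literal strings. The constant values below are copied from the declaration above; routeEvent is a hypothetical helper, not part of the package.

```go
package main

import "fmt"

// Topic constants, with values as declared by the storage package.
const (
	EventTypeFilePromoted     = "vef.storage.file.promoted"
	EventTypeFileDeleted      = "vef.storage.file.deleted"
	EventTypeDeleteDeadLetter = "vef.storage.delete.dead_letter"
)

// routeEvent dispatches on the constants so a renamed topic breaks at
// compile time instead of silently missing events at runtime.
func routeEvent(topic string) string {
	switch topic {
	case EventTypeFilePromoted:
		return "promoted"
	case EventTypeFileDeleted:
		return "deleted"
	case EventTypeDeleteDeadLetter:
		return "dead-letter"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(routeEvent(EventTypeFileDeleted))
}
```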

const (
	// PublicPrefix is the key namespace for objects intended to be
	// world-readable (or readable by any authenticated principal,
	// depending on the ACL implementation). DefaultFileACL grants read
	// access to keys under this prefix.
	PublicPrefix = "pub/"

	// PrivatePrefix is the key namespace for objects whose visibility is
	// controlled by business state. DefaultFileACL denies read access
	// to keys under this prefix; business modules MUST register a
	// FileACL implementation that consults their own ownership / ACL
	// tables to grant access.
	PrivatePrefix = "priv/"
)

Object key namespace prefixes. The framework uses these to communicate the intended visibility of a key to the FileACL layer; the storage resource emits keys under PublicPrefix when the upload is flagged public, and under PrivatePrefix otherwise.

These are conventions, not enforcement: any FileACL implementation is free to ignore the prefix and decide visibility purely from business state (e.g. a per-key visibility column on the owning row).
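Checking the convention is a plain prefix test; the sketch below mirrors what DefaultFileACL does with these prefixes (a simplified model, not the framework's code):

```go
package main

import (
	"fmt"
	"strings"
)

// Prefix constants, with values as declared by the storage package.
const (
	PublicPrefix  = "pub/"
	PrivatePrefix = "priv/"
)

// isPublicKey reports whether a key falls in the world-readable
// namespace. DefaultFileACL grants reads only for such keys.
func isPublicKey(key string) bool {
	return strings.HasPrefix(key, PublicPrefix)
}

func main() {
	fmt.Println(isPublicKey("pub/logo.png"), isPublicKey("priv/report.pdf"))
}
```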

const DefaultProxyPrefix = "/storage/files/"

DefaultProxyPrefix is the URL path prefix the framework's proxy middleware mounts at. ProxyURLKeyMapper uses it to translate between embedded URLs and storage keys.

Variables

var (
	// ErrBucketNotFound indicates the specified bucket does not exist.
	ErrBucketNotFound = errors.New("bucket not found")
	// ErrObjectNotFound indicates the specified object does not exist.
	ErrObjectNotFound = errors.New("object not found")
	// ErrInvalidBucketName indicates the bucket name is invalid.
	ErrInvalidBucketName = errors.New("invalid bucket name")
	// ErrAccessDenied indicates permission is denied for the operation.
	ErrAccessDenied = errors.New("access denied")
	// ErrClaimNotFound indicates the requested upload claim does not exist
	// (already consumed by a business transaction, expired and swept,
	// or never existed).
	ErrClaimNotFound = errors.New("upload claim not found")
	// ErrUploadSessionNotFound indicates the multipart UploadID does not
	// reference a live session. Returned when calling PutPart /
	// CompleteMultipart against an unknown, completed, or aborted
	// session. AbortMultipart is exempt and returns nil for the same
	// condition (idempotent abort).
	ErrUploadSessionNotFound = errors.New("upload session not found")
	// ErrPartETagMismatch indicates one of the (PartNumber, ETag) pairs
	// supplied to CompleteMultipart does not match the ETag the backend
	// recorded for that PartNumber. Typically caused by a PartNumber
	// being silently re-uploaded after the caller persisted the old
	// ETag, or by ETag corruption on the caller side.
	ErrPartETagMismatch = errors.New("part etag mismatch")
	// ErrPartTooSmall indicates a non-final PutPart was smaller than the
	// backend's PartSize. The final part of a session is exempt from
	// the minimum; backends should only return this error for parts
	// that turn out to have a successor.
	ErrPartTooSmall = errors.New("non-final part smaller than backend minimum")
	// ErrPartNumberOutOfRange indicates a CompleteMultipart request
	// supplied a PartNumber outside the contiguous 1..N range, or the
	// supplied Parts list has gaps or duplicates.
	ErrPartNumberOutOfRange = errors.New("part number out of range")
)

Functions

func ReplaceHtmlURLs added in v0.23.0

func ReplaceHtmlURLs(content string, replacements map[string]string) string

ReplaceHtmlURLs rewrites <img src> / <a href> / <video src> / <audio src> / <source src> / <embed src> / <object data> attribute values according to the supplied replacement map. URLs absent from the map are left untouched, and the original quote style (single vs double) is preserved so the output round-trips through external HTML formatters cleanly.

Pair this with URLKeyMapper.KeyToURL to render storage keys as the URL convention the frontend expects.
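The behavior can be illustrated with a reduced model that handles only <img src> (the real function also covers the other tags and attributes listed above). replaceImgSrcs is a hypothetical stand-in, not the package's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// Capture the attribute lead-in, the quote character, the URL, and the
// closing quote separately so the original quote style is preserved.
var imgSrc = regexp.MustCompile(`(<img\s[^>]*?src=)(["'])([^"']*)(["'])`)

func replaceImgSrcs(content string, replacements map[string]string) string {
	return imgSrc.ReplaceAllStringFunc(content, func(m string) string {
		parts := imgSrc.FindStringSubmatch(m)
		if repl, ok := replacements[parts[3]]; ok {
			return parts[1] + parts[2] + repl + parts[4]
		}
		return m // URLs absent from the map are left untouched
	})
}

func main() {
	html := `<img src='/storage/files/pub/a.png'>`
	out := replaceImgSrcs(html, map[string]string{"/storage/files/pub/a.png": "pub/a.png"})
	fmt.Println(out)
}
```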

func ReplaceMarkdownURLs added in v0.23.0

func ReplaceMarkdownURLs(content string, replacements map[string]string) string

ReplaceMarkdownURLs rewrites the URL portion of every `![alt](url)` / `[text](url)` construct according to the supplied replacement map. The optional title (`![](url "title")`) is preserved verbatim.

Pair this with URLKeyMapper.KeyToURL to render storage keys as the URL convention the frontend expects.
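A reduced model of the same idea for Markdown, rewriting the url portion of `![alt](url)` / `[text](url)` while preserving the optional title verbatim. replaceMarkdownURLs is a hypothetical stand-in, not the package's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// Group 1: link opener through "(". Group 2: the URL. Group 3: the
// optional quoted title plus the closing ")".
var mdLink = regexp.MustCompile(`(!?\[[^\]]*\]\()([^)\s]+)((?:\s+"[^"]*")?\))`)

func replaceMarkdownURLs(content string, replacements map[string]string) string {
	return mdLink.ReplaceAllStringFunc(content, func(m string) string {
		parts := mdLink.FindStringSubmatch(m)
		if repl, ok := replacements[parts[2]]; ok {
			return parts[1] + repl + parts[3]
		}
		return m // URLs absent from the map are left untouched
	})
}

func main() {
	md := `![logo](priv/a.png "Our logo")`
	fmt.Println(replaceMarkdownURLs(md, map[string]string{"priv/a.png": "/storage/files/priv/a.png"}))
}
```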

Types

type AbortMultipartOptions added in v0.23.0

type AbortMultipartOptions struct {
	// Key is the object key the multipart session was targeting.
	Key string
	// UploadID is the opaque session token returned by InitMultipart.
	UploadID string
}

AbortMultipartOptions contains parameters for canceling a multipart upload session. Backends discard any uploaded parts and release the session; calling Abort on an unknown or already-closed session is a no-op (idempotent).

type ClaimConsumer added in v0.23.0

type ClaimConsumer interface {
	// ConsumeMany deletes the upload_claim rows whose object_key matches
	// any entry in keys, executed inside the supplied business
	// transaction tx. Returns ErrClaimNotFound (wrapped) when any key has
	// no corresponding row, signaling that the business write
	// references either an uncommitted or already-swept claim and the
	// caller's transaction should roll back. tx must be the same orm.DB
	// instance passed to RunInTX.
	//
	// Keys may contain duplicates; implementations are responsible for
	// deduplicating before issuing the underlying DELETE. An empty or
	// nil keys argument is a no-op and returns nil.
	ConsumeMany(ctx context.Context, tx orm.DB, keys []string) error
}

ClaimConsumer is the minimal claim-side surface business code needs when reconciling file references inside a CRUD transaction. The framework's Files facade composes ClaimConsumer with DeleteScheduler; most applications only ever interact with Files and never reach for ClaimConsumer directly.

The richer set of operations (Create, ScanExpired, Get*, etc.) lives on the framework-internal store.ClaimStore type and is not part of the stable public surface. Business code that genuinely needs to inspect or manipulate raw claim rows should take a direct dependency on the storage internal package via a custom integration rather than expecting this minimal interface to grow.

Implementations MUST be safe for concurrent use.

type CompleteMultipartOptions added in v0.23.0

type CompleteMultipartOptions struct {
	// Key is the object key the multipart session is targeting.
	Key string
	// UploadID is the opaque session token returned by InitMultipart.
	UploadID string
	// Parts is the ordered list of (PartNumber, ETag) pairs the backend
	// uses to verify and assemble the final object.
	Parts []CompletedPart
}

CompleteMultipartOptions contains parameters for finalizing a multipart upload. Parts MUST be sorted ascending by PartNumber and MUST form a contiguous 1..N sequence; missing or duplicate part numbers cause CompleteMultipart to return ErrPartNumberOutOfRange.

type CompletedPart added in v0.23.0

type CompletedPart struct {
	// PartNumber is the 1-indexed part position.
	PartNumber int
	// ETag is the opaque token the backend returned from PutPart for
	// this part.
	ETag string
}

CompletedPart references one finished part in a multipart upload (PartNumber + ETag), forwarded back to CompleteMultipart so the backend can verify and assemble parts in order.

type CopyObjectOptions

type CopyObjectOptions struct {
	// SourceKey is the identifier of the source object
	SourceKey string
	// DestKey is the identifier for the copied object
	DestKey string
}

CopyObjectOptions contains parameters for copying an object.

type DefaultFileACL added in v0.23.0

type DefaultFileACL struct{}

DefaultFileACL is the framework-provided default ACL. It grants read access only to keys under PublicPrefix and denies all listing.

This default keeps the framework safe-by-default: without an explicit override, the storage module behaves as a pub-only file server and never exposes private keys to authenticated callers, regardless of who asks. Business modules that need any access beyond pub/ MUST register their own FileACL via vef.SupplyFileACL.

func (*DefaultFileACL) CanList added in v0.23.0

func (*DefaultFileACL) CanList(_ context.Context, _ *security.Principal, _ string) (bool, error)

CanList denies all listing. List is intentionally restrictive in the default ACL because there is no safe per-prefix policy the framework can apply without business knowledge.

func (*DefaultFileACL) CanRead added in v0.23.0

func (*DefaultFileACL) CanRead(_ context.Context, _ *security.Principal, key string) (bool, error)

CanRead allows reads of keys under PublicPrefix and denies everything else. Principal is ignored — the default ACL has no notion of per-principal access; that is the business module's responsibility.

type DeleteDeadLetterEvent added in v0.23.0

type DeleteDeadLetterEvent struct {
	event.BaseEvent

	// PendingDeleteID is the primary key of the parked row.
	PendingDeleteID string `json:"pendingDeleteId"`
	// FileKey is the object key that failed to delete.
	FileKey string `json:"fileKey"`
	// Reason carries the original schedule reason.
	Reason DeleteReason `json:"reason"`
	// Attempts is the total number of failed attempts.
	Attempts int `json:"attempts"`
	// LastError captures the most recent error message for triage.
	LastError string `json:"lastError,omitempty"`
}

DeleteDeadLetterEvent reports a pending-delete row that the delete worker could not drain within its retry budget. The row is left in sys_storage_pending_delete (parked) for manual investigation.

func NewDeleteDeadLetterEvent added in v0.23.0

func NewDeleteDeadLetterEvent(id, key string, reason DeleteReason, attempts int, lastErr string) *DeleteDeadLetterEvent

NewDeleteDeadLetterEvent creates a new dead-letter event.

type DeleteObjectOptions

type DeleteObjectOptions struct {
	// Key is the unique identifier of the object to delete
	Key string
}

DeleteObjectOptions contains parameters for deleting a single object.

type DeleteObjectsOptions

type DeleteObjectsOptions struct {
	// Keys is the list of object identifiers to delete
	Keys []string
}

DeleteObjectsOptions contains parameters for batch deleting objects.

type DeleteReason added in v0.23.0

type DeleteReason string

DeleteReason classifies why an object was scheduled for deletion. The reason is persisted on the queue row and forwarded onto file-deleted and dead-letter events for observability; the worker's behavior is independent of reason.

const (
	// DeleteReasonReplaced indicates the object was the previous value of
	// a business field that has just been overwritten with a new key.
	DeleteReasonReplaced DeleteReason = "replaced"
	// DeleteReasonDeleted indicates the owning business record was
	// deleted.
	DeleteReasonDeleted DeleteReason = "deleted"
	// DeleteReasonClaimExpired indicates a pending upload claim expired
	// and its associated object (if any) must be cleaned up. Set only by
	// the framework-internal claim sweeper.
	DeleteReasonClaimExpired DeleteReason = "claim_expired"
)

type DeleteScheduler added in v0.23.0

type DeleteScheduler interface {
	// Schedule INSERTs one pending-delete row per key inside tx, all
	// carrying the supplied reason. tx must be the orm.DB instance
	// passed into RunInTX so that scheduling shares the business
	// transaction's atomicity guarantees. keys may be empty or nil
	// (no-op).
	//
	// Keys may contain duplicates; implementations are responsible for
	// deduplicating before issuing the underlying INSERT.
	Schedule(ctx context.Context, tx orm.DB, keys []string, reason DeleteReason) error
}

DeleteScheduler is the minimal queue-side surface business code needs to drop file references into the asynchronous delete pipeline inside a CRUD transaction. The framework's Files facade composes ClaimConsumer with DeleteScheduler; most applications only ever interact with Files and never reach for DeleteScheduler directly.

The richer set of operations (Lease, Done, Defer, sweeper-side Enqueue) lives on the framework-internal store.DeleteQueue type and is not part of the stable public surface. Business code that genuinely needs to inspect the queue should take a direct dependency on the storage internal package via a custom integration rather than expecting this minimal interface to grow.

Implementations MUST be safe for concurrent use.

type FileACL added in v0.23.0

type FileACL interface {
	// CanRead returns true when principal is authorized to read key.
	// Called by the proxy middleware (GET /storage/files/<key>) and the
	// stat RPC. Pub/* keys typically short-circuit before reaching this
	// hook; implementations only see keys that need authoritative
	// authorization.
	CanRead(ctx context.Context, principal *security.Principal, key string) (bool, error)

	// CanList returns true when principal is authorized to list objects
	// under prefix. Called before List.
	//
	// Most production setups should keep listing tightly restricted —
	// it is primarily an ops / debug tool and rarely belongs in
	// user-facing flows.
	CanList(ctx context.Context, principal *security.Principal, prefix string) (bool, error)
}

FileACL decides whether a principal may read or list object keys.

The storage module is provider-neutral and intentionally has no model of "ownership" — that information lives entirely in the business layer (which model owns which key, what visibility rules apply, what roles have read access, etc.). Business modules implement FileACL to bridge that gap and inject the implementation into the framework via vef.SupplyFileACL.

Typical implementation pattern:

  1. Maintain a reverse index from object key to the owning row, populated by Files.OnCreate / Files.OnUpdate.
  2. In CanRead, look up the row by key and decide based on the principal's identity, roles, or tenant against the row's visibility / owner columns.
  3. In CanList, restrict listing to operationally privileged principals or to prefixes scoped to the principal's identity.

Implementations MUST return false (not error) for unauthorized access; errors are reserved for backend / lookup failures (database unavailable, etc.) and surface to the caller as 500-class responses.
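The pattern above can be sketched with an in-memory reverse index. Real implementations take context.Context and *security.Principal and query the business tables; acl, ownerIndex, and canRead below are hypothetical names:

```go
package main

import "fmt"

// ownerIndex is a reverse index from object key to owning user ID,
// populated by Files.OnCreate / Files.OnUpdate hooks in real code.
type ownerIndex map[string]string

type acl struct{ idx ownerIndex }

// canRead returns false (never an error) for unauthorized access, per
// the FileACL contract. Keys under pub/ are allowed for everyone.
func (a acl) canRead(userID, key string) bool {
	if len(key) >= 4 && key[:4] == "pub/" {
		return true
	}
	owner, known := a.idx[key]
	return known && owner == userID
}

func main() {
	a := acl{idx: ownerIndex{"priv/2026/05/12/foo.png": "u1"}}
	fmt.Println(a.canRead("u1", "priv/2026/05/12/foo.png"))
	fmt.Println(a.canRead("u2", "priv/2026/05/12/foo.png"))
}
```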

type FileDeletedEvent added in v0.23.0

type FileDeletedEvent struct {
	event.BaseEvent

	// FileKey is the object key that was just deleted.
	FileKey string `json:"fileKey"`
	// Reason carries the original schedule reason for the deletion.
	Reason DeleteReason `json:"reason"`
}

FileDeletedEvent reports the successful removal of an object from the backend by the asynchronous delete worker. Subscribers can use it for cache invalidation, audit, or downstream cleanup.

func NewFileDeletedEvent

func NewFileDeletedEvent(key string, reason DeleteReason) *FileDeletedEvent

NewFileDeletedEvent creates a new file-deleted event.

type FilePromotedEvent added in v0.23.0

type FilePromotedEvent struct {
	event.BaseEvent

	// FileKey is the object key the business model now owns.
	FileKey string `json:"fileKey"`
}

FilePromotedEvent reports the successful adoption of an upload claim by a business transaction. Subscribers can use it for audit, analytics, or downstream side-effects (cache warm-up, indexing, notifications).

func NewFilePromotedEvent

func NewFilePromotedEvent(key string) *FilePromotedEvent

NewFilePromotedEvent creates a new file-promoted event.

type FileRef added in v0.23.0

type FileRef struct {
	Key      string
	MetaType MetaType
	Attrs    map[string]string
}

FileRef is a single file reference extracted from a model field tagged with `meta:"uploaded_file/richtext/markdown"`. Attrs carries any key/value attributes parsed from the tag value (e.g. `category:gallery`).

type Files added in v0.23.0

type Files interface {
	// OnCreate adopts every file reference reachable from model by
	// deleting the corresponding upload claim rows inside tx. Returns
	// ErrClaimNotFound (wrapped) when any reference is missing, which
	// must cause the caller's tx to roll back.
	OnCreate(ctx context.Context, tx orm.DB, model any) error

	// OnUpdate reconciles file references between two snapshots of the
	// same model: newly-referenced files are adopted (ConsumeMany);
	// dereferenced files are queued for asynchronous deletion with
	// DeleteReasonReplaced. Either argument may be nil to signal the
	// absence of that side (mirrors FileRefExtractor.Diff semantics).
	OnUpdate(ctx context.Context, tx orm.DB, oldModel, newModel any) error

	// OnDelete schedules every file reference in model for asynchronous
	// deletion with DeleteReasonDeleted.
	OnDelete(ctx context.Context, tx orm.DB, model any) error
}

Files is the high-level facade business handlers use to keep their `meta`-tagged file references in sync with the storage backend across the standard create / update / delete lifecycle.

All three methods MUST be called inside a business transaction; the supplied tx is the same orm.DB instance passed to orm.DB.RunInTX, so the claim consumption and pending-delete bookkeeping commit or roll back atomically with the business write.

Internally Files composes ClaimConsumer + DeleteScheduler + a per-type meta field cache; callers do not interact with those primitives directly.
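The reconciliation OnUpdate performs can be modeled as a set diff over the extracted keys: new-only keys are adopted (ConsumeMany), old-only keys are scheduled for deletion with DeleteReasonReplaced. A sketch of the semantics, not the framework's implementation:

```go
package main

import "fmt"

// diffRefs splits the old/new key sets into the two actions OnUpdate
// takes. Keys present in both snapshots are untouched.
func diffRefs(oldKeys, newKeys []string) (adopt, remove []string) {
	oldSet := map[string]bool{}
	for _, k := range oldKeys {
		oldSet[k] = true
	}
	newSet := map[string]bool{}
	for _, k := range newKeys {
		newSet[k] = true
		if !oldSet[k] {
			adopt = append(adopt, k) // newly referenced -> ConsumeMany
		}
	}
	for _, k := range oldKeys {
		if !newSet[k] {
			remove = append(remove, k) // dereferenced -> Schedule(replaced)
		}
	}
	return adopt, remove
}

func main() {
	adopt, remove := diffRefs([]string{"priv/a.png", "priv/b.png"}, []string{"priv/b.png", "priv/c.png"})
	fmt.Println(adopt, remove)
}
```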

func NewFiles added in v0.23.0

func NewFiles(cc ClaimConsumer, ds DeleteScheduler, publisher event.Publisher, urlMapper URLKeyMapper) Files

NewFiles returns the default Files implementation, sharing the supplied ClaimConsumer, DeleteScheduler, event Publisher, and URLKeyMapper across all model types. The returned value is safe for concurrent use; meta field specs are parsed once per type on first access and cached for the lifetime of the instance.

The URLKeyMapper translates richtext / markdown URLs to storage keys during reconciliation. Pass IdentityURLKeyMapper{} (or nil, which is normalised to the identity mapper) when business code embeds bare keys directly in <img src> / ![](...).

Promoted-file events are published synchronously after a successful ConsumeMany call but before the business transaction commits. Combined with an in-memory bus this gives at-least-once delivery with the possibility of spurious events if the business transaction later rolls back; subscribers MUST be idempotent.

type FilesFor added in v0.23.1

type FilesFor[T any] struct {
	// contains filtered or unexported fields
}

FilesFor is the type-safe counterpart of Files for handlers that manage a single model type. T's `meta`-tagged field spec is resolved once at construction (when the underlying Files exposes its cache), so malformed tags surface at boot and the per-call reflect lookup is elided on the hot path.

All three methods MUST be called inside a business transaction (see Files for the full contract).

func NewFilesFor added in v0.23.1

func NewFilesFor[T any](files Files) FilesFor[T]

NewFilesFor returns a typed file lifecycle facade for T. When files is the value produced by NewFiles, T's meta spec is resolved and cached up front so each lifecycle call skips the per-call reflect lookup. Foreign Files implementations (e.g. fx.Decorate wrappers, test fakes) are accepted and simply delegated to via the public Files interface — the typed signatures still apply, only the pre-resolution optimization is skipped.

func (FilesFor[T]) OnCreate added in v0.23.1

func (f FilesFor[T]) OnCreate(ctx context.Context, tx orm.DB, model *T) error

OnCreate adopts every file reference reachable from model by deleting the corresponding upload claim rows inside tx. See Files.OnCreate for the full contract; passing a nil pointer is a no-op.

func (FilesFor[T]) OnDelete added in v0.23.1

func (f FilesFor[T]) OnDelete(ctx context.Context, tx orm.DB, model *T) error

OnDelete schedules every file reference in model for asynchronous deletion. See Files.OnDelete for the full contract; passing a nil pointer is a no-op.

func (FilesFor[T]) OnUpdate added in v0.23.1

func (f FilesFor[T]) OnUpdate(ctx context.Context, tx orm.DB, oldModel, newModel *T) error

OnUpdate reconciles file references between two snapshots of T. See Files.OnUpdate for the full contract; either argument may be nil.

type GetObjectOptions

type GetObjectOptions struct {
	// Key is the unique identifier of the object
	Key string
}

GetObjectOptions contains parameters for retrieving an object.

type IdentityURLKeyMapper added in v0.23.0

type IdentityURLKeyMapper struct{}

IdentityURLKeyMapper is a simple mapper that treats relative URLs as bare storage keys. Suitable only when the frontend embeds bare object keys directly (e.g. `<img src="priv/2026/05/12/foo.png">`).

Any URL with an explicit scheme is rejected with ok=false.

func (*IdentityURLKeyMapper) KeyToURL added in v0.23.0

func (*IdentityURLKeyMapper) KeyToURL(key string) string

KeyToURL returns key unchanged.

func (*IdentityURLKeyMapper) URLToKey added in v0.23.0

func (*IdentityURLKeyMapper) URLToKey(rawURL string) (string, bool)

URLToKey returns (url, true) for empty-scheme URLs (plain relative paths like "priv/foo.png"). Any URL carrying a scheme is rejected.

type InitMultipartOptions added in v0.23.0

type InitMultipartOptions struct {
	// Key is the unique identifier the final assembled object will
	// receive.
	Key string
	// ContentType is the MIME type recorded with the final object.
	ContentType string
	// Metadata is custom key-value pairs stored on the final object.
	// Programmatic channel only — the HTTP API does not expose it.
	Metadata map[string]string
}

InitMultipartOptions contains parameters for opening a multipart upload session. The session is owned by the backend; callers thread the returned UploadID back through PutPart, CompleteMultipart, and AbortMultipart without interpreting it.

type ListObjectsOptions

type ListObjectsOptions struct {
	// Prefix filters objects by key prefix
	Prefix string
	// Recursive determines whether to list objects recursively
	Recursive bool
	// MaxKeys limits the maximum number of objects to return
	MaxKeys int
}

ListObjectsOptions contains parameters for listing objects.

type MetaType

type MetaType string

MetaType classifies how a struct field references uploaded files.

const (
	// MetaTypeUploadedFile is a direct file key field (string / []string /
	// map[string]string).
	MetaTypeUploadedFile MetaType = "uploaded_file"
	// MetaTypeRichText is HTML content with embedded resource references.
	MetaTypeRichText MetaType = "richtext"
	// MetaTypeMarkdown is Markdown content with embedded resource references.
	MetaTypeMarkdown MetaType = "markdown"
)

type Multipart added in v0.23.0

type Multipart interface {
	// PartSize returns the backend's authoritative part size in bytes.
	// Callers MUST split the object into chunks of exactly this size
	// (except the final chunk, which may be smaller). The value is
	// constant for the lifetime of the backend instance.
	PartSize() int64

	// MaxPartCount returns the maximum number of parts in a single
	// multipart upload, or 0 for unlimited. The value is constant for
	// the lifetime of the backend instance.
	MaxPartCount() int

	// InitMultipart opens a new upload session and returns an opaque
	// UploadID.
	InitMultipart(ctx context.Context, opts InitMultipartOptions) (*MultipartSession, error)

	// PutPart uploads a single part to an open session and returns the
	// ETag the backend assigned. Re-uploading the same PartNumber
	// overwrites the previous content and yields a new ETag.
	PutPart(ctx context.Context, opts PutPartOptions) (*PartInfo, error)

	// CompleteMultipart finalizes a session by assembling its parts in
	// PartNumber order. See the contract notes on the interface
	// documentation for the verification rules and error mapping.
	CompleteMultipart(ctx context.Context, opts CompleteMultipartOptions) (*ObjectInfo, error)

	// AbortMultipart cancels an open session, discarding any uploaded
	// parts. Idempotent: calling Abort on an unknown / already-closed
	// session returns nil.
	AbortMultipart(ctx context.Context, opts AbortMultipartOptions) error
}

Multipart is the vendor-neutral chunked-upload primitive. Every backend in this framework implements Multipart; callers obtain the handle via a type assertion against storage.Service.

The model is S3-inspired but does NOT leak S3 vocabulary into the contract:

  • UploadID is opaque; only the issuing backend interprets it.
  • ETag is an opaque per-part identifier issued by the backend; the caller holds it verbatim and passes it back to Complete.
  • PartNumber is 1-indexed and contiguous (1..N) at Complete time.

Contract:

  1. PutPart calls for distinct PartNumbers of the same session MAY proceed concurrently. Concurrent calls for the SAME PartNumber have last-writer-wins semantics — the part is overwritten and the previous ETag is invalidated.
  2. Except the final part, every part MUST be at least PartSize() bytes; smaller parts return ErrPartTooSmall.
  3. CompleteMultipart MUST verify every (PartNumber, ETag) pair in opts.Parts matches a recorded part; mismatches return ErrPartETagMismatch.
  4. CompleteMultipart MUST verify Parts cover the contiguous range 1..N with no gaps or duplicates; otherwise returns ErrPartNumberOutOfRange.
  5. After CompleteMultipart (success) or AbortMultipart the session is closed; further PutPart / CompleteMultipart / AbortMultipart against the same UploadID return ErrUploadSessionNotFound, with the exception that AbortMultipart is idempotent — re-aborting an unknown session returns nil.
  6. Session TTL is NOT part of the contract: long-running sessions are valid. Cleanup of abandoned sessions is driven by upper-layer sweepers (see internal/storage/worker) through AbortMultipart. Implementations MAY garbage-collect internally for resource hygiene, but that is a quality concern, not a contract concern.
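Caller-side chunking under rule 2 can be sketched as follows; partSize would come from Multipart.PartSize() on a real backend, and partSizes is a hypothetical helper:

```go
package main

import "fmt"

// partSizes splits a total byte count into chunks of exactly partSize,
// with only the final chunk allowed to be smaller (rule 2's exemption).
func partSizes(total, partSize int64) []int64 {
	if partSize <= 0 {
		return nil
	}
	var sizes []int64
	for remaining := total; remaining > 0; remaining -= partSize {
		if remaining < partSize {
			sizes = append(sizes, remaining) // final part, exempt from minimum
			break
		}
		sizes = append(sizes, partSize)
	}
	return sizes
}

func main() {
	fmt.Println(partSizes(11, 4))
}
```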

type MultipartSession added in v0.23.0

type MultipartSession struct {
	Key      string
	UploadID string
}

MultipartSession identifies an opaque multipart upload session in the backend. UploadID is provider-defined and must be passed back unchanged to subsequent multipart calls.

type ObjectInfo

type ObjectInfo struct {
	// Bucket is the name of the storage bucket
	Bucket string `json:"bucket"`
	// Key is the unique identifier of the object within the bucket
	Key string `json:"key"`
	// ETag is the entity tag, typically an MD5 hash used for versioning and cache validation
	ETag string `json:"eTag"`
	// Size is the object size in bytes
	Size int64 `json:"size"`
	// ContentType is the MIME type of the object
	ContentType string `json:"contentType"`
	// LastModified is the timestamp when the object was last modified
	LastModified time.Time `json:"lastModified"`
	// Metadata contains custom key-value pairs associated with the object
	Metadata map[string]string `json:"metadata,omitempty"`
}

ObjectInfo represents metadata information about a stored object.

type PartInfo added in v0.23.0

type PartInfo struct {
	// PartNumber is the 1-indexed part position.
	PartNumber int
	// ETag is the opaque per-part identifier the backend assigned. The
	// format is backend-specific (MD5 hex on filesystem / memory, S3
	// entity-tag on MinIO); callers MUST treat it as an opaque string.
	ETag string
	// Size is the byte length the backend recorded for this part.
	Size int64
}

PartInfo describes a successfully uploaded part as reported by the backend. ETag is the opaque token the caller MUST persist and replay back through CompleteMultipart.

type ProxyURLKeyMapper added in v0.23.0

type ProxyURLKeyMapper struct {
	// Prefix is the URL path prefix to strip/add. Defaults to
	// DefaultProxyPrefix ("/storage/files/") when empty.
	Prefix string
}

ProxyURLKeyMapper is the recommended default mapper for applications that embed the framework's proxy URL convention in richtext / markdown fields (e.g. `<img src="/storage/files/priv/2026/05/12/foo.png">`).

URLToKey strips the configured Prefix (default "/storage/files/") and returns the remainder as the storage key. URLs that do not start with the prefix — including scheme-bearing URLs, data: URIs, and external links — are rejected with ok=false.

KeyToURL prepends the Prefix to produce the proxy URL the frontend expects.

func (ProxyURLKeyMapper) KeyToURL added in v0.23.0

func (m ProxyURLKeyMapper) KeyToURL(key string) string

KeyToURL prepends the proxy prefix to produce the URL the frontend should embed.

func (ProxyURLKeyMapper) URLToKey added in v0.23.0

func (m ProxyURLKeyMapper) URLToKey(rawURL string) (string, bool)

URLToKey strips the proxy prefix from rawURL and returns the storage key. Returns ok=false for URLs that do not match the prefix or carry a scheme (external links).
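The mapping is a plain prefix strip/prepend. A self-contained sketch of the documented behavior (not the package's implementation; urlToKey and keyToURL are hypothetical names):

```go
package main

import (
	"fmt"
	"strings"
)

const proxyPrefix = "/storage/files/" // DefaultProxyPrefix

// urlToKey strips the prefix and rejects everything else. Scheme-bearing
// URLs, data: URIs, and external links all fail the prefix test.
func urlToKey(rawURL string) (string, bool) {
	if !strings.HasPrefix(rawURL, proxyPrefix) {
		return "", false
	}
	return strings.TrimPrefix(rawURL, proxyPrefix), true
}

// keyToURL prepends the prefix to produce the proxy URL.
func keyToURL(key string) string { return proxyPrefix + key }

func main() {
	key, ok := urlToKey("/storage/files/priv/2026/05/12/foo.png")
	fmt.Println(key, ok)
	_, ok = urlToKey("https://cdn.example.com/x.png")
	fmt.Println(ok)
}
```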

type PutObjectOptions

type PutObjectOptions struct {
	// Key is the unique identifier for the object
	Key string
	// Reader provides the object data to upload
	Reader io.Reader
	// Size is the size of the data in bytes (-1 if unknown)
	Size int64
	// ContentType specifies the MIME type of the object
	ContentType string
	// Metadata contains custom key-value pairs to store with the object
	Metadata map[string]string
}

PutObjectOptions contains parameters for uploading an object.

type PutPartOptions added in v0.23.0

type PutPartOptions struct {
	// Key is the object key the multipart session is targeting.
	Key string
	// UploadID is the opaque session token returned by InitMultipart.
	UploadID string
	// PartNumber is the 1-indexed part position within the assembled
	// object. Re-uploading the same PartNumber overwrites the previous
	// content and yields a new ETag.
	PartNumber int
	// Reader is the part payload; the backend reads exactly Size bytes.
	Reader io.Reader
	// Size is the exact byte length of the part. Backends MAY reject
	// requests where Size is smaller than Multipart.PartSize()
	// (except for the final part of a session) with ErrPartTooSmall.
	Size int64
}

PutPartOptions contains parameters for uploading a single part of an in-progress multipart upload. The reader MUST yield exactly Size bytes; backends use Size to validate the part against Multipart.PartSize() and to plan storage layout.

type Service

type Service interface {
	// PutObject uploads an object to storage.
	PutObject(ctx context.Context, opts PutObjectOptions) (*ObjectInfo, error)
	// GetObject retrieves an object from storage.
	GetObject(ctx context.Context, opts GetObjectOptions) (io.ReadCloser, error)
	// DeleteObject deletes a single object from storage.
	DeleteObject(ctx context.Context, opts DeleteObjectOptions) error
	// DeleteObjects deletes multiple objects from storage in a batch operation.
	DeleteObjects(ctx context.Context, opts DeleteObjectsOptions) error
	// ListObjects lists objects in a bucket with optional filtering.
	ListObjects(ctx context.Context, opts ListObjectsOptions) ([]ObjectInfo, error)
	// CopyObject copies an object from source to destination.
	CopyObject(ctx context.Context, opts CopyObjectOptions) (*ObjectInfo, error)
	// StatObject retrieves metadata information about an object.
	StatObject(ctx context.Context, opts StatObjectOptions) (*ObjectInfo, error)
}

Service is the provider-neutral storage interface. Every backend MUST implement all methods. Vendor-specific behavior lives in internal/storage/<provider>/ and is independent of any specific SDK.

type StatObjectOptions

type StatObjectOptions struct {
	// Key is the unique identifier of the object
	Key string
}

StatObjectOptions contains parameters for retrieving object metadata.

type URLKeyMapper added in v0.23.0

type URLKeyMapper interface {
	// URLToKey returns (key, ok) for the given embedded URL.
	//
	// ok=true: the URL refers to a storage object managed by this
	// system; key is the canonical storage key Files should use for
	// claim consumption and deletion scheduling.
	//
	// ok=false: the URL is unrelated to this system (external CDN,
	// mailto link, data: URI, etc.); Files drops the ref entirely so
	// the URL has no effect on reconciliation.
	URLToKey(url string) (key string, ok bool)

	// KeyToURL returns the URL a frontend should use to fetch the given
	// storage key. Implementations should return the input unchanged when
	// the key is unrecognized.
	KeyToURL(key string) string
}

URLKeyMapper translates between storage object keys (the canonical identifiers persisted in ClaimStore / DeleteQueue and used by the underlying Service) and the URLs that business templates embed in richtext / markdown fields.

The framework uses the mapper in two directions:

  • URLToKey is invoked while reconciling `meta:"richtext"` / `meta:"markdown"` fields, so embedded URLs (e.g. proxy paths like "/storage/files/priv/2026/05/12/foo.png", or CDN URLs like "https://cdn.example.com/priv/2026/05/12/foo.png") are normalised to the keys ClaimStore knows about ("priv/2026/05/12/foo.png") before ConsumeMany / Schedule are called. Implementations decide which URLs map to managed keys; the framework does not pre-filter by scheme, so http(s) URLs reach the mapper too.

  • KeyToURL is exported so business read paths can pair with ReplaceHtmlURLs / ReplaceMarkdownURLs to render stored content with whatever URL convention the frontend expects.

Implementations MUST be deterministic and side-effect free; the framework caches nothing about mapper return values and may invoke it many times per request.

The framework registers IdentityURLKeyMapper by default. Business modules that embed proxy / CDN URLs override it via vef.SupplyURLKeyMapper.
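As an example of overriding the default, a CDN-aware mapper might look like the sketch below. cdn.example.com is the hypothetical host from the comment above, and urlToKey / keyToURL are stand-ins for the interface methods, not framework code:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const cdnHost = "cdn.example.com"

// urlToKey accepts only URLs on the CDN host and maps the path to a
// storage key; everything else (external links, mailto:, data: URIs)
// is rejected with ok=false so reconciliation ignores it.
func urlToKey(raw string) (string, bool) {
	u, err := url.Parse(raw)
	if err != nil || u.Host != cdnHost {
		return "", false
	}
	return strings.TrimPrefix(u.Path, "/"), true
}

// keyToURL renders a storage key as the CDN URL the frontend embeds.
func keyToURL(key string) string {
	return "https://" + cdnHost + "/" + key
}

func main() {
	key, ok := urlToKey("https://cdn.example.com/priv/2026/05/12/foo.png")
	fmt.Println(key, ok)
	_, ok = urlToKey("mailto:ops@example.com")
	fmt.Println(ok)
}
```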
