posix

package

README

POSIX Design

This document describes how the storage implementation for running Tessera on a POSIX-compliant filesystem is intended to work.

Overview

POSIX provides for a small number of atomic operations on compliant filesystems.

This design leverages those to safely maintain a Merkle tree log on disk, in a format which can be exposed directly via a read-only endpoint to clients of the log (for example, using nginx or similar).

In contrast with some of the other storage backends, sequencing and integration of entries into the tree are synchronous.

The implementation uses a .state/ directory to coordinate operation. This directory does not need to be visible to log clients, but it contains no sensitive data, so it is not a problem if it is exposed.
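For orientation, a plausible on-disk layout is sketched below. The .state/ entries are the ones described in this document; the checkpoint and tile/ paths are assumptions based on the tlog-tiles layout which the log exposes to clients:

	<log root>/
	├── checkpoint            # published checkpoint, served directly to clients
	├── tile/...              # Merkle tree tiles and entry bundles (assumed tlog-tiles layout)
	└── .state/
	    ├── treeState         # latest integrated tree size & root hash
	    ├── treeState.lock    # advisory lock taken during sequencing/integration
	    └── publish.lock      # advisory lock taken while publishing the checkpoint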

Life of a leaf

In the description below, when we talk about writing to files (whether appending to existing ones or creating new ones), the process always follows this pattern:

  1. Create a temporary file on the same filesystem as the target location
  2. If we're appending data, copy the contents of the prefix location into the temporary file
  3. Write any new/additional data into the temporary file
  4. Close the temporary file
  5. Rename the temporary file into the target location.

The final step in the dance above is atomic according to the POSIX spec, so by performing this sequence of actions we avoid corrupt or partially written files ever becoming part of the tree.
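A minimal Go sketch of this pattern follows. The helper name atomicWrite is illustrative and not part of this package's API, and it takes the complete new contents rather than copying a prefix itself:

import (
	"os"
	"path/filepath"
)

// atomicWrite writes data to target so that readers never observe a
// partially written file.
func atomicWrite(target string, data []byte) error {
	// Step 1: create a temporary file on the same filesystem as the target.
	tmp, err := os.CreateTemp(filepath.Dir(target), filepath.Base(target)+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup if we fail before the rename

	// Steps 2 & 3: write the full new contents (for appends, the caller has
	// already concatenated the existing prefix with the additional data).
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	// Step 4: close the temporary file.
	if err := tmp.Close(); err != nil {
		return err
	}
	// Step 5: rename(2) is atomic under POSIX, so readers see either the old
	// file or the complete new one, never a partial write.
	return os.Rename(tmp.Name(), target)
}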

  1. Leaves are submitted by the binary built using Tessera via a call to the storage's Add func.
  2. The storage library batches these entries up in memory, and, after a configurable period of time has elapsed or the batch reaches a configurable size threshold, the batch is sequenced and appended to the tree:
    1. An advisory lock is taken on the .state/treeState.lock file. This helps prevent multiple frontends from stepping on each other, but isn't necessary for safety (a sketch of this locking appears after this list).
    2. Flushed entries are assigned contiguous sequence numbers, and written out into entry bundle files.
    3. Newly added leaves are integrated into the Merkle tree, and the updated tiles are written out as files.
    4. The .state/treeState file is updated with the new size & root hash.
  3. Asynchronously, at an interval determined by the WithCheckpointInterval option, the checkpoint file will be updated:
    1. An advisory lock is taken on the .state/publish.lock file.
    2. If the last-modified date of the checkpoint file is older than the checkpoint update interval, a new checkpoint which commits to the latest tree state is produced and written to the checkpoint file.
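Both locks above are plain flock(2)-style advisory locks. Below is a minimal sketch of taking one, assuming a Linux/BSD target; the helper name lockFile is illustrative:

import (
	"os"
	"syscall"
)

// lockFile takes an exclusive advisory lock on the file at path, creating it
// if necessary. The lock is released when the returned file is closed or the
// process exits.
func lockFile(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return nil, err
	}
	// LOCK_EX blocks until no other process holds the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil
}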

Filesystems

This implementation has been somewhat tested on local ext4 and ZFS filesystems, and on a distributed CephFS instance on GCP, in all cases with multiple personality binaries attempting to add new entries concurrently.

Other POSIX-compliant filesystems such as XFS should work, but filesystems which do not offer strong POSIX compliance (e.g. s3fs or NFS) are unlikely to result in long-term happiness.

If in doubt, tools like https://github.com/saidsay-so/pjdfstest may help in determining whether a given filesystem is suitable.

Documentation

Overview

Copyright 2024 The Tessera authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func New

func New(ctx context.Context, path string) (tessera.Driver, error)

New creates a new POSIX storage.

  - path is a directory in which the log should be stored.
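A minimal usage sketch; the import path is assumed from the Tessera repository layout, and the log path is illustrative:

package main

import (
	"context"
	"log"

	"github.com/transparency-dev/tessera/storage/posix"
)

func main() {
	ctx := context.Background()
	// The directory must live on a strongly POSIX-compliant filesystem.
	driver, err := posix.New(ctx, "/var/lib/mylog")
	if err != nil {
		log.Fatalf("posix.New: %v", err)
	}
	// The driver is then handed to the tessera package to construct an appender.
	_ = driver
}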

Types

type MigrationStorage

type MigrationStorage struct {
	// contains filtered or unexported fields
}

MigrationStorage implements the tessera.MigrationTarget lifecycle contract.

func (*MigrationStorage) AwaitIntegration

func (m *MigrationStorage) AwaitIntegration(ctx context.Context, sourceSize uint64) ([]byte, error)

func (*MigrationStorage) IntegratedSize

func (m *MigrationStorage) IntegratedSize(_ context.Context) (uint64, error)

func (*MigrationStorage) SetEntryBundle

func (m *MigrationStorage) SetEntryBundle(ctx context.Context, index uint64, partial uint8, bundle []byte) error
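An illustrative sketch of a migration loop using these methods (imports of context, log, and this package are assumed; bundle fetching is elided, and treating partial as 0 for full bundles is an assumption here):

// migrate copies pre-fetched entry bundles into the target storage and waits
// for integration. bundles holds the source log's entry bundles in index
// order; sourceSize is the source tree size.
func migrate(ctx context.Context, m *posix.MigrationStorage, bundles [][]byte, sourceSize uint64) error {
	for i, b := range bundles {
		// 0 marks a full bundle in this sketch.
		if err := m.SetEntryBundle(ctx, uint64(i), 0, b); err != nil {
			return err
		}
	}
	// Block until all entries up to sourceSize are integrated into the tree,
	// then report the resulting root hash.
	root, err := m.AwaitIntegration(ctx, sourceSize)
	if err != nil {
		return err
	}
	log.Printf("migrated tree root: %x", root)
	return nil
}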

type NewTreeFunc

type NewTreeFunc func(size uint64, root []byte) error

NewTreeFunc is the signature of a function which receives information about newly integrated trees.
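For example, a trivial NewTreeFunc which just records each newly integrated tree state:

// logTree is an illustrative NewTreeFunc implementation.
var logTree posix.NewTreeFunc = func(size uint64, root []byte) error {
	log.Printf("new tree: size=%d root=%x", size, root)
	return nil
}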

type Storage

type Storage struct {
	// contains filtered or unexported fields
}

Storage implements storage functions for a POSIX filesystem. It leverages the POSIX atomic operations where needed.

func (*Storage) Appender

func (*Storage) MigrationWriter

MigrationWriter creates a new POSIX storage for the MigrationTarget lifecycle mode.

Directories

Path	Synopsis
badger	Package badger provides a Tessera persistent antispam driver based on BadgerDB (https://github.com/hypermodeinc/badger), a high-performance pure-go DB with KV support.
