package store

v0.43.2-log-parallel-p...

Published: Oct 30, 2025 License: AGPL-3.0 Imports: 16 Imported by: 1

Documentation

Index

Constants

const DefaultCacheSize = uint(1000)

const DefaultChunkQueuesCacheSize = uint(1000)

const JobQueueChunksQueue = "JobQueueChunksQueue"

Variables

var DefaultEpochProtocolStateCacheSize uint = 20

DefaultEpochProtocolStateCacheSize is the default size for primary epoch protocol state entry cache. Minimally, we have 3 entries per epoch (one on epoch Switchover, one on receiving the Epoch Setup and one when seeing the Epoch Commit event). Let's be generous and assume we have 20 different epoch state entries per epoch.

var DefaultProtocolKVStoreByBlockIDCacheSize uint = 1000

DefaultProtocolKVStoreByBlockIDCacheSize is the default value for secondary index `byBlockIdCache`. We want to be able to cover a broad interval of views without cache misses, so we use a bigger value. Generally, many blocks will reference the same KV store snapshot.

var DefaultProtocolKVStoreCacheSize uint = 10

DefaultProtocolKVStoreCacheSize is the default size for primary protocol KV store cache. KV store is rarely updated, so we will have a limited number of unique snapshots. Let's be generous and assume we have 10 different KV stores used at the same time.

var DefaultProtocolStateIndexCacheSize uint = 1000

DefaultProtocolStateIndexCacheSize is the default value for secondary byBlockIdCache. We want to be able to cover a broad interval of views without cache misses, so we use a bigger value.

Functions

func FirstIDFromTwoIdentifier added in v0.43.3

func FirstIDFromTwoIdentifier(key TwoIdentifier) flow.Identifier

func IDFromIdentifierAndUint32 added in v0.43.3

func IDFromIdentifierAndUint32(key IdentifierAndUint32) flow.Identifier

func KeyToBlockIDIndex added in v0.39.4

func KeyToBlockIDIndex(key IdentifierAndUint32) (flow.Identifier, uint32)

func KeyToBlockIDTransactionID added in v0.39.4

func KeyToBlockIDTransactionID(key TwoIdentifier) (flow.Identifier, flow.Identifier)

Types

type All added in v0.43.0

type All struct {
	Headers            *Headers
	Guarantees         *Guarantees
	Seals              *Seals
	Index              *Index
	Payloads           *Payloads
	Blocks             *Blocks
	QuorumCertificates *QuorumCertificates
	Results            *ExecutionResults
	Receipts           *ExecutionReceipts
	Commits            *Commits

	EpochSetups               *EpochSetups
	EpochCommits              *EpochCommits
	EpochProtocolStateEntries *EpochProtocolStateEntries
	ProtocolKVStore           *ProtocolKVStore
	VersionBeacons            *VersionBeacons

	Transactions *Transactions
	Collections  *Collections
}

func InitAll added in v0.43.0

func InitAll(metrics module.CacheMetrics, db storage.DB) *All

type Blocks added in v0.43.0

type Blocks struct {
	// contains filtered or unexported fields
}

Blocks implements a simple block storage around a badger DB.

func NewBlocks added in v0.43.0

func NewBlocks(db storage.DB, headers *Headers, payloads *Payloads) *Blocks

NewBlocks creates a Blocks instance, which stores blocks in the given database using the provided Headers and Payloads storages.

func (*Blocks) BatchStore added in v0.43.0

func (b *Blocks) BatchStore(lctx lockctx.Proof, rw storage.ReaderBatchWriter, proposal *flow.Proposal) error

BatchStore stores a valid block in a batch.

Expected errors during normal operations:

  • storage.ErrAlreadyExists if some block with the same ID has already been stored

func (*Blocks) ByCollectionID added in v0.43.0

func (b *Blocks) ByCollectionID(collID flow.Identifier) (*flow.Block, error)

ByCollectionID returns the block for the given flow.CollectionGuarantee ID. This method is only available for collections included in finalized blocks. While consensus nodes verify that collections are not repeated within the same fork, each different fork can contain a recent collection once. Therefore, we must wait for finality. CAUTION: this method is not backed by a cache and therefore comparatively slow!

Error returns:

  • storage.ErrNotFound if the collection ID was not found
  • generic error in case of unexpected failure from the database layer, or failure to decode an existing database value
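
A hedged usage sketch of this error contract (imports elided; blockForCollection is a hypothetical caller, not part of this package):

// blockForCollection illustrates handling the documented error returns of
// ByCollectionID: absence is expected, anything else is an exception.
func blockForCollection(blocks *store.Blocks, collID flow.Identifier) (*flow.Block, error) {
    block, err := blocks.ByCollectionID(collID)
    if errors.Is(err, storage.ErrNotFound) {
        // The collection is not (yet) part of a finalized block; per the
        // caution above, callers must wait for finality.
        return nil, err
    }
    if err != nil {
        return nil, fmt.Errorf("unexpected failure retrieving block by collection: %w", err)
    }
    return block, nil
}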

func (*Blocks) ByHeight added in v0.43.0

func (b *Blocks) ByHeight(height uint64) (*flow.Block, error)

ByHeight returns the block at the given height. It is only available for finalized blocks.

Error returns:

  • storage.ErrNotFound if no block for the corresponding height was found
  • generic error in case of unexpected failure from the database layer, or failure to decode an existing database value

func (*Blocks) ByID added in v0.43.0

func (b *Blocks) ByID(blockID flow.Identifier) (*flow.Block, error)

ByID returns the block with the given hash. It is available for all incorporated blocks (validated blocks that have been appended to any of the known forks) no matter whether the block has been finalized or not.

Error returns:

  • storage.ErrNotFound if no block with the corresponding ID was found
  • generic error in case of unexpected failure from the database layer, or failure to decode an existing database value

func (*Blocks) ByView added in v0.43.0

func (b *Blocks) ByView(view uint64) (*flow.Block, error)

ByView returns the block with the given view. It is only available for certified blocks. Certified blocks are the blocks that have received a QC. HotStuff guarantees that for each view, at most one block is certified. Hence, the return value of `ByView` is guaranteed to be unique even for non-finalized blocks. Expected errors during normal operations:

  • `storage.ErrNotFound` if no certified block is known at given view.

func (*Blocks) IndexBlockContainingCollectionGuarantees added in v0.43.0

func (b *Blocks) IndexBlockContainingCollectionGuarantees(blockID flow.Identifier, guaranteeIDs []flow.Identifier) error

IndexBlockContainingCollectionGuarantees populates an index `guaranteeID->blockID` for each guarantee which appears in the block. CAUTION: a collection can be included in multiple *unfinalized* blocks. However, the implementation assumes a one-to-one map from collection ID to a *single* block ID. This holds for FINALIZED BLOCKS ONLY *and* only in the absence of byzantine collector clusters (which the mature protocol must tolerate). Hence, this function should be treated as a temporary solution, which requires generalization (one-to-many mapping) for soft finality and the mature protocol.

Error returns:

  • generic error in case of unexpected failure from the database layer or encoding failure.

func (*Blocks) ProposalByHeight added in v0.43.0

func (b *Blocks) ProposalByHeight(height uint64) (*flow.Proposal, error)

ProposalByHeight returns the block at the given height, along with the proposer's signature on it. It is only available for finalized blocks.

Error returns:

  • storage.ErrNotFound if no block proposal for the corresponding height was found
  • generic error in case of unexpected failure from the database layer, or failure to decode an existing database value

func (*Blocks) ProposalByID added in v0.43.0

func (b *Blocks) ProposalByID(blockID flow.Identifier) (*flow.Proposal, error)

ProposalByID returns the block with the given ID, along with the proposer's signature on it. It is available for all incorporated blocks (validated blocks that have been appended to any of the known forks) no matter whether the block has been finalized or not.

Error returns:

  • storage.ErrNotFound if no block with the corresponding ID was found
  • generic error in case of unexpected failure from the database layer, or failure to decode an existing database value

func (*Blocks) ProposalByView added in v0.43.0

func (b *Blocks) ProposalByView(view uint64) (*flow.Proposal, error)

ProposalByView returns the block proposal with the given view. It is only available for certified blocks.

Expected errors during normal operations:

  • `storage.ErrNotFound` if no certified block is known at given view.

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(r storage.Reader, key K) (V, error)

Get will try to retrieve the resource from the cache first, and then from the underlying storage via the injected retrieve function. During normal operations, the following error returns are expected:

  • `storage.ErrNotFound` if key is unknown.
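
For intuition, a minimal read-through sketch of the behavior described above, using a plain map as a stand-in for the cache's actual internals:

// readThrough mimics Cache.Get: consult the in-memory cache first, fall back
// to the injected retrieve function on a miss, and cache the result.
func readThrough[K comparable, V any](
    cache map[K]V,
    retrieve func(K) (V, error), // stand-in for retrieval from the underlying storage
    key K,
) (V, error) {
    if v, ok := cache[key]; ok {
        return v, nil // cache hit
    }
    v, err := retrieve(key) // may return storage.ErrNotFound for unknown keys
    if err != nil {
        var zero V
        return zero, err
    }
    cache[key] = v // populate the cache for subsequent reads
    return v, nil
}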

func (*Cache[K, V]) Insert

func (c *Cache[K, V]) Insert(key K, resource V)

Insert adds a resource directly to the cache under the given key.

func (*Cache[K, V]) IsCached

func (c *Cache[K, V]) IsCached(key K) bool

IsCached returns true if the key exists in the cache. It DOES NOT check whether the key exists in the underlying data store.

func (*Cache[K, V]) PutTx

func (c *Cache[K, V]) PutTx(rw storage.ReaderBatchWriter, key K, resource V) error

PutTx adds a resource with the given key to the provided batch write, so that it is persisted and cached once the batch is committed.

func (*Cache[K, V]) PutWithLockTx added in v0.43.0

func (c *Cache[K, V]) PutWithLockTx(lctx lockctx.Proof, rw storage.ReaderBatchWriter, key K, resource V) error

func (*Cache[K, V]) Remove

func (c *Cache[K, V]) Remove(key K)

func (*Cache[K, V]) RemoveTx added in v0.41.0

func (c *Cache[K, V]) RemoveTx(rw storage.ReaderBatchWriter, key K) error

type ChunkDataPacks

type ChunkDataPacks struct {
	// contains filtered or unexported fields
}

ChunkDataPacks manages storage and retrieval of ChunkDataPacks, primarily serving the use case of EXECUTION NODES persisting and indexing chunk data packs for their OWN RESULTS. Essentially, the chunk describes a batch of work to be done, and the chunk data pack describes the result of that work. The storage of chunk data packs is segregated across different storage components for efficiency and modularity reasons:

  1. Usually (ignoring the system chunk for a moment), the batch of work is given by the collection referenced in the chunk data pack. For any chunk data pack being stored, we assume that the executed collection has *previously* been persisted in storage.Collections. It is useful to persist the collections individually, so we can individually retrieve them.
  2. The actual chunk data pack itself is stored in a dedicated storage component `cdpStorage`. Note that for this storage component, no atomicity is required, as we are storing chunk data packs by their collision-resistant hashes, so different chunk data packs will be stored under different keys. Theoretically, nodes could persist multiple different (disagreeing) chunk data packs for the same chunk in this step. However, for efficiency, Execution Nodes only store their own chunk data packs.
  3. The index mapping from ChunkID to chunkDataPackID is stored in the protocol database for fast retrieval. This index is intended to be populated by execution nodes when they commit to a specific result represented by the chunk data pack. Here, we require atomicity, as an execution node should not be changing / overwriting which chunk data pack it committed to (during normal operations).

Since the executed collections are stored separately (step 1, above), we can just use the collection ID in the context of the chunk data pack storage (step 2, above). Therefore, we utilize the reduced representation storage.StoredChunkDataPack internally. While this removes redundant data from storage, it takes 3 look-ups to return a chunk data pack by chunk ID:

i. a lookup for chunkID -> chunkDataPackID
ii. a lookup for chunkDataPackID -> StoredChunkDataPack (only has CollectionID, no collection data)
iii. a lookup for CollectionID -> Collection, then reconstruct the chunk data pack from the collection and the StoredChunkDataPack
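
A schematic of these three look-ups, with maps standing in for the actual storage components (all identifiers below are illustrative stand-ins, not this package's types):

type id = [32]byte // stand-in for flow.Identifier

type storedCDP struct { // reduced representation (storage.StoredChunkDataPack)
    ChunkID      id
    CollectionID id // the collection itself lives in storage.Collections
}

// byChunkID sketches ByChunkID: resolve the index, load the reduced
// representation, then re-attach the separately stored collection.
func byChunkID(
    chunkToCDP map[id]id,          // (i)   chunkID -> chunkDataPackID (protocol DB)
    cdps map[id]storedCDP,         // (ii)  chunkDataPackID -> StoredChunkDataPack
    collections map[id][]byte,     // (iii) collectionID -> collection data
    chunkID id,
) (storedCDP, []byte, bool) {
    cdpID, ok := chunkToCDP[chunkID]
    if !ok {
        return storedCDP{}, nil, false
    }
    stored, ok := cdps[cdpID]
    if !ok {
        return storedCDP{}, nil, false
    }
    coll, ok := collections[stored.CollectionID]
    return stored, coll, ok // caller reconstructs the full chunk data pack
}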

func NewChunkDataPacks

func NewChunkDataPacks(collector module.CacheMetrics, db storage.DB, cdpStorage storage.StoredChunkDataPacks, collections storage.Collections, chunkIDToChunkDataPackIDCacheSize uint) *ChunkDataPacks

func (*ChunkDataPacks) BatchRemove

func (ch *ChunkDataPacks) BatchRemove(
	chunkIDs []flow.Identifier,
	protocolDBBatch storage.ReaderBatchWriter,
) ([]flow.Identifier, error)

BatchRemove removes multiple ChunkDataPacks with the given chunk IDs. It performs a two-phase removal:

  1. First phase: remove the index mappings from ChunkID to chunkDataPackID in the protocol database.
  2. Second phase: remove the chunk data packs (StoredChunkDataPack) by their hashes (chunkDataPackID) in the chunk data pack database.

Note: it does not remove the collections referred to by the chunk data packs. This method is useful for the rollback execution tool to batch remove chunk data packs associated with a set of blocks. No errors are expected during normal operation, even if no entries are matched.

func (*ChunkDataPacks) BatchRemoveChunkDataPacksOnly added in v0.43.3

func (ch *ChunkDataPacks) BatchRemoveChunkDataPacksOnly(chunkIDs []flow.Identifier, chunkDataPackBatch storage.ReaderBatchWriter) error

BatchRemoveChunkDataPacksOnly removes multiple ChunkDataPacks with the given chunk IDs from the chunk data pack database only. It does not remove the index mappings from ChunkID to chunkDataPackID in the protocol database. This method is useful for the runtime chunk data pack pruner to batch remove chunk data packs associated with a set of blocks. CAUTION: the chunk data pack batch is for the chunk data pack database only; DO NOT pass a batch writer for the protocol database. No errors are expected during normal operation, even if no entries are matched.

func (*ChunkDataPacks) ByChunkID

func (ch *ChunkDataPacks) ByChunkID(chunkID flow.Identifier) (*flow.ChunkDataPack, error)

ByChunkID returns the chunk data for the given chunk ID. It returns storage.ErrNotFound if no entry exists for the given chunk ID.

func (*ChunkDataPacks) Store

func (ch *ChunkDataPacks) Store(cs []*flow.ChunkDataPack) (
	func(lctx lockctx.Proof, protocolDBBatch storage.ReaderBatchWriter) error,
	error,
)

Store persists multiple ChunkDataPacks in a two-phase process:

  1. Store the chunk data packs (StoredChunkDataPack) by their hashes (chunkDataPackID) in the chunk data pack database.
  2. Populate the index mapping from ChunkID to chunkDataPackID in the protocol database.

Reasoning for two-phase approach: the chunk data pack and the other execution data are stored in different databases.

  • Chunk data pack content is stored in the chunk data pack database by its hash (ID). Conceptually, it would be possible to store multiple different (disagreeing) chunk data packs here. Each chunk data pack is stored using its own collision-resistant hash as key, so different chunk data packs will be stored under different keys. So from the perspective of the storage layer, we _could_ in phase 1 store all known chunk data packs. However, an Execution Node may only commit to a single chunk data pack (or it will get slashed). This mapping from chunk ID to the ID of the chunk data pack that the Execution Node actually committed to is stored in the protocol database, in the following phase 2.
  • In the second phase, we populate the index mappings from ChunkID to one "distinguished" chunk data pack ID. This mapping is stored in the protocol database. Typically, an Execution Node uses this for indexing its own chunk data packs which it publicly committed to.

ATOMICITY: ChunkDataPacks.Store executes phase 1 immediately, persisting the chunk data packs in their dedicated database. However, writing the index mappings in phase 2 is deferred to the caller, who must invoke the returned functor to perform it. This approach has the following benefits:

  • Our API reflects that we are writing to two different databases here, with the chunk data pack database containing largely specialized data subject to pruning. In contrast, the protocol database persists the commitments a node makes (subject to slashing). The caller receives the ability to persist this commitment in the form of the returned functor. The functor may be discarded by the caller without corrupting the state (if anything, we have just stored some additional chunk data packs).
  • The serialization and storage of the comparatively large chunk data packs is separated from the protocol database writes.
  • The locking duration of the protocol database is reduced.

The Store method returns:

  • func(lctx lockctx.Proof, rw storage.ReaderBatchWriter) error: Function for populating the index mapping from chunkID to chunk data pack ID in the protocol database. This mapping persists that the Execution Node committed to the result represented by this chunk data pack. This function returns storage.ErrDataMismatch when a _different_ chunk data pack ID for the same chunk ID has already been stored (changing which result an Execution Node committed to would be a slashable protocol violation). The caller must acquire storage.LockInsertChunkDataPack and hold it until the database write has been committed.
  • error: No error should be returned during normal operation. Any error indicates a failure in the first phase.
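
A hedged sketch of the resulting two-phase calling pattern (assumes the caller has already acquired storage.LockInsertChunkDataPack, witnessed by lctx; batch creation and commit plumbing are elided; storeAndCommit is a hypothetical helper):

func storeAndCommit(
    cdps *store.ChunkDataPacks,
    cs []*flow.ChunkDataPack,
    lctx lockctx.Proof, // caller holds storage.LockInsertChunkDataPack
    protocolDBBatch storage.ReaderBatchWriter,
) error {
    // Phase 1 happens inside Store: the packs are persisted immediately in
    // the chunk data pack database.
    storeIndex, err := cdps.Store(cs)
    if err != nil {
        return err
    }
    // Phase 2 is deferred to the caller via the returned functor: persist
    // the chunkID -> chunkDataPackID commitment in the protocol database.
    // storage.ErrDataMismatch here means a different pack was already committed.
    return storeIndex(lctx, protocolDBBatch)
}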

type ChunksQueue added in v0.39.1

type ChunksQueue struct {
	// contains filtered or unexported fields
}

ChunksQueue stores a queue of chunk locators that are assigned to me to verify. Job consumers can read the locators as jobs from the queue by index. Chunk locators stored in this queue are unique.

func NewChunkQueue added in v0.39.1

func NewChunkQueue(collector module.CacheMetrics, db storage.DB) *ChunksQueue

NewChunkQueue will initialize the underlying badger database of the chunk locator queue.

func (*ChunksQueue) AtIndex added in v0.39.1

func (q *ChunksQueue) AtIndex(index uint64) (*chunks.Locator, error)

AtIndex returns the chunk locator stored at the given index in the queue.

func (*ChunksQueue) Init added in v0.39.1

func (q *ChunksQueue) Init(defaultIndex uint64) (bool, error)

Init initializes chunk queue's latest index with the given default index. It returns (false, nil) if the chunk queue is already initialized. It returns (true, nil) if the chunk queue is successfully initialized.

func (*ChunksQueue) LatestIndex added in v0.39.1

func (q *ChunksQueue) LatestIndex() (uint64, error)

LatestIndex returns the index of the latest chunk locator stored in the queue.

func (*ChunksQueue) StoreChunkLocator added in v0.39.1

func (q *ChunksQueue) StoreChunkLocator(locator *chunks.Locator) (bool, error)

StoreChunkLocator stores a new chunk locator that is assigned to me in the job queue. It returns true if the locator was new, and false if it was a duplicate.
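
A hedged end-to-end sketch of the queue's producer/consumer contract, based only on the signatures above (enqueueAndRead is a hypothetical helper):

func enqueueAndRead(q *store.ChunksQueue, locator *chunks.Locator) (*chunks.Locator, error) {
    // Initialize the latest index once; returns (false, nil) if already initialized.
    if _, err := q.Init(0); err != nil {
        return nil, err
    }
    // Producer side: stored == false means the locator was a duplicate.
    if stored, err := q.StoreChunkLocator(locator); err != nil || !stored {
        return nil, err
    }
    // Consumer side: read jobs by index, up to LatestIndex.
    latest, err := q.LatestIndex()
    if err != nil {
        return nil, err
    }
    return q.AtIndex(latest)
}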

type ClusterBlocks added in v0.43.0

type ClusterBlocks struct {
	// contains filtered or unexported fields
}

ClusterBlocks implements a simple block storage around a badger DB.

func NewClusterBlocks added in v0.43.0

func NewClusterBlocks(db storage.DB, chainID flow.ChainID, headers *Headers, payloads *ClusterPayloads) *ClusterBlocks

func (*ClusterBlocks) ProposalByHeight added in v0.43.0

func (b *ClusterBlocks) ProposalByHeight(height uint64) (*cluster.Proposal, error)

func (*ClusterBlocks) ProposalByID added in v0.43.0

func (b *ClusterBlocks) ProposalByID(blockID flow.Identifier) (*cluster.Proposal, error)

type ClusterPayloads added in v0.43.0

type ClusterPayloads struct {
	// contains filtered or unexported fields
}

ClusterPayloads implements storage of block payloads for collection node cluster consensus.

func NewClusterPayloads added in v0.43.0

func NewClusterPayloads(cacheMetrics module.CacheMetrics, db storage.DB) *ClusterPayloads

func (*ClusterPayloads) ByBlockID added in v0.43.0

func (cp *ClusterPayloads) ByBlockID(blockID flow.Identifier) (*cluster.Payload, error)

type Collections added in v0.39.4

type Collections struct {
	// contains filtered or unexported fields
}

func NewCollections added in v0.39.4

func NewCollections(db storage.DB, transactions *Transactions) *Collections

func (*Collections) BatchStoreAndIndexByTransaction added in v0.43.0

func (c *Collections) BatchStoreAndIndexByTransaction(lctx lockctx.Proof, collection *flow.Collection, rw storage.ReaderBatchWriter) (*flow.LightCollection, error)

BatchStoreAndIndexByTransaction stores a collection and indexes it by transaction ID within a batch.

CAUTION: current approach is NOT BFT and needs to be revised in the future. Honest clusters ensure a transaction can only belong to one collection. However, in rare cases, the collector clusters can exceed byzantine thresholds -- making it possible to produce multiple finalized collections (aka guaranteed collections) containing the same transaction repeatedly. TODO: eventually we need to handle Byzantine clusters

No errors are expected during normal operation.

func (*Collections) ByID added in v0.39.4

func (c *Collections) ByID(colID flow.Identifier) (*flow.Collection, error)

ByID returns the collection with the given ID, including all transactions within the collection.

Expected errors during normal operation:

  • `storage.ErrNotFound` if no light collection was found.

func (*Collections) LightByID added in v0.39.4

func (c *Collections) LightByID(colID flow.Identifier) (*flow.LightCollection, error)

LightByID returns a reduced representation of the collection with the given ID. The reduced collection references the constituent transactions by their hashes.

Expected errors during normal operation:

  • `storage.ErrNotFound` if no light collection was found.

func (*Collections) LightByTransactionID added in v0.39.4

func (c *Collections) LightByTransactionID(txID flow.Identifier) (*flow.LightCollection, error)

LightByTransactionID returns a reduced representation of the collection holding the given transaction ID. The reduced collection references the constituent transactions by their hashes.

Expected errors during normal operation:

  • `storage.ErrNotFound` if no light collection was found.

func (*Collections) Remove added in v0.39.4

func (c *Collections) Remove(colID flow.Identifier) error

Remove removes a collection from the database, including all constituent transactions and indices inserted by Store. Remove does not error if the collection does not exist. Note: this method should only be called for collections included in blocks below the sealed height. No errors are expected during normal operation.

func (*Collections) Store added in v0.39.4

func (c *Collections) Store(collection *flow.Collection) (*flow.LightCollection, error)

Store stores a collection in the database. Any errors returned are exceptions.

func (*Collections) StoreAndIndexByTransaction added in v0.43.0

func (c *Collections) StoreAndIndexByTransaction(lctx lockctx.Proof, collection *flow.Collection) (*flow.LightCollection, error)

StoreAndIndexByTransaction stores a collection and indexes it by transaction ID. It's concurrent-safe.

CAUTION: current approach is NOT BFT and needs to be revised in the future. Honest clusters ensure a transaction can only belong to one collection. However, in rare cases, the collector clusters can exceed byzantine thresholds -- making it possible to produce multiple finalized collections (aka guaranteed collections) containing the same transaction repeatedly. TODO: eventually we need to handle Byzantine clusters

No errors are expected during normal operation.

type Commits added in v0.39.4

type Commits struct {
	// contains filtered or unexported fields
}

func NewCommits added in v0.39.4

func NewCommits(collector module.CacheMetrics, db storage.DB) *Commits

func (*Commits) BatchRemoveByBlockID added in v0.39.4

func (c *Commits) BatchRemoveByBlockID(blockID flow.Identifier, rw storage.ReaderBatchWriter) error

BatchRemoveByBlockID removes the Commit keyed by blockID in the provided batch. No errors are expected during normal operation, even if no entries are matched. If the database unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*Commits) BatchStore added in v0.39.4

func (c *Commits) BatchStore(lctx lockctx.Proof, blockID flow.Identifier, commit flow.StateCommitment, rw storage.ReaderBatchWriter) error

BatchStore stores the Commit keyed by blockID in the provided batch. No errors are expected during normal operation. If the database unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*Commits) ByBlockID added in v0.39.4

func (c *Commits) ByBlockID(blockID flow.Identifier) (flow.StateCommitment, error)

func (*Commits) RemoveByBlockID added in v0.39.4

func (c *Commits) RemoveByBlockID(blockID flow.Identifier) error

type ComputationResultUploadStatus added in v0.39.2

type ComputationResultUploadStatus struct {
	// contains filtered or unexported fields
}

func NewComputationResultUploadStatus added in v0.39.2

func NewComputationResultUploadStatus(db storage.DB) *ComputationResultUploadStatus

func (*ComputationResultUploadStatus) ByID added in v0.39.2

func (c *ComputationResultUploadStatus) ByID(computationResultID flow.Identifier) (bool, error)

func (*ComputationResultUploadStatus) GetIDsByUploadStatus added in v0.39.2

func (c *ComputationResultUploadStatus) GetIDsByUploadStatus(targetUploadStatus bool) ([]flow.Identifier, error)

func (*ComputationResultUploadStatus) Remove added in v0.39.2

func (c *ComputationResultUploadStatus) Remove(computationResultID flow.Identifier) error

func (*ComputationResultUploadStatus) Upsert added in v0.39.2

func (c *ComputationResultUploadStatus) Upsert(blockID flow.Identifier,
	wasUploadCompleted bool) error

type ConsumerProgressInitializer

type ConsumerProgressInitializer struct {
	// contains filtered or unexported fields
}

ConsumerProgressInitializer is a helper to initialize the consumer progress index in storage. It prevents the consumer from being used before initialization.

func NewConsumerProgress

func NewConsumerProgress(db storage.DB, consumer string) *ConsumerProgressInitializer

func (*ConsumerProgressInitializer) Initialize

func (cpi *ConsumerProgressInitializer) Initialize(defaultIndex uint64) (storage.ConsumerProgress, error)
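
A brief usage sketch (reusing this package's JobQueueChunksQueue constant as the consumer name; the default index 0 is an arbitrary example):

// consumerProgress is a hypothetical helper: the initializer pattern ensures
// the progress index exists before a storage.ConsumerProgress is handed out.
func consumerProgress(db storage.DB) (storage.ConsumerProgress, error) {
    cpi := store.NewConsumerProgress(db, store.JobQueueChunksQueue)
    return cpi.Initialize(0) // default index used only on first initialization
}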

type EpochCommits added in v0.43.0

type EpochCommits struct {
	// contains filtered or unexported fields
}

func NewEpochCommits added in v0.43.0

func NewEpochCommits(collector module.CacheMetrics, db storage.DB) *EpochCommits

func (*EpochCommits) BatchStore added in v0.43.0

func (ec *EpochCommits) BatchStore(rw storage.ReaderBatchWriter, commit *flow.EpochCommit) error

func (*EpochCommits) ByID added in v0.43.0

func (ec *EpochCommits) ByID(commitID flow.Identifier) (*flow.EpochCommit, error)

ByID will return the EpochCommit event by its ID. Error returns:

  • storage.ErrNotFound if no EpochCommit with the ID exists

type EpochProtocolStateEntries added in v0.43.0

type EpochProtocolStateEntries struct {
	// contains filtered or unexported fields
}

EpochProtocolStateEntries implements a persistent, fork-aware storage for the Epoch-related sub-states of the overall Protocol State (KV Store). It uses an embedded cache, which is populated on first retrieval, to speed up access to frequently used epoch sub-states.

func NewEpochProtocolStateEntries added in v0.43.0

func NewEpochProtocolStateEntries(collector module.CacheMetrics,
	epochSetups storage.EpochSetups,
	epochCommits storage.EpochCommits,
	db storage.DB,
	stateCacheSize uint,
	stateByBlockIDCacheSize uint,
) *EpochProtocolStateEntries

NewEpochProtocolStateEntries creates an EpochProtocolStateEntries instance, which stores a subset of the state stored by the Dynamic Protocol State. It supports storing, caching and retrieving by ID or by the additionally indexed block ID.

func (*EpochProtocolStateEntries) BatchIndex added in v0.43.0

func (s *EpochProtocolStateEntries) BatchIndex(lctx lockctx.Proof, rw storage.ReaderBatchWriter, blockID flow.Identifier, epochProtocolStateEntryID flow.Identifier) error

BatchIndex persists the specific map entry in the node's database. In a nutshell, we want to maintain a map from `blockID` to `epochStateEntry`, where `blockID` references the block that _proposes_ the referenced epoch protocol state entry. Protocol convention:

  • Consider block B, whose ingestion might potentially lead to an updated protocol state. For example, the protocol state changes if we seal some execution results emitting service events.
  • For the key `blockID`, we use the identity of block B which _proposes_ this Protocol State. As value, the hash of the resulting protocol state at the end of processing B is to be used.
  • CAUTION: The protocol state requires confirmation by a QC and will only become active at the child block, _after_ validating the QC.

No errors are expected during normal operation.

func (*EpochProtocolStateEntries) BatchStore added in v0.43.0

func (s *EpochProtocolStateEntries) BatchStore(w storage.Writer, epochProtocolStateEntryID flow.Identifier, epochStateEntry *flow.MinEpochStateEntry) error

BatchStore persists the given epoch protocol state entry as part of a DB batch. Per convention, the identities in the flow.MinEpochStateEntry must be in canonical order for the current and next epoch (if present), otherwise an exception is returned.

CAUTION: The caller must ensure `epochProtocolStateID` is a collision-resistant hash of the provided `epochProtocolStateEntry`! This method silently overrides existing data, which is safe only if for the same key, we always write the same value.

No errors are expected during normal operation.

func (*EpochProtocolStateEntries) ByBlockID added in v0.43.0

ByBlockID retrieves the epoch protocol state entry that the block with the given ID proposes. CAUTION: this protocol state requires confirmation by a QC and will only become active at the child block, _after_ validating the QC. Protocol convention:

  • Consider block B, whose ingestion might potentially lead to an updated protocol state. For example, the protocol state changes if we seal some execution results emitting service events.
  • For the key `blockID`, we use the identity of block B which _proposes_ this Protocol State. As value, the hash of the resulting protocol state at the end of processing B is to be used.
  • CAUTION: The protocol state requires confirmation by a QC and will only become active at the child block, _after_ validating the QC.

Expected errors during normal operations:

  • storage.ErrNotFound if no state entry has been indexed for the given block.

func (*EpochProtocolStateEntries) ByID added in v0.43.0

func (s *EpochProtocolStateEntries) ByID(epochProtocolStateEntryID flow.Identifier) (*flow.RichEpochStateEntry, error)

ByID returns the epoch protocol state entry by its ID. Expected errors during normal operations:

  • storage.ErrNotFound if no protocol state with the given Identifier is known.

type EpochSetups added in v0.43.0

type EpochSetups struct {
	// contains filtered or unexported fields
}

func NewEpochSetups added in v0.43.0

func NewEpochSetups(collector module.CacheMetrics, db storage.DB) *EpochSetups

NewEpochSetups instantiates a new EpochSetups storage.

func (*EpochSetups) BatchStore added in v0.43.0

func (es *EpochSetups) BatchStore(rw storage.ReaderBatchWriter, setup *flow.EpochSetup) error

No errors are expected during normal operation.

func (*EpochSetups) ByID added in v0.43.0

func (es *EpochSetups) ByID(setupID flow.Identifier) (*flow.EpochSetup, error)

ByID will return the EpochSetup event by its ID. Error returns:

  • storage.ErrNotFound if no EpochSetup with the ID exists

type Events added in v0.39.4

type Events struct {
	// contains filtered or unexported fields
}

func NewEvents added in v0.39.4

func NewEvents(collector module.CacheMetrics, db storage.DB) *Events

func (*Events) BatchRemoveByBlockID added in v0.39.4

func (e *Events) BatchRemoveByBlockID(blockID flow.Identifier, rw storage.ReaderBatchWriter) error

BatchRemoveByBlockID removes events keyed by a blockID in the provided batch. No errors are expected during normal operation, even if no entries are matched. If Badger unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*Events) BatchStore added in v0.39.4

func (e *Events) BatchStore(blockID flow.Identifier, blockEvents []flow.EventsList, batch storage.ReaderBatchWriter) error

BatchStore stores events keyed by a blockID in the provided batch. No errors are expected during normal operation, but it may return a generic error if Badger fails to process the request.

func (*Events) ByBlockID added in v0.39.4

func (e *Events) ByBlockID(blockID flow.Identifier) ([]flow.Event, error)

ByBlockID returns the events for the given block ID. Note: this method will return an empty slice and no error if no entries for the blockID are found.
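
Because absence yields an empty slice rather than an error, callers cannot distinguish "block emitted no events" from "block ID unknown" via this method alone. A minimal sketch (imports elided; printEvents is hypothetical):

func printEvents(events *store.Events, blockID flow.Identifier) error {
    evts, err := events.ByBlockID(blockID) // err signals unexpected failures only
    if err != nil {
        return err
    }
    // len(evts) == 0 may mean "no events for this block" OR "no entries for
    // this block ID at all" — the API does not distinguish the two.
    for _, ev := range evts {
        fmt.Println(ev.Type)
    }
    return nil
}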

func (*Events) ByBlockIDEventType added in v0.39.4

func (e *Events) ByBlockIDEventType(blockID flow.Identifier, eventType flow.EventType) ([]flow.Event, error)

ByBlockIDEventType returns the events for the given block ID and event type. Note: this method will return an empty slice and no error if no entries for the blockID are found.

func (*Events) ByBlockIDTransactionID added in v0.39.4

func (e *Events) ByBlockIDTransactionID(blockID flow.Identifier, txID flow.Identifier) ([]flow.Event, error)

ByBlockIDTransactionID returns the events for the given block ID and transaction ID. Note: this method will return an empty slice and no error if no entries for the blockID are found.

func (*Events) ByBlockIDTransactionIndex added in v0.39.4

func (e *Events) ByBlockIDTransactionIndex(blockID flow.Identifier, txIndex uint32) ([]flow.Event, error)

ByBlockIDTransactionIndex returns the events for the given block ID and transaction index. Note: this method will return an empty slice and no error if no entries for the blockID are found.

func (*Events) RemoveByBlockID added in v0.39.4

func (e *Events) RemoveByBlockID(blockID flow.Identifier) error

RemoveByBlockID removes events by block ID.

func (*Events) Store added in v0.39.4

func (e *Events) Store(blockID flow.Identifier, blockEvents []flow.EventsList) error

Store will store events for the given block ID.

type ExecutionForkEvidence added in v0.42.1

type ExecutionForkEvidence struct {
	// contains filtered or unexported fields
}

ExecutionForkEvidence represents persistent storage for execution fork evidence.

func NewExecutionForkEvidence added in v0.42.1

func NewExecutionForkEvidence(db storage.DB) *ExecutionForkEvidence

NewExecutionForkEvidence creates a new ExecutionForkEvidence store.

func (*ExecutionForkEvidence) Retrieve added in v0.42.1

Retrieve reads conflicting seals from the database. No error is returned if the database record doesn't exist. No errors are expected during normal operations.

func (*ExecutionForkEvidence) StoreIfNotExists added in v0.42.1

func (efe *ExecutionForkEvidence) StoreIfNotExists(lctx lockctx.Proof, conflictingSeals []*flow.IncorporatedResultSeal) error

StoreIfNotExists stores the given conflictingSeals to the database. This is a no-op if there is already a record in the database with the same key. The caller must hold the storage.LockInsertExecutionForkEvidence lock. No errors are expected during normal operations.

type ExecutionReceipts added in v0.39.4

type ExecutionReceipts struct {
	// contains filtered or unexported fields
}

ExecutionReceipts implements storage for execution receipts.

func NewExecutionReceipts added in v0.39.4

func NewExecutionReceipts(collector module.CacheMetrics, db storage.DB, results storage.ExecutionResults, cacheSize uint) *ExecutionReceipts

NewExecutionReceipts creates an ExecutionReceipts instance, which is a database of receipts supporting storing and indexing receipts by receipt ID and block ID.

func (*ExecutionReceipts) BatchStore added in v0.39.4

func (*ExecutionReceipts) ByBlockID added in v0.39.4

ByBlockID retrieves the list of execution receipts from storage.

No errors are expected during normal operations.

func (*ExecutionReceipts) ByID added in v0.39.4

func (*ExecutionReceipts) Store added in v0.39.4

func (r *ExecutionReceipts) Store(receipt *flow.ExecutionReceipt) error

type ExecutionResults added in v0.39.4

type ExecutionResults struct {
	// contains filtered or unexported fields
}

ExecutionResults implements persistent storage for execution results.

func NewExecutionResults added in v0.39.4

func NewExecutionResults(collector module.CacheMetrics, db storage.DB) *ExecutionResults

func (*ExecutionResults) BatchIndex added in v0.39.4

func (r *ExecutionResults) BatchIndex(blockID flow.Identifier, resultID flow.Identifier, batch storage.ReaderBatchWriter) error

func (*ExecutionResults) BatchRemoveIndexByBlockID added in v0.39.4

func (r *ExecutionResults) BatchRemoveIndexByBlockID(blockID flow.Identifier, batch storage.ReaderBatchWriter) error

BatchRemoveIndexByBlockID removes blockID-to-executionResultID index entries keyed by blockID in a provided batch. No errors are expected during normal operation, even if no entries are matched. If Badger unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*ExecutionResults) BatchStore added in v0.39.4

func (r *ExecutionResults) BatchStore(result *flow.ExecutionResult, batch storage.ReaderBatchWriter) error

func (*ExecutionResults) ByBlockID added in v0.39.4

func (r *ExecutionResults) ByBlockID(blockID flow.Identifier) (*flow.ExecutionResult, error)

func (*ExecutionResults) ByID added in v0.39.4

func (r *ExecutionResults) ByID(resultID flow.Identifier) (*flow.ExecutionResult, error)

func (*ExecutionResults) ForceIndex added in v0.39.4

func (r *ExecutionResults) ForceIndex(blockID flow.Identifier, resultID flow.Identifier) error

func (*ExecutionResults) Index added in v0.39.4

func (r *ExecutionResults) Index(blockID flow.Identifier, resultID flow.Identifier) error

Index indexes an execution result by block ID. Note: this method is not concurrency-safe, because it checks whether a different result is already indexed for the same blockID and, if so, returns an error. The caller must ensure there are no concurrent calls to this method with the same blockID.

func (*ExecutionResults) RemoveIndexByBlockID added in v0.39.4

func (r *ExecutionResults) RemoveIndexByBlockID(blockID flow.Identifier) error

func (*ExecutionResults) Store added in v0.39.4

func (r *ExecutionResults) Store(result *flow.ExecutionResult) error

type GroupCache added in v0.43.3

type GroupCache[G comparable, K comparable, V any] struct {
	Cache[K, V]
}

GroupCache extends Cache with a primary index on K and a secondary index on G, which can be used to remove multiple cached items efficiently. A common use case of GroupCache is to cache data by a concatenated key (block ID and transaction ID) for faster retrieval, and to remove cached items by the first key (block ID). Although it can be useful for G to be a prefix of K, G can be anything comparable (it doesn't need to be a prefix).

func (*GroupCache[G, K, V]) RemoveGroup added in v0.43.3

func (c *GroupCache[G, K, V]) RemoveGroup(group G) int

RemoveGroup removes all cached items associated with the given group.

func (*GroupCache[G, K, V]) RemoveGroups added in v0.43.3

func (c *GroupCache[G, K, V]) RemoveGroups(groups []G) int

RemoveGroups removes all cached items associated with the given groups. RemoveGroups should be used when removing multiple groups, to reduce the number of times the cache is locked.
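
A minimal stand-in illustrating the grouped-eviction idea (not this package's actual implementation): a secondary index from group to member keys lets a whole group be evicted without scanning every cached key.

type groupCache[G comparable, K comparable, V any] struct {
    items  map[K]V
    groups map[G][]K // secondary index: group -> member keys
}

func (c *groupCache[G, K, V]) insert(group G, key K, v V) {
    c.items[key] = v
    c.groups[group] = append(c.groups[group], key)
}

// removeGroup evicts all items of a group with one index lookup instead of
// scanning every key for a matching prefix.
func (c *groupCache[G, K, V]) removeGroup(group G) int {
    keys := c.groups[group]
    for _, k := range keys {
        delete(c.items, k)
    }
    delete(c.groups, group)
    return len(keys)
}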

type Guarantees added in v0.43.0

type Guarantees struct {
	// contains filtered or unexported fields
}

Guarantees implements persistent storage for collection guarantees.

func NewGuarantees added in v0.43.0

func NewGuarantees(
	collector module.CacheMetrics,
	db storage.DB,
	cacheSize uint,
	byCollectionIDCacheSize uint,
) *Guarantees

NewGuarantees creates a Guarantees instance, which stores collection guarantees. It supports storing, caching and retrieving by guaranteeID or the additionally indexed collection ID.

func (*Guarantees) ByCollectionID added in v0.43.0

func (g *Guarantees) ByCollectionID(collID flow.Identifier) (*flow.CollectionGuarantee, error)

ByCollectionID retrieves the collection guarantee by collection ID. Expected errors during normal operations:

  • storage.ErrNotFound if no collection guarantee for the given collection ID is known

func (*Guarantees) ByID added in v0.43.0

func (g *Guarantees) ByID(guaranteeID flow.Identifier) (*flow.CollectionGuarantee, error)

ByID returns the flow.CollectionGuarantee by its ID. Expected errors during normal operations:

  • storage.ErrNotFound if no collection guarantee with the given ID is known

type Headers added in v0.43.0

type Headers struct {
	// contains filtered or unexported fields
}

Headers implements a simple read-only header storage around a DB.

func NewHeaders added in v0.43.0

func NewHeaders(collector module.CacheMetrics, db storage.DB) *Headers

NewHeaders creates a Headers instance, which stores block headers. It supports storing, caching and retrieving by block ID and the additionally indexed by header height.

func (*Headers) BlockIDByHeight added in v0.43.0

func (h *Headers) BlockIDByHeight(height uint64) (flow.Identifier, error)

BlockIDByHeight returns the block ID that is finalized at the given height. It is an optimized version of `ByHeight` that skips retrieving the block. Expected errors during normal operations:

  • storage.ErrNotFound if no finalized block at the given height is known

func (*Headers) BlockIDByView added in v0.43.0

func (h *Headers) BlockIDByView(view uint64) (flow.Identifier, error)

BlockIDByView returns the block ID that is certified at the given view. It is an optimized version of `ByView` that skips retrieving the block. Expected errors during normal operations:

  • `storage.ErrNotFound` if no certified block is known at the given view.

NOTE: this method is not available until next spork (mainnet27) or a migration that builds the index.

func (*Headers) ByBlockID added in v0.43.0

func (h *Headers) ByBlockID(blockID flow.Identifier) (*flow.Header, error)

ByBlockID returns the header with the given ID. It is available for finalized blocks and those pending finalization. Error returns:

  • storage.ErrNotFound if no block header with the given ID exists

func (*Headers) ByHeight added in v0.43.0

func (h *Headers) ByHeight(height uint64) (*flow.Header, error)

ByHeight returns the block header with the given height. It is only available for finalized blocks. Error returns:

  • storage.ErrNotFound if no finalized block at the given height is known

func (*Headers) ByParentID added in v0.43.0

func (h *Headers) ByParentID(parentID flow.Identifier) ([]*flow.Header, error)

ByParentID finds all children for the given parent block. The returned headers might be unfinalized; if there is more than one, at least one of them has to be unfinalized. CAUTION: this method is not backed by a cache and therefore comparatively slow!

Expected error returns during normal operations:

func (*Headers) ByView added in v0.43.0

func (h *Headers) ByView(view uint64) (*flow.Header, error)

ByView returns the block with the given view. It is only available for certified blocks. Certified blocks are the blocks that have received a QC. HotStuff guarantees that for each view, at most one block is certified. Hence, the return value of `ByView` is guaranteed to be unique even for non-finalized blocks.

Expected errors during normal operations:

  • `storage.ErrNotFound` if no certified block is known at the given view.

func (*Headers) Exists added in v0.43.0

func (h *Headers) Exists(blockID flow.Identifier) (bool, error)

Exists returns true if a header with the given ID has been stored. No errors are expected during normal operation.

func (*Headers) FindHeaders added in v0.43.0

func (h *Headers) FindHeaders(filter func(header *flow.Header) bool) ([]flow.Header, error)

func (*Headers) ProposalByBlockID added in v0.43.0

func (h *Headers) ProposalByBlockID(blockID flow.Identifier) (*flow.ProposalHeader, error)

ProposalByBlockID returns the header with the given ID, along with the corresponding proposer signature. It is available for finalized blocks and those pending finalization. Error returns:

  • storage.ErrNotFound if no block proposal with the given ID exists

func (*Headers) RollbackExecutedBlock added in v0.43.0

func (h *Headers) RollbackExecutedBlock(header *flow.Header) error

RollbackExecutedBlock updates the executed block header to the given header. Intended to be used by Execution Nodes only, to roll back the executed block height. This method is NOT CONCURRENT SAFE; the caller should make sure to call it from a single thread.

type IdentifierAndUint32 added in v0.42.0

type IdentifierAndUint32 [flow.IdentifierLen + 4]byte

func KeyFromBlockIDIndex added in v0.39.4

func KeyFromBlockIDIndex(blockID flow.Identifier, txIndex uint32) IdentifierAndUint32
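
A sketch of how such a fixed-width composite key can be packed and unpacked. The byte layout and big-endian choice below are assumptions for illustration (big-endian keeps lexicographic key order aligned with numeric index order), not taken from this package's source; uses encoding/binary:

const identifierLen = 32 // matches flow.IdentifierLen

type identifier [identifierLen]byte
type identifierAndUint32 [identifierLen + 4]byte

// keyFromBlockIDIndex packs blockID followed by the 4-byte index.
func keyFromBlockIDIndex(blockID identifier, txIndex uint32) identifierAndUint32 {
    var key identifierAndUint32
    copy(key[:identifierLen], blockID[:])
    binary.BigEndian.PutUint32(key[identifierLen:], txIndex) // big-endian assumed
    return key
}

// keyToBlockIDIndex is the inverse, mirroring the package's KeyToBlockIDIndex.
func keyToBlockIDIndex(key identifierAndUint32) (identifier, uint32) {
    var blockID identifier
    copy(blockID[:], key[:identifierLen])
    return blockID, binary.BigEndian.Uint32(key[identifierLen:])
}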

type Index added in v0.43.0

type Index struct {
	// contains filtered or unexported fields
}

Index implements a simple read-only payload storage around a badger DB.

func NewIndex added in v0.43.0

func NewIndex(collector module.CacheMetrics, db storage.DB) *Index

func (*Index) ByBlockID added in v0.43.0

func (i *Index) ByBlockID(blockID flow.Identifier) (*flow.Index, error)

type LatestPersistedSealedResult added in v0.43.0

type LatestPersistedSealedResult struct {
	// contains filtered or unexported fields
}

LatestPersistedSealedResult tracks the most recently persisted sealed execution result processed by the Access ingestion engine.

func NewLatestPersistedSealedResult added in v0.43.0

func NewLatestPersistedSealedResult(
	progress storage.ConsumerProgress,
	headers storage.Headers,
	results storage.ExecutionResults,
) (*LatestPersistedSealedResult, error)

NewLatestPersistedSealedResult creates a new LatestPersistedSealedResult instance.

No errors are expected during normal operation.

func (*LatestPersistedSealedResult) BatchSet added in v0.43.0

func (l *LatestPersistedSealedResult) BatchSet(resultID flow.Identifier, height uint64, batch storage.ReaderBatchWriter) error

BatchSet updates the latest persisted sealed result in a batch operation. The resultID and height are added to the provided batch, and the local data is updated only after the batch is successfully committed.

No errors are expected during normal operation.

func (*LatestPersistedSealedResult) Latest added in v0.43.0

Latest returns the ID and height of the latest persisted sealed result.

type LightTransactionResults added in v0.39.4

type LightTransactionResults struct {
	// contains filtered or unexported fields
}

func NewLightTransactionResults added in v0.39.4

func NewLightTransactionResults(collector module.CacheMetrics, db storage.DB, transactionResultsCacheSize uint) *LightTransactionResults

func (*LightTransactionResults) BatchStore added in v0.39.4

func (tr *LightTransactionResults) BatchStore(lctx lockctx.Proof, rw storage.ReaderBatchWriter, blockID flow.Identifier, transactionResults []flow.LightTransactionResult) error

BatchStore persists and indexes all transaction results (light representation) for the given blockID as part of the provided batch. The caller must acquire storage.LockInsertLightTransactionResult and hold it until the write batch has been committed. It returns storage.ErrAlreadyExists if light transaction results for the block already exist.

func (*LightTransactionResults) ByBlockID added in v0.39.4

ByBlockID gets all transaction results for a block, ordered by transaction index. CAUTION: this function returns an empty list for block IDs without known results. No error returns are expected during normal operations.

func (*LightTransactionResults) ByBlockIDTransactionID added in v0.39.4

func (tr *LightTransactionResults) ByBlockIDTransactionID(blockID flow.Identifier, txID flow.Identifier) (*flow.LightTransactionResult, error)

ByBlockIDTransactionID returns the transaction result for the given block ID and transaction ID

Expected error returns during normal operation:

  • `storage.ErrNotFound` if no transaction result for the given block ID and transaction ID is known

func (*LightTransactionResults) ByBlockIDTransactionIndex added in v0.39.4

func (tr *LightTransactionResults) ByBlockIDTransactionIndex(blockID flow.Identifier, txIndex uint32) (*flow.LightTransactionResult, error)

ByBlockIDTransactionIndex returns the transaction result for the given blockID and transaction index

Expected error returns during normal operation:

  • `storage.ErrNotFound` if no transaction result for the given block ID and transaction index is known

type MyExecutionReceipts added in v0.39.4

type MyExecutionReceipts struct {
	// contains filtered or unexported fields
}

MyExecutionReceipts holds and indexes Execution Receipts. MyExecutionReceipts is implemented as a wrapper around badger.ExecutionReceipts. The wrapper adds the notion of "MY execution receipt", from the viewpoint of an individual Execution Node.

func NewMyExecutionReceipts added in v0.39.4

func NewMyExecutionReceipts(collector module.CacheMetrics, db storage.DB, receipts storage.ExecutionReceipts) *MyExecutionReceipts

NewMyExecutionReceipts creates an instance of MyExecutionReceipts, which is a wrapper around badger.ExecutionReceipts. It's useful for execution nodes to keep track of produced execution receipts.

func (*MyExecutionReceipts) BatchRemoveIndexByBlockID added in v0.39.4

func (m *MyExecutionReceipts) BatchRemoveIndexByBlockID(blockID flow.Identifier, rw storage.ReaderBatchWriter) error

BatchRemoveIndexByBlockID removes the blockID-to-my-execution-receipt index entry keyed by a blockID in a provided batch. No errors are expected during normal operation, even if no entries are matched. If Badger unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*MyExecutionReceipts) BatchStoreMyReceipt added in v0.39.4

func (m *MyExecutionReceipts) BatchStoreMyReceipt(lctx lockctx.Proof, receipt *flow.ExecutionReceipt, rw storage.ReaderBatchWriter) error

BatchStoreMyReceipt stores blockID-to-my-receipt index entry keyed by blockID in a provided batch.

If entity fails marshalling, the error is wrapped in a generic error and returned. If database unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

Expected error returns during *normal* operations:

  • `storage.ErrDataMismatch` if a *different* receipt has already been indexed for the same block

func (*MyExecutionReceipts) MyReceipt added in v0.39.4

func (m *MyExecutionReceipts) MyReceipt(blockID flow.Identifier) (*flow.ExecutionReceipt, error)

MyReceipt retrieves my receipt for the given block. Returns storage.ErrNotFound if no receipt was persisted for the block.

func (*MyExecutionReceipts) RemoveIndexByBlockID added in v0.39.4

func (m *MyExecutionReceipts) RemoveIndexByBlockID(blockID flow.Identifier) error

type NodeDisallowList added in v0.41.0

type NodeDisallowList struct {
	// contains filtered or unexported fields
}

func NewNodeDisallowList added in v0.41.0

func NewNodeDisallowList(db storage.DB) *NodeDisallowList

func (*NodeDisallowList) Retrieve added in v0.41.0

func (dl *NodeDisallowList) Retrieve(disallowList *map[flow.Identifier]struct{}) error

Retrieve reads the set of disallowed nodes from the database. No error is returned if no database entry exists. No errors are expected during normal operations.

func (*NodeDisallowList) Store added in v0.41.0

func (dl *NodeDisallowList) Store(disallowList map[flow.Identifier]struct{}) error

Store writes the given disallowList to the database. To avoid legacy entries in the database, we purge the entire database entry if disallowList is empty. No errors are expected during normal operations.
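
A short usage sketch of the Store/Retrieve pair; note Retrieve's pointer-to-map signature (imports elided; roundTripDisallowList is hypothetical):

func roundTripDisallowList(dl *store.NodeDisallowList, banned map[flow.Identifier]struct{}) (map[flow.Identifier]struct{}, error) {
    // An empty map purges the database entry entirely (see Store above).
    if err := dl.Store(banned); err != nil {
        return nil, err
    }
    // Retrieve fills the map in place; a missing database entry is not an error.
    out := make(map[flow.Identifier]struct{})
    if err := dl.Retrieve(&out); err != nil {
        return nil, err
    }
    return out, nil
}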

type Payloads added in v0.43.0

type Payloads struct {
	// contains filtered or unexported fields
}

func NewPayloads added in v0.43.0

func NewPayloads(db storage.DB, index *Index, guarantees *Guarantees, seals *Seals, receipts *ExecutionReceipts,
	results *ExecutionResults) *Payloads

func (*Payloads) ByBlockID added in v0.43.0

func (p *Payloads) ByBlockID(blockID flow.Identifier) (*flow.Payload, error)

type ProtocolKVStore added in v0.43.0

type ProtocolKVStore struct {
	// contains filtered or unexported fields
}

ProtocolKVStore implements persistent storage for storing KV store snapshots.

func NewProtocolKVStore added in v0.43.0

func NewProtocolKVStore(collector module.CacheMetrics,
	db storage.DB,
	kvStoreCacheSize uint,
	kvStoreByBlockIDCacheSize uint,
) *ProtocolKVStore

NewProtocolKVStore creates a ProtocolKVStore instance, which is a database holding KV store snapshots. It supports storing, caching and retrieving by ID or the additionally indexed block ID.

func (*ProtocolKVStore) BatchIndex added in v0.43.0

func (s *ProtocolKVStore) BatchIndex(lctx lockctx.Proof, rw storage.ReaderBatchWriter, blockID flow.Identifier, stateID flow.Identifier) error

BatchIndex persists the specific map entry in the node's database. In a nutshell, we want to maintain a map from `blockID` to `stateID` (the ID of a KV store snapshot), where `blockID` references the block that _proposes_ the referenced KV store snapshot. Protocol convention:

  • Consider block B, whose ingestion might potentially lead to an updated protocol state. For example, the protocol state changes if we seal some execution results emitting service events.
  • For the key `blockID`, we use the identity of block B which _proposes_ this Protocol State. As value, the hash of the resulting protocol state at the end of processing B is to be used.
  • IMPORTANT: The protocol state requires confirmation by a QC and will only become active at the child block, _after_ validating the QC.

CAUTION:

  • The caller must acquire the lock storage.LockInsertBlock and hold it until the database write has been committed.
  • OVERWRITES existing data (potential for data corruption): The lock proof serves as a reminder that the CALLER is responsible to ensure that the DEDUPLICATION CHECK is done elsewhere ATOMICALLY within this write operation. Currently it is done by operation.InsertHeader, which performs a check to ensure the blockID is new; therefore any data indexed by this blockID is new as well.

Expected errors during normal operations:

  • storage.ErrAlreadyExists if a KV store for the given blockID has already been indexed

func (*ProtocolKVStore) BatchStore added in v0.43.0

BatchStore persists the KV-store snapshot in the database using the given ID as key. BatchStore is idempotent, i.e. it accepts repeated calls with the same pairs of (stateID, kvStore). Here, the ID is expected to be a collision-resistant hash of the snapshot (including the ProtocolStateVersion). Hence, for the same ID, BatchStore will reject changing the data.

No error is expected during normal operations.

func (*ProtocolKVStore) ByBlockID added in v0.43.0

func (s *ProtocolKVStore) ByBlockID(blockID flow.Identifier) (*flow.PSKeyValueStoreData, error)

ByBlockID retrieves the kv-store snapshot that the block with the given ID proposes. CAUTION: this store snapshot requires confirmation by a QC and will only become active at the child block, _after_ validating the QC. Protocol convention:

  • Consider block B, whose ingestion might potentially lead to an updated KV store state. For example, the state changes if we seal some execution results emitting specific service events.
  • For the key `blockID`, we use the identity of block B which _proposes_ this updated KV store. As value, the hash of the resulting state at the end of processing B is to be used.
  • CAUTION: The updated state requires confirmation by a QC and will only become active at the child block, _after_ validating the QC.

Expected errors during normal operations:

  • storage.ErrNotFound if no snapshot has been indexed for the given block.

func (*ProtocolKVStore) ByID added in v0.43.0

ByID retrieves the KV store snapshot with the given state ID. Expected errors during normal operations:

  • storage.ErrNotFound if no snapshot with the given Identifier is known.

type QuorumCertificates added in v0.43.0

type QuorumCertificates struct {
	// contains filtered or unexported fields
}

QuorumCertificates implements persistent storage for quorum certificates.

func NewQuorumCertificates added in v0.43.0

func NewQuorumCertificates(collector module.CacheMetrics, db storage.DB, cacheSize uint) *QuorumCertificates

NewQuorumCertificates creates a QuorumCertificates instance, which is a database of quorum certificates and supports storing, caching and retrieving by block ID.

func (*QuorumCertificates) BatchStore added in v0.43.0

BatchStore stores a Quorum Certificate as part of a database batch update. The QC is indexed by QC.BlockID.

Note: For the same block, different QCs can easily be constructed by selecting different sub-sets of the received votes (provided more than the minimal number of consensus participants voted, which is typically the case). In most cases, it is only important that a block has been certified, but irrelevant who specifically contributed to the QC. Therefore, we only store the first QC.

If *any* quorum certificate for QC.BlockID has already been stored, a `storage.ErrAlreadyExists` is returned (typically benign).

func (*QuorumCertificates) ByBlockID added in v0.43.0

func (q *QuorumCertificates) ByBlockID(blockID flow.Identifier) (*flow.QuorumCertificate, error)

ByBlockID returns the QC that certifies the block referred to by blockID. Error returns:

  • storage.ErrNotFound if no QC for the given blockID exists

type ResultApprovals

type ResultApprovals struct {
	// contains filtered or unexported fields
}

ResultApprovals implements persistent storage for result approvals.

CAUTION: suitable only for _Verification Nodes_ for persisting their _own_ approvals!

  • In general, the Flow protocol requires multiple approvals for the same chunk from different verification nodes. In other words, there are multiple different approvals for the same chunk.
  • Internally, ResultApprovals populates an index from Executed Chunk ➜ ResultApproval. This is *only safe* for Verification Nodes when tracking their own approvals (for the same ExecutionResult, a Verifier will always produce the same approval).

func NewResultApprovals

func NewResultApprovals(collector module.CacheMetrics, db storage.DB, lockManager lockctx.Manager) *ResultApprovals

func (*ResultApprovals) ByChunk

func (r *ResultApprovals) ByChunk(resultID flow.Identifier, chunkIndex uint64) (*flow.ResultApproval, error)

ByChunk retrieves a ResultApproval by result ID and chunk index. The ResultApprovals store is only used within a verification node, where it is assumed that there is never more than one approval per chunk.

func (*ResultApprovals) ByID

func (r *ResultApprovals) ByID(approvalID flow.Identifier) (*flow.ResultApproval, error)

ByID retrieves a ResultApproval by its ID.

func (*ResultApprovals) StoreMyApproval added in v0.43.0

func (r *ResultApprovals) StoreMyApproval(approval *flow.ResultApproval) func(lctx lockctx.Proof) error

StoreMyApproval returns a functor whose execution:

  • stores the given ResultApproval,
  • indexes it by result ID and chunk index, and
  • requires the storage.LockIndexResultApproval lock to be held by the caller.

The functor's expected error returns during normal operation are:

  • `storage.ErrDataMismatch` if a *different* approval for the same key pair (ExecutionResultID, chunk index) is already indexed

CAUTION: the Flow protocol requires multiple approvals for the same chunk from different verification nodes. In other words, there are multiple different approvals for the same chunk. Therefore, the index Executed Chunk ➜ ResultApproval ID (populated here) is *only safe* to be used by Verification Nodes for tracking their own approvals.

For the same ExecutionResult, a Verifier will always produce the same approval. Therefore, this operation is idempotent: repeated calls with the *same* inputs are equivalent to calling the method once, and the method succeeds on each call. However, attempting to index *different* ResultApproval IDs for the same key (resultID, chunkIndex) returns an exception, as this should never happen for a correct Verification Node indexing its own approvals.

The method returns a functor so that some computation (such as computing the approval ID) can be done before acquiring the lock, as the sketch below illustrates.
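
A minimal sketch of the intended call pattern, assuming the lockctx Manager exposes NewContext/AcquireLock/Release as sketched here; the lock name is taken from this page, everything else is illustrative:

// Build the functor outside the lock: the approval ID is computed here.
storing := approvals.StoreMyApproval(approval)

// Acquire the required lock, execute the functor, then release.
lctx := lockManager.NewContext()
defer lctx.Release()
if err := lctx.AcquireLock(storage.LockIndexResultApproval); err != nil {
	return err
}
if err := storing(lctx); err != nil {
	return err // storage.ErrDataMismatch if a different approval was already indexed
}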

type ScheduledTransactions added in v0.43.3

type ScheduledTransactions struct {
	// contains filtered or unexported fields
}

ScheduledTransactions represents persistent storage for scheduled transaction indices. Note: the scheduled transactions themselves are not stored; transaction bodies can be generated on demand using the blueprints package. This interface provides access to indices used to look up the block ID in which a scheduled transaction was executed, which allows querying its transaction result.

func NewScheduledTransactions added in v0.43.3

func NewScheduledTransactions(collector module.CacheMetrics, db storage.DB, cacheSize uint) *ScheduledTransactions

func (*ScheduledTransactions) BatchIndex added in v0.43.3

func (st *ScheduledTransactions) BatchIndex(lctx lockctx.Proof, blockID flow.Identifier, txID flow.Identifier, scheduledTxID uint64, batch storage.ReaderBatchWriter) error

BatchIndex indexes the scheduled transaction by its block ID, transaction ID, and scheduled transaction ID. `txID` must be the TransactionBody.ID of the scheduled transaction. `scheduledTxID` is the uint64 id field returned by the system smart contract. Requires the lock: storage.LockIndexScheduledTransaction.

Expected error returns during normal operation:

  • storage.ErrAlreadyExists if a scheduled transaction has already been indexed for the given block ID and transaction ID.

func (*ScheduledTransactions) BlockIDByTransactionID added in v0.43.3

func (st *ScheduledTransactions) BlockIDByTransactionID(txID flow.Identifier) (flow.Identifier, error)

BlockIDByTransactionID returns the block ID in which the provided system transaction was executed. `txID` must be the TransactionBody.ID of the scheduled transaction.

Expected error returns during normal operation:

  • storage.ErrNotFound if no scheduled transaction with the given txID has been indexed.

func (*ScheduledTransactions) TransactionIDByID added in v0.43.3

func (st *ScheduledTransactions) TransactionIDByID(scheduledTxID uint64) (flow.Identifier, error)

TransactionIDByID returns the transaction ID of the scheduled transaction by its scheduled transaction ID. `scheduledTxID` is the uint64 `id` field returned by the system smart contract.

Expected error returns during normal operation:

  • storage.ErrNotFound if no scheduled transaction with the given scheduledTxID has been indexed.
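
The two lookups compose: given the uint64 id from the system smart contract, one can recover the transaction ID and then the executing block. A sketch using only the signatures shown above; variable names are illustrative:

// Resolve the scheduled transaction ID to its TransactionBody.ID.
txID, err := scheduledTxs.TransactionIDByID(scheduledTxID)
if err != nil {
	return err // storage.ErrNotFound if not indexed
}
// Resolve the transaction ID to the block in which it was executed.
blockID, err := scheduledTxs.BlockIDByTransactionID(txID)
if err != nil {
	return err
}
_ = blockID // can now be used to query the transaction result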

type Seals added in v0.43.0

type Seals struct {
	// contains filtered or unexported fields
}

func NewSeals added in v0.43.0

func NewSeals(collector module.CacheMetrics, db storage.DB) *Seals

func (*Seals) ByID added in v0.43.0

func (s *Seals) ByID(sealID flow.Identifier) (*flow.Seal, error)

func (*Seals) FinalizedSealForBlock added in v0.43.0

func (s *Seals) FinalizedSealForBlock(blockID flow.Identifier) (*flow.Seal, error)

FinalizedSealForBlock returns the seal for the given block, only if that seal has been included in a finalized block. Returns storage.ErrNotFound if the block is unknown or unsealed.

func (*Seals) HighestInFork added in v0.43.0

func (s *Seals) HighestInFork(blockID flow.Identifier) (*flow.Seal, error)

HighestInFork retrieves the highest seal that was included in the fork up to (and including) blockID. This method should return a seal for any block known to the node. Returns storage.ErrNotFound if blockID is unknown.
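
For instance, to read the latest sealed state as of a given fork tip (both signatures are shown above; variable names are illustrative):

// Highest seal included in the fork ending at headID.
seal, err := seals.HighestInFork(headID)
if err != nil {
	return err // storage.ErrNotFound if headID is unknown
}
// The seal identifies the most recently sealed block in this fork.
sealedBlockID := seal.BlockID
_ = sealedBlockID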

func (*Seals) Store added in v0.43.0

func (s *Seals) Store(seal *flow.Seal) error

type ServiceEvents added in v0.39.4

type ServiceEvents struct {
	// contains filtered or unexported fields
}

func NewServiceEvents added in v0.39.4

func NewServiceEvents(collector module.CacheMetrics, db storage.DB) *ServiceEvents

func (*ServiceEvents) BatchRemoveByBlockID added in v0.39.4

func (e *ServiceEvents) BatchRemoveByBlockID(blockID flow.Identifier, rw storage.ReaderBatchWriter) error

BatchRemoveByBlockID removes service events keyed by a blockID in the provided batch. No errors are expected during normal operation, even if no entries are matched. If Badger unexpectedly fails to process the request, the error is wrapped in a generic error and returned.

func (*ServiceEvents) BatchStore added in v0.39.4

func (e *ServiceEvents) BatchStore(blockID flow.Identifier, events []flow.Event, rw storage.ReaderBatchWriter) error

BatchStore stores service events keyed by a blockID in the provided batch. No errors are expected during normal operation. If Badger unexpectedly fails to process the request, the error is wrapped in a generic error and returned.
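
A round-trip sketch, assuming storage.DB exposes a WithReaderBatchWriter helper that commits the batch when the callback returns nil; the helper is an assumption, while the BatchStore and ByBlockID signatures are shown on this page:

// Store the block's service events in one write batch (assumed helper).
err := db.WithReaderBatchWriter(func(rw storage.ReaderBatchWriter) error {
	return serviceEvents.BatchStore(blockID, events, rw)
})
if err != nil {
	return err
}
// Read them back by block ID.
stored, err := serviceEvents.ByBlockID(blockID)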

func (*ServiceEvents) ByBlockID added in v0.39.4

func (e *ServiceEvents) ByBlockID(blockID flow.Identifier) ([]flow.Event, error)

ByBlockID returns the events for the given block ID

func (*ServiceEvents) RemoveByBlockID added in v0.39.4

func (e *ServiceEvents) RemoveByBlockID(blockID flow.Identifier) error

RemoveByBlockID removes service events by block ID

type StoredChunkDataPacks added in v0.43.3

type StoredChunkDataPacks struct {
	// contains filtered or unexported fields
}

StoredChunkDataPacks represents persistent storage for chunk data packs. It works with the reduced representation `StoredChunkDataPack` for chunk data packs, where instead of the full collection data, only the collection's hash (ID) is contained.

func NewStoredChunkDataPacks added in v0.43.3

func NewStoredChunkDataPacks(collector module.CacheMetrics, db storage.DB, byIDCacheSize uint) *StoredChunkDataPacks

func (*StoredChunkDataPacks) BatchRemove added in v0.43.3

func (ch *StoredChunkDataPacks) BatchRemove(chunkDataPackIDs []flow.Identifier, rw storage.ReaderBatchWriter) error

BatchRemove removes multiple ChunkDataPacks with the given IDs from storage as part of the provided write batch. No error returns are expected during normal operation, even if no entries are matched.

func (*StoredChunkDataPacks) ByID added in v0.43.3

func (ch *StoredChunkDataPacks) ByID(chunkDataPackID flow.Identifier) (*storage.StoredChunkDataPack, error)

ByID returns the StoredChunkDataPack for the given ID. It returns storage.ErrNotFound if no entry exists for the given ID.

func (*StoredChunkDataPacks) Remove added in v0.43.3

func (ch *StoredChunkDataPacks) Remove(ids []flow.Identifier) error

Remove removes multiple StoredChunkDataPacks keyed by their IDs in a batch. No error returns are expected during normal operation, even if none of the referenced objects exist in storage.

func (*StoredChunkDataPacks) StoreChunkDataPacks added in v0.43.3

func (ch *StoredChunkDataPacks) StoreChunkDataPacks(cs []*storage.StoredChunkDataPack) ([]flow.Identifier, error)

StoreChunkDataPacks stores multiple StoredChunkDataPacks cs in a batch and returns the chunk data pack IDs. No error returns are expected during normal operation.
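
A store-then-fetch sketch using only the signatures shown above; variable names are illustrative:

// Persist the reduced chunk data pack representations; the IDs of the
// stored packs are returned.
ids, err := storedPacks.StoreChunkDataPacks(packs)
if err != nil {
	return err
}
// Retrieve one of them by its ID.
pack, err := storedPacks.ByID(ids[0])
if errors.Is(err, storage.ErrNotFound) {
	// not stored (should not happen right after a successful store)
}
_ = pack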

type TransactionResultErrorMessages added in v0.39.4

type TransactionResultErrorMessages struct {
	// contains filtered or unexported fields
}

func NewTransactionResultErrorMessages added in v0.39.4

func NewTransactionResultErrorMessages(collector module.CacheMetrics, db storage.DB, transactionResultsCacheSize uint) *TransactionResultErrorMessages

func (*TransactionResultErrorMessages) BatchStore added in v0.43.0

func (t *TransactionResultErrorMessages) BatchStore(
	lctx lockctx.Proof,
	rw storage.ReaderBatchWriter,
	blockID flow.Identifier,
	transactionResultErrorMessages []flow.TransactionResultErrorMessage,
) error

BatchStore persists and indexes all transaction result error messages for the given blockID as part of the provided batch. The caller must acquire storage.LockInsertTransactionResultErrMessage and hold it until the write batch has been committed. It returns storage.ErrAlreadyExists if tx result error messages for the block already exist.
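
A sketch of the locking discipline described above; the lockctx and batch-commit helpers are assumptions, while the BatchStore signature and lock name are taken from this page:

lctx := lockManager.NewContext()
defer lctx.Release()
if err := lctx.AcquireLock(storage.LockInsertTransactionResultErrMessage); err != nil {
	return err
}
// The lock must be held until the batch has been committed (assumed helper).
return db.WithReaderBatchWriter(func(rw storage.ReaderBatchWriter) error {
	return txErrMsgs.BatchStore(lctx, rw, blockID, errorMessages)
})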

func (*TransactionResultErrorMessages) ByBlockID added in v0.39.4

ByBlockID gets all transaction result error messages for a block, ordered by transaction index. Note: This method will return an empty slice both if the block is not indexed yet and if the block does not have any errors.

No errors are expected during normal operations.

func (*TransactionResultErrorMessages) ByBlockIDTransactionID added in v0.39.4

func (t *TransactionResultErrorMessages) ByBlockIDTransactionID(blockID flow.Identifier, transactionID flow.Identifier) (*flow.TransactionResultErrorMessage, error)

ByBlockIDTransactionID returns the transaction result error message for the given block ID and transaction ID

Expected errors during normal operation:

  • `storage.ErrNotFound` if no transaction error message is known at given block and transaction id.

func (*TransactionResultErrorMessages) ByBlockIDTransactionIndex added in v0.39.4

func (t *TransactionResultErrorMessages) ByBlockIDTransactionIndex(blockID flow.Identifier, txIndex uint32) (*flow.TransactionResultErrorMessage, error)

ByBlockIDTransactionIndex returns the transaction result error message for the given blockID and transaction index

Expected errors during normal operation:

  • `storage.ErrNotFound` if no transaction error message is known at given block and transaction index.

func (*TransactionResultErrorMessages) Exists added in v0.39.4

Exists returns true if transaction result error messages for the given ID have been stored. No errors are expected during normal operation.

func (*TransactionResultErrorMessages) Store added in v0.39.4

func (t *TransactionResultErrorMessages) Store(lctx lockctx.Proof, blockID flow.Identifier, transactionResultErrorMessages []flow.TransactionResultErrorMessage) error

Store persists and indexes all transaction result error messages for the given blockID. The caller must acquire storage.LockInsertTransactionResultErrMessage and hold it until the write batch has been committed. It returns storage.ErrAlreadyExists if tx result error messages for the block already exist.

type TransactionResults added in v0.39.4

type TransactionResults struct {
	// contains filtered or unexported fields
}

func NewTransactionResults added in v0.39.4

func NewTransactionResults(collector module.CacheMetrics, db storage.DB, transactionResultsCacheSize uint) (*TransactionResults, error)

func (*TransactionResults) BatchRemoveByBlockID added in v0.39.4

func (tr *TransactionResults) BatchRemoveByBlockID(blockID flow.Identifier, batch storage.ReaderBatchWriter) error

BatchRemoveByBlockID batch removes transaction results by block ID.

func (*TransactionResults) BatchStore added in v0.39.4

func (tr *TransactionResults) BatchStore(blockID flow.Identifier, transactionResults []flow.TransactionResult, batch storage.ReaderBatchWriter) error

BatchStore will store the transaction results for the given block ID in a batch

func (*TransactionResults) ByBlockID added in v0.39.4

func (tr *TransactionResults) ByBlockID(blockID flow.Identifier) ([]flow.TransactionResult, error)

ByBlockID gets all transaction results for a block, ordered by transaction index

func (*TransactionResults) ByBlockIDTransactionID added in v0.39.4

func (tr *TransactionResults) ByBlockIDTransactionID(blockID flow.Identifier, txID flow.Identifier) (*flow.TransactionResult, error)

ByBlockIDTransactionID returns the runtime transaction result for the given block ID and transaction ID

func (*TransactionResults) ByBlockIDTransactionIndex added in v0.39.4

func (tr *TransactionResults) ByBlockIDTransactionIndex(blockID flow.Identifier, txIndex uint32) (*flow.TransactionResult, error)

ByBlockIDTransactionIndex returns the runtime transaction result for the given block ID and transaction index
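
For example (signatures shown above; the storage.ErrNotFound handling assumes the package-wide convention for lookups):

// Look up by (blockID, transaction index).
result, err := txResults.ByBlockIDTransactionIndex(blockID, txIndex)
if errors.Is(err, storage.ErrNotFound) {
	// no result indexed for this block and index
} else if err != nil {
	return err
}
_ = result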

func (*TransactionResults) RemoveByBlockID added in v0.39.4

func (tr *TransactionResults) RemoveByBlockID(blockID flow.Identifier) error

RemoveByBlockID removes transaction results by block ID

type Transactions added in v0.39.4

type Transactions struct {
	// contains filtered or unexported fields
}

Transactions implements persistent storage for transactions (flow.TransactionBody).

func NewTransactions added in v0.39.4

func NewTransactions(cacheMetrics module.CacheMetrics, db storage.DB) *Transactions

NewTransactions creates a Transactions store with caching, backed by the given database.

func (*Transactions) BatchStore added in v0.43.0

func (t *Transactions) BatchStore(tx *flow.TransactionBody, batch storage.ReaderBatchWriter) error

BatchStore stores a transaction within a batch operation. No errors are expected during normal operations.

func (*Transactions) ByID added in v0.39.4

func (*Transactions) RemoveBatch added in v0.41.0

func (t *Transactions) RemoveBatch(rw storage.ReaderBatchWriter, txID flow.Identifier) error

RemoveBatch removes a transaction by fingerprint.

func (*Transactions) Store added in v0.39.4

func (t *Transactions) Store(flowTx *flow.TransactionBody) error
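
A small sketch combining Store with batch removal; the Store and RemoveBatch signatures are shown above, while the batch helper is an assumption:

// Store a transaction body.
if err := transactions.Store(tx); err != nil {
	return err
}
// Later, remove it by its fingerprint (ID) inside a write batch (assumed helper).
return db.WithReaderBatchWriter(func(rw storage.ReaderBatchWriter) error {
	return transactions.RemoveBatch(rw, tx.ID())
})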

type TwoIdentifier added in v0.42.0

type TwoIdentifier [flow.IdentifierLen * 2]byte

func KeyFromBlockIDTransactionID added in v0.39.4

func KeyFromBlockIDTransactionID(blockID flow.Identifier, txID flow.Identifier) TwoIdentifier
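
TwoIdentifier is a fixed-size composite of the two identifiers (flow.IdentifierLen * 2 bytes), and KeyToBlockIDTransactionID, listed in the package functions, inverts this constructor. A round-trip sketch:

key := store.KeyFromBlockIDTransactionID(blockID, txID)
gotBlockID, gotTxID := store.KeyToBlockIDTransactionID(key)
// gotBlockID == blockID and gotTxID == txID round-trip exactly.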

type VersionBeacons added in v0.39.2

type VersionBeacons struct {
	// contains filtered or unexported fields
}

func NewVersionBeacons added in v0.39.2

func NewVersionBeacons(db storage.DB) *VersionBeacons

func (*VersionBeacons) Highest added in v0.39.2

func (r *VersionBeacons) Highest(
	belowOrEqualTo uint64,
) (*flow.SealedVersionBeacon, error)
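
For example, to find the newest sealed version beacon at or below a given height (the signature is shown above; behavior when no beacon exists is not documented on this page, so the nil check is an assumption):

beacon, err := versionBeacons.Highest(sealedHeight)
if err != nil {
	return err
}
// beacon is the sealed version beacon with the highest height <= sealedHeight
// (nil handling assumed for the case where none has been sealed yet).
if beacon == nil {
	// no version beacon sealed at or below this height
}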
