pathdb

package
v1.0.4

Warning: This package is not in the latest version of its module.
Published: Sep 26, 2025 License: GPL-3.0 Imports: 32 Imported by: 0

Documentation

Index

Constants

const (

	// MaxDirtyBufferSize is the maximum memory allowance of the node buffer.
	// Too large a buffer will cause the system to pause for a long time when
	// a write happens. Also, the largest batch that pebble can support is 4GB;
	// the node will panic if the batch size exceeds this limit.
	MaxDirtyBufferSize = 256 * 1024 * 1024

	// DefaultBackgroundFlushInterval defines the default wait interval at which
	// the background node cache is flushed to disk.
	DefaultBackgroundFlushInterval = 3
)

Variables

var Defaults = &Config{
	StateHistory:    params.FullImmutabilityThreshold,
	CleanCacheSize:  defaultCleanSize,
	WriteBufferSize: defaultDirtyBufferSize,
}

Defaults contains default settings for Ethereum mainnet.

var ReadOnly = &Config{ReadOnly: true}

ReadOnly is the config used to open the database in read-only mode.

Functions

func NewTrieNodeBuffer

func NewTrieNodeBuffer(sync bool, limit int, nodes *nodeSet, states *stateSet, layers uint64) trienodebuffer

Types

type AccountIterator

type AccountIterator interface {
	Iterator

	// Account returns the RLP-encoded slim account the iterator is currently at.
	// An error will be returned if the iterator becomes invalid.
	Account() []byte
}

AccountIterator is an iterator to step over all the accounts in a snapshot, which may or may not be composed of multiple layers.

type Config

type Config struct {
	SyncFlush       bool   // Flag indicating whether the trienodebuffer flushes its cache to disk synchronously
	StateHistory    uint64 // Number of recent blocks to maintain state history for
	CleanCacheSize  int    // Maximum memory allowance (in bytes) for caching clean nodes
	WriteBufferSize int    // Maximum memory allowance (in bytes) for write buffer
	ReadOnly        bool   // Flag indicating whether the database is opened in read-only mode
	NoTries         bool
	JournalFilePath string // Path of the journal file (used when JournalFile is set)
	JournalFile     bool   // Flag indicating whether the journal is persisted as a standalone file instead of in the key-value store
}

Config contains the settings for the database.
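
As a hedged sketch (assuming the package is imported as pathdb; the requested buffer size is a caller-supplied value, and the cap merely mirrors the documented MaxDirtyBufferSize bound rather than any validation performed by the package itself), a caller might derive a custom Config from Defaults:

// newConfig derives a Config from Defaults with a caller-chosen write buffer,
// clamped to the documented MaxDirtyBufferSize bound.
func newConfig(requestedBufferSize int) *pathdb.Config {
	cfg := *pathdb.Defaults // copy so the package-level Defaults value is not mutated
	cfg.WriteBufferSize = requestedBufferSize
	if cfg.WriteBufferSize > pathdb.MaxDirtyBufferSize {
		cfg.WriteBufferSize = pathdb.MaxDirtyBufferSize
	}
	return &cfg
}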

type Database

type Database struct {
	// contains filtered or unexported fields
}

Database is a multi-layered structure for maintaining in-memory states along with their dirty trie nodes. It consists of one persistent base layer backed by a key-value store, on top of which arbitrarily many in-memory diff layers are stacked. The memory diffs can form a tree with branching, but the disk layer is a singleton common to all. If a reorg goes deeper than the disk layer, a batch of reverse diffs can be applied to roll back. The deepest reorg that can be handled depends on the amount of state history tracked on disk.

At most one readable and writable database can be opened at the same time in the whole system which ensures that only one database writer can operate the persistent state. Unexpected open operations can cause the system to panic.

func New

func New(diskdb ethdb.Database, config *Config, isVerkle bool) *Database

New attempts to load an already existing layer from a persistent key-value store (with a number of memory layers from a journal). If the journal does not match the base persistent layer, all the recorded diff layers are discarded.
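
A minimal opening sketch, assuming an in-memory backing store from go-ethereum's rawdb package; the pathdb import path below is illustrative and should be replaced with this module's actual path:

package main

import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/triedb/pathdb" // illustrative; use this module's import path
)

func main() {
	// Back the path database with an in-memory key-value store.
	diskdb := rawdb.NewMemoryDatabase()

	// Open with mainnet defaults; pass pathdb.ReadOnly instead to forbid
	// mutations. isVerkle is false for the Merkle Patricia trie layout.
	db := pathdb.New(diskdb, pathdb.Defaults, false)
	defer db.Close()
}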

func (*Database) AccountHistory

func (db *Database) AccountHistory(address common.Address, start, end uint64) (*HistoryStats, error)

AccountHistory inspects the account history within the specified range.

Start: State ID of the first history object for the query. 0 implies the first available object is selected as the starting point.

End: State ID of the last history for the query. 0 implies the last available object is selected as the ending point. Note end is included in the query.
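
For illustration (db is assumed to be an open *Database, the address is a placeholder, and the fmt and common imports are assumed), passing 0 for both bounds requests the full available range:

func inspectAccount(db *pathdb.Database) error {
	addr := common.HexToAddress("0x0000000000000000000000000000000000000001") // placeholder

	// 0, 0 selects the first and last available history objects respectively.
	stats, err := db.AccountHistory(addr, 0, 0)
	if err != nil {
		return err
	}
	for i, block := range stats.Blocks {
		// Origins[i] holds the account's value before the mutation at Blocks[i].
		fmt.Printf("mutated at block %d, previous value: %x\n", block, stats.Origins[i])
	}
	return nil
}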

func (*Database) AccountIterator

func (db *Database) AccountIterator(root common.Hash, seek common.Hash) (AccountIterator, error)

AccountIterator creates a new account iterator for the specified root hash and seeks to a starting account hash.
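
A sketch of walking all accounts under a known state root, starting from the zero hash (db and the imports are assumed as in the examples above):

func dumpAccounts(db *pathdb.Database, stateRoot common.Hash) error {
	it, err := db.AccountIterator(stateRoot, common.Hash{}) // seek from the very first account
	if err != nil {
		return err
	}
	defer it.Release()

	for it.Next() {
		// Hash is the hashed account key; Account is the RLP-encoded slim account.
		fmt.Printf("account %x: %d bytes\n", it.Hash(), len(it.Account()))
	}
	return it.Error() // non-nil if e.g. the iterated layer became stale
}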

func (*Database) Close

func (db *Database) Close() error

Close closes the trie database and the held freezer.

func (*Database) Commit

func (db *Database) Commit(root common.Hash, report bool) error

Commit traverses the layer tree downwards from the layer with the provided state root and flattens all the layers below it. It can be used alone, mostly for test purposes.

func (*Database) DeleteTrieJournal

func (db *Database) DeleteTrieJournal(writer ethdb.KeyValueWriter) error

func (*Database) DetermineJournalTypeForReader

func (db *Database) DetermineJournalTypeForReader() JournalType

DetermineJournalTypeForReader is used when loading the journal. It selects the journal type based on whether a KV journal or a journal file currently exists.

func (*Database) DetermineJournalTypeForWriter

func (db *Database) DetermineJournalTypeForWriter() JournalType

DetermineJournalTypeForWriter is used when persisting the journal. It determines the JournalType from the provided Config.

func (*Database) Disable

func (db *Database) Disable() error

Disable deactivates the database and marks all available state layers as stale to prevent access to the persistent state, which is in the syncing stage.

func (*Database) Enable

func (db *Database) Enable(root common.Hash) error

Enable activates the database and resets the state tree with the provided persistent state root once the state sync is finished.
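
A hedged sketch of the sync flow the two calls are meant for; downloadState is a stand-in for whatever actually performs the state sync and is not part of this package:

func syncAndReenable(db *pathdb.Database, downloadState func() (common.Hash, error)) error {
	// Invalidate all in-memory layers so nothing reads the half-written
	// persistent state while syncing.
	if err := db.Disable(); err != nil {
		return err
	}
	// Download and persist the new state (stand-in for the real sync).
	newRoot, err := downloadState()
	if err != nil {
		return err
	}
	// Re-activate the database rooted at the freshly synced state root.
	return db.Enable(newRoot)
}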

func (*Database) GetAllRooHash

func (db *Database) GetAllRooHash() [][]string

GetAllRooHash returns the root hashes of all diff layers and the disk layer.

func (*Database) Head

func (db *Database) Head() common.Hash

Head returns the top non-fork difflayer/disklayer root hash for rewinding.

func (*Database) HistoryRange

func (db *Database) HistoryRange() (uint64, uint64, error)

HistoryRange returns the block numbers associated with earliest and latest state history in the local store.

func (*Database) Journal

func (db *Database) Journal(root common.Hash) error

Journal commits an entire diff hierarchy to disk into a single journal entry. This is meant to be used during shutdown to persist the layers without flattening everything down (bad for reorgs). This function also marks the database as read-only to prevent any following mutations to disk.

The supplied root must be a valid trie hash value.
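
A shutdown sketch, with headRoot standing in for the current head state root and the log import assumed:

func shutdown(db *pathdb.Database, headRoot common.Hash) {
	// Persist the whole diff hierarchy as a single journal entry instead of
	// flattening it (which would hurt a later reorg). Journal also marks the
	// database read-only, so no further mutations are accepted.
	if err := db.Journal(headRoot); err != nil {
		log.Printf("failed to journal path database: %v", err)
	}
	if err := db.Close(); err != nil {
		log.Printf("failed to close path database: %v", err)
	}
}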

func (*Database) NodeReader

func (db *Database) NodeReader(root common.Hash) (database.NodeReader, error)

NodeReader retrieves a layer belonging to the given state root.

func (*Database) Recover

func (db *Database) Recover(root common.Hash) error

Recover rolls back the database to a specified historical point. The state is supported as the rollback destination only if it is a canonical state and the corresponding trie histories exist.

The supplied root must be a valid trie hash value.

func (*Database) Recoverable

func (db *Database) Recoverable(root common.Hash) bool

Recoverable reports whether the specified state is recoverable.

The supplied root must be a valid trie hash value.
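
A sketch combining the two calls, with targetRoot standing in for the canonical historical root to roll back to:

func rollback(db *pathdb.Database, targetRoot common.Hash) error {
	// Only attempt the rollback if the tracked state histories reach that far.
	if !db.Recoverable(targetRoot) {
		return fmt.Errorf("state %x is beyond the tracked history", targetRoot)
	}
	return db.Recover(targetRoot)
}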

func (*Database) Scheme

func (db *Database) Scheme() string

Scheme returns the node scheme used in the database.

func (*Database) Size

func (db *Database) Size() (diffs common.StorageSize, nodes common.StorageSize, immutableNodes common.StorageSize)

Size returns the current storage size of the memory cache in front of the persistent database layer.

func (*Database) StateReader

func (db *Database) StateReader(root common.Hash) (database.StateReader, error)

StateReader returns a reader that allows access to the state data associated with the specified state.

func (*Database) StorageHistory

func (db *Database) StorageHistory(address common.Address, slot common.Hash, start uint64, end uint64) (*HistoryStats, error)

StorageHistory inspects the storage history within the specified range.

Start: State ID of the first history object for the query. 0 implies the first available object is selected as the starting point.

End: State ID of the last history for the query. 0 implies the last available object is selected as the ending point. Note end is included in the query.

Note, slot refers to the hash of the raw slot key.
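
Because slot is the hash of the raw slot key, a caller holding the raw key hashes it first. crypto.Keccak256Hash and the math/big import from go-ethereum are assumed here, and the slot is a placeholder:

func inspectSlot(db *pathdb.Database, addr common.Address) error {
	rawKey := common.BigToHash(big.NewInt(0))        // placeholder: storage slot 0
	slotHash := crypto.Keccak256Hash(rawKey.Bytes()) // the query expects the hashed slot key

	stats, err := db.StorageHistory(addr, slotHash, 0, 0) // 0, 0 = full available range
	if err != nil {
		return err
	}
	fmt.Printf("slot mutated in %d blocks between %d and %d\n", len(stats.Blocks), stats.Start, stats.End)
	return nil
}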

func (*Database) StorageIterator

func (db *Database) StorageIterator(root common.Hash, account common.Hash, seek common.Hash) (StorageIterator, error)

StorageIterator creates a new storage iterator for the specified root hash and account. The iterator is positioned at the given start slot.
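
A sketch mirroring the account iteration above, where accountHash is assumed to be the hashed address of the account whose storage is walked:

func dumpStorage(db *pathdb.Database, stateRoot, accountHash common.Hash) error {
	it, err := db.StorageIterator(stateRoot, accountHash, common.Hash{}) // start from the first slot
	if err != nil {
		return err
	}
	defer it.Release()

	for it.Next() {
		// Hash is the hashed slot key; Slot is the slot content.
		fmt.Printf("slot %x = %x\n", it.Hash(), it.Slot())
	}
	return it.Error()
}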

func (*Database) Update

func (db *Database) Update(root common.Hash, parentRoot common.Hash, block uint64, nodes *trienode.MergedNodeSet, states *StateSetWithOrigin) error

Update adds a new layer into the tree, if it can be linked to an existing old parent. It is disallowed to insert a disk layer (the origin of all). Apart from that, this function flattens the bottommost diff layers into the disk layer so that only 128 diff layers are kept in memory by default.

The passed-in maps (nodes, states) will be retained to avoid copying everything. Therefore, these maps must not be changed afterwards.

The supplied parentRoot and root must be valid trie hash values.
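
A hedged sketch of linking a new diff layer onto its parent. trienode.NewMergedNodeSet is the go-ethereum trie/trienode helper assumed here, and the maps are placeholders for the changes actually produced by block execution; as documented, they are retained by the database and must not be mutated afterwards:

func pushLayer(db *pathdb.Database, root, parentRoot common.Hash, blockNumber uint64,
	accounts map[common.Hash][]byte,
	storages map[common.Hash]map[common.Hash][]byte,
	accountOrigin map[common.Address][]byte,
	storageOrigin map[common.Address]map[common.Hash][]byte) error {

	// Trie node changes; normally produced by committing the modified tries.
	nodes := trienode.NewMergedNodeSet()

	// Flat state changes plus the original values they overwrote.
	// rawStorageKey reports whether the storage keys are raw (un-hashed);
	// false here because the maps above are keyed by hashed slots.
	states := pathdb.NewStateSetWithOrigin(accounts, storages, accountOrigin, storageOrigin, false)

	// Link the new layer (root) onto its parent (parentRoot) at the given block.
	return db.Update(root, parentRoot, blockNumber, nodes, states)
}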

type HashNodeCache

type HashNodeCache struct {
	// contains filtered or unexported fields
}

func (*HashNodeCache) Add

func (h *HashNodeCache) Add(ly layer)

func (*HashNodeCache) Get

func (h *HashNodeCache) Get(hash common.Hash) *trienode.Node

func (*HashNodeCache) Remove

func (h *HashNodeCache) Remove(ly layer)

type HistoryStats

type HistoryStats struct {
	Start   uint64   // Block number of the first queried history
	End     uint64   // Block number of the last queried history
	Blocks  []uint64 // Blocks refers to the list of block numbers in which the state is mutated
	Origins [][]byte // Origins refers to the original value of the state before its mutation
}

HistoryStats wraps the history inspection statistics.

type Iterator

type Iterator interface {
	// Next steps the iterator forward one element, returning false if exhausted,
	// or an error if iteration failed for some reason (e.g. root being iterated
	// becomes stale and garbage collected).
	Next() bool

	// Error returns any failure that occurred during iteration, which might have
	// caused a premature iteration exit (e.g. layer stack becoming stale).
	Error() error

	// Hash returns the hash of the account or storage slot the iterator is
	// currently at.
	Hash() common.Hash

	// Release releases associated resources. Release should always succeed and
	// can be called multiple times without causing error.
	Release()
}

Iterator is an iterator to step over all the accounts or the specific storage in a snapshot, which may or may not be composed of multiple layers.

type JournalFileReader

type JournalFileReader struct {
	// contains filtered or unexported fields
}

func (*JournalFileReader) Close

func (fr *JournalFileReader) Close()

func (*JournalFileReader) Read

func (fr *JournalFileReader) Read(p []byte) (n int, err error)

type JournalFileWriter

type JournalFileWriter struct {
	// contains filtered or unexported fields
}

func (*JournalFileWriter) Close

func (fw *JournalFileWriter) Close()

func (*JournalFileWriter) Size

func (fw *JournalFileWriter) Size() uint64

func (*JournalFileWriter) Write

func (fw *JournalFileWriter) Write(b []byte) (int, error)

Write appends b directly to the encoder output.

type JournalKVReader

type JournalKVReader struct {
	// contains filtered or unexported fields
}

func (*JournalKVReader) Close

func (kr *JournalKVReader) Close()

func (*JournalKVReader) Read

func (kr *JournalKVReader) Read(p []byte) (n int, err error)

type JournalKVWriter

type JournalKVWriter struct {
	// contains filtered or unexported fields
}

func (*JournalKVWriter) Close

func (kw *JournalKVWriter) Close()

func (*JournalKVWriter) Size

func (kw *JournalKVWriter) Size() uint64

func (*JournalKVWriter) Write

func (kw *JournalKVWriter) Write(b []byte) (int, error)

type JournalReader

type JournalReader interface {
	io.Reader
	Close()
}

type JournalType

type JournalType int
const (
	JournalKVType JournalType = iota
	JournalFileType
)
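
For illustration, a caller loading the journal might branch on the detected type (db and the fmt import are assumed as above):

func reportJournalType(db *pathdb.Database) {
	switch db.DetermineJournalTypeForReader() {
	case pathdb.JournalFileType:
		fmt.Println("journal will be loaded from a standalone file")
	case pathdb.JournalKVType:
		fmt.Println("journal will be loaded from the key-value store")
	}
}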

type JournalWriter

type JournalWriter interface {
	io.Writer

	Close()
	Size() uint64
}

type RefTrieNode

type RefTrieNode struct {
	// contains filtered or unexported fields
}

type StateSetWithOrigin

type StateSetWithOrigin struct {
	// contains filtered or unexported fields
}

StateSetWithOrigin wraps the state set with additional original values of the mutated states.

func NewStateSetWithOrigin

func NewStateSetWithOrigin(accounts map[common.Hash][]byte, storages map[common.Hash]map[common.Hash][]byte, accountOrigin map[common.Address][]byte, storageOrigin map[common.Address]map[common.Hash][]byte, rawStorageKey bool) *StateSetWithOrigin

NewStateSetWithOrigin constructs the state set with the provided data.

type StorageIterator

type StorageIterator interface {
	Iterator

	// Slot returns the storage slot the iterator is currently at. An error will
	// be returned if the iterator becomes invalid
	Slot() []byte
}

StorageIterator is an iterator to step over the specific storage in a snapshot, which may or may not be composed of multiple layers.
