opnode

package
v0.0.0-...-0584d78

Published: Jan 6, 2026 License: MIT Imports: 19 Imported by: 0

README

dn-node

Issues: monorepo

Pull requests: monorepo

User docs:

Specs:

The dn-node implements the rollup-node spec. It functions as a Consensus Layer client of an OP Stack chain: it builds, relays, and verifies the canonical chain of blocks. The blocks are processed by an execution-layer client, like op-geth.

Quickstart

just dn-node

# Network selection:
# - Join any of the pre-configured networks with the `--network` flag.
# - Alternatively, join a custom network with the `--rollup.config` flag.
#
# Essential Connections:
# - L1 Ethereum RPC, to fetch blocks, receipts, finality
# - L1 beacon API, to fetch blobs
# - core engine API, to apply new blocks to
# - P2P TCP port, to expose publicly, to retrieve and relay the latest core blocks
# - P2P UDP port, to expose publicly, to discover other nodes to peer with
# - RPC port, to serve RPC of the dn-node
#
# Other:
# - Sync mode: how to interact with the execution-engine,
#   such that it enters the preferred form of syncing:
#   - consensus-layer (block by block sync)
#   - execution-layer (e.g. snap-sync)
#
# Tip: every CLI flag has an env-var equivalent (run `dn-node --help` for more information)
./bin/dn-node \
  --network=op-sepolia \
  --l1=ws://localhost:8546 \
  --l1.beacon=http://localhost:4000 \
  --l2=ws://localhost:9001 \
  --p2p.listen.tcp=9222 \
  --p2p.listen.udp=9222 \
  --rpc.port=7000 \
  --syncmode=execution-layer

# If running inside docker, make sure to mount the below persistent data as (host) volume,
# it may be lost on restart otherwise:
# - P2P private key: auto-generated when missing, used to maintain a stable peer identity.
# - Peerstore DB: remember peer records to connect with, used to not wait for peer discovery.
# - Discovery DB: maintain DHT data, to avoid repeating some discovery work after restarting.
  --p2p.priv.path=opnode_p2p_priv.txt \
  --p2p.peerstore.path=opnode_peerstore_db \
  --p2p.discovery.path=opnode_discovery_db

Usage

Build from source

# from dn-node dir:
just dn-node

./bin/dn-node --help

Run from source

# from dn-node dir:
go run ./cmd --help

Build docker image

See dn-node docker-bake target.

Implementation overview

Interactions

Product

The dn-node builds, relays and verifies the canonical chain of blocks.

The dn-node does not store critical data: the dn-node can recover from any existing L2 chain pre-state that is sufficiently synced such that available input data can complete the sync.

The dn-node builds blocks: either from scratch as a sequencer, or from block-inputs (made available through L1) as a verifier.

The block relay is a happy-path: the P2P sync is optional, and does not affect the ability to verify. However, the block relay is still important for UX, as it lowers the latency to the latest state.

The blocks are verified: only valid L2 blocks that can be reproduced from L1 data are accepted.

Optimization target

Safely and reliably sync the canonical chain

The dn-node implements the three core product features as following:

  • Block building: extend the chain at a throughput rate and latency that is safe to relay and verify.
  • Block relaying: while keeping throughput high and latency low, prevent single points of failure.
  • Block verification: sync efficiently, but always fully verify, and follow the canonical chain.

Trade-offs are made here: verification safety is at odds with ideal throughput, latency, and efficiency. Or in other words: safety vs. liveness. Chain parameters determine this trade-off. The implementation exposes the trade-off, siding with safety by default, and design choices should aim to improve it.

Vision

The dn-node is changing in two ways:

Reliability
Interoperability

The OP Stack is making chains natively interoperable: messages between chains form safety dependencies, and are verified asynchronously. Asynchronous verification entails that the dn-node reorgs out a block if and when the block is determined to be invalid.

The dn-supervisor specializes in this dependency verification work.

The dn-node encapsulates all the single-chain concerns: it prepares the local safety data-points (DA confirmation and block contents) for the dn-supervisor.

The dn-supervisor then verifies the cross-chain safety, and promotes the block safety level accordingly, which the dn-node then follows.
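
To make the division of labor concrete, here is a minimal sketch in Go of the flow described above. All type and method names are hypothetical, not the actual dn-node or dn-supervisor APIs; the real safety levels and data shapes are defined in the interop specs.

// Illustrative only: hypothetical types sketching the dn-node / dn-supervisor split.
package interopsketch

import "context"

// SafetyLevel is the label the supervisor promotes a block to.
type SafetyLevel int

const (
	Unsafe      SafetyLevel = iota // relayed via P2P, not yet reproducible from L1 data
	CrossUnsafe                    // local data prepared, cross-chain messages not yet verified
	Safe                           // cross-chain dependencies verified against available DA
	Finalized                      // backing L1 data is finalized
)

// BlockRef identifies an L2 block (minimal form for the sketch).
type BlockRef struct {
	Hash   [32]byte
	Number uint64
}

// LocalSafetyData is what the single-chain dn-node prepares: DA confirmation and block contents.
type LocalSafetyData struct {
	Block       BlockRef
	DAConfirmed bool
	Messages    [][]byte // cross-chain messages extracted from the block
}

// Supervisor verifies cross-chain dependencies and promotes block safety levels,
// which the single-chain node then follows.
type Supervisor interface {
	// SubmitLocal hands the locally derived safety data-points to the supervisor.
	SubmitLocal(ctx context.Context, data LocalSafetyData) error
	// SafetyOf reports the current (possibly promoted) safety level of a block.
	SafetyOf(ctx context.Context, block BlockRef) (SafetyLevel, error)
}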

See [Interop specs] and [Interop design-docs] for more information about interoperability.

User stories

  • As a user, I want reliability, so that I don't miss blocks or fall out of sync.
  • As a RaaS dev, I want easy configuration and monitoring, so that I can run more chains.
  • As a customizoor, I want clear, extensible APIs, so that I can avoid forking and be a contributor.
  • As a protocol dev, I want integration with tests, so that I can assert protocol conformance.
  • As a proof dev, I want reusable state-transition code, so that I don't reimplement the same thing.

Design principles

  • Encapsulate the state-transition:
    • Use interfaces to abstract file-IO / concurrency / etc. away from state-transition logic (see the sketch after this list).
    • Ensure code-sharing with action-tests and dn-program.
  • No critical database:
    • Persisting data is ok, but it should be recoverable from external data without too much work.
    • The best chain "sync" is no sync.
  • Keep the tech-stack compatible with ethereum L1:
    • L1 offers well-adopted and battle-tested libraries and standards, e.g. LibP2P, DiscV5, JSON-RPC.
    • L1 supports a tech-stack in different languages, ensuring client-diversity, important to L2 as well.
    • Downstream devs of OP-Stack should be able to pull in one instance of a library, that serves both OP-Stack and L1.
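
As a rough illustration of the first principle, the state-transition logic can depend only on narrow interfaces, so the same code is shared by the node, the action-tests, and dn-program. All names below are hypothetical.

// Illustrative only: hypothetical interfaces that keep state-transition logic free of IO concerns.
package statetransition

import "context"

// L1Source abstracts how L1 inputs are fetched: RPC in the node, canned data in tests or a proof program.
type L1Source interface {
	BlockByNumber(ctx context.Context, n uint64) ([]byte, error)
	ReceiptsByHash(ctx context.Context, hash [32]byte) ([][]byte, error)
}

// Engine abstracts the execution-layer engine API that applies derived blocks.
type Engine interface {
	ApplyPayload(ctx context.Context, payload []byte) error
}

// DeriveNext advances the chain by one step using only the injected dependencies,
// so there is no direct file-IO or concurrency inside the state-transition itself.
func DeriveNext(ctx context.Context, l1 L1Source, eng Engine, l1Block uint64) error {
	data, err := l1.BlockByNumber(ctx, l1Block)
	if err != nil {
		return err
	}
	// A real implementation would transform the L1 data into an L2 payload here.
	return eng.ApplyPayload(ctx, data)
}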

Failure modes

This is a brief overview of what might fail, and how the dn-node responds.

L1 downtime

When the L1 data-source is temporarily unavailable, the dn-node's safe/finalized progression halts. Blocks may continue to sync through the happy-path if P2P connectivity is undisrupted.

No batch confirmation

As per the rollup-node spec, the sequencing window ensures that after a bounded number of L1 blocks the verifier will infer blocks, to ensure liveness of blocks with deposited transactions. The dn-node will continue to process the happy-path in the meantime, which may have to be reorged out if it does not match the blocks that are inferred after sequencing-window expiry.
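
As a rough illustration (the exact boundary conditions are defined in the rollup-node spec), the expiry check boils down to comparing the L1 head against the L1 origin plus the sequencing-window size:

// Illustrative only: has the sequencing window expired for a block with the given L1 origin?
// seqWindowSize is a chain parameter, measured in L1 blocks.
func sequencingWindowExpired(l1OriginNumber, l1HeadNumber, seqWindowSize uint64) bool {
	return l1HeadNumber >= l1OriginNumber+seqWindowSize
}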

L1 reorg

L1 reorgs are detected passively during traversal: upon traversal to block N+1, if the next canonical block has a parent-hash that does not match the current block N, we know the remote L1 chain view has diverged.

When this happens, the dn-node assumes the local view is wrong, and resets itself to follow that of the remote node, dropping any non-canonical blocks in the process.
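
A minimal sketch of that parent-hash check, with hypothetical types:

// Illustrative only: detect an L1 reorg while traversing from block N to block N+1.
type L1BlockRef struct {
	Hash       [32]byte
	Number     uint64
	ParentHash [32]byte
}

// reorgDetected reports whether the remote L1 view has diverged from our current block N.
func reorgDetected(current, next L1BlockRef) bool {
	return next.Number == current.Number+1 && next.ParentHash != current.Hash
}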

No L1 finality

When L1 does not finalize for an extended period of time, the dn-node is also unable to finalize the L2 chain for the same time.

Note that the safe block in the execution-layer is bootstrapped from the finalized block: some verification work may repeat after a restart.

Blocks will continue to be derived from L1 batch-submissions, and optimistic processing will also continue to function.

P2P failure

On P2P failure, e.g. issues with peering or failed propagation of block-data, the unsafe part of the chain may stall. The unsafe part of the chain will no longer progress optimistically ahead of the safe part.

The safe blocks will continue to be derived from L1, however, providing higher-latency access to the latest chain.

The dn-node may pick the latest unsafe blocks back up after recovering its P2P connectivity, buffering unsafe blocks until the safe-block progression meets the first known buffered unsafe block.
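
The buffering condition can be pictured as a simple check, with hypothetical fields: buffered unsafe blocks are only processed once the safe head has caught up to the block just below the first buffered one.

// Illustrative only: can the first buffered unsafe block be attached to the safe chain yet?
func canConnectBuffered(safeHeadNumber, firstBufferedNumber uint64) bool {
	// Keep buffering until the safe head reaches the parent height of the first buffered block.
	return safeHeadNumber+1 >= firstBufferedNumber
}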

Restarts and resyncing

After a restart, or detection of missing chain data, the dn-node dynamically determines what L1 data is required to continue, based on the syncing state of the execution-engine. If the sync-state is far behind, the dn-node may need archived blob data to sync from the original L1 inputs.

A faster alternative may be to bootstrap through the execution-layer sync mode, where the execution-engine may perform an optimized long-range sync, such as snap-sync.

Sequencer Window Expiry

If the sequencer window expires (for example, due to extended batcher downtime), some manual intervention is required to help the chain recover. Restarting the sequencer with SEQUENCER_RECOVER=true, or calling the optimism_setRecoverMode admin API method with the boolean parameter set to true, changes the sequencer's behavior. While recover mode is enabled, the tx pool is disabled and the L1 origin is progressed steadily: the blocks that are sequenced are identical to those a verifier would produce under auto-derivation. To enable the quickest recovery, the batcher should be configured for singular (not span) batches. After some time, the L1 origin of the L2 safe head will once again catch up close to the L1 head. Then recover mode should be disabled, and the chain is back in a normal state.
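
For example, toggling recover mode over RPC could look like the sketch below, assuming go-ethereum's rpc client and that the dn-node admin API is reachable at the RPC endpoint from the quickstart (the URL is illustrative):

// Illustrative only: enable recover mode via the optimism_setRecoverMode admin method.
package main

import (
	"context"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumption: the dn-node RPC is served here (see --rpc.port in the quickstart).
	client, err := rpc.DialContext(ctx, "http://localhost:7000")
	if err != nil {
		log.Fatalf("dial rpc: %v", err)
	}
	defer client.Close()

	// The method takes a single boolean: true enables recover mode, false disables it again.
	if err := client.CallContext(ctx, nil, "optimism_setRecoverMode", true); err != nil {
		log.Fatalf("set recover mode: %v", err)
	}
	log.Println("recover mode enabled")
}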

Testing

  • Unit tests: encapsulated functionality, fuzz tests, etc. in the dn-node Go packages.
  • op-e2e action tests: in-process Go testing, focused on the onchain aspects, e.g. state-transition edge-cases. This applies primarily to the derivation pipeline.
  • op-e2e system tests: in-process Go testing, focused on the offchain aspects of the dn-node, e.g. background work, P2P integration, general service functionality.
  • Local devnet tests: full end to end testing, but set up on minimal resources.
  • Kurtosis tests: new automated devnet-like testing. Work in progress.
  • Long-running devnet: roll-out for experimental features, to ensure sufficient stability for testnet users.
  • Long-running testnet: battle-testing in public environment.
  • Shadow-forks: design phase, testing experiments against shadow copies of real networks.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewConfig

func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error)

NewConfig creates a Config from the provided flags or environment variables.

func NewConfigPersistence

func NewConfigPersistence(ctx *cli.Context) node.ConfigPersistence

func NewDriverConfig

func NewDriverConfig(ctx *cli.Context) *driver.Config

func NewElEndpointConfig

func NewElEndpointConfig(ctx *cli.Context) (*node.ElEndpointConfig, error)

func NewL2EndpointConfig

func NewL2EndpointConfig(ctx *cli.Context, logger log.Logger) (*node.L2EndpointConfig, error)

func NewRollupConfig

func NewRollupConfig(log log.Logger, network string, rollupConfigPath string) (*rollup.Config, error)

func NewRollupConfigFromCLI

func NewRollupConfigFromCLI(log log.Logger, ctx *cli.Context) (*rollup.Config, error)

func NewSupervisorEndpointConfig

func NewSupervisorEndpointConfig(ctx *cli.Context) *interop.Config

func NewSyncConfig

func NewSyncConfig(ctx *cli.Context, log log.Logger) (*sync.Config, error)
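
A rough sketch of how these constructors compose inside a CLI action, assuming the cli and log packages in the signatures above are urfave/cli/v2 and go-ethereum's log (the import path for this package is a placeholder):

// Illustrative only: wiring the constructors above inside a CLI action.
package main

import (
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/log"
	"github.com/urfave/cli/v2"

	opnode "example.com/dn/dn-node" // placeholder import path for this package
)

func main() {
	app := &cli.App{
		Name: "dn-node-config-check",
		// Flags: the dn-node flag definitions would be registered here; omitted in this sketch.
		Action: func(ctx *cli.Context) error {
			logger := log.New()

			// Build the full node config from flags / env vars.
			cfg, err := opnode.NewConfig(ctx, logger)
			if err != nil {
				return fmt.Errorf("build node config: %w", err)
			}

			// The sync config can also be built independently.
			syncCfg, err := opnode.NewSyncConfig(ctx, logger)
			if err != nil {
				return fmt.Errorf("build sync config: %w", err)
			}

			fmt.Printf("node config: %+v\nsync config: %+v\n", cfg, syncCfg)
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Crit("app failed", "err", err)
	}
}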

Types

This section is empty.

Directories

Path Synopsis
cmd
batch_decoder command
p2p
Package metrics provides a set of metrics for the dn-node.
p2p
cli
derive
Package derive provides the data transformation functions that take L1 data and turn it into core blocks and results.
sync
Package sync is responsible for reconciling L1 and core.
