iss package (v0.1.0)

Published: Nov 3, 2022 License: Apache-2.0

README

Insanely Scalable SMR (ISS)

ISS is the first modular algorithm to make leader-driven total order broadcast scale in a robust way, published in March 2022. At its interface, ISS is a classic state machine replication (SMR) system that establishes a total order of client requests with typical liveness and safety properties, applicable to any replicated service, such as resilient databases or a blockchain ordering layer. It is a further development of, and successor to, the Mir-BFT protocol (not to be confused with the Mir library used to implement ISS; see the description of Mir).

ISS achieves scalability without requiring a primary node to periodically decide on the protocol configuration. It multiplexes multiple instances of a leader-driven consensus protocol that operate concurrently and (almost) independently. We abstract away the logic of the underlying consensus protocol and only define an interface, which we call "Sequenced Broadcast" (SB), that such a consensus protocol must use to interact with ISS.

ISS maintains a contiguous log of (batches of) client requests at each node. Each position in the log corresponds to a unique sequence number and ISS agrees on the assignment of a unique request batch to each sequence number. Our goal is to introduce as much parallelism as possible in assigning batches to sequence numbers while avoiding request duplication, i.e., assigning the same request to more than one sequence number. To this end, ISS subdivides the log into non-overlapping segments. Each segment, representing a subset of the log's sequence numbers, corresponds to an independent consensus protocol instance that has its own leader and executes concurrently with other instances.
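To picture the subdivision, the following sketch (illustrative only; the function and the round-robin rule are assumptions, not this package's API) splits an epoch's sequence numbers into one segment per leader, with `t` denoting the Mir types package used throughout this documentation:

// segmentsForEpoch is a hypothetical helper: each leader's segment contains
// every len(leaders)-th sequence number of the epoch, so segments are
// non-overlapping and together cover the whole epoch.
func segmentsForEpoch(firstSN uint64, epochLength int, leaders []t.NodeID) map[t.NodeID][]uint64 {
	segs := make(map[t.NodeID][]uint64, len(leaders))
	for i := 0; i < epochLength; i++ {
		leader := leaders[i%len(leaders)]
		segs[leader] = append(segs[leader], firstSN+uint64(i))
	}
	return segs
}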

To prevent the leaders of two different segments from concurrently proposing the same request, and thus wasting resources, while also preventing malicious leaders from censoring (i.e., not proposing) certain requests, we adopt and generalize the partitioning of the request space introduced by Mir-BFT. At any point in time, ISS assigns a different subset of client requests (that we call a bucket) to each segment. ISS periodically changes this assignment, such that each request is guaranteed to eventually be assigned to a segment with a correct leader.

The figure below shows the high-level architecture of the ISS protocol.

[Figure: High-level architecture of the ISS protocol]

Documentation

Overview

Package iss contains the implementation of the ISS protocol, the new generation of Mir. For the details of the protocol, see (TODO). To use ISS, instantiate it by calling `iss.New` and use it as the Protocol module when instantiating a mir.Node. A default configuration (to pass, among other arguments, to `iss.New`) can be obtained from `iss.DefaultParams`.
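For orientation, the following sketch shows how an ISS module might be obtained using only the functions documented below. How the node ID, initial membership, starting checkpoint, and logger are obtained lies outside this package; imports are elided, and `t`, `checkpoint`, and `logging` refer to the Mir packages used throughout this documentation.

// buildISSProtocol is a minimal sketch, not a definitive recipe: it derives a
// default configuration and instantiates the protocol module for a mir.Node.
func buildISSProtocol(
	ownID t.NodeID,
	initialMembership map[t.NodeID]t.NodeAddress,
	startingChkp *checkpoint.StableCheckpoint,
	logger logging.Logger,
) (*iss.ISS, error) {
	// Start from the default parameters and module IDs; a real deployment
	// would tune these (see ModuleParams and AdjustSpeed below).
	params := iss.DefaultParams(initialMembership)
	moduleConfig := iss.DefaultModuleConfig()

	// The returned *ISS is used as the Protocol module when creating a mir.Node.
	return iss.New(ownID, moduleConfig, params, startingChkp, logger)
}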

Current status: This package is currently being implemented and is not yet functional.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CheckParams

func CheckParams(c *ModuleParams) error

CheckParams checks whether the given configuration satisfies all necessary constraints.

func DefaultModules

func DefaultModules(orig modules.Modules, moduleConfig *ModuleConfig) (modules.Modules, error)

DefaultModules takes a Modules object (as a value, not a pointer) and returns a new Modules object with default ISS modules inserted under the keys where no module has been specified.

func Event

func Event(destModule t.ModuleID, event *isspb.ISSEvent) *eventpb.Event

func HashOrigin

func HashOrigin(ownModuleID t.ModuleID, origin *isspb.ISSHashOrigin) *eventpb.HashOrigin

func InitialStateSnapshot

func InitialStateSnapshot(
	appState []byte,
	params *ModuleParams,
) *commonpb.StateSnapshot

func LogEntryHashOrigin

func LogEntryHashOrigin(ownModuleID t.ModuleID, logEntrySN t.SeqNr) *eventpb.HashOrigin

func Message

func Message(msg *isspb.ISSMessage) *messagepb.Message

func PbftCatchUpRequestSBMessage

func PbftCatchUpRequestSBMessage(sn t.SeqNr, digest []byte) *isspb.SBInstanceMessage

func PbftCatchUpResponseSBMessage

func PbftCatchUpResponseSBMessage(preprepare *isspbftpb.Preprepare) *isspb.SBInstanceMessage

func PbftCommitSBMessage

func PbftCommitSBMessage(content *isspbftpb.Commit) *isspb.SBInstanceMessage

func PbftDoneSBMessage

func PbftDoneSBMessage(digests [][]byte) *isspb.SBInstanceMessage

func PbftMissingPreprepareSBMessage

func PbftMissingPreprepareSBMessage(preprepare *isspbftpb.Preprepare) *isspb.SBInstanceMessage

func PbftNewViewSBMessage

func PbftNewViewSBMessage(newView *isspbftpb.NewView) *isspb.SBInstanceMessage

func PbftPersistCommit

func PbftPersistCommit(commit *isspbftpb.Commit) *isspb.SBInstanceEvent

func PbftPersistNewView

func PbftPersistNewView(newView *isspbftpb.NewView) *isspb.SBInstanceEvent

func PbftPersistPrepare

func PbftPersistPrepare(prepare *isspbftpb.Prepare) *isspb.SBInstanceEvent

func PbftPersistPreprepare

func PbftPersistPreprepare(preprepare *isspbftpb.Preprepare) *isspb.SBInstanceEvent

func PbftPersistSignedViewChange

func PbftPersistSignedViewChange(signedViewChange *isspbftpb.SignedViewChange) *isspb.SBInstanceEvent

func PbftPrepareSBMessage

func PbftPrepareSBMessage(content *isspbftpb.Prepare) *isspb.SBInstanceMessage

func PbftPreprepareRequestSBMessage

func PbftPreprepareRequestSBMessage(preprepareRequest *isspbftpb.PreprepareRequest) *isspb.SBInstanceMessage

func PbftPreprepareSBMessage

func PbftPreprepareSBMessage(content *isspbftpb.Preprepare) *isspb.SBInstanceMessage

func PbftProposeTimeout

func PbftProposeTimeout(numProposals uint64) *isspb.SBInstanceEvent

func PbftSignedViewChangeSBMessage

func PbftSignedViewChangeSBMessage(signedViewChange *isspbftpb.SignedViewChange) *isspb.SBInstanceMessage

func PbftViewChangeSNTimeout

func PbftViewChangeSNTimeout(view t.PBFTViewNr, numCommitted int) *isspb.SBInstanceEvent

func PbftViewChangeSegmentTimeout

func PbftViewChangeSegmentTimeout(view t.PBFTViewNr) *isspb.SBInstanceEvent

func PersistCheckpointEvent

func PersistCheckpointEvent(
	ownModuleID t.ModuleID,
	sn t.SeqNr,
	stateSnapshot *commonpb.StateSnapshot,
	appSnapshotHash,
	signature []byte,
) *eventpb.Event

func PersistStableCheckpointEvent

func PersistStableCheckpointEvent(ownModuleID t.ModuleID, stableCheckpoint *checkpointpb.StableCheckpoint) *eventpb.Event

func PushCheckpoint

func PushCheckpoint(ownModuleID t.ModuleID) *eventpb.Event

func SBCertReadyEvent

func SBCertReadyEvent(cert *availabilitypb.Cert) *isspb.SBInstanceEvent

func SBCertRequestEvent

func SBCertRequestEvent() *isspb.SBInstanceEvent

func SBDeliverEvent

func SBDeliverEvent(sn t.SeqNr, certData []byte, aborted bool) *isspb.SBInstanceEvent

func SBEvent

func SBEvent(
	ownModuleID t.ModuleID,
	epoch t.EpochNr,
	instance t.SBInstanceNr,
	event *isspb.SBInstanceEvent,
) *eventpb.Event

func SBHashOrigin

func SBHashOrigin(ownModuleID t.ModuleID,
	epoch t.EpochNr,
	instance t.SBInstanceNr,
	origin *isspb.SBInstanceHashOrigin,
) *eventpb.HashOrigin

func SBHashResultEvent

func SBHashResultEvent(digests [][]byte, origin *isspb.SBInstanceHashOrigin) *isspb.SBInstanceEvent

func SBInitEvent

func SBInitEvent() *isspb.SBInstanceEvent

func SBMessage

func SBMessage(epoch t.EpochNr, instance t.SBInstanceNr, msg *isspb.SBInstanceMessage) *messagepb.Message

func SBMessageReceivedEvent

func SBMessageReceivedEvent(message *isspb.SBInstanceMessage, from t.NodeID) *isspb.SBInstanceEvent

func SBNodeSigsVerifiedEvent

func SBNodeSigsVerifiedEvent(
	valid []bool,
	errors []string,
	nodeIDs []t.NodeID,
	origin *isspb.SBInstanceSigVerOrigin,
	allOK bool,
) *isspb.SBInstanceEvent

func SBSigVerOrigin

func SBSigVerOrigin(
	ownModuleID t.ModuleID,
	epoch t.EpochNr,
	instance t.SBInstanceNr,
	origin *isspb.SBInstanceSigVerOrigin,
) *eventpb.SigVerOrigin

func SBSignOrigin

func SBSignOrigin(
	ownModuleID t.ModuleID,
	epoch t.EpochNr,
	instance t.SBInstanceNr,
	origin *isspb.SBInstanceSignOrigin,
) *eventpb.SignOrigin

func SBSignResultEvent

func SBSignResultEvent(signature []byte, origin *isspb.SBInstanceSignOrigin) *isspb.SBInstanceEvent

func SigVerOrigin

func SigVerOrigin(ownModuleID t.ModuleID, origin *isspb.ISSSigVerOrigin) *eventpb.SigVerOrigin

func SignOrigin

func SignOrigin(ownModuleID t.ModuleID, origin *isspb.ISSSignOrigin) *eventpb.SignOrigin

func StableCheckpointMessage

func StableCheckpointMessage(stableCheckpoint *checkpoint.StableCheckpoint) *messagepb.Message

func StableCheckpointSigVerOrigin

func StableCheckpointSigVerOrigin(
	ownModuleID t.ModuleID,
	stableCheckpoint *checkpointpb.StableCheckpoint,
) *eventpb.SigVerOrigin

Types

type BucketGroup

type BucketGroup []*RequestBucket

BucketGroup represents a group of request buckets. It is used to represent both the set of all buckets used by ISS throughout the whole execution (across epochs), and subsets of it used to create request batches.

func NewBuckets

func NewBuckets(numBuckets int, logger logging.Logger) *BucketGroup

NewBuckets returns a new group of numBuckets initialized buckets. The logger will be used to output bucket-related debugging messages.

func (BucketGroup) CutBatch

func (buckets BucketGroup) CutBatch(maxBatchSize t.NumRequests) *requestpb.Batch

CutBatch assembles and returns a new request batch from requests in the bucket group, removing those requests from their respective buckets. The size of the returned batch will be min(buckets.TotalRequests(), maxBatchSize). If possible, requests are taken from every non-empty bucket in the group.

func (BucketGroup) Distribute

func (buckets BucketGroup) Distribute(leaders []t.NodeID, epoch t.EpochNr) map[t.NodeID][]int

Distribute takes a list of node IDs (representing the leaders of the given epoch) and assigns a list of bucket IDs to each of these (leader) node IDs, such that each bucket ID is assigned to exactly one leader. Distribute guarantees that if some node is part of `leaders` for infinitely many consecutive epochs (i.e., infinitely many invocations of Distribute with the `epoch` parameter values increasing monotonically), the ID of each bucket in the group will be assigned to that node infinitely many times. Distribute also makes a best effort to distribute the buckets evenly among the leaders. If `leaders` is empty, Distribute returns an empty map.

TODO: Update this to have a more sophisticated, liveness-ensuring implementation, to actually implement what is written above. An additional parameter with all the nodes (even non-leaders) might help there.
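As an illustration of the intended property (this is not the package's implementation, which per the TODO above does not yet ensure it), a simple rotation suffices when the leader set is stable:

// distributeRoundRobin is a hypothetical sketch: bucket b goes to the leader
// at index (b + epoch) modulo the number of leaders. With a stable leader
// set, every leader therefore receives every bucket ID infinitely often.
func distributeRoundRobin(numBuckets int, leaders []t.NodeID, epoch t.EpochNr) map[t.NodeID][]int {
	assignment := make(map[t.NodeID][]int, len(leaders))
	if len(leaders) == 0 {
		return assignment
	}
	for b := 0; b < numBuckets; b++ {
		leader := leaders[(b+int(epoch))%len(leaders)]
		assignment[leader] = append(assignment[leader], b)
	}
	return assignment
}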

func (BucketGroup) Get

func (buckets BucketGroup) Get(bID int) *RequestBucket

Get returns the bucket with id bID.

func (BucketGroup) RequestBucket

func (buckets BucketGroup) RequestBucket(req *requestpb.HashedRequest) *RequestBucket

RequestBucket returns the bucket from this group to which the given request maps. Note that this depends on the whole bucket group (not just the request), as RequestBucket employs a hash function to evenly distribute requests among the buckets in the group. Thus, the same request may map to some bucket in one group and to a different bucket in a different group, even if the former bucket is part of the latter group.
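The mapping can be pictured as hashing the request and reducing the digest modulo the group size. The sketch below is illustrative only; the package's actual hash function and its inputs may differ:

// bucketID is a hypothetical helper: the request digest is hashed (here with
// FNV-1a from hash/fnv, an arbitrary choice) and reduced modulo the number of
// buckets, which is why the same request can map to different buckets in
// groups of different sizes.
func bucketID(reqDigest []byte, numBuckets int) int {
	h := fnv.New32a()
	h.Write(reqDigest)
	return int(h.Sum32() % uint32(numBuckets))
}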

func (BucketGroup) Select

func (buckets BucketGroup) Select(bucketIDs []int) BucketGroup

Select returns a subgroup of buckets consisting only of buckets from this group with the given IDs. Select does not make deep copies of the selected buckets and the buckets underlying both the original and the new group are the same. If any of the given IDs is not represented in this group, Select panics.

func (BucketGroup) TotalRequests

func (buckets BucketGroup) TotalRequests() t.NumRequests

TotalRequests returns the total number of requests in all buckets of this group.

type CommitLogEntry

type CommitLogEntry struct {
	// Sequence number at which this entry has been ordered.
	Sn t.SeqNr

	// The delivered availability certificate data.
	// TODO: Replace by actual certificate when deterministic serialization of certificates is implemented.
	CertData []byte

	// The digest (hash) of the entry.
	Digest []byte

	// A flag indicating whether this entry is an actual certificate (false)
	// or whether the orderer delivered a special abort value (true).
	Aborted bool

	// In case Aborted is true, this field indicates the ID of the node
	// that is suspected to be the reason for the orderer aborting (usually the leader).
	// This information can be used by the leader selection policy at epoch transition.
	Suspect t.NodeID
}

The CommitLogEntry type represents an entry of the commit log, the final output of the ordering process. Whenever an orderer delivers an availability certificate (or a special abort value), it is inserted into the commit log in the form of a CommitLogEntry.

type ISS

type ISS struct {
	// contains filtered or unexported fields
}

The ISS type represents the ISS protocol module to be used when instantiating a node. The type must not be instantiated directly; only properly initialized values returned by the New() function should be used.

func New

func New(ownID t.NodeID, moduleConfig *ModuleConfig, params *ModuleParams, startingChkp *checkpoint.StableCheckpoint, logger logging.Logger) (*ISS, error)

New returns a new initialized instance of the ISS protocol module to be used when instantiating a mir.Node. Arguments:

  • ownID: the ID of the node being instantiated with ISS.
  • moduleConfig: the IDs of the modules ISS interacts with.
  • params: ISS protocol-specific configuration (e.g., segment length, proposal frequency). See the documentation of the ModuleParams type for details.
  • startingChkp: the stable checkpoint defining the initial state of the protocol.
  • logger: Logger the ISS implementation uses to output log messages.

func (*ISS) ApplyEvent

func (iss *ISS) ApplyEvent(event *eventpb.Event) (*events.EventList, error)

ApplyEvent receives one event and applies it to the ISS protocol state machine, potentially altering its state and producing a (potentially empty) list of more events to be applied to other modules.

func (*ISS) ApplyEvents

func (iss *ISS) ApplyEvents(eventsIn *events.EventList) (*events.EventList, error)

ApplyEvents receives a list of events, processes them sequentially, and returns a list of resulting events.

func (*ISS) ImplementsModule

func (iss *ISS) ImplementsModule()

The ImplementsModule method only serves the purpose of indicating that this is a Module and must not be called.

type LeaderSelectionPolicy

type LeaderSelectionPolicy interface {

	// Leaders returns the (ordered) list of leaders based on the given epoch e and on the state of this policy object.
	Leaders(e t.EpochNr) []t.NodeID

	// Suspect updates the state of the policy object by announcing that node `node` has been suspected in epoch `e`.
	Suspect(e t.EpochNr, node t.NodeID)

	// Reconfigure returns a new LeaderSelectionPolicy based on the state of the current one,
	// but using a new configuration.
	// TODO: Use the whole configuration, not just the node IDs.
	Reconfigure(nodeIDs []t.NodeID) LeaderSelectionPolicy
}

A LeaderSelectionPolicy implements the algorithm for selecting a set of leaders in each ISS epoch. In a nutshell, it gathers information about suspected leaders in the past epochs and uses it to calculate the set of leaders for future epochs. Its state can be updated using Suspect() and the leader set for an epoch is queried using Leaders(). A leader set policy must be deterministic, i.e., calling Leaders() after the same sequence of Suspect() invocations always returns the same set of leaders at every Node.
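To make the determinism requirement concrete, here is a hypothetical policy sketch (not part of this package; Reconfigure is omitted for brevity) that excludes nodes suspected in the previous epoch:

// blacklistPolicy is an illustrative LeaderSelectionPolicy fragment.
type blacklistPolicy struct {
	membership []t.NodeID
	suspected  map[t.EpochNr]map[t.NodeID]bool
}

// Suspect only records state; it makes no decisions.
func (p *blacklistPolicy) Suspect(e t.EpochNr, node t.NodeID) {
	if p.suspected[e] == nil {
		p.suspected[e] = make(map[t.NodeID]bool)
	}
	p.suspected[e][node] = true
}

// Leaders derives the leader set purely from the recorded suspicions, so every
// node that has seen the same sequence of Suspect() calls computes the same set.
func (p *blacklistPolicy) Leaders(e t.EpochNr) []t.NodeID {
	var leaders []t.NodeID
	for _, n := range p.membership {
		if e == 0 || !p.suspected[e-1][n] {
			leaders = append(leaders, n)
		}
	}
	if len(leaders) == 0 { // never return an empty leader set
		return append(leaders, p.membership...)
	}
	return leaders
}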

type ModuleConfig

type ModuleConfig struct {
	Self         t.ModuleID
	Net          t.ModuleID
	App          t.ModuleID
	Wal          t.ModuleID
	Hasher       t.ModuleID
	Crypto       t.ModuleID
	Timer        t.ModuleID
	Availability t.ModuleID
	Checkpoint   t.ModuleID
}

ModuleConfig contains the names of modules ISS depends on. The corresponding modules are expected by ISS to be stored under these keys by the Node.

func DefaultModuleConfig

func DefaultModuleConfig() *ModuleConfig

type ModuleParams

type ModuleParams struct {

	// The identities of all nodes that execute the protocol in the first epoch.
	// Must not be empty.
	InitialMembership map[t.NodeID]t.NodeAddress

	// Number of epochs by which to delay configuration changes.
	// If a configuration is agreed upon in epoch e, it will take effect in epoch e + 1 + configOffset.
	// Thus, in the "current" configuration, ConfigOffset subsequent configurations are already known.
	ConfigOffset int

	// The length of an ISS segment, in sequence numbers.
	// This is the number of commitLog entries each orderer needs to output in an epoch.
	// Depending on the number of leaders (and thus orderers), this will result in epochs of different lengths
	// (e.g., with SegmentLength 4 and 3 leaders, an epoch spans 3 * 4 = 12 sequence numbers).
	// If set to 0, the EpochLength parameter must be non-zero and will be used to calculate the length of the segments
	// such that their lengths sum up to EpochLength.
	// Must not be negative.
	SegmentLength int

	// The length of an ISS epoch, in sequence numbers.
	// If EpochLength is non-zero, the epoch will always have a fixed
	// length, regardless of the number of leaders.
	// In each epoch, the corresponding segment lengths will be calculated to sum up to EpochLength,
	// potentially resulting in different segment lengths across epochs as well as within an epoch.
	// If set to zero, SegmentLength must be non-zero and will be used directly to set the length of each segment.
	// Must not be negative.
	// TODO: EpochLength is not yet implemented; SegmentLength must be used for now.
	EpochLength int

	// The maximum time duration between two proposals of an orderer, where applicable.
	// For orderers that wait for an availability certificate to fill before proposing it (e.g. PBFT),
	// this parameter caps the waiting time in order to bound latency.
	// When MaxProposeDelay has elapsed since the last proposal made by an orderer,
	// the orderer proposes a new availability certificate.
	// Must not be negative.
	MaxProposeDelay time.Duration

	// Total number of buckets used by ISS.
	// In each epoch, these buckets are re-distributed evenly among the orderers.
	// Must be greater than 0.
	NumBuckets int

	// The logic for selecting leader nodes in each epoch.
	// For details see the documentation of the LeaderSelectionPolicy type.
	// ATTENTION: The leader selection policy is stateful!
	// Must not be nil.
	LeaderPolicy LeaderSelectionPolicy

	// Number of logical time ticks to wait until demanding retransmission of missing requests.
	// If a node receives a proposal containing requests that are not in the node's buckets,
	// it cannot accept the proposal.
	// In such a case, the node will wait for RequestNAckTimeout ticks
	// before trying to fetch those requests from other nodes.
	// Must be positive.
	RequestNAckTimeout int

	// Maximal number of bytes used for message backlogging buffers
	// (only message payloads are counted towards MsgBufCapacity).
	// On reception of a message that the node is not yet ready to process
	// (e.g., a message from a future epoch received from another node that already transitioned to that epoch),
	// the message is stored in a buffer for later processing (e.g., when this node also transitions to that epoch).
	// This total buffer capacity is evenly split among multiple buffers, one for each node,
	// so that one misbehaving node cannot exhaust the whole buffer space.
	// The most recently received messages that together do not exceed the capacity are stored.
	// If the capacity is set to 0, all messages that cannot yet be processed are dropped on reception.
	// Must not be negative.
	MsgBufCapacity int

	// Number of most recent epochs that are older than the latest stable checkpoint.
	RetainedEpochs int

	// Every CatchUpTimerPeriod, a node checks whether other nodes have fallen behind
	// and, if so, sends them the latest state.
	CatchUpTimerPeriod time.Duration

	// Time interval for repeated retransmission of checkpoint messages.
	CheckpointResendPeriod time.Duration

	// Timeouts and retransmission periods for the PBFT sub-protocol.
	// TODO: Separate these into a sub-group of the ISS params, maybe even use a field of type PBFTConfig in ModuleParams.
	PBFTDoneResendPeriod         time.Duration
	PBFTCatchUpDelay             time.Duration
	PBFTViewChangeSNTimeout      time.Duration
	PBFTViewChangeSegmentTimeout time.Duration
	PBFTViewChangeResendPeriod   time.Duration
}

The ModuleParams type defines all the ISS configuration parameters. Note that some fields specify delays in ticks of the logical clock. To obtain real time delays, these need to be multiplied by the period of the ticker provided to the Node at runtime.

func DefaultParams

func DefaultParams(initialMembership map[t.NodeID]t.NodeAddress) *ModuleParams

DefaultParams returns the default configuration for a given membership. There is no guarantee that this configuration ensures good performance, but it will pass the CheckParams test. DefaultParams is intended for use during testing and hello-world examples. A proper deployment is expected to craft a custom configuration, for which DefaultParams can serve as a starting point.

func (*ModuleParams) AdjustSpeed

func (mp *ModuleParams) AdjustSpeed(maxProposeDelay time.Duration) *ModuleParams

AdjustSpeed sets multiple ISS parameters (e.g. view change timeouts) to their default values relative to maxProposeDelay. It can be useful to make the whole protocol run faster or slower. For example, for a large maxProposeDelay, the view change timeouts must be increased correspondingly, otherwise the view change can kick in before a node makes a proposal. AdjustSpeed makes these adjustments automatically.
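A typical flow, sketched below with illustrative values (the concrete numbers are assumptions, not recommendations), is to start from the defaults, rescale, tweak individual fields, and validate:

// Derive a configuration from the defaults, rescaled around a 1s proposal delay.
params := iss.DefaultParams(initialMembership).AdjustSpeed(time.Second)
params.SegmentLength = 8 // illustrative tweak of the epoch structure
if err := iss.CheckParams(params); err != nil {
	// The tweaked configuration violates one of the documented constraints.
	panic(err)
}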

type PBFTConfig

type PBFTConfig struct {

	// The IDs of all nodes that execute this instance of the protocol.
	// Must not be empty.
	Membership []t.NodeID

	// ID of the availability module from which this PBFT instance gets its availability certificates.
	AvailabilityModuleID t.ModuleID

	// The maximum time duration between two proposals of new certificates during normal operation.
	// This parameter caps the waiting time in order to bound latency.
	// When MaxProposeDelay has elapsed since the last proposal,
	// the protocol tries to propose a new availability certificate.
	// Must not be negative.
	MaxProposeDelay time.Duration

	// When a node has committed all certificates in a segment, it will periodically send the Done message
	// in intervals of DoneResendPeriod.
	DoneResendPeriod time.Duration

	// After a node learns about a quorum of other nodes finishing a segment,
	// it waits for CatchUpDelay before requesting missing committed certificates from other nodes.
	CatchUpDelay time.Duration

	// Maximal number of bytes used for message backlogging buffers
	// (only message payloads are counted towards MsgBufCapacity).
	// Same as ModuleParams.MsgBufCapacity, but used only for one instance of PBFT.
	// Must not be negative.
	MsgBufCapacity int

	// Per-sequence-number view change timeout for view 0.
	// If no certificate is delivered by a PBFT instance within this timeout, the node triggers a view change.
	// With each new view, the timeout doubles (without changing this value).
	ViewChangeSNTimeout time.Duration

	// View change timeout for view 0 for the whole segment.
	// If not all certificates of the associated segment are delivered by a PBFT instance within this timeout,
	// the node triggers a view change.
	// With each new view, the timeout doubles (without changing this value).
	ViewChangeSegmentTimeout time.Duration

	// Time period between resending a ViewChange message.
	// ViewChange messages need to be resent periodically to preserve liveness.
	// Otherwise, the system could get stuck if a ViewChange message is dropped by the network.
	ViewChangeResendPeriod time.Duration
}

PBFTConfig holds PBFT-specific configuration parameters used by a concrete instance of PBFT. They are mostly inherited from the ISS configuration at the time of creating the PBFT instance.

type RequestBucket

type RequestBucket struct {

	// Numeric ID of the bucket.
	// A hash function maps each request to a bucket ID.
	ID int
	// contains filtered or unexported fields
}

RequestBucket represents a subset of received requests (called a Bucket in ISS) that retains the order in which the requests have been added. Each request deterministically maps to a single bucket. A hash function (BucketGroup.RequestBucket()) decides which bucket a request falls into.

The implementation of a bucket must support efficient additions (new requests), removals of the n oldest requests (cutting a batch), as well as removals of random requests (committing requests).
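One classic structure with these properties, shown as a sketch only (not necessarily this package's internal representation), combines an arrival-ordered linked list with digest-keyed indices:

// bucketStore is a hypothetical layout: the list preserves arrival order for
// cheap removal of the oldest requests, while the maps give O(1) membership
// checks, random removals, and enforcement of the rule that removed requests
// cannot be re-added.
type bucketStore struct {
	order   *list.List               // container/list; oldest request at the front
	byHash  map[string]*list.Element // request digest -> element in order
	removed map[string]bool          // digests that must never be re-added
}

func (s *bucketStore) add(digest string, req interface{}) bool {
	if s.removed[digest] || s.byHash[digest] != nil {
		return false // already added or previously removed
	}
	s.byHash[digest] = s.order.PushBack(req)
	return true
}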

func (*RequestBucket) Add

func (b *RequestBucket) Add(req *requestpb.HashedRequest) bool

Add adds a new request to the bucket, provided that the request has not yet been added. Note that even requests that have been added and removed cannot be added again (except through request resurrection; see the Resurrect() method). Returns true if the request was not in the bucket and has just been added, false if the request has already been added.

func (*RequestBucket) Contains

func (b *RequestBucket) Contains(req *requestpb.HashedRequest) bool

Contains returns true if the given request is in the bucket, false otherwise. Only requests that have been added but not removed count as contained in the bucket, as well as resurrected requests that have not been removed since resurrection.

func (*RequestBucket) Len

func (b *RequestBucket) Len() int

Len returns the number of requests in the bucket.

func (*RequestBucket) Remove

func (b *RequestBucket) Remove(req *requestpb.HashedRequest)

Remove removes a request from the bucket. Note that even after removal, the request cannot be added again using the Add() method. Moreover, removing a request from the bucket, even if the request is not present in the bucket, also prevents the request from being added in the future using Add(). It can be, however, returned to the bucket during request resurrection, see Resurrect().

func (*RequestBucket) RemoveFirst

func (b *RequestBucket) RemoveFirst(n int, acc []*requestpb.HashedRequest) []*requestpb.HashedRequest

RemoveFirst removes the first up to n requests from the bucket and appends them to the accumulator acc. Returns the resulting slice obtained by appending the removed requests to acc.

func (*RequestBucket) Resurrect

func (b *RequestBucket) Resurrect(req *requestpb.HashedRequest)

Resurrect re-adds a previously removed request to the bucket, effectively undoing the removal of the request. The request is added to the "front" of the bucket, i.e., it will be the first request to be removed by RemoveFirst(). Request resurrection is performed when a leader proposes a batch from this bucket, but, for some reason, the batch is not committed in the same epoch. In such a case, the requests contained in the batch need to be resurrected (i.e., put back in their respective buckets) so they can be committed in a future epoch.
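Putting the lifecycle together, the flow reads roughly as follows; this is a sketch with error handling elided, and uncommittedRequests is a hypothetical helper for extracting the batch's requests:

// A leader cuts a batch from its buckets and proposes it.
batch := buckets.CutBatch(maxBatchSize)

// ... the epoch ends and the batch turns out not to have been committed ...

// Put the requests back so that a future epoch can commit them.
for _, req := range uncommittedRequests(batch) { // hypothetical helper
	buckets.RequestBucket(req).Resurrect(req)
}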

type SimpleLeaderPolicy

type SimpleLeaderPolicy struct {
	Membership []t.NodeID
}

The SimpleLeaderPolicy is a trivial leader selection policy. It must be initialized with a set of node IDs and always returns that full set as leaders, regardless of which nodes have been suspected. In other words, with this policy, every node is a leader in every epoch.

func (*SimpleLeaderPolicy) Leaders

func (simple *SimpleLeaderPolicy) Leaders(e t.EpochNr) []t.NodeID

Leaders always returns the whole membership for the SimpleLeaderPolicy. All nodes are always leaders.

func (*SimpleLeaderPolicy) Reconfigure

func (simple *SimpleLeaderPolicy) Reconfigure(nodeIDs []t.NodeID) LeaderSelectionPolicy

Reconfigure informs the leader selection policy about a change in the membership.

func (*SimpleLeaderPolicy) Suspect

func (simple *SimpleLeaderPolicy) Suspect(e t.EpochNr, node t.NodeID)

Suspect does nothing for the SimpleLeaderPolicy.
