clmimicry

package

README

CL-Mimicry: Ethereum Consensus Layer P2P Network Monitoring

CL-Mimicry is a sophisticated consensus layer P2P network monitoring client that mimics validator behavior to collect libp2p and gossipsub events from Ethereum consensus networks. It provides advanced trace-based sampling and sharding capabilities for scalable, distributed monitoring of Ethereum network activity.

Overview

CL-Mimicry connects to Ethereum consensus network nodes and captures libp2p trace events, providing insights into:

  • Gossipsub Message Flow: Beacon blocks, attestations, and blob sidecars
  • Peer Behavior: Connection patterns, message propagation, and network topology
  • Protocol Performance: Message timing, duplicate detection, and peer interactions
  • Network Health: RPC communication, consensus participation, and validator activity

The system uses consistent hashing with SipHash-2-4 algorithm to enable distributed processing across multiple instances while maintaining deterministic message routing.
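
As a rough sketch of how that routing works, using the GetShard and IsShardActive helpers documented below (the import path, message ID, and shard list are illustrative assumptions):

package main

import (
	"fmt"

	// Assumed import path for this package.
	"github.com/ethpandaops/xatu/pkg/clmimicry"
)

func main() {
	// Illustrative message ID; real IDs come from gossipsub trace events.
	msgID := "0x1234abcd"

	// Deterministically map the message ID onto one of 512 shards.
	shard := clmimicry.GetShard(msgID, 512)

	// Shards this instance is responsible for (illustrative).
	activeShards := []uint64{0, 1, 2, 3}

	if clmimicry.IsShardActive(shard, activeShards) {
		fmt.Printf("processing message %s on shard %d\n", msgID, shard)
	} else {
		fmt.Printf("skipping message %s (shard %d is not active)\n", msgID, shard)
	}
}

Because the mapping is deterministic, multiple instances configured with disjoint active shard sets can split the same traffic without coordination.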

Sharding System

CL-Mimicry uses a unified sharding system built around a streamlined event categorization model:

  • Event categorization: Events grouped by sharding capabilities (Groups A-D)
  • Configurable shards: A consistent shard count across all configurations (defaulting to 512)
  • Topic-first design: Prioritize topic-based sharding where available

How It Works
┌─────────────┐
│Event Arrives│
└──────┬──────┘
       │
       ▼
┌──────────────┐     ┌─────────────────┐
│Get Event Info├────►│ Event Category? │
└──────────────┘     └────────┬────────┘
                              │
        ┌─────────────────────┼─────────────────────┬─────────────────────┐
        ▼                     ▼                     ▼                     ▼
   ┌─────────┐         ┌─────────┐           ┌─────────┐           ┌─────────┐
   │ Group A │         │ Group B │           │ Group C │           │ Group D │
   │Topic+Msg│         │Topic Only│          │Msg Only │           │ No Keys │
   └────┬────┘         └────┬────┘           └────┬────┘           └────┬────┘
        │                   │                     │                     │
        ▼                   ▼                     ▼                     ▼
   Topic Config?       Topic Config?         Default Shard         Enabled?
        │                   │                     │                     │
     Yes/No              Yes/No                   │                  Yes/No
        │                   │                     │                     │
        ▼                   ▼                     ▼                     ▼
   Shard by Msg       Shard by Topic        Shard by Msg         Process/Drop

Event Categorization

Events are categorized into four groups based on their available sharding keys:

Group A: Topic + MsgID Events

Events with both topic and message ID, enabling full sharding flexibility:

  • PUBLISH_MESSAGE, DELIVER_MESSAGE, DUPLICATE_MESSAGE, REJECT_MESSAGE
  • GOSSIPSUB_BEACON_BLOCK, GOSSIPSUB_BEACON_ATTESTATION, GOSSIPSUB_BLOB_SIDECAR
  • RPC_META_MESSAGE, RPC_META_CONTROL_IHAVE

Sharding: Uses message ID for sharding, with topic-based configuration

Group B: Topic-Only Events

Events with only topic information:

  • JOIN, LEAVE, GRAFT, PRUNE
  • RPC_META_CONTROL_GRAFT, RPC_META_CONTROL_PRUNE, RPC_META_SUBSCRIPTION

Sharding: Uses topic hash for sharding decisions

Group C: MsgID-Only Events

Events with only message ID:

  • RPC_META_CONTROL_IWANT, RPC_META_CONTROL_IDONTWANT

Sharding: Uses message ID with default configuration

Group D: No Sharding Key Events

Events without sharding keys:

  • ADD_PEER, REMOVE_PEER, CONNECTED, DISCONNECTED
  • RECV_RPC, SEND_RPC, DROP_RPC (parent events only)
  • HANDLE_METADATA, HANDLE_STATUS

Sharding: All-or-nothing based on configuration
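
All four groups funnel into a single decision point. A minimal sketch of exercising it via the UnifiedSharder and EventCategorizer types documented below (the import paths, event enum value, topic, and message ID are illustrative assumptions; in practice the configuration is loaded from YAML rather than constructed literally):

package main

import (
	"log"

	// Assumed import paths.
	"github.com/ethpandaops/xatu/pkg/clmimicry"
	"github.com/ethpandaops/xatu/pkg/proto/xatu"
)

func main() {
	cfg := &clmimicry.ShardingConfig{
		Topics: map[string]*clmimicry.TopicShardingConfig{
			".*beacon_block.*": {TotalShards: 512, ActiveShards: []uint64{0, 1, 2, 3}},
		},
	}

	sharder, err := clmimicry.NewUnifiedSharder(cfg, true)
	if err != nil {
		log.Fatal(err)
	}

	categorizer := clmimicry.NewEventCategorizer()

	// Group A example: a gossipsub beacon block carries both a topic and a message ID.
	eventType := xatu.Event_LIBP2P_TRACE_GOSSIPSUB_BEACON_BLOCK // assumed enum value
	group := categorizer.GetShardingGroup(eventType)

	process, reason := sharder.ShouldProcess(
		eventType,
		"0x1234abcd",                             // illustrative message ID
		"/eth2/00000000/beacon_block/ssz_snappy", // illustrative topic
	)
	log.Printf("group=%v process=%v reason=%s", group, process, reason)
}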

Configuration

The configuration focuses on topic-based patterns with simplified sharding:

Basic Structure
sharding:
  # Topic-based sharding configuration
  topics:
    ".*beacon_block.*":
      totalShards: 512          # Always 512 for consistency
      activeShards: ["0-511"]   # 100% sampling

    ".*beacon_attestation.*":
      totalShards: 512
      activeShards: ["0-25"]    # 26/512 = ~5% sampling

    ".*":                       # Catch-all pattern
      totalShards: 512
      activeShards: ["0-127"]   # 25% sampling

  # Events without sharding keys (Group D)
  noShardingKeyEvents:
    enabled: true               # Process all Group D events

events:
  # Enable/disable specific event types
  recvRpcEnabled: true
  gossipSubBeaconBlockEnabled: true
  gossipSubAttestationEnabled: true
  # ... etc
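
A minimal sketch of loading such a file into the Config type documented below (the package import path is an assumption; gopkg.in/yaml.v3 is suggested by the UnmarshalYAML signature, and default values may be applied elsewhere in the application):

package main

import (
	"log"
	"os"

	"gopkg.in/yaml.v3"

	// Assumed import path for this package.
	"github.com/ethpandaops/xatu/pkg/clmimicry"
)

func main() {
	raw, err := os.ReadFile("config.yaml")
	if err != nil {
		log.Fatal(err)
	}

	var cfg clmimicry.Config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}

	// Range syntax such as "0-511" is expanded into explicit shard numbers
	// by TopicShardingConfig.UnmarshalYAML during decoding.
	log.Println(cfg.Sharding.LogSummary())

	if err := cfg.Validate(); err != nil {
		log.Fatal(err)
	}
}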

Active Shards Syntax

Flexible syntax for specifying which shards to process:

# Range syntax (recommended)
activeShards: ["0-255"]         # 256 shards = 50% sampling

# Individual shards
activeShards: [0, 1, 5, 10]     # Specific shards only

# Mixed syntax
activeShards: ["0-10", 50, "100-150"]  # Ranges and individuals

# Common sampling rates with 512 total shards:
activeShards: ["0-511"]         # 100% (all shards)
activeShards: ["0-255"]         # 50%  (256 shards)
activeShards: ["0-127"]         # 25%  (128 shards)
activeShards: ["0-25"]          # 5%   (26 shards)
activeShards: ["0-4"]           # 1%   (5 shards)
activeShards: [0]               # 0.2% (1 shard)
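
The package expands this range syntax internally (in TopicShardingConfig.UnmarshalYAML). A hypothetical expandRange helper, shown only to illustrate the expansion and not part of this package, might look like this:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandRange is a hypothetical helper illustrating how a spec like "0-255"
// becomes explicit shard numbers; it is not part of this package.
func expandRange(spec string) ([]uint64, error) {
	parts := strings.SplitN(spec, "-", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("invalid range %q", spec)
	}

	start, err := strconv.ParseUint(strings.TrimSpace(parts[0]), 10, 64)
	if err != nil {
		return nil, err
	}

	end, err := strconv.ParseUint(strings.TrimSpace(parts[1]), 10, 64)
	if err != nil {
		return nil, err
	}

	if end < start {
		return nil, fmt.Errorf("range %q ends before it starts", spec)
	}

	shards := make([]uint64, 0, end-start+1)
	for s := start; s <= end; s++ {
		shards = append(shards, s)
	}

	return shards, nil
}

func main() {
	shards, _ := expandRange("0-255")
	fmt.Printf("%d shards => %.0f%% sampling of 512\n", len(shards), float64(len(shards))/512*100)
}

With 512 total shards, the sampling rate is simply the number of active shards divided by 512 (for example, 256/512 = 50%).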

Pattern Matching

Topics are matched against regex patterns; when several patterns match, the most specific one takes precedence:

topics:
  # Most specific patterns first
  ".*beacon_attestation_[0-9]+.*":  # Subnet-specific
    activeShards: ["0-12"]          # 2.5% sampling

  ".*beacon_attestation.*":         # General attestations
    activeShards: ["0-25"]          # 5% sampling

  ".*":                            # Everything else
    activeShards: ["0-127"]        # 25% sampling

Documentation

Constants

const (
	// libp2p pubsub events.
	TraceEvent_HANDLE_MESSAGE = "HANDLE_MESSAGE"

	// libp2p core networking events.
	TraceEvent_CONNECTED           = "CONNECTED"
	TraceEvent_DISCONNECTED        = "DISCONNECTED"
	TraceEvent_SYNTHETIC_HEARTBEAT = "SYNTHETIC_HEARTBEAT"

	// RPC events.
	TraceEvent_HANDLE_METADATA = "HANDLE_METADATA"
	TraceEvent_HANDLE_STATUS   = "HANDLE_STATUS"
)

Defines events not supplied by the libp2p proto packages.

const (
	// DefaultTotalShards is the default number of shards if not specified
	DefaultTotalShards = 512
)

Variables

This section is empty.

Functions

func GetGossipTopics added in v1.1.19

func GetGossipTopics(event *host.TraceEvent) []string

GetGossipTopics extracts all gossip topics from a trace event if available. Returns a slice of unique topics found in the event.

func GetMsgID added in v1.1.19

func GetMsgID(event *host.TraceEvent) string

GetMsgID extracts the message ID from the event for sharding. We only shard based on message IDs, not peer IDs.

func GetShard added in v1.1.1

func GetShard(shardingKey string, totalShards uint64) uint64

GetShard calculates which shard a message belongs to based on its ID.

This function uses SipHash to consistently map message IDs (typically hashes themselves) to specific shards, ensuring even distribution across the available shards.

Key benefits:

  • Deterministic: The same message ID always maps to the same shard
  • Balanced: Messages are evenly distributed across all shards

Parameters:

  • shardingKey: The identifier to use for sharding (often a hash like "0x1234...abcd")
  • totalShards: The total number of available shards (e.g., 64)

Returns:

  • The shard number (0 to totalShards-1) where this message should be processed

func IsShardActive added in v1.1.1

func IsShardActive(shard uint64, activeShards []uint64) bool

IsShardActive checks if a shard is in the active shards list.

func SipHash added in v1.1.1

func SipHash(key [16]byte, data []byte) uint64

SipHash implements the SipHash-2-4 algorithm, a fast and efficient hash function designed for message authentication and hash-table lookups.

Key features of SipHash:

  • Deterministic output for identical inputs
  • Even distribution of outputs across the range of uint64

When used for sharding (via GetShard), SipHash provides:

  • Consistent distribution of messages across shards
  • Deterministic routing where the same message always maps to the same shard

Parameters:

  • key: A 16-byte secret key (can be fixed for consistent sharding)
  • data: The message bytes to hash

Returns:

  • A 64-bit unsigned integer hash value
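
A minimal usage sketch (the import path, fixed key, and message ID are illustrative; GetShard layers this kind of mapping on top of SipHash):

package main

import (
	"fmt"

	// Assumed import path for this package.
	"github.com/ethpandaops/xatu/pkg/clmimicry"
)

func main() {
	// A fixed 16-byte key keeps shard assignment consistent across instances;
	// the key bytes here are illustrative.
	key := [16]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}

	h := clmimicry.SipHash(key, []byte("0x1234abcd"))

	// One simple way to reduce a 64-bit hash to a shard number.
	fmt.Printf("hash=%d shard=%d\n", h, h%512)
}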

Types

type CompiledPattern added in v1.1.19

type CompiledPattern struct {
	Pattern *regexp.Regexp
	Config  *TopicShardingConfig
	// EventTypeConstraint specifies which event types this pattern applies to.
	// Empty string means it applies to all events (backward compatibility).
	// Can be an exact event name (e.g., "LIBP2P_TRACE_GOSSIPSUB_BEACON_ATTESTATION")
	// or a wildcard pattern (e.g., "LIBP2P_TRACE_RPC_META_*")
	EventTypeConstraint string
}

CompiledPattern holds a compiled regex pattern and its config

type Config

type Config struct {
	LoggingLevel string  `yaml:"logging" default:"info"`
	MetricsAddr  string  `yaml:"metricsAddr" default:":9090"`
	PProfAddr    *string `yaml:"pprofAddr"`
	ProbeAddr    *string `yaml:"probeAddr"`

	// The name of the mimicry
	Name string `yaml:"name"`

	// Ethereum configuration
	Ethereum ethereum.Config `yaml:"ethereum"`

	// Outputs configuration
	Outputs []output.Config `yaml:"outputs"`

	// Labels configures the mimicry with labels
	Labels map[string]string `yaml:"labels"`

	// NTP Server to use for clock drift correction
	NTPServer string `yaml:"ntpServer" default:"time.google.com"`

	// Node is the configuration for the node
	Node NodeConfig `yaml:"node"`

	// Events is the configuration for the events
	Events EventConfig `yaml:"events"`

	// Sharding is the configuration for event sharding
	Sharding ShardingConfig `yaml:"sharding"`
}

func (*Config) ApplyOverrides added in v1.0.15

func (c *Config) ApplyOverrides(o *Override, log logrus.FieldLogger) error

ApplyOverrides applies any overrides to the config.

func (*Config) CreateSinks

func (c *Config) CreateSinks(log logrus.FieldLogger) ([]output.Sink, error)

func (*Config) Validate

func (c *Config) Validate() error

type DutiesProvider added in v1.2.1

type DutiesProvider interface {
	GetValidatorIndex(epoch phase0.Epoch, slot phase0.Slot, committeeIndex phase0.CommitteeIndex, position uint64) (phase0.ValidatorIndex, error)
}

DutiesProvider provides validator duty information

type EventCategorizer added in v1.1.19

type EventCategorizer struct {
	// contains filtered or unexported fields
}

EventCategorizer manages event categorization for sharding decisions

func NewEventCategorizer added in v1.1.19

func NewEventCategorizer() *EventCategorizer

NewEventCategorizer creates and initializes an EventCategorizer with all known events

func (*EventCategorizer) GetAllEventsByGroup added in v1.1.19

func (ec *EventCategorizer) GetAllEventsByGroup() map[ShardingGroup][]xatu.Event_Name

GetAllEventsByGroup returns all events categorized by their sharding group

func (*EventCategorizer) GetEventInfo added in v1.1.19

func (ec *EventCategorizer) GetEventInfo(eventType xatu.Event_Name) (*EventInfo, bool)

GetEventInfo returns information about an event type

func (*EventCategorizer) GetGroupAEvents added in v1.1.19

func (ec *EventCategorizer) GetGroupAEvents() []xatu.Event_Name

GetGroupAEvents returns all events that have both Topic and MsgID

func (*EventCategorizer) GetGroupBEvents added in v1.1.19

func (ec *EventCategorizer) GetGroupBEvents() []xatu.Event_Name

GetGroupBEvents returns all events that have only Topic

func (*EventCategorizer) GetGroupCEvents added in v1.1.19

func (ec *EventCategorizer) GetGroupCEvents() []xatu.Event_Name

GetGroupCEvents returns all events that have only MsgID

func (*EventCategorizer) GetGroupDEvents added in v1.1.19

func (ec *EventCategorizer) GetGroupDEvents() []xatu.Event_Name

GetGroupDEvents returns all events that have no sharding keys

func (*EventCategorizer) GetShardingGroup added in v1.1.19

func (ec *EventCategorizer) GetShardingGroup(eventType xatu.Event_Name) ShardingGroup

GetShardingGroup returns the sharding group for an event type

func (*EventCategorizer) IsMetaEvent added in v1.1.19

func (ec *EventCategorizer) IsMetaEvent(eventType xatu.Event_Name) bool

IsMetaEvent returns whether an event is an RPC meta event

type EventConfig added in v0.0.169

type EventConfig struct {
	RecvRPCEnabled                    bool `yaml:"recvRpcEnabled" default:"false"`
	SendRPCEnabled                    bool `yaml:"sendRpcEnabled" default:"false"`
	DropRPCEnabled                    bool `yaml:"dropRpcEnabled" default:"false"`
	RpcMetaControlIHaveEnabled        bool `yaml:"rpcMetaControlIHaveEnabled" default:"false"`
	RpcMetaControlIWantEnabled        bool `yaml:"rpcMetaControlIWantEnabled" default:"false"`
	RpcMetaControlIDontWantEnabled    bool `yaml:"rpcMetaControlIDontWantEnabled" default:"false"`
	RpcMetaControlGraftEnabled        bool `yaml:"rpcMetaControlGraftEnabled" default:"false"`
	RpcMetaControlPruneEnabled        bool `yaml:"rpcMetaControlPruneEnabled" default:"false"`
	RpcMetaSubscriptionEnabled        bool `yaml:"rpcMetaSubscriptionEnabled" default:"false"`
	RpcMetaMessageEnabled             bool `yaml:"rpcMetaMessageEnabled" default:"false"`
	AddPeerEnabled                    bool `yaml:"addPeerEnabled" default:"true"`
	RemovePeerEnabled                 bool `yaml:"removePeerEnabled" default:"true"`
	ConnectedEnabled                  bool `yaml:"connectedEnabled" default:"true"`
	DisconnectedEnabled               bool `yaml:"disconnectedEnabled" default:"true"`
	SyntheticHeartbeatEnabled         bool `yaml:"syntheticHeartbeatEnabled" default:"true"`
	JoinEnabled                       bool `yaml:"joinEnabled" default:"true"`
	LeaveEnabled                      bool `yaml:"leaveEnabled" default:"false"`
	GraftEnabled                      bool `yaml:"graftEnabled" default:"false"`
	PruneEnabled                      bool `yaml:"pruneEnabled" default:"false"`
	PublishMessageEnabled             bool `yaml:"publishMessageEnabled" default:"false"`
	RejectMessageEnabled              bool `yaml:"rejectMessageEnabled" default:"false"`
	DuplicateMessageEnabled           bool `yaml:"duplicateMessageEnabled" default:"false"`
	DeliverMessageEnabled             bool `yaml:"deliverMessageEnabled" default:"false"`
	HandleMetadataEnabled             bool `yaml:"handleMetadataEnabled" default:"true"`
	HandleStatusEnabled               bool `yaml:"handleStatusEnabled" default:"true"`
	GossipSubBeaconBlockEnabled       bool `yaml:"gossipSubBeaconBlockEnabled" default:"true"`
	GossipSubAttestationEnabled       bool `yaml:"gossipSubAttestationEnabled" default:"true"`
	GossipSubAggregateAndProofEnabled bool `yaml:"gossipSubAggregateAndProofEnabled" default:"true"`
	GossipSubBlobSidecarEnabled       bool `yaml:"gossipSubBlobSidecarEnabled" default:"true"`
	GossipSubDataColumnSidecarEnabled bool `yaml:"gossipSubDataColumnSidecarEnabled" default:"true"`
}

EventConfig represents configuration for all event types.

func (*EventConfig) Validate added in v0.0.169

func (e *EventConfig) Validate() error

Validate validates the event config.

type EventInfo added in v1.1.19

type EventInfo struct {
	Type          xatu.Event_Name
	ShardingGroup ShardingGroup
	HasTopic      bool
	HasMsgID      bool
	IsMeta        bool // True for RPC meta events
}

EventInfo contains metadata about an event type

type FilteredMessageWithIndex added in v1.1.9

type FilteredMessageWithIndex struct {
	MessageID     *wrapperspb.StringValue
	OriginalIndex uint32
}

FilteredMessageWithIndex represents a filtered message with its original index

type MetaProvider added in v1.2.2

type MetaProvider interface {
	GetClientMeta(ctx context.Context) (*xatu.ClientMeta, error)
}

MetaProvider provides client metadata

type MetadataProvider added in v1.2.1

type MetadataProvider interface {
	Wallclock() *ethwallclock.EthereumBeaconChain
	ClockDrift() *time.Duration
	Network() *xatu.ClientMeta_Ethereum_Network
}

MetadataProvider provides ethereum network metadata

type Metrics

type Metrics struct {
	// contains filtered or unexported fields
}

Metrics provides simplified metrics for the sharding system

func NewMetrics

func NewMetrics(namespace string) *Metrics

NewMetrics creates a new metrics instance with simplified metrics

func (*Metrics) AddDecoratedEvent

func (m *Metrics) AddDecoratedEvent(count float64, eventType, network string)

AddDecoratedEvent tracks decorated events (before sharding)

func (*Metrics) AddEvent added in v1.1.19

func (m *Metrics) AddEvent(eventType, network string)

AddEvent records that an event was received

func (*Metrics) AddProcessedMessage added in v1.1.1

func (m *Metrics) AddProcessedMessage(eventType, network string)

AddProcessedMessage records that an event was processed

func (*Metrics) AddShardingDecision added in v1.1.19

func (m *Metrics) AddShardingDecision(eventType, reason, network string)

AddShardingDecision records the reason for a sharding decision

func (*Metrics) AddSkippedMessage added in v1.1.1

func (m *Metrics) AddSkippedMessage(eventType, network string)

AddSkippedMessage records that an event was filtered out

type MetricsCollector added in v1.2.1

type MetricsCollector interface {
	AddEvent(eventType, network string)
	AddProcessedMessage(eventType, network string)
	AddSkippedMessage(eventType, network string)
	AddShardingDecision(eventType, reason, network string)
	AddDecoratedEvent(count float64, eventType, network string)
}

MetricsCollector interface for event processing metrics

type Mimicry

type Mimicry struct {
	Config *Config
	// contains filtered or unexported fields
}

func New

func New(ctx context.Context, log logrus.FieldLogger, config *Config, overrides *Override) (*Mimicry, error)

func (*Mimicry) ClockDrift added in v1.2.1

func (m *Mimicry) ClockDrift() *time.Duration

func (*Mimicry) GetClientMeta added in v1.2.2

func (m *Mimicry) GetClientMeta(ctx context.Context) (*xatu.ClientMeta, error)

func (*Mimicry) GetProcessor added in v1.2.1

func (m *Mimicry) GetProcessor() *Processor

GetProcessor returns the processor for testing purposes

func (*Mimicry) GetValidatorIndex added in v1.2.1

func (m *Mimicry) GetValidatorIndex(epoch phase0.Epoch, slot phase0.Slot, committeeIndex phase0.CommitteeIndex, position uint64) (phase0.ValidatorIndex, error)

Implement DutiesProvider interface

func (*Mimicry) HandleDecoratedEvent added in v1.2.1

func (m *Mimicry) HandleDecoratedEvent(ctx context.Context, event *xatu.DecoratedEvent) error

Implement OutputHandler interface

func (*Mimicry) HandleDecoratedEvents added in v1.2.1

func (m *Mimicry) HandleDecoratedEvents(ctx context.Context, events []*xatu.DecoratedEvent) error

func (*Mimicry) Network added in v1.2.1

func (m *Mimicry) Network() *xatu.ClientMeta_Ethereum_Network

func (*Mimicry) ServeMetrics

func (m *Mimicry) ServeMetrics(ctx context.Context) error

func (*Mimicry) ServePProf

func (m *Mimicry) ServePProf(ctx context.Context) error

func (*Mimicry) ServeProbe added in v0.0.163

func (m *Mimicry) ServeProbe(ctx context.Context) error

func (*Mimicry) Start

func (m *Mimicry) Start(ctx context.Context) error

func (*Mimicry) Wallclock added in v1.2.1

func (m *Mimicry) Wallclock() *ethwallclock.EthereumBeaconChain

Implement MetadataProvider interface

type NoShardingKeyConfig added in v1.1.19

type NoShardingKeyConfig struct {
	// Whether to record events without sharding keys (default: true)
	Enabled bool `yaml:"enabled" default:"true"`
}

NoShardingKeyConfig defines behavior for events without sharding keys

type NodeConfig

type NodeConfig struct {
	// The private key for the libp2p host and local enode in hex format
	PrivateKeyStr string `yaml:"privateKeyStr" default:""`

	// General timeout when communicating with other network participants
	DialTimeout time.Duration `yaml:"dialTimeout" default:"5s"`

	// The address information of the local ethereum [enode.Node].
	Devp2pHost string `yaml:"devp2pHost" default:"0.0.0.0"`
	Devp2pPort int    `yaml:"devp2pPort" default:"0"`

	// The address information of the local libp2p host
	Libp2pHost string `yaml:"libp2pHost" default:"0.0.0.0"`
	Libp2pPort int    `yaml:"libp2pPort" default:"0"`

	// The address information where the Beacon API or Prysm's custom API is accessible at
	PrysmHost     string `yaml:"prysmHost" default:"127.0.0.1"`
	PrysmPortHTTP int    `yaml:"prysmPortHttp" default:"3500"`
	PrysmPortGRPC int    `yaml:"prysmPortGrpc" default:"4000"`
	PrysmUseTLS   bool   `yaml:"prysmUseTls" default:"false"`

	// The maximum number of peers our libp2p host can be connected to.
	MaxPeers int `yaml:"maxPeers" default:"30"`

	// Limits the number of concurrent connection establishment routines. When
	// we discover peers over discv5 and are not at our MaxPeers limit we try
	// to establish a connection to a peer. However, we limit the concurrency to
	// this DialConcurrency value.
	DialConcurrency int `yaml:"dialConcurrency" default:"16"`

	// DataStreamType is the type of data stream to use for the node (e.g. kinesis, callback, etc).
	DataStreamType string `yaml:"dataStreamType" default:"callback"`

	// Subnets is the configuration for gossipsub subnets.
	Subnets map[string]*hermes.SubnetConfig `yaml:"subnets"`
}

func (*NodeConfig) AsHermesConfig

func (h *NodeConfig) AsHermesConfig() *hermes.NodeConfig

type OutputHandler added in v1.2.1

type OutputHandler interface {
	HandleDecoratedEvent(ctx context.Context, event *xatu.DecoratedEvent) error
	HandleDecoratedEvents(ctx context.Context, events []*xatu.DecoratedEvent) error
}

OutputHandler handles processed events

type Override added in v1.0.15

type Override struct {
	MetricsAddr struct {
		Enabled bool
		Value   string
	}
}

Override is the set of overrides for the cl-mimicry command.

type Processor added in v1.2.1

type Processor struct {
	// contains filtered or unexported fields
}

Processor encapsulates all event processing logic for Hermes events

func NewProcessor added in v1.2.1

func NewProcessor(
	duties DutiesProvider,
	output OutputHandler,
	metrics MetricsCollector,
	metaProvider MetaProvider,
	unifiedSharder *UnifiedSharder,
	eventCategorizer *EventCategorizer,
	wallclock *ethwallclock.EthereumBeaconChain,
	clockDrift time.Duration,
	events EventConfig,
	log logrus.FieldLogger,
) *Processor

NewProcessor creates a new Processor instance

func (*Processor) HandleHermesEvent added in v1.2.1

func (p *Processor) HandleHermesEvent(ctx context.Context, event *host.TraceEvent) error

HandleHermesEvent processes a Hermes trace event and routes it to the appropriate handler

func (*Processor) ShouldTraceMessage added in v1.2.1

func (p *Processor) ShouldTraceMessage(
	event *host.TraceEvent,
	clientMeta *xatu.ClientMeta,
	xatuEventType string,
) bool

ShouldTraceMessage determines whether a message with the given MsgID should be included in the sample based on the configured trace settings.

func (*Processor) ShouldTraceRPCMetaMessages added in v1.2.1

func (p *Processor) ShouldTraceRPCMetaMessages(
	clientMeta *xatu.ClientMeta,
	xatuEventType string,
	messages interface{},
) ([]FilteredMessageWithIndex, error)

ShouldTraceRPCMetaMessages determines which RPC meta messages should be processed based on sharding configuration

type RPCMetaMessageInfo added in v1.1.19

type RPCMetaMessageInfo struct {
	MessageID *wrapperspb.StringValue
	Topic     *wrapperspb.StringValue // Optional: gossip topic for the message
}

RPCMetaMessageInfo represents a message with its ID and optional topic for RPC meta filtering

type RPCMetaTopicInfo added in v1.1.23

type RPCMetaTopicInfo struct {
	Topic *wrapperspb.StringValue // Gossip topic for the event
}

RPCMetaTopicInfo represents a topic-based RPC meta event for filtering

type ShardableEvent added in v1.1.19

type ShardableEvent struct {
	MsgID string
	Topic string
}

ShardableEvent represents an event that can be sharded

type ShardingConfig added in v1.1.19

type ShardingConfig struct {
	// Topic-based patterns with sampling rates
	Topics map[string]*TopicShardingConfig `yaml:"topics"`

	// Events without sharding keys (Group D)
	NoShardingKeyEvents *NoShardingKeyConfig `yaml:"noShardingKeyEvents,omitempty"`
	// contains filtered or unexported fields
}

ShardingConfig represents the sharding configuration

func (*ShardingConfig) LogSummary added in v1.1.19

func (c *ShardingConfig) LogSummary() string

LogSummary returns a human-readable summary of the sharding configuration

type ShardingGroup added in v1.1.19

type ShardingGroup int

ShardingGroup represents the categorization of events based on their sharding capabilities

const (
	// GroupA events have both Topic and MsgID available for sharding
	GroupA ShardingGroup = iota
	// GroupB events have only Topic available for sharding
	GroupB
	// GroupC events have only MsgID available for sharding
	GroupC
	// GroupD events have no sharding keys available
	GroupD
)

type TopicShardingConfig added in v1.1.19

type TopicShardingConfig struct {
	// Total number of shards for this topic pattern
	TotalShards uint64 `yaml:"totalShards"`
	// Active shards for this topic pattern
	ActiveShards []uint64 `yaml:"activeShards"`
}

TopicShardingConfig defines sharding for a topic pattern

func (*TopicShardingConfig) GetSamplingRate added in v1.1.19

func (t *TopicShardingConfig) GetSamplingRate() float64

GetSamplingRate returns the sampling rate for a topic pattern

func (*TopicShardingConfig) IsFirehose added in v1.1.19

func (t *TopicShardingConfig) IsFirehose() bool

IsFirehose returns true if all shards are active
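
For example, assuming the sampling rate is the ratio of active to total shards (the import path is also assumed):

package main

import (
	"fmt"

	// Assumed import path for this package.
	"github.com/ethpandaops/xatu/pkg/clmimicry"
)

func main() {
	cfg := &clmimicry.TopicShardingConfig{
		TotalShards:  512,
		ActiveShards: []uint64{0, 1, 2, 3}, // 4 of 512 shards active
	}

	// Under the ratio assumption this prints roughly 4/512 ≈ 0.0078.
	fmt.Printf("rate=%.4f firehose=%v\n", cfg.GetSamplingRate(), cfg.IsFirehose())
}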

func (*TopicShardingConfig) UnmarshalYAML added in v1.1.19

func (t *TopicShardingConfig) UnmarshalYAML(node *yaml.Node) error

UnmarshalYAML implements custom YAML unmarshaling to support range syntax

type UnifiedSharder added in v1.1.19

type UnifiedSharder struct {
	// contains filtered or unexported fields
}

UnifiedSharder provides a single sharding decision point for all events

func NewUnifiedSharder added in v1.1.19

func NewUnifiedSharder(config *ShardingConfig, enabled bool) (*UnifiedSharder, error)

NewUnifiedSharder creates a new unified sharder

func (*UnifiedSharder) GetShardForKey added in v1.1.19

func (s *UnifiedSharder) GetShardForKey(key string, totalShards uint64) uint64

GetShardForKey returns the shard number for a given key (for testing/debugging)

func (*UnifiedSharder) ShouldProcess added in v1.1.19

func (s *UnifiedSharder) ShouldProcess(eventType xatu.Event_Name, msgID, topic string) (bool, string)

ShouldProcess determines if an event should be processed based on sharding rules

func (*UnifiedSharder) ShouldProcessBatch added in v1.1.19

func (s *UnifiedSharder) ShouldProcessBatch(eventType xatu.Event_Name, events []ShardableEvent) []bool

ShouldProcessBatch determines which events in a batch should be processed. This is used for RPC meta events, where multiple events must be evaluated at once.
