Documentation
Index
- Variables
- func InitMetrics(registry *prometheus.Registry)
- type DDLJobPuller
- func NewDDLJobPuller(up *upstream.Upstream, checkpointTs uint64, cfg *config.ServerConfig, ...) DDLJobPuller
- type DDLPuller
- func NewDDLPuller(up *upstream.Upstream, startTs uint64, changefeed model.ChangeFeedID, ...) DDLPuller
- type MultiplexingPuller
- func NewMultiplexingPuller(changefeed model.ChangeFeedID, client *kv.SharedClient, ...) *MultiplexingPuller
- func (p *MultiplexingPuller) Close()
- func (p *MultiplexingPuller) Run(ctx context.Context) (err error)
- func (p *MultiplexingPuller) Stats(span tablepb.Span) Stats
- func (p *MultiplexingPuller) Subscribe(spans []tablepb.Span, startTs model.Ts, tableName string, ...)
- func (p *MultiplexingPuller) Unsubscribe(spans []tablepb.Span)
- type Stats
Constants
This section is empty.
Variables
var PullerEventCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "ticdc",
		Subsystem: "puller",
		Name:      "txn_collect_event_count",
		Help:      "The number of events received by a puller",
	},
	[]string{"namespace", "changefeed", "type"})
PullerEventCounter is the counter of the puller's received events. There are two types of events: kv (row changed event) and resolved (resolved ts event).
Functions
func InitMetrics
func InitMetrics(registry *prometheus.Registry)
InitMetrics registers all metrics in this file.
Types
type DDLJobPuller
type DDLJobPuller interface {
util.Runnable
// Output returns the DDL job entry; it contains the DDL job and the error.
Output() <-chan *model.DDLJobEntry
}
DDLJobPuller is used to pull DDL jobs from TiKV. It is used by the processor and ddlPullerImpl.
func NewDDLJobPuller
func NewDDLJobPuller(
	up *upstream.Upstream,
	checkpointTs uint64,
	cfg *config.ServerConfig,
	changefeed model.ChangeFeedID,
	schemaStorage entry.SchemaStorage,
	filter filter.Filter,
) DDLJobPuller
NewDDLJobPuller creates a new DDLJobPuller, which fetches DDL events starting from checkpointTs.
type DDLPuller
type DDLPuller interface {
// Run runs the DDLPuller
Run(ctx context.Context) error
// PopFrontDDL returns and pops the first DDL job in the internal queue
PopFrontDDL() (uint64, *timodel.Job)
// ResolvedTs returns the resolved ts of the DDLPuller
ResolvedTs() uint64
// Close closes the DDLPuller
Close()
}
DDLPuller is the interface for the DDL puller, used by the owner only.
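A sketch of the owner-side consumption pattern for PopFrontDDL, using an in-memory mock. The mock's types and its convention of returning the resolved ts with a nil job when the queue is empty are assumptions for illustration, not the actual tiflow implementation:

```go
package main

import "fmt"

// Job is a stand-in for timodel.Job, carrying only the query text.
type Job struct{ Query string }

// mockDDLPuller implements the PopFrontDDL part of the DDLPuller
// interface with an in-memory queue, for illustration only.
type mockDDLPuller struct {
	resolvedTs uint64
	queue      []*Job
	tsQueue    []uint64 // commit ts of each queued job, parallel to queue
}

// PopFrontDDL returns and pops the first DDL job in the internal queue.
// When the queue is empty it returns (resolvedTs, nil) — an assumed
// convention for this sketch.
func (p *mockDDLPuller) PopFrontDDL() (uint64, *Job) {
	if len(p.queue) == 0 {
		return p.resolvedTs, nil
	}
	ts, job := p.tsQueue[0], p.queue[0]
	p.tsQueue, p.queue = p.tsQueue[1:], p.queue[1:]
	return ts, job
}

func main() {
	p := &mockDDLPuller{
		resolvedTs: 110,
		queue:      []*Job{{Query: "CREATE TABLE t (id INT)"}},
		tsQueue:    []uint64{100},
	}
	// Owner-style loop: pop and apply jobs until only the resolved ts remains.
	for {
		ts, job := p.PopFrontDDL()
		if job == nil {
			fmt.Println("caught up to resolved ts", ts)
			break
		}
		fmt.Println("apply at", ts, ":", job.Query)
	}
}
```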
func NewDDLPuller
func NewDDLPuller(
	up *upstream.Upstream,
	startTs uint64,
	changefeed model.ChangeFeedID,
	schemaStorage entry.SchemaStorage,
	filter filter.Filter,
) DDLPuller
NewDDLPuller returns a puller for DDL events.
type MultiplexingPuller
type MultiplexingPuller struct {
CounterKv prometheus.Counter
CounterResolved prometheus.Counter
CounterResolvedDropped prometheus.Counter
// contains filtered or unexported fields
}
MultiplexingPuller works with `kv.SharedClient`. All tables share resources.
func NewMultiplexingPuller
func NewMultiplexingPuller(
	changefeed model.ChangeFeedID,
	client *kv.SharedClient,
	pdClock pdutil.Clock,
	consume func(context.Context, *model.RawKVEntry, []tablepb.Span, model.ShouldSplitKVEntry) error,
	workerCount int,
	inputChannelIndexer func(tablepb.Span, int) int,
	resolvedTsAdvancerCount int,
) *MultiplexingPuller
NewMultiplexingPuller creates a MultiplexingPuller. `workerCount` specifies how many workers will be spawned to handle events from the kv client. `resolvedTsAdvancerCount` specifies how many workers will be spawned to handle resolved ts events.
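The `inputChannelIndexer` parameter maps a span to a worker index in `[0, workerCount)`. One plausible implementation (an assumption for this sketch, not the actual tiflow code) hashes the span's start key, so that events for the same span always land on the same worker and per-span ordering is preserved:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Span is a stand-in for tablepb.Span: a key range identified here by
// its start key only.
type Span struct {
	StartKey []byte
}

// spanIndexer hashes the span's start key with FNV-1a and reduces it
// modulo workerCount, giving a stable worker index per span.
func spanIndexer(span Span, workerCount int) int {
	h := fnv.New32a()
	h.Write(span.StartKey)
	return int(h.Sum32() % uint32(workerCount))
}

func main() {
	spans := []Span{{StartKey: []byte("t_100")}, {StartKey: []byte("t_101")}}
	for _, s := range spans {
		i := spanIndexer(s, 4)
		// The same span always maps to the same worker index.
		fmt.Println(string(s.StartKey), "->", i, spanIndexer(s, 4) == i)
	}
}
```

Any deterministic function of the span works; hashing simply spreads unrelated spans across workers without shared state.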
func (*MultiplexingPuller) Run
func (p *MultiplexingPuller) Run(ctx context.Context) (err error)
Run the puller.
func (*MultiplexingPuller) Stats
func (p *MultiplexingPuller) Stats(span tablepb.Span) Stats
Stats returns the Stats for the given span.
func (*MultiplexingPuller) Subscribe
func (p *MultiplexingPuller) Subscribe(
	spans []tablepb.Span,
	startTs model.Ts,
	tableName string,
	shouldSplitKVEntry model.ShouldSplitKVEntry,
)
Subscribe some spans. They will share the same resolved timestamp progress.
func (*MultiplexingPuller) Unsubscribe
func (p *MultiplexingPuller) Unsubscribe(spans []tablepb.Span)
Unsubscribe some spans, which must have been subscribed together in one Subscribe call.
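Because Unsubscribe must receive exactly the span group that was passed to one Subscribe call, callers typically remember which spans were subscribed together. A minimal sketch of that bookkeeping, with a stand-in Span type and hypothetical table-ID keying (the real caller may group spans differently):

```go
package main

import "fmt"

// Span is a stand-in for tablepb.Span, reduced to a table ID.
type Span struct{ TableID int64 }

// subscriptionTracker records which spans were subscribed together, so a
// later unsubscribe can hand back exactly the same group.
type subscriptionTracker struct {
	groups map[int64][]Span // keyed by table ID (an assumption for the sketch)
}

func newTracker() *subscriptionTracker {
	return &subscriptionTracker{groups: make(map[int64][]Span)}
}

func (t *subscriptionTracker) subscribe(tableID int64, spans []Span) {
	t.groups[tableID] = spans
	// Here the real caller would invoke puller.Subscribe(spans, startTs, ...).
}

func (t *subscriptionTracker) unsubscribe(tableID int64) []Span {
	spans := t.groups[tableID]
	delete(t.groups, tableID)
	// Here the real caller would invoke puller.Unsubscribe(spans).
	return spans
}

func main() {
	tr := newTracker()
	group := []Span{{TableID: 42}, {TableID: 42}}
	tr.subscribe(42, group)
	fmt.Println(len(tr.unsubscribe(42)))
}
```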
Directories
| Path | Synopsis |
|---|---|
| memorysorter | Package memorysorter is an in-memory event sorter implementation. |