Documentation ¶
Index ¶
- Variables
- func NewCoalesce(columns []*columnWithType) arrow.Array
- func NewScalar(value types.Literal, rows int) arrow.Array
- type BinaryFunction
- type BinaryFunctionRegistry
- type BufferedPipeline
- type Config
- type Context
- type ContributingTimeRangeChangedHandler
- type ContributingTimeRangeChangedNotifier
- type GenericPipeline
- type Merge
- type Pipeline
- func NewBatchingPipeline(inner Pipeline, batchSize int64) Pipeline
- func NewObservedPipeline(name string, attrs []attribute.KeyValue, inner Pipeline) Pipeline
- func NewProjectPipeline(input Pipeline, proj *physical.Projection, evaluator *expressionEvaluator) (Pipeline, error)
- func Run(ctx context.Context, cfg Config, plan *physical.Plan, logger log.Logger) Pipeline
- func TranslateEOF(pipeline Pipeline) Pipeline
- func Unwrap(p Pipeline) Pipeline
- type RequestStreamFilterer
- type StreamFilterer
- type UnaryFunc
- type UnaryFunction
- type UnaryFunctionRegistry
- type VariadicFunction
- type VariadicFunctionFunc
- type VariadicFunctionRegistry
- type WrappedPipeline
Constants ¶
This section is empty.
Variables ¶
var (
EOF = errors.New("pipeline exhausted") //nolint:revive,staticcheck
)
var ErrSeriesLimitExceeded = errors.New("maximum number of series limit exceeded")
Functions ¶
func NewCoalesce ¶ added in v3.7.0
func NewCoalesce(columns []*columnWithType) arrow.Array
Types ¶
type BinaryFunction ¶
type BinaryFunctionRegistry ¶
type BufferedPipeline ¶
type BufferedPipeline struct {
// contains filtered or unexported fields
}
BufferedPipeline is a pipeline implementation that reads from a fixed set of Arrow records. It implements the Pipeline interface and serves as a simple source for testing and data injection.
func NewBufferedPipeline ¶
func NewBufferedPipeline(records ...arrow.RecordBatch) *BufferedPipeline
NewBufferedPipeline creates a new BufferedPipeline from a set of Arrow records. The pipeline will return these records in sequence.
func (*BufferedPipeline) Close ¶
func (p *BufferedPipeline) Close()
Close implements Pipeline. It releases all unreturned records.
func (*BufferedPipeline) Open ¶ added in v3.7.0
func (p *BufferedPipeline) Open(_ context.Context) error
Open implements Pipeline.
func (*BufferedPipeline) Read ¶
func (p *BufferedPipeline) Read(_ context.Context) (arrow.RecordBatch, error)
Read implements Pipeline. It advances to the next record and returns EOF when all records have been read.
type Config ¶
type Config struct {
BatchSize int64
Bucket objstore.Bucket
Metastore metastore.Metastore
PrefetchBytes int64
MergePrefetchCount int
// GetExternalInputs is an optional function called for each node in the
// plan. If GetExternalInputs returns a non-nil slice of Pipelines, they
// will be used as inputs to the pipeline of node.
GetExternalInputs func(ctx context.Context, node physical.Node) []Pipeline
// StreamFilterer is an optional filterer that can filter streams based on their labels.
// When set, streams are filtered before scanning.
StreamFilterer RequestStreamFilterer `yaml:"-"`
}
type Context ¶
type Context struct {
// contains filtered or unexported fields
}
Context is the execution context
type ContributingTimeRangeChangedHandler ¶ added in v3.7.0
The contributing time range is anything less than `ts` if `lessThan` is true, or greater than `ts` otherwise.
type ContributingTimeRangeChangedNotifier ¶ added in v3.7.0
type ContributingTimeRangeChangedNotifier interface {
// SubscribeToTimeRangeChanges adds a callback function to a list of listeners.
SubscribeToTimeRangeChanges(callback ContributingTimeRangeChangedHandler)
}
ContributingTimeRangeChangedNotifier is an optional interface that pipelines can implement to notify others that they are interested only in inputs from some specific time range.
type GenericPipeline ¶
type GenericPipeline struct {
// contains filtered or unexported fields
}
func NewFilterPipeline ¶
func NewFilterPipeline(filter *physical.Filter, input Pipeline, evaluator *expressionEvaluator) *GenericPipeline
func NewLimitPipeline ¶
func NewLimitPipeline(input Pipeline, skip, fetch uint32) *GenericPipeline
func (*GenericPipeline) Open ¶ added in v3.7.0
func (p *GenericPipeline) Open(ctx context.Context) error
Open implements Pipeline.
func (*GenericPipeline) Read ¶
func (p *GenericPipeline) Read(ctx context.Context) (arrow.RecordBatch, error)
Read implements Pipeline.
type Merge ¶
type Merge struct {
// contains filtered or unexported fields
}
Merge is a pipeline that takes N inputs and sequentially consumes each one of them. It completely exhausts an input before moving to the next one.
type Pipeline ¶
type Pipeline interface {
// Open initializes the pipeline resources and must be called before [Read].
// The implementation must be safe to call multiple times.
Open(context.Context) error
// Read collects the next value ([arrow.RecordBatch]) from the pipeline and returns it to the caller.
// It returns an error if reading fails, or EOF once the pipeline is exhausted.
Read(context.Context) (arrow.RecordBatch, error)
// Close closes the resources of the pipeline.
// The implementation must close all of the pipeline's inputs and must be safe to call multiple times.
Close()
}
Pipeline represents a data processing pipeline that can read Arrow records. It provides methods to open the pipeline, read data, and close resources.
func NewBatchingPipeline ¶ added in v3.7.0
func NewBatchingPipeline(inner Pipeline, batchSize int64) Pipeline
NewBatchingPipeline wraps inner so that each Read call returns a single aggregated batch of up to batchSize rows. When batchSize <= 0, records are passed through unchanged.
func NewObservedPipeline ¶ added in v3.7.0
NewObservedPipeline wraps a pipeline to automatically collect common statistics. The xcap span and region are not created here; they are deferred to the first [Read] call so the span inherits its parent from the caller's context.
func NewProjectPipeline ¶
func NewProjectPipeline(input Pipeline, proj *physical.Projection, evaluator *expressionEvaluator) (Pipeline, error)
func TranslateEOF ¶ added in v3.7.0
func TranslateEOF(pipeline Pipeline) Pipeline
type RequestStreamFilterer ¶ added in v3.7.0
type RequestStreamFilterer interface {
ForRequest(ctx context.Context) StreamFilterer
}
RequestStreamFilterer creates a StreamFilterer for a given request context.
type StreamFilterer ¶ added in v3.7.0
type StreamFilterer interface {
// ShouldFilter returns true if the stream should be filtered out.
ShouldFilter(labels labels.Labels) bool
}
StreamFilterer filters streams based on their labels.
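A StreamFilterer implementation can be sketched as follows. The `Labels` map and the `dropByNamespace` filterer are hypothetical stand-ins (the real interface takes Prometheus `labels.Labels`); the sketch only shows the ShouldFilter contract, where returning true drops the stream.

```go
package main

import "fmt"

// Labels is a stand-in for labels.Labels in this sketch.
type Labels map[string]string

// StreamFilterer mirrors the documented interface: ShouldFilter returns
// true if the stream should be filtered out.
type StreamFilterer interface {
	ShouldFilter(labels Labels) bool
}

// dropByNamespace is a hypothetical filterer that drops streams whose
// "namespace" label matches a configured value.
type dropByNamespace struct{ namespace string }

func (f dropByNamespace) ShouldFilter(ls Labels) bool {
	return ls["namespace"] == f.namespace
}

func main() {
	var f StreamFilterer = dropByNamespace{namespace: "dev"}
	fmt.Println(f.ShouldFilter(Labels{"namespace": "dev"}))  // prints true
	fmt.Println(f.ShouldFilter(Labels{"namespace": "prod"})) // prints false
}
```

In the real package, a RequestStreamFilterer would construct one such filterer per request via ForRequest, so per-request state (e.g. limits) can live on the returned value.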
type UnaryFunction ¶
type UnaryFunctionRegistry ¶
type VariadicFunction ¶ added in v3.7.0
type VariadicFunctionFunc ¶ added in v3.7.0
type VariadicFunctionRegistry ¶ added in v3.7.0
type VariadicFunctionRegistry interface {
GetForSignature(types.VariadicOp) (VariadicFunction, error)
// contains filtered or unexported methods
}
type WrappedPipeline ¶ added in v3.7.0
type WrappedPipeline interface {
Pipeline
// Unwrap returns the inner pipeline. Implementations must always return the
// same non-nil value representing the inner pipeline.
Unwrap() Pipeline
}
WrappedPipeline represents a pipeline that wraps another pipeline.
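The Unwrap helper listed in the index (`func Unwrap(p Pipeline) Pipeline`) presumably walks wrapping layers until it reaches a pipeline that does not implement WrappedPipeline. A minimal sketch of that walk, with simplified stand-in interfaces:

```go
package main

import "fmt"

// Pipeline and WrappedPipeline are simplified stand-ins for the
// package's interfaces; only Name is added here for demonstration.
type Pipeline interface{ Name() string }

type WrappedPipeline interface {
	Pipeline
	// Unwrap returns the inner pipeline; it must always return the
	// same non-nil value.
	Unwrap() Pipeline
}

// base is a leaf pipeline with nothing to unwrap.
type base struct{ name string }

func (b base) Name() string { return b.name }

// wrapper decorates an inner pipeline (compare NewObservedPipeline
// or NewBatchingPipeline, which wrap an inner Pipeline).
type wrapper struct {
	name  string
	inner Pipeline
}

func (w wrapper) Name() string     { return w.name }
func (w wrapper) Unwrap() Pipeline { return w.inner }

// unwrap walks WrappedPipeline layers to the innermost pipeline.
func unwrap(p Pipeline) Pipeline {
	for {
		w, ok := p.(WrappedPipeline)
		if !ok {
			return p
		}
		p = w.Unwrap()
	}
}

func main() {
	p := wrapper{name: "observed", inner: wrapper{name: "batching", inner: base{name: "scan"}}}
	fmt.Println(unwrap(p).Name()) // prints "scan"
}
```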
Source Files ¶
- aggregator.go
- arrow_compare.go
- batching.go
- cast.go
- column.go
- compat.go
- dataobjscan.go
- dataobjscan_predicate.go
- executor.go
- expressions.go
- filter.go
- functions.go
- limit.go
- merge.go
- metastore.go
- parse.go
- parse_json.go
- parse_logfmt.go
- parse_regexp.go
- pipeline.go
- pipeline_utils.go
- project.go
- range_aggregation.go
- schema.go
- stats.go
- stream_injector.go
- streams_view.go
- topk.go
- topk_batch.go
- translate_errors.go
- util.go
- vector_aggregate.go