Published: May 8, 2026 License: Apache-2.0


Murmur

Lambda-architecture-aware streaming aggregation framework for Go.

Murmur is a spiritual successor to Twitter's Summingbird, built for Go-shop AWS deployments in 2026. One pipeline definition, three execution modes (live stream, snapshot bootstrap, archive replay), monoid-typed state in DynamoDB with optional Valkey acceleration, and a generic gRPC query layer that merges across windows.

Status

Pre-1.0, experimental. The architecture is built and exercised end-to-end against a docker-compose stack. Several rough edges are tracked openly in STABILITY.md — most notably error-handling gaps and the gRPC Get/GetWindow/etc. surface being a generic adapter today rather than per-pipeline codegen.

Feature Status
  • Pipeline DSL with structural monoids
  • Live mode: Kafka source (franz-go)
  • Live mode: Kinesis source ⚠️ single-instance only, no checkpointing — KCL v3 multi-instance is on the roadmap
  • Bootstrap mode: Mongo SnapshotSource + handoff token
  • Replay mode: S3 / MinIO JSON-Lines
  • State: DynamoDB Int64SumStore (atomic ADD) + BytesStore (CAS)
  • Cache: Valkey Int64Cache (write-through, INCRBY)
  • Monoids: Sum, Count, First, Last, Set
  • Monoids: Min, Max (Bounded[V])
  • Sketches: HyperLogLog, TopK (Misra-Gries), Bloom
  • Windowed aggregations + sliding-window queries
  • Generic gRPC query service (Get / GetMany / GetWindow / GetRange) ✅ — typed-per-pipeline codegen is on the roadmap
  • Atomic state-table swap (alias version pointer)
  • Spark Connect batch executor (user-supplied SQL) ✅ validated locally against apache/spark:4.0.1
  • Lambda mode (batch view ⊕ realtime delta merge) ✅ via pkg/query.LambdaQuery
  • Decayed-value monoid (exponential decay) ✅ via pkg/monoid/compose.DecayedSum
  • Minute / hour / daily windowed buckets
  • Web UI (dark mode, pipeline DAG, live metrics, query console) — cmd/murmur-ui
  • Admin control plane — Connect-RPC, single port speaks gRPC + gRPC-Web + Connect/HTTP-JSON; proto-defined contract for any-language clients — pkg/admin, proto/murmur/admin/v1/admin.proto
  • Metrics recorder hook in streaming runtime — pkg/metrics + streaming.WithMetrics
  • DX facade (Counter / UniqueCount / TopN presets) — pkg/murmur
  • Terraform pipeline-counter module
  • Worked example: page-view-counters (worker + query binaries)
  • Per-pipeline gRPC codegen (typed responses) 🛣 roadmap
  • Valkey-native HLL/Bloom acceleration 🛣 roadmap
  • KCL-v3 Kinesis source 🛣 roadmap

Limitations to read before adopting

  • replace directive only for Spark Connect. The root github.com/gallowaysoftware/murmur module no longer depends on github.com/apache/spark-connect-go; pkg/exec/batch/sparkconnect carries its own go.mod. Consumers who don't use Spark Connect (95% of users) get a clean go.mod. Consumers who DO use the sparkconnect submodule must mirror its replace github.com/apache/spark-connect-go => github.com/pequalsnp/spark-connect-go … line in their own go.mod — Go does not propagate replace directives transitively.
  • At-least-once with optional dedup. Pass streaming.WithDedup(d) (where d is a pkg/state/dynamodb.Deduper) to make replay-after-crash idempotent for any monoid. Without it, the streaming runtime is at-least-once with no per-EventID dedup — fine for idempotent monoids (Set, Min, Max, Bloom) but double-counts non-idempotent ones (Sum, HLL, TopK).
  • Single-goroutine streaming runtime. Phase-1 streaming processes records sequentially per worker. Throughput ceiling is roughly 5–10 k events/s/worker against DDB-local depending on item size. Scale horizontally with Kafka partitions until per-partition parallelism lands.
  • Min / Max need lifted inputs to satisfy the identity law. Raw min/max over an unbounded domain has no identity element, so inputs are lifted via core.NewBounded(v); the monoid value type is core.Bounded[V] and Identity is the unset wrapper.
  • CORS is closed by default. Pass admin.WithAllowedOrigins("https://dashboard.example", …) (or cmd/murmur-ui --allow-origin=…) to open it up. The admin API is read-only but still leaks pipeline metadata, so don't expose it to the public internet without auth in front.
  • CI runs on every PR. gofmt / go vet / unit tests with -race / golangci-lint / web tsc + eslint + vite build. Dependabot is wired up for Go, npm, and Actions.

Quick taste

import (
    "context"
    "time"

    "github.com/gallowaysoftware/murmur/pkg/murmur"
    mkafka "github.com/gallowaysoftware/murmur/pkg/source/kafka"
    mddb "github.com/gallowaysoftware/murmur/pkg/state/dynamodb"
)

type PageView struct {
    PageID string `json:"page_id"`
    UserID string `json:"user_id"`
}

func main() {
    src, err := mkafka.NewSource(mkafka.Config[PageView]{
        Brokers:       []string{"localhost:9092"},
        Topic:         "page_views",
        ConsumerGroup: "page_views_worker",
        Decode:        mkafka.JSONDecoder[PageView](),
    })
    if err != nil {
        panic(err)
    }
    defer src.Close()

    // ddbClient: an initialized DynamoDB client (e.g. *dynamodb.Client from aws-sdk-go-v2).
    store := mddb.NewInt64SumStore(ddbClient, "page_views")
    defer store.Close()

    pipe := murmur.Counter[PageView]("page_views").
        From(src).
        KeyBy(func(e PageView) string { return e.PageID }).
        Daily(90 * 24 * time.Hour).
        StoreIn(store).
        Build()

    ctx := context.Background()
    if rc := murmur.RunStreamingWorker(ctx, pipe); rc != 0 {
        panic("worker exited with non-zero code")
    }
}

For the runnable version, see examples/page-view-counters/.

The generic gRPC service exposes Get(entity), GetWindow(entity, duration), GetMany(entities), and GetRange(entity, start, end) — wire it up with pkg/query/grpc.NewServer; proto definitions in proto/murmur/v1/query.proto. Per-pipeline typed responses (today everything is bytes) are tracked on the roadmap.

Architecture

The full design is documented in doc/architecture.md. For the canonical "how do I integrate Murmur with text search" question, see doc/search-integration.md — three patterns (query-time rescore, bucketed indexing, snapshot+delta), their tradeoffs, and a reference DDB-Streams Lambda projector.

The headline ideas:

  1. Structural monoids. Each well-known monoid (Sum, HLL, TopK, Bloom, …) carries a Kind that backend executors dispatch on — DDB picks atomic ADD vs CAS, Spark picks the right SQL aggregation, Valkey picks PFADD vs INCRBY. Custom monoids work as opaque Go closures on Go-only execution backends.
  2. Three execution modes, one DSL. A pipeline definition is execution-mode-agnostic. The same monoid Combine runs from a Kafka consumer (live), a Mongo collection scan (bootstrap), or an S3 JSON-Lines archive (replay).
  3. DDB is source of truth, Valkey is a cache. State that's lost in Valkey is repopulatable from DDB. The cache is never trusted as ground truth.
  4. Windowed monoids first-class. windowed.Daily(retention) adds a time-bucket dimension to state keys; queries assemble sliding windows by merging the N most-recent buckets via the monoid Combine.
  5. No Beam, no Flink-in-Go. Beam's Go SDK is unmaintained and its Spark runner is batch-only. Murmur is not a streaming engine — it's a framework that runs your monoid Combine on ECS Fargate workers reading from Kinesis/Kafka, and dispatches batch through Spark Connect.

doc/architecture.md has the full version. Short version:

  • Apache Beam Go SDK is unmaintained as of 2.32 and the Spark runner is batch-only — Beam streaming on EMR is not actually possible.
  • Apache Flink (incl. Amazon Managed Service for Apache Flink) is mature but JVM-only. JVM tax for Go shops, and no auto-generated query layer.
  • Goka is Kafka-only, no batch story, no query layer, small community.

Murmur fills the gap with: unified Go DSL, structural monoids that dispatch to multiple backends, three execution modes, time-windowed aggregations, and a generic gRPC service that does the merge.

Run locally

make compose-up   # bring up kafka, dynamodb-local, valkey, mongo, minio, spark-connect
                  # plus rs.initiate for Mongo (idempotent)
make seed-ddb     # create the page_views DDB table the example reads from
make test-unit    # fast unit tests, no infra
make test-integration  # full E2E suite against the docker-compose stack
make ui           # build the web UI and run cmd/murmur-ui --demo on :8080

make help lists every target.

The end-to-end tests in test/e2e/ exercise:

  • Counter pipeline: Kafka → Sum → DDB (counter_test.go)
  • HLL pipeline: Kafka → HLL → DDB BytesStore CAS (hll_test.go)
  • Windowed counters with Last1/2/3/7/10/30Days queries (windowed_test.go)
  • Mongo bootstrap with Change Stream resume token (mongo_bootstrap_test.go)
  • DDB ParallelScan bootstrap with re-run idempotency under DDB-backed Deduper (ddb_bootstrap_test.go)
  • S3 replay into a shadow table (s3_replay_test.go)
  • Spark Connect batch SUM aggregation → DDB (spark_connect_test.go)

Production-readiness packages

Beyond the core pipeline DSL, several packages exist to make Murmur deployable:

  • pkg/exec/lambda/{kinesis,dynamodbstreams,sqs} — three Lambda runtimes for the AWS-native event sources, all sharing the same retry / dedup / BatchItemFailures contract via pkg/exec/processor.
  • pkg/source/snapshot/{mongo,dynamodb,jsonl,s3} — bootstrap sources for the four common shapes: Mongo collections, DynamoDB ParallelScan, raw JSON Lines, and S3-prefix-scan-of-JSON-Lines for partitioned archives.
  • pkg/state/{dynamodb,valkey} — DDB as source-of-truth (Int64SumStore / BytesStore + Deduper), Valkey as cache (Int64Cache / BytesCache + warmup helpers in pkg/query).
  • pkg/query/grpc — Connect-RPC server speaking gRPC + gRPC-Web + Connect/HTTP-JSON on one port. Singleflight coalescing + fresh_read flag + per-RPC metrics + batched windowed reads (GetWindowMany / GetRangeMany).
  • pkg/projection — bucket functions (LogBucket / LinearBucket / ManualBucket) + HysteresisBucket for change-data-capture into search indices.
  • pkg/observability/autoscale — Signal → Emitter loop for publishing scaling-signal metrics. Reference CloudWatch emitter for ECS Fargate target tracking on Kafka consumer lag / Kinesis iterator-age / events-per-second.
  • pkg/state.NewInstrumented — decorator for any Store[V] / Cache[V] that adds metrics.Recorder hooks (per-op latency + errors). Zero-overhead when the recorder is nil.
  • pkg/murmur — facade with the common-case presets (Counter, UniqueCount, TopN, Trending) plus RunStreamingWorker and the Lambda-side KinesisHandler / DynamoDBStreamsHandler / SQSHandler / MustHandler wrappers.

Worked examples

  • examples/page-view-counters/ — runnable two-binary pipeline (cmd/worker + cmd/query), a Dockerfile producing a multi-binary distroless image, and the Terraform deployment via deploy/terraform/modules/pipeline-counter/.
  • examples/mongo-cdc-orderstats/ — Mongo collection bootstrap → Kafka CDC live, with upstream-id dedup so re-deliveries fold idempotently.
  • examples/recently-interacted-topk/ — single Top-N pipeline fed by two sources at once: Kinesis (consumed via an AWS Lambda trigger) plus Kafka (consumed by a long-running ECS worker). Both binaries write through the same DDB row; the Misra-Gries Combine produces a unified ranking across channels.
  • examples/search-projector/ — runnable Pattern B from doc/search-integration.md: a Lambda that tails Murmur's counter table via DDB Streams and projects bucket transitions into an OpenSearch index, reducing search-side index write rate from per-event to per-order-of-magnitude (~6 reindexes for a 0→1M counter rise vs 1M).
  • examples/search-rerank/ — runnable Pattern A from the same doc: an HTTP search service that does two-stage retrieval (OpenSearch recall + Murmur counter rerank). Pairs with the search-projector to form the canonical "filter on bucket + rank by live counters" shape.
  • examples/typed-wrapper/ — count-core-shaped reference for the typed-wrapper pattern: how application services expose Murmur counter pipelines through their own typed Connect-RPC API instead of the generic Value{bytes} shape. Uses pkg/query/typed as the building block.

Web UI and admin API

make ui   # builds the UI, builds the binary, runs --demo on :8080
# open http://localhost:8080

--demo registers three synthetic pipelines and ticks fake metrics so the dashboard, DAG, and query console have data to show. Real workers register via pkg/admin.Server.Register.

The bundled UI is one client of the admin API; anyone can sub in their own. The contract lives in proto/murmur/admin/v1/admin.proto and the server uses Connect-RPC, so a single port speaks gRPC, gRPC-Web, and Connect (HTTP+JSON) — pick whichever your client supports. Generate bindings in your language of choice with buf generate. Hit it from curl if you want:

curl -X POST http://localhost:8080/api/murmur.admin.v1.AdminService/ListPipelines \
    -H 'Content-Type: application/json' -d '{}'
# → {"pipelines":[{"name":"page_views","monoidKind":"sum",...}, ...]}

License

Apache 2.0.

Directories

Path Synopsis
cmd
murmur-codegen-typed command
murmur-codegen-typed generates a typed Connect-RPC service from a pipeline-spec YAML.
murmur-ui command
murmur-ui is a single-binary admin server + embedded web UI for Murmur pipelines.
examples
mongo-cdc-orderstats
Package orderstats defines a Murmur pipeline that aggregates per-customer order totals from a Mongo collection (Bootstrap mode) and a Kafka CDC stream (Live mode).
mongo-cdc-orderstats/cmd/bootstrap command
Bootstrap binary for the mongo-cdc-orderstats example.
mongo-cdc-orderstats/cmd/worker command
Live streaming worker for the mongo-cdc-orderstats example.
page-view-counters
Package pageviews defines the page-view-counters Murmur pipeline.
page-view-counters/cmd/query command
Connect-RPC query server for the page-view-counters example.
page-view-counters/cmd/worker command
Streaming worker for the page-view-counters example.
recently-interacted-topk
Package recentlyinteracted defines a TopK "recently interacted" pipeline fed by TWO sources at once: a Segment-style Kinesis stream (consumed via an AWS Lambda trigger) and an internal Kafka topic (consumed by a long-lived ECS streaming worker).
recently-interacted-topk/cmd/lambda command
Lambda binary for the recently-interacted-topk example.
recently-interacted-topk/cmd/query command
Connect-RPC query server for the recently-interacted-topk example.
recently-interacted-topk/cmd/worker command
Streaming worker binary for the recently-interacted-topk example.
search-projector
Package projector is the runnable Pattern B reference implementation from doc/search-integration.md.
search-projector/cmd/projector command
Lambda binary for the search-projector example.
search-rerank
Package rerank is the runnable Pattern A reference implementation from doc/search-integration.md.
search-rerank/cmd/server command
HTTP server fronting the rerank service.
typed-wrapper
Package wrapper is a count-core-shaped example showing how to expose Murmur counter pipelines through your application's OWN typed Connect-RPC service rather than the generic Value{bytes} shape.
pkg
admin
Package admin is Murmur's read-only control plane.
exec/bootstrap
Package bootstrap drives a Murmur pipeline in Bootstrap mode: it reads from a SnapshotSource (Mongo collection scan, DynamoDB ParallelScan, etc.), applies the pipeline's key/value extractors and monoid Combine, and writes to the primary state store — populating initial state when the live event stream lacks full history.
exec/lambda/dynamodbstreams
Package dynamodbstreams hosts a Murmur pipeline behind an AWS Lambda DynamoDB Streams trigger.
exec/lambda/kinesis
Package kinesis hosts a Murmur pipeline behind an AWS Lambda Kinesis trigger.
exec/lambda/sqs
Package sqs hosts a Murmur pipeline behind an AWS Lambda SQS trigger.
exec/processor
Package processor is the shared per-record processing core that every Murmur runtime delegates to.
exec/replay
Package replay drives a Murmur pipeline in Replay mode: it consumes historical events from a replay.Driver (typically S3-archived Firehose/Kafka-Connect output, or a Kafka offset range under tiered storage) and applies the same monoid Combine the live runtime would.
exec/streaming
Package streaming runs a Murmur Pipeline in Live mode: it reads records from the pipeline's Source, applies the user's key/value extractors and monoid Combine, writes to the primary state store (and optionally a cache), and Acks the source record.
metrics
Package metrics is Murmur's observability surface.
monoid
Package monoid defines the structural-monoid abstraction at the heart of Murmur.
monoid/compose
Package compose provides higher-order monoids that combine simpler monoids:
monoid/core
Package core provides the standard numeric and basic monoids: Sum, Count, Min, Max, First, Last, Set.
monoid/monoidlaws
Package monoidlaws provides reusable test helpers that verify a monoid.Monoid[V] satisfies the algebraic laws (associativity and identity) required for safe use in Murmur pipelines.
monoid/sketch
Package sketch defines probabilistic-aggregation monoids: HLL (cardinality), TopK (heavy hitters), and Bloom (set membership).
monoid/sketch/bloom
Package bloom is a Bloom filter implementation of monoid.Monoid[[]byte], suitable for approximate set-membership aggregations.
monoid/sketch/hll
Package hll is a HyperLogLog implementation of monoid.Monoid[[]byte], suitable for approximate-cardinality aggregations.
monoid/sketch/topk
Package topk is a Misra-Gries top-K heavy-hitters implementation of monoid.Monoid[[]byte].
monoid/windowed
Package windowed expresses time-bucketed aggregations on top of any structural monoid.
murmur
Package murmur is a thin facade over pkg/pipeline for the most common pipeline shapes.
observability/autoscale
Package autoscale emits scaling signals from a Murmur worker so an upstream autoscaler (ECS Fargate target tracking, Kubernetes HPA via the CloudWatch adapter, etc.) can adjust replica count to load.
pipeline
Package pipeline is the Murmur DSL surface — the entry point users touch when defining an aggregation.
projection
Package projection provides bucket functions and transition detection for projecting Murmur counter state into search indices, dashboards, or any external system that benefits from a coarse, slow-moving view of a fast-moving counter.
query
Package query provides the read-side merge logic Murmur uses to assemble sliding-window query results from per-bucket state.
query/grpc
Package grpc serves Murmur's read-side query layer for an application data plane.
query/typed
Package typed provides typed wrappers around the generic Murmur QueryService — the building block for application services that want to expose typed protos to their callers rather than the generic Value{bytes} shape.
replay
Package replay defines the ReplayDriver contract for Murmur's backfill mode.
replay/s3
Package s3 provides a replay.Driver that reads JSON-Lines event archives from S3 (or any S3-compatible store like MinIO) and feeds the records into the pipeline.
source
Package source defines the live-event Source contract that Murmur pipelines read from.
source/kafka
Package kafka provides a Kafka-backed implementation of source.Source via the franz-go client.
source/kinesis
Package kinesis provides a Kinesis Data Streams source via aws-sdk-go-v2.
source/snapshot
Package snapshot defines the SnapshotSource contract used by Murmur's Bootstrap execution mode.
source/snapshot/dynamodb
Package dynamodb provides a snapshot.Source backed by DynamoDB ParallelScan, suitable for bootstrapping a Murmur pipeline from a DDB-resident OLTP table.
source/snapshot/jsonl
Package jsonl provides a snapshot.Source that reads JSON Lines from any io.Reader — local files, S3 objects, gzip streams, etc.
source/snapshot/mongo
Package mongo provides a Mongo-backed implementation of snapshot.Source for Murmur's Bootstrap execution mode.
source/snapshot/s3
Package s3 provides a snapshot.Source that scans every JSON-Lines object under an S3 prefix and emits records for bootstrap.
state
Package state defines the StateStore abstraction Murmur pipelines write aggregations to.
state/dynamodb
Package dynamodb provides a DynamoDB-backed implementation of state.Store, the source-of-truth state for Murmur pipelines.
state/valkey
Package valkey provides Valkey-backed cache implementations of state.Cache.
swap
Package swap manages atomic state-table version pointers for backfill cutover.
proto
