Documentation
Index
Constants
const (
	// CollectInterval is how often the collector takes a snapshot.
	CollectInterval = 60 * time.Second
)
Variables
Functions
Types
type Collector
type Collector struct {
// contains filtered or unexported fields
}
Collector scrapes the live uptime data on a fixed interval and persists samples to the Store. It is deliberately decoupled from the HTTP layer, so no allocations happen on the request path.
func NewCollector
func NewCollector(
	store *Store,
	hm *discovery.Host,
	res *resource.Resource,
	logger *ll.Logger,
) *Collector
NewCollector wires up the collector. Call Start() to begin sampling.
type HostSamples
HostSamples is the set of samples for one host (keyed by domain).
type QueryRange
type QueryRange struct {
Duration time.Duration
Resolution time.Duration // how we down-sample when returning data
Label string
}
QueryRange is parsed from the ?range= query parameter.
type Sample
type Sample struct {
Timestamp int64 `json:"ts"` // Unix seconds
RequestsSec float64 `json:"requests_sec"` // req/s since last sample
P99Ms float64 `json:"p99_ms"` // p99 latency in milliseconds
ErrorRate float64 `json:"error_rate"` // 0.0–100.0 percent
ActiveBE int `json:"active_backends"` // backends currently alive
}
Sample is one point-in-time snapshot captured every CollectInterval. It is kept intentionally small: only the fields the UI actually graphs.
type Store
type Store struct {
// contains filtered or unexported fields
}
Store is a bbolt-backed time-series store for telemetry samples. All writes are asynchronous (sent on a non-blocking channel); reads query the database directly. The hot path allocates nothing: the collector goroutine owns all writes.
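The "non-blocking channel" write can be sketched with Go's select/default idiom: if the buffered queue is full, the sample is dropped rather than stalling the collector. The tryEnqueue helper is an assumption about the unexported internals, not the Store's actual API:

```go
package main

import "fmt"

// tryEnqueue is a hypothetical sketch of a non-blocking send: it
// returns false (dropping the value) when the buffer is full, so the
// caller never blocks waiting on the writer goroutine.
func tryEnqueue(ch chan<- int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false // queue full: drop, never block
	}
}

func main() {
	ch := make(chan int, 2)
	fmt.Println(tryEnqueue(ch, 1)) // true
	fmt.Println(tryEnqueue(ch, 2)) // true
	fmt.Println(tryEnqueue(ch, 3)) // false: buffer full, sample dropped
}
```

Dropping under backpressure trades occasional lost samples for a guarantee that telemetry can never slow the component being measured.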
func (*Store) Close
Close flushes pending writes and closes the database. It is safe to call more than once; subsequent calls are no-ops.
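Idempotent Close is conventionally built on sync.Once, which guarantees the flush-and-close work runs exactly once no matter how many callers race. A minimal sketch under that assumption (the closer type and done channel are illustrative, not the Store's real fields):

```go
package main

import (
	"fmt"
	"sync"
)

// closer sketches the sync.Once pattern behind an idempotent Close.
type closer struct {
	once sync.Once
	done chan struct{}
}

// Close signals shutdown on the first call; later calls do nothing.
func (c *closer) Close() error {
	c.once.Do(func() {
		close(c.done) // tell the writer goroutine to flush and exit
	})
	return nil
}

func main() {
	c := &closer{done: make(chan struct{})}
	fmt.Println(c.Close()) // <nil>
	fmt.Println(c.Close()) // <nil>: second call is a no-op
}
```

Without the Once guard, a second Close would panic on the already-closed channel.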