Documentation ¶
Index ¶
- Variables
- func CheckFatal(location string, err error)
- func Event() log.Logger
- func GetFirstAddressOf(names []string) (string, error)
- func HashFP(fp model.Fingerprint) uint32
- func InitEvents(freq int)
- func InitLogger(cfg *server.Config)
- func LabelsToMetric(ls labels.Labels) model.Metric
- func Max64(a, b int64) int64
- func MergeNSampleSets(sampleSets ...[]model.SamplePair) []model.SamplePair
- func MergeSampleSets(a, b []model.SamplePair) []model.SamplePair
- func Min(a, b int) int
- func Min64(a, b int64) int64
- func NewPrometheusLogger(l logging.Level) (log.Logger, error)
- func ParseProtoReader(ctx context.Context, reader io.Reader, req proto.Message, ...) ([]byte, error)
- func SerializeProtoResponse(w http.ResponseWriter, resp proto.Message, compression CompressionType) error
- func SplitFiltersAndMatchers(allMatchers []*labels.Matcher) (filters, matchers []*labels.Matcher)
- func WithContext(ctx context.Context, l log.Logger) log.Logger
- func WithTraceID(traceID string, l log.Logger) log.Logger
- func WithUserID(userID string, l log.Logger) log.Logger
- func WriteJSONResponse(w http.ResponseWriter, v interface{})
- type Backoff
- type BackoffConfig
- type CompressionType
- type HashBucketHistogram
- type HashBucketHistogramOpts
- type Op
- type PriorityQueue
- type PrometheusLogger
- type SampleStreamIterator
Constants ¶
This section is empty.
Variables ¶
var (
	// Logger is a shared go-kit logger.
	// TODO: Change all components to take a non-global logger via their constructors.
	Logger = log.NewNopLogger()
)
Functions ¶
func CheckFatal ¶
func CheckFatal(location string, err error)
CheckFatal prints an error and exits with error code 1 if err is non-nil.
func GetFirstAddressOf ¶
func GetFirstAddressOf(names []string) (string, error)
GetFirstAddressOf returns the first IPv4 address of the supplied interface names.
func HashFP ¶
func HashFP(fp model.Fingerprint) uint32
HashFP simply moves entropy from the most significant 48 bits of the fingerprint into the least significant 16 bits (by XORing) so that a simple MOD on the result can be used to pick a mutex while still making use of changes in more significant bits of the fingerprint. (The fast fingerprinting function we use is prone to only change a few bits for similar metrics. We really want to make use of every change in the fingerprint to vary mutex selection.)
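To make the mutex-selection use case concrete, here is a minimal sketch of a fingerprint-keyed lock pool; the fpLocker type, the pool size, and the util import path are illustrative assumptions, not part of this package.

package example

import (
	"sync"

	"github.com/prometheus/common/model"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

const numLocks = 256 // hypothetical pool size

// fpLocker is a hypothetical pool of mutexes keyed by fingerprint.
type fpLocker struct {
	locks [numLocks]sync.Mutex
}

// Lock picks a mutex with a simple MOD on HashFP, so that changes in the
// high bits of the fingerprint still influence which mutex is chosen.
func (l *fpLocker) Lock(fp model.Fingerprint) {
	l.locks[util.HashFP(fp)%numLocks].Lock()
}

// Unlock releases the mutex picked by Lock for the same fingerprint.
func (l *fpLocker) Unlock(fp model.Fingerprint) {
	l.locks[util.HashFP(fp)%numLocks].Unlock()
}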
func InitEvents ¶
func InitEvents(freq int)
InitEvents initializes event sampling with the given frequency; zero turns it off.
func InitLogger ¶
func InitLogger(cfg *server.Config)
InitLogger initialises the global go-kit logger (util.Logger) and overrides the default logger for the server.
func LabelsToMetric ¶
func LabelsToMetric(ls labels.Labels) model.Metric
LabelsToMetric converts a Labels to a Metric. Don't do this on any performance-sensitive paths.
func MergeNSampleSets ¶
func MergeNSampleSets(sampleSets ...[]model.SamplePair) []model.SamplePair
MergeNSampleSets merges and dedupes n sets of already sorted sample pairs.
func MergeSampleSets ¶
func MergeSampleSets(a, b []model.SamplePair) []model.SamplePair
MergeSampleSets merges and dedupes two sets of already sorted sample pairs.
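For example, a small sketch of merging two sorted series fragments (the util import path is assumed):

package example

import (
	"github.com/prometheus/common/model"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func mergeExample() []model.SamplePair {
	// Both inputs must already be sorted by timestamp.
	a := []model.SamplePair{{Timestamp: 1, Value: 1}, {Timestamp: 3, Value: 3}}
	b := []model.SamplePair{{Timestamp: 2, Value: 2}, {Timestamp: 3, Value: 3}}
	// Yields pairs at timestamps 1, 2 and 3, with the duplicate at t=3 deduped.
	return util.MergeSampleSets(a, b)
}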
func NewPrometheusLogger ¶
func NewPrometheusLogger(l logging.Level) (log.Logger, error)
NewPrometheusLogger creates a new instance of PrometheusLogger which exposes Prometheus counters for various log levels.
func ParseProtoReader ¶
func ParseProtoReader(ctx context.Context, reader io.Reader, req proto.Message, compression CompressionType) ([]byte, error)
ParseProtoReader parses a compressed proto from an io.Reader.
func SerializeProtoResponse ¶
func SerializeProtoResponse(w http.ResponseWriter, resp proto.Message, compression CompressionType) error
SerializeProtoResponse serializes a protobuf response into an HTTP response.
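As an illustration of how these two helpers pair up in an HTTP handler, here is a hedged sketch; the mypb package and its WriteRequest/WriteResponse messages are hypothetical, and the util import path is assumed.

package example

import (
	"net/http"

	mypb "example.com/mypb" // hypothetical generated protobuf package

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Decode a snappy-compressed protobuf request body.
	var req mypb.WriteRequest
	if _, err := util.ParseProtoReader(r.Context(), r.Body, &req, util.RawSnappy); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// ... handle the request ...

	// Reply with a snappy-compressed protobuf response.
	var resp mypb.WriteResponse
	if err := util.SerializeProtoResponse(w, &resp, util.RawSnappy); err != nil {
		return // nothing more to do once writing the response has failed
	}
}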
func SplitFiltersAndMatchers ¶
func SplitFiltersAndMatchers(allMatchers []*labels.Matcher) (filters, matchers []*labels.Matcher)
SplitFiltersAndMatchers splits off empty matchers, which are treated as filters; see #220.
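A small sketch of the split (import paths assumed; which matchers count as "empty" is decided by the function itself):

package example

import (
	"github.com/prometheus/prometheus/pkg/labels" // import path assumed

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func splitExample() {
	all := []*labels.Matcher{
		{Type: labels.MatchEqual, Name: "job", Value: "api"},
		{Type: labels.MatchEqual, Name: "instance", Value: ""}, // matches empty: split off as a filter
	}
	filters, matchers := util.SplitFiltersAndMatchers(all)
	_, _ = filters, matchers
}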
func WithContext ¶
func WithContext(ctx context.Context, l log.Logger) log.Logger
WithContext returns a Logger that has information about the current user in its details.
e.g.
log := util.WithContext(ctx, util.Logger)
log.Log("msg", "Could not chunk chunks", "err", err)
func WithTraceID ¶
func WithTraceID(traceID string, l log.Logger) log.Logger
WithTraceID returns a Logger that has information about the traceID in its details.
func WithUserID ¶
func WithUserID(userID string, l log.Logger) log.Logger
WithUserID returns a Logger that has information about the current user in its details.
func WriteJSONResponse ¶
func WriteJSONResponse(w http.ResponseWriter, v interface{})
WriteJSONResponse writes some JSON as an HTTP response.
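For instance, a minimal handler (util import path assumed):

package example

import (
	"net/http"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func readyHandler(w http.ResponseWriter, r *http.Request) {
	// Encodes the value as JSON and writes it as the HTTP response body.
	util.WriteJSONResponse(w, map[string]string{"status": "ready"})
}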
Types ¶
type Backoff ¶
type Backoff struct {
// contains filtered or unexported fields
}
Backoff implements exponential backoff with randomized wait times.
func NewBackoff ¶
func NewBackoff(ctx context.Context, cfg BackoffConfig) *Backoff
NewBackoff creates a Backoff object. Pass a Context that can also terminate the operation.
func (*Backoff) Err ¶
func (b *Backoff) Err() error
Err returns the reason for terminating the backoff, or nil if it didn't terminate.
func (*Backoff) NumRetries ¶
func (b *Backoff) NumRetries() int
NumRetries returns the number of retries so far.
type BackoffConfig ¶
type BackoffConfig struct {
	MinBackoff time.Duration // start backoff at this level
	MaxBackoff time.Duration // increase exponentially to this level
	MaxRetries int           // give up after this many; zero means infinite retries
}
BackoffConfig configures a Backoff.
func (*BackoffConfig) RegisterFlags ¶
func (cfg *BackoffConfig) RegisterFlags(prefix string, f *flag.FlagSet)
RegisterFlags for BackoffConfig.
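As a hedged sketch of how these pieces usually fit together: the Ongoing and Wait loop methods used below are assumptions based on common versions of this type and are not documented in this section; only NewBackoff, Err and NumRetries appear above. The util import path is also assumed.

package example

import (
	"context"
	"time"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

// retry runs fn until it succeeds, the context is cancelled, or MaxRetries is reached.
func retry(ctx context.Context, fn func() error) error {
	backoff := util.NewBackoff(ctx, util.BackoffConfig{
		MinBackoff: 100 * time.Millisecond,
		MaxBackoff: 10 * time.Second,
		MaxRetries: 5, // zero would mean infinite retries
	})
	for backoff.Ongoing() { // Ongoing/Wait: assumed loop methods, see note above
		if err := fn(); err == nil {
			return nil
		}
		backoff.Wait()
	}
	// Err reports why the loop stopped: context cancelled or retries exhausted.
	return backoff.Err()
}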
type CompressionType ¶
type CompressionType int
CompressionType for encoding and decoding requests and responses.
const (
	NoCompression CompressionType = iota
	FramedSnappy
	RawSnappy
)
Values for CompressionType.
func CompressionTypeFor ¶
func CompressionTypeFor(version string) CompressionType
CompressionTypeFor a given version of the Prometheus remote storage protocol. See https://github.com/prometheus/prometheus/issues/2692.
type HashBucketHistogram ¶ added in v1.0.0
type HashBucketHistogram interface {
	prometheus.Metric
	prometheus.Collector
	Observe(string, uint32)
	Stop()
}
HashBucketHistogram is used to track a histogram of per-bucket rates.
For instance, I want to know that 50% of rows are getting X QPS or lower and 99% are getting Y QPS or lower. At first glance, this would involve tracking write rate per row, and periodically sticking those numbers in a histogram. To make this fit in memory, instead of tracking per row we keep N buckets of counters and hash the key to a bucket. Then every second we update a histogram with the bucket values (and zero the buckets).
Note, we want this metric to be relatively independent of the number of hash buckets and QPS of the service - we're trying to measure how well load balanced the write load is. So we normalise the values in the hash buckets such that if all buckets are '1', then we have even load. We do this by multiplying the number of ops per bucket by the number of buckets, and dividing by the number of ops.
func NewHashBucketHistogram ¶ added in v1.0.0
func NewHashBucketHistogram(opts HashBucketHistogramOpts) HashBucketHistogram
NewHashBucketHistogram makes a new HashBucketHistogram.
type HashBucketHistogramOpts ¶ added in v1.0.0
type HashBucketHistogramOpts struct {
	prometheus.HistogramOpts
	HashBuckets int
}
HashBucketHistogramOpts are the options for making a HashBucketHistogram.
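To make this concrete, a sketch of wiring one up; the metric name, bucket layout and hash-bucket count are illustrative choices, and the util import path is assumed.

package example

import (
	"github.com/prometheus/client_golang/prometheus"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func newRowWriteHistogram() util.HashBucketHistogram {
	h := util.NewHashBucketHistogram(util.HashBucketHistogramOpts{
		HistogramOpts: prometheus.HistogramOpts{
			Name:    "row_write_distribution", // illustrative name
			Help:    "Normalised distribution of per-row write load.",
			Buckets: prometheus.ExponentialBuckets(0.1, 2, 10),
		},
		HashBuckets: 1024, // number of counters to hash keys into
	})
	prometheus.MustRegister(h) // HashBucketHistogram is a prometheus.Collector
	return h
}

// On the write path, observe one op for a row key:
//
//	h.Observe(rowKey, 1)
//
// and call h.Stop() on shutdown.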
type Op ¶
type Op interface {
	Key() string
	Priority() int64 // The larger the number the higher the priority.
}
Op is an operation on the priority queue.
type PriorityQueue ¶
type PriorityQueue struct {
// contains filtered or unexported fields
}
PriorityQueue is a priority queue.
func NewPriorityQueue ¶
func NewPriorityQueue(lengthGauge prometheus.Gauge) *PriorityQueue
NewPriorityQueue makes a new priority queue.
func (*PriorityQueue) Close ¶
func (pq *PriorityQueue) Close()
Close signals that the queue should be closed when it is empty. A closed queue will not accept new items.
func (*PriorityQueue) Dequeue ¶
func (pq *PriorityQueue) Dequeue() Op
Dequeue returns the op with the highest priority, blocking if the queue is empty; it returns nil if the queue is closed.
func (*PriorityQueue) DiscardAndClose ¶
func (pq *PriorityQueue) DiscardAndClose()
DiscardAndClose closes the queue and removes all the items from it.
func (*PriorityQueue) Enqueue ¶
func (pq *PriorityQueue) Enqueue(op Op) bool
Enqueue adds an operation to the queue in priority order. Returns true if added; false if the operation was already on the queue.
func (*PriorityQueue) Length ¶
func (pq *PriorityQueue) Length() int
Length returns the length of the queue.
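A sketch tying Op and PriorityQueue together; the flushOp type and the gauge name are hypothetical, and the util import path is assumed.

package example

import (
	"github.com/prometheus/client_golang/prometheus"

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

// flushOp is a hypothetical Op; larger Priority values are dequeued first.
type flushOp struct {
	key      string
	priority int64
}

func (o flushOp) Key() string     { return o.key }
func (o flushOp) Priority() int64 { return o.priority }

func queueExample() {
	lengthGauge := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_queue_length", // illustrative metric name
	})
	pq := util.NewPriorityQueue(lengthGauge)

	pq.Enqueue(flushOp{key: "series-a", priority: 1})
	pq.Enqueue(flushOp{key: "series-b", priority: 10})

	// Close lets the queue drain: Dequeue hands out the remaining ops in
	// priority order ("series-b" first here) and then returns nil.
	pq.Close()
	for op := pq.Dequeue(); op != nil; op = pq.Dequeue() {
		_ = op.Key()
	}
}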
type PrometheusLogger ¶
type PrometheusLogger struct {
// contains filtered or unexported fields
}
PrometheusLogger exposes Prometheus counters for each of go-kit's log levels.
func (*PrometheusLogger) Log ¶
func (pl *PrometheusLogger) Log(kv ...interface{}) error
Log increments the appropriate Prometheus counter depending on the log level.
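A hedged sketch of creating one via NewPrometheusLogger (see Functions above); all import paths here are assumptions.

package example

import (
	"github.com/go-kit/kit/log/level"      // import path assumed
	"github.com/weaveworks/common/logging" // import path assumed

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func loggerExample() {
	var lvl logging.Level
	if err := lvl.Set("info"); err != nil {
		panic(err)
	}
	logger, err := util.NewPrometheusLogger(lvl)
	if err != nil {
		panic(err)
	}
	// Each call increments the counter for its log level as a side effect.
	level.Info(logger).Log("msg", "hello")
}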
type SampleStreamIterator ¶
type SampleStreamIterator struct {
// contains filtered or unexported fields
}
SampleStreamIterator is a struct and not just a renamed type because otherwise the Metric field and the Metric() method would clash.
func NewSampleStreamIterator ¶
func NewSampleStreamIterator(ss *model.SampleStream) SampleStreamIterator
NewSampleStreamIterator creates a SampleStreamIterator.
func (SampleStreamIterator) Close ¶
func (it SampleStreamIterator) Close()
Close implements the SeriesIterator interface.
func (SampleStreamIterator) Metric ¶
func (it SampleStreamIterator) Metric() metric.Metric
Metric implements the SeriesIterator interface.
func (SampleStreamIterator) RangeValues ¶
func (it SampleStreamIterator) RangeValues(in metric.Interval) []model.SamplePair
RangeValues implements the SeriesIterator interface.
func (SampleStreamIterator) ValueAtOrBeforeTime ¶
func (it SampleStreamIterator) ValueAtOrBeforeTime(ts model.Time) model.SamplePair
ValueAtOrBeforeTime implements the SeriesIterator interface.
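A sketch of iterating a model.SampleStream; the metric.Interval type comes from Prometheus's storage/metric package, and all import paths are assumed.

package example

import (
	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/storage/metric" // import path assumed

	"github.com/cortexproject/cortex/pkg/util" // import path assumed
)

func iteratorExample() {
	ss := &model.SampleStream{
		Metric: model.Metric{"__name__": "up", "job": "api"},
		Values: []model.SamplePair{
			{Timestamp: 1000, Value: 1},
			{Timestamp: 2000, Value: 0},
		},
	}
	it := util.NewSampleStreamIterator(ss)
	defer it.Close()

	// All pairs with timestamps in [1000, 2000].
	pairs := it.RangeValues(metric.Interval{OldestInclusive: 1000, NewestInclusive: 2000})
	// The most recent pair at or before t=1500, i.e. the one at t=1000.
	latest := it.ValueAtOrBeforeTime(1500)
	_, _ = pairs, latest
}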