otel

package v0.260331.0
Published: Mar 30, 2026 License: MPL-2.0 Imports: 16 Imported by: 0

README

pkg/otel - OpenTelemetry Observability for Tingly-Box

Package otel provides OpenTelemetry-based observability for LLM token usage in tingly-box.

Features

  • Token Usage Metrics: Track input/output tokens, request counts, latency, and errors
  • Multi-Exporter Support: Export to SQLite, OTLP backends, and JSONL files simultaneously
  • Distributed Tracing: Trace request lifecycle with token usage events
  • Semantic Conventions: Follows OpenLLMetry attribute naming conventions

Package Structure

pkg/otel/
├── attributes.go         # Semantic convention attribute keys
├── config.go             # Configuration types (Config, SQLiteConfig, OTLPConfig, SinkConfig)
├── meter.go              # MeterSetup initialization and lifecycle
├── tracer.go             # Distributed tracing support
├── tracker/
│   └── token_tracker.go  # TokenTracker for recording token usage
└── exporter/
    ├── multi.go          # MultiExporter for multiple backends
    ├── sqlite.go         # SQLite exporter
    ├── otlp.go           # OTLP exporter (gRPC/HTTP)
    ├── sink.go           # JSONL file sink exporter
    └── util.go           # Utility functions

Usage

Basic Setup
import (
    "context"
    "time"

    "github.com/tingly-dev/tingly-box/pkg/otel"
    "github.com/tingly-dev/tingly-box/pkg/otel/tracker"
)

ctx := context.Background()

// Create configuration
cfg := &otel.Config{
    Enabled:        true,
    ExportInterval: 10 * time.Second,
    ExportTimeout:  30 * time.Second,
    SQLite: otel.SQLiteConfig{Enabled: true},
    Sink:   otel.SinkConfig{Enabled: true},
    OTLP:   otel.OTLPConfig{Enabled: false},
}

// Initialize meter setup
setup, err := otel.NewMeterSetup(ctx, cfg, &otel.StoreRefs{
    StatsStore: statsStore,
    UsageStore: usageStore,
    Sink:       sink,
})
if err != nil {
    // handle error
}
defer setup.Shutdown(ctx)

// Get the token tracker (named tk to avoid shadowing the tracker package)
tk := setup.Tracker()

// Record token usage
tk.RecordUsage(ctx, tracker.UsageOptions{
    Provider:     "openai",
    ProviderUUID: "uuid-123",
    Model:        "gpt-4",
    RequestModel: "gpt-4",
    Scenario:     "openai",
    InputTokens:  100,
    OutputTokens: 50,
    Streamed:     true,
    Status:       "success",
    LatencyMs:    250,
})
Using Tracer
// Get tracer
tracer := setup.Tracer()

// Start a span for LLM request
ctx, span := tracer.StartRequestSpan(ctx, "openai", "gpt-4", "openai")
defer tracer.EndSpan(span, nil)

// Record token usage as span event
tracer.RecordTokenUsageEvent(ctx, 100, 50)
OTLP Export
cfg := &otel.Config{
    Enabled:        true,
    ExportInterval: 10 * time.Second,
    OTLP: otel.OTLPConfig{
        Enabled:  true,
        Endpoint: "localhost:4317",
        Protocol: "grpc",
        Insecure: true,
    },
}
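The same nested config can target an HTTP collector instead of gRPC. A minimal sketch using the OTLPConfig fields documented below; the endpoint and header values are placeholders, not defaults from this package:

```go
// Hypothetical endpoint and auth header: substitute your collector's values.
cfg := &otel.Config{
    Enabled:        true,
    ExportInterval: 10 * time.Second,
    OTLP: otel.OTLPConfig{
        Enabled:  true,
        Endpoint: "collector.example.com:4318",        // placeholder host:port
        Protocol: "http/protobuf",                     // alternative to "grpc"
        Insecure: false,                               // keep TLS on for remote collectors
        Headers:  map[string]string{"Authorization": "Bearer <token>"}, // placeholder
    },
}
```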

Metrics

Metric Name             Type       Description
llm.token.usage.input   Counter    Input/prompt token usage
llm.token.usage.output  Counter    Output/completion token usage
llm.token.total         Counter    Total tokens consumed
llm.request.count       Counter    Number of LLM requests
llm.request.duration    Histogram  Request duration in milliseconds
llm.request.errors      Counter    Number of request errors

Semantic Attributes

Attribute      Key                  Example
Provider       llm.provider         "openai", "anthropic"
Model          llm.model            "gpt-4", "claude-3-opus"
Request Model  llm.request.model    User-requested model
Token Type     llm.token_type       "input", "output"
Scenario       llm.scenario         "openai", "anthropic", "claude_code"
Streaming      llm.streaming        true, false
Status         llm.response.status  "success", "error", "canceled"
Error Code     llm.error.code       Error code if failed
Rule UUID      llm.rule.uuid        Load balancer rule
Provider UUID  llm.provider.uuid    Provider UUID
User Tier      llm.user.tier        "enterprise", "standard"
Latency        llm.latency.ms       Request latency in ms

Configuration

Config

Field           Type          Default  Description
Enabled         bool          true     Enable/disable OTel tracking
ExportInterval  duration      10s      Time between exports
ExportTimeout   duration      30s      Timeout for each export
BufferSize      int           10000    Max metrics to buffer
SQLite          SQLiteConfig  -        SQLite exporter config
OTLP            OTLPConfig    -        OTLP exporter config
Sink            SinkConfig    -        JSONL sink config

SQLiteConfig

Field    Type  Default  Description
Enabled  bool  true     Enable SQLite export

OTLPConfig

Field     Type               Default  Description
Enabled   bool               false    Enable OTLP export
Endpoint  string             ""       OTLP endpoint (host:port)
Protocol  string             "grpc"   "grpc" or "http/protobuf"
Insecure  bool               false    Disable TLS
Headers   map[string]string  nil      Additional headers

SinkConfig

Field    Type  Default  Description
Enabled  bool  true     Enable JSONL sink export

Migration from internal/obs/otel

The new pkg/otel package replaces internal/obs/otel:

  1. Import path changed: internal/obs/otel → pkg/otel
  2. TokenTracker moved: otel.TokenTracker → tracker.TokenTracker
  3. UsageOptions moved: otel.UsageOptions → tracker.UsageOptions
  4. Config structure updated with nested exporter configs
  5. Metric names updated for clarity:
    • llm.token.usage → llm.token.usage.input / llm.token.usage.output
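A before/after sketch of a typical call site, assuming the old code used the flat otel.UsageOptions type described in item 3 (the function and field values here are illustrative, not from the source):

```go
// Before (internal/obs/otel) -- UsageOptions lived in the otel package:
//   tr.RecordUsage(ctx, otel.UsageOptions{Provider: "openai", Model: "gpt-4"})
//
// After (pkg/otel) -- UsageOptions now lives in the tracker subpackage:
import (
    "context"

    "github.com/tingly-dev/tingly-box/pkg/otel/tracker"
)

// record is a hypothetical helper showing the migrated call shape.
func record(ctx context.Context, tk *tracker.TokenTracker) {
    tk.RecordUsage(ctx, tracker.UsageOptions{
        Provider: "openai",
        Model:    "gpt-4",
    })
}
```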

Dependencies

  • go.opentelemetry.io/otel (v1.42.0)
  • go.opentelemetry.io/otel/sdk (v1.42.0)
  • go.opentelemetry.io/otel/exporters/otlp (v1.42.0)
  • go.opentelemetry.io/otel/exporters/stdout (v1.42.0)

References

  • Specification: docs/spec/20260309-otel-token-usage-collector-spec.md
  • Architecture: docs/arch/otel-arch.md

Documentation

Overview

Package otel provides OpenTelemetry-based observability for LLM token usage. It implements metrics, traces, and logs collection with a collector/exporter architecture for efficient batch processing.

Index

Constants

This section is empty.

Variables

var (
	// AttrLLMProvider identifies the LLM provider (e.g., "openai", "anthropic", "google")
	AttrLLMProvider = attribute.Key("llm.provider")

	// AttrLLMModel identifies the actual model used (e.g., "gpt-4", "claude-3-opus")
	AttrLLMModel = attribute.Key("llm.model")

	// AttrLLMRequestModel identifies the model requested by the user
	AttrLLMRequestModel = attribute.Key("llm.request.model")

	// AttrLLMTokenType identifies the type of token (input/output)
	// Note: Uses underscore (llm.token_type) for backward compatibility with internal/obs/otel
	AttrLLMTokenType = attribute.Key("llm.token_type")

	// AttrLLMScenario identifies the API scenario (e.g., "openai", "anthropic", "claude_code")
	AttrLLMScenario = attribute.Key("llm.scenario")

	// AttrLLMStreaming indicates whether the request was streaming
	AttrLLMStreaming = attribute.Key("llm.streaming")

	// AttrLLMResponseStatus indicates the response status (success, error, canceled)
	AttrLLMResponseStatus = attribute.Key("llm.response.status")

	// AttrLLMErrorCode contains the error code if status is error
	AttrLLMErrorCode = attribute.Key("llm.error.code")

	// AttrLLMRuleUUID identifies the load balancer rule used
	AttrLLMRuleUUID = attribute.Key("llm.rule.uuid")

	// AttrLLMProviderUUID identifies the provider UUID
	AttrLLMProviderUUID = attribute.Key("llm.provider.uuid")

	// AttrLLMUserTier identifies low-cardinality user class for enterprise traffic.
	AttrLLMUserTier = attribute.Key("llm.user.tier")

	// AttrLLMLatencyMs identifies the request latency in milliseconds
	AttrLLMLatencyMs = attribute.Key("llm.latency.ms")
)

Semantic convention attributes following OpenLLMetry and OpenTelemetry standards. These attributes are used to annotate metrics with consistent, meaningful labels.
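These keys are standard go.opentelemetry.io/otel/attribute keys, so they can be combined into an attribute set for a metric data point; a minimal sketch (the example values are illustrative):

```go
import "go.opentelemetry.io/otel/attribute"

// Build a consistent attribute set using the package's semantic keys.
attrs := []attribute.KeyValue{
    otel.AttrLLMProvider.String("openai"),
    otel.AttrLLMModel.String("gpt-4"),
    otel.AttrLLMTokenType.String("input"),
    otel.AttrLLMStreaming.Bool(true),
}
set := attribute.NewSet(attrs...) // deduplicated, sorted set for metric recording
```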

Functions

This section is empty.

Types

type Config

type Config struct {
	// Enabled enables or disables OTel tracking
	Enabled bool

	// ExportInterval is the time between exports. Default: 10s
	ExportInterval time.Duration

	// ExportTimeout is the timeout for each export. Default: 30s
	ExportTimeout time.Duration

	// BufferSize is the max number of metrics to buffer. Default: 10000
	BufferSize int

	// SQLite exporter configuration
	SQLite SQLiteConfig

	// OTLP exporter configuration
	OTLP OTLPConfig

	// Sink exporter configuration
	Sink SinkConfig
}

Config holds the configuration for the OTel observability setup.

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns a config with sensible defaults
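DefaultConfig can serve as a starting point with fields overridden selectively; a sketch assuming the defaults listed in the Configuration tables above:

```go
// Start from defaults (Enabled: true, ExportInterval: 10s,
// ExportTimeout: 30s, BufferSize: 10000), then override what differs.
cfg := otel.DefaultConfig()
cfg.ExportInterval = 5 * time.Second // tighten the export cadence
cfg.OTLP.Enabled = true              // opt into OTLP export
cfg.OTLP.Endpoint = "localhost:4317"
```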

type MeterSetup

type MeterSetup struct {
	// contains filtered or unexported fields
}

MeterSetup holds the meter provider, tracer provider, and token tracker.

func NewMeterSetup

func NewMeterSetup(ctx context.Context, cfg *Config, stores *StoreRefs) (*MeterSetup, error)

NewMeterSetup creates a new meter setup with the provided config and stores.

func (*MeterSetup) Shutdown

func (ms *MeterSetup) Shutdown(ctx context.Context) error

Shutdown shuts down the meter and tracer providers.
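A common pattern (not shown in the source) is to bound Shutdown with a timeout so that a final export cannot stall process teardown; a sketch assuming a MeterSetup named setup:

```go
// Give shutdown a deadline independent of the request context,
// which may already be canceled during teardown.
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := setup.Shutdown(shutdownCtx); err != nil {
    log.Printf("otel shutdown: %v", err)
}
```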

func (*MeterSetup) Tracer

func (ms *MeterSetup) Tracer() *Tracer

Tracer returns the tracer.

func (*MeterSetup) Tracker

func (ms *MeterSetup) Tracker() *tracker.TokenTracker

Tracker returns the token tracker.

type OTLPConfig

type OTLPConfig struct {
	// Enabled enables OTLP export
	Enabled bool

	// Endpoint is the OTLP endpoint (gRPC or HTTP)
	Endpoint string

	// Protocol is the OTLP protocol ("grpc" or "http/protobuf")
	Protocol string

	// Insecure disables TLS for the connection
	Insecure bool

	// Headers are optional headers to send with each request
	Headers map[string]string
}

OTLPConfig holds OTLP exporter configuration

type SQLiteConfig

type SQLiteConfig struct {
	// Enabled enables SQLite export
	Enabled bool
}

SQLiteConfig holds SQLite exporter configuration

type SinkConfig

type SinkConfig struct {
	// Enabled enables JSONL sink export
	Enabled bool
}

SinkConfig holds JSONL sink exporter configuration

type StoreRefs

type StoreRefs struct {
	StatsStore *db.StatsStore
	UsageStore *db.UsageStore
	Sink       *obs.Sink
}

StoreRefs holds references to the storage backends for exporters.

type Tracer

type Tracer struct {
	// contains filtered or unexported fields
}

Tracer provides distributed tracing capabilities for LLM requests.

func NewTracer

func NewTracer(tp trace.TracerProvider) *Tracer

NewTracer creates a new Tracer with the provided tracer provider.

func (*Tracer) EndSpan

func (t *Tracer) EndSpan(span trace.Span, err error)

EndSpan ends a span with optional error handling.

func (*Tracer) RecordError

func (t *Tracer) RecordError(ctx context.Context, err error, attrs ...attribute.KeyValue)

RecordError records an error to the current span.

func (*Tracer) RecordTokenUsageEvent

func (t *Tracer) RecordTokenUsageEvent(ctx context.Context, inputTokens, outputTokens int)

RecordTokenUsageEvent records token usage as a span event.

func (*Tracer) SetSpanAttributes

func (t *Tracer) SetSpanAttributes(ctx context.Context, attrs ...attribute.KeyValue)

SetSpanAttributes sets attributes on the current span.

func (*Tracer) StartRequestSpan

func (t *Tracer) StartRequestSpan(ctx context.Context, provider, model, scenario string) (context.Context, trace.Span)

StartRequestSpan begins a span for an LLM request with standard attributes.

func (*Tracer) StartSpan

func (t *Tracer) StartSpan(ctx context.Context, name string, opts ...trace.SpanStartOption) (context.Context, trace.Span)

StartSpan begins a new span with the given name and options.
