iris

package module
v1.2.7
Published: May 3, 2026 License: MPL-2.0 Imports: 15 Imported by: 6

README

Iris — Structured Logging at Wind Speed for Go

an AGILira fragment

Iris is a blazing-fast, zero-allocation structured logging library for Go, engineered for applications that demand maximum throughput, built-in security, and production-grade reliability — without compromising developer experience.


Key Features
  • Smart API: Zero-configuration setup with automatic optimization for your environment
  • SyncReader Interface: Extensible architecture for integrating external log sources
  • SyncWriter Interface: Modular output destinations with external writer modules
  • Intelligent Auto-Scaling: Real-time switching between SingleRing and ThreadedRings modes based on workload
  • Built-In Security: Sensitive data redaction, log injection protection, and key sanitization
  • Advanced Idle Strategies: Progressive, spinning, sleeping, yielding, and channel strategies for optimal CPU usage
  • Backpressure Policies: Drop-on-full or block-on-full handling for high-load scenarios
  • OnDrop Forensic Callback: Real-time notification when log entries are dropped (CWE-778 detection)
  • Token-Bucket Sampling: Rate-limiting for high-volume logging with configurable capacity and refill
  • Context Integration: First-class context.Context support with key extraction and field propagation

Modular Architecture

Iris uses a modular design: the core library stays lean, while integrations (readers, writers, providers) ship as external packages.

Installation

go get github.com/agilira/iris

Quick Start

import (
    "time"

    "github.com/agilira/iris"
)

// Smart API automatically configures everything optimally
logger, err := iris.New(iris.Config{})
if err != nil {
    panic(err)
}
logger.Start()
defer logger.Sync()

// Zero-allocation structured logging
logger.Info("User authenticated",
    iris.Str("user_id", "12345"),
    iris.Dur("response_time", 150*time.Millisecond),
    iris.Secret("api_key", apiKey))  // apiKey: your sensitive value; automatically redacted

Performance

Iris prioritizes performance without sacrificing developer experience. Through careful engineering of zero-allocation field encoding, cached time sources, and lock-free ring buffers, we achieve consistent sub-35ns logging operations.

Benchmark environment: AMD Ryzen 5 7520U, Go 1.24.5, linux/amd64, SingleRing architecture.

Logging a message and 10 fields:

Package Time Time % to iris Objects Allocated
iris 34 ns/op +0% 0 allocs/op
zerolog 53 ns/op +56% 0 allocs/op
zap 429 ns/op +1,162% 1 allocs/op
slog 689 ns/op +1,926% 11 allocs/op
go-kit 2,516 ns/op +7,300% 36 allocs/op
apex/log 5,690 ns/op +16,635% 35 allocs/op
logrus 9,904 ns/op +29,012% 52 allocs/op
log15 10,062 ns/op +29,476% 42 allocs/op

Logging with accumulated context (6 fields already present):

Package Time Time % to iris Objects Allocated
iris 25 ns/op +0% 0 allocs/op
zerolog 27 ns/op +8% 0 allocs/op
zap 100 ns/op +300% 0 allocs/op
slog 157 ns/op +528% 0 allocs/op
go-kit 1,179 ns/op +4,616% 19 allocs/op
apex/log 2,801 ns/op +11,104% 13 allocs/op
log15 4,240 ns/op +16,860% 23 allocs/op
logrus 5,262 ns/op +20,948% 35 allocs/op

Adding fields at log site (6 fields):

Package Time Time % to iris Objects Allocated
iris 31 ns/op +0% 0 allocs/op
zerolog 75 ns/op +142% 0 allocs/op
zap 330 ns/op +965% 1 allocs/op
slog 571 ns/op +1,742% 7 allocs/op
go-kit 1,442 ns/op +4,552% 28 allocs/op
apex/log 4,129 ns/op +13,219% 24 allocs/op
logrus 6,106 ns/op +19,597% 40 allocs/op
log15 7,821 ns/op +25,132% 34 allocs/op
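The tables above come from Go's standard benchmark harness. As a sketch of how such a number is produced (the actual suite lives in the iris repository; field names and values here are illustrative, and the real benchmarks point Config.Output at a discard writer so I/O is excluded):

```go
package iris_test

import (
	"testing"
	"time"

	"github.com/agilira/iris"
)

// BenchmarkTenFields mirrors the "message + 10 fields" scenario.
// Run with: go test -bench=TenFields -benchmem
func BenchmarkTenFields(b *testing.B) {
	logger, err := iris.New(iris.Config{}) // Smart API defaults
	if err != nil {
		b.Fatal(err)
	}
	logger.Start()
	defer logger.Sync()

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		logger.Info("benchmark message",
			iris.Str("a", "1"), iris.Str("b", "2"), iris.Str("c", "3"),
			iris.Int64("d", 4), iris.Int64("e", 5), iris.Int64("f", 6),
			iris.Bool("g", true), iris.Bool("h", false),
			iris.Dur("i", time.Millisecond), iris.Str("j", "10"))
	}
}
```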

Architecture

Iris provides intelligent logging through Smart API optimization and security-first design:

graph TD
    A[Application] --> B[Smart API<br/>Auto-Configuration]
    B --> C[Logger Instance<br/>Zero-Config Setup]
    C --> D[ZephyrosLite MPSC<br/>Ring Buffer + Batching]
    D --> E[Field Processing<br/>Type-Safe + Security]
    E --> F[Encoder Selection<br/>JSON / Text]
    F --> G[Output Writers<br/>File / Stdout / Custom]
    E --> H[Security Layer<br/>Redaction + Injection Protection]
    B --> I[Time Cache<br/>go-timecache]

    classDef primary fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    classDef secondary fill:#e8f5e8,stroke:#1b5e20,stroke-width:2px
    classDef security fill:#fce4ec,stroke:#880e4f,stroke-width:2px
    classDef performance fill:#fff3e0,stroke:#e65100,stroke-width:2px

    class A,G primary
    class B,C,F secondary
    class E,H security
    class D,I performance

SyncReader Integration

Iris provides a SyncReader interface for integrating with existing logging libraries through external provider modules:

// Example with slog provider
import (
    "log/slog"

    slogprovider "github.com/agilira/iris-provider-slog"
)

provider := slogprovider.New(slogprovider.Config{})
logger := slog.New(provider)  // Same slog API, iris performance

Advanced Features

Auto-Scaling Architecture:

  • SingleRing Mode: ~29 ns/op for low-contention scenarios
  • ThreadedRings Mode: ~35 ns/op per thread for high-contention workloads
  • Automatic switching based on write frequency, contention, latency, and goroutine count
  • Real-time metrics via logger.Stats() for performance monitoring

Idle Strategies:

  • Progressive Strategy: Adaptive CPU usage (default)
  • Spinning Strategy: Ultra-low latency, maximum CPU usage
  • Sleeping Strategy: Minimal CPU usage for low-throughput scenarios
  • Yielding Strategy: Moderate CPU reduction via runtime.Gosched()
  • Channel Strategy: Event-driven wake-up for minimal CPU footprint
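A strategy is selected through Config.IdleStrategy. A minimal sketch using the predefined EfficientStrategy variable documented later on this page (equivalent to NewSleepingIdleStrategy(time.Millisecond, 0)):

```go
package main

import "github.com/agilira/iris"

func main() {
	// Low-throughput service: trade a little wake-up latency
	// for near-zero idle CPU while the ring buffer is empty.
	logger, err := iris.New(iris.Config{
		IdleStrategy: iris.EfficientStrategy,
	})
	if err != nil {
		panic(err)
	}
	logger.Start()
	defer logger.Sync()

	logger.Info("idle strategy configured")
}
```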

Core Framework

Smart API - Zero Configuration

Auto-detection and configuration of architecture, capacity, encoder, and logging level without any setup.

Security-First Design
  • Secret Redaction: Automatic masking of sensitive data (passwords, API keys, tokens)
  • Injection Protection: Complete defense against log manipulation attacks (CWE-93, CWE-116)
  • Field Key Sanitization: All keys pass through quoteString() -- no raw writes to output
  • Input Validation: Config.Validate() enforced at construction with capacity ceiling (CWE-400)
  • Drop Detection: OnDrop callback for real-time log-flooding attack detection (CWE-778)

Multi-Format Output
  • JSON: Structured logging for production systems and log aggregation
  • Text: Human-readable format for development and debugging

Field Type System
  • Type-Safe Constructors: Strongly typed field creation (Str, Int64, Dur, etc.)
  • Union Storage: Memory-efficient field storage with type indicators
  • Secret Fields: Automatic redaction in all encoder output

// Type-safe field construction with automatic security
logger.Info("Payment processed",
    iris.Str("transaction_id", "tx-123456"),
    iris.Int64("amount_cents", 2499),
    iris.Dur("processing_time", time.Millisecond*45),
    iris.Secret("card_number", cardNumber),  // Automatically redacted
)

// Output (JSON): {"ts":"...","level":"info","msg":"Payment processed","transaction_id":"tx-123456","amount_cents":2499,"processing_time":"45ms","card_number":"[REDACTED]"}

The Philosophy Behind Iris

In Greek mythology, Iris was the personification of the rainbow and divine messenger of the gods, beloved wife of Zephyros, the swiftest and gentlest of the Anemoi. Together, they embodied perfect partnership: Zephyros as the carrier of velocity and power, Iris as the guardian of beauty and truth. When they worked in harmony, messages crossed the heavens with unprecedented speed while maintaining their radiant clarity and divine fidelity.

Iris and Zephyros work together within every log operation -- Zephyros provides the velocity that moves your messages in mere nanoseconds across any distance, while Iris ensures each log maintains its integrity, security, and meaning. Neither works alone; they are unified in purpose.

License

Iris is licensed under the Mozilla Public License 2.0.


Documentation

Overview

Package iris provides a high-performance, structured logging library for Go applications.

Iris is designed for production environments where performance, security, and reliability are critical. It achieves zero-allocation logging on hot paths through lock-free ring buffers, buffer pooling, and type-safe field encoding.

Key Features

  • Smart API with zero-configuration setup and automatic optimization
  • Zero-allocation structured logging (~29 ns/op without fields, 0 allocs)
  • Lock-free MPSC ring buffer architecture (ZephyrosLite)
  • Built-in security: field sanitization, log injection prevention, secret redaction
  • JSON and text encoders with safe key/value escaping
  • Context-aware logging with context.Context integration
  • Dynamic level changes via atomic operations
  • Backpressure handling (drop-on-full or block-on-full policies)
  • Configurable idle strategies for CPU/latency trade-offs
  • Token-bucket sampling for high-volume scenarios
  • Intelligent auto-scaling between SingleRing and ThreadedRings architectures
  • SyncReader interface for integrating external log sources
  • Modular writer ecosystem via SyncWriter interface

Smart API - Zero Configuration

The Smart API automatically detects optimal settings for your environment:

logger, err := iris.New(iris.Config{})
if err != nil {
	// handle error
}
logger.Start()
defer logger.Sync()

logger.Info("Hello world", iris.String("user", "alice"))

Smart features include:

  • Architecture detection (SingleRing vs ThreadedRings based on CPU count)
  • Capacity optimization (power-of-two sizing, bounded by maxCapacity)
  • Encoder selection (text for TTY, JSON otherwise)
  • Level detection from IRIS_LEVEL environment variable
  • Cached time source for high-frequency logging (go-timecache)

Configuration

While Smart API handles most scenarios, any setting can be overridden:

logger, err := iris.New(iris.Config{
	Output: myCustomWriter,
	Level:  iris.Error,
})

Config is validated at construction time: New() calls Validate() internally, rejecting invalid levels, out-of-range capacities, and mismatched batch sizes.
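Because New() runs Validate() internally, misconfiguration surfaces as a startup error instead of a silently broken logger. Validation can also be run ahead of time; a sketch (the specific rejection behavior for each field is version-dependent):

```go
package main

import (
	"log"

	"github.com/agilira/iris"
)

func main() {
	cfg := iris.Config{
		Capacity:  1024, // power-of-two, as recommended
		BatchSize: 4096, // larger than Capacity: a mismatched batch size
	}
	if err := cfg.Validate(); err != nil {
		// Surface the problem at startup instead of dropping logs later.
		log.Fatalf("iris config invalid: %v", err)
	}
}
```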

Performance

Benchmark results on AMD Ryzen 5 7520U (go test -bench=. -benchmem, SingleRing):

  • Message + 10 fields: ~34 ns/op, 0 allocs/op
  • Accumulated context: ~25 ns/op, 0 allocs/op
  • Adding 6 fields at call site: ~31 ns/op, 0 allocs/op
  • No fields: ~29 ns/op, 0 allocs/op
  • Disabled level (early exit): <1 ns/op, 0 allocs/op

Security

Security is built into every layer:

  • Field key and value sanitization prevents log injection (CWE-93, CWE-116)
  • Secret field redaction protects sensitive data in output
  • Encoder escapes all keys through quoteString (no raw writes)
  • Input validation at construction (Capacity ceiling prevents CWE-400 OOM)
  • OnDrop callback for real-time log-flooding attack detection (CWE-778)

Field Types

Type-safe field constructors minimize allocation and prevent type confusion:

logger.Info("Payment processed",
	iris.Str("tx_id", "tx-123456"),
	iris.Int64("amount_cents", 2499),
	iris.Dur("elapsed", time.Since(start)),
	iris.Secret("card", cardNumber),   // appears as [REDACTED] in output
)

Available constructors: Str, String, Int, Int8, Int16, Int32, Int64, Uint, Uint8, Uint16, Uint32, Uint64, Float32, Float64, Bool, Dur, TimeField, Time, Bytes, Binary, Secret, Err, Stringer, Object.

Error Handling

Logger creation returns errors for invalid configurations. Internal write errors are routed to Config.ErrorHandler (or stderr if nil). Dropped messages are tracked via logger.Stats()["dropped"]. For real-time drop notification, set Config.OnDrop to receive a callback from the consumer goroutine whenever the cumulative drop count increases.

logger, err := iris.New(iris.Config{})
if err != nil {
	// invalid config: handle or exit
}
logger.Start()
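Putting the drop-handling pieces together, a hedged sketch wiring Config.OnDrop and polling Stats() (the "dropped" key comes from the text above; treat the exact map shape as version-dependent):

```go
logger, err := iris.New(iris.Config{
	// Called from the consumer goroutine whenever the cumulative
	// drop count increases (CWE-778 forensics).
	OnDrop: func(totalDropped int64) {
		fmt.Fprintf(os.Stderr, "iris dropped %d records total\n", totalDropped)
	},
})
if err != nil {
	panic(err)
}
logger.Start()
defer logger.Sync()

// Periodic health check: growth here means sustained backpressure.
dropped := logger.Stats()["dropped"]
_ = dropped
```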

Development Mode

Development mode enables debug level, caller info, and text encoding:

logger, err := iris.New(iris.Config{}, iris.Development())

Context Integration

ContextLogger carries fields extracted from context.Context:

cl := logger.WithContext(ctx, iris.WithKeys(iris.TraceIDKey))
cl.Info("request handled", iris.Int("status", 200))
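A fuller sketch: the trace ID is placed in the context under iris.TraceIDKey, then extracted once by WithContext so the hot logging path never calls context.Value (using the ContextKey constant directly as the context key is an assumption about how extraction matches keys):

```go
ctx := context.WithValue(context.Background(), iris.TraceIDKey, "4bf92f35")

// Fields are extracted once here, not on every log call.
cl := logger.WithContext(ctx, iris.WithKeys(iris.TraceIDKey))

// Output should include "trace_id":"4bf92f35" alongside the call-site fields.
cl.Info("request handled", iris.Int("status", 200))
```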

Best Practices

  • Use Smart API for all new projects: iris.New(iris.Config{})
  • Prefer typed field constructors over formatted messages
  • Use iris.Secret() for passwords, tokens, and PII
  • Use iris.Development() for local development
  • Monitor logger.Stats() in production for drop rate insight
  • Set IRIS_LEVEL environment variable for deployment tuning

For comprehensive documentation, see: https://github.com/agilira/iris

Index

Constants

const (
	// Core logging errors
	ErrCodeLoggerCreation errors.ErrorCode = "IRIS_LOGGER_CREATION"
	ErrCodeLoggerNotFound errors.ErrorCode = "IRIS_LOGGER_NOT_FOUND"
	ErrCodeLoggerDisabled errors.ErrorCode = "IRIS_LOGGER_DISABLED"
	ErrCodeLoggerClosed   errors.ErrorCode = "IRIS_LOGGER_CLOSED"

	// Configuration errors
	ErrCodeInvalidConfig errors.ErrorCode = "IRIS_INVALID_CONFIG"
	ErrCodeInvalidLevel  errors.ErrorCode = "IRIS_INVALID_LEVEL"
	ErrCodeInvalidFormat errors.ErrorCode = "IRIS_INVALID_FORMAT"
	ErrCodeInvalidOutput errors.ErrorCode = "IRIS_INVALID_OUTPUT"

	// Field and encoding errors
	ErrCodeInvalidField      errors.ErrorCode = "IRIS_INVALID_FIELD"
	ErrCodeEncodingFailed    errors.ErrorCode = "IRIS_ENCODING_FAILED"
	ErrCodeFieldTypeMismatch errors.ErrorCode = "IRIS_FIELD_TYPE_MISMATCH"
	ErrCodeBufferOverflow    errors.ErrorCode = "IRIS_BUFFER_OVERFLOW"

	// Writer and output errors
	ErrCodeWriterNotAvailable errors.ErrorCode = "IRIS_WRITER_NOT_AVAILABLE"
	ErrCodeWriteFailed        errors.ErrorCode = "IRIS_WRITE_FAILED"
	ErrCodeFlushFailed        errors.ErrorCode = "IRIS_FLUSH_FAILED"
	ErrCodeSyncFailed         errors.ErrorCode = "IRIS_SYNC_FAILED"

	// Performance and resource errors
	ErrCodeMemoryAllocation errors.ErrorCode = "IRIS_MEMORY_ALLOCATION"
	ErrCodePoolExhausted    errors.ErrorCode = "IRIS_POOL_EXHAUSTED"
	ErrCodeTimeout          errors.ErrorCode = "IRIS_TIMEOUT"
	ErrCodeResourceLimit    errors.ErrorCode = "IRIS_RESOURCE_LIMIT"

	// Ring buffer errors
	ErrCodeRingInvalidCapacity  errors.ErrorCode = "IRIS_RING_INVALID_CAPACITY"
	ErrCodeRingInvalidBatchSize errors.ErrorCode = "IRIS_RING_INVALID_BATCH_SIZE"
	ErrCodeRingMissingProcessor errors.ErrorCode = "IRIS_RING_MISSING_PROCESSOR"
	ErrCodeRingClosed           errors.ErrorCode = "IRIS_RING_CLOSED"
	ErrCodeRingBuildFailed      errors.ErrorCode = "IRIS_RING_BUILD_FAILED"

	// Hook and middleware errors
	ErrCodeHookExecution   errors.ErrorCode = "IRIS_HOOK_EXECUTION"
	ErrCodeMiddlewareChain errors.ErrorCode = "IRIS_MIDDLEWARE_CHAIN"
	ErrCodeFilterFailed    errors.ErrorCode = "IRIS_FILTER_FAILED"

	// File and rotation errors
	ErrCodeFileOpen         errors.ErrorCode = "IRIS_FILE_OPEN"
	ErrCodeFileWrite        errors.ErrorCode = "IRIS_FILE_WRITE"
	ErrCodeFileRotation     errors.ErrorCode = "IRIS_FILE_ROTATION"
	ErrCodePermissionDenied errors.ErrorCode = "IRIS_PERMISSION_DENIED"
)

LoggerError codes - specific error codes for the iris logging library

const ErrCodeLoggerExecution errors.ErrorCode = "IRIS_LOGGER_EXECUTION"

ErrCodeLoggerExecution represents the error code for logger execution failures

Variables

var (
	// ErrLoggerNotStarted is returned when logging operations are attempted on a non-started logger
	ErrLoggerNotStarted = errors.New(ErrCodeLoggerNotFound, "logger not started - call Start() first")

	// ErrLoggerClosed is returned when logging operations are attempted on a closed logger
	ErrLoggerClosed = errors.New(ErrCodeLoggerClosed, "logger is closed")

	// ErrLoggerCreationFailed is returned when logger creation fails
	ErrLoggerCreationFailed = errors.New(ErrCodeLoggerCreation, "failed to create logger")
)

Logger errors
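These sentinel errors and codes compose with the helper functions below (IsLoggerError, IsRetryableError, GetErrorCode, GetUserMessage). A usage sketch, where doLogging is a hypothetical function returning an iris error:

```go
if err := doLogging(); err != nil {
	switch {
	case iris.IsLoggerError(err, iris.ErrCodeLoggerClosed):
		// Logger was shut down; stop sending records.
	case iris.IsRetryableError(err):
		// Transient write/flush failure; retry with backoff.
	default:
		log.Printf("iris error %s: %s",
			iris.GetErrorCode(err), iris.GetUserMessage(err))
	}
}
```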

var BalancedStrategy = NewProgressiveIdleStrategy()

BalancedStrategy provides good performance for most production workloads. Uses progressive strategy that adapts to workload patterns. Equivalent to NewProgressiveIdleStrategy().

var EfficientStrategy = NewSleepingIdleStrategy(time.Millisecond, 0)

EfficientStrategy minimizes CPU usage for low-throughput scenarios. Uses 1ms sleep with no initial spinning. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 0).

var HybridStrategy = NewSleepingIdleStrategy(time.Millisecond, 1000)

HybridStrategy provides a good compromise between latency and CPU usage. Spins briefly then sleeps for 1ms. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 1000).

var SpinningStrategy = NewSpinningIdleStrategy()

SpinningStrategy provides ultra-low latency with maximum CPU usage. Equivalent to NewSpinningIdleStrategy().

Functions

func AllLevelNames

func AllLevelNames() []string

AllLevelNames returns a slice of all valid level names. This is useful for generating help text and validation messages.

func FreeStack

func FreeStack(stack *Stack)

FreeStack returns a Stack to the pool for reuse

func GetErrorCode

func GetErrorCode(err error) errors.ErrorCode

GetErrorCode extracts the error code from an error

func GetUserMessage

func GetUserMessage(err error) string

GetUserMessage extracts a user-friendly message from an error

func IsFileSyncer

func IsFileSyncer(ws WriteSyncer) bool

IsFileSyncer checks if a WriteSyncer is backed by a file. This can be useful for conditional logic based on the underlying writer type, such as applying different buffering strategies.

func IsLoggerError

func IsLoggerError(err error, code errors.ErrorCode) bool

IsLoggerError checks if an error is an iris logger error

func IsNopSyncer

func IsNopSyncer(ws WriteSyncer) bool

IsNopSyncer checks if a WriteSyncer uses no-op synchronization. This can help optimize write patterns when sync operations are known to be no-ops.

func IsRetryableError

func IsRetryableError(err error) bool

IsRetryableError checks if an error is retryable

func IsValidLevel

func IsValidLevel(level Level) bool

IsValidLevel checks if the given level is a valid predefined level. WHY upper bound is Fatal: DPanic, Panic, and Fatal are intentional production-grade levels. Restricting to Error was wrong — it would cause New() to reject loggers created at Fatal level, which is a legitimate operational mode (e.g. silence everything except fatal conditions).

func NewAtomicLevelFromConfig

func NewAtomicLevelFromConfig(config *Config) *atomicLevel

NewAtomicLevelFromConfig creates a new atomicLevel initialized with the config's level. This function bridges the gap between static configuration and dynamic level management.

func NewLoggerError

func NewLoggerError(code errors.ErrorCode, message string) *errors.Error

NewLoggerError creates a new logger-specific error with standard context

func NewLoggerErrorWithField

func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error

NewLoggerErrorWithField creates a logger error with field and value information

func NewReaderLogger added in v1.1.0

func NewReaderLogger(config Config, readers []SyncReader, opts ...Option) (*readerLogger, error)

NewReaderLogger creates a logger that processes both direct logging calls and background readers. The underlying Logger performance is preserved while external log sources are processed asynchronously.

Parameters:

  • config: Standard Iris logger configuration
  • readers: External log sources to process in background
  • opts: Standard Iris logger options

Returns:

  • *readerLogger: Extended logger with reader support
  • error: Configuration or setup error

Performance: Zero impact on direct logging, background readers operate in separate goroutines feeding into the same high-performance ring buffer.

func RecoverWithError

func RecoverWithError(code errors.ErrorCode) *errors.Error

RecoverWithError recovers from a panic and converts it to a logger error

func WrapLoggerError

func WrapLoggerError(originalErr error, code errors.ErrorCode, message string) *errors.Error

WrapLoggerError wraps an existing error with logger-specific context

Types

type AdaptiveLogger added in v1.2.0

type AdaptiveLogger struct {
	// contains filtered or unexported fields
}

AdaptiveLogger provides lazy dual-mode auto-scaling

func NewAdaptiveLogger added in v1.2.0

func NewAdaptiveLogger(cfg ScalerConfig) (*AdaptiveLogger, error)

NewAdaptiveLogger creates a lazy dual-mode auto-scaling logger

func (*AdaptiveLogger) Close added in v1.2.0

func (al *AdaptiveLogger) Close() error

Close gracefully shuts down

func (*AdaptiveLogger) Debug added in v1.2.0

func (al *AdaptiveLogger) Debug(msg string, fields ...Field)

Debug logs at Debug level

func (*AdaptiveLogger) Error added in v1.2.0

func (al *AdaptiveLogger) Error(msg string, fields ...Field)

Error logs at Error level

func (*AdaptiveLogger) Info added in v1.2.0

func (al *AdaptiveLogger) Info(msg string, fields ...Field)

Info logs at Info level with automatic scaling

func (*AdaptiveLogger) Level added in v1.2.1

func (al *AdaptiveLogger) Level() Level

Level returns the current minimum logging level. Reads from singleLogger (always exists, always in sync with multiLogger).

func (*AdaptiveLogger) Mode added in v1.2.0

func (al *AdaptiveLogger) Mode() ScalingMode

Mode returns the current scaling mode

func (*AdaptiveLogger) Named added in v1.2.4

func (al *AdaptiveLogger) Named(name string) *AdaptiveLogger

Named creates a child AdaptiveLogger with a hierarchical name. Names are dot-separated: parent.Named("db") on a logger named "app" produces "app.db". The child shares scaling machinery with the parent.

WHY accumulate: same reason as With() -- multiLogger is lazy and must inherit the full name chain when it is eventually created.

Thread Safety: safe to call from any goroutine.

func (*AdaptiveLogger) SetLevel added in v1.2.1

func (al *AdaptiveLogger) SetLevel(level Level)

SetLevel atomically changes the minimum logging level on both internal loggers. WHY both: if multiLogger was lazily initialized, its level must stay in sync with singleLogger. A SetLevel that only updates singleLogger would cause filtered messages to reappear when the system scales up under contention.

func (*AdaptiveLogger) Start added in v1.2.0

func (al *AdaptiveLogger) Start() error

Start begins logger operations

func (*AdaptiveLogger) Stats added in v1.2.0

func (al *AdaptiveLogger) Stats() AdaptiveStats

Stats returns scaling statistics

func (*AdaptiveLogger) Sync added in v1.2.1

func (al *AdaptiveLogger) Sync() error

Sync flushes both internal loggers' ring buffers and syncs the output. WHY both: if multiLogger was initialized, it may have buffered records that must reach the output before Sync returns.

func (*AdaptiveLogger) Warn added in v1.2.0

func (al *AdaptiveLogger) Warn(msg string, fields ...Field)

Warn logs at Warn level

func (*AdaptiveLogger) With added in v1.2.4

func (al *AdaptiveLogger) With(fields ...Field) *AdaptiveLogger

With creates a child AdaptiveLogger that includes the given fields in every log record. The child shares the parent's ring buffers, output, and scaling machinery -- only the per-record context differs.

WHY accumulate-and-clone: multiLogger is lazy. If we only called singleLogger.With() we would lose the fields when the system scales up under contention and creates multiLogger. By storing contextFields in the AdaptiveLogger struct, ensureMultiLogger can replay them.

Thread Safety: safe to call from any goroutine. The returned child is independent and can be used concurrently with the parent.

type AdaptiveStats added in v1.2.0

type AdaptiveStats struct {
	Mode           ScalingMode
	ScaleUpCount   uint64
	ScaleDownCount uint64
	TotalWrites    uint64
	ActiveWriters  uint32
}

AdaptiveStats provides scaling insights
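A hedged sketch of the adaptive logger lifecycle (ScalerConfig fields are not documented on this page, so the zero value is used as an assumption):

```go
al, err := iris.NewAdaptiveLogger(iris.ScalerConfig{})
if err != nil {
	panic(err)
}
if err := al.Start(); err != nil {
	panic(err)
}
defer al.Close()

al.Info("adaptive logging", iris.Str("component", "example"))

// Inspect scaling behavior at runtime.
stats := al.Stats()
fmt.Printf("mode=%v scaleUps=%d writes=%d\n",
	stats.Mode, stats.ScaleUpCount, stats.TotalWrites)
```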

type Architecture

type Architecture int

Architecture represents the ring buffer architecture type

const (
	// SingleRing uses a single Zephyros ring for maximum single-thread performance
	// Best for: benchmarks, single-producer scenarios, maximum single-thread throughput
	// Performance: ~25ns/op single-thread, limited concurrency scaling
	SingleRing Architecture = iota

	// ThreadedRings uses ThreadedZephyros with multiple rings for multi-producer scaling
	// Best for: production, multi-producer scenarios, high concurrency
	// Performance: ~35ns/op per thread, excellent scaling (4x+ improvement with multiple producers)
	ThreadedRings
)

func ParseArchitecture

func ParseArchitecture(s string) (Architecture, error)

ParseArchitecture parses a string into an Architecture

func (Architecture) String

func (a Architecture) String() string

String returns the string representation of the architecture

type AtomicLevel

type AtomicLevel struct {
	// contains filtered or unexported fields
}

AtomicLevel provides atomic operations on Level values. This is useful for dynamically changing log levels in concurrent environments.

func NewAtomicLevel

func NewAtomicLevel(level Level) *AtomicLevel

NewAtomicLevel creates a new AtomicLevel with the given initial level.

func (*AtomicLevel) Enabled

func (al *AtomicLevel) Enabled(level Level) bool

Enabled checks if the given level is enabled atomically. This is a high-performance method for checking levels in hot paths.

func (*AtomicLevel) Level

func (al *AtomicLevel) Level() Level

Level returns the current level atomically.

func (*AtomicLevel) MarshalText

func (al *AtomicLevel) MarshalText() ([]byte, error)

MarshalText implements encoding.TextMarshaler for AtomicLevel.

func (*AtomicLevel) SetLevel

func (al *AtomicLevel) SetLevel(level Level)

SetLevel sets the level atomically.

func (*AtomicLevel) String

func (al *AtomicLevel) String() string

String returns the string representation of the current level.

func (*AtomicLevel) UnmarshalText

func (al *AtomicLevel) UnmarshalText(b []byte) error

UnmarshalText implements encoding.TextUnmarshaler for AtomicLevel.
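AtomicLevel is what makes lock-free runtime level changes possible. A sketch combining the hot-path check with the TextMarshaler round-trip, which is handy for admin/HTTP level endpoints (the level constants Info and Debug are assumed from the Level type used elsewhere on this page):

```go
lvl := iris.NewAtomicLevel(iris.Info)

// Hot path: cheap atomic check before building expensive fields.
if lvl.Enabled(iris.Debug) {
	// construct debug-only fields here...
}

// Ops path: flip the level at runtime, e.g. from an admin endpoint.
if err := lvl.UnmarshalText([]byte("debug")); err != nil {
	panic(err)
}
fmt.Println(lvl.String()) // now reports the debug level
```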

type Config

type Config struct {
	// Ring buffer configuration (power-of-two recommended for Capacity)
	// Capacity determines the maximum number of log entries that can be buffered
	// before blocking or dropping occurs. Larger values improve throughput but
	// increase memory usage.
	Capacity int64

	// BatchSize controls how many log entries are processed together.
	// Higher values improve throughput but may increase latency.
	// Optimal values are typically 8-64 depending on workload.
	BatchSize int64

	// Architecture determines the ring buffer architecture type
	// SingleRing: Maximum single-thread performance (~25ns/op) - best for benchmarks
	// ThreadedRings: Multi-producer scaling (~35ns/op per thread) - best for production
	// Default: SingleRing for benchmark compatibility
	Architecture Architecture

	// NumRings specifies the number of rings for ThreadedRings architecture
	// Only used when Architecture = ThreadedRings
	// Higher values provide better parallelism but use more memory
	// Default: 4 (optimal for most multi-core systems)
	NumRings int

	// BackpressurePolicy determines the behavior when the ring buffer is full
	// DropOnFull: Drops new messages for maximum performance (default)
	// BlockOnFull: Blocks caller until space is available (guaranteed delivery)
	BackpressurePolicy zephyroslite.BackpressurePolicy

	// IdleStrategy controls CPU usage when no log records are being processed
	// Different strategies provide various trade-offs between latency and CPU usage:
	// - SpinningIdleStrategy: Ultra-low latency, ~100% CPU usage
	// - SleepingIdleStrategy: Balanced CPU/latency, ~1-10% CPU usage
	// - YieldingIdleStrategy: Moderate reduction, ~10-50% CPU usage
	// - ChannelIdleStrategy: Minimal CPU usage, ~microsecond latency
	// - ProgressiveIdleStrategy: Adaptive strategy for variable workloads (default)
	IdleStrategy zephyroslite.IdleStrategy

	// Output and formatting configuration
	// Output specifies where log entries are written. Must implement WriteSyncer
	// for proper synchronization guarantees.
	Output WriteSyncer

	// Encoder determines the output format (JSON, Console, etc.)
	// The encoder converts log records to their final byte representation
	Encoder Encoder

	// Level sets the minimum logging level. Messages below this level
	// are filtered out early for maximum performance.
	Level Level // default: Info

	// TimeFn allows custom time source for timestamps.
	// Default: time.Now for real-time logging
	// Can be overridden for testing or performance optimization
	TimeFn func() time.Time

	// Optional performance tuning
	// Sampler controls log sampling for high-volume scenarios
	// Can be nil to disable sampling
	Sampler Sampler

	// Name provides a human-readable identifier for this logger instance
	// Useful for debugging and metrics collection
	Name string

	// ErrorHandler is called when the logger encounters an internal error
	// (encode failure, write failure). If nil, errors are written to stderr.
	//
	// WHY not global: a global error handler is a data race waiting to happen
	// (CWE-362). Each Logger owns its handler, set at construction time,
	// immutable after. Tests can inject a handler that captures errors
	// without affecting other loggers.
	ErrorHandler ErrorHandler

	// OnDrop is called from the consumer goroutine when the cumulative
	// drop count exceeds the last reported value. The argument is the
	// total number of drops since logger creation (monotonically increasing).
	//
	// WHY consumer-side only: drops happen in the producer fast path where
	// any callback would add latency. The consumer already processes records
	// sequentially, so a periodic check of the atomic counter is essentially
	// free (~1ns Load). Detection latency equals one batch interval.
	//
	// Forensic use case (CWE-778 — Insufficient Logging): a burst of drops
	// can signal log-flooding attacks attempting to mask malicious activity.
	// Wire this callback to a tamper-evident audit trail (e.g. BlackBox) so
	// that dropped-log events are themselves unforgeable.
	OnDrop func(totalDropped int64)
}

Config represents the core configuration for an iris logger instance. This structure centralizes all logging parameters with intelligent defaults and performance optimizations. All fields are designed for zero-copy operations and minimal memory allocation.

Performance considerations: - Capacity should be a power-of-two for optimal ring buffer performance - BatchSize affects throughput vs latency trade-offs - TimeFn allows for custom time sources (useful for testing and optimization)

Thread-safety: Config structs are immutable after logger creation

func (*Config) Clone

func (c *Config) Clone() *Config

Clone creates a deep copy of the configuration. This is useful for creating derived configurations without affecting the original.

func (*Config) GetStats

func (c *Config) GetStats() *stats

GetStats creates a new stats instance for tracking logger metrics. This factory function ensures proper initialization of all atomic counters.

func (*Config) Validate

func (c *Config) Validate() error

Validate checks the configuration for common errors and returns an error if the configuration is invalid. This helps catch configuration issues early before logger creation.

Performance: Fast validation with early returns for common cases

type ConsoleEncoder

type ConsoleEncoder struct {
	// contains filtered or unexported fields
}

ConsoleEncoder writes compact, optionally colorized log lines for interactive terminal sessions. It is not intended for production log aggregation — use JSONEncoder for machine-readable output.

Output format:

15:04:05.000  INFO   mts  daemon starting  key=value ...

Colors are emitted only when os.Stdout is a real TTY at construction time. When stdout is redirected to a file or pipe, output is plain text with no ANSI escape sequences.

Security: all user-controlled content (message, field keys, string values) is sanitized to strip control characters including ESC (0x1B), preventing ANSI injection attacks (CWE-116) that could manipulate the terminal display.

func NewConsoleEncoder

func NewConsoleEncoder() *ConsoleEncoder

NewConsoleEncoder creates a ConsoleEncoder, enabling colors only when os.Stdout is an interactive terminal.

func NewConsoleEncoderWithColorize added in v1.2.6

func NewConsoleEncoderWithColorize(colorize bool) *ConsoleEncoder

NewConsoleEncoderWithColorize creates an encoder with explicit color control. WHY: isTTY(os.Stdout) returns false in test environments (no real TTY); this constructor lets tests exercise both the color and plain-text paths.

func (*ConsoleEncoder) Encode

func (e *ConsoleEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode writes one log line to buf.

type ContextExtractor

type ContextExtractor struct {
	// Keys maps context keys to field names in log output
	Keys map[ContextKey]string

	// MaxDepth limits how deep to search in context chain (default: 10)
	MaxDepth int
}

ContextExtractor defines which context keys should be extracted and logged. This prevents the performance overhead of scanning all context values.

type ContextKey

type ContextKey string

ContextKey represents a key type for context values that should be logged.

const (
	RequestIDKey ContextKey = "request_id"
	TraceIDKey   ContextKey = "trace_id"
	SpanIDKey    ContextKey = "span_id"
	UserIDKey    ContextKey = "user_id"
	SessionIDKey ContextKey = "session_id"
)

Common context keys for standardized logging

type ContextLogger

type ContextLogger struct {
	// contains filtered or unexported fields
}

ContextLogger wraps a Logger with pre-extracted context fields. This avoids context.Value() calls in the hot logging path.

func (*ContextLogger) Debug

func (cl *ContextLogger) Debug(msg string, fields ...Field)

Debug logs a message at debug level with context fields

func (*ContextLogger) Error

func (cl *ContextLogger) Error(msg string, fields ...Field)

Error logs a message at error level with context fields

func (*ContextLogger) Fatal

func (cl *ContextLogger) Fatal(msg string, fields ...Field)

Fatal logs a message at fatal level with context fields and exits

func (*ContextLogger) Info

func (cl *ContextLogger) Info(msg string, fields ...Field)

Info logs a message at info level with context fields

func (*ContextLogger) Warn

func (cl *ContextLogger) Warn(msg string, fields ...Field)

Warn logs a message at warn level with context fields

func (*ContextLogger) With

func (cl *ContextLogger) With(fields ...Field) *ContextLogger

With creates a new ContextLogger with additional fields. This preserves both context fields and manually added fields.

func (*ContextLogger) WithAdditionalContext

func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, opts ...ContextOption) *ContextLogger

WithAdditionalContext extracts additional context values without losing existing ones.

type ContextOption added in v1.2.0

type ContextOption func(*contextOptions)

ContextOption configures context extraction behavior

func WithExtractor added in v1.2.0

func WithExtractor(extractor *ContextExtractor) ContextOption

WithExtractor uses a custom ContextExtractor for field extraction. Use this when you need full control over which keys are extracted and how they map to field names.

func WithKey added in v1.2.0

func WithKey(key ContextKey, fieldName string) ContextOption

WithKey extracts a single key with a custom field name. Optimized for the common case of extracting just one context value.

func WithKeys added in v1.2.0

func WithKeys(keys ...ContextKey) ContextOption

WithKeys extracts only the specified keys using their default field names. This is more efficient than using the full DefaultContextExtractor when you only need specific fields.

type Depth

type Depth int

Depth specifies how deep of a stack trace should be captured

const (
	// FirstFrame captures only the first frame (caller info)
	FirstFrame Depth = iota
	// FullStack captures the entire call stack
	FullStack
)

type Encoder

type Encoder interface {
	Encode(rec *Record, now time.Time, buf *bytes.Buffer)
}

Encoder is the abstract encoding interface (it also allows for future binary encoders).

type ErrorHandler

type ErrorHandler func(err *errors.Error)

ErrorHandler represents a function that handles errors within the logging system. WHY a type alias: this type is used in Config so callers can inject custom error handling at construction time, eliminating the need for mutable global state.

type Field

type Field struct {
	// K is the field key/name
	K string
	// T indicates the type of data stored in this field
	T kind
	// I64 stores signed integers, bools (as 0/1), durations, and timestamps
	I64 int64
	// U64 stores unsigned integers
	U64 uint64
	// F64 stores floating-point numbers
	F64 float64
	// Str stores string values
	Str string
	// B stores byte slices
	B []byte
	// Obj stores arbitrary objects (errors, stringers, etc.)
	Obj interface{}
}

Field represents a key-value pair with type information for structured logging. It uses a union-like approach to minimize memory allocation and maximize performance. The T field indicates which of the value fields (I64, U64, F64, Str, B, Obj) contains the actual data.

func Binary

func Binary(k string, v []byte) Field

Binary creates a byte slice field (alias for Bytes).

func Bool

func Bool(k string, v bool) Field

Bool creates a boolean field. Internally stored as int64 (1 for true, 0 for false) for efficiency.

func Bytes

func Bytes(k string, v []byte) Field

Bytes creates a byte slice field. Useful for binary data, encoded strings, or raw bytes.

func Dur

func Dur(k string, v time.Duration) Field

Dur creates a duration field from time.Duration. Stored as int64 nanoseconds for precision and efficiency.

func Err

func Err(err error) Field

Err creates an error field with key "error". If err is nil, returns a field with empty string (compatible but not elided).

func ErrorField

func ErrorField(err error) Field

ErrorField creates an error field for logging errors. Equivalent to NamedErr("error", err) but uses the proper error type for potential optimization.

func Errors

func Errors(k string, errs []error) Field

Errors creates a field for multiple errors (like Zap's ErrorsField).

func Float32

func Float32(k string, v float32) Field

Float32 creates a field from a float32 value.

func Float64

func Float64(k string, v float64) Field

Float64 creates a 64-bit floating-point field. Suitable for decimal numbers and scientific notation.

func Int

func Int(k string, v int) Field

Int creates a signed integer field from an int value. The int is converted to int64 for consistent storage.

func Int8

func Int8(k string, v int8) Field

Int8 creates a field from an int8 value.

func Int16

func Int16(k string, v int16) Field

Int16 creates a field from an int16 value.

func Int32

func Int32(k string, v int32) Field

Int32 creates a field from an int32 value.

func Int64

func Int64(k string, v int64) Field

Int64 creates a signed 64-bit integer field. Use this for large integers or when you specifically need int64.

func NamedErr

func NamedErr(k string, err error) Field

NamedErr creates an error field with a custom key. If err is nil, returns a field with empty string (compatible but not elided).

func NamedError

func NamedError(k string, err error) Field

NamedError creates an error field with a custom key using proper error type.

func Object

func Object(k string, val interface{}) Field

Object creates an object field for arbitrary data.

func Secret

func Secret(k, v string) Field

Secret creates a field for sensitive data that will be automatically redacted. The actual value is stored but will appear as "[REDACTED]" in log output. Use this for passwords, API keys, tokens, personal data, or any sensitive information.

Example:

logger.Info("User login", iris.Secret("password", userPassword))
// Output: {"level":"info","msg":"User login","password":"[REDACTED]"}

Security: This prevents accidental exposure of sensitive data in logs while maintaining the field structure for debugging purposes.

func Str

func Str(k, v string) Field

Str creates a string field for logging. This is one of the most commonly used field types.

func String

func String(k, v string) Field

String creates a string field (alias for Str for consistency with Go naming).

func Stringer

func Stringer(k string, val interface{ String() string }) Field

Stringer creates a stringer field for objects implementing fmt.Stringer.

func Time

func Time(k string, v time.Time) Field

Time creates a timestamp field from time.Time (alias for TimeField for consistency).

func TimeField

func TimeField(k string, v time.Time) Field

TimeField creates a timestamp field from time.Time. Stored as Unix nanoseconds for high precision and compact representation.

func Uint

func Uint(k string, v uint) Field

Uint creates a field from a uint value.

func Uint8

func Uint8(k string, v uint8) Field

Uint8 creates a field from a uint8 value.

func Uint16

func Uint16(k string, v uint16) Field

Uint16 creates a field from a uint16 value.

func Uint32

func Uint32(k string, v uint32) Field

Uint32 creates a field from a uint32 value.

func Uint64

func Uint64(k string, v uint64) Field

Uint64 creates an unsigned 64-bit integer field. Use this for non-negative values that may exceed int64 range.

func (Field) BoolValue

func (f Field) BoolValue() bool

BoolValue returns the boolean value if the field is a bool, false otherwise.

func (Field) BytesValue

func (f Field) BytesValue() []byte

BytesValue returns the byte slice value if the field is bytes, nil otherwise.

func (Field) DurationValue

func (f Field) DurationValue() time.Duration

DurationValue returns the time.Duration value if the field is a duration, 0 otherwise.

func (Field) FloatValue

func (f Field) FloatValue() float64

FloatValue returns the float64 value if the field is a float, 0.0 otherwise.

func (Field) IntValue

func (f Field) IntValue() int64

IntValue returns the int64 value if the field is an integer, 0 otherwise.

func (Field) IsBool

func (f Field) IsBool() bool

IsBool returns true if the field contains boolean data.

func (Field) IsBytes

func (f Field) IsBytes() bool

IsBytes returns true if the field contains byte slice data.

func (Field) IsDuration

func (f Field) IsDuration() bool

IsDuration returns true if the field contains duration data.

func (Field) IsFloat

func (f Field) IsFloat() bool

IsFloat returns true if the field contains floating-point data.

func (Field) IsInt

func (f Field) IsInt() bool

IsInt returns true if the field contains integer data.

func (Field) IsString

func (f Field) IsString() bool

IsString returns true if the field contains string data.

func (Field) IsTime

func (f Field) IsTime() bool

IsTime returns true if the field contains timestamp data.

func (Field) IsUint

func (f Field) IsUint() bool

IsUint returns true if the field contains unsigned integer data.

func (Field) Key

func (f Field) Key() string

Key returns the field's key name.

func (Field) StringValue

func (f Field) StringValue() string

StringValue returns the string value if the field is a string, empty string otherwise.

func (Field) TimeValue

func (f Field) TimeValue() time.Time

TimeValue returns the time.Time value if the field is a timestamp, zero time otherwise.

func (Field) Type

func (f Field) Type() kind

Type returns the kind of data stored in this field.

func (Field) UintValue

func (f Field) UintValue() uint64

UintValue returns the uint64 value if the field is an unsigned integer, 0 otherwise.

type Hook

type Hook func(rec *Record)

Hook represents a function executed in the consumer thread after log record processing.

Hooks are executed in the consumer thread to avoid contention with producer threads. This design ensures maximum performance for logging operations while still allowing powerful post-processing capabilities.

Hook functions receive the fully populated Record after encoding but before the buffer is returned to the pool. This allows for:

  • Metrics collection
  • Log forwarding to external systems
  • Custom processing based on log content
  • Development-time debugging

Performance Notes:

  • Executed in single consumer thread (no locks needed)
  • Called after encoding is complete
  • Should avoid blocking operations to maintain throughput

Thread Safety: Hooks are called from single consumer thread only

type IdleStrategy

type IdleStrategy = zephyroslite.IdleStrategy

IdleStrategy defines the interface for consumer idle behavior. This type alias exposes the internal interface for configuration purposes.

func NewChannelIdleStrategy

func NewChannelIdleStrategy(timeout time.Duration) IdleStrategy

NewChannelIdleStrategy creates an efficient blocking wait strategy. This strategy puts the consumer goroutine into an efficient wait state using Go channels, providing near-zero CPU usage when idle.

Parameters:

  • timeout: Maximum time to wait before checking for shutdown (0 = no timeout)

Best for: Minimum CPU usage with acceptable latency in low-throughput scenarios
CPU Usage: Near 0% when idle
Latency: ~microseconds (channel wake-up time)

Note: This strategy works best with lower throughput workloads where the overhead of channel operations is acceptable.

Examples:

// No timeout - maximum efficiency
NewChannelIdleStrategy(0)

// With timeout for responsive shutdown
NewChannelIdleStrategy(100*time.Millisecond)

func NewProgressiveIdleStrategy

func NewProgressiveIdleStrategy() IdleStrategy

NewProgressiveIdleStrategy creates an adaptive idle strategy. This strategy automatically adjusts its behavior based on work patterns, starting with spinning for ultra-low latency and progressively reducing CPU usage as idle time increases.

This is the default strategy, providing good performance for most workloads without requiring manual tuning.

Best for: Variable workload patterns where both low latency and low CPU usage are important
CPU Usage: Adaptive - starts high, reduces over time when idle
Latency: Starts at minimum, increases gradually when idle

Behavior:

  • Hot spin for first 1000 iterations (minimum latency)
  • Occasional yielding up to 10000 iterations
  • Progressive sleep with exponential backoff
  • Resets to hot spin when work is found

Example:

config := &Config{
    IdleStrategy: NewProgressiveIdleStrategy(),
    // ... other config
}

func NewSleepingIdleStrategy

func NewSleepingIdleStrategy(sleepDuration time.Duration, maxSpins int) IdleStrategy

NewSleepingIdleStrategy creates a CPU-efficient idle strategy with controlled latency. This strategy reduces CPU usage by sleeping when no work is available, with optional initial spinning for hybrid behavior.

Parameters:

  • sleepDuration: How long to sleep when no work is found (e.g., time.Millisecond)
  • maxSpins: Number of spin iterations before sleeping (0 = sleep immediately)

Best for: Balanced CPU usage and latency in production environments
CPU Usage: ~1-10% depending on sleep duration and spin count
Latency: ~1-10ms depending on sleep duration

Examples:

// Low CPU usage, higher latency
NewSleepingIdleStrategy(5*time.Millisecond, 0)

// Hybrid: spin briefly then sleep
NewSleepingIdleStrategy(time.Millisecond, 1000)

func NewSpinningIdleStrategy

func NewSpinningIdleStrategy() IdleStrategy

NewSpinningIdleStrategy creates an ultra-low latency idle strategy. This strategy provides the minimum possible latency by continuously checking for work without ever yielding the CPU.

Best for: Ultra-low latency requirements where CPU consumption is not a concern
CPU Usage: ~100% of one core when idle
Latency: Minimum possible (~nanoseconds)

Example:

config := &Config{
    IdleStrategy: NewSpinningIdleStrategy(),
    // ... other config
}

func NewYieldingIdleStrategy

func NewYieldingIdleStrategy(maxSpins int) IdleStrategy

NewYieldingIdleStrategy creates a moderate CPU reduction strategy. This strategy spins for a configurable number of iterations before yielding to the Go scheduler, providing a middle ground between spinning and sleeping approaches.

Parameters:

  • maxSpins: Number of spins before yielding to scheduler

Best for: Moderate CPU reduction while maintaining reasonable latency
CPU Usage: ~10-50% depending on max spins configuration
Latency: ~microseconds to low milliseconds

Examples:

// More aggressive yielding (lower CPU, higher latency)
NewYieldingIdleStrategy(100)

// Conservative yielding (higher CPU, lower latency)
NewYieldingIdleStrategy(10000)

type JSONEncoder

type JSONEncoder struct {
	// TimeKey specifies the JSON key for timestamps (default: "ts")
	TimeKey string

	// LevelKey specifies the JSON key for log levels (default: "level")
	LevelKey string

	// MsgKey specifies the JSON key for log messages (default: "msg")
	MsgKey string

	// RFC3339 controls timestamp format:
	//   true:  RFC3339 string format (default, human-readable)
	//   false: Unix nanoseconds integer (compact, faster)
	RFC3339 bool
}

JSONEncoder implements NDJSON (newline-delimited JSON) encoding with zero reflection.

The encoder produces one JSON object per log record, separated by newlines. This format is ideal for log processing systems and streaming applications.

Performance Features:

  • Zero reflection overhead using pre-compiled encoding paths
  • Reusable buffer allocation for minimal GC pressure
  • Optimized time formatting with caching
  • Direct byte buffer writing without intermediate strings

Output Format:

{"ts":"2025-09-06T14:30:45.123Z","level":"info","msg":"User action","field":"value"}

Use Cases:

  • Log aggregation systems (ELK stack, Splunk)
  • Structured logging for APIs and microservices
  • Machine-readable logs for automated processing
  • Integration with JSON-based monitoring tools

func NewJSONEncoder

func NewJSONEncoder() *JSONEncoder

NewJSONEncoder creates a new JSON encoder with standard defaults.

Default configuration:

  • TimeKey: "ts"
  • LevelKey: "level"
  • MsgKey: "msg"
  • RFC3339: true (human-readable timestamps)

The defaults follow common logging conventions and work well with most log processing systems.

Returns:

  • *JSONEncoder: Configured JSON encoder instance

func (*JSONEncoder) Encode

func (e *JSONEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode encodes a log record to JSON format

type Level

type Level int32

Level represents the severity level of a log message. Levels are ordered from least to most severe: Debug < Info < Warn < Error < DPanic < Panic < Fatal

Performance Notes:

  • Level is implemented as int32 for fast comparisons
  • Atomic operations are used for thread-safe level changes
  • Zero allocation for level checks via inlined comparisons

const (
	Debug  Level = iota - 1 // Debug information, typically disabled in production
	Info                    // General information messages
	Warn                    // Warning messages for potentially harmful situations
	Error                   // Error messages for failure conditions
	DPanic                  // Development panic - panics in development, errors in production
	Panic                   // Panic level - logs message then panics
	Fatal                   // Fatal level - logs message then calls os.Exit(1)

	// StacktraceDisabled is a sentinel value used to disable stack trace collection
	StacktraceDisabled Level = -999
)

Log levels in order of increasing severity

func AllLevels

func AllLevels() []Level

AllLevels returns a slice of all valid levels in ascending order. This is useful for documentation, validation, and testing.

func ParseLevel

func ParseLevel(s string) (Level, error)

ParseLevel parses a string representation of a level and returns the corresponding Level. It handles common aliases and is case-insensitive. Returns Info level for empty strings as a sensible default.

func (Level) Enabled

func (l Level) Enabled(min Level) bool

Enabled determines if this level is enabled given a minimum level. This is a critical hot path function optimized for maximum performance.

func (Level) IsDPanic

func (l Level) IsDPanic() bool

IsDPanic returns true if the level is DPanic. Convenience method for checking development panic level.

func (Level) IsDebug

func (l Level) IsDebug() bool

IsDebug returns true if the level is Debug. Convenience method for frequently checked debug level.

func (Level) IsError

func (l Level) IsError() bool

IsError returns true if the level is Error. Convenience method for frequently checked error level.

func (Level) IsFatal

func (l Level) IsFatal() bool

IsFatal returns true if the level is Fatal. Convenience method for checking fatal level.

func (Level) IsInfo

func (l Level) IsInfo() bool

IsInfo returns true if the level is Info. Convenience method for frequently checked info level.

func (Level) IsPanic

func (l Level) IsPanic() bool

IsPanic returns true if the level is Panic. Convenience method for checking panic level.

func (Level) IsWarn

func (l Level) IsWarn() bool

IsWarn returns true if the level is Warn. Convenience method for frequently checked warn level.

func (Level) MarshalText

func (l Level) MarshalText() ([]byte, error)

MarshalText implements encoding.TextMarshaler for JSON/XML serialization. This method is optimized to avoid allocations in the common case.

func (Level) String

func (l Level) String() string

String returns the string representation of the level. This is used for human-readable output and serialization.

func (*Level) UnmarshalText

func (l *Level) UnmarshalText(b []byte) error

UnmarshalText implements encoding.TextUnmarshaler for JSON/XML deserialization. This method provides detailed error information for debugging.

type LevelAwareSampler added in v1.2.1

type LevelAwareSampler struct {
	// contains filtered or unexported fields
}

LevelAwareSampler applies independent rate limits per log level. Levels without a configured limit pass unconditionally, which ensures Error/Fatal are never silenced even under flood attacks.

func DefaultLevelAwareSampler added in v1.2.1

func DefaultLevelAwareSampler() *LevelAwareSampler

DefaultLevelAwareSampler returns a production-ready configuration:

Error/Fatal: unlimited (no entry in map — always passes)
Warn:        100/s burst, 100/s sustained
Info:        50/s burst, 50/s sustained
Debug:       10/s burst, 10/s sustained

WHY: these defaults are tuned for a daemon that logs to disk. High-severity levels must never be silenced; low-severity levels are throttled to prevent disk saturation under load.

func NewLevelAwareSampler added in v1.2.1

func NewLevelAwareSampler(limits map[Level]SamplerLimit) *LevelAwareSampler

NewLevelAwareSampler creates a sampler with per-level rate limits. Levels not present in the map are never sampled (always allowed). WHY: This is the firewall against log flood attacks (CWE-799). An attacker flooding Debug cannot drown Error/Fatal signals because each level draws from an independent token bucket.

func (*LevelAwareSampler) Allow added in v1.2.1

func (s *LevelAwareSampler) Allow(level Level) bool

Allow implements Sampler. If no limit is configured for the given level, the entry passes unconditionally. Otherwise it delegates to the per-level TokenBucketSampler.

type LevelFlag

type LevelFlag struct {
	// contains filtered or unexported fields
}

LevelFlag is a command-line flag implementation for Level. It implements the flag.Value interface for easy CLI integration.

func NewLevelFlag

func NewLevelFlag(level *Level) *LevelFlag

NewLevelFlag creates a new LevelFlag pointing to the given Level.

func (*LevelFlag) Set

func (lf *LevelFlag) Set(s string) error

Set parses and sets the level from a string. This method is called by the flag package when parsing command-line arguments.

func (*LevelFlag) String

func (lf *LevelFlag) String() string

String returns the string representation of the level.

func (*LevelFlag) Type

func (lf *LevelFlag) Type() string

Type returns the type description for help text.

type Logger

type Logger struct {
	// contains filtered or unexported fields
}

Logger provides ultra-high performance logging with zero-allocation structured fields.

The Logger uses a lock-free MPSC (Multiple Producer, Single Consumer) ring buffer for maximum throughput. Multiple goroutines can log concurrently while a single background goroutine processes and outputs the log records.

Thread Safety:

  • All logging methods (Debug, Info, Warn, Error) are thread-safe
  • Multiple goroutines can log concurrently without locks
  • Configuration changes (SetLevel) are atomic and thread-safe

Performance Features:

  • Zero allocations for structured logging with pre-allocated fields
  • Lock-free atomic operations for level checking
  • Intelligent sampling to reduce log volume
  • Efficient buffer pooling to minimize GC pressure
  • Adaptive batching based on log volume
  • Context inheritance with With() for repeated fields

Lifecycle:

  • Create with New() - configures but doesn't start processing
  • Call Start() to begin background processing
  • Use logging methods (Debug, Info, etc.) for actual logging
  • Call Close() for graceful shutdown with guaranteed log processing

func New

func New(cfg Config, opts ...Option) (*Logger, error)

New creates a new high-performance logger with the specified configuration and options.

The logger is created but not started - call Start() to begin processing. This separation allows for configuration verification and testing setup before actual log processing begins.

Parameters:

  • cfg: Logger configuration with output, encoding, and performance settings
  • opts: Optional configuration functions for advanced features

The configuration is validated and enhanced with intelligent defaults:

  • Missing TimeFn defaults to time.Now
  • Zero BatchSize gets auto-sized based on Capacity
  • Nil Output or Encoder will cause an error

Returns:

  • *Logger: Configured logger ready for Start()
  • error: Configuration validation error

Example:

logger, err := iris.New(iris.Config{
    Level:    iris.Info,
    Output:   os.Stdout,
    Encoder:  iris.NewJSONEncoder(),
    Capacity: 8192,
}, iris.WithCaller(), iris.Development())
if err != nil {
    return err
}
logger.Start()

func (*Logger) AtomicLevel

func (l *Logger) AtomicLevel() *AtomicLevel

AtomicLevel returns a pointer to the logger's atomic level.

This method provides access to the underlying atomic level structure, which can be used with dynamic configuration watchers like Argus to enable runtime level changes without logger restarts.

Returns:

  • *AtomicLevel: Pointer to the atomic level instance

Example usage with dynamic config watching:

watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
    log.Printf("Dynamic level disabled: %v", err)
} else {
    defer watcher.Stop()
    log.Println("✅ Dynamic level changes enabled!")
}

Thread Safety: The returned AtomicLevel is thread-safe

func (*Logger) Close

func (l *Logger) Close() error

Close gracefully shuts down the logger.

This method stops the background processing goroutine and ensures all buffered log records are processed before shutdown. The shutdown is deterministic - Close() will not return until all pending logs have been written to the output.

After Close() is called:

  • All subsequent logging operations will fail silently
  • The ring buffer becomes unusable
  • All buffered records are guaranteed to be processed

The method is idempotent - calling Close() multiple times is safe.

Close flushes any pending log data before returning; it should be called once the logger is no longer needed.

Performance Characteristics:

  • Blocks until all pending records are processed
  • Automatically syncs output before closing
  • Cannot be used after Close() is called

Thread Safety: Safe to call from multiple goroutines

func (*Logger) DPanic

func (l *Logger) DPanic(msg string, fields ...Field) bool

DPanic logs a message at a special development panic level.

DPanic (Development Panic) logs at Error level but panics if the logger is in development mode. This allows for aggressive error detection during development while maintaining stability in production.

Behavior:

  • Development mode: Logs and then panics
  • Production mode: Logs only (no panic)

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Performance: Same as Error level logging, with a conditional panic. Zap compatibility: DPanic/Panic/Fatal are provided as dedicated levels.

func (*Logger) Debug

func (l *Logger) Debug(msg string, fields ...Field) bool

Debug logs a message at Debug level with structured fields.

Debug level is intended for detailed diagnostic information useful during development and troubleshooting. These messages are typically disabled in production environments.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Error

func (l *Logger) Error(msg string, fields ...Field) bool

Error logs a message at Error level with structured fields.

Error level is intended for error events that allow the application to continue running. These messages indicate failures that need immediate attention but don't crash the application.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Fatal

func (l *Logger) Fatal(msg string, fields ...Field)

Fatal logs a message at fatal level and exits the program

func (*Logger) Info

func (l *Logger) Info(msg string, fields ...Field) bool

Info logs a message at Info level with structured fields.

Info level is intended for general information about program execution. These messages provide insight into application flow and important events.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Zero allocations for simple messages, optimized fast path for messages with fields

func (*Logger) InfoFields

func (l *Logger) InfoFields(msg string, fields ...Field) bool

InfoFields logs a message at Info level with structured fields.

This method supports structured logging with key-value pairs for detailed context. Use the simpler Info() method for messages without fields to achieve zero allocations.

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) Level

func (l *Logger) Level() Level

Level atomically reads the current minimum logging level.

Returns the current minimum level threshold used for filtering log messages. Messages below this level are discarded early for maximum performance.

Returns:

  • Level: Current minimum logging level

Performance Notes:

  • Atomic load operation
  • Zero allocations
  • Sub-nanosecond read performance

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Named

func (l *Logger) Named(name string) *Logger

Named creates a new logger with the specified name.

Named loggers are useful for organizing logs by component, module, or functionality. The name typically appears in log output to help with filtering and analysis.

Parameters:

  • name: Name to assign to the new logger instance

Returns:

  • *Logger: New logger instance with the specified name

Example:

dbLogger := logger.Named("database")
apiLogger := logger.Named("api")
dbLogger.Info("Connection established") // Includes "database" context

Performance Notes:

  • String assignment only (minimal overhead)
  • Name is included in log output by encoder
  • Zero allocations during normal operation

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Panic

func (l *Logger) Panic(msg string, fields ...Field) bool

Panic logs a message at Panic level and then panics.

func (*Logger) SetLevel

func (l *Logger) SetLevel(min Level)

SetLevel atomically changes the minimum logging level.

This method allows dynamic level adjustment during runtime without restarting the logger. Level changes take effect immediately for subsequent log operations.

Parameters:

  • min: New minimum level (Debug, Info, Warn, Error)

Performance Notes:

  • Atomic operation with no locks or allocations
  • Sub-nanosecond level changes
  • Thread-safe concurrent access

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Start

func (l *Logger) Start()

Start begins background processing of log records.

This method starts the consumer goroutine that processes log records from the ring buffer and writes them to the configured output. The method is idempotent - calling Start() multiple times is safe and has no effect after the first call.

The consumer goroutine will continue processing until Close() is called. All logging operations require Start() to be called first, otherwise log records will accumulate in the ring buffer without being processed.

Performance Notes:

  • Uses lock-free atomic operations for state management
  • Single consumer goroutine eliminates lock contention
  • Processing begins immediately after Start() returns

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Stats

func (l *Logger) Stats() map[string]int64

Stats returns comprehensive performance statistics for monitoring.

This method provides real-time metrics about logger performance, buffer utilization, and operational health. The statistics are collected atomically and can be safely called from multiple goroutines.

Returns:

  • map[string]int64: Performance metrics including:
  • Ring buffer statistics (capacity, utilization, etc.)
  • Dropped message count
  • Processing throughput metrics
  • Memory usage indicators

The returned map contains:

  • "dropped": Number of messages dropped due to ring buffer full
  • "writer_position": Current writer position in ring buffer
  • "reader_position": Current reader position in ring buffer
  • "buffer_size": Ring buffer capacity
  • "items_buffered": Number of items waiting to be processed
  • "utilization_percent": Buffer utilization percentage
  • Additional ring buffer specific statistics

Performance: Atomic reads with zero allocations for metric collection
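A monitoring loop might consume the Stats() map as sketched below. The keys ("dropped", "items_buffered", "buffer_size") come from the documentation above; the helper and its thresholds are illustrative:

```go
package main

import "fmt"

// healthCheck derives a utilization figure and a degraded flag from a
// Stats()-shaped map. Thresholds here are illustrative only.
func healthCheck(stats map[string]int64) (utilization int64, degraded bool) {
	if size := stats["buffer_size"]; size > 0 {
		utilization = stats["items_buffered"] * 100 / size
	}
	// Dropped entries or a nearly full ring indicate backpressure.
	degraded = stats["dropped"] > 0 || utilization >= 90
	return utilization, degraded
}

func main() {
	stats := map[string]int64{ // stand-in for logger.Stats()
		"dropped":        0,
		"items_buffered": 512,
		"buffer_size":    8192,
	}
	util, degraded := healthCheck(stats)
	fmt.Printf("utilization=%d%% degraded=%v\n", util, degraded)
}
```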

func (*Logger) Sync

func (l *Logger) Sync() error

Sync flushes any buffered log entries.

This method ensures that all buffered log entries are written to their destination. It's useful before program termination or when immediate log delivery is required.

Returns:

  • error: Any error encountered during synchronization

Performance Notes:

  • May block until all buffers are flushed
  • Should be called sparingly in hot paths
  • Automatically called during Close()

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Warn

func (l *Logger) Warn(msg string, fields ...Field) bool

Warn logs a message at Warn level with structured fields.

Warn level is intended for potentially harmful situations that don't prevent the application from continuing. These messages indicate conditions that should be investigated.

Parameters:

  • msg: Primary log message
  • fields: Structured key-value pairs (zero-allocation)

Returns:

  • bool: true if successfully logged, false if dropped or filtered

Performance: Optimized for zero allocations with pre-allocated field storage

func (*Logger) With

func (l *Logger) With(fields ...Field) *Logger

With creates a new logger with additional structured fields.

This method creates a new logger instance that automatically includes the specified fields in every log message. This is useful for adding context that applies to multiple log statements, such as request IDs, user IDs, or component names.

Parameters:

  • fields: Structured fields to include in all log messages

Returns:

  • *Logger: New logger instance with pre-populated fields

Implementation Note: The fields are stored in the logger and applied to each log record during the logging operation.

Example:

requestLogger := logger.With(
    iris.String("request_id", reqID),
    iris.String("user_id", userID),
)
requestLogger.Info("Processing request") // Includes request_id and user_id

Performance Notes:

  • Fields are stored once in logger instance
  • Applied during each log operation (small overhead)
  • Zero allocations for field storage in logger

Thread Safety: Safe to call from multiple goroutines

func (*Logger) WithContext

func (l *Logger) WithContext(ctx context.Context, opts ...ContextOption) *ContextLogger

WithContext creates a new ContextLogger with fields extracted from context. This is the unified way to use context integration - extract once, log many times with the same context.

Usage:

// Default extraction (all standard keys)
ctxLogger := logger.WithContext(ctx)

// Custom extractor
ctxLogger := logger.WithContext(ctx, WithExtractor(myExtractor))

// Specific keys only
ctxLogger := logger.WithContext(ctx, WithKeys(RequestIDKey, TraceIDKey))

// Single key with custom field name
ctxLogger := logger.WithContext(ctx, WithKey(RequestIDKey, "req_id"))

Performance: O(k) where k is number of configured keys, not context depth.

func (*Logger) WithOptions

func (l *Logger) WithOptions(opts ...Option) *Logger

WithOptions creates a new logger with the specified options applied.

This method clones the current logger and applies additional configuration options. The original logger is unchanged, ensuring immutable configuration and thread safety. The new logger shares the same ring buffer and output configuration but can have different caller, hook, and development settings.

Parameters:

  • opts: Option functions to apply to the new logger instance

Returns:

  • *Logger: New logger instance with applied options

Example:

devLogger := logger.WithOptions(
    iris.WithCaller(),
    iris.AddStacktrace(iris.Error),
    iris.Development(),
)

Performance Notes:

  • Clones logger configuration (minimal allocation)
  • Shares ring buffer and output resources
  • Options are applied once during creation

Thread Safety: Safe to call from multiple goroutines

func (*Logger) Write

func (l *Logger) Write(fill func(*Record)) bool

Write provides zero-allocation logging with a fill function.

This is the fastest logging method, allowing direct manipulation of a pre-allocated Record in the ring buffer. The fill function is called with a pointer to a Record that should be populated with log data.

Parameters:

  • fill: Function to populate the log record (zero allocations)

Returns:

  • bool: true if record was successfully queued, false if ring buffer full

Performance Features:

  • Zero heap allocations during normal operation
  • Direct record manipulation in ring buffer
  • Lock-free atomic operations
  • Fastest possible logging path

Example:

success := logger.Write(func(r *Record) {
    r.Level = iris.Error
    r.Msg = "Critical system error"
    r.AddField(iris.String("component", "database"))
})

Thread Safety: Safe to call from multiple goroutines

type Option

type Option func(*loggerOptions)

Option represents a function that modifies logger options during construction.

Options use the functional options pattern to provide a clean, extensible API for logger configuration. Each Option function modifies the options structure in place during logger creation or cloning.

Pattern Benefits:

  • Backward compatible API evolution
  • Clear, self-documenting configuration
  • Composable option sets
  • Type-safe configuration

Usage:

logger := logger.WithOptions(
    iris.WithCaller(),
    iris.AddStacktrace(iris.Error),
    iris.Development(),
)

func AddStacktrace

func AddStacktrace(min Level) Option

AddStacktrace enables stack trace capture for log levels at or above the specified minimum.

Stack traces provide detailed call stack information for debugging complex issues. They are automatically captured for severe log levels (typically Error and above) to aid in troubleshooting.

Parameters:

  • min: Minimum log level for stack trace capture (Debug, Info, Warn, Error)

Performance Impact:

  • Stack trace capture is expensive (runtime.Stack() call)
  • Only enabled for specified log levels to minimize overhead
  • Stack traces are captured in producer thread but processed in consumer

Returns:

  • Option: Configuration function to enable stack trace capture

Example:

// Capture stack traces for Error level and above
logger := logger.WithOptions(iris.AddStacktrace(iris.Error))
logger.Error("critical error") // Will include stack trace
logger.Warn("warning")         // No stack trace

func Development

func Development() Option

Development enables development-specific behaviors for enhanced debugging.

Development mode changes logger behavior to be more suitable for development and testing environments:

  • DPanic level causes panic() in addition to logging
  • Enhanced error reporting and validation
  • More verbose debugging information

This option should typically be disabled in production environments for optimal performance and stability.

Returns:

  • Option: Configuration function to enable development mode

Example:

logger := logger.WithOptions(iris.Development())
logger.DPanic("development panic") // Will panic in dev mode, log in production

func WithCaller

func WithCaller() Option

WithCaller enables caller information capture for log records.

When enabled, the logger will capture the file name, line number, and function name of the calling code for each log record. This information is added to the log output for debugging and troubleshooting.

Performance Impact:

  • Adds runtime.Caller() call per log operation
  • Minimal allocation for caller information
  • Skip level optimization reduces overhead

Returns:

  • Option: Configuration function to enable caller capture

Example:

logger := logger.WithOptions(iris.WithCaller())
logger.Info("message") // Will include caller info

func WithCallerSkip

func WithCallerSkip(n int) Option

WithCallerSkip sets the number of stack frames to skip for caller detection.

This option is useful when the logger is wrapped by helper functions and you want the caller information to point to the actual calling code rather than the wrapper function.

Parameters:

  • n: Number of stack frames to skip (negative values are treated as 0)

Common Skip Values:

  • 0: Direct caller of log method
  • 1: Skip one wrapper function
  • 2+: Skip multiple wrapper layers

Returns:

  • Option: Configuration function to set caller skip level

Example:

// Skip helper function to show actual caller
logger := logger.WithOptions(
    iris.WithCaller(),
    iris.WithCallerSkip(1),
)

func WithHook

func WithHook(h Hook) Option

WithHook adds a post-processing hook to the logger.

Hooks are functions executed in the consumer thread after log records are processed but before buffers are returned to the pool. This design ensures zero contention with producer threads while enabling powerful post-processing.

Hook Use Cases:

  • Metrics collection based on log content
  • Log forwarding to external systems
  • Custom alerting on specific log patterns
  • Development-time debugging and validation

Parameters:

  • h: Hook function to execute (nil hooks are ignored)

Performance Notes:

  • Hooks are executed sequentially in consumer thread
  • Should avoid blocking operations to maintain throughput
  • No allocation overhead in producer threads

Returns:

  • Option: Configuration function to add the hook

Example:

metricHook := func(rec *Record) {
    if rec.Level >= iris.Error {
        errorCounter.Inc()
    }
}
logger := logger.WithOptions(iris.WithHook(metricHook))

func WithSampler added in v1.1.0

func WithSampler(s Sampler) Option

WithSampler enables log sampling with the specified sampler.

Sampling is used to reduce log volume in high-throughput scenarios by selectively allowing only a subset of log messages to be processed. This is particularly useful for preventing log storms and managing system resources while maintaining visibility into application behavior.

Parameters:

  • s: Sampler implementation (nil disables sampling)

Common Use Cases:

  • Rate limiting in high-volume production systems
  • Preventing log storms during error conditions
  • Managing log storage costs
  • Maintaining system performance under load

Returns:

  • Option: Configuration function to set the sampler

Example:

// Create a token bucket sampler: 100 burst, 10/sec sustained rate
sampler := iris.NewTokenBucketSampler(100, 10, time.Second)
logger := logger.WithOptions(iris.WithSampler(sampler))

// High-volume logging will be automatically rate-limited
for i := 0; i < 1000; i++ {
    logger.Info("high volume message", iris.Int("id", i))
}

type OwnedWriter added in v1.2.0

type OwnedWriter interface {
	WriteOwned([]byte) (int, error)
}

OwnedWriter is an optional optimization interface for zero-copy writes. WHY: when the output destination supports ownership transfer (e.g. lethe), iris hands off the buffer instead of copying. This eliminates one allocation per log entry and reduces GC pressure under sustained throughput.

The Logger detects OwnedWriter once at construction time via type assertion (INV-410), not per-write. Most io.Writer implementations do not support ownership transfer, so the fast path is the standard Write call.

type ProcessorFunc

type ProcessorFunc func(record *Record)

ProcessorFunc defines the signature for record processing functions

This function is called for each log record that flows through the ring buffer. It should be efficient and avoid blocking operations to maintain high throughput.

Parameters:

  • record: The log record to process (guaranteed non-nil)

Performance Notes:

  • Called from the consumer thread only (single-threaded)
  • Should avoid allocations and blocking operations
  • Can safely access shared state (no concurrent access)

type Record

type Record struct {
	Level  Level  // Log level
	Msg    string // Log message
	Logger string // Logger name
	Caller string // Caller information (file:line)
	Stack  string // Stack trace
	// contains filtered or unexported fields
}

Record represents a log entry with optimized field storage

func NewRecord

func NewRecord(level Level, msg string) *Record

NewRecord creates a new Record with the specified level and message. Uses pre-allocated field storage to avoid heap allocations during logging.

func (*Record) AddField

func (r *Record) AddField(field Field) bool

AddField adds a structured field to this record. Returns false if the field array is full (32 fields max - optimal for performance).

func (*Record) FieldCount

func (r *Record) FieldCount() int

FieldCount returns the number of fields in this record.

func (*Record) GetField

func (r *Record) GetField(index int) Field

GetField returns the field at the specified index. Panics if index is out of bounds (for test simplicity).

func (*Record) Reset

func (r *Record) Reset()

Reset clears the record for reuse.
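The pre-allocated field storage behind AddField, FieldCount, and Reset can be sketched with a fixed 32-slot array: full means AddField reports a drop, and Reset just rewinds the count for reuse. Stand-in types, not Iris's internals:

```go
package main

import "fmt"

// Field is a stand-in for iris.Field.
type Field struct {
	Key, Value string
}

// record keeps fields in a fixed array so AddField never allocates.
type record struct {
	fields [32]Field // fixed capacity: 32 fields max
	n      int
}

func (r *record) AddField(f Field) bool {
	if r.n == len(r.fields) {
		return false // array full; caller learns the field was dropped
	}
	r.fields[r.n] = f
	r.n++
	return true
}

func (r *record) FieldCount() int { return r.n }

// Reset rewinds the count; slots are simply overwritten on reuse.
func (r *record) Reset() { r.n = 0 }

func main() {
	var r record
	for i := 0; i < 40; i++ {
		r.AddField(Field{Key: "k", Value: "v"})
	}
	fmt.Println(r.FieldCount()) // capped at 32
	r.Reset()
	fmt.Println(r.FieldCount()) // 0 after reuse
}
```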

type Ring

type Ring struct {
	// contains filtered or unexported fields
}

Ring provides ultra-high performance logging with embedded Zephyros Light

The Ring uses the embedded ZephyrosLight engine to provide optimal performance for logging operations while eliminating external dependencies and maintaining the core features needed for high-performance logging.

Embedded Zephyros Light Features:

  • Single ring architecture optimized for logging
  • ~15-20ns/op performance (vs 9ns commercial, 25ns previous)
  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Fixed batch processing (simplified vs adaptive)

Architecture Simplification:

  • SingleRing only (ThreadedRings removed - commercial feature)
  • Simplified configuration (fewer options, better defaults)
  • Embedded implementation (no external dependencies)

Performance Characteristics:

  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Fixed batching optimized for logging workloads
  • Simplified spinning strategy for low latency

func (*Ring) Close

func (r *Ring) Close()

Close gracefully shuts down the ring buffer

This method signals the consumer to stop processing and ensures all buffered records are processed before shutdown. It is safe to call multiple times and from multiple goroutines.

After Close() is called:

  • Write() will return false for all subsequent calls
  • Loop() will process all remaining records and then exit
  • The ring buffer becomes unusable

Shutdown Guarantees:

  • All buffered records are processed before shutdown
  • Multiple Close() calls are safe (idempotent)
  • Deterministic shutdown behavior for testing

func (*Ring) Flush

func (r *Ring) Flush() error

Flush ensures all pending writes are visible to the consumer

In the embedded ZephyrosLight architecture, this method ensures that all writes from producer threads are visible to the consumer thread. This is primarily useful for testing and ensuring deterministic behavior.

Note: In normal operation, flushing is automatic and this method exists primarily for API compatibility and testing scenarios.

func (*Ring) Loop

func (r *Ring) Loop()

Loop starts the record processing loop (CONSUMER THREAD ONLY)

This method should be called from exactly one goroutine to consume and process log records. The embedded ZephyrosLight implements an optimized spinning strategy for balanced performance and CPU usage.

The loop continues until Close() is called, after which it processes all remaining records before exiting.

Performance Features:

  • Fixed batching optimized for logging workloads
  • Simplified idle strategy to minimize CPU usage
  • Guaranteed processing of all records during shutdown

Warning: Only call this method from one goroutine per ring buffer. Multiple consumers will cause race conditions and data loss.

func (*Ring) ProcessBatch

func (r *Ring) ProcessBatch() int

ProcessBatch processes a single batch of records and returns the count

This method is useful for custom consumer implementations that need fine-grained control over processing timing. It processes up to batchSize records in a single call using the embedded ZephyrosLight engine.

Returns:

  • int: Number of records processed in this batch (0 if no records available)

Note: This is a lower-level method. Most applications should use Loop() which handles the complete consumer lifecycle automatically.

func (*Ring) Stats

func (r *Ring) Stats() map[string]int64

Stats returns detailed performance statistics for monitoring and debugging

The returned map contains real-time metrics about the embedded ZephyrosLight ring buffer's performance and current state. This is useful for monitoring, alerting, and performance optimization.

Returned Statistics:

  • "writer_position": Last claimed sequence number
  • "reader_position": Current reader position
  • "buffer_size": Total ring buffer capacity
  • "items_buffered": Number of records waiting to be processed
  • "items_processed": Total records processed
  • "items_dropped": Total records dropped due to full buffer
  • "closed": Ring buffer closed state (0=open, 1=closed)
  • "capacity": Configured ring capacity
  • "batch_size": Configured batch size
  • "utilization_percent": Buffer utilization percentage
  • "engine": "zephyros_light" (embedded engine identifier)

Returns:

  • map[string]int64: Real-time performance statistics

Example:

stats := ring.Stats()
fmt.Printf("Buffer utilization: %d%%\n", stats["utilization_percent"])
fmt.Printf("Items buffered: %d\n", stats["items_buffered"])

func (*Ring) Write

func (r *Ring) Write(fill func(*Record)) bool

Write adds a log record to the ring buffer using zero-allocation pattern

The fill function is called with a pointer to a pre-allocated Record in the embedded Zephyros Light ring buffer. This avoids any heap allocations during logging operations while providing excellent performance.

The function is thread-safe and can be called concurrently from multiple goroutines. The embedded ZephyrosLight uses atomic operations for lock-free performance.

Performance: Target ~15-20ns/op with embedded Zephyros Light engine

Parameters:

  • fill: Function to populate the log record (called with pre-allocated Record)

Returns:

  • bool: true if record was successfully written, false if ring is full or closed

Performance Notes:

  • Zero heap allocations during normal operation
  • Lock-free atomic operations for maximum throughput
  • Returns false instead of blocking when ring is full
  • Optimized for high-frequency logging scenarios

Example:

success := ring.Write(func(r *Record) {
    r.Level = iris.Error
    r.Msg = "Critical error occurred"
    r.AddField(iris.String("component", "database"))
})

type Sampler

type Sampler interface {
	// Allow determines if a log entry at the given level should be processed.
	// Returns true if the entry should be logged, false if it should be dropped.
	Allow(level Level) bool
}

Sampler defines the interface for log sampling strategies. Implementations control which log entries are allowed through to prevent overwhelming downstream systems.
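A custom Sampler only needs to implement Allow. The sketch below is a deliberately simple every-Nth sampler with a local stand-in for iris.Level; production samplers such as TokenBucketSampler are time-based, so this only illustrates the interface:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Level stands in for iris.Level so the sketch is self-contained.
type Level int8

// everyNth lets one entry through out of every n, regardless of level.
type everyNth struct {
	n   int64
	cnt atomic.Int64
}

// Allow satisfies the Sampler interface from the documentation.
// The atomic counter keeps it safe for concurrent producers.
func (s *everyNth) Allow(_ Level) bool {
	return s.cnt.Add(1)%s.n == 1
}

func main() {
	s := &everyNth{n: 10}
	allowed := 0
	for i := 0; i < 100; i++ {
		if s.Allow(0) {
			allowed++
		}
	}
	fmt.Println("allowed:", allowed) // 10 of 100
}
```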

type SamplerLimit added in v1.2.1

type SamplerLimit struct {
	Capacity int64         // Maximum tokens (burst capacity)
	Refill   int64         // Tokens added per refill period
	Every    time.Duration // Refill period duration
}

SamplerLimit defines the rate limit parameters for a single log level. Each level can have independent burst capacity and sustained rate.

type ScalerConfig added in v1.2.0

type ScalerConfig struct {
	// Threshold: concurrent goroutines to trigger scale-up
	GoroutineThreshold uint32
	// Cooldown before scaling back down
	ScaleDownCooldown time.Duration
	// Base config for logger creation
	BaseConfig Config
	// Options for logger creation
	Options []Option
}

ScalerConfig defines auto-scaling behavior

func DefaultScalerConfig added in v1.2.0

func DefaultScalerConfig(cfg Config, opts ...Option) ScalerConfig

DefaultScalerConfig returns production-ready defaults

type ScalingMode added in v1.2.0

type ScalingMode uint32

ScalingMode represents the current scaling mode

const (
	// SingleMode - ultra-fast single-producer (~25ns/op)
	SingleMode ScalingMode = iota
	// MultiMode - multi-producer optimized (~35ns/op)
	MultiMode
)

func (ScalingMode) String added in v1.2.0

func (m ScalingMode) String() string

type Stack

type Stack struct {
	// contains filtered or unexported fields
}

Stack represents a captured stack trace with program counters

func CaptureStack

func CaptureStack(skip int, depth Depth) *Stack

CaptureStack captures a stack trace of the specified depth, skipping frames. skip=0 identifies the caller of CaptureStack. The caller must call FreeStack on the returned stack after using it.

func (*Stack) FormatStack

func (s *Stack) FormatStack() string

FormatStack formats the entire stack trace into a string using buffer pooling

func (*Stack) Next

func (s *Stack) Next() (runtime.Frame, bool)

Next returns the next frame in the stack trace

type SyncReader added in v1.1.0

type SyncReader interface {
	// Read retrieves the next log record from the external logging system.
	// Returns nil when no more records are available or context is cancelled.
	// Implementations should block until a record is available or context expires.
	Read(ctx context.Context) (*Record, error)

	// Close releases any resources associated with the reader.
	// Should be called when the reader is no longer needed.
	io.Closer
}

SyncReader provides the ability to read log records from external logging systems and integrate them into Iris's high-performance processing pipeline. This interface enables Iris to act as a universal logging accelerator for existing logger implementations.

The SyncReader operates in background goroutines and feeds records into Iris's lock-free ring buffer, allowing existing loggers (slog, logrus, zap) to benefit from Iris's performance and advanced features without code changes.

Performance considerations:

  • Read() operates in separate goroutines to avoid blocking Iris's hot path
  • Implementations should handle backpressure gracefully
  • Context cancellation should be respected for clean shutdowns

type SyncWriter added in v1.1.0

type SyncWriter interface {
	// WriteRecord writes a structured log record to the destination.
	// Should handle the record asynchronously to avoid blocking Iris's hot path.
	WriteRecord(record *Record) error

	// Close releases any resources and flushes pending data.
	// Should ensure all data is safely written before returning.
	io.Closer
}

SyncWriter provides enhanced writer capabilities for external output destinations such as Loki, Kafka, Prometheus, etc. This interface enables modular output architecture where specialized writers are maintained as separate modules.

SyncWriter extends basic io.Writer with structured record processing, allowing external writer modules to access Iris's rich Record format with fields, levels, and metadata while maintaining zero dependencies in the core library.

Performance considerations:

  • WriteRecord() should be non-blocking or implement internal buffering
  • Implementations should handle backpressure gracefully
  • Background processing recommended for network/disk operations
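The non-blocking WriteRecord contract can be sketched with a buffered channel and a background shipper goroutine, so the caller never waits on the network or disk. This is an illustrative stand-in (including the Record type), not a real Loki or Kafka writer:

```go
package main

import (
	"fmt"
	"sync"
)

// Record stands in for *iris.Record so the sketch is self-contained.
type Record struct{ Msg string }

// asyncWriter queues records and ships them from a background goroutine.
type asyncWriter struct {
	ch   chan *Record
	done chan struct{}
	mu   sync.Mutex
	sent []string
}

func newAsyncWriter() *asyncWriter {
	w := &asyncWriter{ch: make(chan *Record, 1024), done: make(chan struct{})}
	go func() { // background shipper
		defer close(w.done)
		for rec := range w.ch {
			w.mu.Lock()
			w.sent = append(w.sent, rec.Msg) // a real writer would POST/produce here
			w.mu.Unlock()
		}
	}()
	return w
}

// WriteRecord never blocks: a full queue surfaces as an error instead.
func (w *asyncWriter) WriteRecord(rec *Record) error {
	select {
	case w.ch <- rec:
		return nil
	default:
		return fmt.Errorf("writer queue full")
	}
}

// Close flushes pending records before returning, per the interface contract.
func (w *asyncWriter) Close() error {
	close(w.ch)
	<-w.done
	return nil
}

func main() {
	w := newAsyncWriter()
	_ = w.WriteRecord(&Record{Msg: "shipped asynchronously"})
	_ = w.Close() // drains the queue
	fmt.Println(len(w.sent))
}
```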

type TextEncoder

type TextEncoder struct {
	// TimeFormat specifies the Go time layout for timestamps.
	// Default: time.RFC3339 for standard compliance.
	// Common alternatives: time.RFC3339Nano, time.Kitchen, custom layouts.
	TimeFormat string

	// QuoteValues determines whether string values are quoted.
	// Default: true for security (prevents value parsing ambiguity).
	// Set to false only in trusted environments for cleaner output.
	QuoteValues bool

	// SanitizeKeys determines whether field keys are sanitized.
	// Default: true for security (prevents key-based injection).
	// Set to false only when keys are guaranteed to be safe.
	SanitizeKeys bool
}

TextEncoder provides secure human-readable text encoding for log records.

This encoder implements comprehensive security measures to prevent log injection attacks and ensure safe output in production environments. All field keys and values are sanitized to prevent malicious manipulation of log data.

Security Features:

  • Field key sanitization prevents injection via malformed keys
  • Value sanitization with proper quoting and escaping
  • Control character neutralization (prevents terminal manipulation)
  • Newline injection protection (prevents log splitting)
  • Unicode direction override protection (prevents text reversal attacks)

Output Format:

time=2025-09-06T14:30:45Z level=info msg="User action" field=value

Use Cases:

  • Production logging in security-sensitive environments
  • System logs that may contain untrusted input
  • Compliance and audit logging requiring tamper resistance
  • Human-readable logs that still need machine parsing
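The kind of neutralization the security features above describe can be sketched as a value sanitizer: newlines, control characters, and Unicode direction overrides are escaped before the value reaches the log line. The exact rules here are illustrative, not Iris's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeValue escapes characters that enable log injection:
// newlines (log splitting), control characters (terminal manipulation),
// and bidi override code points (text reversal attacks).
func sanitizeValue(s string) string {
	var b strings.Builder
	for _, r := range s {
		switch {
		case r == '\n' || r == '\r':
			b.WriteString("\\n") // prevent forged extra log lines
		case r < 0x20 || r == 0x7f:
			fmt.Fprintf(&b, "\\x%02x", r) // neutralize control chars
		case r == '\u202d' || r == '\u202e':
			fmt.Fprintf(&b, "\\u%04x", r) // direction overrides
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	malicious := "ok\nlevel=error msg=\"forged\""
	fmt.Printf("msg=%q\n", sanitizeValue(malicious)) // injection stays on one line
}
```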

func NewTextEncoder

func NewTextEncoder() *TextEncoder

NewTextEncoder creates a new secure text encoder with production-safe defaults.

Default configuration prioritizes security:

  • TimeFormat: time.RFC3339 (standard, sortable)
  • QuoteValues: true (prevents parsing ambiguity)
  • SanitizeKeys: true (prevents key injection)

These defaults are suitable for production environments where log data may contain untrusted input or require security compliance.

Returns:

  • *TextEncoder: Configured secure text encoder instance

func (*TextEncoder) Encode

func (e *TextEncoder) Encode(rec *Record, now time.Time, buf *bytes.Buffer)

Encode writes the record to the buffer in secure text format.

type TokenBucketSampler

type TokenBucketSampler struct {
	// contains filtered or unexported fields
}

TokenBucketSampler implements rate limiting using a token bucket algorithm. Provides burst capacity with sustained rate limiting for high-volume logging.

func NewTokenBucketSampler

func NewTokenBucketSampler(capacity, refill int64, every time.Duration) *TokenBucketSampler

NewTokenBucketSampler creates a new token bucket sampler with the specified parameters. Validates inputs and sets reasonable defaults for invalid values.

Parameters:

  • capacity: Maximum number of tokens (burst capacity)
  • refill: Number of tokens added per refill period
  • every: Time duration between refills

Returns a configured sampler ready for concurrent use.

func (*TokenBucketSampler) Allow

func (s *TokenBucketSampler) Allow(_ Level) bool

Allow implements the Sampler interface using token bucket rate limiting. Thread-safe implementation that refills tokens based on elapsed time and consumes tokens for allowed log entries.

Parameters:

  • level: Log level (unused in this implementation, all levels treated equally)

Returns true if logging should proceed, false if rate limited.

type WriteSyncer

type WriteSyncer interface {
	io.Writer
	Sync() error
}

WriteSyncer combines io.Writer with the ability to synchronize written data to persistent storage. This interface is essential for ensuring data durability in logging scenarios where data loss is unacceptable.

Performance considerations:

  • Sync() should be called judiciously as it may involve expensive syscalls
  • Implementations should be thread-safe for concurrent logging scenarios
  • Zero allocations in hot paths for maximum throughput

func AddSync

func AddSync(w io.Writer) WriteSyncer

AddSync is an alias for WrapWriter, provided for familiarity with zap's API.

func MultiWriteSyncer

func MultiWriteSyncer(writers ...WriteSyncer) WriteSyncer

MultiWriteSyncer creates a WriteSyncer that duplicates writes to multiple writers

func MultiWriter

func MultiWriter(writers ...io.Writer) WriteSyncer

MultiWriter wraps the given io.Writer values and combines them into a MultiWriteSyncer.

func NewFileSyncer

func NewFileSyncer(file *os.File) WriteSyncer

NewFileSyncer creates a WriteSyncer specifically for file operations. This function provides explicit file syncing capabilities and should be used when you need guaranteed durability for file-based logging.

Performance: Direct file operations with explicit sync control

func NewNopSyncer

func NewNopSyncer(w io.Writer) WriteSyncer

NewNopSyncer creates a WriteSyncer that performs no synchronization. This is useful for scenarios where sync is handled externally or where the underlying writer doesn't support/need synchronization.

Performance: Zero-cost wrapper with inline no-op sync

func WrapWriter

func WrapWriter(w io.Writer) WriteSyncer

WrapWriter intelligently converts any io.Writer into a WriteSyncer. This function provides automatic detection and wrapping of different writer types to ensure optimal performance and correct synchronization behavior.

Type-specific optimizations:

  • *os.File: Uses fileSyncer for explicit sync() syscalls
  • WriteSyncer: Returns as-is (already implements interface)
  • Other writers: Uses nopSyncer (no-op sync for non-file writers)

Performance: Zero allocations for WriteSyncer inputs, minimal overhead for type switching in other cases.

Usage patterns:

  • File logging: WrapWriter(file) -> fileSyncer (with sync)
  • Buffer logging: WrapWriter(buffer) -> nopSyncer (no sync needed)
  • Network logging: WrapWriter(conn) -> nopSyncer (sync at protocol level)

Directories

Path Synopsis
internal
Package slogprovider bridges Go's standard log/slog to the Iris pipeline.
