Published: Mar 4, 2026 License: MIT Imports: 13 Imported by: 2


arbor

A comprehensive Go logging system for APIs, featuring structured logging, multiple output writers, and advanced correlation tracking.

Installation

go get github.com/ternarybob/arbor@latest

Quick Start

package main

import (
    "github.com/ternarybob/arbor"
    "github.com/ternarybob/arbor/models"
)

func main() {
    // Create logger with console output
    logger := arbor.Logger().
        WithConsoleWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeConsole,
            TimeFormat: "15:04:05.000",
        }).
        WithCorrelationId("app-startup")

    // Log messages
    logger.Info().Str("version", "1.0.0").Msg("Application started")
    logger.Warn().Msg("This is a warning")
    logger.Error().Str("error", "connection failed").Msg("Database connection error")
}

Features

  • Multi-Writer Architecture:
    • Synchronous writers (Console, File) for immediate output
    • Async writers (LogStore, Context) with buffered non-blocking processing
    • Shared log store for queryable in-memory and optional persistent storage
  • Correlation ID Tracking: Request tracing across application layers
  • Structured Logging: Rich field support with fluent API
  • Log Level Management: String-based and programmatic level configuration
  • In-Memory Log Store: Fast queryable storage with optional BoltDB persistence
  • API Integration: Built-in Gin framework support
  • Global Registry: Cross-context logger access
  • Thread-Safe: Concurrent access with proper synchronization
  • Performance Focused: Non-blocking async writes, optimized for high-throughput API scenarios
  • Async Processing: Non-blocking buffered writes with graceful shutdown and automatic buffer draining

Basic Usage

Simple Console Logging
import "github.com/ternarybob/arbor"

// Use global logger
arbor.Info().Msg("Simple log message")
arbor.Error().Err(err).Msg("Error occurred")

// With structured fields
arbor.Info().
    Str("user", "john.doe").
    Int("attempts", 3).
    Msg("User login attempt")
Multiple Writers Configuration
logger := arbor.Logger().
    WithConsoleWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeConsole,
        TimeFormat: "15:04:05.000",
    }).
    WithFileWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeFile,
        FileName:   "logs/app.log",
        MaxSize:    500 * 1024, // 500KB (default) - AI-friendly size
        MaxBackups: 20,         // 20 files (default) - maintains ~10MB history
        TimeFormat: "2006-01-02 15:04:05.000",
        TextOutput: true, // Enable human-readable text format (default: false for JSON)
    }).
    WithMemoryWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeMemory,
        TimeFormat: "15:04:05.000",
    })

logger.Info().Msg("This goes to console, file, and memory")

File Writer Configuration

The file writer supports both JSON and human-readable text output formats.

JSON Output Format (Default)
logger := arbor.Logger().
    WithFileWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeFile,
        FileName:   "logs/app.log",
        TimeFormat: "2006-01-02 15:04:05.000",
        TextOutput: false, // JSON format (default)
    })

logger.Info().Str("user", "john").Msg("User logged in")

Output:

{"time":"2025-09-18 15:04:05.123","level":"info","user":"john","message":"User logged in"}
Text Output Format
logger := arbor.Logger().
    WithFileWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeFile,
        FileName:   "logs/app.log",
        TimeFormat: "15:04:05.000",
        TextOutput: true, // Human-readable text format
    })

logger.Info().Str("user", "john").Msg("User logged in")

Output:

15:04:05.123 INF > User logged in user=john
File Writer Options
  • FileName: Log file path (default: "logs/main.log")
  • MaxSize: Maximum file size in bytes before rotation (default: 500KB)
  • MaxBackups: Number of backup files to keep (default: 20)
  • TextOutput: Enable human-readable format instead of JSON (default: false)
  • TimeFormat: Timestamp format for log entries
  • Level: Minimum log level to write
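Putting these options together in one configuration (a sketch; the Level value shown assumes the arbor.InfoLevel constant used elsewhere in this README — check the WriterConfiguration definition for the field's exact type):

```go
logger := arbor.Logger().
    WithFileWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeFile,
        FileName:   "logs/app.log",  // default: "logs/main.log"
        MaxSize:    500 * 1024,      // rotate at 500KB (default)
        MaxBackups: 20,              // keep 20 rotated files (default)
        TextOutput: false,           // JSON output (default)
        TimeFormat: "2006-01-02 15:04:05.000",
        Level:      arbor.InfoLevel, // assumed constant; writes info and above
    })
```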
AI-Friendly Log File Sizing

The default configuration is optimized for AI agent consumption:

  • 500KB per file: Approximately 3,300 log lines at ~150 bytes per line
  • 20 backup files: Maintains ~10MB total log history across all files
  • Automatic rotation: New files created when size limit is reached
  • Timestamped backups: Each rotated file includes timestamp in filename

This configuration ensures log files remain within AI context windows while maintaining sufficient history for debugging and analysis. For high-volume production systems, you may want to increase MaxSize and adjust MaxBackups accordingly:

// Example: Larger files for high-volume systems
WithFileWriter(models.WriterConfiguration{
    Type:       models.LogWriterTypeFile,
    FileName:   "logs/app.log",
    MaxSize:    5 * 1024 * 1024, // 5MB per file
    MaxBackups: 10,               // Keep 10 backups (~50MB total)
})

Log Levels

String-Based Configuration
// Configure from external config
logger := arbor.Logger().WithLevelFromString("debug")

// Supported levels: "trace", "debug", "info", "warn", "error", "fatal", "panic", "disabled"
Programmatic Configuration
logger := arbor.Logger().WithLevel(arbor.DebugLevel)

// Available levels: TraceLevel, DebugLevel, InfoLevel, WarnLevel, ErrorLevel, FatalLevel, PanicLevel

Correlation ID Tracking

Correlation IDs enable request tracing across your application layers:

// Set correlation ID for request tracking
logger := arbor.Logger().
    WithConsoleWriter(config).
    WithCorrelationId("req-12345")

logger.Info().Msg("Processing request")
logger.Debug().Str("step", "validation").Msg("Validating input")
logger.Info().Str("result", "success").Msg("Request completed")

// Auto-generate correlation ID
logger = logger.WithCorrelationId("") // Generates UUID automatically

// Clear correlation ID
logger = logger.ClearCorrelationId()

Async Writers with ChannelWriter

Arbor provides a powerful async buffered writer pattern through the channelWriter base. This architecture enables non-blocking log writes while maintaining reliability through automatic buffer draining and graceful shutdown.

What is ChannelWriter?

The channelWriter is a reusable async writer implementation that:

  • Buffers log entries in a channel (default 1000 entries)
  • Processes entries in a background goroutine
  • Returns immediately from Write() calls (~100μs latency)
  • Drains buffer automatically during shutdown to prevent log loss
  • Handles overflow gracefully by dropping entries with warnings
Built-in Async Writers

Two writers use the channelWriter base:

LogStoreWriter

Writes logs to in-memory or persistent storage asynchronously:

// Used internally by WithMemoryWriter
logger := arbor.Logger().WithMemoryWriter(models.WriterConfiguration{
    Type: models.LogWriterTypeMemory,
})

// Logs are buffered and written asynchronously
// No blocking on database writes
logger.Info().Msg("Stored asynchronously in memory/BoltDB")
Context Logging with Channels

Stream logs for specific contexts (jobs, requests) to channels using correlation IDs:

// Setup channel to receive log batches
logChannel := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannel("job-logs", logChannel)

// Create context logger (adds correlation ID)
contextLogger := logger.WithContextWriter("job-123")

// Logs go to all writers (consumer can filter by CorrelationID)
contextLogger.Info().Msg("Logged to all writers, filterable by correlation ID")
Lifecycle and Behavior

Buffer Management:

// Buffer capacity: 1000 entries per writer
// Write latency: ~100μs (non-blocking)
// Overflow: Drops with warning log, no blocking

Graceful Shutdown:

// On logger cleanup or application shutdown:
// 1. Stop accepting new entries
// 2. Process all buffered entries
// 3. Clean up resources

// For named channels:
defer arbor.Logger().UnregisterChannel("channel-name") // Flushes and stops channel

Performance Characteristics:

  • Write operations complete in ~100μs
  • Background processing doesn't block logging
  • Supports 10,000+ logs/second throughput
  • Automatic level filtering before buffering
  • Thread-safe concurrent writes
Creating Custom Async Writers

You can build custom writers using channelWriter for:

  • External services (Datadog, Splunk, CloudWatch)
  • Custom databases (MongoDB, PostgreSQL, Elasticsearch)
  • Specialized processing (aggregation, filtering, transformation)

See the "Custom Async Writers" section in Writer Architecture for detailed examples.

Context-Specific Logging

For long-running processes, jobs, or any scenario where you need to stream all logs for a specific context (e.g., a jobId) to a durable store, arbor provides a context logging feature. This allows a consumer (e.g., a database writer) to receive all logs for multiple contexts on a single channel, in batches.

This approach is ideal for:

  • Auditing all actions related to a specific job or entity.
  • Persisting logs for long-running background tasks.
  • Building custom log processing and analysis pipelines.
How It Works
  1. Consumer Sets a Channel: At startup, your application's consumer creates a channel that accepts log batches (chan []models.LogEvent) and registers it with arbor.
  2. Producers Log with Context: Any part of your application, in any goroutine, can get a context-specific logger by calling logger.WithContextWriter("your-job-id").
  3. Additive Logging: The context logger writes to all standard writers (like console and file) and sends a copy of the log to an internal buffer.
  4. Batching and Streaming: A background process batches the logs from the internal buffer and sends them as a slice to your consumer's channel. This is efficient and reduces channel contention.
  5. Non-Blocking Writes: The context logger uses an async buffered writer (1000-entry capacity) to prevent blocking on slow context buffer operations, ensuring your application remains responsive even under high logging load.
Setting up the Consumer

You can set up the context log consumer with default or custom buffering settings.

Using Default Buffering

This is the simplest way to get started. It uses a default batch size of 5 and a flush interval of 1 second.

// 1. Create a channel to receive log batches.
logBatchChannel := make(chan []models.LogEvent, 10)

// 2. Configure arbor to send logs to your channel with default settings.
arbor.Logger().SetChannel("context-logs", logBatchChannel)
defer arbor.Logger().UnregisterChannel("context-logs")

// 3. Start a consumer goroutine to process logs from the channel.
go func() {
    for logBatch := range logBatchChannel {
        // Process the batch of logs (filter by CorrelationID if needed)...
    }
}()

Using Custom Buffering

For more control over performance, you can specify the batch size and flush interval. This is useful for high-throughput applications where larger batches are more efficient.

// 1. Create a channel.
logBatchChannel := make(chan []models.LogEvent, 10)

// 2. Configure with a larger batch size and longer interval.
arbor.Logger().SetChannelWithBuffer("context-logs", logBatchChannel, 100, 5*time.Second)
defer arbor.Logger().UnregisterChannel("context-logs")

// 3. Start the consumer goroutine...
Complete Example

This example demonstrates how a consumer can set up a channel and how multiple producers can log to it using a shared context ID.

package main

import (
    "fmt"
    "sync"
    "time"

    "github.com/ternarybob/arbor"
    "github.com/ternarybob/arbor/models"
)

func main() {
    // --- Consumer Setup ---

    // 1. Create a channel to receive log batches.
    logBatchChannel := make(chan []models.LogEvent, 10)

    // 2. Configure arbor to send logs to your channel.
    // We use a small batch size and interval for demonstration purposes.
    arbor.Logger().SetChannelWithBuffer("demo-context", logBatchChannel, 3, 500*time.Millisecond)
    defer arbor.Logger().UnregisterChannel("demo-context")

    // 3. Start a consumer goroutine to process logs from the channel.
    var wgConsumer sync.WaitGroup
    wgConsumer.Add(1)
    go func() {
        defer wgConsumer.Done()
        for logBatch := range logBatchChannel {
            fmt.Printf("\n--- Received Batch of %d Logs ---\n", len(logBatch))
            for _, log := range logBatch {
                // In a real application, you would write this to a database.
                fmt.Printf("  [DB] JobID: %s, Message: %s\n", log.CorrelationID, log.Message)
            }
            fmt.Println("------------------------------------")
        }
    }()

    // --- Producer Logic ---

    // 4. In various parts of your application, get a logger for a specific context.
    jobID := "job-xyz-789"
    logger := arbor.Logger().WithConsoleWriter(models.WriterConfiguration{})

    // Goroutine 1 simulates one part of the job.
    var wgProducers sync.WaitGroup
    wgProducers.Add(1)
    go func() {
        defer wgProducers.Done()
        jobLogger := logger.WithContextWriter(jobID)
        jobLogger.Info().Msg("Step 1: Validating input.")
        time.Sleep(10 * time.Millisecond)
        jobLogger.Info().Msg("Step 2: Processing data.")
    }()

    // Goroutine 2 simulates another part of the same job.
    wgProducers.Add(1)
    go func() {
        defer wgProducers.Done()
        jobLogger := logger.WithContextWriter(jobID)
        time.Sleep(20 * time.Millisecond)
        jobLogger.Warn().Msg("Step 3: A non-critical error occurred.")
        time.Sleep(10 * time.Millisecond)
        jobLogger.Info().Msg("Step 4: Job complete.")
    }()

    wgProducers.Wait()

    // 5. Unregister the channel and wait for the consumer to finish.
    arbor.Logger().UnregisterChannel("demo-context") // This will flush any remaining logs.
    // Closing here is safe only because UnregisterChannel has already flushed
    // and stopped the sender; never close a channel the sender still writes to.
    close(logBatchChannel) // Signal the consumer to exit.
    wgConsumer.Wait()
}

Memory Logging & Retrieval

Note: For capturing logs related to a specific function or request, the recommended approach is to use WithContextWriter as described in the section above.

Arbor provides a powerful in-memory log store with optional BoltDB persistence for general-purpose debugging and log retrieval.

Architecture

The memory writer uses a shared log store architecture:

  • Fast in-memory storage (primary) for quick queries
  • Optional BoltDB persistence (configurable)
  • Non-blocking async writes - logging path remains fast
  • Buffered async writes - LogStoreWriter uses 1000-entry buffer for non-blocking writes with automatic overflow handling
  • Automatic TTL cleanup (10 min default, 1 min interval)
Basic Configuration
// In-memory only (fast, no persistence)
logger := arbor.Logger().
    WithMemoryWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeMemory,
        TimeFormat: "15:04:05.000",
    }).
    WithCorrelationId("debug-session")

// With optional BoltDB persistence
logger := arbor.Logger().
    WithMemoryWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeMemory,
        TimeFormat: "15:04:05.000",
        DBPath:     "temp/logs", // Enable persistence
    }).
    WithCorrelationId("debug-session")

// Log some messages
logger.Info().Msg("Starting process")
logger.Debug().Str("step", "initialization").Msg("Initializing components")
logger.Error().Str("error", "timeout").Msg("Operation failed")

// Retrieve logs by correlation ID (ordered by timestamp)
logs, err := logger.GetMemoryLogs("debug-session", arbor.DebugLevel)
if err != nil {
    log.Fatal(err)
}

// Display retrieved logs
for index, message := range logs {
    fmt.Printf("[%s]: %s\n", index, message)
}
Memory Log Retrieval Options
// Get all logs for correlation ID (timestamp ordered)
logs, _ := logger.GetMemoryLogsForCorrelation("correlation-id")

// Get logs with minimum level filter
logs, _ := logger.GetMemoryLogs("correlation-id", arbor.WarnLevel)

// Get most recent N entries
logs, _ := logger.GetMemoryLogsWithLimit(100)
API Call Pattern - Snapshot at Request End

Perfect for API debugging where you want all logs for a request:

func HandleRequest(c *gin.Context) {
    correlationID := c.GetHeader("X-Correlation-ID")
    logger := arbor.Logger().WithCorrelationId(correlationID)

    // Process request with logging
    logger.Info().Msg("Processing request")
    logger.Debug().Str("user", user.ID).Msg("Fetching user data")

    // ... business logic producing result ...

    // At end of request - get snapshot ordered by timestamp
    if c.Query("debug") == "true" {
        logs, _ := logger.GetMemoryLogs(correlationID, arbor.TraceLevel)
        c.JSON(200, gin.H{
            "result": result,
            "logs":   logs, // All request logs in timestamp order
        })
        return
    }
    c.JSON(200, gin.H{"result": result})
}
Channel-Based Log Streaming

Arbor provides a unified channel-based API for streaming logs using SetChannel/SetChannelWithBuffer. This allows you to register multiple independent named channels, each with its own batching configuration and lifecycle.

Overview

The channel API allows you to:

  • Register multiple independent channels for different purposes (WebSocket streaming, external services, custom consumers)
  • Configure per-channel batching (batch size and flush interval)
  • Filter logs by correlation ID in consumers for context-specific tracking
  • Manage lifecycle independently with SetChannel() and UnregisterChannel()

Use Cases:

  • General Streaming: "Stream all application logs to WebSocket clients" or "Send logs to external monitoring"
  • Context Tracking: "Capture all logs for job-123" by filtering on CorrelationID in the consumer
  • Service Integration: "Send error logs to Slack" or "Forward metrics to Datadog"
Basic Usage
// Create channel for receiving log batches
// Buffer size of 10-100 recommended depending on throughput
logChannel := make(chan []models.LogEvent, 10)

// Register channel with default batching (5 events, 1 second)
// The "name" parameter uniquely identifies this channel
arbor.Logger().SetChannel("websocket-logs", logChannel)

// Or with custom batching for high-throughput scenarios
// batchSize: number of events before automatic flush
// flushInterval: maximum time before automatic flush
arbor.Logger().SetChannelWithBuffer("websocket-logs", logChannel, 100, 5*time.Second)

// Consumer goroutine to receive and process batches
go func() {
    for logBatch := range logChannel {
        // Batch contains up to batchSize events
        for _, logEvent := range logBatch {
            // Process each event
            fmt.Printf("[%s] %s: %s\n", logEvent.Level, logEvent.CorrelationID, logEvent.Message)
        }

        // Error handling for WebSocket broadcast
        if err := broadcastToClients(logBatch); err != nil {
            log.Printf("Broadcast error: %v", err)
        }
    }
}()

// Normal logging - all logs go to the channel
arbor.Info().Msg("This message will be batched and sent to the channel")
Advanced Patterns

Multiple Independent Channels

Register multiple named channels for different purposes:

// Audit logs to database
auditChannel := make(chan []models.LogEvent, 20)
arbor.Logger().SetChannel("audit-logs", auditChannel)

// Metrics to monitoring system
metricsChannel := make(chan []models.LogEvent, 50)
arbor.Logger().SetChannel("metrics", metricsChannel)

// Critical alerts to notification service
alertsChannel := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannelWithBuffer("alerts", alertsChannel, 1, 100*time.Millisecond)

// Each channel receives all logs independently

Dynamic Channel Registration

Add and remove channels at runtime:

// Add channel when WebSocket client connects
func OnClientConnect(clientID string) {
    clientChannel := make(chan []models.LogEvent, 10)
    channelName := fmt.Sprintf("client-%s", clientID)

    arbor.Logger().SetChannel(channelName, clientChannel)

    // Start consumer for this client
    go streamToClient(clientID, clientChannel)
}

// Remove channel when client disconnects
func OnClientDisconnect(clientID string) {
    channelName := fmt.Sprintf("client-%s", clientID)
    arbor.Logger().UnregisterChannel(channelName)
    // Channel cleanup happens automatically
}

High-Throughput Configuration

For high-volume scenarios, use larger batches and longer intervals:

// Process 500 events at once or flush every 10 seconds
// Reduces overhead but increases latency
highThroughputChannel := make(chan []models.LogEvent, 100)
arbor.Logger().SetChannelWithBuffer("high-volume", highThroughputChannel, 500, 10*time.Second)

Real-Time Configuration

For low-latency requirements, use small batches and short intervals:

// Process 1-5 events at once with sub-second flush
// Lower latency but higher processing overhead
realTimeChannel := make(chan []models.LogEvent, 20)
arbor.Logger().SetChannelWithBuffer("real-time", realTimeChannel, 1, 100*time.Millisecond)

WebSocket Broadcasting with Connection Management

Complete example with error handling and graceful shutdown:

type WebSocketManager struct {
    clients map[string]*websocket.Conn
    mu      sync.RWMutex
    logChan chan []models.LogEvent
}

func NewWebSocketManager() *WebSocketManager {
    mgr := &WebSocketManager{
        clients: make(map[string]*websocket.Conn),
        logChan: make(chan []models.LogEvent, 50),
    }

    // Register channel with appropriate batching
    arbor.Logger().SetChannelWithBuffer("websocket", mgr.logChan, 20, 1*time.Second)

    // Start broadcast goroutine
    go mgr.broadcastLoop()

    return mgr
}

func (mgr *WebSocketManager) broadcastLoop() {
    for logBatch := range mgr.logChan {
        mgr.mu.RLock()
        for clientID, conn := range mgr.clients {
            if err := conn.WriteJSON(logBatch); err != nil {
                log.Printf("Failed to send to client %s: %v", clientID, err)
                // Handle disconnected clients
                go mgr.RemoveClient(clientID)
            }
        }
        mgr.mu.RUnlock()
    }
}

func (mgr *WebSocketManager) AddClient(clientID string, conn *websocket.Conn) {
    mgr.mu.Lock()
    mgr.clients[clientID] = conn
    mgr.mu.Unlock()
}

func (mgr *WebSocketManager) RemoveClient(clientID string) {
    mgr.mu.Lock()
    if conn, ok := mgr.clients[clientID]; ok {
        conn.Close()
        delete(mgr.clients, clientID)
    }
    mgr.mu.Unlock()
}

func (mgr *WebSocketManager) Shutdown() {
    // Unregister channel (stops the buffer and writer)
    arbor.Logger().UnregisterChannel("websocket")

    // Close all client connections
    mgr.mu.Lock()
    for _, conn := range mgr.clients {
        conn.Close()
    }
    mgr.clients = nil
    mgr.mu.Unlock()

    // Drain remaining batches with bounded wait
    timeout := time.After(2 * time.Second)
drainLoop:
    for {
        select {
        case batch := <-mgr.logChan:
            // Process final batches (already closed clients, just drain)
            _ = batch
        case <-time.After(100 * time.Millisecond):
            // No batch arrived, done draining
            break drainLoop
        case <-timeout:
            // Overall timeout exceeded
            break drainLoop
        }
    }

    // Exit without closing the channel - the sender (ChannelBuffer) owns it
    // and may still attempt a final send during flush
}
Lifecycle Management

Cleanup with UnregisterChannel

Properly stop and remove a channel logger:

// Register channel
logChannel := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannel("my-channel", logChannel)

// Later, when done with the channel
arbor.Logger().UnregisterChannel("my-channel")
// This stops the ChannelWriter and ChannelBuffer goroutines
// and removes the channel from the registry

Automatic Cleanup on Replacement

Calling SetChannel with an existing name automatically cleans up the old writer and buffer:

// First registration
channel1 := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannel("stream", channel1)

// Later, replace with new channel
// Old channel is automatically cleaned up
channel2 := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannel("stream", channel2)
// channel1 is no longer receiving logs

Graceful Shutdown

Pattern for draining remaining batches during shutdown:

func shutdown() {
    // Step 1: Unregister channel (stops the buffer and writer)
    arbor.Logger().UnregisterChannel("my-channel")

    // Step 2: Drain any remaining batches with bounded wait
    // Use a timeout to prevent indefinite blocking
    timeout := time.After(2 * time.Second)
    drainLoop:
    for {
        select {
        case batch := <-logChannel:
            // Process final batches
            processBatch(batch)
        case <-time.After(100 * time.Millisecond):
            // No batch arrived within timeout, done draining
            break drainLoop
        case <-timeout:
            // Overall timeout exceeded
            log.Println("Shutdown timeout exceeded, exiting")
            break drainLoop
        }
    }

    // Step 3: Exit consumer goroutine without closing the channel
    // The sender (ChannelBuffer) owns the channel and may attempt
    // a final send during flush. Never close a channel you don't own.
}

Important: The consumer must not close the channel because the sender (common.ChannelBuffer) owns it and may still attempt a final send during buffer flush. Receivers should only read from channels; closing is the sender's responsibility.

Resource Management

Each named channel creates two goroutines (ChannelWriter + ChannelBuffer), so cleanup is important:

// Resource usage per channel:
// - 1 ChannelWriter goroutine (processes writes)
// - 1 ChannelBuffer goroutine (batches events)
// - ~150KB memory overhead (buffers)

// Always cleanup when done:
defer arbor.Logger().UnregisterChannel("channel-name")
Migration Guide: SetContextChannel → SetChannel

SetContextChannel and SetContextChannelWithBuffer are deprecated and will be removed in a future major version. Use the unified SetChannel/SetChannelWithBuffer API instead.

Old Approach (Deprecated):

// Create channel
logChannel := make(chan []models.LogEvent, 10)

// Register with singleton context buffer
arbor.Logger().SetContextChannel(logChannel)
defer common.Stop()

// Create context logger (sent logs to both standard writers and context buffer)
contextLogger := arbor.Logger().WithContextWriter("job-123")
contextLogger.Info().Msg("Processing")

// Consumer received all WithContextWriter logs
for batch := range logChannel {
    for _, event := range batch {
        // All logs from any WithContextWriter call
    }
}

New Approach (Recommended):

// Create channel
logChannel := make(chan []models.LogEvent, 10)

// Register with named channel (same behavior as SetContextChannel)
arbor.Logger().SetChannel("context", logChannel)
defer arbor.Logger().UnregisterChannel("context")

// Create context logger (now only adds correlation ID)
contextLogger := arbor.Logger().WithContextWriter("job-123")
contextLogger.Info().Msg("Processing")

// Consumer filters by correlation ID
for batch := range logChannel {
    for _, event := range batch {
        if event.CorrelationID == "job-123" {
            // Process logs for specific context
        }
    }
}

Key Changes:

  1. Replace SetContextChannel(ch) with SetChannel("context", ch) or use any channel name
  2. Replace defer common.Stop() with defer Logger().UnregisterChannel("context")
  3. WithContextWriter now only adds correlation ID - filter by CorrelationID in consumer
  4. Each named channel is independent with its own lifecycle and batching configuration
Error Handling and Edge Cases

Nil Channel

Calling SetChannel with a nil channel will panic with a clear error message:

// This will panic
arbor.Logger().SetChannel("bad-channel", nil)
// Panic: Cannot create channel writer with nil channel

Invalid Parameters

Zero or negative batchSize or flushInterval will use safe defaults:

// These all use defaults: batchSize=5, flushInterval=1s
arbor.Logger().SetChannelWithBuffer("ch1", logChan, 0, 1*time.Second)
arbor.Logger().SetChannelWithBuffer("ch2", logChan, -10, 1*time.Second)
arbor.Logger().SetChannelWithBuffer("ch3", logChan, 5, 0)
arbor.Logger().SetChannelWithBuffer("ch4", logChan, 5, -1*time.Second)

Buffer Overflow

If the ChannelWriter buffer fills (default 1000 entries), entries are dropped with warning logs:

// If logging faster than channel consumer can process:
// - ChannelWriter buffer fills (1000 entries)
// - New writes complete in ~100μs (non-blocking)
// - Entries are dropped with warning log
// - Application continues normally without blocking

Channel Blocking

If the output channel blocks (consumer too slow), the ChannelBuffer will timeout and drop the batch:

// If consumer is too slow and channel buffer fills:
// - ChannelBuffer attempts to send batch
// - 1 second timeout on channel send
// - Batch is dropped if timeout occurs
// - Warning logged about dropped batch
// - Next batch continues normally

Best Practices:

  • Use buffered channels (10-100 capacity) for output
  • Monitor consumer performance to avoid backpressure
  • Implement proper error handling in consumers
  • Always cleanup channels with UnregisterChannel when done

For more details on the context-specific logging API, see the "Context-Specific Logging" section above.

Performance Characteristics
  • Synchronous writes (Console/File): ~50-100μs, blocking but fast
  • Async writes (LogStore/Context): ~100μs non-blocking + background processing
    • Buffer capacity: 1000 entries per writer
    • Overflow behavior: Drops entries with warning log
    • Shutdown: Automatic buffer draining prevents log loss
  • Correlation queries: ~50μs (in-memory map lookup)
  • Timestamp queries: ~100μs (in-memory slice scan)
  • BoltDB persistence: Async background writes (doesn't block logging)

API Integration

Gin Framework Integration
import (
    "io"

    "github.com/gin-gonic/gin"
    "github.com/ternarybob/arbor"
    "github.com/ternarybob/arbor/models"
)

func main() {
    // Configure logger with memory writer for log retrieval
    logger := arbor.Logger().
        WithConsoleWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeConsole,
            TimeFormat: "15:04:05.000",
        }).
        WithMemoryWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeMemory,
            TimeFormat: "15:04:05.000",
        })

    // Create Gin engine with arbor integration
    r := gin.New()
    
    // Use arbor writer for Gin logs
    ginWriter := logger.GinWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeConsole,
        TimeFormat: "15:04:05.000",
    })
    
    r.Use(gin.LoggerWithWriter(ginWriter.(io.Writer)))
    
    // Your routes here
    r.GET("/health", func(c *gin.Context) {
        correlationID := c.GetHeader("X-Correlation-ID")
        requestLogger := logger.WithCorrelationId(correlationID)
        
        requestLogger.Info().Str("endpoint", "/health").Msg("Health check requested")
        c.JSON(200, gin.H{"status": "ok"})
    })
    
    r.Run(":8080")
}
Request Correlation Middleware
func CorrelationMiddleware(logger arbor.ILogger) gin.HandlerFunc {
    return gin.HandlerFunc(func(c *gin.Context) {
        // Extract or generate correlation ID
        correlationID := c.GetHeader("X-Correlation-ID")
        if correlationID == "" {
            correlationID = generateUUID() // Your UUID generation
        }
        
        // Create request-scoped logger
        requestLogger := logger.WithCorrelationId(correlationID)
        
        // Store in context for handler access
        c.Set("logger", requestLogger)
        c.Header("X-Correlation-ID", correlationID)
        
        requestLogger.Info().
            Str("method", c.Request.Method).
            Str("path", c.Request.URL.Path).
            Msg("Request started")
        
        c.Next()
        
        requestLogger.Info().
            Int("status", c.Writer.Status()).
            Msg("Request completed")
    })
}

Advanced Features

Context Management
// Add structured context
logger := arbor.Logger().
    WithContext("service", "user-management").
    WithContext("version", "1.2.0").
    WithPrefix("UserSvc")

// Fork logger (tree-like): same writers + inherited context
forkedLogger := logger.Copy()

Configuration Examples

From Environment Variables
logLevel := os.Getenv("LOG_LEVEL")
if logLevel == "" {
    logLevel = "info"
}

logger := arbor.Logger().
    WithConsoleWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeConsole,
        TimeFormat: "15:04:05.000",
    }).
    WithLevelFromString(logLevel)
Configuration Struct
type LogConfig struct {
    Level      string `json:"level"`
    Console    bool   `json:"console"`
    File       string `json:"file"`
    Memory     bool   `json:"memory"`
    TimeFormat string `json:"time_format"`
    TextOutput bool   `json:"text_output"`
}

func ConfigureLogger(config LogConfig) arbor.ILogger {
    logger := arbor.NewLogger()

    if config.Console {
        logger.WithConsoleWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeConsole,
            TimeFormat: config.TimeFormat,
        })
    }

    if config.File != "" {
        logger.WithFileWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeFile,
            FileName:   config.File,
            TimeFormat: config.TimeFormat,
            TextOutput: config.TextOutput, // Enable text format for files
        })
    }

    if config.Memory {
        logger.WithMemoryWriter(models.WriterConfiguration{
            Type:       models.LogWriterTypeMemory,
            TimeFormat: config.TimeFormat,
        })
    }

    return logger.WithLevelFromString(config.Level)
}

Architecture & Performance

Log Store Architecture

Arbor uses a shared log store pattern for memory-based writers:

┌─────────────┐
│  Log Event  │
└──────┬──────┘
       │
       ├──────────► Console Writer (direct, ~50μs)
       ├──────────► File Writer (direct, ~80μs)
       └──────────► Log Store Writer (buffered async)
                         │
                         ├──► In-Memory Store (primary, fast queries)
                         └──► BoltDB (optional persistence, async)
                              │
                              ├──► Memory Writer (correlation queries)
                              └──► Future readers...
Performance Characteristics
  • Direct Writers (Console/File): ~50-100μs per log, no blocking
  • Log Store Writes: Buffered channel (1000 entries), non-blocking
  • In-Memory Queries: ~50-100μs for correlation/timestamp lookups
  • Optional Persistence: Async BoltDB writes, doesn't block logging path
  • Cleanup: Automatic TTL expiration every 1 minute (10 min default TTL)
  • Thread Safety: RWMutex for concurrent access with minimal lock contention
  • Level Filtering: Occurs at writer level for efficiency
Design Principles
  • Separation of Concerns: Write path (fast) vs. Query path (acceptable latency)
  • Non-Blocking: Buffered async writes prevent slow storage from blocking logs
  • In-Memory Primary: Fast queries without disk I/O for active sessions
  • Optional Persistence: BoltDB backup for crash recovery and long-term storage
  • Extensible: Easy to add new store-based readers (metrics, search, alerts)

Writer Architecture

Arbor uses different writer patterns optimized for specific use cases. Understanding these patterns helps you choose the right configuration for your application.

Synchronous Writers (Console, File)

Pattern: Direct write to output (stdout or file)

These writers provide immediate output with minimal overhead:

  • Performance: ~50-100μs per log entry
  • Blocking: Yes, but very fast (acceptable for most use cases)
  • Use Cases:
    • Development debugging with immediate console feedback
    • Production file logging for audit trails
    • Scenarios where log order guarantee is critical

Example:

logger := arbor.Logger().
    WithConsoleWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeConsole,
        TimeFormat: "15:04:05.000",
    }).
    WithFileWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeFile,
        FileName:   "logs/app.log",
        MaxSize:    500 * 1024,
        MaxBackups: 20,
    })

logger.Info().Msg("Immediate output to console and file")
Async Writers (LogStore, Context)

Pattern: Buffered channel + background goroutine processing

These writers provide non-blocking writes with automatic buffer management:

  • Performance: ~100μs non-blocking write + async background processing
  • Blocking: No - Write() returns immediately
  • Buffer Capacity: 1000 entries per writer
  • Overflow Behavior: Drops entries with warning log when buffer is full
  • Shutdown: Automatic buffer draining ensures no log loss during graceful shutdown

Data Flow:

Log Event → channelWriter Base (async, buffered)
    ├──► LogStoreWriter → ILogStore → In-Memory/BoltDB
    └──► ContextWriter → Global Context Buffer → Channel

Benefits:

  • Non-blocking writes prevent slow storage from blocking logging path
  • 1000-entry buffer absorbs traffic bursts without dropping logs
  • Graceful shutdown with automatic buffer draining prevents log loss
  • Level filtering applied before buffering for efficiency
  • Thread-safe concurrent writes with minimal lock contention

Example:

// LogStore writer for queryable memory logs
logger := arbor.Logger().
    WithMemoryWriter(models.WriterConfiguration{
        Type:       models.LogWriterTypeMemory,
        TimeFormat: "15:04:05.000",
    })

// Channel-based streaming with correlation ID filtering
logChannel := make(chan []models.LogEvent, 10)
arbor.Logger().SetChannel("job-logs", logChannel)
contextLogger := logger.WithContextWriter("job-123")

contextLogger.Info().Msg("Non-blocking write completes in ~100μs")
Custom Async Writers (ChannelWriter)

For advanced use cases, you can create custom async writers using the channelWriter base. This is useful when you need to integrate with custom storage backends, external services, or implement specialized log processing.

Pattern: The channelWriter provides a reusable async buffered writer with automatic lifecycle management.

Use Cases:

  • Custom database writers (MongoDB, PostgreSQL, Elasticsearch)
  • External service integrations (Datadog, Splunk, CloudWatch)
  • Specialized log processing (aggregation, filtering, transformation)
  • High-throughput scenarios requiring non-blocking writes
Basic Custom Writer Example
import (
    "github.com/ternarybob/arbor/writers"
    "github.com/ternarybob/arbor/models"
)

// Custom writer that sends logs to an external API
type APIWriter struct {
    writer    writers.IChannelWriter
    config    models.WriterConfiguration
    apiClient *YourAPIClient
}

func NewAPIWriter(config models.WriterConfiguration, apiURL string) (writers.IWriter, error) {
    apiClient := NewAPIClient(apiURL)

    // Define processor that handles each log entry
    processor := func(entry models.LogEvent) error {
        return apiClient.SendLog(entry)
    }

    // Create channel writer with 1000 buffer size
    writer, err := writers.NewChannelWriter(config, 1000, processor)
    if err != nil {
        return nil, err
    }

    // Start the background goroutine
    if err := writer.Start(); err != nil {
        return nil, err
    }

    return &APIWriter{
        writer: writer,
        config: config,
        apiClient: apiClient,
    }, nil
}

// Implement IWriter interface
func (w *APIWriter) Write(data []byte) (int, error) {
    return w.writer.Write(data)
}

func (w *APIWriter) WithLevel(level log.Level) writers.IWriter {
    w.writer.WithLevel(level)
    return w
}

func (w *APIWriter) GetFilePath() string {
    return "" // Not file-based
}

func (w *APIWriter) Close() error {
    // Gracefully shut down - drains buffer before closing
    return w.writer.Close()
}
WebSocketWriter using ChannelWriter

This example demonstrates creating a WebSocketWriter that broadcasts log events to connected WebSocket clients using the ChannelWriter pattern:

import (
    "sync"

    "github.com/gorilla/websocket"
    "github.com/ternarybob/arbor"
    "github.com/ternarybob/arbor/models"
    "github.com/ternarybob/arbor/writers"
)

// WebSocketWriter broadcasts log events to WebSocket clients
type WebSocketWriter struct {
    writer  writers.IChannelWriter
    config  models.WriterConfiguration
    manager *ConnectionManager
}

// ConnectionManager handles thread-safe WebSocket client connections
type ConnectionManager struct {
    clients map[string]*websocket.Conn
    mu      sync.RWMutex
}

func NewConnectionManager() *ConnectionManager {
    return &ConnectionManager{
        clients: make(map[string]*websocket.Conn),
    }
}

func (cm *ConnectionManager) AddClient(clientID string, conn *websocket.Conn) {
    cm.mu.Lock()
    defer cm.mu.Unlock()
    cm.clients[clientID] = conn
}

func (cm *ConnectionManager) RemoveClient(clientID string) {
    cm.mu.Lock()
    defer cm.mu.Unlock()
    if conn, ok := cm.clients[clientID]; ok {
        conn.Close()
        delete(cm.clients, clientID)
    }
}

func (cm *ConnectionManager) Broadcast(logEvent models.LogEvent) error {
    cm.mu.RLock()
    defer cm.mu.RUnlock()

    // Broadcast to all connected clients
    for clientID, conn := range cm.clients {
        if err := conn.WriteJSON(logEvent); err != nil {
            // Handle disconnected clients asynchronously
            go cm.RemoveClient(clientID)
            continue
        }
    }
    return nil
}

// NewWebSocketWriter creates a ChannelWriter-based WebSocket broadcaster
func NewWebSocketWriter(config models.WriterConfiguration, manager *ConnectionManager) (writers.IWriter, error) {
    // Define processor that broadcasts each log entry to WebSocket clients
    processor := func(entry models.LogEvent) error {
        return manager.Broadcast(entry)
    }

    // Create channel writer with 1000 buffer size
    writer, err := writers.NewChannelWriter(config, 1000, processor)
    if err != nil {
        return nil, err
    }

    // Start the background goroutine
    if err := writer.Start(); err != nil {
        return nil, err
    }

    return &WebSocketWriter{
        writer:  writer,
        config:  config,
        manager: manager,
    }, nil
}

// Implement IWriter interface
func (w *WebSocketWriter) Write(data []byte) (int, error) {
    return w.writer.Write(data)
}

func (w *WebSocketWriter) WithLevel(level log.Level) writers.IWriter {
    w.writer.WithLevel(level)
    return w
}

func (w *WebSocketWriter) GetFilePath() string {
    return "" // Not file-based
}

func (w *WebSocketWriter) Close() error {
    // Gracefully shut down - drains buffer before closing
    return w.writer.Close()
}

// Usage example
func SetupWebSocketLogging() {
    // Create connection manager
    connManager := NewConnectionManager()

    // Create WebSocket writer
    wsWriter, err := NewWebSocketWriter(models.WriterConfiguration{
        Type:  models.LogWriterTypeConsole, // Use console type for compatibility
        Level: arbor.InfoLevel,
    }, connManager)
    if err != nil {
        panic(err) // handle the setup failure as appropriate for your app
    }
    defer wsWriter.Close()

    // Register with logger
    allWriters := append(arbor.GetAllRegisteredWriters(), wsWriter)
    logger := arbor.Logger().WithWriters(allWriters)

    // Add WebSocket clients as they connect
    // connManager.AddClient(clientID, conn)

    // All logs now broadcast to WebSocket clients
    logger.Info().Msg("This log will be sent to all WebSocket clients")
}

Key Features:

  • Thread-safe connection management: RWMutex protects concurrent client access
  • Automatic cleanup: Disconnected clients are removed asynchronously during broadcast failures
  • Non-blocking writes: ChannelWriter buffers logs, preventing slow clients from blocking the logger
  • Error handling: Gracefully handles client disconnections without affecting other clients
  • Standard IWriter interface: Integrates seamlessly with arbor's writer system

Notes:

  • The processor function receives each decoded LogEvent and broadcasts it to all connected clients
  • Connection management is separated from the writer for better modularity
  • Buffer size of 1000 entries prevents memory issues during client slowdowns
  • Always call Close() during shutdown to drain the buffer and prevent log loss
Manual Lifecycle Control

If you need fine-grained control over the goroutine lifecycle:

// Create without auto-start
processor := func(entry models.LogEvent) error {
    // Your processing logic
    return nil
}

writer, err := writers.NewChannelWriter(config, 1000, processor)
if err != nil {
    log.Fatal(err)
}

// Start when ready
if err := writer.Start(); err != nil {
    log.Fatal(err)
}

// Check if running
if writer.IsRunning() {
    fmt.Println("Writer is processing logs")
}

// Stop processing (drains buffer)
if err := writer.Stop(); err != nil {
    log.Printf("Error stopping writer: %v", err)
}

// Close and cleanup
writer.Close()
Helper Function for Simple Async Writers

Internally, arbor uses the newAsyncWriter helper, which creates and starts a writer in one call. Note that it is unexported, so this pattern applies only inside the arbor package; external code should use NewChannelWriter plus Start as shown above:

// This is used internally by LogStoreWriter and ContextWriter
processor := func(entry models.LogEvent) error {
    // Process log entry
    return db.Store(entry)
}

// Creates, starts, and returns ready-to-use writer
writer, err := writers.newAsyncWriter(config, 1000, processor)
if err != nil {
    log.Fatal(err)
}
// Writer is already running and processing logs
Error Handling in Processors

The processor function receives each log entry and should handle errors appropriately:

processor := func(entry models.LogEvent) error {
    // Retry logic for transient failures
    maxRetries := 3
    for i := 0; i < maxRetries; i++ {
        err := sendToExternalService(entry)
        if err == nil {
            return nil
        }

        if isTransientError(err) && i < maxRetries-1 {
            time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
            continue
        }

        // Log error and return (channelWriter will log the failure)
        return fmt.Errorf("failed after %d retries: %w", i+1, err)
    }
    return nil
}
Buffer Overflow Behavior

When the buffer is full (1000 entries by default):

// Buffer full scenario
writer, _ := writers.NewChannelWriter(config, 1000, slowProcessor)
writer.Start()

// If buffer fills up:
// - New writes complete immediately (~100μs)
// - Entry is dropped with warning log
// - No blocking occurs
// - Application continues normally
Graceful Shutdown Pattern

Always close writers to ensure no log loss:

func main() {
    writer := setupCustomWriter()
    defer writer.Close() // Drains buffer before exiting

    // Application logic...

    // On shutdown, Close() will:
    // 1. Stop accepting new entries
    // 2. Process all buffered entries
    // 3. Clean up resources
}
Performance Characteristics
  • Write latency: ~100μs (non-blocking, returns immediately)
  • Buffer capacity: Configurable (default 1000 entries)
  • Throughput: Supports 10,000+ logs/sec depending on processor speed
  • Memory overhead: ~150KB per writer (buffer + goroutine)
  • Buffer drain: Automatic on Close() - ensures zero log loss during shutdown

CI/CD

This project uses GitHub Actions for continuous integration and deployment:

  • Automated Testing: Runs tests on every push and pull request
  • Code Quality Checks: Enforces go fmt, go vet, and build validation
  • Auto-Release: Automatically creates releases on main branch pushes
  • Tagged Releases: Manual version control via git tags

Documentation

Full documentation is available at pkg.go.dev.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

This library is part of the T3B ecosystem:

  • funktion - Core utility functions
  • satus - Configuration and status management
  • arbor - Structured logging system
  • omnis - Web framework integrations

Documentation

Index

Constants

const (
	LOGGER_CONTEXT_KEY string = "logger"
	LEVEL_KEY          string = "level"
	CORRELATION_ID_KEY string = "correlationid"
	PREFIX_KEY         string = "prefix"
	GIN_LOG_KEY        string = "gin"
)
const (
	WRITER_CONSOLE  = "console"
	WRITER_FILE     = "file"
	WRITER_MEMORY   = "memory"
	WRITER_LOGSTORE = "logstore"
)

Variables

This section is empty.

Functions

func GetAllRegisteredWriters added in v1.4.31

func GetAllRegisteredWriters() map[string]writers.IWriter

GetAllRegisteredWriters returns a copy of all registered writers

func GetRegisteredMemoryWriter added in v1.4.31

func GetRegisteredMemoryWriter(name string) writers.IMemoryWriter

GetRegisteredMemoryWriter retrieves a memory writer by name from the global registry. Returns nil if the writer is not found or is not a memory writer.

func GetRegisteredWriter added in v1.4.31

func GetRegisteredWriter(name string) writers.IWriter

GetRegisteredWriter retrieves a writer by name from the global registry. Returns nil if the writer is not found.

func GetRegisteredWriterNames added in v1.4.31

func GetRegisteredWriterNames() []string

GetRegisteredWriterNames returns a list of all registered writer names

func GetWriterCount added in v1.4.31

func GetWriterCount() int

GetWriterCount returns the number of registered writers

func LevelToString added in v1.4.14

func LevelToString(level log.Level) string

LevelToString converts log level to string representation (exported for writers)

func ParseLevelString added in v1.4.14

func ParseLevelString(levelStr string) (log.Level, error)

Re-export convenience functions from levels subpackage

func ParseLogLevel added in v1.4.14

func ParseLogLevel(level int) log.Level

func RegisterFunctionLogger added in v1.4.47

func RegisterFunctionLogger(correlationID string) error

RegisterFunctionLogger adds a correlation ID to the active function logger registry. It returns an error if the correlation ID is already registered.

func RegisterWriter added in v1.4.31

func RegisterWriter(name string, writer writers.IWriter)

RegisterWriter registers a writer with the given name in the global registry

func UnregisterFunctionLogger added in v1.4.47

func UnregisterFunctionLogger(correlationID string)

UnregisterFunctionLogger removes a correlation ID from the active function logger registry.

func UnregisterWriter added in v1.4.31

func UnregisterWriter(name string)

UnregisterWriter removes a writer from the global registry

Types

type ILogEvent added in v1.4.14

type ILogEvent interface {
	// String slice field method
	Strs(key string, values []string) ILogEvent

	// String field methods
	Str(key, value string) ILogEvent

	// Error field method
	Err(err error) ILogEvent

	// Message methods
	Msg(message string)
	Msgf(format string, args ...interface{})

	// Integer field method
	Int(key string, value int) ILogEvent

	// Int32 field method
	Int32(key string, value int32) ILogEvent

	// Int64 field method
	Int64(key string, value int64) ILogEvent

	// Float32 field method
	Float32(key string, value float32) ILogEvent

	// Duration field method
	Dur(key string, value time.Duration) ILogEvent

	// Float64 field method
	Float64(key string, value float64) ILogEvent

	// Bool field method
	Bool(key string, value bool) ILogEvent
}

ILogEvent represents a fluent interface for building log events

func Debug added in v1.4.14

func Debug() ILogEvent

func Error added in v1.4.14

func Error() ILogEvent

func Fatal added in v1.4.14

func Fatal() ILogEvent

func Info added in v1.4.14

func Info() ILogEvent

func Panic added in v1.4.14

func Panic() ILogEvent

func Trace added in v1.4.14

func Trace() ILogEvent

Global convenience functions for direct logging

func Warn added in v1.4.14

func Warn() ILogEvent

type ILogger added in v1.4.14

type ILogger interface {
	// Deprecated: Use SetChannel("context", ch) instead. This method will be removed in a future version.
	// SetContextChannel internally calls SetChannel with a fixed name "context".
	SetContextChannel(ch chan []models.LogEvent)

	// Deprecated: Use SetChannelWithBuffer("context", ch, batchSize, flushInterval) instead. This method will be removed in a future version.
	// SetContextChannelWithBuffer internally calls SetChannelWithBuffer with a fixed name "context".
	SetContextChannelWithBuffer(ch chan []models.LogEvent, batchSize int, flushInterval time.Duration)

	SetChannel(name string, ch chan []models.LogEvent)
	SetChannelWithBuffer(name string, ch chan []models.LogEvent, batchSize int, flushInterval time.Duration)
	UnregisterChannel(name string)
	WithContextWriter(contextID string) ILogger
	WithWriters(writers []writers.IWriter) ILogger
	WithConsoleWriter(config models.WriterConfiguration) ILogger

	WithFileWriter(config models.WriterConfiguration) ILogger

	WithMemoryWriter(config models.WriterConfiguration) ILogger

	WithLogStore(store writers.ILogStore, config models.WriterConfiguration) ILogger

	WithPrefix(value string) ILogger

	WithCorrelationId(value string) ILogger

	ClearCorrelationId() ILogger

	// ClearContext removes all context data from the logger
	ClearContext() ILogger

	WithLevel(lvl LogLevel) ILogger

	// WithLevelFromString applies a log level from a string configuration
	WithLevelFromString(levelStr string) ILogger

	WithContext(key string, value string) ILogger

	// Copy creates a forked copy of the logger with the same configuration and context.
	// This supports tree-like logger usage where `With*` methods do not mutate the parent.
	Copy() ILogger

	// Fluent logging methods
	Trace() ILogEvent
	Debug() ILogEvent
	Info() ILogEvent
	Warn() ILogEvent
	Error() ILogEvent
	Fatal() ILogEvent
	Panic() ILogEvent

	GetMemoryLogs(correlationid string, minLevel LogLevel) (map[string]string, error)

	// GetMemoryLogsForCorrelation retrieves all log entries for a specific correlation ID
	GetMemoryLogsForCorrelation(correlationid string) (map[string]string, error)

	// GetMemoryLogsWithLimit retrieves the most recent log entries up to the specified limit
	GetMemoryLogsWithLimit(limit int) (map[string]string, error)

	// GinWriter returns an io.Writer that integrates Gin logs with arbor's registered writers
	GinWriter(config models.WriterConfiguration) interface{}

	// GetLogFilePath returns the configured log file path if a file writer is registered
	GetLogFilePath() string
}

func GetLogger added in v1.4.14

func GetLogger() ILogger

GetLogger returns the default logger instance from the registry

func Logger added in v1.4.14

func Logger() ILogger

Logger returns the default logger instance, creating it if it doesn't exist

func NewLogger added in v1.4.31

func NewLogger() ILogger

NewLogger creates a new logger instance. This is useful for testing or when you need isolated logger instances.

type IWriterRegistry added in v1.4.31

type IWriterRegistry interface {
	// RegisterWriter registers a writer with the given name in the registry
	RegisterWriter(name string, writer writers.IWriter)

	// GetRegisteredWriter retrieves a writer by name from the registry
	// Returns nil if the writer is not found
	GetRegisteredWriter(name string) writers.IWriter

	// GetRegisteredMemoryWriter retrieves a memory writer by name from the registry
	// Returns nil if the writer is not found or is not a memory writer
	GetRegisteredMemoryWriter(name string) writers.IMemoryWriter

	// GetRegisteredWriterNames returns a list of all registered writer names
	GetRegisteredWriterNames() []string

	// UnregisterWriter removes a writer from the registry
	UnregisterWriter(name string)

	// GetWriterCount returns the number of registered writers
	GetWriterCount() int

	// GetAllRegisteredWriters returns a copy of all registered writers
	GetAllRegisteredWriters() map[string]writers.IWriter
}

IWriterRegistry defines the interface for managing a collection of named writers with thread-safe access operations

func NewWriterRegistry added in v1.4.31

func NewWriterRegistry() IWriterRegistry

NewWriterRegistry creates a new instance of WriterRegistry

type LogLevel added in v1.4.14

type LogLevel = levels.LogLevel
const (
	TraceLevel LogLevel = levels.TraceLevel
	DebugLevel LogLevel = levels.DebugLevel
	InfoLevel  LogLevel = levels.InfoLevel
	WarnLevel  LogLevel = levels.WarnLevel
	ErrorLevel LogLevel = levels.ErrorLevel
	FatalLevel LogLevel = levels.FatalLevel
	PanicLevel LogLevel = levels.PanicLevel
	Disabled   LogLevel = levels.Disabled
)

type WriterRegistry added in v1.4.31

type WriterRegistry struct {
	// contains filtered or unexported fields
}

WriterRegistry manages a collection of named writers with thread-safe access and implements the IWriterRegistry interface

func (*WriterRegistry) GetAllRegisteredWriters added in v1.4.31

func (wr *WriterRegistry) GetAllRegisteredWriters() map[string]writers.IWriter

GetAllRegisteredWriters returns a copy of all registered writers

func (*WriterRegistry) GetRegisteredMemoryWriter added in v1.4.31

func (wr *WriterRegistry) GetRegisteredMemoryWriter(name string) writers.IMemoryWriter

GetRegisteredMemoryWriter retrieves a memory writer by name from the registry. Returns nil if the writer is not found or is not a memory writer.

func (*WriterRegistry) GetRegisteredWriter added in v1.4.31

func (wr *WriterRegistry) GetRegisteredWriter(name string) writers.IWriter

GetRegisteredWriter retrieves a writer by name from the registry. Returns nil if the writer is not found.

func (*WriterRegistry) GetRegisteredWriterNames added in v1.4.31

func (wr *WriterRegistry) GetRegisteredWriterNames() []string

GetRegisteredWriterNames returns a list of all registered writer names

func (*WriterRegistry) GetWriterCount added in v1.4.31

func (wr *WriterRegistry) GetWriterCount() int

GetWriterCount returns the number of registered writers

func (*WriterRegistry) RegisterWriter added in v1.4.31

func (wr *WriterRegistry) RegisterWriter(name string, writer writers.IWriter)

RegisterWriter registers a writer with the given name in the registry

func (*WriterRegistry) UnregisterWriter added in v1.4.31

func (wr *WriterRegistry) UnregisterWriter(name string)

UnregisterWriter removes a writer from the registry

Directories

Path Synopsis
services
