logf

package module
v2.0.0-beta.2
Published: Mar 20, 2026 License: MIT Imports: 19 Imported by: 1

README

logf


Structured logging for Go — context-aware, slog-native, fast.

So you want to log things

You already have slog. It works. It's in the standard library. Why would you need anything else?

Well, most of the time you don't. But then one day your service starts handling 50K requests per second and you notice something funny: your p99 latency spikes every time the log collector hiccups. Or you realize that passing a logger through seventeen function arguments just to get a request_id in your database layer is... not great.

That's where logf comes in. Think of it as slog's cool older sibling who went to systems programming school and came back with opinions about memory allocation.

What's in the box

  • Context-aware fields — attach fields to context.Context, they show up in every log entry magically. No more threading loggers through your entire call stack like some kind of dependency injection nightmare.
  • Native slog bridge — logger.Slog() gives you a real *slog.Logger that shares everything. Fields, name, pipeline. It's not a wrapper, it's the same logger wearing a different hat.
  • Router — send logs to multiple destinations. JSON to file, colored text to console, errors to alerting. Each destination gets its own encoder and level filter. A stalled Kibana doesn't block your stderr.
  • SlabWriter — async buffered I/O that copies your log into a pre-allocated slab in ~17 ns and moves on. A background goroutine handles the actual writing. Your HTTP handler never waits for disk.
  • WriterSlot — don't know where you're logging to yet? No problem. Start logging, connect the destination later. Early logs are buffered.
  • JSON and Text encoders — logf.JSON() for machines, logf.Text() for humans. The text encoder has colors, italics, and a separator that makes your terminal look like it went to design school.
  • Builder API — one line to start, chain methods to customize. No config structs with 47 fields.
  • Zero-alloc hot path — the only allocation is Go's variadic []Field slice. Everything else is pooled, pre-allocated, or stack-allocated.

Getting started

go get github.com/ssgreg/logf/v2

Two lines to logging:

logger := logf.NewLogger().Build()
logger.Info(ctx, "hello, world", logf.String("from", "logf"))
// → {"level":"info","ts":"2026-03-19T14:04:02Z","msg":"hello, world","caller":"main.go:10","from":"logf"}

Want colors? Say no more:

logger := logf.NewLogger().EncoderFrom(logf.Text()).Build()
// Mar 19 14:04:02.167 [INF] hello, world › from=logf → main.go:10

Going to production? Crank it up:

logger := logf.NewLogger().
    Level(logf.LevelInfo).
    Output(os.Stdout).
    Build()

Logging (the fun part)

ctx := context.Background()

// The classics:
logger.Debug(ctx, "starting up")
// → {"level":"debug","msg":"starting up"}

logger.Info(ctx, "request handled", logf.String("method", "GET"), logf.Int("status", 200))
// → {"level":"info","msg":"request handled","method":"GET","status":200}

logger.Warn(ctx, "slow query", logf.Duration("elapsed", 2*time.Second))
// → {"level":"warn","msg":"slow query","elapsed":"2s"}

logger.Error(ctx, "connection failed", logf.Error(err))
// → {"level":"error","msg":"connection failed","error":"dial tcp: timeout"}

Accumulated fields — set once, included forever:

reqLogger := logger.With(logf.String("request_id", "abc-123"))
reqLogger.Info(ctx, "processing")
// → {"level":"info","msg":"processing","request_id":"abc-123"}

reqLogger.Info(ctx, "done", logf.Int("items", 3))
// → {"level":"info","msg":"done","request_id":"abc-123","items":3}

Groups — nest fields under a key:

logger.Info(ctx, "done", logf.Group("http",
    logf.String("method", "GET"),
    logf.Int("status", 200),
))
// → {"msg":"done","http":{"method":"GET","status":200}}

// Or permanently with WithGroup:
httpLogger := logger.WithGroup("http")
httpLogger.Info(ctx, "req", logf.String("method", "GET"), logf.Int("status", 200))
// → {"msg":"req","http":{"method":"GET","status":200}}

Named loggers — know who's talking:

dbLogger := logger.WithName("db")
dbLogger.Info(ctx, "connected")
// → {"logger":"db","msg":"connected"}

Context-aware fields

Here's the thing about logging in real applications: you want request_id in every single log entry. With most loggers, that means passing a derived logger through every function. With logf, you put fields in the context and forget about them:

// In your middleware — add fields once:
func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := logf.With(r.Context(),
            logf.String("request_id", r.Header.Get("X-Request-ID")),
            logf.String("method", r.Method),
            logf.String("path", r.URL.Path),
        )
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

// Somewhere deep in the call stack — fields are just there:
func handleOrder(ctx context.Context, orderID string) {
    logger.Info(ctx, "processing order", logf.String("order_id", orderID))
    // → {"msg":"processing order","request_id":"abc","method":"POST","path":"/orders","order_id":"123"}
}

Enable it with .Context() in the builder:

logger := logf.NewLogger().Context().Build()

Want to automatically extract trace IDs from OpenTelemetry spans? Write a FieldSource and pass it to .Context():

// Define once:
func otelTraceSource(ctx context.Context) []logf.Field {
    span := trace.SpanFromContext(ctx)
    if !span.SpanContext().IsValid() {
        return nil
    }
    return []logf.Field{
        logf.String("trace_id", span.SpanContext().TraceID().String()),
    }
}

// Plug it in:
logger := logf.NewLogger().Context(otelTraceSource).Build()

That's it. From now on, whenever a context carries an active OTel span, trace_id shows up in every log entry. You didn't change a single logging call in your application.

logfc — when you don't want to pass the logger at all

The logfc package puts the logger in the context. Not a global singleton — a real logger that picks up new fields as the request travels deeper through your code. Each layer adds its own details, and by the time you log something ten functions down, the entry carries the full story of how it got there:

import "github.com/ssgreg/logf/v2/logfc"

// In main or middleware:
ctx = logfc.New(ctx, logger)

// Anywhere else — no logger argument needed:
logfc.Info(ctx, "order processed", logf.Int("items", 3))

// Add fields for everything downstream:
ctx = logfc.With(ctx, logf.String("order_id", "ord-789"))
logfc.Info(ctx, "payment complete")
// → includes order_id automatically

// Need slog? Pull it out:
slogger := logfc.Get(ctx).Slog()

If no logger is in context, everything is a no-op. Zero overhead. No panics.

slog integration (they're best friends)

logger.Slog() doesn't create a new logger. It returns a *slog.Logger that IS your logf logger, just with slog's API. Same fields, same name, same pipeline, same destination. Log with either one — the output is identical.

// These two produce the same output:
logger.Info(ctx, "hello", logf.Int("n", 42))
logger.Slog().InfoContext(ctx, "hello", "n", 42)

Give it to your dependencies:

db := sqlx.NewClient(sqlx.WithLogger(logger.Slog()))
cache := redis.NewClient(redis.WithLogger(logger.Slog()))
// Their logs go through YOUR pipeline. One config to rule them all.

Here's the neat part — slog has InfoContext(ctx, ...) but the built-in handlers completely ignore the context. logf actually reads fields from it:

// Standard slog — context is decoration:
slog.InfoContext(ctx, "order placed")
// → {"msg":"order placed"}

// slog through logf — context fields included:
slog.InfoContext(ctx, "order placed")
// → {"msg":"order placed","request_id":"abc-123","trace_id":"def-456"}

Progressive enhancement — start with slog, add logf features one at a time:

// Step 1: just a faster backend — JSON to stderr
sync := logf.NewSyncHandler(logf.LevelInfo, os.Stderr, logf.JSON().Build())
slog.SetDefault(slog.New(logf.NewSlogHandler(sync)))

// Step 2: add context fields — existing slog calls magically gain request_id
slog.SetDefault(slog.New(logf.NewSlogHandler(
    logf.NewContextHandler(sync),
)))

// Step 3: add async I/O — swap stderr for SlabWriter → file
sw := logf.NewSlabWriter(file).SlabSize(64*1024).SlabCount(8).Build()
router, close, _ := logf.NewRouter().Route(logf.JSON().Build(), logf.Output(logf.LevelInfo, sw)).Build()
slog.SetDefault(slog.New(logf.NewSlogHandler(logf.NewContextHandler(router))))

// Step 4 (optional): switch hot paths to logf for typed fields
logger := logf.New(logf.NewContextHandler(router))
logger.Info(ctx, "fast path", logf.Int("status", 200))

Router (the traffic cop)

One log entry, multiple destinations, each with its own rules:

fileSlab := logf.NewSlabWriter(file).SlabSize(64*1024).SlabCount(8).Build()
jsonEnc := logf.JSON().Build()
textEnc := logf.Text().Build()

router, close, _ := logf.NewRouter().
    Route(jsonEnc,
        logf.OutputCloser(logf.LevelDebug, fileSlab), // everything to file (async)
        logf.Output(logf.LevelError, alertWriter),    // errors to alerting
    ).
    Route(textEnc,
        logf.Output(logf.LevelInfo, os.Stderr),       // colored text to console (sync)
    ).
    Build()
defer close() // flushes and closes fileSlab

The Router encodes once per encoder group. Two outputs sharing the same encoder? One encode call. Stalled network destination? The file output doesn't care — each writer is independent.

Mix sync and async — because console output should be instant but file writes can be batched:

fileSlab := logf.NewSlabWriter(file).
    SlabSize(64*1024).
    SlabCount(8).
    FlushInterval(100*time.Millisecond).
    Build()

router, close, _ := logf.NewRouter().
    Route(enc,
        logf.OutputCloser(logf.LevelDebug, fileSlab), // async, Router closes it
        logf.Output(logf.LevelInfo, os.Stderr),       // sync, direct write
    ).
    Build()
defer close() // flushes and closes fileSlab automatically

SlabWriter (the speed demon)

Here's how it works: your goroutine copies log bytes into a pre-allocated slab buffer under a mutex (~17 ns memcpy). A background goroutine writes filled slabs to the destination. Your goroutine never touches the disk. Never blocks on the network. Just copies bytes and moves on.

sw := logf.NewSlabWriter(file).
    SlabSize(64*1024).
    SlabCount(8).
    FlushInterval(100*time.Millisecond).
    Build()
defer sw.Close()

When the I/O goroutine can't keep up? The slab pool absorbs the spike. 8 slabs × 64 KB = 512 KB of burst tolerance. At 10K msg/sec with 256-byte messages, that's ~200 ms of I/O stall with zero caller impact.

Drop mode — for when losing a log is better than blocking a request:

sw := logf.NewSlabWriter(conn).
    SlabSize(64*1024).
    SlabCount(8).
    DropOnFull().
    FlushInterval(100*time.Millisecond).
    ErrorWriter(os.Stderr).
    Build()

Keep an eye on it:

stats := sw.Stats()
// stats.Dropped      — messages lost (dropOnFull mode)
// stats.Written      — messages accepted
// stats.QueuedSlabs  — slabs waiting for I/O
// stats.WriteErrors  — I/O failures

See docs/BUFFERING.md for capacity planning.

WriterSlot (the patient one)

Sometimes you need a logger before you know where the logs are going. Config isn't parsed yet. The database connection isn't up. The cloud SDK hasn't initialized.

WriterSlot lets you start logging immediately and connect the real destination later:

slot := logf.NewWriterSlot(logf.WithSlotBuffer(4096))
logger := logf.NewLogger().Output(slot).Build()

logger.Info(ctx, "booting up")       // buffered
logger.Info(ctx, "config loaded")    // buffered

slot.Set(file)                       // buffer flushed, future writes go to file

logger.Info(ctx, "ready to serve")   // written directly

Why not just use slog?

Honestly? For most apps, slog is fine. logf is for when:

  • You're logging a lot (>100K entries/sec) and encoding is parallel across goroutines with pre-allocated slabs (~17 ns per write)
  • Your I/O is unreliable (slab pool gives you p99 = 71µs vs slog's p99 = 2.5ms under simulated slow disk)
  • You want context fields without the ceremony (slog passes context through but never reads it)
  • You need fan-out to multiple destinations with independent encoding and I/O strategies

See docs/ARCHITECTURE.md for the gory details.

Who uses logf

  • Acronis — global cybersecurity and data protection platform

Testing

// Silent tests (discard everything):
logger := logf.DisabledLogger()

// Capture logs for assertions:
var buf bytes.Buffer
logger := logf.NewLogger().Output(&buf).Build()
logger.Info(ctx, "hello")
// buf.String() has your JSON

// Logs in test output (visible with -v or on failure):
type testWriter struct{ t testing.TB }
func (w testWriter) Write(p []byte) (int, error) {
    w.t.Helper()
    w.t.Log(strings.TrimRight(string(p), "\n"))
    return len(p), nil
}
logger := logf.NewLogger().Output(testWriter{t}).Build()

Log rotation

logf doesn't rotate logs — that's what lumberjack and logrotate are for:

import "gopkg.in/natefinch/lumberjack.v2"

rotator := &lumberjack.Logger{
    Filename:   "/var/log/myapp.log",
    MaxSize:    100, // MB
    MaxBackups: 3,
    MaxAge:     28,
}
sw := logf.NewSlabWriter(rotator).SlabSize(64*1024).SlabCount(8).Build()

Viewing JSON logs

JSON is great for machines but hard on the eyes. hl is a log viewer that renders JSON logs with colors, field highlighting, and filtering — similar to logf's text encoder but for any JSON log file:

hl app.log                     # colored, human-readable
hl app.log -f 'level == error' # filter by level
tail -f app.log | hl           # live streaming

Performance

Parallel benchmarks on Apple M1 Pro, Go 1.24, count=5. Full results and methodology in benchmarks/.

Latency (ns/op, lower is better)

| Scenario | logf | slog | slog+logf | zap | zerolog | logrus |
|---|---|---|---|---|---|---|
| No fields | 43 | 221 | 53 | 50 | 26 | 500 |
| 2 scalars | 94 | 237 | 133 | 126 | 32 | 820 |
| 6 fields (bytes, time, object…) | 257 | 722 | 836 | 611 | 147 | 1937 |
| With() per call | 200 | 363 | 254 | 579 | 196 | 812 |
| Caller + 2 scalars | 232 | 471 | 246 | 339 | 232 | — |
| With() (no log call) | 60 | 343 | 227 | 456 | 68 | 226 |
| WithGroup() (no log call) | 21 | 98 | 67 | 430 | — | — |

Allocations (B/op / allocs)

| Scenario | logf | slog | slog+logf | zap | zerolog | logrus |
|---|---|---|---|---|---|---|
| No fields | 0 | 0 | 0 | 0 | 0 | 836 / 16 |
| 2 scalars | 112 / 1 | 0 | 112 / 1 | 128 / 1 | 0 | 1413 / 23 |
| 6 fields | 355 / 1 | 1046 / 13 | 710 / 5 | 1188 / 7 | 0 | 3220 / 46 |
| With() per call | 177 / 2 | 352 / 8 | 371 / 6 | 1427 / 6 | 512 / 1 | 1413 / 23 |
| With() | 176 / 2 | 352 / 8 | 368 / 6 | 1425 / 6 | 0 | 416 / 3 |
| WithGroup() | 64 / 1 | 184 / 4 | 128 / 3 | 1361 / 6 | — | — |

The highlights:

  • Faster than zap on most scenarios. With() is 7.6× faster (60 ns vs 456 ns), WithGroup() is 20× faster (21 ns vs 430 ns). These are the "new logger per request" operations — they happen a lot.
  • 2–5× faster than slog across the board. slog shows 0 allocs on small field counts (inline buffer), but pays for it in latency.
  • 6 fields: 257 ns — zap needs 611 ns, slog needs 722 ns. logf keeps 1 alloc where slog does 13.
  • slog+logf — keep the standard slog.Logger API, get 2–4× faster than stock slog. Caller lookup is nearly free: 246 ns vs slog's 471 ns. The one weak spot is 6 fields (836 ns) — slog's any-based attrs force reflection that logf's typed fields avoid.
  • zerolog is faster (zero-alloc by design), but you pay for it: no multi-destination routing, no context-aware fields, no slog compatibility, no async I/O, and a fluent API where a forgotten .Msg() silently drops the entry.
  • Router: 36 ns for a log call routed to io.Discard — encoding, level check, and dispatch included.
  • SlabWriter: 0 allocs, async I/O. Your goroutine does a memcpy and moves on. Background I/O handles the rest.

Real file I/O (parallel, 6 fields)

The benchmarks above use io.Discard. Here's what happens with a real file and a realistic payload (bytes, time, []int, []string, duration, object) — where SlabWriter's async architecture actually matters:

| Config | ns/op | B/op | allocs |
|---|---|---|---|
| logf + SlabWriter | 744 | 353 | 1 |
| zerolog + bufio 256KB | 1098 | 0 | 0 |
| zap + BufferedWriteSyncer | 1097 | 1187 | 7 |
| slog + bufio 256KB | 1986 | 1076 | 17 |

All loggers use 256KB of buffering. With real I/O and realistic fields, logf is 32% faster than zerolog and zap — the typed encoder advantage grows as field count and complexity increase.

Under I/O pressure (5% of writes stall for 5ms — think slow network, overloaded disk):

| Logger | p50 | p99 | p999 |
|---|---|---|---|
| logf (SlabWriter) | 833 ns | 43 µs | 163 µs |
| zap (buffered) | 917 ns | 56 µs | 180 µs |
| zerolog (unbuffered) | 7.5 µs | 5.9 ms | 10.2 ms |
| slog (unbuffered) | 14 µs | 17.9 ms | 24 ms |

Sync loggers block on every stalled write. logf copies bytes into a slab and moves on — the background goroutine deals with the slow destination. Your HTTP handler never notices.

The fine print

  • One allocation per log call with fields. That's Go's variadic []Field slice. Calls without fields are zero-alloc.
  • Oversized messages allocate (SlabWriter only). Messages bigger than slabSize get a dedicated buffer. Normal log entries (100–500 bytes) with normal slabs (16–64 KB) never hit this.

Learn more

Documentation

Index

Constants

View Source
const (
	DefaultFieldKeyLevel  = "level"
	DefaultFieldKeyMsg    = "msg"
	DefaultFieldKeyTime   = "ts"
	DefaultFieldKeyName   = "logger"
	DefaultFieldKeyCaller = "caller"
)

Default field keys.

View Source
const (
	PageSize = 4 * 1024
)

PageSize is the recommended buffer size.

Variables

View Source
var NewErrorEncoder = errorEncoderGetter(
	func(c ErrorEncoderConfig) ErrorEncoder {
		return func(key string, err error, enc FieldEncoder) {
			encodeError(key, err, enc, c.WithDefaults())
		}
	},
)

NewErrorEncoder creates an ErrorEncoder with the given config. Although declared as a package variable, call it like a function: NewErrorEncoder(cfg) returns an ErrorEncoder.

Functions

func AllocEncoderSlot

func AllocEncoderSlot() int

AllocEncoderSlot returns a unique 1-based slot index for an encoder to use with Bag.LoadCache and Bag.StoreCache. Call this once when you create an encoder — the slot lets the Bag cache encoded bytes per encoder format so repeated encoding is nearly free.

If all slots are taken, returns 0 (no caching, graceful degradation — everything still works, just without the cache speedup).
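
The contract — unique 1-based indices while slots last, then 0 forever — can be sketched with an atomic counter and a cap. The cap of 4 here is illustrative, not logf's real limit:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

const maxSlots = 4 // illustrative cap, not logf's actual limit

var nextSlot atomic.Int32

// allocSlot hands out unique 1-based indices and returns 0 once the
// pool is exhausted — the graceful-degradation contract described above.
func allocSlot() int {
	n := nextSlot.Add(1)
	if n > maxSlots {
		return 0 // no cache slot left; caching is skipped, nothing breaks
	}
	return int(n)
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Print(allocSlot(), " ")
	}
	fmt.Println() // 1 2 3 4 0 0
}
```

Returning 0 instead of an error keeps the caller's fast path branch-light: the encoder just checks `slot != 0` before touching the cache.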

func CallerPC

func CallerPC(skip int) uintptr

CallerPC captures the program counter of the caller, skipping the given number of stack frames. Returns 0 if the caller cannot be determined. You usually do not need to call this directly — the Logger handles it.

func ContextWithBag

func ContextWithBag(ctx context.Context, bag *Bag) context.Context

ContextWithBag returns a new context carrying the given Bag. This is the low-level API — most callers should use logf.With(ctx, fields...) instead, which handles Bag creation and chaining automatically.

func DefaultErrorEncoder

func DefaultErrorEncoder(key string, err error, enc FieldEncoder)

DefaultErrorEncoder encodes an error as one or two fields: the error message under the given key, and (if the error implements fmt.Formatter) a verbose field with the full "%+v" output (stack traces, etc.).

func DefaultLevelEncoder

func DefaultLevelEncoder(lvl Level, m TypeEncoder)

DefaultLevelEncoder formats levels as lower-case strings ("debug", "info", "warn", "error"). This is the default for JSON output.

func EscapeString

func EscapeString[S []byte | string](buf *Buffer, s S) error

EscapeString JSON-escapes s (which can be a string or []byte) and appends the result to buf. It handles control characters, backslash, quotes, and invalid UTF-8 sequences.

func FloatSecondsDurationEncoder

func FloatSecondsDurationEncoder(d time.Duration, e TypeEncoder)

FloatSecondsDurationEncoder formats durations as floating-point seconds (e.g. 1.5 for one and a half seconds).

func FullCallerEncoder

func FullCallerEncoder(pc uintptr, m TypeEncoder)

FullCallerEncoder formats the caller as the full filesystem path with line number. More verbose than ShortCallerEncoder but unambiguous when you have multiple packages with the same file name.

func HasField

func HasField(ctx context.Context, key string) bool

HasField reports whether the context's Bag contains a field with the given key. Useful for conditional field injection — for example, adding a trace ID only if one is not already present.

func LogDepth

func LogDepth(l *Logger, ctx context.Context, depth int, lvl Level, text string, fs ...Field)

LogDepth logs at the given level, adding depth extra frames to the caller skip count. It is a package-level function (not a method) so that wrapper packages like logfc can log through an existing Logger without allocating a new one on every call.
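
The depth parameter maps onto runtime caller-frame skipping. A sketch of the underlying mechanism (logAt and wrapper are hypothetical names, not logf's API):

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// logAt resolves the caller `skip` frames up the stack — the knob that
// LogDepth's depth parameter adjusts so wrappers report their caller's
// caller instead of themselves.
func logAt(skip int, msg string) string {
	_, file, line, ok := runtime.Caller(skip + 1) // +1 skips logAt itself
	if !ok {
		return msg
	}
	return fmt.Sprintf("%s:%d %s", filepath.Base(file), line, msg)
}

// wrapper simulates a helper package like logfc: it adds one stack frame,
// so it passes one extra skip to keep the reported caller accurate.
func wrapper(msg string) string { return logAt(1, msg) }

func main() {
	fmt.Println(logAt(0, "direct"))  // reports the call site in main
	fmt.Println(wrapper("wrapped"))  // still reports main, not wrapper's own line
}
```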

func NanoDurationEncoder

func NanoDurationEncoder(d time.Duration, e TypeEncoder)

NanoDurationEncoder formats durations as integer nanoseconds.

func NewContext

func NewContext(parent context.Context, logger *Logger) context.Context

NewContext returns a new Context carrying the given Logger. Retrieve it later with FromContext. No more threading loggers through your entire call stack like some kind of dependency injection nightmare.

func NewSlogHandler

func NewSlogHandler(w Handler) slog.Handler

NewSlogHandler returns a slog.Handler that bridges the standard library's slog package to logf's pipeline. Use this when you want third-party code that speaks slog to flow through your logf Handler, Encoder, and Writer setup.

Fields added with slog.Logger.With become [Entry.LoggerBag] (cached by the encoder). The handler propagates context to [Handler.Handle], so field bags attached via With are resolved by NewContextHandler.

func RFC3339NanoTimeEncoder

func RFC3339NanoTimeEncoder(t time.Time, e TypeEncoder)

RFC3339NanoTimeEncoder formats timestamps as RFC3339 strings with nanosecond precision (e.g. "2006-01-02T15:04:05.999999999Z07:00").

func RFC3339TimeEncoder

func RFC3339TimeEncoder(t time.Time, e TypeEncoder)

RFC3339TimeEncoder formats timestamps as RFC3339 strings (e.g. "2006-01-02T15:04:05Z07:00"). This is the default for JSON output.

func ShortCallerEncoder

func ShortCallerEncoder(pc uintptr, m TypeEncoder)

ShortCallerEncoder formats the caller as "package/file.go:line" — compact enough for log output while still letting you find the source. This is the default CallerEncoder.

func ShortTextLevelEncoder

func ShortTextLevelEncoder(lvl Level, m TypeEncoder)

ShortTextLevelEncoder formats levels as compact 3-character uppercase strings (DBG, INF, WRN, ERR). This is the default for text/console output where horizontal space is precious.

func StringDurationEncoder

func StringDurationEncoder(d time.Duration, m TypeEncoder)

StringDurationEncoder formats durations as human-readable strings like "4.5s", "300ms", or "1h2m3s" — the same format as time.Duration.String() but without allocating. This is the default.

func UnixNanoTimeEncoder

func UnixNanoTimeEncoder(t time.Time, e TypeEncoder)

UnixNanoTimeEncoder formats timestamps as integer nanoseconds since the Unix epoch. Compact and machine-friendly, but not human-readable.

func UpperCaseLevelEncoder

func UpperCaseLevelEncoder(lvl Level, m TypeEncoder)

UpperCaseLevelEncoder formats levels as upper-case strings ("DEBUG", "INFO", "WARN", "ERROR").

func With

func With(ctx context.Context, fs ...Field) context.Context

With returns a new context carrying the given fields. If the context already has a Bag, the fields are appended to it. This is the primary way to attach request-scoped data (trace IDs, user info, etc.) that will automatically appear in every log entry — no need to pass fields around manually.

Types

type ArrayEncoder

type ArrayEncoder interface {
	EncodeLogfArray(TypeEncoder) error
}

ArrayEncoder lets your custom types serialize themselves as JSON arrays (or whatever array representation the encoder uses). Implement EncodeLogfArray and pass your type to logf.Array().

Example:

type stringArray []string

func (o stringArray) EncodeLogfArray(e TypeEncoder) error {
	for i := range o {
		e.EncodeTypeString(o[i])
	}
	return nil
}

type Bag

type Bag struct {
	// contains filtered or unexported fields
}

Bag is an immutable, goroutine-safe linked list of Fields — the backbone of logf's zero-copy field accumulation. Every call to With or WithGroup creates a new node pointing to the parent in O(1) time with no field copies. The encoder walks the chain at encoding time, and results are cached per encoder so repeated encoding of the same Bag is essentially free.
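
The O(1) accumulation scheme is easy to see in miniature: each With allocates exactly one node pointing at its parent, and only a full walk materializes the field list. A sketch of the data structure (caching and groups omitted), not Bag's actual code:

```go
package main

import "fmt"

type field struct{ key, val string }

// bag is an immutable linked-list node: with never copies existing
// fields, it just allocates one new node pointing at the parent.
type bag struct {
	parent *bag
	fields []field
}

func (b *bag) with(fs ...field) *bag { return &bag{parent: b, fields: fs} }

// all walks the chain and returns fields in parent-first order —
// the allocating slow path; hot-path encoding would walk nodes directly.
func (b *bag) all() []field {
	if b == nil {
		return nil
	}
	return append(b.parent.all(), b.fields...)
}

func main() {
	var root *bag
	req := root.with(field{"request_id", "abc"})
	ord := req.with(field{"order_id", "123"})

	fmt.Println(len(req.all()), len(ord.all()), ord.all()[0].key) // 1 2 request_id
}
```

Because nodes are never mutated after creation, any number of goroutines can extend the same parent concurrently — each gets its own child node.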

func BagFromContext

func BagFromContext(ctx context.Context) *Bag

BagFromContext returns the Bag stored in the context, or nil if none was set. Safe to call on any context.

func NewBag

func NewBag(fs ...Field) *Bag

NewBag creates a root Bag node with the given fields. Most of the time you will not call this directly — Logger.With and logf.With handle Bag creation for you.

func (*Bag) Fields

func (b *Bag) Fields() []Field

Fields collects all fields across the entire Bag chain in parent-first order. This allocates a new slice when the chain has more than one node, so for hot-path encoding prefer walking OwnFields + Parent directly.

func (*Bag) Group

func (b *Bag) Group() string

Group returns the group name for this Bag node, or an empty string if it is a regular field node.

func (*Bag) HasField

func (b *Bag) HasField(key string) bool

HasField reports whether any node in the Bag chain contains a field with the given key. Walks the full chain from this node up to the root.

func (*Bag) LoadCache

func (b *Bag) LoadCache(slot int) []byte

LoadCache returns previously cached encoded bytes for the given encoder slot, or nil on a cache miss. Slot 0 (no caching) always returns nil.

func (*Bag) OwnFields

func (b *Bag) OwnFields() []Field

OwnFields returns only the fields stored directly in this Bag node, without walking up to parents. Useful for cache-aware encoding.

func (*Bag) Parent

func (b *Bag) Parent() *Bag

Parent returns the parent Bag in the linked list, or nil if this is the root node.

func (*Bag) StoreCache

func (b *Bag) StoreCache(slot int, data []byte)

StoreCache saves encoded bytes for the given encoder slot so future Encode calls can skip re-encoding this Bag. Slot 0 (no caching) is a no-op. The internal cache structure is allocated lazily on first store.

func (*Bag) With

func (b *Bag) With(fs ...Field) *Bag

With returns a new Bag that includes the given additional fields. The original Bag is not modified — the new node simply points to the parent. O(1) time, zero copies.

func (*Bag) WithGroup

func (b *Bag) WithGroup(name string) *Bag

WithGroup returns a new Bag that opens a named group. All fields added to descendant nodes via subsequent With calls will be logically nested under this group name when encoded (e.g., as a nested JSON object). The original Bag is not modified.

type Buffer

type Buffer struct {
	Data []byte
}

Buffer is a lightweight byte buffer used throughout the encoder pipeline. It wraps a []byte with append-oriented methods and integrates with sync.Pool via GetBuffer/Free for allocation-free encoding.
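
The GetBuffer/Free lifecycle follows the standard sync.Pool pattern — sketched below with an illustrative 4 KB pre-size (matching PageSize above); this is the general technique, not logf's exact code:

```go
package main

import (
	"fmt"
	"sync"
)

type buffer struct{ Data []byte }

var bufPool = sync.Pool{
	New: func() any { return &buffer{Data: make([]byte, 0, 4*1024)} },
}

// getBuffer returns a pooled buffer, reset and ready to append into.
func getBuffer() *buffer { return bufPool.Get().(*buffer) }

// free resets the buffer (keeping its capacity) and returns it to the
// pool. The buffer must not be touched after free.
func (b *buffer) free() {
	b.Data = b.Data[:0]
	bufPool.Put(b)
}

func main() {
	b := getBuffer()
	b.Data = append(b.Data, "encoded entry"...)
	fmt.Println(len(b.Data), cap(b.Data) >= 4096) // 13 true
	b.free() // returned to the pool; later getBuffer calls may reuse it
}
```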

func GetBuffer

func GetBuffer() *Buffer

GetBuffer grabs a *Buffer from the pool, reset and ready to use. When you are done, call Buffer.Free to return it — this keeps allocations close to zero on the hot path.

func NewBuffer

func NewBuffer() *Buffer

NewBuffer creates a new Buffer with the default 4 KB capacity.

func NewBufferWithCapacity

func NewBufferWithCapacity(capacity int) *Buffer

NewBufferWithCapacity creates a new Buffer pre-allocated to the given number of bytes.

func (*Buffer) AppendBool

func (b *Buffer) AppendBool(n bool)

AppendBool appends "true" or "false" according to the given bool.

func (*Buffer) AppendByte

func (b *Buffer) AppendByte(data byte)

AppendByte appends a single byte to the Buffer.

func (*Buffer) AppendBytes

func (b *Buffer) AppendBytes(data []byte)

AppendBytes appends a byte slice to the Buffer.

func (*Buffer) AppendFloat32

func (b *Buffer) AppendFloat32(n float32)

AppendFloat32 appends the string form of the given float32.

func (*Buffer) AppendFloat64

func (b *Buffer) AppendFloat64(n float64)

AppendFloat64 appends the string form of the given float64.

func (*Buffer) AppendInt

func (b *Buffer) AppendInt(n int64)

AppendInt appends the base-10 string representation of the given integer.

func (*Buffer) AppendString

func (b *Buffer) AppendString(data string)

AppendString appends a string to the Buffer.

func (*Buffer) AppendUint

func (b *Buffer) AppendUint(n uint64)

AppendUint appends the base-10 string representation of the given unsigned integer.

func (*Buffer) Back

func (b *Buffer) Back() byte

Back returns the last byte in the Buffer. The caller must ensure the Buffer is not empty.

func (*Buffer) Bytes

func (b *Buffer) Bytes() []byte

Bytes returns the underlying byte slice as is.

func (*Buffer) Cap

func (b *Buffer) Cap() int

Cap returns the capacity of the underlying byte slice.

func (*Buffer) EnsureSize

func (b *Buffer) EnsureSize(s int)

EnsureSize guarantees that at least s bytes can be appended without a reallocation.

func (*Buffer) ExtendBytes

func (b *Buffer) ExtendBytes(s int) []byte

ExtendBytes grows the Buffer by s bytes and returns a slice pointing to the newly added region. Useful for in-place encoding (e.g., base64).

func (*Buffer) Free

func (b *Buffer) Free()

Free returns the Buffer to the pool for reuse. The Buffer must not be accessed after calling Free.

func (*Buffer) Len

func (b *Buffer) Len() int

Len returns the length of the underlying byte slice.

func (*Buffer) Reset

func (b *Buffer) Reset()

Reset resets the underlying byte slice.

func (*Buffer) String

func (b *Buffer) String() string

String implements fmt.Stringer.

func (*Buffer) Truncate

func (b *Buffer) Truncate(n int)

Truncate shrinks the Buffer to the given length.

func (*Buffer) Write

func (b *Buffer) Write(p []byte) (n int, err error)

Write implements io.Writer.

type CallerEncoder

type CallerEncoder func(pc uintptr, m TypeEncoder)

CallerEncoder is a function that resolves a program counter and writes the caller location (file + line) into the log output via TypeEncoder.

type ContextHandler

type ContextHandler struct {
	// contains filtered or unexported fields
}

ContextHandler is the Handler middleware that makes context-based logging work. It extracts the Bag from the context (populated by logf.With) and any external fields from FieldSource functions, attaches them to the Entry, and passes it downstream. Without a ContextHandler in the pipeline, context fields are silently ignored.

func NewContextHandler

func NewContextHandler(next Handler, sources ...FieldSource) *ContextHandler

NewContextHandler returns a new ContextHandler wrapping the given Handler. Optional FieldSource functions are called on every Handle to pull in additional fields from the context (prepended to Entry.Fields so they appear before per-call fields).

func (*ContextHandler) Enabled

func (w *ContextHandler) Enabled(ctx context.Context, lvl Level) bool

Enabled delegates to the downstream Handler to check whether the given level is active.

func (*ContextHandler) Handle

func (w *ContextHandler) Handle(ctx context.Context, e Entry) error

Handle extracts the Bag from the context, collects fields from any registered FieldSource functions, attaches everything to the Entry, and hands it off to the downstream Handler.

type DurationEncoder

type DurationEncoder func(time.Duration, TypeEncoder)

DurationEncoder is a function that formats a time.Duration into the log output via the TypeEncoder.

type Encoder

type Encoder interface {
	Encode(Entry) (*Buffer, error)
	Clone() Encoder
}

Encoder is the interface that turns an Entry into bytes — it decides your log format (JSON, text, or whatever you dream up). The built-in JSON and Text encoders handle most needs, but implementing Encoder lets you go fully custom.

Encode serializes the Entry and returns a pooled *Buffer. The caller must call Buffer.Free when done. Encode is safe for concurrent use — implementations handle internal cloning and buffer pooling.

Clone returns an independent copy that shares immutable config but has its own mutable state, suitable for use in another goroutine.

func NewJSONEncoder

func NewJSONEncoder(cfg JSONEncoderConfig) Encoder

NewJSONEncoder creates a JSON Encoder from a JSONEncoderConfig struct. For a friendlier builder-style API, use JSON() instead.

func NewTextEncoder

func NewTextEncoder(cfg TextEncoderConfig) Encoder

NewTextEncoder creates a text Encoder from a TextEncoderConfig struct. For a friendlier builder-style API, use Text() instead.

type EncoderBuilder

type EncoderBuilder interface {
	Build() Encoder
}

EncoderBuilder builds an Encoder from accumulated configuration. Implemented by JSONEncoderBuilder and TextEncoderBuilder, and accepted by LoggerBuilder.EncoderFrom for composable builder chains.

type Entry

type Entry struct {
	// LoggerBag holds logger-scoped fields added via Logger.With. These are
	// typically service-level context like "component" or "version".
	LoggerBag *Bag

	// Bag holds request-scoped fields extracted from context by ContextHandler.
	// Think trace IDs, request metadata — anything you stuff into the context
	// via logf.With(ctx, ...).
	Bag *Bag

	// Fields are the per-call fields passed directly to Debug/Info/Warn/Error.
	Fields []Field

	// Level is the severity of this log record.
	Level Level

	// Time is when this log record was created (usually time.Now()).
	Time time.Time

	// LoggerName is the dot-separated name set via Logger.WithName.
	// Empty string means the logger has no name.
	LoggerName string

	// Text is the human-readable log message.
	Text string

	// CallerPC is the program counter of the call site. Zero means caller
	// reporting is disabled or unavailable.
	CallerPC uintptr
}

Entry is a single log record — the thing that travels through the pipeline from Logger to Handler to Encoder. It carries the message, level, timestamp, caller info, and all accumulated fields (both from Logger.With and from context). You rarely create one yourself; the Logger builds it for you on every Debug/Info/Warn/Error call.

type ErrorEncoder

type ErrorEncoder func(string, error, FieldEncoder)

ErrorEncoder is a function that writes an error into the log output. It receives the field key, the error, and a FieldEncoder so it can emit one or more fields (e.g., a short message plus a verbose stack).

type ErrorEncoderConfig

type ErrorEncoderConfig struct {
	VerboseFieldSuffix string
	NoVerboseField     bool
}

ErrorEncoderConfig controls how errors are encoded — specifically the verbose field suffix and whether verbose output is included at all.

func (ErrorEncoderConfig) WithDefaults

func (c ErrorEncoderConfig) WithDefaults() ErrorEncoderConfig

WithDefaults returns a copy of the config with zero-value fields replaced by defaults (verbose suffix ".verbose").

type Field

type Field struct {
	Key  string
	Type FieldType
	Any  interface{}
	Ptr  unsafe.Pointer
	Val  int64
}

Field is the fundamental key-value unit in logf's structured logging. Every Bool(), String(), Int(), etc. call creates one of these. Fields are designed to be small (56 bytes) and allocation-free for scalar types — the value is packed inline rather than boxed into an interface.

Layout (56 bytes):

Key  string         // 16  field name
Type FieldType      //  8  (1 byte + 7 padding)
Any  interface{}    // 16  error, object, array, stringer, any
Ptr  unsafe.Pointer //  8  slice/string data pointer
Val  int64          //  8  scalar value OR slice/string length

func Any

func Any(k string, v interface{}) Field

Any returns a Field for an arbitrary value, picking the most efficient typed representation it can via a type switch. It handles all the common Go types (scalars, pointers, slices, time, errors, Stringer) and falls back to reflection for named types.

For hot paths, prefer the specific constructors (String, Int, etc.) — they avoid the type switch overhead entirely.

func Array

func Array(k string, v ArrayEncoder) Field

Array returns a Field that carries a custom array value under the given key. The ArrayEncoder's EncodeLogfArray method is called at encoding time.

func Bool

func Bool(k string, v bool) Field

Bool returns a Field that carries a boolean value under the given key.

func ByteString

func ByteString(k string, v []byte) Field

ByteString returns a Field that interprets the []byte as a UTF-8 string (not base64-encoded like Bytes). Use this when you have text data in a byte slice and want it logged as a readable string.

func Bytes

func Bytes(k string, v []byte) Field

Bytes returns a Field that carries a []byte value under the given key. The bytes are base64-encoded in JSON output.

func Duration

func Duration(k string, v time.Duration) Field

Duration returns a Field that carries a time.Duration value under the given key.

func Durations

func Durations(k string, v []time.Duration) Field

Durations returns a Field that carries a []time.Duration value under the given key.

func Error

func Error(v error) Field

Error returns a Field that carries an error under the key "error". It is shorthand for NamedError("error", v).

func Fields

func Fields(ctx context.Context) []Field

Fields returns all fields from the context's Bag, or nil if the context has no Bag.

func Float32

func Float32(k string, v float32) Field

Float32 returns a Field that carries a float32 value under the given key.

func Float64

func Float64(k string, v float64) Field

Float64 returns a Field that carries a float64 value under the given key.

func Floats64

func Floats64(k string, v []float64) Field

Floats64 returns a Field that carries a []float64 value under the given key.

func Formatter

func Formatter(k string, verb string, v interface{}) Field

Formatter returns a Field that formats the value with fmt.Sprintf using the given verb and stores the result as a string.

func FormatterV

func FormatterV(k string, v interface{}) Field

FormatterV returns a Field that formats the value with "%#v" (Go-syntax representation) and stores the result as a string under the given key.

func Group

func Group(k string, fs ...Field) Field

Group returns a Field that nests the given fields as a sub-object under the given key. Think of it as an inline WithGroup for a single log call.

Example:

logger.Info(ctx, "done",
    logf.Group("request", logf.String("id", "abc"), logf.Int("status", 200)),
)
// → {"msg":"done", "request":{"id":"abc", "status":200}}

func Inline

func Inline(v ObjectEncoder) Field

Inline returns a Field that splices the ObjectEncoder's fields directly into the parent object — no wrapping key, no nesting. Perfect for flattening a struct's fields into the log entry.

Example:

logger.Info(ctx, "request handled",
    logf.Inline(requestInfo),
    logf.Int("status", 200),
)
// → {"msg":"request handled", "trace_id":"abc", "method":"GET", "status":200}

func Int

func Int(k string, v int) Field

Int returns a Field that carries an int value under the given key.

func Int8

func Int8(k string, v int8) Field

Int8 returns a Field that carries an int8 value under the given key.

func Int16

func Int16(k string, v int16) Field

Int16 returns a Field that carries an int16 value under the given key.

func Int32

func Int32(k string, v int32) Field

Int32 returns a Field that carries an int32 value under the given key.

func Int64

func Int64(k string, v int64) Field

Int64 returns a Field that carries an int64 value under the given key.

func Ints

func Ints(k string, v []int) Field

Ints returns a Field that carries a []int value under the given key.

func Ints64

func Ints64(k string, v []int64) Field

Ints64 returns a Field that carries a []int64 value under the given key.

func NamedError

func NamedError(k string, v error) Field

NamedError returns a Field that carries an error value under the given key.

func Object

func Object(k string, v ObjectEncoder) Field

Object returns a Field that carries a custom object value under the given key. The ObjectEncoder's EncodeLogfObject method is called at encoding time.

func String

func String(k string, v string) Field

String returns a Field that carries a string value under the given key.

func Stringer

func Stringer(k string, v fmt.Stringer) Field

Stringer returns a Field that calls v.String() and logs the result as a string under the given key. Nil values are logged as "nil".

func Strings

func Strings(k string, v []string) Field

Strings returns a Field that carries a []string value under the given key.

func Time

func Time(k string, v time.Time) Field

Time returns a Field that carries a time.Time value under the given key.

func Uint

func Uint(k string, v uint) Field

Uint returns a Field that carries a uint value under the given key.

func Uint8

func Uint8(k string, v uint8) Field

Uint8 returns a Field that carries a uint8 value under the given key.

func Uint16

func Uint16(k string, v uint16) Field

Uint16 returns a Field that carries a uint16 value under the given key.

func Uint32

func Uint32(k string, v uint32) Field

Uint32 returns a Field that carries a uint32 value under the given key.

func Uint64

func Uint64(k string, v uint64) Field

Uint64 returns a Field that carries a uint64 value under the given key.

func (Field) Accept

func (fd Field) Accept(v FieldEncoder)

Accept dispatches the Field to the appropriate FieldEncoder method based on its FieldType. This is the bridge between the type-erased Field storage and the strongly-typed encoder interface.

type FieldEncoder

type FieldEncoder interface {
	EncodeFieldAny(string, interface{})
	EncodeFieldBool(string, bool)
	EncodeFieldInt64(string, int64)
	EncodeFieldUint64(string, uint64)
	EncodeFieldFloat64(string, float64)
	EncodeFieldDuration(string, time.Duration)
	EncodeFieldError(string, error)
	EncodeFieldTime(string, time.Time)
	EncodeFieldString(string, string)
	EncodeFieldStrings(string, []string)
	EncodeFieldBytes(string, []byte)
	EncodeFieldInts64(string, []int64)
	EncodeFieldFloats64(string, []float64)
	EncodeFieldDurations(string, []time.Duration)
	EncodeFieldArray(string, ArrayEncoder)
	EncodeFieldObject(string, ObjectEncoder)
	EncodeFieldGroup(string, []Field)
}

FieldEncoder provides methods for encoding key-value pairs. It is the interface that ObjectEncoder and ErrorEncoder receive to write named fields into the output. Each method encodes one field with the given key and typed value.

type FieldSource

type FieldSource func(ctx context.Context) []Field

FieldSource is a function that extracts fields from a context. Pass one to NewContextHandler or LoggerBuilder.Context to automatically inject fields from external sources — tracing libraries, request ID middleware, authentication context, you name it.
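A sketch of a FieldSource that lifts a request ID out of the context; requestIDKey and the middleware that stores the value are assumptions for illustration, not part of logf:

```go
type ctxKey struct{}

var requestIDKey ctxKey

// requestIDSource returns a request_id field when the context carries one.
func requestIDSource(ctx context.Context) []logf.Field {
	if id, ok := ctx.Value(requestIDKey).(string); ok {
		return []logf.Field{logf.String("request_id", id)}
	}
	return nil
}

logger := logf.NewLogger().Context(requestIDSource).Build()
```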

type FieldType

type FieldType byte

FieldType tells the encoder how to interpret the data packed inside a Field. Each type corresponds to a specific encoding path in the FieldEncoder.

const (
	FieldTypeUnknown FieldType = iota

	// Scalars (value stored in Val/Any).
	FieldTypeAny
	FieldTypeBool
	FieldTypeInt64
	FieldTypeUint64
	FieldTypeFloat64
	FieldTypeDuration
	FieldTypeError
	FieldTypeTime

	// Unsafe pointer slices (data in Ptr, length in Val).
	FieldTypeBytes
	FieldTypeBytesToString
	FieldTypeBytesToInts64
	FieldTypeBytesToFloats64
	FieldTypeBytesToDurations
	FieldTypeBytesToStrings

	// Interface-based (encoder callback in Any).
	FieldTypeArray
	FieldTypeObject
	FieldTypeGroup
)

Set of FieldType values.


type Handler

type Handler interface {
	Handle(context.Context, Entry) error
	Enabled(context.Context, Level) bool
}

Handler is the core interface that processes log entries. Implement it to control where and how logs are written. The built-in handlers — SyncHandler, ContextHandler, and Router — cover most use cases, but you can wrap or replace them for custom behavior like sampling, rate-limiting, or sending logs to an external service.

func NewSyncHandler

func NewSyncHandler(level Level, w io.Writer, enc Encoder) Handler

NewSyncHandler returns the simplest possible Handler — it encodes each entry right there in the calling goroutine and writes it immediately. No routing, no buffering, no background goroutines. Think of it as the "just write it" handler.

Encoding is fully parallel across goroutines (the Encoder handles its own cloning and buffer pooling), but the provided io.Writer must be safe for concurrent use. Great for benchmarks, tests, and simple single-destination setups.
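The smallest useful pipeline, sketched with the builder-free API:

```go
h := logf.NewSyncHandler(logf.LevelInfo, os.Stderr, logf.JSON().Build())
logger := logf.New(h)
logger.Infox("service started", logf.String("version", "1.2.3"))
```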

type JSONEncoderBuilder

type JSONEncoderBuilder struct {
	// contains filtered or unexported fields
}

JSONEncoderBuilder configures and builds a JSON Encoder using a clean builder-style API. Create one with JSON(), chain methods to customize, then call Build() or pass directly to LoggerBuilder.EncoderFrom().

func JSON

func JSON() *JSONEncoderBuilder

JSON returns a new JSONEncoderBuilder with default settings. This is the recommended way to create a JSON encoder — chain the methods you need and call Build:

enc := logf.JSON().Build()
enc := logf.JSON().TimeKey("time").LevelKey("severity").Build()

func (*JSONEncoderBuilder) Build

func (b *JSONEncoderBuilder) Build() Encoder

Build finalizes the configuration and returns a ready-to-use JSON Encoder.

func (*JSONEncoderBuilder) CallerKey

func (b *JSONEncoderBuilder) CallerKey(k string) *JSONEncoderBuilder

CallerKey sets the JSON key for the caller location field (default "caller").

func (*JSONEncoderBuilder) DisableCaller

func (b *JSONEncoderBuilder) DisableCaller() *JSONEncoderBuilder

DisableCaller omits the caller location field from JSON output entirely.

func (*JSONEncoderBuilder) DisableLevel

func (b *JSONEncoderBuilder) DisableLevel() *JSONEncoderBuilder

DisableLevel omits the severity level field from JSON output entirely.

func (*JSONEncoderBuilder) DisableMsg

func (b *JSONEncoderBuilder) DisableMsg() *JSONEncoderBuilder

DisableMsg omits the message text field from JSON output entirely.

func (*JSONEncoderBuilder) DisableName

func (b *JSONEncoderBuilder) DisableName() *JSONEncoderBuilder

DisableName omits the logger name field from JSON output entirely.

func (*JSONEncoderBuilder) DisableTime

func (b *JSONEncoderBuilder) DisableTime() *JSONEncoderBuilder

DisableTime omits the timestamp field from JSON output entirely.

func (*JSONEncoderBuilder) EncodeCaller

EncodeCaller sets a custom CallerEncoder for formatting caller locations (default short format).

func (*JSONEncoderBuilder) EncodeDuration

EncodeDuration sets a custom DurationEncoder for formatting durations (default string representation).

func (*JSONEncoderBuilder) EncodeError

EncodeError sets a custom ErrorEncoder for formatting error values.

func (*JSONEncoderBuilder) EncodeLevel

EncodeLevel sets a custom LevelEncoder for formatting severity levels.

func (*JSONEncoderBuilder) EncodeTime

EncodeTime sets a custom TimeEncoder for formatting timestamps (default RFC3339).

func (*JSONEncoderBuilder) LevelKey

LevelKey sets the JSON key for the severity level field (default "level").

func (*JSONEncoderBuilder) MsgKey

MsgKey sets the JSON key for the log message field (default "msg").

func (*JSONEncoderBuilder) NameKey

NameKey sets the JSON key for the logger name field (default "logger").

func (*JSONEncoderBuilder) TimeKey

TimeKey sets the JSON key for the timestamp field (default "ts").

type JSONEncoderConfig

type JSONEncoderConfig struct {
	FieldKeyMsg    string
	FieldKeyTime   string
	FieldKeyLevel  string
	FieldKeyName   string
	FieldKeyCaller string

	DisableFieldMsg    bool
	DisableFieldTime   bool
	DisableFieldLevel  bool
	DisableFieldName   bool
	DisableFieldCaller bool

	EncodeTime     TimeEncoder
	EncodeDuration DurationEncoder
	EncodeError    ErrorEncoder
	EncodeLevel    LevelEncoder
	EncodeCaller   CallerEncoder
	// contains filtered or unexported fields
}

JSONEncoderConfig controls how the JSON encoder formats log entries — field keys, which fields to include, and how types like time, duration, and errors are rendered. For a friendlier builder-style API, use JSON() instead.

func (JSONEncoderConfig) WithDefaults

func (c JSONEncoderConfig) WithDefaults() JSONEncoderConfig

WithDefaults returns a copy of the config with all zero-value fields replaced by sensible defaults (RFC3339 timestamps, string durations, short caller format, etc.).

type Level

type Level int8

Level represents the severity of a log message. Higher numeric values mean more verbose output — LevelDebug (3) lets everything through, while LevelError (0) only lets errors pass.

const (
	// LevelError logs errors only — the quietest setting.
	LevelError Level = iota
	// LevelWarn logs errors and warnings.
	LevelWarn
	// LevelInfo logs errors, warnings, and informational messages. This is
	// the typical production setting.
	LevelInfo
	// LevelDebug logs everything — all severity levels pass through.
	LevelDebug
)

Severity levels.

func LevelFromString

func LevelFromString(lvl string) (Level, bool)

LevelFromString parses a level name (case-insensitive) and returns the corresponding Level. Returns false if the name is not recognized.

func (Level) Enabled

func (l Level) Enabled(o Level) bool

Enabled reports whether a message at level o would be logged under this level threshold. For example, LevelInfo.Enabled(LevelDebug) is false.
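The threshold rule can be restated in plain Go. This standalone sketch assumes Enabled is the numeric comparison implied by the documented ordering, which may differ from the actual implementation:

```go
package main

import "fmt"

type Level int8

const (
	LevelError Level = iota // 0, quietest
	LevelWarn
	LevelInfo
	LevelDebug // 3, most verbose
)

// Enabled reports whether a message at level o passes threshold l:
// it does when o is at most as verbose as l.
func (l Level) Enabled(o Level) bool { return o <= l }

func main() {
	fmt.Println(LevelInfo.Enabled(LevelDebug)) // false
	fmt.Println(LevelInfo.Enabled(LevelError)) // true
}
```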

func (Level) MarshalText

func (l Level) MarshalText() ([]byte, error)

MarshalText marshals the Level to its lower-case text representation.

func (Level) String

func (l Level) String() string

String returns a lower-case string representation of the Level ("debug", "info", "warn", "error").

func (*Level) UnmarshalText

func (l *Level) UnmarshalText(text []byte) error

UnmarshalText parses a level string (case-insensitive) and sets the Level. Returns an error for unrecognized values.

func (Level) UpperCaseString

func (l Level) UpperCaseString() string

UpperCaseString returns an upper-case string representation of the Level ("DEBUG", "INFO", "WARN", "ERROR").

type LevelEncoder

type LevelEncoder func(Level, TypeEncoder)

LevelEncoder is a function that formats a Level into the log output via TypeEncoder. Swap it out to control how levels appear in your logs.

type LogFunc

type LogFunc func(context.Context, string, ...Field)

LogFunc is a logging function with a pre-bound level, used by AtLevel.

type Logger

type Logger struct {
	// contains filtered or unexported fields
}

Logger is the main entry point for structured logging. It wraps a Handler, checks levels before doing any work, and provides the familiar Debug/Info/Warn/Error methods plus context-aware field accumulation via With and WithGroup. Loggers are immutable — every With/WithName/WithGroup call returns a new Logger, so they are safe to share across goroutines.

func DisabledLogger

func DisabledLogger() *Logger

DisabledLogger returns a Logger that silently discards everything as fast as possible. Handy as a safe default when no logger is configured, and it is what FromContext returns when there is no Logger in the context.

func FromContext

func FromContext(ctx context.Context) *Logger

FromContext returns the Logger stored in the context by NewContext, or a DisabledLogger if no Logger was stored. It is always safe to call — you will never get nil.

func New

func New(w Handler) *Logger

New returns a Logger wired to the given Handler. Level filtering is controlled by the handler's Enabled method, so you can use any Handler implementation — SyncHandler, Router, ContextHandler, or your own. For a friendlier builder-style API, use NewLogger() instead.

func (*Logger) AtLevel

func (l *Logger) AtLevel(ctx context.Context, lvl Level, fn func(LogFunc))

AtLevel calls fn only if the specified level is enabled, passing it a LogFunc pre-bound to that level. This is perfect for guarding expensive log preparation without a separate Enabled check. A nil ctx is treated as context.Background().
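A sketch; expensiveDump is a hypothetical costly helper that should run only when debug output will actually be emitted:

```go
logger.AtLevel(ctx, logf.LevelDebug, func(log logf.LogFunc) {
	// This closure runs only when LevelDebug is enabled.
	log(ctx, "state dump", logf.String("state", expensiveDump()))
})
```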

func (*Logger) Debug

func (l *Logger) Debug(ctx context.Context, text string, fs ...Field)

Debug logs a message at LevelDebug. If debug logging is disabled, this is a no-op — no fields are evaluated, no allocations happen. A nil ctx is treated as context.Background().

func (*Logger) Debugx

func (l *Logger) Debugx(text string, fs ...Field)

Debugx is like Debug but without a context parameter. Equivalent to Debug(context.Background(), text, fs...).

func (*Logger) Enabled

func (l *Logger) Enabled(ctx context.Context, lvl Level) bool

Enabled reports whether logging at the given level would actually produce output. Use this to guard expensive argument preparation. A nil ctx is treated as context.Background().

func (*Logger) Error

func (l *Logger) Error(ctx context.Context, text string, fs ...Field)

Error logs a message at LevelError. Something went wrong and you want everyone to know about it. A nil ctx is treated as context.Background().

func (*Logger) Errorx

func (l *Logger) Errorx(text string, fs ...Field)

Errorx is like Error but without a context parameter. Equivalent to Error(context.Background(), text, fs...).

func (*Logger) Info

func (l *Logger) Info(ctx context.Context, text string, fs ...Field)

Info logs a message at LevelInfo. This is the default "something happened" level for normal operational events. A nil ctx is treated as context.Background().

func (*Logger) Infox

func (l *Logger) Infox(text string, fs ...Field)

Infox is like Info but without a context parameter. Equivalent to Info(context.Background(), text, fs...).

func (*Logger) Log

func (l *Logger) Log(ctx context.Context, lvl Level, text string, fs ...Field)

Log logs a message at an arbitrary level. Use this when the level is determined at runtime; for the common cases prefer Debug/Info/Warn/Error. A nil ctx is treated as context.Background().

func (*Logger) Slog

func (l *Logger) Slog() *slog.Logger

Slog returns a *slog.Logger backed by the same Handler, fields, and name as this Logger. Use it when you need to hand a standard library slog.Logger to code that does not know about logf.
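For example, fields added via With travel through the bridge (a sketch):

```go
logger := logf.NewLogger().Build().With(logf.String("component", "db"))

// Third-party code that only knows slog still logs through logf's
// pipeline, carrying the "component" field and the logger name along.
std := logger.Slog()
std.Info("migration complete", "tables", 12)
```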

func (*Logger) Warn

func (l *Logger) Warn(ctx context.Context, text string, fs ...Field)

Warn logs a message at LevelWarn. Use this for situations that are unexpected but not broken — things a human should probably look at. A nil ctx is treated as context.Background().

func (*Logger) Warnx

func (l *Logger) Warnx(text string, fs ...Field)

Warnx is like Warn but without a context parameter. Equivalent to Warn(context.Background(), text, fs...).

func (*Logger) With

func (l *Logger) With(fs ...Field) *Logger

With returns a new Logger that includes the given fields in every subsequent log entry. Fields are accumulated, not replaced — so calling With multiple times builds up context over time.

func (*Logger) WithCaller

func (l *Logger) WithCaller(enabled bool) *Logger

WithCaller returns a new Logger with caller reporting toggled on or off. When enabled (the default), every log entry includes the source file and line.

func (*Logger) WithCallerSkip

func (l *Logger) WithCallerSkip(skip int) *Logger

WithCallerSkip returns a new Logger that skips additional stack frames when capturing caller info. Use this when you wrap the Logger in your own helper function so the reported caller points to your caller, not your wrapper.
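A sketch of such a wrapper; base is a hypothetical package-level Logger:

```go
// logWarn wraps base.Warn; skipping one extra frame makes the reported
// caller the line that called logWarn, not this function.
func logWarn(ctx context.Context, text string, fs ...logf.Field) {
	base.WithCallerSkip(1).Warn(ctx, text, fs...)
}
```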

func (*Logger) WithGroup

func (l *Logger) WithGroup(name string) *Logger

WithGroup returns a new Logger that nests all subsequent fields — both from With and from per-call arguments — under the given group name. In JSON output this produces nested objects:

WithGroup("http") + Int("status", 200) → {"http":{"status":200}}

func (*Logger) WithName

func (l *Logger) WithName(n string) *Logger

WithName returns a new Logger with the given name appended to the existing name, separated by a period. Names appear in log output as "parent.child" and are great for identifying subsystems. Loggers have no name by default.

type LoggerBuilder

type LoggerBuilder struct {
	// contains filtered or unexported fields
}

LoggerBuilder accumulates options and builds a Logger with a sync pipeline: Encoder -> SyncHandler -> Logger, with a ContextHandler inserted in between when Context is called. Chain its methods and finish with Build. For advanced multi-destination pipelines, use NewRouter directly.

func NewLogger

func NewLogger() *LoggerBuilder

NewLogger returns a LoggerBuilder — the easiest way to get a Logger up and running. It builds a single-destination sync pipeline, which is perfect for most applications. For async buffered or multi-destination setups, reach for NewRouter + SlabWriter instead.

Defaults: JSON encoder, LevelDebug, os.Stderr, caller enabled, no ContextHandler.

logger := logf.NewLogger().Build()

// Customized:
logger := logf.NewLogger().
    Level(logf.LevelInfo).
    EncoderFrom(logf.JSON().TimeKey("time")).
    Output(file).
    Context().
    Build()

func (*LoggerBuilder) Build

func (b *LoggerBuilder) Build() *Logger

Build finalizes the configuration and returns a ready-to-use Logger.

logger := logf.NewLogger().Build()

func (*LoggerBuilder) Context

func (b *LoggerBuilder) Context(sources ...FieldSource) *LoggerBuilder

Context enables the ContextHandler middleware, which extracts Bag fields from context on every log call. This is what makes logf.With(ctx, ...) work — without it, context fields are silently ignored. Optional FieldSource functions let you pull in external fields too (trace IDs, request metadata, etc.).

func (*LoggerBuilder) Encoder

func (b *LoggerBuilder) Encoder(enc Encoder) *LoggerBuilder

Encoder sets a pre-built Encoder directly. Use this when you already have an Encoder instance; otherwise prefer EncoderFrom for builder composition.

func (*LoggerBuilder) EncoderFrom

func (b *LoggerBuilder) EncoderFrom(eb EncoderBuilder) *LoggerBuilder

EncoderFrom sets an EncoderBuilder whose Build method will be called when LoggerBuilder.Build is called. This enables clean builder composition — no need to call Build on the encoder separately:

logf.NewLogger().EncoderFrom(logf.JSON().TimeKey("time")).Build()

func (*LoggerBuilder) Level

func (b *LoggerBuilder) Level(l Level) *LoggerBuilder

Level sets the minimum severity level. Messages below this level are discarded. Default is LevelDebug (everything gets through).

func (*LoggerBuilder) Output

func (b *LoggerBuilder) Output(w io.Writer) *LoggerBuilder

Output sets where encoded log entries are written. Default is os.Stderr.

type MutableLevel

type MutableLevel struct {
	// contains filtered or unexported fields
}

MutableLevel is a concurrency-safe level that can be changed at runtime without rebuilding the Logger. Perfect for admin endpoints that toggle debug logging on a live system — just call Set and every subsequent log call picks up the new level atomically.

func NewMutableLevel

func NewMutableLevel(l Level) *MutableLevel

NewMutableLevel creates a MutableLevel starting at the given level. Pass it where a Level is expected and call Set later to change the threshold at runtime without restarting.

func (*MutableLevel) Enabled

func (l *MutableLevel) Enabled(_ context.Context, lvl Level) bool

Enabled reports whether the given level is enabled at the current mutable level. Safe for concurrent use.

func (*MutableLevel) Level

func (l *MutableLevel) Level() Level

Level returns the current logging level atomically.

func (*MutableLevel) Set

func (l *MutableLevel) Set(o Level)

Set atomically switches the logging level. All subsequent log calls will use the new threshold.
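A sketch of runtime toggling via hypothetical admin endpoints; how the MutableLevel is wired into your handler depends on your pipeline (the docs above say to pass it where a Level is expected):

```go
lvl := logf.NewMutableLevel(logf.LevelInfo)

http.HandleFunc("/loglevel/debug", func(w http.ResponseWriter, r *http.Request) {
	lvl.Set(logf.LevelDebug) // takes effect atomically on the next log call
})
http.HandleFunc("/loglevel/info", func(w http.ResponseWriter, r *http.Request) {
	lvl.Set(logf.LevelInfo)
})
```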

type ObjectEncoder

type ObjectEncoder interface {
	EncodeLogfObject(FieldEncoder) error
}

ObjectEncoder lets your custom types serialize themselves as structured objects with named fields. This is how you get zero-allocation logging for your domain types — no reflection, no fmt.Sprintf, just direct calls to the encoder.

Example:

type user struct {
	Username string
	Password string
}

func (u user) EncodeLogfObject(e FieldEncoder) error {
	e.EncodeFieldString("username", u.Username)
	e.EncodeFieldString("password", u.Password)
	return nil
}

type RouteOption

type RouteOption func(*encoderGroup)

RouteOption configures a destination within a Route's encoder group. Use Output and OutputCloser to create RouteOptions.

func Output

func Output(level Level, w io.Writer) RouteOption

Output returns a RouteOption that adds a destination with the given level filter and writer. Writes happen directly in the caller's goroutine — no channel, no background goroutine, zero per-message allocations. The Writer must be safe for concurrent use.

For async I/O with batching and spike tolerance, wrap the writer in a SlabWriter before passing it to Output:

sw := logf.NewSlabWriter(conn).SlabSize(64*1024).SlabCount(8).FlushInterval(100*time.Millisecond).Build()
defer sw.Close()
router, close, _ := logf.NewRouter().
    Route(enc, logf.Output(logf.LevelDebug, sw)).
    Build()

func OutputCloser

func OutputCloser(level Level, w io.WriteCloser) RouteOption

OutputCloser is like Output but transfers ownership of the writer to the router — the router's close function will call Close on w after flushing. Perfect for SlabWriters and other resources you want the router to manage:

sw := logf.NewSlabWriter(conn).SlabSize(64*1024).SlabCount(8).Build()
router, close, _ := logf.NewRouter().
    Route(enc, logf.OutputCloser(logf.LevelDebug, sw)).
    Build()
defer close() // flushes and closes sw

type RouterBuilder

type RouterBuilder struct {
	// contains filtered or unexported fields
}

RouterBuilder accumulates routes and builds a fan-out Handler. Add as many routes as you need — each route has an encoder and one or more outputs with independent level filters.

Usage:

router, close, err := NewRouter().
    Route(jsonEnc,
        Output(LevelDebug, kibana),
        Output(LevelInfo, stderr),
    ).
    Build()

func NewRouter

func NewRouter() *RouterBuilder

NewRouter returns a RouterBuilder for constructing a fan-out Handler that sends log entries to multiple destinations. Each route groups outputs that share an encoder, so one Encode call serves all outputs in the group.

func (*RouterBuilder) Build

func (b *RouterBuilder) Build() (Handler, func() error, error)

Build validates the configuration and returns the Router as a Handler, plus a close function that flushes and syncs all writers. Always defer the close function to ensure data reaches its destination. Build returns an error if the configuration is invalid (no routes or no outputs).

func (*RouterBuilder) Route

func (b *RouterBuilder) Route(enc Encoder, opts ...RouteOption) *RouterBuilder

Route adds an encoder group with the given outputs. All outputs in the same route share a single Encode call per entry — so sending JSON to both a file and a network socket costs exactly one encode, not two.

type SlabStats

type SlabStats struct {
	QueuedSlabs int   // slabs waiting for I/O
	FreeSlabs   int   // slabs available in pool
	TotalSlabs  int   // total slab count
	Dropped     int64 // total messages dropped (dropOnFull mode)
	Written     int64 // total messages accepted by Write
	WriteErrors int64 // total write errors
}

SlabStats is a snapshot of SlabWriter runtime statistics. Pull it from Stats() and feed it to your metrics system to keep an eye on queue depth, drop rates, and write errors.

type SlabWriter

type SlabWriter struct {
	// contains filtered or unexported fields
}

SlabWriter is an async buffered writer that keeps your logging goroutines fast by decoupling them from slow I/O. It uses a pool of pre-allocated linear byte slabs — producers memcpy into a slab, and a background goroutine writes full slabs to the destination in big, efficient batches.

Architecture:

N goroutines (producers)    background I/O goroutine
  Write(p)                   ┌──────────┐
    ↓ mu.Lock                │   pool   │ ←── recycle after write
    ↓ memcpy into slab   ←── └──────────┘
    ↓ slab full?
    ↓ yes → send slab ──→ full chan ──→ w.Write(slab) → destination
    ↓ grab fresh slab ←── pool
    ↓ mu.Unlock

Each slab is a contiguous []byte. The I/O goroutine writes it in a single Write call — always linear, no wrap-around, no partial writes. After Write completes, the slab is returned to the pool for reuse.

Capacity planning:

The two parameters — slabSize and slabCount — control throughput and burst tolerance independently.

slabSize determines the batch size per Write call. With a typical log message of 256 bytes and slabSize of 16 KB, each Write delivers 64 messages. The maximum sustained throughput is:

throughput = slabSize / (writeLatency × msgSize)

For example, with 1 ms network latency and 256-byte messages:

16 KB slab:  64,000 msgs/sec
64 KB slab: 256,000 msgs/sec

slabCount determines burst tolerance — how many slabs producers can fill while the I/O goroutine is blocked on a slow Write. This absorbs temporary latency spikes without dropping messages or blocking producers.

During a latency spike, producers keep filling free slabs while the I/O goroutine catches up. The number of slabs acts as a time buffer:

burstTime = slabCount × slabSize / (msgRate × msgSize)

For example, with 4 slabs × 4 KB (default) and 256-byte messages:

 1,000 msgs/sec:  absorbs a    ~64 ms spike
10,000 msgs/sec:  absorbs a    ~6 ms spike
50,000 msgs/sec:  absorbs a   ~1.2 ms spike

More configurations (256-byte messages, 50,000 msgs/sec):

 4 slabs × 16 KB:  absorbs a   ~5 ms spike
 8 slabs × 16 KB:  absorbs a  ~10 ms spike
16 slabs × 16 KB:  absorbs a  ~20 ms spike
 8 slabs × 64 KB:  absorbs a  ~40 ms spike
16 slabs × 64 KB:  absorbs a  ~80 ms spike

Memory cost is slabCount × slabSize, plus one extra slabSize when FlushInterval is enabled (reusable buffer for idle flush). Typical configurations:

 4 ×  4 KB =  16 KB  (default, lightweight)
 8 × 64 KB = 512 KB  (general purpose, good burst tolerance)
16 × 64 KB =   1 MB  (high throughput + long spike tolerance)

When all slabs are in flight and the pool is empty, Write blocks until a slab is recycled. With DropOnFull, Write never blocks: if the I/O goroutine cannot keep up, the current slab's data is silently discarded and the slab is reused. Use Stats().Dropped to monitor the total number of messages lost.

Message integrity

Write guarantees that each message is either fully delivered or fully dropped — never partially written (torn). This holds for messages of any size:

  • len(p) <= slabSize: if the message does not fit in the remaining slab space, an early swap is performed before writing. The message always lands in a single slab. On drop the entire slab (including the message) is discarded atomically.

  • len(p) > slabSize: the message is allocated in a dedicated oversized buffer and sent through the I/O goroutine as a single write. The oversized buffer is discarded after write (not returned to the pool). One allocation per oversized message.

Performance notes

The early swap may leave unused space at the tail of a slab when a message does not fit in the remainder. With typical log messages (200–500 bytes) and slab sizes (16–64 KB), utilization stays above 99 %. Fragmentation becomes noticeable only when message size approaches slab size (e.g. msg/slab > 50 %), which is unusual for structured logging.

Oversized messages (> slabSize) incur one heap allocation (make + copy) per write. This is acceptable because such messages are rare; typical log entries are 100–500 bytes while the default slab is 4 KB.

Concurrency

Write and Flush are safe for concurrent use. Write and Flush must not be called after or concurrently with Close. Close itself is idempotent.

func (*SlabWriter) Close

func (sb *SlabWriter) Close() error

Close flushes remaining data, drains the queue, stops the background I/O goroutine, and calls Flush + Sync on the underlying Writer. Safe to call multiple times — subsequent calls return the same error.

func (*SlabWriter) Flush

func (sb *SlabWriter) Flush() error

Flush enqueues the current partial slab for writing by the background I/O goroutine. It returns immediately without waiting for the write to complete — if you need a durable flush, use Close instead. Must not be called after Close.

func (*SlabWriter) Stats

func (sb *SlabWriter) Stats() SlabStats

Stats returns a point-in-time snapshot of runtime statistics. Safe to call concurrently from a metrics scraper or health check endpoint.

func (*SlabWriter) Sync

func (sb *SlabWriter) Sync() error

Sync is a no-op on SlabWriter — the real Sync on the underlying writer happens during Close.

func (*SlabWriter) Write

func (sb *SlabWriter) Write(p []byte) (int, error)

Write copies p into the current slab. Every message is guaranteed to be either fully written or fully dropped — never partially torn. If the message does not fit in the remaining slab space, an early swap puts it in a fresh slab. Messages larger than slabSize get a dedicated buffer.

Write is safe for concurrent use. It must not be called after Close.

type SlabWriterBuilder

type SlabWriterBuilder struct {
	// contains filtered or unexported fields
}

SlabWriterBuilder accumulates configuration for a SlabWriter. Create one with NewSlabWriter, set options via chained method calls, and finalize with Build.

func NewSlabWriter

func NewSlabWriter(w io.Writer) *SlabWriterBuilder

NewSlabWriter returns a builder for a SlabWriter that will write to w. Call Build to create the SlabWriter. Default slab size is 4 KB and default slab count is 4.

func (*SlabWriterBuilder) Build

func (b *SlabWriterBuilder) Build() *SlabWriter

Build creates and starts the SlabWriter. You must call Close when you are done to flush remaining data and stop the background I/O goroutine — a defer sw.Close() right after creation is the way to go.

func (*SlabWriterBuilder) DropOnFull

func (b *SlabWriterBuilder) DropOnFull() *SlabWriterBuilder

DropOnFull makes Write non-blocking: if the I/O goroutine cannot keep up and all slabs are in flight, the current slab's data is silently dropped instead of blocking the caller. Use this when you would rather lose log messages than add latency to your hot path. Monitor dropped messages via Stats().Dropped.

func (*SlabWriterBuilder) ErrorWriter

func (b *SlabWriterBuilder) ErrorWriter(w io.Writer) *SlabWriterBuilder

ErrorWriter sets where I/O errors are reported. When the background goroutine fails to write a slab, it formats the error and writes it to w. By default errors are silently discarded — pass os.Stderr here if you want to know about write failures.

func (*SlabWriterBuilder) FlushInterval

func (b *SlabWriterBuilder) FlushInterval(d time.Duration) *SlabWriterBuilder

FlushInterval sets how long the SlabWriter waits for new data before flushing a partial slab. Without this, a quiet period could leave recent log entries sitting in the buffer. Default is 0 (no idle flush — data only goes out when a slab fills up or Close is called).

func (*SlabWriterBuilder) SlabCount

func (b *SlabWriterBuilder) SlabCount(n int) *SlabWriterBuilder

SlabCount sets the number of slab buffers in the pool. More slabs give better burst tolerance but use more memory. Default is 4.

func (*SlabWriterBuilder) SlabSize

func (b *SlabWriterBuilder) SlabSize(n int) *SlabWriterBuilder

SlabSize sets the size of each slab buffer in bytes. Larger slabs mean fewer I/O calls but more memory. Default is 4 KB.

type TextEncoderBuilder

type TextEncoderBuilder struct {
	// contains filtered or unexported fields
}

TextEncoderBuilder configures and builds a text Encoder using a clean builder-style API. Create one with Text(), chain methods to customize, then call Build().

func Text

func Text() *TextEncoderBuilder

Text returns a new TextEncoderBuilder — the recommended way to create a human-readable text encoder with ANSI colors. Colors are on by default; use NoColor() to disable them or check the NO_COLOR environment variable (https://no-color.org):

enc := logf.Text().Build()
enc := logf.Text().NoColor().Build()

Respect the NO_COLOR convention:

b := logf.Text()
if _, ok := os.LookupEnv("NO_COLOR"); ok {
    b = b.NoColor()
}

func (*TextEncoderBuilder) Build

func (b *TextEncoderBuilder) Build() Encoder

Build finalizes the configuration and returns a ready-to-use text Encoder.

func (*TextEncoderBuilder) DisableCaller

func (b *TextEncoderBuilder) DisableCaller() *TextEncoderBuilder

DisableCaller omits the caller location from text output entirely.

func (*TextEncoderBuilder) DisableLevel

func (b *TextEncoderBuilder) DisableLevel() *TextEncoderBuilder

DisableLevel omits the severity level from text output entirely.

func (*TextEncoderBuilder) DisableMsg

func (b *TextEncoderBuilder) DisableMsg() *TextEncoderBuilder

DisableMsg omits the message text from text output entirely.

func (*TextEncoderBuilder) DisableName

func (b *TextEncoderBuilder) DisableName() *TextEncoderBuilder

DisableName omits the logger name from text output entirely.

func (*TextEncoderBuilder) DisableTime

func (b *TextEncoderBuilder) DisableTime() *TextEncoderBuilder

DisableTime omits the timestamp from text output entirely.

func (*TextEncoderBuilder) EncodeCaller

func (b *TextEncoderBuilder) EncodeCaller(enc CallerEncoder) *TextEncoderBuilder

EncodeCaller sets a custom CallerEncoder for formatting caller locations (default short format).

func (*TextEncoderBuilder) EncodeDuration

func (b *TextEncoderBuilder) EncodeDuration(enc DurationEncoder) *TextEncoderBuilder

EncodeDuration sets a custom DurationEncoder for formatting durations (default string representation).

func (*TextEncoderBuilder) EncodeError

func (b *TextEncoderBuilder) EncodeError(enc ErrorEncoder) *TextEncoderBuilder

EncodeError sets a custom ErrorEncoder for formatting error values.

func (*TextEncoderBuilder) EncodeLevel

func (b *TextEncoderBuilder) EncodeLevel(enc LevelEncoder) *TextEncoderBuilder

EncodeLevel sets a custom LevelEncoder for formatting severity levels.

func (*TextEncoderBuilder) EncodeTime

func (b *TextEncoderBuilder) EncodeTime(enc TimeEncoder) *TextEncoderBuilder

EncodeTime sets a custom TimeEncoder for formatting timestamps (default time.StampMilli).

func (*TextEncoderBuilder) NoColor

func (b *TextEncoderBuilder) NoColor() *TextEncoderBuilder

NoColor disables ANSI color escape sequences in the output. Use this when writing to files or non-TTY destinations.

type TextEncoderConfig

type TextEncoderConfig struct {
	NoColor            bool
	DisableFieldTime   bool
	DisableFieldLevel  bool
	DisableFieldName   bool
	DisableFieldMsg    bool
	DisableFieldCaller bool

	EncodeTime     TimeEncoder
	EncodeDuration DurationEncoder
	EncodeError    ErrorEncoder
	EncodeLevel    LevelEncoder
	EncodeCaller   CallerEncoder
}

TextEncoderConfig controls how the text encoder formats log entries — colors, which fields to show, and how types like time, duration, and errors are rendered. For a friendlier builder-style API, use Text() instead.

func (TextEncoderConfig) WithDefaults

func (c TextEncoderConfig) WithDefaults() TextEncoderConfig

WithDefaults returns a copy of the config with all zero-value fields replaced by sensible defaults (StampMilli timestamps, string durations, short caller format, short level names, etc.).

type TimeEncoder

type TimeEncoder func(time.Time, TypeEncoder)

TimeEncoder is a function that formats a time.Time into the log output via the TypeEncoder. Swap it out to control timestamp format globally.

func LayoutTimeEncoder

func LayoutTimeEncoder(layout string) TimeEncoder

LayoutTimeEncoder returns a TimeEncoder that formats timestamps using the given Go time layout string (same format as time.Format).

type TypeEncoder

type TypeEncoder interface {
	EncodeTypeAny(interface{})
	EncodeTypeBool(bool)
	EncodeTypeInt64(int64)
	EncodeTypeUint64(uint64)
	EncodeTypeFloat64(float64)
	EncodeTypeDuration(time.Duration)
	EncodeTypeTime(time.Time)
	EncodeTypeString(string)
	EncodeTypeStrings([]string)
	EncodeTypeBytes([]byte)
	EncodeTypeInts64([]int64)
	EncodeTypeFloats64([]float64)
	EncodeTypeDurations([]time.Duration)
	EncodeTypeArray(ArrayEncoder)
	EncodeTypeObject(ObjectEncoder)
	EncodeTypeUnsafeBytes(unsafe.Pointer)
}

TypeEncoder provides methods for encoding individual values (scalars, slices, arrays, objects) without field names. It is the companion interface used by TimeEncoder, DurationEncoder, LevelEncoder, and CallerEncoder to write their output into the buffer.

type TypeEncoderFactory

type TypeEncoderFactory interface {
	TypeEncoder(*Buffer) TypeEncoder
}

TypeEncoderFactory creates a TypeEncoder that writes into the given Buffer. This lets one encoder borrow another encoder's formatting — for example, the text encoder uses the JSON encoder's TypeEncoderFactory to render nested objects and arrays in JSON syntax within otherwise plain-text output.

type Writer

type Writer interface {
	io.Writer
	Flush() error
	Sync() error
}

Writer extends io.Writer with Flush and Sync — the two operations needed for reliable log delivery. Flush pushes buffered data to the underlying output, and Sync commits it to stable storage (think fsync). The Router calls Flush and Sync during its close sequence to make sure nothing is left in flight.

func WriterFromIO

func WriterFromIO(w io.Writer) Writer

WriterFromIO upgrades a plain io.Writer to a Writer. If w already implements Writer, it is returned as-is — no wrapping overhead. Otherwise, the wrapper discovers Flush and Sync capabilities from the underlying type:

  • Sync calls w.Sync() if available (e.g. *os.File)
  • Flush calls w.Flush() if available (e.g. *bufio.Writer)
  • Missing methods become no-ops

type WriterSlot

type WriterSlot struct {
	// contains filtered or unexported fields
}

WriterSlot is a placeholder Writer you can wire into a Logger now and connect to a real destination later via Set. This solves the chicken-and-egg problem where you need a Logger at startup but the actual output (file, network, etc.) is not ready yet.

Before Set is called, writes are either dropped or buffered (if WithSlotBuffer was used). After Set, all writes go straight to the real writer with no extra overhead.

slot := logf.NewWriterSlot()
logger := logf.NewLogger().Output(slot).Build()
// ... later, when destination is ready:
slot.Set(file)

WriterSlot is safe for concurrent Write/Flush/Sync calls. Set itself is NOT safe for concurrent calls — call it from a single goroutine.

func NewWriterSlot

func NewWriterSlot(opts ...WriterSlotOption) *WriterSlot

NewWriterSlot returns a new WriterSlot ready for use. Before Set is called, writes are either silently dropped or buffered (if you pass WithSlotBuffer).

func (*WriterSlot) Flush

func (s *WriterSlot) Flush() error

Flush delegates to the real writer's Flush. No-op before Set.

func (*WriterSlot) Set

func (s *WriterSlot) Set(w io.Writer)

Set connects the slot to a real writer. Any buffered data will be flushed on the next Write call, preserving temporal ordering without blocking Set itself. The writer is automatically wrapped via WriterFromIO if needed.

Set is NOT safe for concurrent calls — call it from a single goroutine.

func (*WriterSlot) Sync

func (s *WriterSlot) Sync() error

Sync delegates to the real writer's Sync. No-op before Set.

func (*WriterSlot) Write

func (s *WriterSlot) Write(p []byte) (int, error)

Write writes p to the real writer if Set has been called. Before Set, data is buffered (if WithSlotBuffer was used) or silently dropped.

type WriterSlotOption

type WriterSlotOption func(*WriterSlot)

WriterSlotOption configures a WriterSlot at creation time.

func WithSlotBuffer

func WithSlotBuffer(size int) WriterSlotOption

WithSlotBuffer enables buffering of early writes before Set is called, keeping up to size bytes in memory so you do not lose startup logs. Writes that do not fit entirely are dropped (no partial writes). The buffer is flushed to the real writer on the first Write after Set.

Directories

Path Synopsis
examples
basic command
Basic example: NewLogger builder, logging levels, fields, groups.
context command
Context example: request-scoped fields via context.Context.
logfc command
logfc example: logger-in-context pattern.
router command
Router example: multi-destination logging with independent encoders, level filters, and sync/async I/O.
slog command
slog integration example: logf as backend for slog, third-party libraries, and mixed logf/slog usage in the same application.
writerslot command
WriterSlot example: lazy destination initialization.
