timecache

package module · v1.0.0

Published: Aug 23, 2025 · License: MPL-2.0 · Imports: 2 · Imported by: 9

README

go-timecache: Ultra-fast time caching for high-performance Go applications

an AGILira library


Originally developed for Iris, go-timecache provides zero-allocation access to cached time values, eliminating the performance overhead of repeated time.Now() calls in high-throughput scenarios like logging, metrics collection, and real-time data processing.

Features

  • Zero-allocation time access: Get current time without heap allocations
  • Configurable precision: Choose your ideal balance between accuracy and performance
  • Thread-safe: Safe for concurrent use from multiple goroutines
  • Simple API: Drop-in replacement for time.Now() with minimal code changes
  • Multiple formats: Access time as time.Time, nanoseconds, or formatted string

Installation

go get github.com/agilira/go-timecache

Usage

import (
    "time"

    "github.com/agilira/go-timecache"
)

// Using the default global cache
now := timecache.CachedTime()
nanos := timecache.CachedTimeNano()  // Zero allocation!

// Create your own cache with custom settings
tc := timecache.NewWithResolution(1 * time.Millisecond)
defer tc.Stop()  // Important: remember to stop when done

customTime := tc.CachedTime()

Performance

Benchmarks show dramatic improvements over standard time.Now():

BenchmarkTimeNow-8                      25118025                42.98 ns/op            0 B/op          0 allocs/op
BenchmarkCachedTime-8                   1000000000               0.3549 ns/op          0 B/op          0 allocs/op
BenchmarkCachedTimeNano-8               1000000000               0.3574 ns/op          0 B/op          0 allocs/op
BenchmarkTimeNowUnixNano-8              27188656                42.68 ns/op            0 B/op          0 allocs/op
BenchmarkCachedTimeParallel-8           1000000000               0.1737 ns/op          0 B/op          0 allocs/op
BenchmarkTimeNowParallel-8              184139052                6.417 ns/op           0 B/op          0 allocs/op
  • CachedTime is ~121x faster than time.Now()
  • CachedTimeParallel is ~37x faster than parallel time.Now()
  • Zero heap allocations in all operations

When to Use

  • High-volume logging: Eliminate time-related allocations in hot paths
  • Metrics collection: Timestamp events with minimal overhead
  • Real-time systems: Get consistent timestamps with predictable performance
  • Microservices: Reduce GC pressure from frequent timestamp generation

API Reference

Global Functions
  • CachedTime() time.Time: Get current time from default cache
  • CachedTimeNano() int64: Get nanoseconds since epoch (zero allocation)
  • CachedTimeString() string: Get formatted time string
  • DefaultCache() *TimeCache: Access the default TimeCache instance
  • StopDefaultCache(): Stop the default cache (use during shutdown)

TimeCache Methods
  • New() *TimeCache: Create a new cache with default settings
  • NewWithResolution(resolution time.Duration) *TimeCache: Custom resolution
  • CachedTime() time.Time: Get current time from this cache
  • CachedTimeNano() int64: Get nanoseconds from this cache (zero allocation)
  • CachedTimeString() string: Get formatted time from this cache
  • Resolution() time.Duration: Get this cache's resolution
  • Stop(): Stop this cache's background updater

License

go-timecache is licensed under the Mozilla Public License 2.0.



Documentation

Overview

Example
// Using the default global cache
now := timecache.CachedTime()
fmt.Printf("Current time: %v\n", now)

// Get time as Unix nano (zero allocation)
nano := timecache.CachedTimeNano()
fmt.Printf("Nanoseconds since epoch: %d\n", nano)

// Get formatted time string
timeStr := timecache.CachedTimeString()
fmt.Printf("Formatted time: %s\n", timeStr)
Example (HighThroughputUsage)

// This example demonstrates a typical high-throughput usage pattern

// In your init() or setup code:
tc := timecache.New()
defer tc.Stop()

// Simulate processing multiple events
process := func(id int) {
	// Get timestamp with zero allocation
	timestamp := tc.CachedTimeNano()

	// Use timestamp in your high-performance code
	_ = fmt.Sprintf("Event %d processed at %d", id, timestamp)
}

// Process multiple events with minimal overhead
for i := 0; i < 10; i++ {
	process(i)
}


Functions

func CachedTime

func CachedTime() time.Time

CachedTime returns cached time as time.Time from default cache

func CachedTimeNano

func CachedTimeNano() int64

CachedTimeNano returns cached time in nanoseconds from default cache (ZERO ALLOCATION)

func CachedTimeString

func CachedTimeString() string

CachedTimeString returns cached time formatted as RFC3339Nano string from default cache

func StopDefaultCache

func StopDefaultCache()

StopDefaultCache stops the global default time cache. Note: this is mainly for testing and shutdown scenarios.

Types

type TimeCache

type TimeCache struct {
	// contains filtered or unexported fields
}

TimeCache provides cached time access, eliminating the cost of repeated time.Now() calls. It is optimized for high-throughput scenarios where multiple goroutines need frequent access to the current time with minimal overhead.

func DefaultCache

func DefaultCache() *TimeCache

DefaultCache returns the global default TimeCache instance

func New

func New() *TimeCache

New creates a new TimeCache with default resolution (500µs)

Example
// Create a custom time cache with 1ms resolution
tc := timecache.NewWithResolution(1 * time.Millisecond)
defer tc.Stop() // Important: stop the cache when done to prevent goroutine leak

// Use the custom cache instance
now := tc.CachedTime()
fmt.Printf("Custom cache time: %v\n", now)

func NewWithResolution

func NewWithResolution(resolution time.Duration) *TimeCache

NewWithResolution creates a new TimeCache with custom update resolution

The resolution parameter controls how frequently the cached time is updated. Smaller values provide more accurate timestamps but consume more CPU. Recommended values:

  • 100µs to 500µs for high precision
  • 1ms to 10ms for balanced performance
  • >10ms only for non-critical timing with minimal CPU impact

func (*TimeCache) CachedTime

func (tc *TimeCache) CachedTime() time.Time

CachedTime returns cached time as time.Time

func (*TimeCache) CachedTimeNano

func (tc *TimeCache) CachedTimeNano() int64

CachedTimeNano returns cached time in nanoseconds (ZERO ALLOCATION)

Example
// Create a new cache with default settings
tc := timecache.New()
defer tc.Stop()

// Get cached time as nanoseconds (zero allocation)
nano := tc.CachedTimeNano()
fmt.Printf("Nanoseconds: %d\n", nano)

// Convert nanoseconds to time.Time if needed
tm := time.Unix(0, nano)
fmt.Printf("Converted to time.Time: %v\n", tm)

func (*TimeCache) CachedTimeString

func (tc *TimeCache) CachedTimeString() string

CachedTimeString returns cached time formatted as RFC3339Nano string

func (*TimeCache) Resolution

func (tc *TimeCache) Resolution() time.Duration

Resolution returns the update frequency of this cache

func (*TimeCache) Stop

func (tc *TimeCache) Stop()

Stop permanently stops the time cache updater. The cache will no longer be updated after this call.
