Documentation ¶
Overview ¶
Package cache provides a generic thread-safe LRU cache with file-metadata invalidation, along with incremental, tail, and head JSONL readers that share pooled scanner buffers.
Index ¶
- Constants
- Variables
- func FileChanged(path string, cachedSize int64, cachedModTime time.Time) (changed, grew bool, info os.FileInfo, err error)
- func GetScannerBuffer() []byte
- func NewScanner(r io.Reader) (*bufio.Scanner, []byte)
- func PutScannerBuffer(buf []byte)
- type Cache
- func (c *Cache[T]) Delete(key string)
- func (c *Cache[T]) DeleteIf(pred func(key string, entry Entry[T]) bool)
- func (c *Cache[T]) Get(key string, size int64, modTime time.Time) (T, bool)
- func (c *Cache[T]) GetWithOffset(key string) (T, int64, int64, time.Time, bool)
- func (c *Cache[T]) InvalidateIfChanged(key string, size int64, modTime time.Time)
- func (c *Cache[T]) Len() int
- func (c *Cache[T]) Set(key string, data T, size int64, modTime time.Time, offset int64)
- type Entry
- type HeadReader
- type IncrementalReader
- type TailReader
Constants ¶
const (
	// DefaultScannerBufSize is the initial buffer size for JSONL scanning (1MB).
	DefaultScannerBufSize = 1024 * 1024

	// DefaultScannerMaxSize is the maximum buffer size for JSONL scanning (10MB).
	DefaultScannerMaxSize = 10 * 1024 * 1024
)
Variables ¶
var ScannerPool = sync.Pool{
	New: func() any {
		return make([]byte, DefaultScannerBufSize)
	},
}
ScannerPool provides reusable scanner buffers to reduce allocations.
Functions ¶
func FileChanged ¶
func FileChanged(path string, cachedSize int64, cachedModTime time.Time) (changed, grew bool, info os.FileInfo, err error)
FileChanged reports whether a file has changed since a cached entry was recorded. Returns (changed, grew, info, err):
- changed: true if the size or modTime differs
- grew: true if the file size increased (allows incremental parsing)
- info: current file info for updating the cache
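A minimal sketch of the invalidation decision this enables. The import path and the helper name are hypothetical; the cached metadata would come from a prior cache entry.

package main

import (
	"fmt"
	"time"

	"example.com/yourmodule/cache" // hypothetical import path
)

// decideRefresh is an illustrative helper, not part of the package.
func decideRefresh(path string, cachedSize int64, cachedModTime time.Time) error {
	changed, grew, info, err := cache.FileChanged(path, cachedSize, cachedModTime)
	if err != nil {
		return err
	}
	switch {
	case !changed:
		fmt.Println("metadata unchanged: reuse cached data")
	case grew:
		fmt.Printf("file grew to %d bytes: parse incrementally from the cached offset\n", info.Size())
	default:
		fmt.Println("file rewritten or truncated: re-parse from the start")
	}
	return nil
}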
func GetScannerBuffer ¶
func GetScannerBuffer() []byte
GetScannerBuffer retrieves a buffer from the pool.
func NewScanner ¶
func NewScanner(r io.Reader) (*bufio.Scanner, []byte)
NewScanner creates a buffered scanner configured for JSONL files, backed by a pooled buffer. The caller must call PutScannerBuffer(buf) when done with the scanner.
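A sketch of the scan-then-return-buffer pattern this implies, assuming a hypothetical import path and helper name:

package main

import (
	"fmt"
	"os"

	"example.com/yourmodule/cache" // hypothetical import path
)

// scanFile is an illustrative helper showing the NewScanner /
// PutScannerBuffer pairing; it is not part of the package.
func scanFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	scanner, buf := cache.NewScanner(f)
	defer cache.PutScannerBuffer(buf) // always return the pooled buffer

	for scanner.Scan() {
		fmt.Printf("%s\n", scanner.Bytes()) // bytes valid only until the next Scan
	}
	return scanner.Err()
}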
func PutScannerBuffer ¶
func PutScannerBuffer(buf []byte)
PutScannerBuffer returns a buffer to the pool.
Types ¶
type Cache ¶
type Cache[T any] struct {
	// contains filtered or unexported fields
}
Cache is a thread-safe generic cache with LRU eviction.
func (*Cache[T]) Get ¶
func (c *Cache[T]) Get(key string, size int64, modTime time.Time) (T, bool)
Get returns cached data if the file hasn't changed. Returns (data, true) on a cache hit, or (zero value, false) on a miss or stale entry.
func (*Cache[T]) GetWithOffset ¶
func (c *Cache[T]) GetWithOffset(key string) (T, int64, int64, time.Time, bool)
GetWithOffset returns cached data along with the stored byte offset and file metadata for incremental parsing. Use it when the file may have grown and you want to resume parsing from the offset. Returns (data, offset, size, modTime, true) if a cached entry exists for the key. The caller should check whether the file grew (newSize > cachedSize) to decide on an incremental parse.
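A sketch of the full incremental-parse flow combining Get, GetWithOffset, Set, and IncrementalReader. The import path and helper name are hypothetical, and the ordering of GetWithOffset's five results as (data, offset, cachedSize, modTime, ok) is an assumption based on the signature and the Entry fields.

package main

import (
	"io"
	"os"

	"example.com/yourmodule/cache" // hypothetical import path
)

// loadLines is an illustrative helper, not part of the package. It assumes
// GetWithOffset returns (data, offset, cachedSize, modTime, ok).
func loadLines(c *cache.Cache[[]string], path string) ([]string, error) {
	info, err := os.Stat(path)
	if err != nil {
		return nil, err
	}
	// Fast path: metadata unchanged, so the cached data is still valid.
	if data, ok := c.Get(path, info.Size(), info.ModTime()); ok {
		return data, nil
	}
	data, offset, cachedSize, _, ok := c.GetWithOffset(path)
	if !ok || info.Size() < cachedSize {
		// Miss, or the file shrank: fall back to a full parse.
		data, offset = nil, 0
	}
	r, err := cache.NewIncrementalReader(path, offset)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	for {
		line, err := r.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		data = append(data, string(line))
	}
	// Record the new metadata and offset so the next call can resume here.
	c.Set(path, data, info.Size(), info.ModTime(), r.Offset())
	return data, nil
}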
func (*Cache[T]) InvalidateIfChanged ¶
func (c *Cache[T]) InvalidateIfChanged(key string, size int64, modTime time.Time)
InvalidateIfChanged removes the entry if the file metadata has changed.
type Entry ¶
type Entry[T any] struct {
	Data       T
	ModTime    time.Time
	Size       int64
	LastAccess time.Time
	ByteOffset int64 // for incremental parsing
}
Entry holds cached data with file metadata for invalidation.
type HeadReader ¶
type HeadReader struct {
// contains filtered or unexported fields
}
HeadReader reads the first N lines of a file for efficient head parsing.
func NewHeadReader ¶
func NewHeadReader(path string, maxLines int) (*HeadReader, error)
NewHeadReader opens a file for reading the first maxLines lines.
func (*HeadReader) Close ¶
func (r *HeadReader) Close() error
Close releases resources associated with the reader.
func (*HeadReader) LinesRead ¶
func (r *HeadReader) LinesRead() int
LinesRead returns the number of lines read so far.
func (*HeadReader) Next ¶
func (r *HeadReader) Next() ([]byte, error)
Next returns the next line, up to maxLines. Returns (nil, io.EOF) once maxLines is reached or the file ends.
func (*HeadReader) Offset ¶
func (r *HeadReader) Offset() int64
Offset returns the current byte offset in the file.
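A sketch of reading the first lines of a JSONL file with HeadReader, assuming a hypothetical import path and helper name:

package main

import (
	"fmt"
	"io"

	"example.com/yourmodule/cache" // hypothetical import path
)

// printHead is an illustrative helper, not part of the package.
func printHead(path string) error {
	r, err := cache.NewHeadReader(path, 10) // at most the first 10 lines
	if err != nil {
		return err
	}
	defer r.Close()
	for {
		line, err := r.Next()
		if err == io.EOF {
			break // maxLines reached or file ended
		}
		if err != nil {
			return err
		}
		fmt.Printf("%s\n", line)
	}
	fmt.Printf("read %d lines, stopped at byte %d\n", r.LinesRead(), r.Offset())
	return nil
}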
type IncrementalReader ¶
type IncrementalReader struct {
// contains filtered or unexported fields
}
IncrementalReader wraps a file for incremental JSONL reading from an offset.
func NewIncrementalReader ¶
func NewIncrementalReader(path string, startOffset int64) (*IncrementalReader, error)
NewIncrementalReader opens a file and seeks to the given offset for reading. Returns an IncrementalReader that tracks bytes read for caching.
func (*IncrementalReader) Close ¶
func (r *IncrementalReader) Close() error
Close releases resources associated with the reader.
func (*IncrementalReader) Next ¶
func (r *IncrementalReader) Next() ([]byte, error)
Next returns the next line from the file. Returns (line, nil) on success, (nil, io.EOF) at end of file, or (nil, err) on error.
func (*IncrementalReader) Offset ¶
func (r *IncrementalReader) Offset() int64
Offset returns the current byte offset in the file.
type TailReader ¶
type TailReader struct {
// contains filtered or unexported fields
}
TailReader reads the last N bytes of a file for efficient tail parsing.
func NewTailReader ¶
func NewTailReader(path string, tailSize int64) (*TailReader, error)
NewTailReader opens a file and seeks to the tail portion. tailSize specifies how many bytes from the end to read.
func (*TailReader) Close ¶
func (r *TailReader) Close() error
Close releases resources associated with the reader.
func (*TailReader) Next ¶
func (r *TailReader) Next() ([]byte, error)
Next returns the next line from the tail. The first call skips any partial line produced by seeking into the middle of the file, so only complete lines are returned.
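A sketch of scanning the complete lines within the last 64 KiB of a file, assuming a hypothetical import path and helper name:

package main

import (
	"fmt"
	"io"

	"example.com/yourmodule/cache" // hypothetical import path
)

// printTail is an illustrative helper, not part of the package.
func printTail(path string) error {
	r, err := cache.NewTailReader(path, 64*1024) // last 64 KiB
	if err != nil {
		return err
	}
	defer r.Close()
	for {
		line, err := r.Next() // first call skips any partial leading line
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		fmt.Printf("%s\n", line)
	}
	return nil
}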