Documentation ¶
Overview ¶
Package disk implements the secondary adapter for persistent storage in hexagonal architecture. This package provides high-performance disk-based storage with Brotli compression and JSON streaming for the CloudZero Agent's metric collection and processing pipeline.
Index ¶
- Constants
- func NewParquetStreamer(input io.Reader) io.ReadCloser
- type DiskStore
- func (d *DiskStore) All(ctx context.Context, file string) (types.MetricRange, error)
- func (d *DiskStore) Find(ctx context.Context, filterName string, filterExtension string) ([]string, error)
- func (d *DiskStore) Flush() error
- func (d *DiskStore) GetFiles(paths ...string) ([]string, error)
- func (d *DiskStore) GetUsage(limit uint64, paths ...string) (*types.StoreUsage, error)
- func (d *DiskStore) ListFiles(paths ...string) ([]os.DirEntry, error)
- func (d *DiskStore) MaxInterval() time.Duration
- func (d *DiskStore) Pending() int
- func (d *DiskStore) Put(ctx context.Context, metrics ...types.Metric) error
- func (d *DiskStore) Walk(loc string, process filepath.WalkFunc) error
- type DiskStoreOpt
- type MetricFile
Constants ¶
const (
	// CostContentIdentifier marks metrics as cost-related for CloudZero billing analysis.
	CostContentIdentifier = "metrics"

	// ObservabilityContentIdentifier marks metrics as observability-focused rather than cost-related.
	ObservabilityContentIdentifier = "observability"

	// LogsContentIdentifider marks log data for separate processing and storage.
	LogsContentIdentifider = "logs"
)
Content type identifiers for metric classification and storage routing. These constants determine the storage path and processing logic for different metric categories.
Variables ¶
This section is empty.
Functions ¶
func NewParquetStreamer ¶
func NewParquetStreamer(input io.Reader) io.ReadCloser
NewParquetStreamer reads a Brotli-compressed JSON file containing an array of Metrics, and returns a reader with the data transcoded to Snappy-compressed Parquet.
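Because NewParquetStreamer returns an io.ReadCloser, it composes with ordinary io plumbing. A minimal sketch, assuming the package is imported as `disk` (module import path elided) and `dst` is any io.Writer such as an upload body:

```go
// Open a Brotli-compressed JSON metrics file and stream it out as
// Snappy-compressed Parquet. The file name here is illustrative.
f, err := os.Open("metrics_0.json.br")
if err != nil {
	return err
}
defer f.Close()

pr := disk.NewParquetStreamer(f) // io.ReadCloser yielding Parquet bytes
defer pr.Close()

// Copy the transcoded stream to the destination writer.
if _, err := io.Copy(dst, pr); err != nil {
	return err
}
```

Transcoding happens as the stream is consumed, so the full Parquet output need not be buffered in memory by the caller.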
Types ¶
type DiskStore ¶
type DiskStore struct {
// contains filtered or unexported fields
}
DiskStore is a data store intended to be backed by a disk. Currently, data is stored as Brotli-compressed JSON, but transcoded to Snappy-compressed Parquet.
func NewDiskStore ¶
func NewDiskStore(settings config.Database, opts ...DiskStoreOpt) (*DiskStore, error)
NewDiskStore initializes a DiskStore with a directory path and row limit.
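Construction with the functional options documented below might look like the following sketch; the config.Database fields are elided because they are not shown on this page, and the WithMaxInterval semantics (a cap on time between flushes) are an assumption:

```go
// settings carries the storage directory and row limit (field names elided).
var settings config.Database

store, err := disk.NewDiskStore(settings,
	// Route data under the cost-metrics content path.
	disk.WithContentIdentifier(disk.CostContentIdentifier),
	// Assumed semantics: bound how long metrics may sit unflushed.
	disk.WithMaxInterval(10*time.Minute),
)
if err != nil {
	return err
}

// Buffer metrics, then force the buffered rows to disk.
if err := store.Put(ctx, metricA, metricB); err != nil {
	return err
}
if err := store.Flush(); err != nil {
	return err
}
```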
func (*DiskStore) All ¶
func (d *DiskStore) All(ctx context.Context, file string) (types.MetricRange, error)
All retrieves all metrics from uncompacted .json.br files, excluding the active and compressed files. It reads the data into memory and returns a MetricRange.
func (*DiskStore) Find ¶ added in v1.2.0
func (d *DiskStore) Find(ctx context.Context, filterName string, filterExtension string) ([]string, error)
Find searches for files recursively starting from a given directory with optional filename and extension filters.
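A usage sketch for Find, assuming an empty filterName matches any file name (the exact filter semantics are not documented on this page):

```go
// Locate compacted Brotli files beneath the store's root directory.
files, err := store.Find(ctx, "", ".json.br")
if err != nil {
	return err
}
for _, path := range files {
	fmt.Println(path)
}
```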
func (*DiskStore) Flush ¶
func (d *DiskStore) Flush() error
Flush finalizes the current writer, writes all buffered data to disk, and renames the file.
func (*DiskStore) GetUsage ¶
func (d *DiskStore) GetUsage(limit uint64, paths ...string) (*types.StoreUsage, error)
GetUsage gathers disk usage statistics using syscall.Statfs. The variadic paths are combined as `filepath.Join(paths...)` to form the location to inspect.
func (*DiskStore) MaxInterval ¶
func (d *DiskStore) MaxInterval() time.Duration
type DiskStoreOpt ¶
func WithContentIdentifier ¶
func WithContentIdentifier(identifier string) DiskStoreOpt
func WithMaxInterval ¶
func WithMaxInterval(interval time.Duration) DiskStoreOpt
type MetricFile ¶
type MetricFile struct {
*os.File // wrapper around an os.File
// contains filtered or unexported fields
}
func NewMetricFile ¶
func NewMetricFile(path string) (*MetricFile, error)
func (*MetricFile) Close ¶
func (f *MetricFile) Close() error
func (*MetricFile) Location ¶
func (f *MetricFile) Location() string
func (*MetricFile) Rename ¶
func (f *MetricFile) Rename(new string) error
func (*MetricFile) Size ¶
func (f *MetricFile) Size() (int64, error)
Size returns the size of the file.
func (*MetricFile) UniqueID ¶
func (f *MetricFile) UniqueID() string
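Taken together, MetricFile's method set suggests a create, write, rename, close lifecycle. A hedged sketch, assuming the active-file and final-file naming convention shown here (both paths are illustrative):

```go
mf, err := disk.NewMetricFile("/data/metrics/active.json.br")
if err != nil {
	return err
}

// MetricFile embeds *os.File, so it can be written to directly.
if _, err := mf.Write(compressedPayload); err != nil {
	return err
}

fmt.Println(mf.Location(), mf.UniqueID()) // current path and identifier

// Publish the file under its final name, then close it.
if err := mf.Rename("/data/metrics/metrics_0.json.br"); err != nil {
	return err
}
if err := mf.Close(); err != nil {
	return err
}
```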