Documentation
Index ¶
Constants ¶
const UseStandardZstdLib = true
UseStandardZstdLib indicates whether the zstd implementation is a port of the official one in the facebook/zstd repository.
This constant is only used in tests. Some tests rely on reproducibility of SST files, but a custom implementation of zstd will produce different compression results, so those tests have to be disabled in such cases.
We cannot always use the official facebook/zstd implementation since it relies on CGo.
Variables ¶
var (
	None          = makePreset(NoCompression, 0)
	Snappy        = makePreset(SnappyAlgorithm, 0)
	MinLZFastest  = makePreset(MinLZ, minlz.LevelFastest)
	MinLZBalanced = makePreset(MinLZ, minlz.LevelBalanced)
	ZstdLevel1    = makePreset(Zstd, 1)
	ZstdLevel3    = makePreset(Zstd, 3)
	ZstdLevel5    = makePreset(Zstd, 5)
	ZstdLevel7    = makePreset(Zstd, 7)
)
Setting presets.
Functions ¶
This section is empty.
Types ¶
type AdaptiveCompressor ¶
type AdaptiveCompressor struct {
// contains filtered or unexported fields
}
AdaptiveCompressor is a Compressor that automatically chooses between two algorithms: it uses a slower but better algorithm as long as it reduces the compressed size (compared to the faster algorithm) by a certain relative amount. The decision is probabilistic and based on sampling a subset of blocks.
func NewAdaptiveCompressor ¶
func NewAdaptiveCompressor(p AdaptiveCompressorParams) *AdaptiveCompressor
func (*AdaptiveCompressor) Close ¶
func (ac *AdaptiveCompressor) Close()
type AdaptiveCompressorParams ¶
type AdaptiveCompressorParams struct {
// Fast and Slow are the two compression settings the adaptive compressor
// chooses between.
Fast Setting
Slow Setting
// ReductionCutoff is the relative size reduction (when using the slow
// algorithm vs the fast algorithm) below which we use the fast algorithm. For
// example, if ReductionCutoff is 0.3 then we only use the slow algorithm if
// it reduces the compressed size (compared to the fast algorithm) by at least
// 30%.
ReductionCutoff float64
// SampleEvery defines the sampling frequency: the probability we sample a
// block is 1.0/SampleEvery. Sampling means trying both algorithms and
// recording the compression ratio.
SampleEvery int
// SampleHalfLife defines the half-life of the exponentially weighted moving
average. It should be several times the expected average block size.
SampleHalfLife int64
// SamplingSeed seeds the pseudo-random block sampling.
SamplingSeed uint64
}
AdaptiveCompressorParams contains the parameters for an adaptive compressor.
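To make the ReductionCutoff semantics concrete, the acceptance decision for a sampled block reduces to a size comparison. The sketch below is illustrative only; slowWins is a hypothetical helper, not part of this package:

```go
package main

import "fmt"

// slowWins reports whether the slow algorithm's output is small enough to
// justify using it, per the documented ReductionCutoff semantics: the slow
// result must be at least cutoff (as a fraction) smaller than the fast result.
func slowWins(fastSize, slowSize int, cutoff float64) bool {
	reduction := 1 - float64(slowSize)/float64(fastSize)
	return reduction >= cutoff
}

func main() {
	// With ReductionCutoff = 0.3, a 1000-byte fast result means the slow
	// result must be at most 700 bytes to be chosen.
	fmt.Println(slowWins(1000, 700, 0.3)) // true
	fmt.Println(slowWins(1000, 750, 0.3)) // false
}
```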
type Algorithm ¶
type Algorithm uint8
Algorithm identifies a compression algorithm. Some compression algorithms support multiple compression levels.
Decompressing data requires only an Algorithm.
type Compressor ¶
type Compressor interface {
// Compress compresses a block, appending the compressed data to dst[:0].
// It returns the Setting that was used.
Compress(dst, src []byte) ([]byte, Setting)
// Close must be called when the Compressor is no longer needed.
// After Close is called, the Compressor must not be used again.
Close()
}
Compressor is an interface for compressing data. An instance is associated with a specific Setting.
func GetCompressor ¶
func GetCompressor(s Setting) Compressor
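The Compressor contract — output appended to dst[:0], the Setting used returned alongside it, and Close required when done — can be sketched with a stand-in implementation. The types below are local approximations for illustration, not this package's definitions:

```go
package main

import "fmt"

// Algorithm and Setting stand in for this package's types (sketch only).
type Algorithm uint8

const NoCompression Algorithm = 0

type Setting struct {
	Algorithm Algorithm
	Level     int
}

// Compressor mirrors the documented interface.
type Compressor interface {
	Compress(dst, src []byte) ([]byte, Setting)
	Close()
}

// noopCompressor "compresses" by copying src verbatim, following the
// dst[:0] append convention so callers can reuse buffers.
type noopCompressor struct{}

func (noopCompressor) Compress(dst, src []byte) ([]byte, Setting) {
	return append(dst[:0], src...), Setting{Algorithm: NoCompression}
}

func (noopCompressor) Close() {}

func main() {
	var c Compressor = noopCompressor{}
	defer c.Close()
	out, s := c.Compress(make([]byte, 0, 16), []byte("hello"))
	fmt.Printf("%q %d\n", out, s.Algorithm)
}
```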
type Decompressor ¶
type Decompressor interface {
// DecompressInto decompresses compressed into buf. The buf slice must be
// exactly the size of the decompressed value; callers can use
// DecompressedLen to determine the correct size.
DecompressInto(buf, compressed []byte) error
// DecompressedLen returns the length of the provided block once decompressed,
// allowing the caller to allocate a buffer exactly sized to the decompressed
// payload.
DecompressedLen(b []byte) (decompressedLen int, err error)
// Close must be called when the Decompressor is no longer needed.
// After Close is called, the Decompressor must not be used again.
Close()
}
Decompressor is an interface for decompressing data. An instance is associated with a specific Algorithm.
func GetDecompressor ¶
func GetDecompressor(a Algorithm) Decompressor