Documentation ¶
Overview ¶
Package blockstore implements a thin wrapper over a datastore, giving a clean interface for Getting and Putting block objects.
Index ¶
Constants ¶
This section is empty.
Variables ¶
var BlockPrefix = ds.NewKey("blocks")
BlockPrefix namespaces blockstore datastores.
var ErrHashMismatch = errors.New("block in storage has different hash than requested")
ErrHashMismatch is an error returned when the hash of a block is different than expected.
Functions ¶
This section is empty.
Types ¶
type Blockstore ¶
type Blockstore interface {
	DeleteBlock(context.Context, cid.Cid) error
	Has(context.Context, cid.Cid) (bool, error)
	Get(context.Context, cid.Cid) (blocks.Block, error)

	// GetSize returns the size of the block mapped to the given CID.
	GetSize(context.Context, cid.Cid) (int, error)

	// Put stores a given block in the underlying datastore.
	Put(context.Context, blocks.Block) error

	// PutMany puts a slice of blocks at the same time using batching
	// capabilities of the underlying datastore whenever possible.
	PutMany(context.Context, []blocks.Block) error

	// AllKeysChan returns a channel from which
	// the CIDs in the Blockstore can be read. It should respect
	// the given context, closing the channel if it becomes Done.
	//
	// AllKeysChan treats the underlying blockstore as a set, and returns that
	// set in full. The only guarantee is that the consumer of AKC will
	// encounter every CID in the underlying set, at least once. If the
	// underlying blockstore supports duplicate CIDs it is up to the
	// implementation to elect to return such duplicates or not. Similarly no
	// guarantees are made regarding CID ordering.
	//
	// When the underlying blockstore operates on multihashes and codec information
	// is not preserved, returned CIDs will use the Raw (0x55) codec.
	AllKeysChan(ctx context.Context) (<-chan cid.Cid, error)
}
Blockstore wraps a Datastore with block-centered methods and provides a layer of abstraction that allows different caching strategies to be added.
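For illustration, a minimal sketch of exercising the interface through the default implementation (NewBlockstore, documented below), backed by the in-memory MapDatastore from github.com/ipfs/go-datastore; the data and program structure are illustrative, not part of this package:

package main

import (
	"context"
	"fmt"

	"github.com/ipfs/boxo/blockstore"
	blocks "github.com/ipfs/go-block-format"
	ds "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
)

func main() {
	ctx := context.Background()

	// Any ds.Batching backend works; a thread-safe in-memory map datastore
	// keeps the sketch self-contained.
	bs := blockstore.NewBlockstore(dssync.MutexWrap(ds.NewMapDatastore()))

	blk := blocks.NewBlock([]byte("hello blockstore"))
	if err := bs.Put(ctx, blk); err != nil {
		panic(err)
	}

	ok, err := bs.Has(ctx, blk.Cid())
	if err != nil {
		panic(err)
	}
	got, err := bs.Get(ctx, blk.Cid())
	if err != nil {
		panic(err)
	}
	fmt.Println(ok, len(got.RawData()))
}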
func CachedBlockstore ¶
func CachedBlockstore(ctx context.Context, bs Blockstore, opts CacheOpts) (cbs Blockstore, err error)
CachedBlockstore returns a blockstore wrapped in a TwoQueueCache and then in a bloom filter cache, if the options indicate it.
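As a sketch, wrapping an existing store with the default cache sizes (the helper name newCachedStore is illustrative; DefaultCacheOpts is documented below):

func newCachedStore(ctx context.Context, bs blockstore.Blockstore) (blockstore.Blockstore, error) {
	// DefaultCacheOpts fills in reasonable sizes for the bloom filter
	// and the two-queue cache.
	return blockstore.CachedBlockstore(ctx, bs, blockstore.DefaultCacheOpts())
}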
func NewBlockstore ¶
func NewBlockstore(d ds.Batching, opts ...Option) Blockstore
NewBlockstore returns a default Blockstore implementation using the provided datastore.Batching backend.
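Reusing the imports from the sketch above, enumerating the stored CIDs might look like the following (listCIDs is an illustrative helper, not part of the package):

func listCIDs(ctx context.Context, bs blockstore.Blockstore) error {
	ch, err := bs.AllKeysChan(ctx)
	if err != nil {
		return err
	}
	// The channel is closed once the set is exhausted or ctx is done;
	// no ordering or de-duplication guarantees apply.
	for c := range ch {
		fmt.Println(c)
	}
	return ctx.Err()
}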
func NewBlockstoreNoPrefix ¶ deprecated
func NewBlockstoreNoPrefix(d ds.Batching) Blockstore
NewBlockstoreNoPrefix returns a default Blockstore implementation using the provided datastore.Batching backend. This constructor does not modify input keys in any way.
Deprecated: Use NewBlockstore with the NoPrefix option instead.
func NewIdStore ¶
func NewIdStore(bs Blockstore) Blockstore
NewIdStore returns a blockstore that serves blocks whose CIDs carry an identity (inlined) multihash directly from the CID itself, delegating all other requests to the given Blockstore.
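A sketch of fetching an inlined block, assuming the identity short-circuit described above and the go-cid and go-multihash packages from the wider IPFS stack (github.com/ipfs/go-cid, github.com/multiformats/go-multihash); getInlined is an illustrative helper:

func getInlined(ctx context.Context, bs blockstore.Blockstore) error {
	idbs := blockstore.NewIdStore(bs)

	// Build a CID whose multihash is the identity function, so the payload
	// is carried inside the CID itself.
	mhash, err := multihash.Sum([]byte("inlined data"), multihash.IDENTITY, -1)
	if err != nil {
		return err
	}
	c := cid.NewCidV1(cid.Raw, mhash)

	blk, err := idbs.Get(ctx, c) // expected to be served from the CID, not the datastore
	if err != nil {
		return err
	}
	fmt.Printf("%s\n", blk.RawData())
	return nil
}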
type CacheOpts ¶
type CacheOpts struct {
	HasBloomFilterSize   int // 1 byte
	HasBloomFilterHashes int // No size, 7 is usually best, consult bloom papers
	HasTwoQueueCacheSize int // 32 bytes
}
CacheOpts wraps options for CachedBlockstore(). Next to each option is its approximate memory usage per unit.
func DefaultCacheOpts ¶
func DefaultCacheOpts() CacheOpts
DefaultCacheOpts returns a CacheOpts initialized with default values.
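A sketch of tuning the options before wrapping a store; the specific sizes are illustrative, and disabling the bloom layer by zeroing both bloom fields is an assumption about how CachedBlockstore interprets them:

func newTunedCache(ctx context.Context, bs blockstore.Blockstore) (blockstore.Blockstore, error) {
	opts := blockstore.DefaultCacheOpts()
	// Assumption: zero bloom-filter values disable the bloom layer entirely.
	opts.HasBloomFilterSize = 0
	opts.HasBloomFilterHashes = 0
	opts.HasTwoQueueCacheSize = 64 << 10 // room for roughly 64k CIDs
	return blockstore.CachedBlockstore(ctx, bs, opts)
}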
type GCBlockstore ¶
type GCBlockstore interface {
	Blockstore
	GCLocker
}
GCBlockstore is a blockstore that can safely run garbage-collection operations.
func NewGCBlockstore ¶
func NewGCBlockstore(bs Blockstore, gcl GCLocker) GCBlockstore
NewGCBlockstore returns a default implementation of GCBlockstore using the given Blockstore and GCLocker.
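For illustration, a writer that intends to pin might hold a PinLock around the put+pin sequence on a store built with NewGCBlockstore(bs, NewGCLocker()); pinning itself is outside this package, putAndPin is an illustrative helper, and the sketch assumes the returned Unlocker exposes Unlock(context.Context):

func putAndPin(ctx context.Context, gcbs blockstore.GCBlockstore, blk blocks.Block) error {
	unlocker := gcbs.PinLock(ctx) // keeps a GC pass from starting mid-sequence
	defer unlocker.Unlock(ctx)

	if err := gcbs.Put(ctx, blk); err != nil {
		return err
	}
	// ... pin blk.Cid() with whatever pinner the application uses ...
	return nil
}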
type GCLocker ¶
type GCLocker interface {
	// GCLock locks the blockstore for garbage collection. No operations
	// that expect to finish with a pin should occur simultaneously.
	// Reading during GC is safe, and requires no lock.
	GCLock(context.Context) Unlocker

	// PinLock locks the blockstore for sequences of puts expected to finish
	// with a pin (before GC). Multiple put->pin sequences can write through
	// at the same time, but no GC should happen simultaneously.
	// Reading during Pinning is safe, and requires no lock.
	PinLock(context.Context) Unlocker

	// GCRequested returns true if GCLock has been called and is waiting to
	// take the lock.
	GCRequested(context.Context) bool
}
GCLocker abstracts the functionality needed to lock a blockstore while performing garbage-collection operations.
func NewGCLocker ¶
func NewGCLocker() GCLocker
NewGCLocker returns a default implementation of GCLocker using standard [RW] mutexes.
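The collector side of the same locking pattern might look like the following sketch (runGC is an illustrative helper; the GC walk itself is outside this package):

func runGC(ctx context.Context, gcbs blockstore.GCBlockstore) {
	unlocker := gcbs.GCLock(ctx) // waits for in-flight put->pin sequences
	defer unlocker.Unlock(ctx)

	// ... walk the blockstore and delete unpinned blocks; reads need no lock ...
}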
type Option ¶
type Option struct {
	// contains filtered or unexported fields
}
Option is an option for the default Blockstore implementation.
func NoPrefix ¶
func NoPrefix() Option
NoPrefix avoids wrapping the blockstore into the BlockPrefix namespace ("/blocks"), so keys will not be modified in any way.
func Provider ¶ added in v0.34.0
func Provider(provider provider.MultihashProvider) Option
Provider allows performing a StartProvide operation for every block written.
func WriteThrough ¶
When enabled, WriteThrough skips checking whether the blockstore already has a block before writing it.
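A sketch of composing options at construction time; WriteThrough is shown in a zero-argument form on the assumption that it follows the same pattern as NoPrefix (its signature is not shown above), and Provider is omitted because it requires a provider.MultihashProvider value from elsewhere in boxo:

func newRawKeyStore(d ds.Batching) blockstore.Blockstore {
	return blockstore.NewBlockstore(
		d,
		blockstore.NoPrefix(),     // keep keys exactly as given, without "/blocks"
		blockstore.WriteThrough(), // skip the Has check before each Put
	)
}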
type ValidatingBlockstore ¶ added in v0.34.0
type ValidatingBlockstore struct {
	Blockstore
}
ValidatingBlockstore validates blocks on get.
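A sketch of wrapping a store for validated reads; constructing it directly via a composite literal is an assumption based on the exported embedded field, and getValidated is an illustrative helper:

func getValidated(ctx context.Context, bs blockstore.Blockstore, c cid.Cid) (blocks.Block, error) {
	vbs := &blockstore.ValidatingBlockstore{Blockstore: bs}
	// On a mismatch between the stored bytes and c, Get is expected to fail
	// (see ErrHashMismatch above) instead of returning corrupt data.
	return vbs.Get(ctx, c)
}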
type Viewer ¶
type Viewer interface {
	View(ctx context.Context, cid cid.Cid, callback func([]byte) error) error
}
Viewer can be implemented by blockstores that offer zero-copy access to values.
Callers of View must not mutate or retain the byte slice, as it could be an mmapped memory region, or a pooled byte buffer.
View is especially suitable for deserialising in place.
The callback will be called only if the query operation is successful and the block is found; otherwise, the error will be propagated. Errors returned by the callback will be propagated as well.
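A sketch of preferring zero-copy access when the underlying store supports it, falling back to a copying Get otherwise (blockLen is an illustrative helper):

func blockLen(ctx context.Context, bs blockstore.Blockstore, c cid.Cid) (int, error) {
	if v, ok := bs.(blockstore.Viewer); ok {
		var n int
		err := v.View(ctx, c, func(data []byte) error {
			// data must not be retained or mutated outside this callback.
			n = len(data)
			return nil
		})
		return n, err
	}
	// Fall back to a copying Get for blockstores without zero-copy support.
	blk, err := bs.Get(ctx, c)
	if err != nil {
		return 0, err
	}
	return len(blk.RawData()), nil
}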