dataloader

package v0.3.0

Published: May 23, 2024 License: Apache-2.0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AddressBalanceLoader

type AddressBalanceLoader struct {
	// contains filtered or unexported fields
}

AddressBalanceLoader batches and caches requests

func NewAddressBalanceLoader

func NewAddressBalanceLoader(config AddressBalanceLoaderConfig) *AddressBalanceLoader

NewAddressBalanceLoader creates a new AddressBalanceLoader given a fetch, wait, and maxBatch

func (*AddressBalanceLoader) Clear

func (l *AddressBalanceLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*AddressBalanceLoader) Load

func (l *AddressBalanceLoader) Load(key string) (int64, error)

Load an int64 by key; batching and caching will be applied automatically

func (*AddressBalanceLoader) LoadAll

func (l *AddressBalanceLoader) LoadAll(keys []string) ([]int64, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured
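
As a quick illustration of per-key error handling with LoadAll, here is a minimal sketch; it assumes an already-constructed *AddressBalanceLoader (see the example under AddressBalanceLoaderConfig), and sumBalances is an illustrative name, not part of the API.

package dataloader

// sumBalances is a hypothetical helper: LoadAll returns one balance and one
// error slot per key, in the same order as the input keys.
func sumBalances(l *AddressBalanceLoader, addrs []string) (int64, error) {
	balances, errs := l.LoadAll(addrs)
	var total int64
	for i, err := range errs {
		if err != nil {
			return 0, err // first per-key error wins in this sketch
		}
		total += balances[i]
	}
	return total, nil
}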

func (*AddressBalanceLoader) LoadAllThunk

func (l *AddressBalanceLoader) LoadAllThunk(keys []string) func() ([]int64, []error)

LoadAllThunk returns a function that when called will block waiting for the int64s. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*AddressBalanceLoader) LoadThunk

func (l *AddressBalanceLoader) LoadThunk(key string) func() (int64, error)

LoadThunk returns a function that when called will block waiting for an int64. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.
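
A minimal sketch of the thunk pattern, assuming it sits in the dataloader package; resolveBalances is an illustrative name, not part of the API. Each LoadThunk call only queues its key, so one goroutine can enqueue many keys before blocking on any result.

package dataloader

// resolveBalances queues every key without blocking, letting the loader
// group them into batches, then resolves the thunks once all keys are queued.
func resolveBalances(l *AddressBalanceLoader, addrs []string) ([]int64, []error) {
	thunks := make([]func() (int64, error), len(addrs))
	for i, addr := range addrs {
		thunks[i] = l.LoadThunk(addr) // queues the key; does not block
	}

	balances := make([]int64, len(addrs))
	errs := make([]error, len(addrs))
	for i, thunk := range thunks {
		balances[i], errs[i] = thunk() // blocks until the batch containing this key has been fetched
	}
	return balances, errs
}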

func (*AddressBalanceLoader) Prime

func (l *AddressBalanceLoader) Prime(key string, value int64) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)
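
For example, a short sketch of refreshing a cached entry (refreshBalance is a hypothetical helper name): Clear removes any stale value so the subsequent Prime takes effect.

package dataloader

// refreshBalance forcefully replaces a cached balance: Clear drops any
// existing entry for the key, so the Prime that follows always succeeds.
func refreshBalance(l *AddressBalanceLoader, addr string, balance int64) {
	l.Clear(addr)
	l.Prime(addr, balance)
}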

type AddressBalanceLoaderConfig

type AddressBalanceLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]int64, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

AddressBalanceLoaderConfig captures the config to create a new AddressBalanceLoader
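
As a rough sketch of how the config fields fit together (the fetch body and the 2ms/100 values are placeholders, not recommendations):

package dataloader

import "time"

// newBalanceLoader wires a placeholder batch fetch into a loader. The fetch
// must return one balance and one error slot per key, in key order.
func newBalanceLoader() *AddressBalanceLoader {
	return NewAddressBalanceLoader(AddressBalanceLoaderConfig{
		Fetch: func(addrs []string) ([]int64, []error) {
			balances := make([]int64, len(addrs))
			errs := make([]error, len(addrs))
			for i := range addrs {
				balances[i] = 0 // placeholder: resolve addrs[i] with one batched query
			}
			return balances, errs
		},
		Wait:     2 * time.Millisecond, // gather keys for up to 2ms before fetching
		MaxBatch: 100,                  // 0 would mean no limit
	})
}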

type BlockLoader

type BlockLoader struct {
	// contains filtered or unexported fields
}

BlockLoader batches and caches requests

func NewBlockLoader

func NewBlockLoader(config BlockLoaderConfig) *BlockLoader

NewBlockLoader creates a new BlockLoader given a fetch, wait, and maxBatch

func (*BlockLoader) Clear

func (l *BlockLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*BlockLoader) Load

func (l *BlockLoader) Load(key string) ([]*model.Block, error)

Load a slice of Blocks by key; batching and caching will be applied automatically

func (*BlockLoader) LoadAll

func (l *BlockLoader) LoadAll(keys []string) ([][]*model.Block, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*BlockLoader) LoadAllThunk

func (l *BlockLoader) LoadAllThunk(keys []string) func() ([][]*model.Block, []error)

LoadAllThunk returns a function that when called will block waiting for the Blocks. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*BlockLoader) LoadThunk

func (l *BlockLoader) LoadThunk(key string) func() ([]*model.Block, error)

LoadThunk returns a function that when called will block waiting for the Blocks. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*BlockLoader) Prime

func (l *BlockLoader) Prime(key string, value []*model.Block) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type BlockLoaderConfig

type BlockLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([][]*model.Block, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

BlockLoaderConfig captures the config to create a new BlockLoader

type PostLoader

type PostLoader struct {
	// contains filtered or unexported fields
}

PostLoader batches and caches requests

func NewPostLoader

func NewPostLoader(config PostLoaderConfig) *PostLoader

NewPostLoader creates a new PostLoader given a fetch, wait, and maxBatch

func (*PostLoader) Clear

func (l *PostLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*PostLoader) Load

func (l *PostLoader) Load(key string) (*model.Post, error)

Load a Post by key; batching and caching will be applied automatically

func (*PostLoader) LoadAll

func (l *PostLoader) LoadAll(keys []string) ([]*model.Post, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*PostLoader) LoadAllThunk

func (l *PostLoader) LoadAllThunk(keys []string) func() ([]*model.Post, []error)

LoadAllThunk returns a function that when called will block waiting for the Posts. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PostLoader) LoadThunk

func (l *PostLoader) LoadThunk(key string) func() (*model.Post, error)

LoadThunk returns a function that when called will block waiting for a Post. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PostLoader) Prime

func (l *PostLoader) Prime(key string, value *model.Post) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type PostLoaderConfig

type PostLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]*model.Post, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

PostLoaderConfig captures the config to create a new PostLoader
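
Loaders like this are usually built per request so the cache only lives for that request. The middleware below is a common pattern around generated dataloaders rather than part of this package; the context key and function names are illustrative assumptions.

package dataloader

import (
	"context"
	"net/http"
)

type ctxKey string

const postLoaderKey ctxKey = "postLoader"

// WithPostLoader builds a fresh PostLoader for every request via newLoader
// and stores it on the request context, giving each request its own cache.
func WithPostLoader(newLoader func() *PostLoader, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), postLoaderKey, newLoader())
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// PostLoaderFor returns the request-scoped loader, e.g. inside a resolver.
func PostLoaderFor(ctx context.Context) *PostLoader {
	l, _ := ctx.Value(postLoaderKey).(*PostLoader)
	return l
}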

type ProfileLoader

type ProfileLoader struct {
	// contains filtered or unexported fields
}

ProfileLoader batches and caches requests

func NewProfileLoader

func NewProfileLoader(config ProfileLoaderConfig) *ProfileLoader

NewProfileLoader creates a new ProfileLoader given a fetch, wait, and maxBatch

func (*ProfileLoader) Clear

func (l *ProfileLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*ProfileLoader) Load

func (l *ProfileLoader) Load(key string) (*model.Profile, error)

Load a Profile by key; batching and caching will be applied automatically

func (*ProfileLoader) LoadAll

func (l *ProfileLoader) LoadAll(keys []string) ([]*model.Profile, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ProfileLoader) LoadAllThunk

func (l *ProfileLoader) LoadAllThunk(keys []string) func() ([]*model.Profile, []error)

LoadAllThunk returns a function that when called will block waiting for the Profiles. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ProfileLoader) LoadThunk

func (l *ProfileLoader) LoadThunk(key string) func() (*model.Profile, error)

LoadThunk returns a function that when called will block waiting for a Profile. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ProfileLoader) Prime

func (l *ProfileLoader) Prime(key string, value *model.Profile) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type ProfileLoaderConfig

type ProfileLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]*model.Profile, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

ProfileLoaderConfig captures the config to create a new ProfileLoader

type SlpBatonLoader

type SlpBatonLoader struct {
	// contains filtered or unexported fields
}

SlpBatonLoader batches and caches requests

func NewSlpBatonLoader

func NewSlpBatonLoader(config SlpBatonLoaderConfig) *SlpBatonLoader

NewSlpBatonLoader creates a new SlpBatonLoader given a fetch, wait, and maxBatch

func (*SlpBatonLoader) Clear

func (l *SlpBatonLoader) Clear(key model.HashIndex)

Clear the value at key from the cache, if it exists

func (*SlpBatonLoader) Load

func (l *SlpBatonLoader) Load(key model.HashIndex) (*model.SlpBaton, error)

Load a SlpBaton by key; batching and caching will be applied automatically

func (*SlpBatonLoader) LoadAll

func (l *SlpBatonLoader) LoadAll(keys []model.HashIndex) ([]*model.SlpBaton, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SlpBatonLoader) LoadAllThunk

func (l *SlpBatonLoader) LoadAllThunk(keys []model.HashIndex) func() ([]*model.SlpBaton, []error)

LoadAllThunk returns a function that when called will block waiting for the SlpBatons. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpBatonLoader) LoadThunk

func (l *SlpBatonLoader) LoadThunk(key model.HashIndex) func() (*model.SlpBaton, error)

LoadThunk returns a function that when called will block waiting for a SlpBaton. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpBatonLoader) Prime

func (l *SlpBatonLoader) Prime(key model.HashIndex, value *model.SlpBaton) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type SlpBatonLoaderConfig

type SlpBatonLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []model.HashIndex) ([]*model.SlpBaton, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SlpBatonLoaderConfig captures the config to create a new SlpBatonLoader

type SlpGenesisLoader

type SlpGenesisLoader struct {
	// contains filtered or unexported fields
}

SlpGenesisLoader batches and caches requests

func NewSlpGenesisLoader

func NewSlpGenesisLoader(config SlpGenesisLoaderConfig) *SlpGenesisLoader

NewSlpGenesisLoader creates a new SlpGenesisLoader given a fetch, wait, and maxBatch

func (*SlpGenesisLoader) Clear

func (l *SlpGenesisLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*SlpGenesisLoader) Load

func (l *SlpGenesisLoader) Load(key string) (*model.SlpGenesis, error)

Load a SlpGenesis by key; batching and caching will be applied automatically

func (*SlpGenesisLoader) LoadAll

func (l *SlpGenesisLoader) LoadAll(keys []string) ([]*model.SlpGenesis, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SlpGenesisLoader) LoadAllThunk

func (l *SlpGenesisLoader) LoadAllThunk(keys []string) func() ([]*model.SlpGenesis, []error)

LoadAllThunk returns a function that when called will block waiting for the SlpGenesis values. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpGenesisLoader) LoadThunk

func (l *SlpGenesisLoader) LoadThunk(key string) func() (*model.SlpGenesis, error)

LoadThunk returns a function that when called will block waiting for a SlpGenesis. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpGenesisLoader) Prime

func (l *SlpGenesisLoader) Prime(key string, value *model.SlpGenesis) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type SlpGenesisLoaderConfig

type SlpGenesisLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]*model.SlpGenesis, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SlpGenesisLoaderConfig captures the config to create a new SlpGenesisLoader

type SlpOutputLoader

type SlpOutputLoader struct {
	// contains filtered or unexported fields
}

SlpOutputLoader batches and caches requests

func NewSlpOutputLoader

func NewSlpOutputLoader(config SlpOutputLoaderConfig) *SlpOutputLoader

NewSlpOutputLoader creates a new SlpOutputLoader given a fetch, wait, and maxBatch

func (*SlpOutputLoader) Clear

func (l *SlpOutputLoader) Clear(key model.HashIndex)

Clear the value at key from the cache, if it exists

func (*SlpOutputLoader) Load

func (l *SlpOutputLoader) Load(key model.HashIndex) (*model.SlpOutput, error)

Load a SlpOutput by key; batching and caching will be applied automatically

func (*SlpOutputLoader) LoadAll

func (l *SlpOutputLoader) LoadAll(keys []model.HashIndex) ([]*model.SlpOutput, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SlpOutputLoader) LoadAllThunk

func (l *SlpOutputLoader) LoadAllThunk(keys []model.HashIndex) func() ([]*model.SlpOutput, []error)

LoadAllThunk returns a function that when called will block waiting for the SlpOutputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpOutputLoader) LoadThunk

func (l *SlpOutputLoader) LoadThunk(key model.HashIndex) func() (*model.SlpOutput, error)

LoadThunk returns a function that when called will block waiting for a SlpOutput. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SlpOutputLoader) Prime

func (l *SlpOutputLoader) Prime(key model.HashIndex, value *model.SlpOutput) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type SlpOutputLoaderConfig

type SlpOutputLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []model.HashIndex) ([]*model.SlpOutput, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SlpOutputLoaderConfig captures the config to create a new SlpOutputLoader

type TxInputsLoader

type TxInputsLoader struct {
	// contains filtered or unexported fields
}

TxInputsLoader batches and caches requests

func NewTxInputsLoader

func NewTxInputsLoader(config TxInputsLoaderConfig) *TxInputsLoader

NewTxInputsLoader creates a new TxInputsLoader given a fetch, wait, and maxBatch

func (*TxInputsLoader) Clear

func (l *TxInputsLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*TxInputsLoader) Load

func (l *TxInputsLoader) Load(key string) ([]*model.TxInput, error)

Load a slice of TxInputs by key; batching and caching will be applied automatically

func (*TxInputsLoader) LoadAll

func (l *TxInputsLoader) LoadAll(keys []string) ([][]*model.TxInput, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxInputsLoader) LoadAllThunk

func (l *TxInputsLoader) LoadAllThunk(keys []string) func() ([][]*model.TxInput, []error)

LoadAllThunk returns a function that when called will block waiting for the TxInputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxInputsLoader) LoadThunk

func (l *TxInputsLoader) LoadThunk(key string) func() ([]*model.TxInput, error)

LoadThunk returns a function that when called will block waiting for the TxInputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxInputsLoader) Prime

func (l *TxInputsLoader) Prime(key string, value []*model.TxInput) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxInputsLoaderConfig

type TxInputsLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([][]*model.TxInput, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxInputsLoaderConfig captures the config to create a new TxInputsLoader

type TxOutputLoader

type TxOutputLoader struct {
	// contains filtered or unexported fields
}

TxOutputLoader batches and caches requests

func NewTxOutputLoader

func NewTxOutputLoader(config TxOutputLoaderConfig) *TxOutputLoader

NewTxOutputLoader creates a new TxOutputLoader given a fetch, wait, and maxBatch

func (*TxOutputLoader) Clear

func (l *TxOutputLoader) Clear(key model.HashIndex)

Clear the value at key from the cache, if it exists

func (*TxOutputLoader) Load

func (l *TxOutputLoader) Load(key model.HashIndex) (*model.TxOutput, error)

Load a TxOutput by key; batching and caching will be applied automatically

func (*TxOutputLoader) LoadAll

func (l *TxOutputLoader) LoadAll(keys []model.HashIndex) ([]*model.TxOutput, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxOutputLoader) LoadAllThunk

func (l *TxOutputLoader) LoadAllThunk(keys []model.HashIndex) func() ([]*model.TxOutput, []error)

LoadAllThunk returns a function that when called will block waiting for the TxOutputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputLoader) LoadThunk

func (l *TxOutputLoader) LoadThunk(key model.HashIndex) func() (*model.TxOutput, error)

LoadThunk returns a function that when called will block waiting for a TxOutput. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputLoader) Prime

func (l *TxOutputLoader) Prime(key model.HashIndex, value *model.TxOutput) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxOutputLoaderConfig

type TxOutputLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []model.HashIndex) ([]*model.TxOutput, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxOutputLoaderConfig captures the config to create a new TxOutputLoader

type TxOutputSpendLoader

type TxOutputSpendLoader struct {
	// contains filtered or unexported fields
}

TxOutputSpendLoader batches and caches requests

func NewTxOutputSpendLoader

func NewTxOutputSpendLoader(config TxOutputSpendLoaderConfig) *TxOutputSpendLoader

NewTxOutputSpendLoader creates a new TxOutputSpendLoader given a fetch, wait, and maxBatch

func (*TxOutputSpendLoader) Clear

func (l *TxOutputSpendLoader) Clear(key model.HashIndex)

Clear the value at key from the cache, if it exists

func (*TxOutputSpendLoader) Load

func (l *TxOutputSpendLoader) Load(key model.HashIndex) ([]*model.TxInput, error)

Load a slice of TxInputs by key; batching and caching will be applied automatically

func (*TxOutputSpendLoader) LoadAll

func (l *TxOutputSpendLoader) LoadAll(keys []model.HashIndex) ([][]*model.TxInput, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxOutputSpendLoader) LoadAllThunk

func (l *TxOutputSpendLoader) LoadAllThunk(keys []model.HashIndex) func() ([][]*model.TxInput, []error)

LoadAllThunk returns a function that when called will block waiting for the TxInputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputSpendLoader) LoadThunk

func (l *TxOutputSpendLoader) LoadThunk(key model.HashIndex) func() ([]*model.TxInput, error)

LoadThunk returns a function that when called will block waiting for the TxInputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputSpendLoader) Prime

func (l *TxOutputSpendLoader) Prime(key model.HashIndex, value []*model.TxInput) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxOutputSpendLoaderConfig

type TxOutputSpendLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []model.HashIndex) ([][]*model.TxInput, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxOutputSpendLoaderConfig captures the config to create a new TxOutputSpendLoader

type TxOutputsLoader

type TxOutputsLoader struct {
	// contains filtered or unexported fields
}

TxOutputsLoader batches and caches requests

func NewTxOutputsLoader

func NewTxOutputsLoader(config TxOutputsLoaderConfig) *TxOutputsLoader

NewTxOutputsLoader creates a new TxOutputsLoader given a fetch, wait, and maxBatch

func (*TxOutputsLoader) Clear

func (l *TxOutputsLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*TxOutputsLoader) Load

func (l *TxOutputsLoader) Load(key string) ([]*model.TxOutput, error)

Load a slice of TxOutputs by key; batching and caching will be applied automatically

func (*TxOutputsLoader) LoadAll

func (l *TxOutputsLoader) LoadAll(keys []string) ([][]*model.TxOutput, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxOutputsLoader) LoadAllThunk

func (l *TxOutputsLoader) LoadAllThunk(keys []string) func() ([][]*model.TxOutput, []error)

LoadAllThunk returns a function that when called will block waiting for the TxOutputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputsLoader) LoadThunk

func (l *TxOutputsLoader) LoadThunk(key string) func() ([]*model.TxOutput, error)

LoadThunk returns a function that when called will block waiting for the TxOutputs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxOutputsLoader) Prime

func (l *TxOutputsLoader) Prime(key string, value []*model.TxOutput) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxOutputsLoaderConfig

type TxOutputsLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([][]*model.TxOutput, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxOutputsLoaderConfig captures the config to create a new TxOutputsLoader

type TxRawLoader

type TxRawLoader struct {
	// contains filtered or unexported fields
}

TxRawLoader batches and caches requests

func NewTxRawLoader

func NewTxRawLoader(config TxRawLoaderConfig) *TxRawLoader

NewTxRawLoader creates a new TxRawLoader given a fetch, wait, and maxBatch

func (*TxRawLoader) Clear

func (l *TxRawLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*TxRawLoader) Load

func (l *TxRawLoader) Load(key string) (*model.Tx, error)

Load a Tx by key; batching and caching will be applied automatically

func (*TxRawLoader) LoadAll

func (l *TxRawLoader) LoadAll(keys []string) ([]*model.Tx, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxRawLoader) LoadAllThunk

func (l *TxRawLoader) LoadAllThunk(keys []string) func() ([]*model.Tx, []error)

LoadAllThunk returns a function that when called will block waiting for the Txs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxRawLoader) LoadThunk

func (l *TxRawLoader) LoadThunk(key string) func() (*model.Tx, error)

LoadThunk returns a function that when called will block waiting for a Tx. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxRawLoader) Prime

func (l *TxRawLoader) Prime(key string, value *model.Tx) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxRawLoaderConfig

type TxRawLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]*model.Tx, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxRawLoaderConfig captures the config to create a new TxRawLoader

type TxSeenLoader

type TxSeenLoader struct {
	// contains filtered or unexported fields
}

TxSeenLoader batches and caches requests

func NewTxSeenLoader

func NewTxSeenLoader(config TxSeenLoaderConfig) *TxSeenLoader

NewTxSeenLoader creates a new TxSeenLoader given a fetch, wait, and maxBatch

func (*TxSeenLoader) Clear

func (l *TxSeenLoader) Clear(key string)

Clear the value at key from the cache, if it exists

func (*TxSeenLoader) Load

func (l *TxSeenLoader) Load(key string) (*model.Date, error)

Load a Date by key; batching and caching will be applied automatically

func (*TxSeenLoader) LoadAll

func (l *TxSeenLoader) LoadAll(keys []string) ([]*model.Date, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TxSeenLoader) LoadAllThunk

func (l *TxSeenLoader) LoadAllThunk(keys []string) func() ([]*model.Date, []error)

LoadAllThunk returns a function that when called will block waiting for the Dates. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxSeenLoader) LoadThunk

func (l *TxSeenLoader) LoadThunk(key string) func() (*model.Date, error)

LoadThunk returns a function that when called will block waiting for a Date. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TxSeenLoader) Prime

func (l *TxSeenLoader) Prime(key string, value *model.Date) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key), then call loader.Prime(key, value).)

type TxSeenLoaderConfig

type TxSeenLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []string) ([]*model.Date, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch limits the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TxSeenLoaderConfig captures the config to create a new TxSeenLoader
