Documentation

Index

Constants
const (
	DefaultTableSize = 16381

	// seed=$(head -c12 /dev/urandom | base64 -w0)
	DefaultHashSeed = "JLfvgnHc2kaSUFaI"

	MaglevTableSizeName = "bpf-lb-maglev-table-size"
	MaglevHashSeedName  = "bpf-lb-maglev-hash-seed"
)
Variables
var Cell = cell.Module(
	"maglev",
	"Maglev table computations",

	cell.Config(DefaultUserConfig),
	cell.Provide(
		New,
		UserConfig.ToConfig,
	),
)
var DefaultConfig, _ = DefaultUserConfig.ToConfig()
DefaultConfig is the default maglev configuration for testing.
var DefaultUserConfig = UserConfig{
	TableSize: DefaultTableSize,
	HashSeed:  DefaultHashSeed,
}
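Cell can be consumed from a hive application to obtain a *Maglev. The following is a minimal wiring sketch; the hive import paths and the hive.New/cell.Invoke API are assumptions based on github.com/cilium/hive and are not part of this package's documentation:

// Assumed imports: github.com/cilium/hive and github.com/cilium/hive/cell.
app := hive.New(
	maglev.Cell,
	// Invoke forces construction of *Maglev (provided by New above)
	// so it can be used once the hive is run.
	cell.Invoke(func(ml *maglev.Maglev) {
		// ml.GetLookupTable(...) is now available.
	}),
)
_ = app // started with the hive's Run method in a real program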
Functions
This section is empty.
Types

type BackendInfo (added in v1.17.0)
type BackendInfo struct {
	ID     loadbalancer.BackendID
	Addr   loadbalancer.L3n4Addr
	Weight uint16
	// contains filtered or unexported fields
}
BackendInfo describes the backend information relevant for the maglev computation.
type Config (added in v1.17.0)
type Config struct {
	// Maglev backend table size (M) per service. Must be a prime number.
	// "Let N be the size of a VIP's backend pool." [...] "In practice, we choose M to be
	// larger than 100 x N to ensure at most a 1% difference in hash space assigned to
	// backends." (from the Maglev paper, page 6)
	TableSize uint

	// HashSeed contains the cluster-wide seed for the hash(es).
	HashSeed string

	SeedJhash0 uint32
	SeedJhash1 uint32
	SeedMurmur uint32
}
Config is the maglev configuration derived from the user configuration.
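Following the paper's guidance quoted in the struct comment, a deployment can size the table as the smallest prime above 100 x N. The helper below is hypothetical (not part of this package) and only illustrates the arithmetic:

// nextPrime returns the smallest prime >= n (trial division, which is
// fine for the sizes involved here). Hypothetical helper for picking
// a TableSize value.
func nextPrime(n uint) uint {
	isPrime := func(v uint) bool {
		if v < 2 {
			return false
		}
		for d := uint(2); d*d <= v; d++ {
			if v%d == 0 {
				return false
			}
		}
		return true
	}
	for !isPrime(n) {
		n++
	}
	return n
}

For example, with N = 150 backends, nextPrime(100*150) returns 15013, a prime just above 100 x N.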
type Maglev (added in v1.17.0)
type Maglev struct {
	Config
	// contains filtered or unexported fields
}
func (*Maglev) GetLookupTable (added in v1.17.0)
func (ml *Maglev) GetLookupTable(backends iter.Seq[BackendInfo]) []loadbalancer.BackendID
GetLookupTable returns the Maglev lookup table for the given backends. The lookup table contains the IDs of the given backends.
The Maglev algorithm might produce a different lookup table for the same set of backends if they are listed in a different order. To avoid that, the backends are sorted by their hash, which is the same on all nodes (as opposed to backend IDs, which are node-local).
The weights implementation is inspired by https://github.com/envoyproxy/envoy/pull/2982.
A backend's weight is honored by altering how often that backend's turn is selected. On each turn, the backend's weight is multiplied by (n + 1) and compared to weightCntr[index], a counter that grows in increments of weightSum (but starts at the backend's weight divided by the number of backends, so that each backend is selected at least once). If the product is lower than weightCntr[index], another backend takes the turn instead (and the counter of the backend that takes it is incremented). This way the weights are honored.
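As a usage sketch, an iter.Seq[BackendInfo] can be produced from a slice with slices.Values (Go 1.23+). The cilium import paths are assumptions, and the construction of *Maglev via New is omitted here:

// lookupFor computes the lookup table for a fixed list of backends.
// Assumed imports: slices, github.com/cilium/cilium/pkg/maglev and
// github.com/cilium/cilium/pkg/loadbalancer.
func lookupFor(ml *maglev.Maglev, infos []maglev.BackendInfo) []loadbalancer.BackendID {
	// slices.Values adapts the slice into the iter.Seq[BackendInfo]
	// consumed by GetLookupTable.
	return ml.GetLookupTable(slices.Values(infos))
}

The turn-skipping scheme can also be illustrated in isolation. The sketch below is a simplified, self-contained reading of the counter mechanism described above; it is not this package's implementation, which interleaves the check with the walk over the Maglev permutations:

// chooseTurns runs m turns of the weighted selection and returns how many
// turns each backend received; shares converge to weight/weightSum.
func chooseTurns(weights []uint64, m uint64) []uint64 {
	l := uint64(len(weights))
	var weightSum float64
	for _, w := range weights {
		weightSum += float64(w)
	}
	// Each counter starts at weight/len(backends) so that every backend
	// is selected at least once.
	weightCntr := make([]float64, l)
	for i, w := range weights {
		weightCntr[i] = float64(w) / float64(l)
	}
	turns := make([]uint64, l)
	for n := uint64(0); n < m; n++ {
		i := n % l
		// Pass the turn along while the candidate's weight*(n+1) is below
		// its counter, i.e. it has already consumed its share of turns.
		for float64(weights[i])*float64(n+1) < weightCntr[i] {
			i = (i + 1) % l
		}
		weightCntr[i] += weightSum
		turns[i]++
	}
	return turns
}

For example, chooseTurns([]uint64{1, 3}, 16) returns [4 12]: an exact 1:3 split of the 16 turns.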
type UserConfig (added in v1.18.0)
type UserConfig struct {
	// Maglev backend table size (M) per service. Must be a prime number.
	// "Let N be the size of a VIP's backend pool." [...] "In practice, we choose M to be
	// larger than 100 x N to ensure at most a 1% difference in hash space assigned to
	// backends." (from the Maglev paper, page 6)
	TableSize uint `mapstructure:"bpf-lb-maglev-table-size"`

	// HashSeed contains the cluster-wide seed for the hash(es).
	HashSeed string `mapstructure:"bpf-lb-maglev-hash-seed"`
}
UserConfig is the user-facing configuration, i.e. the command-line flags.
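A minimal end-to-end sketch of overriding the defaults and deriving a Config by hand (the import path github.com/cilium/cilium/pkg/maglev is an assumption):

package main

import (
	"fmt"
	"log"

	"github.com/cilium/cilium/pkg/maglev" // assumed import path
)

func main() {
	userCfg := maglev.UserConfig{
		TableSize: 65521, // must be prime; 65521 is the largest prime below 2^16
		HashSeed:  maglev.DefaultHashSeed,
	}
	// ToConfig validates the user input and derives the hash seeds;
	// invalid input (presumably, e.g. a non-prime table size or a
	// malformed seed) is reported via the error.
	cfg, err := userCfg.ToConfig()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.TableSize, cfg.SeedMurmur)
}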
func (UserConfig) Flags (added in v1.18.0)
func (def UserConfig) Flags(flags *pflag.FlagSet)
func (UserConfig) ToConfig (added in v1.18.0)
func (userCfg UserConfig) ToConfig() (Config, error)
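ToConfig derives Config (including the three 32-bit hash seeds) from the user-facing fields. Given that DefaultHashSeed is documented above as 12 random bytes encoded in base64, and Config carries exactly three uint32 seeds, one plausible sketch of that derivation follows; the actual byte order, validation, and error messages used by ToConfig may differ:

// deriveSeeds is a hypothetical illustration: it splits the base64-decoded
// 12-byte seed into three 32-bit values matching SeedJhash0, SeedJhash1 and
// SeedMurmur. The little-endian byte order is an assumption.
// Assumed imports: encoding/base64, encoding/binary, fmt.
func deriveSeeds(hashSeed string) (jhash0, jhash1, murmur uint32, err error) {
	d, err := base64.StdEncoding.DecodeString(hashSeed)
	if err != nil {
		return 0, 0, 0, err
	}
	if len(d) != 12 {
		return 0, 0, 0, fmt.Errorf("hash seed must decode to 12 bytes, got %d", len(d))
	}
	return binary.LittleEndian.Uint32(d[0:4]),
		binary.LittleEndian.Uint32(d[4:8]),
		binary.LittleEndian.Uint32(d[8:12]),
		nil
}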