package xsync

README

Extensions to the standard library sync package

forked from puzpuzpuz/xsync v20250127 v3.5.1

Changes:

  • Keeps support for Go versions below 1.18 (this is the last version that does)
  • For Go 1.24+, prefer "github.com/fufuok/cache/xsync" or the upstream package directly


xsync

Concurrent data structures for Go. The package aims to provide more scalable alternatives to some of the data structures from the standard sync package, and more.

Covered with tests following the approach described here.

Benchmarks

Benchmark results may be found here. I'd like to thank @felixge who kindly ran the benchmarks on a beefy multicore machine.

Also, a non-scientific, unfair benchmark comparing Java's j.u.c.ConcurrentHashMap and xsync.MapOf is available here.

Usage

This fork tracks the upstream xsync v3 API but is imported via its own module path:

import (
	"github.com/fufuok/utils/xsync"
)

Note for pre-v3 users: v1 and v2 support is discontinued, so please upgrade to v3. While the API has some breaking changes, the migration should be trivial.

Counter

A Counter is a striped int64 counter inspired by the j.u.c.a.LongAdder class from the Java standard library.

c := xsync.NewCounter()
// increment and decrement the counter
c.Inc()
c.Dec()
// read the current value
v := c.Value()

It performs better than a single atomically updated int64 counter in high-contention scenarios.
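
As a minimal sketch (using sync.WaitGroup from the standard library alongside the Counter API shown above), many goroutines can update the counter without extra synchronization:

c := xsync.NewCounter()
var wg sync.WaitGroup
for i := 0; i < 8; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		// increments land on internal stripes, reducing contention
		// on any single cache line
		for j := 0; j < 1000; j++ {
			c.Inc()
		}
	}()
}
wg.Wait()
total := c.Value() // 8000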

Map

A Map is like a concurrent hash table-based map. It follows the interface of sync.Map with a number of valuable extensions like Compute or Size.

m := xsync.NewMap()
m.Store("foo", "bar")
v, ok := m.Load("foo")
s := m.Size()

Map uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with minimal cache-line transfer. Also, Get operations are obstruction-free and involve no writes to shared memory, hence no mutexes or any other sort of locks. Due to this design, in all considered scenarios Map outperforms sync.Map.

One important difference with sync.Map is that only string keys are supported. That's because the Go standard library does not expose the built-in hash functions for interface{} values.

MapOf[K, V] is an implementation with parametrized key and value types. While it's still a CLHT-inspired hash map, MapOf's design is quite different from Map. As a result, less GC pressure and fewer atomic operations on reads.

m := xsync.NewMapOf[string, string]()
m.Store("foo", "bar")
v, ok := m.Load("foo")

Apart from CLHT, MapOf borrows ideas from Java's j.u.c.ConcurrentHashMap (immutable K/V pair structs instead of atomic snapshots) and C++'s absl::flat_hash_map (meta memory and SWAR-based lookups). It also has a denser memory layout compared with Map. Long story short, MapOf should be preferred over Map when possible.

An important difference with Map is that MapOf supports arbitrary comparable key types:

type Point struct {
	x int32
	y int32
}
m := xsync.NewMapOf[Point, int]()
m.Store(Point{42, 42}, 42)
v, ok := m.Load(Point{42, 42})

Apart from the Range method available for map iteration, there are also ToPlainMap/ToPlainMapOf utility functions to convert a Map/MapOf to a built-in Go map:

m := xsync.NewMapOf[int, int]()
m.Store(42, 42)
pm := xsync.ToPlainMapOf(m)

Both Map and MapOf use Go's built-in hash function, which has DDOS protection. This means that each map instance gets its own seed number and the hash function uses that seed for hash code calculation. However, for smaller keys this hash function has some overhead. So, if you don't need DDOS protection, you may provide a custom hash function when creating a MapOf. For instance, the Murmur3 finalizer does a decent job when it comes to integers:

m := xsync.NewMapOfWithHasher[int, int](func(i int, _ uint64) uint64 {
	h := uint64(i)
	h = (h ^ (h >> 33)) * 0xff51afd7ed558ccd
	h = (h ^ (h >> 33)) * 0xc4ceb9fe1a85ec53
	return h ^ (h >> 33)
})

When benchmarking concurrent maps, make sure to configure all of the competitors with the same hash function or, at least, take hash function performance into consideration.

SPSCQueue

A SPSCQueue is a bounded single-producer single-consumer concurrent queue. This means that at most one goroutine may be publishing items to the queue while at most one goroutine may be consuming them.

q := xsync.NewSPSCQueue(1024)
// producer inserts an item into the queue
// optimistic insertion attempt; doesn't block
inserted := q.TryEnqueue("bar")
// consumer obtains an item from the queue
// optimistic obtain attempt; doesn't block
item, ok := q.TryDequeue() // interface{} pointing to a string

SPSCQueueOf[I] is an implementation with parametrized item type. It is available for Go 1.19 or later.

q := xsync.NewSPSCQueueOf[string](1024)
inserted := q.TryEnqueue("foo")
item, ok := q.TryDequeue() // string

The queue is based on the data structure from this article. The idea is to reduce the CPU cache coherency traffic by keeping cached copies of read and write indexes used by producer and consumer respectively.
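
As a hedged sketch (a fragment only; runtime.Gosched is from the standard library), a producer/consumer pair can spin on the non-blocking calls:

q := xsync.NewSPSCQueueOf[int](1024)
done := make(chan struct{})
go func() {
	// the only producer
	for i := 0; i < 100; i++ {
		for !q.TryEnqueue(i) {
			runtime.Gosched() // queue is full, yield and retry
		}
	}
	close(done)
}()
// the only consumer
for consumed := 0; consumed < 100; {
	if _, ok := q.TryDequeue(); ok {
		consumed++
	} else {
		runtime.Gosched() // queue is empty, yield and retry
	}
}
<-done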

MPMCQueue

A MPMCQueue is a bounded multi-producer multi-consumer concurrent queue.

q := xsync.NewMPMCQueue(1024)
// producer optimistically inserts an item into the queue
// optimistic insertion attempt; doesn't block
inserted := q.TryEnqueue("bar")
// consumer obtains an item from the queue
// optimistic obtain attempt; doesn't block
item, ok := q.TryDequeue() // interface{} pointing to a string

MPMCQueueOf[I] is an implementation with parametrized item type. It is available for Go 1.19 or later.

q := xsync.NewMPMCQueueOf[string](1024)
inserted := q.TryEnqueue("foo")
item, ok := q.TryDequeue() // string

The queue is based on the algorithm from the MPMCQueue C++ library which, in turn, references D. Vyukov's MPMC queue. According to the following classification, the queue is array-based, fails on overflow, provides causal FIFO, and has blocking producers and consumers.

The idea of the algorithm is to allow parallelism for concurrent producers and consumers by introducing the notion of tickets, i.e. values of two counters, one for producers and one for consumers. An atomic increment of one of those counters is the only noticeable contention point in queue operations. The rest of the operations avoid contention on writes thanks to the turn-based read/write access for each of the queue items.

In essence, MPMCQueue is a specialized queue for scenarios where there are multiple concurrent producers and consumers of a single queue running on a large multicore machine.

To get optimal performance, you may want to set the queue size large enough, say, an order of magnitude greater than the number of producers/consumers, to allow producers and consumers to progress with their queue operations in parallel most of the time.
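
The following sketch (using sync.WaitGroup and runtime.Gosched from the standard library) shows several producers and consumers sharing one generously sized queue:

q := xsync.NewMPMCQueueOf[string](1024)
var wg sync.WaitGroup
for i := 0; i < 4; i++ {
	wg.Add(1)
	go func() {
		// producer: spin on the non-blocking call when the queue is full
		defer wg.Done()
		for j := 0; j < 1000; j++ {
			for !q.TryEnqueue("item") {
				runtime.Gosched()
			}
		}
	}()
}
for i := 0; i < 4; i++ {
	wg.Add(1)
	go func() {
		// consumer: spin on the non-blocking call when the queue is empty
		defer wg.Done()
		for j := 0; j < 1000; j++ {
			for {
				if _, ok := q.TryDequeue(); ok {
					break
				}
				runtime.Gosched()
			}
		}
	}()
}
wg.Wait()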

RBMutex

A RBMutex is a reader-biased reader/writer mutual exclusion lock. The lock can be held by many readers or a single writer.

mu := xsync.NewRBMutex()
// reader lock calls return a token
t := mu.RLock()
// the token must be later used to unlock the mutex
mu.RUnlock(t)
// writer locks are the same as in sync.RWMutex
mu.Lock()
mu.Unlock()

RBMutex is based on a modified version of BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf

The idea of the algorithm is to build on top of an existing reader-writer mutex and introduce a fast path for readers. On the fast path, reader lock attempts are sharded over an internal array based on the reader identity (a token in the case of Go). This means that readers do not contend over a single atomic counter like they do in, say, sync.RWMutex, which allows for better scalability across cores.

Hence, by design, RBMutex is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquire attempts are infrequent. In such scenarios, RBMutex should perform better than sync.RWMutex on large multicore machines.

RBMutex extends sync.RWMutex internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is the reader token returned by RLock, which must later be passed to RUnlock.
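
As an illustration of the intended read-mostly usage, here is a sketch of a small cache guarded by RBMutex (the cache type and its methods are illustrative, not part of the package):

type cache struct {
	mu   *xsync.RBMutex
	data map[string]string
}

func newCache() *cache {
	return &cache{mu: xsync.NewRBMutex(), data: make(map[string]string)}
}

func (c *cache) get(key string) (string, bool) {
	t := c.mu.RLock() // reader fast path, returns a token
	v, ok := c.data[key]
	c.mu.RUnlock(t) // the token goes back to the mutex
	return v, ok
}

func (c *cache) set(key, value string) {
	c.mu.Lock() // infrequent writer path, same as sync.RWMutex
	c.data[key] = value
	c.mu.Unlock()
}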

Apart from blocking methods, RBMutex also has methods for optimistic locking:

mu := xsync.NewRBMutex()
if locked, t := mu.TryRLock(); locked {
	// critical reader section...
	mu.RUnlock(t)
}
if mu.TryLock() {
	// critical writer section...
	mu.Unlock()
}

License

Licensed under MIT.

Documentation

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

func ToPlainMap added in v1.3.1

func ToPlainMap(m *Map) map[string]interface{}

ToPlainMap returns a native map with a copy of xsync Map's contents. The copied xsync Map should not be modified while this call is made. If the copied Map is modified, the copying behavior is the same as in the Range method.

func ToPlainMapOf added in v1.3.1

func ToPlainMapOf[K comparable, V any](m *MapOf[K, V]) map[K]V

ToPlainMapOf returns a native map with a copy of the xsync MapOf's contents. The copied xsync MapOf should not be modified while this call is made. If the copied MapOf is modified, the copying behavior is the same as in the Range method.

func WithGrowOnly added in v1.1.2

func WithGrowOnly() func(*MapConfig)

WithGrowOnly configures new Map/MapOf instance to be grow-only. This means that the underlying hash table grows in capacity when new keys are added, but does not shrink when keys are deleted. The only exception to this rule is the Clear method which shrinks the hash table back to the initial capacity.

func WithPresize added in v1.1.2

func WithPresize(sizeHint int) func(*MapConfig)

WithPresize configures new Map/MapOf instance with capacity enough to hold sizeHint entries. The capacity is treated as the minimal capacity meaning that the underlying hash table will never shrink to a smaller capacity. If sizeHint is zero or negative, the value is ignored.
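
A minimal sketch combining both options when constructing a MapOf:

// pre-size for roughly a million entries and never shrink the table
m := xsync.NewMapOf[string, int](
	xsync.WithPresize(1_000_000),
	xsync.WithGrowOnly(),
)
m.Store("foo", 42)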

Types

type Counter

type Counter struct {
	// contains filtered or unexported fields
}

A Counter is a striped int64 counter.

Should be preferred over a single atomically updated int64 counter in high contention scenarios.

A Counter must not be copied after first use.

func NewCounter added in v0.8.2

func NewCounter() *Counter

NewCounter creates a new Counter instance.

func (*Counter) Add

func (c *Counter) Add(delta int64)

Add adds the delta to the counter.

func (*Counter) Dec

func (c *Counter) Dec()

Dec decrements the counter by 1.

func (*Counter) Inc

func (c *Counter) Inc()

Inc increments the counter by 1.

func (*Counter) Reset

func (c *Counter) Reset()

Reset resets the counter to zero. This method should only be used when it is known that there are no concurrent modifications of the counter.

func (*Counter) Value

func (c *Counter) Value() int64

Value returns the current counter value. The returned value may not include all of the latest operations in presence of concurrent modifications of the counter.

type HashMapOf added in v0.8.0

type HashMapOf[K comparable, V any] interface {
	// Load returns the value stored in the map for a key, or nil if no
	// value is present.
	// The ok result indicates whether value was found in the map.
	Load(key K) (value V, ok bool)

	// Store sets the value for a key.
	Store(key K, value V)

	// LoadOrStore returns the existing value for the key if present.
	// Otherwise, it stores and returns the given value.
	// The loaded result is true if the value was loaded, false if stored.
	LoadOrStore(key K, value V) (actual V, loaded bool)

	// LoadAndStore returns the existing value for the key if present,
	// while setting the new value for the key.
	// It stores the new value and returns the existing one, if present.
	// The loaded result is true if the existing value was loaded,
	// false otherwise.
	LoadAndStore(key K, value V) (actual V, loaded bool)

	// LoadOrCompute returns the existing value for the key if present.
	// Otherwise, it computes the value using the provided function and
	// returns the computed value. The loaded result is true if the value
	// was loaded, false if stored.
	LoadOrCompute(key K, valueFn func() V) (actual V, loaded bool)

	// Compute either sets the computed new value for the key or deletes
	// the value for the key. When the delete result of the valueFn function
	// is set to true, the value will be deleted, if it exists. When delete
	// is set to false, the value is updated to the newValue.
	// The ok result indicates whether value was computed and stored, thus, is
	// present in the map. The actual result contains the new value in cases where
	// the value was computed and stored. See the example for a few use cases.
	Compute(
		key K,
		valueFn func(oldValue V, loaded bool) (newValue V, delete bool),
	) (actual V, ok bool)

	// LoadAndDelete deletes the value for a key, returning the previous
	// value if any. The loaded result reports whether the key was
	// present.
	LoadAndDelete(key K) (value V, loaded bool)

	// Delete deletes the value for a key.
	Delete(key K)

	// Range calls f sequentially for each key and value present in the
	// map. If f returns false, range stops the iteration.
	//
	// Range does not necessarily correspond to any consistent snapshot
	// of the Map's contents: no key will be visited more than once, but
	// if the value for any key is stored or deleted concurrently, Range
	// may reflect any mapping for that key from any point during the
	// Range call.
	//
	// It is safe to modify the map while iterating it. However, the
	// concurrent modification rules apply, i.e. the changes may not be
	// reflected in the subsequently iterated entries.
	Range(f func(key K, value V) bool)

	// Clear deletes all keys and values currently stored in the map.
	Clear()

	// Size returns current size of the map.
	Size() int
}
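
Assuming *MapOf satisfies this interface (its documented method set matches the signatures above), HashMapOf can be used to abstract over the concrete map type:

var hm xsync.HashMapOf[string, int] = xsync.NewMapOf[string, int]()
hm.Store("foo", 1)
if v, ok := hm.Load("foo"); ok {
	_ = v // 1
}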

type MPMCQueue

type MPMCQueue struct {
	// contains filtered or unexported fields
}

A MPMCQueue is a bounded multi-producer multi-consumer concurrent queue.

MPMCQueue instances must be created with NewMPMCQueue function. A MPMCQueue must not be copied after first use.

Based on the data structure from the following C++ library: https://github.com/rigtorp/MPMCQueue

func NewMPMCQueue

func NewMPMCQueue(capacity int) *MPMCQueue

NewMPMCQueue creates a new MPMCQueue instance with the given capacity.

func (*MPMCQueue) Dequeue deprecated

func (q *MPMCQueue) Dequeue() interface{}

Dequeue retrieves and removes the item from the head of the queue. Blocks, if the queue is empty.

Deprecated: use TryDequeue in combination with runtime.Gosched().

func (*MPMCQueue) Enqueue deprecated

func (q *MPMCQueue) Enqueue(item interface{})

Enqueue inserts the given item into the queue. Blocks, if the queue is full.

Deprecated: use TryEnqueue in combination with runtime.Gosched().

func (*MPMCQueue) TryDequeue

func (q *MPMCQueue) TryDequeue() (item interface{}, ok bool)

TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.

func (*MPMCQueue) TryEnqueue

func (q *MPMCQueue) TryEnqueue(item interface{}) bool

TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.
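
The deprecation notes above suggest building blocking-style behavior from the non-blocking calls; a sketch (runtime.Gosched is from the standard library):

q := xsync.NewMPMCQueue(64)
// blocking-style enqueue: spin while the queue is full
for !q.TryEnqueue("item") {
	runtime.Gosched()
}
// blocking-style dequeue: spin while the queue is empty
var item interface{}
for {
	var ok bool
	if item, ok = q.TryDequeue(); ok {
		break
	}
	runtime.Gosched()
}
_ = item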

type MPMCQueueOf added in v0.11.1

type MPMCQueueOf[I any] struct {
	// contains filtered or unexported fields
}

A MPMCQueueOf is a bounded multi-producer multi-consumer concurrent queue. It's a generic version of MPMCQueue.

MPMCQueueOf instances must be created with NewMPMCQueueOf function. A MPMCQueueOf must not be copied after first use.

Based on the data structure from the following C++ library: https://github.com/rigtorp/MPMCQueue

func NewMPMCQueueOf added in v0.11.1

func NewMPMCQueueOf[I any](capacity int) *MPMCQueueOf[I]

NewMPMCQueueOf creates a new MPMCQueueOf instance with the given capacity.

func (*MPMCQueueOf[I]) Dequeue deprecated added in v0.11.1

func (q *MPMCQueueOf[I]) Dequeue() I

Dequeue retrieves and removes the item from the head of the queue. Blocks, if the queue is empty.

Deprecated: use TryDequeue in combination with runtime.Gosched().

func (*MPMCQueueOf[I]) Enqueue deprecated added in v0.11.1

func (q *MPMCQueueOf[I]) Enqueue(item I)

Enqueue inserts the given item into the queue. Blocks, if the queue is full.

Deprecated: use TryEnqueue in combination with runtime.Gosched().

func (*MPMCQueueOf[I]) TryDequeue added in v0.11.1

func (q *MPMCQueueOf[I]) TryDequeue() (item I, ok bool)

TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.

func (*MPMCQueueOf[I]) TryEnqueue added in v0.11.1

func (q *MPMCQueueOf[I]) TryEnqueue(item I) bool

TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.

type Map

type Map struct {
	// contains filtered or unexported fields
}

Map is like a Go map[string]interface{} but is safe for concurrent use by multiple goroutines without additional locking or coordination. It follows the interface of sync.Map with a number of valuable extensions like Compute or Size.

A Map must not be copied after first use.

Map uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with at most one cache-line transfer. Also, Get operations involve no writes to memory, as well as no mutexes or any other sort of locks. Due to this design, in all considered scenarios Map outperforms sync.Map.

One important difference with sync.Map is that only string keys are supported. That's because the Go standard library does not expose the built-in hash functions for interface{} values.

func NewMap

func NewMap(options ...func(*MapConfig)) *Map

NewMap creates a new Map instance configured with the given options.

func NewMapPresized deprecated added in v0.9.0

func NewMapPresized(sizeHint int) *Map

NewMapPresized creates a new Map instance with capacity enough to hold sizeHint entries. The capacity is treated as the minimal capacity meaning that the underlying hash table will never shrink to a smaller capacity. If sizeHint is zero or negative, the value is ignored.

Deprecated: use NewMap in combination with WithPresize.

func (*Map) Clear added in v0.8.2

func (m *Map) Clear()

Clear deletes all keys and values currently stored in the map.

func (*Map) Compute added in v0.8.2

func (m *Map) Compute(
	key string,
	valueFn func(oldValue interface{}, loaded bool) (newValue interface{}, delete bool),
) (actual interface{}, ok bool)

Compute either sets the computed new value for the key or deletes the value for the key. When the delete result of the valueFn function is set to true, the value will be deleted, if it exists. When delete is set to false, the value is updated to the newValue. The ok result indicates whether value was computed and stored, thus, is present in the map. The actual result contains the new value in cases where the value was computed and stored. See the example for a few use cases.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.

func (*Map) Delete

func (m *Map) Delete(key string)

Delete deletes the value for a key.

func (*Map) Load

func (m *Map) Load(key string) (value interface{}, ok bool)

Load returns the value stored in the map for a key, or nil if no value is present. The ok result indicates whether value was found in the map.

func (*Map) LoadAndDelete

func (m *Map) LoadAndDelete(key string) (value interface{}, loaded bool)

LoadAndDelete deletes the value for a key, returning the previous value if any. The loaded result reports whether the key was present.

func (*Map) LoadAndStore added in v0.7.5

func (m *Map) LoadAndStore(key string, value interface{}) (actual interface{}, loaded bool)

LoadAndStore returns the existing value for the key if present, while setting the new value for the key. It stores the new value and returns the existing one, if present. The loaded result is true if the existing value was loaded, false otherwise.

func (*Map) LoadOrCompute added in v0.7.18

func (m *Map) LoadOrCompute(key string, valueFn func() interface{}) (actual interface{}, loaded bool)

LoadOrCompute returns the existing value for the key if present. Otherwise, it computes the value using the provided function, and then stores and returns the computed value. The loaded result is true if the value was loaded, false if computed.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.
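
A short sketch of lazy initialization with LoadOrCompute (loadConfig is a hypothetical, possibly expensive initializer, not part of the package):

m := xsync.NewMap()
v, loaded := m.LoadOrCompute("config", func() interface{} {
	// runs at most once for the key while the bucket is locked;
	// loadConfig is hypothetical and stands in for any slow setup
	return loadConfig()
})
_, _ = v, loaded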

func (*Map) LoadOrStore

func (m *Map) LoadOrStore(key string, value interface{}) (actual interface{}, loaded bool)

LoadOrStore returns the existing value for the key if present. Otherwise, it stores and returns the given value. The loaded result is true if the value was loaded, false if stored.

func (*Map) LoadOrTryCompute added in v1.3.1

func (m *Map) LoadOrTryCompute(
	key string,
	valueFn func() (newValue interface{}, cancel bool),
) (value interface{}, loaded bool)

LoadOrTryCompute returns the existing value for the key if present. Otherwise, it tries to compute the value using the provided function and, if successful, stores and returns the computed value. The loaded result is true if the value was loaded, or false if computed (whether successfully or not). If the compute attempt was cancelled (due to an error, for example), a nil value will be returned.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.

func (*Map) Range

func (m *Map) Range(f func(key string, value interface{}) bool)

Range calls f sequentially for each key and value present in the map. If f returns false, range stops the iteration.

Range does not necessarily correspond to any consistent snapshot of the Map's contents: no key will be visited more than once, but if the value for any key is stored or deleted concurrently, Range may reflect any mapping for that key from any point during the Range call.

It is safe to modify the map while iterating it, including entry creation, modification and deletion. However, the concurrent modification rules apply, i.e. the changes may not be reflected in the subsequently iterated entries.

func (*Map) Size added in v0.7.7

func (m *Map) Size() int

Size returns current size of the map.

func (*Map) Stats added in v1.2.1

func (m *Map) Stats() MapStats

Stats returns statistics for the Map. Just like other map methods, this one is thread-safe. Yet it's an O(N) operation, so it should be used only for diagnostics or debugging purposes.

func (*Map) Store

func (m *Map) Store(key string, value interface{})

Store sets the value for a key.

type MapConfig added in v1.1.2

type MapConfig struct {
	// contains filtered or unexported fields
}

MapConfig defines configurable Map/MapOf options.

type MapOf added in v0.4.3

type MapOf[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MapOf is like a Go map[K]V but is safe for concurrent use by multiple goroutines without additional locking or coordination. It follows the interface of sync.Map with a number of valuable extensions like Compute or Size.

A MapOf must not be copied after first use.

MapOf uses a modified version of Cache-Line Hash Table (CLHT) data structure: https://github.com/LPD-EPFL/CLHT

CLHT is built around the idea of organizing the hash table in cache-line-sized buckets, so that on all modern CPUs update operations complete with at most one cache-line transfer. Also, Get operations involve no writes to memory, as well as no mutexes or any other sort of locks. Due to this design, in all considered scenarios MapOf outperforms sync.Map.

MapOf also borrows ideas from Java's j.u.c.ConcurrentHashMap (immutable K/V pair structs instead of atomic snapshots) and C++'s absl::flat_hash_map (meta memory and SWAR-based lookups).

func NewMapOf added in v0.4.3

func NewMapOf[K comparable, V any](options ...func(*MapConfig)) *MapOf[K, V]

NewMapOf creates a new MapOf instance configured with the given options.

func NewMapOfPresized deprecated added in v0.9.0

func NewMapOfPresized[K comparable, V any](sizeHint int) *MapOf[K, V]

NewMapOfPresized creates a new MapOf instance with capacity enough to hold sizeHint entries. The capacity is treated as the minimal capacity meaning that the underlying hash table will never shrink to a smaller capacity. If sizeHint is zero or negative, the value is ignored.

Deprecated: use NewMapOf in combination with WithPresize.

func NewMapOfWithHasher added in v1.2.1

func NewMapOfWithHasher[K comparable, V any](
	hasher func(K, uint64) uint64,
	options ...func(*MapConfig),
) *MapOf[K, V]

NewMapOfWithHasher creates a new MapOf instance configured with the given hasher and options. The hash function is used instead of the built-in hash function configured when a map is created with the NewMapOf function.

func (*MapOf[K, V]) Clear added in v0.8.2

func (m *MapOf[K, V]) Clear()

Clear deletes all keys and values currently stored in the map.

func (*MapOf[K, V]) Compute added in v0.8.2

func (m *MapOf[K, V]) Compute(
	key K,
	valueFn func(oldValue V, loaded bool) (newValue V, delete bool),
) (actual V, ok bool)

Compute either sets the computed new value for the key or deletes the value for the key. When the delete result of the valueFn function is set to true, the value will be deleted, if it exists. When delete is set to false, the value is updated to the newValue. The ok result indicates whether value was computed and stored, thus, is present in the map. The actual result contains the new value in cases where the value was computed and stored. See the example for a few use cases.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.

Example
package main

import (
	"errors"
	"fmt"

	"github.com/fufuok/utils/xsync"
)

func main() {
	counts := xsync.NewMapOf[int, int]()

	// Store a new value.
	v, ok := counts.Compute(42, func(oldValue int, loaded bool) (newValue int, delete bool) {
		// loaded is false here.
		newValue = 42
		delete = false
		return
	})
	// v: 42, ok: true
	fmt.Printf("v: %v, ok: %v\n", v, ok)

	// Update an existing value.
	v, ok = counts.Compute(42, func(oldValue int, loaded bool) (newValue int, delete bool) {
		// loaded is true here.
		newValue = oldValue + 42
		delete = false
		return
	})
	// v: 84, ok: true
	fmt.Printf("v: %v, ok: %v\n", v, ok)

	// Set a new value or keep the old value conditionally.
	var oldVal int
	minVal := 63
	v, ok = counts.Compute(42, func(oldValue int, loaded bool) (newValue int, delete bool) {
		oldVal = oldValue
		if !loaded || oldValue < minVal {
			newValue = minVal
			delete = false
			return
		}
		newValue = oldValue
		delete = false
		return
	})
	// v: 84, ok: true, oldVal: 84
	fmt.Printf("v: %v, ok: %v, oldVal: %v\n", v, ok, oldVal)

	// Delete an existing value.
	v, ok = counts.Compute(42, func(oldValue int, loaded bool) (newValue int, delete bool) {
		// loaded is true here.
		delete = true
		return
	})
	// v: 84, ok: false
	fmt.Printf("v: %v, ok: %v\n", v, ok)

	// Propagate an error from the compute function to the outer scope.
	var err error
	v, ok = counts.Compute(42, func(oldValue int, loaded bool) (newValue int, delete bool) {
		if oldValue == 42 {
			err = errors.New("something went wrong")
			return 0, true // no need to create a key/value pair
		}
		newValue = 0
		delete = false
		return
	})
	fmt.Printf("err: %v\n", err)
}

func (*MapOf[K, V]) Delete added in v0.4.3

func (m *MapOf[K, V]) Delete(key K)

Delete deletes the value for a key.

func (*MapOf[K, V]) Load added in v0.4.3

func (m *MapOf[K, V]) Load(key K) (value V, ok bool)

Load returns the value stored in the map for a key, or zero value of type V if no value is present. The ok result indicates whether value was found in the map.

func (*MapOf[K, V]) LoadAndDelete added in v0.4.3

func (m *MapOf[K, V]) LoadAndDelete(key K) (value V, loaded bool)

LoadAndDelete deletes the value for a key, returning the previous value if any. The loaded result reports whether the key was present.

func (*MapOf[K, V]) LoadAndStore added in v0.7.5

func (m *MapOf[K, V]) LoadAndStore(key K, value V) (actual V, loaded bool)

LoadAndStore returns the existing value for the key if present, while setting the new value for the key. It stores the new value and returns the existing one, if present. The loaded result is true if the existing value was loaded, false otherwise.

func (*MapOf[K, V]) LoadOrCompute added in v0.7.18

func (m *MapOf[K, V]) LoadOrCompute(key K, valueFn func() V) (actual V, loaded bool)

LoadOrCompute returns the existing value for the key if present. Otherwise, it computes the value using the provided function, and then stores and returns the computed value. The loaded result is true if the value was loaded, false if computed.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.

func (*MapOf[K, V]) LoadOrStore added in v0.4.3

func (m *MapOf[K, V]) LoadOrStore(key K, value V) (actual V, loaded bool)

LoadOrStore returns the existing value for the key if present. Otherwise, it stores and returns the given value. The loaded result is true if the value was loaded, false if stored.

func (*MapOf[K, V]) LoadOrTryCompute added in v1.3.1

func (m *MapOf[K, V]) LoadOrTryCompute(
	key K,
	valueFn func() (newValue V, cancel bool),
) (value V, loaded bool)

LoadOrTryCompute returns the existing value for the key if present. Otherwise, it tries to compute the value using the provided function and, if successful, stores and returns the computed value. The loaded result is true if the value was loaded, or false if computed (whether successfully or not). If the compute attempt was cancelled (due to an error, for example), a zero value of type V will be returned.

This call locks a hash table bucket while the compute function is executed. It means that modifications on other entries in the bucket will be blocked until the valueFn executes. Consider this when the function includes long-running operations.
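
A sketch of cancelling the compute attempt on error (fetchValue is a hypothetical fallible loader, not part of the package):

m := xsync.NewMapOf[string, []byte]()
var err error
v, loaded := m.LoadOrTryCompute("key", func() (newValue []byte, cancel bool) {
	// fetchValue is hypothetical; returning cancel=true stores nothing
	// and leaves err set for the caller to inspect
	newValue, err = fetchValue("key")
	return newValue, err != nil
})
_, _, _ = v, loaded, err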

func (*MapOf[K, V]) Range added in v0.4.3

func (m *MapOf[K, V]) Range(f func(key K, value V) bool)

Range calls f sequentially for each key and value present in the map. If f returns false, range stops the iteration.

Range does not necessarily correspond to any consistent snapshot of the Map's contents: no key will be visited more than once, but if the value for any key is stored or deleted concurrently, Range may reflect any mapping for that key from any point during the Range call.

It is safe to modify the map while iterating it, including entry creation, modification and deletion. However, the concurrent modification rules apply, i.e. the changes may not be reflected in the subsequently iterated entries.

func (*MapOf[K, V]) Size added in v0.7.7

func (m *MapOf[K, V]) Size() int

Size returns current size of the map.

func (*MapOf[K, V]) Stats added in v1.2.1

func (m *MapOf[K, V]) Stats() MapStats

Stats returns statistics for the MapOf. Just like other map methods, this one is thread-safe. Yet it's an O(N) operation, so it should be used only for diagnostics or debugging purposes.

func (*MapOf[K, V]) Store added in v0.4.3

func (m *MapOf[K, V]) Store(key K, value V)

Store sets the value for a key.

type MapStats added in v1.2.1

type MapStats struct {
	// RootBuckets is the number of root buckets in the hash table.
	// Each bucket holds a few entries.
	RootBuckets int
	// TotalBuckets is the total number of buckets in the hash table,
	// including root and their chained buckets. Each bucket holds
	// a few entries.
	TotalBuckets int
	// EmptyBuckets is the number of buckets that hold no entries.
	EmptyBuckets int
	// Capacity is the Map/MapOf capacity, i.e. the total number of
	// entries that all buckets can physically hold. This number
	// does not consider the load factor.
	Capacity int
	// Size is the exact number of entries stored in the map.
	Size int
	// Counter is the number of entries stored in the map according
	// to the internal atomic counter. In case of concurrent map
	// modifications this number may be different from Size.
	Counter int
	// CounterLen is the number of internal atomic counter stripes.
	// This number may grow with the map capacity to improve
	// multithreaded scalability.
	CounterLen int
	// MinEntries is the minimum number of entries per a chain of
	// buckets, i.e. a root bucket and its chained buckets.
	MinEntries int
	// MaxEntries is the maximum number of entries per a chain of
	// buckets, i.e. a root bucket and its chained buckets.
	MaxEntries int
	// TotalGrowths is the number of times the hash table grew.
	TotalGrowths int64
	// TotalShrinks is the number of times the hash table shrank.
	TotalShrinks int64
}

MapStats is Map/MapOf statistics.

Warning: map statistics are intended to be used for diagnostic purposes, not for production code. This means that breaking changes may be introduced into this struct even between minor releases.

func (*MapStats) ToString added in v1.2.1

func (s *MapStats) ToString() string

ToString returns string representation of map stats.
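
A minimal usage sketch for diagnostics:

m := xsync.NewMapOf[string, int]()
m.Store("foo", 42)
stats := m.Stats()
// diagnostics only; the MapStats layout may change between releases
fmt.Println(stats.ToString())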

type RBMutex

type RBMutex struct {
	// contains filtered or unexported fields
}

A RBMutex is a reader-biased reader/writer mutual exclusion lock. The lock can be held by many readers or a single writer. The zero value for a RBMutex is an unlocked mutex.

A RBMutex must not be copied after first use.

RBMutex is based on a modified version of BRAVO (Biased Locking for Reader-Writer Locks) algorithm: https://arxiv.org/pdf/1810.01553.pdf

RBMutex is a specialized mutex for scenarios, such as caches, where the vast majority of locks are acquired by readers and write lock acquire attempts are infrequent. In such scenarios, RBMutex performs better than sync.RWMutex on large multicore machines.

RBMutex extends sync.RWMutex internally and uses it as the "reader bias disabled" fallback, so the same semantics apply. The only noticeable difference is the reader token returned by RLock, which must later be passed to RUnlock.

func NewRBMutex added in v0.8.3

func NewRBMutex() *RBMutex

NewRBMutex creates a new RBMutex instance.

func (*RBMutex) Lock

func (mu *RBMutex) Lock()

Lock locks m for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available.

func (*RBMutex) RLock

func (mu *RBMutex) RLock() *RToken

RLock locks m for reading and returns a reader token. The token must be used in the later RUnlock call.

Should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock.

func (*RBMutex) RUnlock

func (mu *RBMutex) RUnlock(t *RToken)

RUnlock undoes a single RLock call. A reader token obtained from the RLock call must be provided. RUnlock does not affect other simultaneous readers. A panic is raised if m is not locked for reading on entry to RUnlock.

func (*RBMutex) TryLock added in v1.2.2

func (mu *RBMutex) TryLock() bool

TryLock tries to lock m for writing without blocking.

func (*RBMutex) TryRLock added in v1.2.2

func (mu *RBMutex) TryRLock() (bool, *RToken)

TryRLock tries to lock m for reading without blocking. When TryRLock succeeds, it returns true and a reader token. In case of a failure, false is returned.

func (*RBMutex) Unlock

func (mu *RBMutex) Unlock()

Unlock unlocks m for writing. A panic is raised if m is not locked for writing on entry to Unlock.

As with RWMutex, a locked RBMutex is not associated with a particular goroutine. One goroutine may RLock (Lock) a RBMutex and then arrange for another goroutine to RUnlock (Unlock) it.

type RToken

type RToken struct {
	// contains filtered or unexported fields
}

RToken is a reader lock token.

type SPSCQueue added in v1.3.1

type SPSCQueue struct {
	// contains filtered or unexported fields
}

A SPSCQueue is a bounded single-producer single-consumer concurrent queue. This means that at most one goroutine may be publishing items to the queue while at most one goroutine may be consuming them.

SPSCQueue instances must be created with NewSPSCQueue function. A SPSCQueue must not be copied after first use.

Based on the data structure from the following article: https://rigtorp.se/ringbuffer/

func NewSPSCQueue added in v1.3.1

func NewSPSCQueue(capacity int) *SPSCQueue

NewSPSCQueue creates a new SPSCQueue instance with the given capacity.

func (*SPSCQueue) TryDequeue added in v1.3.1

func (q *SPSCQueue) TryDequeue() (item interface{}, ok bool)

TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.

func (*SPSCQueue) TryEnqueue added in v1.3.1

func (q *SPSCQueue) TryEnqueue(item interface{}) bool

TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.

type SPSCQueueOf added in v1.3.1

type SPSCQueueOf[I any] struct {
	// contains filtered or unexported fields
}

A SPSCQueueOf is a bounded single-producer single-consumer concurrent queue. This means that at most one goroutine may be publishing items to the queue while at most one goroutine may be consuming them.

SPSCQueueOf instances must be created with NewSPSCQueueOf function. A SPSCQueueOf must not be copied after first use.

Based on the data structure from the following article: https://rigtorp.se/ringbuffer/

func NewSPSCQueueOf added in v1.3.1

func NewSPSCQueueOf[I any](capacity int) *SPSCQueueOf[I]

NewSPSCQueueOf creates a new SPSCQueueOf instance with the given capacity.

func (*SPSCQueueOf[I]) TryDequeue added in v1.3.1

func (q *SPSCQueueOf[I]) TryDequeue() (item I, ok bool)

TryDequeue retrieves and removes the item from the head of the queue. Does not block and returns immediately. The ok result indicates that the queue isn't empty and an item was retrieved.

func (*SPSCQueueOf[I]) TryEnqueue added in v1.3.1

func (q *SPSCQueueOf[I]) TryEnqueue(item I) bool

TryEnqueue inserts the given item into the queue. Does not block and returns immediately. The result indicates that the queue isn't full and the item was inserted.
