byteforge
Byteforge is a modular collection of handcrafted Go data structures, concurrency utilities, and functional helpers. Built for speed, safety, and scalability.
Features
Collection
- Map
- Filter
- Reduce
Data Types
- Ring Buffer
- FIFO Queue
- Set
- Tuple
- Stack
- Deque
- Priority Queue
Utility Functions
Slices
- Shallow Equals (slices.ShallowEquals)
- Deep Equals (slices.DeepEquals)
- Inclusive Range (slices.IRange)
- Exclusive Range (slices.ERange)
- Map (slices.Map)
- Filter (slices.Filter)
- Reduce
- Parallel Map (slices.ParallelMap)
- Parallel Filter (slices.ParallelFilter)
- Parallel Reduce
- Partition
- Chunk
- Unique
- Flatten
Maps
- Map
- Filter
- Parallel Map
- Parallel Filter
(This isn't an exhaustive list, just what has come to mind so far. More will be added as they're needed or contributed.)
Testing
All components come with comprehensive unit tests. Thread-safe variants include specific concurrency tests to ensure correct behavior under parallel access patterns.
To run the tests and get coverage:
make test
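If you'd rather skip make and invoke the Go toolchain directly (assuming the Makefile's test target is just a thin wrapper around it), the following does roughly the same job and enables the race detector for the concurrency tests:
go test -race -cover ./...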
Getting Started
Install using go get:
go get -u "github.com/PsionicAlch/byteforge@latest"
Collection
Collection provides a fluent, chainable API for performing functional-style operations like map, filter, and reduce on slices.
Collection
Collection is loosely based on Laravel's Collections package. It's not as feature-rich, so feel free to open a feature request or send a pull request if you want to get your hands dirty.
Honestly, I would not suggest using Collection in production yet. Because Go doesn't support generic methods, I had to use a lot of any and reflect.
The code looks pretty when you chain a bunch of method calls together, and you can paint a really nice picture of how the data mutates over time, but I'd recommend sticking with byteforge/functions/slices instead.
You won't get the pretty chainability or the smooth data flow, and you'll need intermediate variables, but you'll get much better performance, full type safety, and full IntelliSense support (see the slices-based sketch after the example below).
import (
"fmt"
"strconv"
"github.com/PsionicAlch/byteforge/collection"
"github.com/PsionicAlch/byteforge/functions/slices"
)
func main() {
s := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
// Step 1: Create a new collection.
// FromSlice takes your input slice and wraps it in a Collection.
// Internally, Collection stores data as 'any' because Go doesn't support
// generic methods yet, so this sacrifices some type safety for flexibility.
c := collection.FromSlice(s)
// Step 2: Map over all elements.
	// Map takes a function that accepts one element (same type as the slice)
	// and returns one transformed element, which can be a different type.
squared := c.Map(func(e int) int {
return e * e
})
// You can also change the type, e.g., convert numbers to strings:
asStrings := c.Map(func(e int) string {
return strconv.Itoa(e)
})
// Step 3: Filter elements.
// Filter takes a function that receives one element and returns a bool.
	// If the function returns true, the element stays; if false, it's excluded.
	evens := c.Filter(func(e int) bool {
		return e%2 == 0
	})

	// squared, asStrings and evens aren't used again in this example,
	// so discard them explicitly to keep the compiler happy.
	_, _, _ = squared, asStrings, evens
// Step 4: ForEach side-effects.
	// ForEach lets you perform an action on each element without changing the
	// data. The function must accept one element and return nothing.
c.ForEach(func(e int) {
fmt.Printf("Value: %d\n", e)
})
// Step 5: Reduce to a single value.
// Reduce combines the elements into a single accumulated value.
sum, err := c.Reduce(func(acc, e int) int {
return acc + e
}, 0)
	// If there were any issues with the functions you passed in the chain, this
// error will tell you about it.
if err == nil {
fmt.Println("Sum:", sum)
}
// Step 6: Extract the final slice.
// ToSlice returns the processed slice as 'any' plus any accumulated error.
result, err := c.ToSlice()
	// If there were any issues with the functions you passed in the chain, this
// error will tell you about it.
if err == nil {
fmt.Printf("Final slice: %#v\n", result)
}
// Optional: Convert to a typed slice.
// Use the standalone generic function to cast safely.
typed, err := collection.ToTypedSlice[int, []int](c)
	// If there were any issues with the functions you passed in the chain, this
// error will tell you about it.
if err == nil {
fmt.Printf("Typed slice: %#v\n", typed)
}
	// Everything can also be chained together in a single pipeline:
	collection.
		FromSlice(slices.IRange(1, 100)).
		Filter(func(i int) bool {
			return i%2 == 0
		}).
		Map(func(i int) string {
			return strconv.Itoa(i)
		}).
		ForEach(func(s string) {
			fmt.Printf("Value: %s\n", s)
		})
}
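For comparison, here is roughly the same pipeline written against byteforge/functions/slices with intermediate variables, as recommended above. The exact signatures of these helpers aren't shown in this README, so treat this as a sketch that assumes the usual generic shapes (Filter takes a predicate and returns a slice of the same element type; Map takes a transform and returns a slice of the new element type):
import (
	"fmt"
	"strconv"

	"github.com/PsionicAlch/byteforge/functions/slices"
)

func main() {
	// Assumed signatures: slices.Filter(s, func(T) bool) []T and
	// slices.Map(s, func(T) U) []U. Double-check the package docs
	// before copying this verbatim.
	nums := slices.IRange(1, 100)

	evens := slices.Filter(nums, func(i int) bool {
		return i%2 == 0
	})

	labels := slices.Map(evens, func(i int) string {
		return strconv.Itoa(i)
	})

	for _, label := range labels {
		fmt.Printf("Value: %s\n", label)
	}
}
The chainability is gone, but every step is fully typed, so the compiler and your editor can check it.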
Data Structures
All data structures come with a basic version and a thread-safe version. The thread-safe version is usually prefixed with "Sync", but the underlying API is the same, and you can freely convert between the basic and thread-safe versions.
Ring Buffer
Ring Buffer is a generic, dynamically resizable circular buffer. It supports enqueue and dequeue operations in constant amortized time, and grows or shrinks based on usage to optimize memory consumption.
import "github.com/PsionicAlch/byteforge/datastructs/buffers/ring"
func main() {
// To create a new ring buffer you can call the New
// function with the type you want to store and an optional
	// initial capacity for performance's sake. If no capacity is
// provided it will default to 8.
buf := ring.New[int]()
// Or if you already have a slice of elements you can
// construct a new ring buffer using the slice.
buf = ring.FromSlice([]int{0, 1, 2, 3, 4, 5})
// You can get the number of items in the buffer with the
// Len method.
fmt.Printf("Num of elements in buf: %d\n", buf.Len())
// You can get the capacity of the buffer using the Cap
// method.
fmt.Printf("Capacity of the buffer: %d\n", buf.Cap())
// You can check if the buffer is empty using the IsEmpty
// method.
fmt.Printf("Buffer is empty: %t\n", buf.IsEmpty())
// You can add values to the back of the buffer using the
// Enqueue method. It takes a variable amount of elements.
// The underlying buffer will grow to fit the data so you
// don't need to manually check the size and capacity.
buf.Enqueue(6, 7, 8, 9, 10)
// You can remove values from the front of the buffer using
// the Dequeue method. It returns a value and boolean to
// indicate whether the value returned is actually valid.
// If the boolean returned is false then the value will just
// be a 0 value of whatever the underlying type is. A value
// will be invalid if the buffer is empty.
	element, found := buf.Dequeue()
	fmt.Printf("Dequeued %d (valid: %t)\n", element, found)
	// If you want to see what the value of the next element in
	// the buffer is without actually removing it from the buffer
	// you can use the Peek method. Peek will return the value as well
	// as a boolean indicating whether or not the value is valid.
	// A value will be invalid if the buffer is empty.
	element, found = buf.Peek()
	fmt.Printf("Next element: %d (valid: %t)\n", element, found)
// If you want to extract the values in the buffer to a
// slice it's as easy as calling the ToSlice method. It will
// return a new slice that is completely disconnected from
// the underlying buffer so you don't have to worry about
// mutating the buffer by interacting with the new slice.
	s := buf.ToSlice()
	fmt.Printf("Buffer as a slice: %v\n", s)
// You can get a fresh copy of the buffer by calling the
// Clone method. This will create a deep clone of the underlying
// buffer. So you don't need to worry about mutating the
// original buffer by interacting with the new buffer.
	clone := buf.Clone()
	fmt.Printf("Clone has %d elements\n", clone.Len())
}
The basic version of Ring Buffer isn't thread-safe, so don't share it between goroutines without a mutex. If you're not in the mood to manage your own mutexes, I've got you covered: there is a thread-safe version called Sync Ring Buffer. It's not as optimised as it could be, because it simply wraps the basic version in an RWMutex instead of using atomic operations for things like the size and capacity, but everything works correctly and you shouldn't really notice the difference in performance. The API for Sync Ring Buffer is the same as the basic Ring Buffer's.
import "github.com/PsionicAlch/byteforge/datastructs/buffers/ring"
func main() {
// To create a new sync ring buffer you can call the NewSync
// function with the type you want to store and an optional
	// initial capacity for performance's sake. If no capacity is
// provided it will default to 8.
buf := ring.NewSync[int]()
// Or if you already have a slice of elements you can
// construct a new sync ring buffer using the slice.
buf = ring.SyncFromSlice([]int{0, 1, 2, 3, 4, 5})
// You can get the number of items in the buffer with the
// Len method.
fmt.Printf("Num of elements in buf: %d\n", buf.Len())
// You can get the capacity of the buffer using the Cap
// method.
fmt.Printf("Capacity of the buffer: %d\n", buf.Cap())
// You can check if the buffer is empty using the IsEmpty
// method.
fmt.Printf("Buffer is empty: %t\n", buf.IsEmpty())
// You can add values to the back of the buffer using the
// Enqueue method. It takes a variable amount of elements.
// The underlying buffer will grow to fit the data so you
// don't need to manually check the size and capacity.
buf.Enqueue(6, 7, 8, 9, 10)
// You can remove values from the front of the buffer using
// the Dequeue method. It returns a value and boolean to
// indicate whether the value returned is actually valid.
// If the boolean returned is false then the value will just
// be a 0 value of whatever the underlying type is. A value
// will be invalid if the buffer is empty.
	element, found := buf.Dequeue()
	fmt.Printf("Dequeued %d (valid: %t)\n", element, found)
	// If you want to see what the value of the next element in
	// the buffer is without actually removing it from the buffer
	// you can use the Peek method. Peek will return the value as well
	// as a boolean indicating whether or not the value is valid.
	// A value will be invalid if the buffer is empty.
	element, found = buf.Peek()
	fmt.Printf("Next element: %d (valid: %t)\n", element, found)
// If you want to extract the values in the buffer to a
// slice it's as easy as calling the ToSlice method. It will
// return a new slice that is completely disconnected from
// the underlying buffer so you don't have to worry about
// mutating the buffer by interacting with the new slice.
	s := buf.ToSlice()
	fmt.Printf("Buffer as a slice: %v\n", s)
// You can get a fresh copy of the buffer by calling the
// Clone method. This will create a deep clone of the underlying
// buffer. So you don't need to worry about mutating the
// original buffer by interacting with the new buffer.
	clone := buf.Clone()
	fmt.Printf("Clone has %d elements\n", clone.Len())
}
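The example above is single-threaded, so it doesn't really show what the Sync variant buys you. The sketch below has several goroutines enqueueing into the same buffer at once; it only relies on NewSync, Enqueue and Len from the API shown above, and the rest is plain sync.WaitGroup plumbing:
import (
	"fmt"
	"sync"

	"github.com/PsionicAlch/byteforge/datastructs/buffers/ring"
)

func main() {
	buf := ring.NewSync[int]()

	// Ten goroutines enqueue into the same buffer at the same time.
	// The mutex inside Sync Ring Buffer keeps the internal state consistent.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			buf.Enqueue(n)
		}(i)
	}
	wg.Wait()

	fmt.Printf("Num of elements in buf after concurrent enqueues: %d\n", buf.Len())
}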
You can also easily convert between the basic and sync versions of Ring Buffer. Keep in mind that each conversion produces a deep clone, so it isn't the fastest operation in the world, but at least it's safe.
import "slices"
import "github.com/PsionicAlch/byteforge/datastructs/buffers/ring"
func main() {
orig := ring.FromSlice([]int{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55})
// You can convert a basic ring buffer to a sync ring buffer
// by calling SyncFromRingBuffer.
syncBuf := ring.SyncFromRingBuffer(orig)
// You can convert a sync ring buffer to a basic ring buffer
// by calling FromSyncRingBuffer.
basicBuf := ring.FromSyncRingBuffer(syncBuf)
// The conversions don't impact the order of the underlying buffer.
match := slices.Equal(syncBuf.ToSlice(), basicBuf.ToSlice())
fmt.Printf("Buffers match: %t\n", match)
}
FIFO Queue (First In, First Out Queue)
Queue is a generic, dynamically resizable FIFO queue. It supports enqueue and dequeue operations in constant amortized time, and grows or shrinks based on usage to optimize memory consumption.
import "github.com/PsionicAlch/byteforge/datastructs/queue"
func main() {
// To create a new queue you can call the New function
// with the type you want to store and an optional initial
	// capacity for performance's sake. If no capacity is provided
// it will default to 8.
q := queue.New[int]()
// Or if you already have a slice of elements you can
// construct a new queue using the slice.
q = queue.FromSlice([]int{0, 1, 2, 3, 4, 5})
// You can get the number of items in the queue with the
// Len method.
fmt.Printf("Num of elements in buf: %d\n", q.Len())
// You can get the capacity of the queue using the Cap
// method.
fmt.Printf("Capacity of the buffer: %d\n", q.Cap())
// You can check if the queue is empty using the IsEmpty
// method.
fmt.Printf("Buffer is empty: %t\n", q.IsEmpty())
// You can add values to the back of the queue using the
// Enqueue method. It takes a variable amount of elements.
// The underlying buffer will grow to fit the data so you
// don't need to manually check the size and capacity.
q.Enqueue(6, 7, 8, 9, 10)
// You can remove values from the front of the queue using
// the Dequeue method. It returns a value and boolean to
// indicate whether the value returned is actually valid.
// If the boolean returned is false then the value will just
// be a 0 value of whatever the underlying type is. A value
// will be invalid if the buffer is empty.
	element, found := q.Dequeue()
	fmt.Printf("Dequeued %d (valid: %t)\n", element, found)
	// If you want to see what the value of the next element in
	// the queue is without actually removing it from the queue
	// you can use the Peek method. Peek will return the value as
	// well as a boolean indicating whether or not the value is
	// valid. A value will be invalid if the queue is empty.
	element, found = q.Peek()
	fmt.Printf("Next element: %d (valid: %t)\n", element, found)
// If you want to extract the values in the queue to a
// slice it's as easy as calling the ToSlice method. It will
// return a new slice that is completely disconnected from
// the underlying buffer so you don't have to worry about
// mutating the queue by interacting with the new slice.
	s := q.ToSlice()
	fmt.Printf("Queue as a slice: %v\n", s)
// You can get a fresh copy of the queue by calling the
// Clone method. Clone will create a deep clone of the
// underlying buffer. So you don't need to worry about
// mutating the original queue by interacting with the
// new queue.
clone := q.Clone()
// You can compare two queues to see if they are equal to
// one another. Two queues are equal if their underlying
// slices are equal according to slices.Equal.
equal := q.Equals(clone)
fmt.Printf("Queue equals clone: %t\n", equal)
}
The basic version of Queue isn't thread-safe, so don't share it between goroutines without a mutex. If you're not in the mood to manage your own mutexes, I've got you covered: there is a thread-safe version called Sync Queue. It's not as optimised as it could be, because it simply wraps the basic version in an RWMutex instead of using atomic operations for things like the size and capacity, but everything works correctly and you shouldn't really notice the difference in performance. The API for Sync Queue is the same as the basic Queue's.
import "github.com/PsionicAlch/byteforge/datastructs/queue"
func main() {
// To create a new sync queue you can call the NewSync
// function with the type you want to store and an optional
	// initial capacity for performance's sake. If no capacity is
// provided it will default to 8.
q := queue.NewSync[int]()
// Or if you already have a slice of elements you can
// construct a new sync queue using the slice.
q = queue.SyncFromSlice([]int{0, 1, 2, 3, 4, 5})
// You can get the number of items in the queue with the
// Len method.
fmt.Printf("Num of elements in buf: %d\n", q.Len())
// You can get the capacity of the queue using the Cap
// method.
fmt.Printf("Capacity of the buffer: %d\n", q.Cap())
// You can check if the queue is empty using the IsEmpty
// method.
fmt.Printf("Buffer is empty: %t\n", q.IsEmpty())
// You can add values to the back of the queue using the
// Enqueue method. It takes a variable amount of elements.
// The underlying buffer will grow to fit the data so you
// don't need to manually check the size and capacity.
q.Enqueue(6, 7, 8, 9, 10)
// You can remove values from the front of the queue using
// the Dequeue method. It returns a value and boolean to
// indicate whether the value returned is actually valid.
// If the boolean returned is false then the value will just
// be a 0 value of whatever the underlying type is. A value
// will be invalid if the buffer is empty.
	element, found := q.Dequeue()
	fmt.Printf("Dequeued %d (valid: %t)\n", element, found)
	// If you want to see what the value of the next element in
	// the queue is without actually removing it from the queue
	// you can use the Peek method. Peek will return the value as well
	// as a boolean indicating whether or not the value is valid.
	// A value will be invalid if the queue is empty.
	element, found = q.Peek()
	fmt.Printf("Next element: %d (valid: %t)\n", element, found)
// If you want to extract the values in the queue to a
// slice it's as easy as calling the ToSlice method. It will
// return a new slice that is completely disconnected from
// the underlying buffer so you don't have to worry about
// mutating the queue by interacting with the new slice.
	s := q.ToSlice()
	fmt.Printf("Queue as a slice: %v\n", s)
// You can get a fresh copy of the queue by calling the
// Clone method. This will create a deep clone of the underlying
// buffer. So you don't need to worry about mutating the
// original queue by interacting with the new queue.
clone := q.Clone()
// You can compare two queues to see if they are equal to
// one another. Two queues are equal if their underlying
// slices are equal according to slices.Equal.
equal := q.Equals(clone)
fmt.Printf("Queue equals clone: %t\n", equal)
}
You can also easily convert between the basic and sync versions of Queue. Keep in mind that each conversion produces a deep clone, so it isn't the fastest operation in the world, but at least it's safe.
import "slices"
import "github.com/PsionicAlch/byteforge/datastructs/queue"
func main() {
orig := queue.FromSlice([]int{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55})
// You can convert a basic queue to a sync queue by calling
// SyncFromRingBuffer.
syncQ := queue.SyncFromRingBuffer(orig)
// You can convert a sync queue to a basic queue by calling
// FromSyncRingBuffer.
basicQ := queue.FromSyncRingBuffer(syncQ)
// The conversions don't impact the order of the underlying buffer.
match := slices.Equal(syncQ.ToSlice(), basicQ.ToSlice())
fmt.Printf("Queues match: %t\n", match)
}
Set
🚧 Documentation is currently under construction 🚧
Tuple
🚧 Documentation is currently under construction 🚧
Utility Functions
slices.ShallowEquals
🚧 Documentation is currently under construction 🚧
slices.DeepEquals
🚧 Documentation is currently under construction 🚧
slices.IRange
🚧 Documentation is currently under construction 🚧
slices.ERange
🚧 Documentation is currently under construction 🚧
slices.Map
🚧 Documentation is currently under construction 🚧
slices.Filter
🚧 Documentation is currently under construction 🚧
slices.ParallelMap
🚧 Documentation is currently under construction 🚧
slices.ParallelFilter
🚧 Documentation is currently under construction 🚧
Contributing
Contributions, feature requests, and bug reports are welcome! Please open an issue or submit a PR.
License
This project is licensed under the MIT License. See LICENSE for details.
Author
Directories
Path | Synopsis
---|---
collection | Package collection provides a fluent, chainable API for performing functional-style operations like map, filter, and reduce on slices.
datastructs |
datastructs/buffers/ring | Package ring provides a generic ring buffer (circular buffer) implementation.
datastructs/queue | Queue is a generic dynamically resizable FIFO Queue.
datastructs/tuple | Package tuple provides a generic, fixed-size tuple type with safe access and mutation.
functions |
internal |
internal/datastructs/buffers/ring | Package ring provides a generic ring buffer (circular buffer) implementation.
internal/datastructs/tuple | Package tuple provides a generic, fixed-size tuple type with safe access and mutation.