Born - Production-Ready ML for Go

"Models are born production-ready"
Born is a modern deep learning framework for Go, inspired by Burn (Rust). Build ML models in pure Go and deploy as single binaries - no Python runtime, no complex dependencies.
Project Status: v0.5.5 Ready! (WebGPU Performance - Multi-dim Transpose/Expand on GPU!)
Latest: ⚡ GPU-accelerated multi-dimensional operations for transformer training
Pure Go ML with GPU acceleration - no CGO required!
Why Born?
The Problem
Deploying ML models is hard:
- Python runtime required
- Complex dependency management
- Large Docker images
- Slow startup times
- Integration friction with Go backends
The Born Solution
import "github.com/born-ml/born"
// Models "born" ready for production
model := born.Load("resnet50.born")
prediction := model.Predict(image)
// That's it. No Python. No containers. Just Go.
Benefits:
- Single binary deployment
- Fast startup (< 100ms)
- Small memory footprint
- Native Go integration
- Cross-platform out of the box
Features
Core
- Pure Go - No CGO dependencies, trivial cross-compilation
- Type Safe - Generics-powered API for compile-time guarantees
- GPU Acceleration - WebGPU backend with 35+ operations (zero-CGO, 123x speedup)
- Autodiff - Automatic differentiation via decorator pattern
- Production Ready - Single binary deployment, fast startup
- WebAssembly - Run inference in browsers natively
Model Serialization (v0.5.4)
- Save/Load Models - Native .born format with nn.Save() / nn.Load()
- Training Checkpoints - Resume training with nn.SaveCheckpoint() / nn.LoadCheckpoint()
- SafeTensors Export - HuggingFace compatible with serialization.WriteSafeTensors()
- Optimizer State - SGD/Adam momentum and moments preserved in checkpoints
- Metadata Support - Custom metadata in model files
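A minimal sketch of the save/load flow using the functions named above; argument order and error handling are assumptions, not the published signatures:

// Hypothetical signatures - check the serialization docs for the real API.
_ = nn.Save(model, "mnist.born")                               // native .born format, custom metadata supported
_ = nn.SaveCheckpoint(model, optimizer, "ckpt.born")           // weights + SGD/Adam state, for resuming training
_ = serialization.WriteSafeTensors(model, "model.safetensors") // HuggingFace-compatible export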
WebGPU Performance (v0.5.5)
- Multi-dim Transpose - GPU-accelerated 3D/4D/5D/6D tensor transpose
- Expand on GPU - NumPy-style broadcasting with WGSL shaders
- ~60x Speedup - Eliminated the CPU fallback path for transformer training
- Full dtype support - float32 and int32 operations
GPU Backend (v0.5.3)
- Complete WebGPU - All operations for LLM inference on GPU
- CNN Support - Conv2D, MaxPool2D with WGSL compute shaders
- BatchMatMul - 3D/4D tensor support for attention mechanisms
- Zero-CGO - Pure Go via go-webgpu
LLM Support (v0.5.0)
- Grouped Query Attention (GQA) - Memory-efficient attention (LLaMA 2/3, Mistral)
- SwiGLU FFN - Modern FFN with gated activations (+ GeGLU, ReGLU, GLU)
- Model Loading - GGUF format support, weight mapping for LLaMA/Mistral/DeepSeek
- Tokenizers - TikToken, BPE, HuggingFace format, chat templates
- Sampling - Temperature, Top-K, Top-P (nucleus), Min-P, repetition penalty
- Text Generation - Streaming API, KV-cache integration, stop sequences
- Multi-Head Attention (MHA) - Full implementation with Q, K, V projections
- Scaled Dot-Product Attention - Core attention with optional mask/dropout
- KV-Cache - Efficient autoregressive generation (3.94x speedup)
- Positional Encodings - RoPE, ALiBi, Sinusoidal, Learned
- TransformerBlock - Complete Pre-Norm/Post-Norm support
- Normalizations - LayerNorm, RMSNorm (LLaMA style)
- FFN - Feed-Forward Networks with SiLU activation
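As a sketch of how these primitives compose into a block, the snippet below is hypothetical - the constructor and config field names are illustrative assumptions, not the published API:

// Hypothetical sketch - names are illustrative only.
block := nn.NewTransformerBlock(nn.TransformerBlockConfig{
Dim: 4096,      // model dimension
NumHeads: 32,   // query heads
NumKVHeads: 8,  // GQA: fewer KV heads than query heads (LLaMA 2/3 style)
PreNorm: true,  // Pre-Norm residual layout with RMSNorm
}, backend)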
Quick Start
Installation
# Clone repository
git clone https://github.com/born-ml/born.git
cd born
# Build
make build
# Or install CLI
make install
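To consume Born as a dependency in your own module rather than building from source, the standard Go module workflow should apply (assuming the import path shown earlier):

# Add Born to your module
go get github.com/born-ml/born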
Development Setup
Requirements:
- Go 1.25+
- Make (optional, but recommended)
- golangci-lint (for linting)
Build:
make build # Build all binaries
make test # Run tests
make lint # Run linter
make bench # Run benchmarks
Example: MNIST Classification
Working example included! See examples/mnist/ for the complete implementation.
package main
import (
"fmt"

"github.com/born-ml/born/autodiff"
"github.com/born-ml/born/backend/cpu"
"github.com/born-ml/born/nn"
"github.com/born-ml/born/optim"
)
func main() {
// Create backend with autodiff
backend := autodiff.New(cpu.New())
// Define model (784 → 128 → 10)
model := NewMNISTNet(backend)
// Create loss and optimizer
criterion := nn.NewCrossEntropyLoss(backend)
optimizer := optim.NewAdam(model.Parameters(), optim.AdamConfig{
LR: 0.001,
Betas: [2]float32{0.9, 0.999},
}, backend)
// Training loop (batch loading elided; see examples/mnist for the full data pipeline)
for epoch := range 10 {
// Forward pass
logits := model.Forward(batch.ImagesTensor)
loss := criterion.Forward(logits, batch.LabelsTensor)
// Backward pass
optimizer.ZeroGrad()
grads := backend.Backward(loss.Raw())
optimizer.Step(grads)
// Log progress
acc := nn.Accuracy(logits, batch.LabelsTensor)
fmt.Printf("Epoch %d: Loss=%.4f, Accuracy=%.2f%%\n",
epoch, loss.Raw().AsFloat32()[0], acc*100)
}
}
Run it: cd examples/mnist && go run .
Example: LLM Text Generation (v0.5.0)
package main
import (
"fmt"
"github.com/born-ml/born/generate"
"github.com/born-ml/born/tokenizer"
"github.com/born-ml/born/loader"
)
func main() {
// Load tokenizer
tok, _ := tokenizer.NewTikTokenForModel("gpt-4")
// Load model (GGUF format)
model, _ := loader.OpenModel("llama-7b.gguf")
// Create generator with sampling config
gen := generate.NewTextGenerator(model, tok, generate.SamplingConfig{
Temperature: 0.7,
TopP: 0.9,
TopK: 40,
})
// Generate text
result, _ := gen.Generate("Hello, world!", generate.GenerateConfig{
MaxTokens: 100,
})
fmt.Println(result)
// Or use streaming
stream, _ := gen.GenerateStream("Once upon a time", generate.GenerateConfig{
MaxTokens: 50,
Stream: true,
})
for chunk := range stream {
fmt.Print(chunk.Token)
}
}
Core Features:
- ✅ Tensor operations (Add, MatMul, Reshape, Exp, Sqrt, Cat, etc.)
- ✅ 35+ GPU operations (BatchMatMul, Conv2D, MaxPool2D, Comparisons, Reductions)
- ✅ 31 type-safe public API operations (MulScalar, Greater, Softmax, Int32, etc.)
- ✅ Automatic differentiation with gradient tape
- ✅ Neural network modules (Linear, Conv2D, ReLU, SiLU, RMSNorm, Embedding)
- ✅ Optimizers (SGD with momentum, Adam with bias correction)
- ✅ Losses (CrossEntropyLoss with numerical stability)
- ✅ Complete WebGPU backend (zero-CGO, 123x MatMul speedup)
- ✅ Transformer primitives (for LLaMA, GPT, Mistral architectures)
Architecture
Backend Abstraction
Born uses a backend interface for device independence:
type Backend interface {
Add(a, b *RawTensor) *RawTensor
MatMul(a, b *RawTensor) *RawTensor
// ... other operations
}
Available Backends:
| Backend | Status | Description |
|---------|--------|-------------|
| CPU | ✅ Available | Pure Go implementation, all operations (v0.1.1) |
| WebGPU | ✅ Available | Zero-CGO GPU via go-webgpu (v0.5.3) |
| Vulkan | Planned (Q3 2025) | Cross-platform GPU compute |
| CUDA | Planned (Q3 2025) | NVIDIA GPU via zero-CGO |
| Metal | Planned (Q4 2025) | Apple GPU (macOS/iOS) |
WebGPU Operation Support (v0.5.3) - COMPLETE!
| Category | Operations | Backend |
|----------|------------|---------|
| Math | Add, Sub, Mul, Div (float32 + int32), Exp, Sqrt, Rsqrt, Log, Cos, Sin | ✅ GPU |
| Matrix | MatMul, BatchMatMul (3D/4D), Transpose, Reshape | ✅ GPU |
| CNN | Conv2D, MaxPool2D | ✅ GPU |
| Activation | ReLU, Sigmoid, Tanh, Softmax | ✅ GPU |
| Scalar | MulScalar, AddScalar, SubScalar, DivScalar | ✅ GPU |
| Reduction | Sum, SumDim, MeanDim, Argmax | ✅ GPU/CPU hybrid |
| Compare | Greater, Lower, GreaterEqual, LowerEqual, Equal, NotEqual | ✅ GPU |
| Boolean | And, Or, Not | ✅ GPU |
| Shape | Cat, Chunk, Unsqueeze, Squeeze, Expand | ✅ CPU (efficient) |
| Selection | Where, Gather, Embedding | ✅ GPU |
| Type | Cast (float32, int32) | ✅ CPU |
Total: 38+ GPU-accelerated operations!
All operations required for LLM inference (Attention, RoPE, LayerNorm, etc.) are fully supported on GPU.
GPU Backend Setup (v0.5.2+):
WebGPU requires the wgpu_native library. Download from wgpu-native releases:
Windows (x64):
# Download latest release
curl -LO https://github.com/gfx-rs/wgpu-native/releases/latest/download/wgpu-windows-x86_64-msvc-release.zip
unzip wgpu-windows-x86_64-msvc-release.zip
# Install DLL system-wide (requires admin)
copy lib\wgpu_native.dll C:\Windows\System32\
# Or place next to your executable
copy lib\wgpu_native.dll .\your-app\
Linux (x64):
curl -LO https://github.com/gfx-rs/wgpu-native/releases/latest/download/wgpu-linux-x86_64-release.zip
unzip wgpu-linux-x86_64-release.zip
sudo cp lib/libwgpu_native.so /usr/local/lib/
sudo ldconfig
macOS (ARM64):
curl -LO https://github.com/gfx-rs/wgpu-native/releases/latest/download/wgpu-macos-aarch64-release.zip
unzip wgpu-macos-aarch64-release.zip
sudo cp lib/libwgpu_native.dylib /usr/local/lib/
Usage:
import (
"github.com/born-ml/born/autodiff"
"github.com/born-ml/born/backend/cpu"
"github.com/born-ml/born/backend/webgpu"
"github.com/born-ml/born/tensor" // provides tensor.Backend (package path assumed)
)
// Automatic GPU/CPU selection with graceful fallback
var backend tensor.Backend
if webgpu.IsAvailable() {
gpu, err := webgpu.New()
if err == nil {
backend = autodiff.New(gpu)
defer gpu.Release() // Don't forget to release GPU resources
}
}
if backend == nil {
backend = autodiff.New(cpu.New())
}
Decorator Pattern
Functionality is composed via decorators (inspired by Burn):
// Basic backend
base := cpu.New()
// Add autodiff
withAutodiff := autodiff.New(base)
// Add kernel fusion
optimized := fusion.New(withAutodiff)
// Your code works with any backend!
model := createModel(optimized)
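Writing your own decorator is just interface embedding. A minimal sketch, assuming the tensor.Backend interface and RawTensor type shown above (package paths are assumptions; uses the standard log and time packages):

// timingBackend is hypothetical: it wraps any Backend and logs MatMul latency.
type timingBackend struct {
tensor.Backend // embedded: every other operation passes through unchanged
}

func (t *timingBackend) MatMul(a, b *tensor.RawTensor) *tensor.RawTensor {
start := time.Now()
out := t.Backend.MatMul(a, b) // delegate to the wrapped backend
log.Printf("MatMul took %v", time.Since(start))
return out
}

// Usage: timed := &timingBackend{Backend: cpu.New()}
// model := createModel(autodiff.New(timed))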
Type Safety with Generics
type Tensor[T DType, B Backend] struct {
raw *RawTensor
backend B
}
// Compile-time type checking: dtype and backend are part of the tensor's type
func (t *Tensor[T, B]) MatMul(other *Tensor[T, B]) *Tensor[T, B]
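The payoff is that dtype mismatches become compile errors rather than runtime panics. A hypothetical sketch (CPU stands in for a concrete backend type):

// Mixing dtypes is rejected by the compiler, not discovered at runtime.
var a *Tensor[float32, CPU]
var b *Tensor[int32, CPU]
_ = a.MatMul(b) // compile error: *Tensor[int32, CPU] is not *Tensor[float32, CPU]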
Roadmap
Phase 1: Core (v0.1) - ✅ COMPLETE (Nov 2025)
- Tensor API with generics
- CPU backend (pure Go)
- Autodiff decorator with gradient tape
- NN modules (Linear, ReLU, Sigmoid, Tanh, Sequential)
- SGD/Adam optimizers with momentum/bias correction
- CrossEntropyLoss with numerical stability
- MNIST classification example
Status: All 7 core tasks complete. 132 unit tests, 83.8% average coverage, 0 linter issues.
Phase 2: GPU Backends (v0.2-v0.5.3) - ✅ COMPLETE (Dec 2025)
- WebGPU backend (zero-CGO via go-webgpu)
- WGSL compute shaders (35+ operations)
- GPU buffer pooling & memory management
- MNIST GPU inference (10.9x speedup)
- v0.5.3: BatchMatMul, Conv2D, MaxPool2D
- v0.5.3: Comparison ops (Greater, Lower, Equal, etc.)
- v0.5.3: Boolean ops (And, Or, Not)
- v0.5.3: Sum, Argmax, Expand, Cast
Status: COMPLETE WebGPU backend! 35+ GPU ops, 123x MatMul speedup, all LLM ops supported.
Phase 3 (v0.3) - ✅ COMPLETE
- Math operations (Exp, Sqrt, Rsqrt, Cos, Sin, Log)
- Reductions (SumDim, MeanDim with keepDim, Sum, Argmax)
- Tensor manipulation (Cat, Chunk, Unsqueeze, Squeeze, Expand)
- Indexing (Gather, Where)
- Modern layers (SiLU, RMSNorm, Embedding, Softmax)
- Gradient control (NoGrad, Detach)
- 31 public API operations (MulScalar, Greater/Gt, Int32, etc.)
Status: All 7 tasks complete. 112 new tests, 0 linter issues.
Phase 4: Attention Mechanisms (v0.4.0) - ✅ COMPLETE (December 2025)
- Multi-head attention (MHA)
- Scaled dot-product attention (SDPA)
- KV-cache for inference (3.94x speedup)
- Layer normalization (LayerNorm + RMSNorm)
- Positional encodings (RoPE, ALiBi, Sinusoidal, Learned)
- Transformer block with FFN
- BatchMatMul for 3D/4D tensors
Status: All 8 tasks complete. 80+ new tests, 0 linter issues. Full Transformer architecture ready!
Phase 5: LLM Support (v0.5.0) - ✅ COMPLETE (December 2025)
- Grouped Query Attention (GQA) - LLaMA 2/3, Mistral style
- SwiGLU + GLU variants (GeGLU, ReGLU)
- Model Loader (GGUF format, weight mappers)
- Tokenizer integration (TikToken, BPE, chat templates)
- Sampling strategies (Top-K, Top-P, Min-P, temperature, penalties)
- Inference Pipeline (TextGenerator, streaming, stop sequences)
Status: All 6 LLM tasks complete. 100+ new tests, 0 linter issues. Ready for LLM inference!
Next: v0.6.0 - Q1 2026
- Linux/macOS WebGPU support
- ONNX import/export
- Model quantization (INT8, FP16)
- Pre-trained model hub integration
Long-Term: v1.0 LTS - 2026
- Distributed training
- Flash Attention
- Model zoo with pre-trained weights
- Production optimizations (SIMD, memory pooling)
Full roadmap: See ROADMAP.md
Documentation
For Users
For Contributors
Philosophy
"Born Ready"
Models trained anywhere (PyTorch, TensorFlow) are imported and born production-ready:
Training → Birth → Production
(Burn) (Born) (Run)
PyTorch trains → Born imports → Born deploys
TensorFlow trains → Born imports → Born deploys
Born trains → Born ready → Born serves
Production First
- Single Binary: Entire model in one executable
- No Runtime: No Python, no dependencies
- Fast Startup: < 100ms cold start
- Small Memory: Minimal footprint
- Cloud Native: Natural fit for Go services
Developer Experience
- Type Safe: Catch errors at compile time
- Clean API: Intuitive and ergonomic
- Great Docs: Comprehensive documentation
- Easy Deploy: go build and you're done
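Because deployment is plain go build and there is no CGO, cross-compiling for another OS/architecture is a one-line environment-variable switch (output name below is illustrative):

# Build a Linux ARM64 binary from any host - no cross-compiler toolchain needed
GOOS=linux GOARCH=arm64 go build -o server .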
Actual Benchmarks (AMD Ryzen 9 5950X, NVIDIA RTX 3080):
Matrix Operations (WebGPU vs CPU)
| Operation | CPU | GPU | Speedup |
|-----------|-----|-----|---------|
| MatMul 1024x1024 | 7143ms | 58ms | 123x |
| MatMul 512x512 | 499ms | 12ms | 41x |
| MatMul 256x256 | 56ms | 3.7ms | 15x |
Neural Network Inference
| Batch Size | CPU | GPU | Speedup | Throughput |
|------------|-----|-----|---------|------------|
| 64 | 48ms | 19ms | 2.5x | 3,357/s |
| 256 | 182ms | 21ms | 8.5x | 11,883/s |
| 512 | 348ms | 32ms | 10.9x | 15,973/s |
Note: CPU backend uses naive O(nΒ³) MatMul. SIMD optimizations planned for future releases.
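Here "naive O(n³)" means the textbook triple loop, roughly the sketch below: every output element accumulates n multiply-adds, with no blocking, SIMD, or parallelism.

// Illustrative O(n³) baseline for n×n row-major matrices - not Born's actual kernel.
for i := 0; i < n; i++ {
for j := 0; j < n; j++ {
var sum float32
for k := 0; k < n; k++ {
sum += a[i*n+k] * b[k*n+j]
}
c[i*n+j] = sum
}
}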
WebGPU WGSL Shaders (v0.5.3)
Born includes 30+ optimized WGSL compute shaders:
| Shader | Workgroup | Description |
|--------|-----------|-------------|
| addShader | 256 | Element-wise addition |
| subShader | 256 | Element-wise subtraction |
| mulShader | 256 | Element-wise multiplication |
| divShader | 256 | Element-wise division |
| matmulShader | 16x16 | Matrix multiplication (2D) |
| batchMatMulShader | 8x8x1 | Batched matmul (3D/4D) |
| conv2dShader | 8x8x1 | 2D convolution with padding |
| maxPool2dShader | 8x8x1 | 2D max pooling |
| transposeShader | 16x16 | Matrix transpose |
| reluShader | 256 | ReLU activation |
| sigmoidShader | 256 | Sigmoid activation |
| tanhShader | 256 | Tanh activation |
| softmaxShader | 256 | Softmax (numerically stable) |
| expShader | 256 | Element-wise exp |
| sqrtShader | 256 | Element-wise sqrt |
| rsqrtShader | 256 | Reciprocal sqrt (1/√x) |
| cosShader | 256 | Element-wise cosine |
| sinShader | 256 | Element-wise sine |
| greaterShader | 256 | Greater-than comparison |
| lowerShader | 256 | Less-than comparison |
| equalShader | 256 | Equality comparison |
| andShader | 256 | Logical AND |
| orShader | 256 | Logical OR |
| notShader | 256 | Logical NOT |
| argmaxShader | 256 | Argmax along dimension |
| globalSumShader | 256 | Parallel sum reduction |
| scalarMulShader | 256 | Scalar multiplication |
| scalarAddShader | 256 | Scalar addition |
| addShaderInt32 | 256 | Int32 element-wise addition |
| subShaderInt32 | 256 | Int32 element-wise subtraction |
| mulShaderInt32 | 256 | Int32 element-wise multiplication |
| divShaderInt32 | 256 | Int32 element-wise division |
All shaders use workgroup shared memory for optimal performance and support bounds checking for safety.
Inspiration
Born is inspired by and learns from:
- Burn - Architecture patterns, decorator design
- PyTorch - API ergonomics
- TinyGrad - Simplicity principles
- Gonum - Go numerical computing
- HDF5 for Go - Model serialization, dataset storage (planned)
Acknowledgments
Special thanks to the projects that made Born possible:
Born's GPU acceleration is powered by go-webgpu - a remarkable pure Go binding for WebGPU via wgpu-native.
Why this stack is special:
- Zero CGO - Pure Go bindings using goffi for FFI
- Cross-platform - Works on Windows (D3D12), Linux (Vulkan), macOS (Metal)
- Modern API - Clean, idiomatic Go interface to WebGPU
- wgpu-native - Battle-tested Rust implementation of WebGPU by gfx-rs
- Active development - Both projects are actively maintained
Without go-webgpu and wgpu-native, Born would need CGO for GPU support, making cross-compilation complex and defeating our "pure Go" goal. This stack enables us to offer production-ready GPU acceleration while maintaining the simplicity of go build.
Thank you to Alfred Dobra, gfx-rs team, and all contributors!
Project is in early development. Star the repo to follow progress!
License
Licensed under the Apache License, Version 2.0.
Why Apache 2.0?
- ✅ Patent protection - Critical for ML algorithms and production use
- ✅ Enterprise-friendly - Clear legal framework for commercial adoption
- ✅ Industry standard - Same as TensorFlow, battle-tested in ML ecosystem
- ✅ Contributor protection - Explicit patent grant and termination clauses
See LICENSE file for full terms.
FAQ
Q: Why not use Gorgonia?
A: Gorgonia is great but uses a different approach. Born focuses on modern Go (generics), pure Go (no CGO), and production-first design inspired by Burn.
Q: Can I run LLMs with Born?
A: Yes! v0.5.0 includes full LLM support - GGUF model loading, tokenizers, sampling strategies, and text generation with streaming. Load LLaMA, Mistral, or DeepSeek models directly.
Q: When will it be ready?
A: Core features (v0.1-v0.5) are RELEASED! Includes CPU/GPU backends, transformer architecture, and LLM support. ONNX import targeted for v0.6.0 (Q1 2026).
Q: Can I use PyTorch models?
A: Yes! Via ONNX import (v0.6.0, Q1 2026). Train in PyTorch, deploy with Born. Currently GGUF models are supported.
Q: WebAssembly support?
A: Yes! Pure Go compiles to WASM natively. Inference in browsers out of the box.
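A minimal sketch of the standard Go WASM build (output name is illustrative; as of Go 1.24 the JS glue file ships under lib/wasm rather than misc/wasm):

# Compile inference code to WebAssembly
GOOS=js GOARCH=wasm go build -o model.wasm .
# Copy the JS support file shipped with the Go toolchain
cp "$(go env GOROOT)/lib/wasm/wasm_exec.js" .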
Q: What LLM architectures are supported?
A: LLaMA 2/3, Mistral, DeepSeek, and compatible architectures. GQA, RoPE, SwiGLU are all supported.
Q: How do I enable GPU acceleration?
A: Install wgpu_native library from wgpu-native releases, then use webgpu.IsAvailable() to check GPU support. See Architecture for setup instructions. v0.5.3 includes 35+ GPU operations - everything needed for LLM inference!
Q: What GPU operations are supported?
A: All operations needed for production ML! Math (Add, Mul, Exp, etc.), Matrix (MatMul, BatchMatMul, Conv2D), Activations (ReLU, Softmax), Comparisons (Greater, Equal), Boolean (And, Or, Not), Reductions (Sum, Argmax), and more. See the WebGPU Operation Table.
Q: How can I help?
A: Check our Contributing Guide and GitHub Issues!