MNIST GPU Inference Benchmark
This example demonstrates a CPU vs WebGPU performance comparison for neural network inference. It builds a simple MLP model and measures forward-pass times on both backends.
Usage:
go run ./examples/mnist-gpu -batch 256 -iterations 100
Note: This benchmark focuses on the compute-intensive operations (MatMul, ReLU). WebGPU shines for large matrix operations, while the CPU may be faster for small batches.