Documentation ¶
Overview ¶
Package benchkit is the lightweight, feather touch, benchmarking kit.
In comparison to the standard pprof utilities, this package is meant to help generate graphs and other artifacts.
Usage ¶
Get a benchmark kit:
bench, result := benchkit.Memory(n)
A benchmark kit consists of 4 methods:
- Setup(): call before doing any benchmark allocation.
- Starting(): call once your benchmark data is ready, and you're about to start the work you want to benchmark.
- Each(): gives an object that tracks each step of your work.
- Teardown(): call once your benchmark is done.
Here's an example:
bench, result := benchkit.Memory(n)

bench.Setup()
// create benchmark data
bench.Starting()
doBenchmark(bench.Each())
bench.Teardown()
Inside your benchmark, you will use the `BenchEach` object. This object consists of 2 methods:
- Before(i int): call it _before_ starting an atomic part of work.
- After(i int): call it _after_ finishing an atomic part of work.
In both cases, you must ensure that `0 <= i < n`, or the call will panic.
func doBenchmark(each BenchEach) {
    for i, job := range thingsToDoManyTimes {
        each.Before(i)
        job()
        each.After(i)
    }
}
In the example above, you could use `defer each.After(i)`, however `defer` has some overhead and will thus reduce the precision of your benchmark results.
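To get a feel for that overhead, here is a quick self-contained measurement using only the standard library. It is not part of benchkit; the `measure` helper is invented for this sketch, and the absolute numbers will vary by Go version and machine:

```go
package main

import (
	"fmt"
	"time"
)

// measure runs fn n times and reports the average cost per call.
func measure(n int, fn func()) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		fn()
	}
	return time.Since(start) / time.Duration(n)
}

func main() {
	n := 1_000_000
	withDefer := measure(n, func() {
		defer func() {}()
	})
	direct := measure(n, func() {
		func() {}()
	})
	fmt.Printf("with defer: %v, direct: %v\n", withDefer, direct)
}
```

On recent Go releases the gap is small, but inside a tight benchmark loop even a few nanoseconds per call show up in the per-step timings.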
The `result` object given with your `bench` object will not be populated before your call to `Teardown`:
bench, result := benchkit.Memory(n)
// don't use `result`
bench.Teardown()
// now you can use `result`
Using `result` before `Teardown` will result in:
panic: runtime error: invalid memory address or nil pointer dereference
So don't do that. =)
Memory kit ¶
Collects memory allocations during the benchmark, using `runtime.ReadMemStats`. The measurements are coarse.
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Bench ¶
func Bench(kit BenchKit, results interface{}) *eacher
Bench is a helper func that will call Starting/Teardown for you.
Example ¶
package main

import (
    "github.com/aybabtme/benchkit"
)

func main() {
    mem := benchkit.Bench(benchkit.Memory(10)).Each(func(each benchkit.BenchEach) {
        for i := 0; i < 10; i++ {
            each.Before(i)
            // do stuff
            each.After(i)
        }
    }).(*benchkit.MemResult)
    _ = mem

    times := benchkit.Bench(benchkit.Time(10, 100)).Each(func(each benchkit.BenchEach) {
        for repeat := 0; repeat < 100; repeat++ {
            for i := 0; i < 10; i++ {
                each.Before(i)
                // do stuff
                each.After(i)
            }
        }
    }).(*benchkit.TimeResult)
    _ = times
}
func Memory ¶
func Memory(n int) (BenchKit, *MemResult)
Memory will track memory allocations using `runtime.ReadMemStats`.
Example ¶
n := 5
size := 1000000
buf := bytes.NewBuffer(nil)
memkit, results := benchkit.Memory(n)
memkit.Setup()
files := GenTarFiles(n, size)
memkit.Starting()
each := memkit.Each()
tarw := tar.NewWriter(buf)
for i, file := range files {
    each.Before(i)
    _ = tarw.WriteHeader(file.TarHeader())
    _, _ = tarw.Write(file.Data())
    each.After(i)
}
_ = tarw.Close()
memkit.Teardown()
// Look at the results!
fmt.Printf("setup=%s\n", effectMem(results.Setup))
fmt.Printf("starting=%s\n", effectMem(results.Start))
for i := 0; i < results.N; i++ {
    fmt.Printf("  %d before=%s after=%s\n",
        i,
        effectMem(results.BeforeEach[i]),
        effectMem(results.AfterEach[i]),
    )
}
fmt.Printf("teardown=%s\n", effectMem(results.Teardown))
Output:

setup=2.0 MB
starting=6.9 MB
  0 before=6.9 MB after=8.0 MB
  1 before=8.0 MB after=10 MB
  2 before=10 MB after=15 MB
  3 before=15 MB after=15 MB
  4 before=15 MB after=15 MB
teardown=26 MB
func Time ¶
func Time(n, m int) (BenchKit, *TimeResult)
Time will track timings over exactly n steps, m times for each step. Memory is allocated in advance for the m records of each step; you can record fewer than m times with no ill effect, or more than m times at a loss of precision (due to the extra allocations).
Types ¶
type BenchEach ¶
type BenchEach interface {
    // Before must be called _before_ starting a unit of work.
    Before(id int)
    // After must be called _after_ finishing a unit of work.
    After(id int)
}
BenchEach tracks metrics about work units of your benchmark.
type BenchKit ¶
type BenchKit interface {
    // Setup must be called before doing any benchmark allocation.
    Setup()
    // Starting must be called once your benchmark data is ready,
    // and you're about to start the work you want to benchmark.
    Starting()
    // Teardown must be called once your benchmark is done.
    Teardown()
    // Each gives an object that tracks each step of your work.
    Each() BenchEach
}
BenchKit tracks metrics about your benchmark.
type MemResult ¶
type MemResult struct {
    N          int
    Setup      *runtime.MemStats
    Start      *runtime.MemStats
    Teardown   *runtime.MemStats
    BeforeEach []*runtime.MemStats
    AfterEach  []*runtime.MemStats
}
MemResult contains the memory measurements of a memory benchmark at each point of the benchmark.
type TimeResult ¶
TimeResult contains the timing measurements of a time benchmark at each step of the benchmark.
type TimeStep ¶
type TimeStep struct {
    Significant []time.Duration
    Min         time.Duration
    Max         time.Duration
    Avg         time.Duration
    SD          time.Duration
    // contains filtered or unexported fields
}
TimeStep contains statistics about a step of the benchmark.
Notes ¶
Bugs ¶
not sure I like the Bench func.