benchmarks

package
v0.0.9 Latest
Published: Mar 12, 2026 License: MIT Imports: 4 Imported by: 0

README

Benchmark suite: go-specs vs Testify vs Gomega

Reproducible benchmarks comparing go-specs, Testify (assert), and Gomega on assertions, matchers, runner execution, hooks, and large suites.

Run all benchmarks

From the repository root (recommended):

make bench              # quick run, output to terminal
make bench-report       # run 10 times, write report to benchmarks/results/current.txt
make bench-compare      # compare previous.txt vs current.txt (requires benchstat)

Or with go test directly:

go test ./benchmarks -bench=. -benchmem

From the benchmarks directory:

cd benchmarks
go test -bench=. -benchmem

Run by category

# Assertions (single equality assertion)
go test ./benchmarks -bench=BenchmarkAssertion -benchmem

# Matchers (Expect().To(BeTrue) style)
go test ./benchmarks -bench=BenchmarkMatcher -benchmem

# Runner execution (N specs, one assertion per spec)
go test ./benchmarks -bench=BenchmarkRunner -benchmem

# Hooks (before-each + assertion per spec)
go test ./benchmarks -bench=BenchmarkHooks -benchmem

# Large-scale suites (100, 1000, 10000, 50000 specs; execution time + allocs + scaling)
go test ./benchmarks -bench=BenchmarkSuite_ -benchmem

Structure

File - Description
helpers.go - Suite generation: BuildSpecsProgram(n), CreateGoSpecsSuite(n), SuiteSize100/1000/10000/50000
assertion_bench_test.go - BenchmarkAssertion_GoSpecs_EqualTo, _ExpectToEqual, BenchmarkAssertion_Testify_Equal, BenchmarkAssertion_Gomega_ExpectToEqual
matcher_bench_test.go - BenchmarkMatcher_GoSpecs, BenchmarkMatcher_Gomega
runner_bench_test.go - BenchmarkRunner_GoSpecs, BenchmarkRunner_Testify, BenchmarkRunner_Gomega
hooks_bench_test.go - BenchmarkHooks_GoSpecs, BenchmarkHooks_Testify, BenchmarkHooks_Gomega
large_suite_bench_test.go - BenchmarkSuite_100, _1000, _10000, _50000 (large-scale; suite creation outside timed region)
minimal_and_buildsuite_bench_test.go - BenchmarkRunner_GoSpecs_BuildSuite, BenchmarkRunner_Minimal, BenchmarkRunner_MinimalParallel_*, BenchmarkHooks_GoSpecs_Nested (from the former bench/ package)

Requirements

  • Deterministic: Same N produces the same program shape; no randomness.
  • Realistic: Suite sizes 100, 1000, 10000 where applicable.
  • go-specs target: Zero allocations in assertion/runner fast path where possible (0 allocs/op).
  • Isolate setup: Build suite / create runner before b.ResetTimer(); only the measured loop runs after.
  • Avoid reflection in go-specs benchmarks (use EqualTo / ExpectT().ToEqual for comparable types).
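The setup-isolation requirement can be sketched with the standard library alone. This is a minimal, stdlib-only illustration of the pattern, not this package's actual code: buildSuite here is a hypothetical stand-in for helpers like CreateGoSpecsSuite, and the suite body is a placeholder equality check.

```go
package main

import (
	"fmt"
	"testing"
)

// buildSuite is a hypothetical stand-in for suite construction. It returns
// a runnable closure so all allocation happens before the timed region.
func buildSuite(n int) func(tb testing.TB) {
	return func(tb testing.TB) {
		for i := 0; i < n; i++ {
			got := 42 // placeholder for one equality assertion per spec
			if got != 42 {
				tb.Fatal("mismatch")
			}
		}
	}
}

// BenchmarkRunnerSketch isolates setup: the suite is built before
// b.ResetTimer(), so only suite execution is measured.
func BenchmarkRunnerSketch(b *testing.B) {
	run := buildSuite(1000) // untimed setup
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		run(b)
	}
}

func main() {
	// testing.Benchmark lets a plain program drive a benchmark function.
	res := testing.Benchmark(BenchmarkRunnerSketch)
	fmt.Printf("iterations: %d\n", res.N)
}
```

Because the closure allocates nothing per run, this shape is what makes the "0 allocs/op" target reachable in the measured loop.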

Fair comparison

  • Assertion: Context/setup created once; loop measures only the assertion. Same comparison (42 == 42) for all three.
  • Runner: Same number of specs (1000) and one equality per spec.
  • Hooks: Same hook depth (5) and spec count (100); one assertion per spec.
  • Large suite: go-specs at 100, 1000, 10000 specs to measure scalability.
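The assertion-category rules above (setup once, identical 42 == 42 comparison, only the assertion inside the loop) can be shown with a framework-free sketch. The real benchmarks call each framework's assertion API in place of the raw comparison; this is only the shared shape, assumed from the description above.

```go
package main

import (
	"fmt"
	"testing"
)

// BenchmarkAssertionSketch mirrors the assertion category: operands are
// prepared once outside the loop, and the loop measures only the comparison.
func BenchmarkAssertionSketch(b *testing.B) {
	got, want := 42, 42 // setup once, outside the measured region
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if got != want {
			b.Fatal("assertion failed")
		}
	}
}

func main() {
	res := testing.Benchmark(BenchmarkAssertionSketch)
	fmt.Printf("iterations: %d\n", res.N)
}
```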

Statistical comparison (benchstat)

Results are stored under benchmarks/results/. Scripts run from the repository root (the directory containing benchmarks/ and scripts/).

Run the suite and save results
make bench-report

This runs go test ./benchmarks -bench=. -benchmem -count=10 and writes output to benchmarks/results/current.txt. Use -count=10 for stable statistics.

Compare previous vs current
  1. Install benchstat (once):

    go install golang.org/x/perf/cmd/benchstat@latest
    
  2. First time: run the suite and save a baseline as previous.txt:

    make bench-report
    cp benchmarks/results/current.txt benchmarks/results/previous.txt
    
  3. After making changes, run the suite again, then compare:

    make bench-report
    make bench-compare
    

    make bench-compare runs benchstat on benchmarks/results/previous.txt and current.txt and prints a comparison table (delta and significance).

Determinism

Run from the same directory (repo root) on the same OS and hardware. For more stable results, close other heavy processes and avoid changing GOMAXPROCS between runs.

Charts (matplotlib)

Generate bar charts from benchmarks/results/current.txt:

pip install -r scripts/requirements-charts.txt   # or: pip install matplotlib
python3 scripts/bench_to_chart.py

Outputs (in benchmarks/results/): assertion_chart.png, runner_chart.png, hooks_chart.png. Each chart shows framework name, ns/op, and relative performance (× vs fastest). If a category has no data (e.g. missing benchmarks), that chart is skipped with a warning.

Prerequisites

The specs package must build. From repo root: go build ./specs/...

Documentation

Index

Constants

View Source
const (
	SuiteSize100   = 100
	SuiteSize1000  = 1000
	SuiteSize10000 = 10000
	SuiteSize50000 = 50000
)

Suite sizes for benchmarks (deterministic, realistic).

Variables

This section is empty.

Functions

func BuildSpecsProgram

func BuildSpecsProgram(n int) *specs.Program

BuildSpecsProgram creates a compiled Program with n specs, each running one EqualTo(ctx, 1, 1). All specs share one Describe and one (optional) BeforeEach, so they form one group. Deterministic: same n always produces the same program shape.

func BuildSpecsProgramWithHooks

func BuildSpecsProgramWithHooks(n, depth int) *specs.Program

BuildSpecsProgramWithHooks creates a Program with n specs and depth levels of BeforeEach. Each spec runs depth before hooks then one assertion. Deterministic.

func CreateGoSpecsSuite

func CreateGoSpecsSuite(specCount int) func(tb testing.TB)

CreateGoSpecsSuite builds a suite of specCount specs (one EqualTo per spec) and returns a runnable function. Call it before b.ResetTimer(); the returned function runs the suite with minimal allocations per run.

func CreateGomegaSuite

func CreateGomegaSuite(specCount int) func(tb testing.TB)

CreateGomegaSuite returns a function that runs specCount Gomega expectations (Expect(1).To(Equal(1))). Call it before b.ResetTimer(). NewWithT(tb) is called once per suite run inside the returned function.

func CreateTestifySuite

func CreateTestifySuite(specCount int) func(tb testing.TB)

CreateTestifySuite returns a function that runs specCount assert.Equal(tb, 1, 1) calls. Call it before b.ResetTimer(); the returned function runs the loop with no per-spec suite allocation.

func RunSpecsProgram

func RunSpecsProgram(tb testing.TB, prog *specs.Program)

RunSpecsProgram runs the program with specs.NewRunner. Used by benchmarks.

Types

This section is empty.
