# teststat
A tool to aggregate and mine data from JSON reports of Go tests.
## Why?
Mature Go projects often have a lot of tests, and not all of them are implemented in the best way. Some tests exhibit slightly different behavior between runs and fail randomly.

Such unstable tests are annoying and kill productivity; identifying and improving them can have a huge impact on the quality of the project and the developer experience.

In general, tests can flake in the following ways.
### Dependency on the environment
Tests would pass on one platform and fail on another.
### Dependency on the initial state
Tests would pass on the first invocation and fail on consecutive invocations.

Such tests can be identified by running the suite multiple times:

```
go test -count 5 .
```

In some cases this failure behavior is expected and desirable, though it reduces the usefulness of the `go test` tool by adding noise to `-count` mode. You would need to run the test suite multiple times with `-count 1` and compare the results to identify cases that flake on the first invocation.
### Dependency on undetermined or random factors
Tests would pass or fail randomly.

Such behavior almost always indicates a bug either in the test or in the code, and it needs a fix. Instability can be caused by data races, which lead to subtle bugs and data corruption.

Running tests with the race detector may help to expose some issues:

```
go test -race -count 5 ./...
```

As with data races, there is no guaranteed way to expose flaky tests that depend on random factors. Running tests multiple times increases the chances of hitting an abnormal condition, but one can never be sure that all issues have been found.
One data race can affect many test cases and "spam" the test output; this can be mitigated by grouping data races by the tails of their stack traces. teststat accepts a `-race-depth int` flag to perform such grouping: the bigger the race depth value, the more race groups are reported.

The `go test` tool offers machine-readable output with the `-json` flag; teststat can read such output and identify racy, flaky, or slow tests:

```
go test -race -json -count 5 ./... |& teststat -race-depth 4 -
```
Another way of using it is to run the test suite multiple times and analyze the reports together. This can help to expose tests that flake on the first invocation:

```
go test -json ./... > test1.jsonl
go test -json ./... > test2.jsonl
go test -json ./... > test3.jsonl

teststat test1.jsonl test2.jsonl test3.jsonl
```
The resulting report can be formatted with `-markdown` to make it more readable as an issue/PR comment.
## Installation

```
go install github.com/vearutop/teststat@latest
```

Or download the binary from releases.

### Linux AMD64

```
wget -q https://github.com/vearutop/teststat/releases/latest/download/linux_amd64.tar.gz && tar xf linux_amd64.tar.gz && rm linux_amd64.tar.gz
./teststat -version
```
## Usage

```
Usage: teststat [options] report.jsonl ...
```

Use `-` or `/dev/stdin` as file name to read from STDIN.

```
  -allure string
        path to write allure report
  -buckets int
        number of buckets for histogram (default 10)
  -failed-builds string
        store build failures to a file
  -failed-tests string
        store regexp of failed tests to a file, useful for a retry run
  -failure-stats string
        store failure stats (total) to a file
  -limit-report int
        maximum report length, exceeding part is truncated (default 60000)
  -markdown
        render output as markdown
  -pkg-cache-csv string
        store build cache units as CSV
  -progress
        show progress
  -race-depth int
        stacktrace depth to group similar data races (default 5)
  -skip-parent
        exclude parent tests of subtests in regexp of failed tests, this may help to avoid running full suite on single failure
  -skip-report
        skip reporting, useful for multiple retries
  -slow duration
        minimal duration of slow test (default 1s)
  -slowest int
        limit number of slowest tests to list (default 30)
  -store string
        store received json lines to file, useful for STDIN
  -verbosity int
        output verbosity, 0 for no output, 1 for failed test names, 2 for failure message
  -version
        show version and exit
```
## Examples

### Read from multiple files

Once you've collected JSONL test reports, you can analyze them with this tool:

```
teststat -race-depth 4 -buckets 15 -slowest 7 ./flaky.jsonl ./test.jsonl
```

Sample report:
```
Flaky tests:
github.com/acme/foo/core/affiliate/networks.TestBarSuite/TestOisGetReinvented: 2 passed, 8 failed
github.com/acme/foo/core/affiliate/networks.TestBarSuite/TestOisGetReinstallCallbacks: 2 passed, 8 failed
github.com/acme/foo/core/affiliate/networks.TestBarSuite: 2 passed, 8 failed
github.com/acme/foo/core/kafka.TestClose_Graceful_Pooled: 15 passed, 1 failed
github.com/acme/foo/core/kafka.TestClose_ClosePause: 14 passed, 2 failed

Slowest tests:
pass github.com/acme/foo/manipulation_services/api_server TestCreateLeafTracer_Ok 1m26.4s
pass github.com/acme/foo/manipulation_services/api_server TestCreateTracer_Ok 1m16.55s
pass github.com/acme/foo/manipulation_services/api_server TestCreateTracer_Ok/D4 1m16.45s
pass github.com/acme/foo/manipulation_services/api_server TestCreateLeafTracer_Ok 1m3.28s
pass github.com/acme/foo/manipulation_services/refresh_worker TestConsumeImpression_Success 52.85s
pass github.com/acme/foo/manipulation_services/api_server TestCreateLeafTracer_Ok 31.58s
pass github.com/acme/foo/manipulation_services/refresh_worker TestSubscriptionConsumer_DifferentEventSubtypes 30.39s

Events: map[cont:2368 fail:196 flaky:32 output:1805716 pass:660182 pause:2336 run:780596 skip:120154 slow:863]
Elapsed: 1h36m1.129999952s
Slow: 40m34.649999952s

Elapsed distribution (seconds):
[  min   max]   cnt total% (37862 events)
[ 0.01  0.10] 32284 85.27% .....................................................................................
[ 0.11  0.24]  3383  8.94% ........
[ 0.25  0.52]   814  2.15% ..
[ 0.53  1.05]   574  1.52% .
[ 1.06  2.03]   552  1.46% .
[ 2.04  3.21]   122  0.32%
[ 3.30  4.90]    37  0.10%
[ 4.99  6.22]    36  0.10%
[ 6.40  8.68]    27  0.07%
[ 8.69 11.41]    22  0.06%
[12.48 14.30]     3  0.01%
[17.92 17.92]     1  0.00%
[30.39 31.58]     2  0.01%
[52.85 63.28]     2  0.01%
[76.45 86.40]     3  0.01%
```
### Read from STDIN

Build errors are reported in plain text (non-JSON) via STDERR; please use the `|&` pipe to collect STDERR too.

```
go test -count 5 -json -race ./... |& teststat -
```
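For reference, `|&` is Bash shorthand for `2>&1 |`, which pipes STDERR together with STDOUT. A quick self-contained check (unrelated to teststat):

```shell
# Both streams reach the pipe: "err" (STDERR) and "out" (STDOUT) are sorted together.
{ echo out; echo err >&2; } |& sort
```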
A pipe discards the non-successful exit code by default; in that case you may want to collect failed tests and build failures into files to check them later:

```
go test -count 5 -json -race ./... |& teststat -failed-tests failed.txt -failed-builds errors.txt -
```
### Collect build cache stats

If you want to investigate a huge build cache, you can collect build cache stats into a CSV file:

```
go test -x -json ./... | teststat -pkg-cache-csv build-cache.csv -
```

The resulting file looks like this:

```
package,cache,size,timestamp
vendor/golang.org/x/net/http/httpguts,/Users/vearutop/Library/Caches/go-build/c0/c09f67107fc00c93f590a6a8a27781ab71d8bf37615b963f72ea2c738f9295f4-d,78060,2025-10-11T12:00:06+02:00
mime,/Users/vearutop/Library/Caches/go-build/dc/dc6f6b77e970371eefb13c87186408a4d4ed1b0d7578851a405e645eb1d6a174-d,1258448,2025-10-11T12:00:06+02:00
...
```