A statistics-driven micro-benchmarking framework, heavily inspired by the wonderful criterion library for Haskell. Originally created by LemonBoy.
It works, but the API is not 100% stable yet.

A quick usage example:
```nim
import criterion

var cfg = newDefaultConfig()

benchmark cfg:
  func fib(n: int): int =
    case n
    of 0: 1
    of 1: 1
    else: fib(n-1) + fib(n-2)

  # on nim-1.0 you have to use {.measure: [].} instead
  proc fib5() {.measure.} =
    var n = 5
    blackBox fib(n)

  # ... equivalent to ...

  iterator argFactory(): int =
    for x in [5]:
      yield x

  proc fibN(x: int) {.measure: argFactory.} =
    blackBox fib(x)

  # ... equivalent to ...

  proc fibN1(x: int) {.measure: [5].} =
    blackBox fib(x)
```
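Because the argument factory is just an iterator, the same pattern can be used to sweep a benchmark over several input sizes. The following is a small sketch using only the constructs shown above; the `sizes` and `fibSweep` names are made up for illustration:

```nim
import criterion

var cfg = newDefaultConfig()

benchmark cfg:
  func fib(n: int): int =
    if n < 2: n
    else: fib(n-1) + fib(n-2)

  # Yield every input the fixture should be measured with.
  iterator sizes(): int =
    for n in [5, 10, 15]:
      yield n

  # The fixture is measured once for each yielded argument.
  proc fibSweep(n: int) {.measure: sizes.} =
    blackBox fib(n)
```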
A bit too much info? Just set `cfg.brief = true` and the results will be printed in a condensed format that is much easier to parse.
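For instance, a minimal sketch with the condensed output enabled; the `timesTwo` fixture name is made up for illustration:

```nim
import criterion

var cfg = newDefaultConfig()
cfg.brief = true   # switch to the condensed output format

benchmark cfg:
  proc timesTwo() {.measure.} =
    var x = 21
    blackBox(x * 2)
```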
If you need to pass more than a single argument to your benchmark fixture, just use a tuple: it is automagically unpacked at compile-time.
```nim
import criterion

let cfg = newDefaultConfig()

benchmark cfg:
  proc foo(x: int, y: float) {.measure: [(1,1.0),(2,2.0)].} =
    discard x.float + y
```
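The example above uses two-element tuples; the same pattern presumably extends to fixtures with more parameters. A hedged sketch, where the `combine` fixture is made up and the assumption is that the unpacking generalizes to larger tuples:

```nim
import criterion

let cfg = newDefaultConfig()

benchmark cfg:
  # Each tuple is unpacked into the three parameters of the fixture.
  proc combine(x: int, y: float, s: string) {.measure: [(1, 0.5, "a"), (2, 2.0, "bc")].} =
    blackBox(x.float * y + s.len.float)
```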
If you need the measurement data in order to compare different benchmarks, to plot the results or to post-process them, you can do so by adding a single line to your benchmark setup:
```nim
let cfg = newDefaultConfig()
# Your usual config goes here...
cfg.outputPath = "my_benchmark.json"

benchmark(cfg):
  # Your benchmark code goes here...
```
Once the block has completed, the data will be dumped into a JSON file that's ready for consumption by other tools.
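As a rough sketch of consuming that file from another Nim program (the exact JSON schema isn't described here, so this only loads and pretty-prints it; adapt the field access to whatever the dump actually contains):

```nim
import std/json

# Load the measurements exported by the benchmark above.
let data = parseJson(readFile("my_benchmark.json"))

# Inspect the structure before post-processing it further.
echo data.pretty()
```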
See the documentation for the criterion module, generated directly from the source.
License: MIT