We already have

- an outlier detection which is useful if a background process has interfered with a single (or a few) benchmark runs
- a "first run was significantly longer than the rest" detection which can detect caching effects
But there are other things that can go wrong during benchmarking. For example:
- a sudden jump in runtime, resulting in a bimodal (or multi-modal) runtime distribution. This could be caused by a long-running background process that spun up
- a continuous trend in the runtimes, like a slowly growing or decreasing runtime, which might be caused by things such as thermal throttling
It would be nice to add more heuristics to `hyperfine` to detect these scenarios as well. The second case could maybe be handled by fitting a linear model to the runtimes and checking whether its slope is close (enough) to zero.
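
As a rough illustration of that second idea, here is a minimal sketch of what such a slope check could look like: an ordinary-least-squares fit over (run index, runtime) pairs, flagging the benchmark if the slope relative to the mean runtime exceeds some tolerance. The function names, the relative-slope criterion, and the 0.5% tolerance are purely illustrative assumptions, not part of hyperfine's code or API.

```rust
/// Slope of an ordinary-least-squares line fitted to (run index, runtime).
/// Returns `None` if there are fewer than two measurements.
fn runtime_trend_slope(times: &[f64]) -> Option<f64> {
    let n = times.len();
    if n < 2 {
        return None;
    }
    let n_f = n as f64;
    let mean_x = (n_f - 1.0) / 2.0;
    let mean_y = times.iter().sum::<f64>() / n_f;
    let mut cov = 0.0;
    let mut var_x = 0.0;
    for (i, &y) in times.iter().enumerate() {
        let dx = i as f64 - mean_x;
        cov += dx * (y - mean_y);
        var_x += dx * dx;
    }
    Some(cov / var_x)
}

/// Flags a benchmark whose per-run drift (slope divided by the mean runtime)
/// exceeds a chosen tolerance, e.g. 0.5% per run. Threshold is a placeholder.
fn has_runtime_trend(times: &[f64], rel_tolerance: f64) -> bool {
    let mean = times.iter().sum::<f64>() / times.len() as f64;
    match runtime_trend_slope(times) {
        Some(slope) if mean > 0.0 => (slope / mean).abs() > rel_tolerance,
        _ => false,
    }
}

fn main() {
    // Slowly growing runtimes, as might be caused by thermal throttling.
    let times = vec![1.00, 1.01, 1.03, 1.04, 1.06, 1.08, 1.09, 1.11];
    println!("trend detected: {}", has_runtime_trend(&times, 0.005));
}
```

In practice the tolerance would probably need to be scaled by the measurement noise (e.g. compared against the standard error of the fitted slope) rather than being a fixed percentage, but the basic shape of the heuristic would be the same.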