Paired Benchmarks #18
Hi, this feature could be really useful for a project of mine.
It is possible to build N-1 pairwise measurements if that solves your problem, which leads me to the following:
It's an interesting use case. Can you elaborate more on the task you are trying to solve?
My case: 1 Rust version (= my implementation) + 2 C versions (with different algorithms) of a performance-critical function. The N-1 pairs approach seems OK, but it also means running the inner versions twice as often.
I'm convinced that the best way to implement this today would be to
I don't see why we would be limited to 2 benchmarks. We could theoretically interleave an arbitrary number of benchmark runs.
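For illustration, here is a minimal sketch of what interleaving an arbitrary number of candidates and comparing them pairwise could look like. This is not Divan's or Tango's actual API; `paired_measure` and every other name below are hypothetical, and a real harness would also randomize ordering, warm up, and filter outliers.

```rust
use std::time::Instant;

/// Interleave an arbitrary number of benchmark candidates, recording one
/// sample per candidate per round. Because the candidates run back to back
/// within each round, slow drift in system load (thermal throttling,
/// background tasks) shifts all samples of a round together and largely
/// cancels out when samples are compared pairwise.
fn paired_measure(candidates: &mut [(&str, Box<dyn FnMut()>)], rounds: usize) -> Vec<Vec<u64>> {
    let mut samples = vec![Vec::with_capacity(rounds); candidates.len()];
    for _ in 0..rounds {
        for (i, (_, f)) in candidates.iter_mut().enumerate() {
            let start = Instant::now();
            f();
            samples[i].push(start.elapsed().as_nanos() as u64);
        }
    }
    samples
}

fn main() {
    // Placeholder workloads standing in for the real implementations.
    let mut candidates: Vec<(&str, Box<dyn FnMut()>)> = vec![
        ("rust_impl", Box::new(|| { std::hint::black_box((0..1000u64).sum::<u64>()); })),
        ("c_algo_1", Box::new(|| { std::hint::black_box((0..1000u64).product::<u64>()); })),
        ("c_algo_2", Box::new(|| { std::hint::black_box((0..1000u64).fold(0, u64::wrapping_add)); })),
    ];
    let samples = paired_measure(&mut candidates, 1000);

    // Compare per-round deltas against the first candidate (the N-1 pairs idea).
    for i in 1..samples.len() {
        let mean_delta: i64 = samples[i]
            .iter()
            .zip(&samples[0])
            .map(|(a, b)| *a as i64 - *b as i64)
            .sum::<i64>()
            / samples[i].len() as i64;
        println!("{} vs {}: mean delta {} ns", candidates[i].0, candidates[0].0, mean_delta);
    }
}
```

With this scheme, adding more candidates only lengthens each round; each extra implementation is still compared against the baseline through per-round deltas rather than absolute times.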
Sorry for the late reply. After several months of experience with different approaches in tango, I think I have a more or less complete approach to paired testing. The way it works is the following. Each tango benchmark executable is a mixed object (binary/shared library). The binary part contains the tango benchmark runner, and the shared library part (FFI) contains the benchmark registry. Therefore the runner can load benchmarks from itself as well as from any other tango binary. This way several benchmarks can be run simultaneously. This IMO solves a lot of different problems.
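A rough sketch of that mixed-object layout follows. This assumes a hypothetical C ABI: `tango_benchmarks` and `BenchDescriptor` are made-up names, not Tango's real exports. The idea is that the crate is built both as a binary and as a shared library, so the runner in the binary part can dlopen() any other tango executable (or itself) and enumerate its registry through the same exported symbol.

```rust
use std::ffi::c_char;

/// A single registered benchmark exposed across the FFI boundary
/// (illustrative layout only).
#[repr(C)]
pub struct BenchDescriptor {
    pub name: *const c_char,
    /// Runs the benchmark for `iterations` and returns elapsed nanoseconds.
    pub run: extern "C" fn(iterations: u64) -> u64,
}

/// Entry point the runner would look up with dlsym() after loading the
/// library part of a tango executable. Returns a pointer to the registry
/// and writes its length to `len`.
#[no_mangle]
pub extern "C" fn tango_benchmarks(len: *mut usize) -> *const BenchDescriptor {
    // In a real harness the registry would be populated at compile time
    // (e.g. by a macro); it is left empty here for brevity.
    unsafe { *len = 0 };
    std::ptr::null()
}

fn main() {
    // Binary part: the runner. It could load benchmarks from its own
    // exported registry and from any other benchmark executable passed on
    // the command line, then interleave their runs pairwise.
}
```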
Paired benchmarking spreads measurement noise evenly across the benchmarks under comparison, so it largely cancels out in the comparison. It is the approach used in Tango.
@bazhenov, I would love to collaborate on how this approach would look in Divan. 🙂