Investigate switch from Hyperfine to Bencher #4
Developer of hyperfine here. I just found this by chance. Let us know (on the hyperfine issue tracker) if there is anything we can do to better support your use case.
This is one of the design goals. Do you think that there is anything that would make it even simpler to use?
The default output of hyperfine shows some basic statistical properties. Under the hood, hyperfine also performs a few more advanced tests for outliers. Finally, you can use the JSON export and the scripts provided in the repository for more detailed analyses.
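A minimal sketch of that export path (the program names are placeholders, not from this thread):

```sh
# Compare two hypothetical programs; --export-json records every individual
# timing (a "times" array per command), not just the mean/stddev summary
# printed to stdout. --warmup discards the initial runs.
hyperfine --warmup 3 --export-json results.json './prog-a' './prog-b'
```

The resulting JSON file can then be fed into whatever statistical tooling suits the analysis.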
True. Please see sharkdp/hyperfine#86.
I'd love to know more.
Hi @sharkdp! First of all, thanks for creating and maintaining Hyperfine! :) It is an excellent tool. To address your points:
When I get the chance, I could indeed raise an issue on the Hyperfine repo explaining my use case, if you would like. I suspect, though, that what I want to do is out of scope for Hyperfine: it is essentially the same idea as what Bencher at the Benchmarks Game is intended to do, i.e. compare the performance of implementations of the same program(s) in different languages.
No, I can't think of anything else you could do to simplify usage (with the exception of introducing 'matrix measurements' - see below), which means you have already done a good job of making it easy to use.
I had somehow totally missed that (or forgotten about it). Thanks!
Yip, I think it's reasonable to exclude memory analysis. I think you could use perf counters on Linux, and the equivalent on Windows (the name escapes me momentarily), but that would probably be a HUGE amount of work, and I have no idea how to implement it myself.

I had already seen the issue about matrix measurements - I have to admit I was a bit surprised to realise it wasn't already supported. BenchmarkDotNet, where I have used the same functionality in the past, has probably spoiled me. I imagine, however, that BenchmarkDotNet has had an order of magnitude more contributors and person-hours sunk into it (and has the .NET VM to rely on too). I expect to be performing measurements sooner than this feature is likely to be introduced, though, and unfortunately I don't have time to work on it myself.
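As an aside, until matrix measurements exist natively, a rough workaround (a sketch only; the program names are hypothetical) is to combine hyperfine's single `--parameter-list` dimension with an outer shell loop for the second dimension:

```sh
# Emulate a two-dimensional measurement matrix: outer loop over languages,
# inner -L (--parameter-list) scan over input sizes. Writes one JSON file
# of results per language.
for lang in c rust python; do
    hyperfine -L n 1000,10000,100000 \
        --export-json "results-${lang}.json" \
        "./fasta-${lang} {n}"
done
```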
For the naming of the output files, please see the use of the …
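The reference above is cut off in this copy. One plausible reading (an assumption, not confirmed by the thread) is hyperfine's parameter placeholders, which let each run write to its own output file:

```sh
# {n} is substituted into the command for every value in the -L list, so the
# redirection target varies per run. ./myprog is a hypothetical binary.
hyperfine -L n 10,20 './myprog {n} > out-{n}.txt'
```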
Thank you for the feedback!
That's certainly possible, yes.
Haven't heard of this tool. I'll take a look - thank you for the reference.
I'd be really surprised if that were a bug in hyperfine, to be honest. Maybe run the command with echo prefixed, to see what is actually being executed?
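A minimal version of that debugging technique (the filename and variable here are hypothetical):

```sh
size=1000
# Wrap the real invocation in a quoted echo: hyperfine then prints the command
# string exactly as the outer shell expanded it, rather than executing it.
hyperfine --runs 1 --show-output "echo './myprog ${size} > out-${size}.txt'"
```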
Hi @sharkdp, I have some good (though not unexpected) news! You were entirely correct that Hyperfine was handling the output filenames correctly - I only finally fixed the problem after you suggested echoing the command. It turned out that I needed to change the way I declared the output file's name. Clearly it was a shell scripting issue, and I suspect it might be specific to Zsh, which I am using.

On a completely separate note, if you're interested in learning about BenchmarkDotNet, one of its lead contributors wrote a whole book about it. The book focuses on BDN, but I'm under the impression that quite a lot of it is much more broadly applicable.

Thanks again for Hyperfine and your helpful comments here! :)
👍
Thank you for the reference!
The initial plan with this research was to use Hyperfine to measure the comparative timings of the different programs, since it is attractive in several respects. It has some downsides for this use case, however.
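For reference, the kind of head-to-head, cross-language comparison intended here looks like this in Hyperfine (the binaries and workload are hypothetical):

```sh
# Benchmark implementations of the same program in three languages; given
# multiple commands, hyperfine prints a relative-speed summary at the end.
hyperfine --warmup 3 \
    './nbody-c 50000000' \
    './nbody-rs 50000000' \
    'python3 nbody.py 50000000'
```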
The problem is likely that Hyperfine simply wasn't designed with this particular purpose in mind. The Bencher system from The Computer Language Benchmarks Game, on the other hand, is explicitly designed to compare implementations of the same program in different languages, and it doesn't seem to suffer from any of the limitations described above. It is therefore probably the better choice for this work.
At some point before the final measurements are taken, Bencher should be investigated to confirm this. It looks more difficult to use, but overall much better suited to the task at hand. Whichever of the two wins out should be used from that point on.