Allow choice of reference benchmark #577
Comments
That sounds like a very useful feature request, thank you. How would this work in cases where we do not have a […]?
I believe `--reference` should always contain a complete command (and be separate from the normal run parameters; notice there is no `{shell}` placeholder in the reference):

hyperfine --reference 'nu -c "echo \"Hello, world!\""' -L shell sh,ash,ksh,bash,zsh,csh,fish,tcsh,pwsh '{shell} -c "echo \"Hello, world!\""'

This way we can also measure it against programs that are not necessarily the same.
@christianhujer what do you think?
I'm finding this issue while searching for a related one -- going to mention it here since it may be relevant to designing this feature. But in my case, I think I want a reference command per parameter. Imagine a use case where I am similarly testing 10 shells running some command, and I want to measure how slow running the command is relative to a reference point, where the reference point is each shell's own startup time. That would basically normalize each shell by how long it took to start up, thereby measuring just the command execution. (Of course I'd separately want to measure the shells against each other in absolute terms, but sometimes I care only about the runtime once the shell is up.) But yeah -- are there any thoughts on whether letting the reference command vary per parameter like this would make sense?
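This per-shell normalization can be prototyped without built-in support. Below is a minimal post-processing sketch in Python, assuming two benchmark runs exported with hyperfine's `--export-json` (one for the parameterized command, one for bare shell startup) and assuming each result entry carries a `parameters` field with the shell name; the file names and the overall approach are illustrative, not anything hyperfine itself provides.

```python
# Sketch: normalize each shell's command time by that shell's own startup time,
# as described in the comment above. Assumes two hyperfine JSON exports:
#   command.json  -- e.g. hyperfine -L shell ... '{shell} -c "<command>"' --export-json command.json
#   startup.json  -- e.g. hyperfine -L shell ... '{shell} -c ""'          --export-json startup.json
import json

def means_by_shell(path):
    """Map each shell parameter value to its mean runtime (seconds)."""
    with open(path) as f:
        data = json.load(f)
    return {r["parameters"]["shell"]: r["mean"] for r in data["results"]}

command_means = means_by_shell("command.json")
startup_means = means_by_shell("startup.json")

for shell, mean in sorted(command_means.items(), key=lambda kv: kv[1]):
    startup = startup_means[shell]
    print(f"{shell}: total {mean * 1000:.1f} ms, "
          f"startup {startup * 1000:.1f} ms, "
          f"command-only ~{(mean - startup) * 1000:.1f} ms")
```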
Implemented by @JanEricNitschke in #744 |
When running `hyperfine`, we get a nice output. For example, when running `hyperfine` like this:

[…]

The Summary of the output looks like this:

[…]

It would be really great if it were possible to pick one of the benchmark runs as "reference". For example, if I'm interested in how `nu` performs in comparison, I would like it to be on top, and the other benchmarks should be listed as "times faster" or "times slower", depending on their performance. In this example, when I run it like this:

[…]

Then the Summary should look like this:

Summary
[…]
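As a side note, the kind of relative summary requested here can be approximated by post-processing hyperfine's JSON export. The sketch below is only an illustration of the described output (it is not the implementation from #744); it assumes a `results.json` produced with `--export-json` and picks the reference entry by a substring match on the command line.

```python
# Illustrative post-processing of a hyperfine JSON export: print a summary
# relative to a chosen reference benchmark instead of the fastest one.
import json
import sys

def main(path="results.json", reference="nu"):
    with open(path) as f:
        results = json.load(f)["results"]
    # Choose the reference result by substring match on its command (illustration only).
    ref = next(r for r in results if reference in r["command"])
    print("Summary")
    print(f"  {ref['command']}  (reference)")
    for r in results:
        if r is ref:
            continue
        ratio = r["mean"] / ref["mean"]
        word = "slower" if ratio >= 1 else "faster"
        factor = ratio if ratio >= 1 else 1 / ratio
        print(f"    {factor:.2f} times {word}:  {r['command']}")

if __name__ == "__main__":
    main(*sys.argv[1:])
```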