Measure with best available precision #219
Conversation
* native time measurer for determining n_times
* added nanosecond unit, unit tests pass there

PICK ME UP:
* Integration tests failing
* running fast_functions seems to break as well
Fast functions are still fast, but they don't trigger a fast warning anymore on nanosecond systems, so we had to tag a couple of tests not to execute anymore. Their times still don't differ by much; I chalk that up to compiler optimization, so they all basically return static values.
Normal functions behave normally:
ah jeez, coveralls is failing now because all the repetition code in the runner is now untested, as only the Linux builds report to it :'( We could adjust the coveralls maximum coverage drop or extract the code and test it in more isolation.
* RepeatedMeasurement to unit test it and not pollute the general runner with this very specific code for runtime measurements, which seems to be relevant only on Windows these days
* Hooks code in its own module to avoid pollution and reuse it from both places
* made `measure` public on the runner... not 100% happy with this, but putting it into yet another module seems like overhead now
* temporarily duplicated `main_function` code
Ok this should start passing now :)
Don't be scared by the big diff please - this is a lot of moving code around and simple changes :)
```diff
@@ -4,7 +4,6 @@ elixir:
   - 1.5.3
   - 1.6.4
 otp_release:
-  - 18.3
```
Scaling native to nanoseconds threw an error on OTP 18, so I'm inclined to believe it doesn't support it. As newer Elixir versions also drop support for OTP 18, I thought it was ok for us to do so as well.
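For context, this is roughly the kind of native-to-nanosecond conversion involved (my own minimal sketch, not the exact code from this PR):

```elixir
# Take a monotonic timestamp in the VM's native unit and scale it to nanoseconds.
# The :nanosecond target unit is what reportedly raises an error on OTP 18.
native = System.monotonic_time()
nanoseconds = System.convert_time_unit(native, :native, :nanosecond)
```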
```diff
-  This module actually runs our benchmark scenarios, adding information about
-  run time and memory usage to each scenario.
-  """
+  @moduledoc false
```
Going towards 1.0, I think we should also be more vigilant about what we want to be official API vs. our internal stuff.
Also, I mainly did this because I made some functions here public to be used from our repeating code.
👍
```elixir
end

@no_input Benchmark.no_input()
def main_function(function, @no_input), do: function
def main_function(function, input), do: fn -> function.(input) end
```
I had hoped to get rid of this altogether by saving it in the ScenarioContext
- something I've wanted to do for a long time, as I also fear that this might introduce a performance overhead / stand in the way of optimizations.
However, as hooks can change the input, it's not as easy. Now that we have our own measurers we could just call them with an argument if we feel like it and implement them like this. I'd wanna leave this for another day though :)
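Purely to illustrate the "call the measurers with an argument" idea, a hypothetical measurer taking the input directly might look something like this (module name and shape are made up, not part of this PR):

```elixir
defmodule InputAwareMeasurerSketch do
  # Instead of receiving a pre-wrapped zero-arity function (what main_function
  # builds today), the measurer gets the function and its input separately.
  def measure(function, input) do
    start = :erlang.monotonic_time()
    result = function.(input)
    finish = :erlang.monotonic_time()
    {finish - start, result}
  end
end
```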
```elixir
    end)
  end)
end
```
This test was always a flaky candidate, and it started failing consistently with more than 3 std_dev, so I don't think it's worth keeping anymore.
```elixir
  },
  measurer
) do
  Runner.measure(scenario, scenario_context, measurer)
```
Not a fan of this personally. It's a circular dependency (Runner depends on us, we depend on Runner), and if things go very wrong we might even trigger an endless loop.
Extracting the function to another module might be the better call, but then it'd (likely) only have that one function, and I had trouble coming up with a name for it, which suggests that I didn't find the higher-level concept yet or that it really wouldn't be an extraction of a higher-level concept.
I think the thing we're seeing here is that these functions are really part of the Runner
process, which, honestly, they are. I think that's the higher concept this dependency graph is trying to tell us. But I also don't think it's a problem to split this up as it is now.
Rebuilding AppVeyor, it seems to have problems accessing hex.pm.
Ok, I wrote the beginning of a little benchmark I wanted to do for board games and compared this against master (yes, I know lists are bad for this; the end goal is to compare vs. maps, tuples, records):

```elixir
defmodule BoardBuilder do
  def list(size) do
    range = 1..size

    Enum.map(range, fn _ ->
      Enum.map(range, fn _ -> :value end)
    end)
  end
end

alias Benchee.Utility.RepeatN

list = BoardBuilder.list(19)

Benchee.run(
  %{
    "get(0, 0)" => fn -> list |> Enum.at(0) |> Enum.at(0) end,
    "get(max, max)" => fn -> list |> Enum.at(18) |> Enum.at(18) end
  }
)
```

this branch:

master:
so I decided to put in some manual repetition:

```elixir
defmodule BoardBuilder do
  def list(size) do
    range = 1..size

    Enum.map(range, fn _ ->
      Enum.map(range, fn _ -> :value end)
    end)
  end
end

alias Benchee.Utility.RepeatN

list = BoardBuilder.list(19)

Benchee.run(
  %{
    "get(0, 0)" => fn -> RepeatN.repeat_n(fn -> list |> Enum.at(0) |> Enum.at(0) end, 100) end,
    "get(max, max)" => fn -> RepeatN.repeat_n(fn -> list |> Enum.at(18) |> Enum.at(18) end, 100) end
  }
)
```

Then my results are about the same as on master but more precise:
It makes it seem to me like the overhead from the function call is bigger than the one from the repetition - i.e. I feel that the repeated samples are somewhat more realistic 🤔 Now if I set an empty function call as the baseline, we get these results:
If I then deduct the empty average, we end up with 103 and 410, which says that get(max, max) is about 4 times slower than my get(0, 0). If I do the same using the median (flipping the bird to outliers), we get 85 and 383 respectively, arriving at ~4.5 times slower. So now the question is - which one of these values is the most trustworthy?
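Spelling those numbers out as a quick sanity check (values are the baseline-corrected ones quoted above):

```elixir
410 / 103 # => ~3.98, so get(max, max) is roughly 4x slower going by the average
383 / 85  # => ~4.51, roughly 4.5x slower going by the median
```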
I think we're basically there. Just a couple little questions!
```elixir
alias Benchee.Benchmark.{Scenario, ScenarioContext}

def run_before_scenario(
```
So, a formatting thing. I find these really tricky to read, and one thing that they're doing in the Elixir codebase itself is saying basically "if you don't need to pattern match in the function head, don't unless it fits on one line." Do you think you'd be cool with adopting a similar heuristic for benchee? So, this function would end up looking something like this:
```elixir
def run_before_scenario(scenario, scenario_context) do
  %Scenario{
    before_scenario: local_before_scenario,
    input: input
  } = scenario

  %ScenarioContext{
    config: %{before_scenario: global_before_scenario}
  } = scenario_context

  input
  |> run_before_function(global_before_scenario)
  |> run_before_function(local_before_scenario)
end
```
What do you think?
I've heard/seen this rule before... and I'm not sure. I sort of like just pattern matching on everything in the function "head" so that it's easy at first glance to see what data the function needs.
Although to be fair, in your version it's fairly easy as well, as the matches just moved to the first couple of lines. I'll think on it, but yeah, probably a good idea.
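For contrast, the match-everything-in-the-head style being discussed would look roughly like this (reconstructed for illustration, not copied from the PR):

```elixir
def run_before_scenario(
      %Scenario{
        before_scenario: local_before_scenario,
        input: input
      },
      %ScenarioContext{
        config: %{before_scenario: global_before_scenario}
      }
    ) do
  input
  |> run_before_function(global_before_scenario)
  |> run_before_function(local_before_scenario)
end
```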
Well, I also like that in this case, when you see pattern matching in a function, you know it's needed for functionality. Like, I expect to see multiple function heads if I see pattern matching in a function head now.
It's just a personal thing that I've sort of gotten used to. I'm sure I'd get used to the opposite with enough time as well 😉
I propose we postpone this to a separate PR/discussion so as not to hold this up. I think I'll cook up a topic on elixirforum to see what the feelings in the community are :)
Yeah, totally fine! We can discuss more later.
```diff
@@ -0,0 +1,18 @@
+defmodule Benchee.Benchmark.Measure.NativeTime do
+  @moduledoc """
```
If we're thinking about what should be public and private API now, I'm not sure this should be public. The only reasons I can think of that would make stuff public are if users need it to write their benchmarks or formatters need it, and I don't think users or formatters will need these modules, right?
They shouldn't, right 👍
```diff
@@ -1,11 +1,23 @@
 defmodule Benchee.Benchmark.Measure.Time do
   @moduledoc """
-  Measure the time consumed by a executing function.
+  Measure the time elapsed while executing a given function.
```
Should this be private API as well?
Here I'm... a bit torn. It's nothing I expect 99% of benchee users to ever need - so probably not. On the other hand, it can contain documentation on how benchee conducts its measurements, which might be interesting to the more inclined reader. Also as an example if someone wants to implement their own measurer.
We don't allow people to implement their own measurer yet - is that something you see coming in the future? That sort of strikes me as a thing we might not want to let people do, since you've put a lot of time and effort into making sure measurements are taken in the best way possible.
And we can still document inline with `#` comments to show how it's done if folks are curious. It's a little tricky, since José has said that the only way a library can mark something as private API is to not document it, so we're sort of without options here. My preference is usually to make as much private as possible, and then later on make things public if a good reason comes up to do so.
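As a small illustration of that convention (hypothetical module, not benchee code): an undocumented module counts as private API, while plain `#` comments can still explain the internals for curious readers.

```elixir
defmodule MyLib.Internal.Helper do
  # Not rendered in the docs, so the module is effectively private API.
  @moduledoc false

  # Regular comments can still describe how the internals work.
  def helper, do: :ok
end
```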
You're right, thanks Devon. Will change it!
```elixir
# Mac OS and we've failed to produce a measurable function call that takes less than 10 nano
# seconds.
#
# That's also why this code lives in a separate module and not `Runner` - as it's rarely used
```
Thanks for the helpful documentation!
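The quoted comment describes why the repetition code exists; the general idea (a rough sketch under my own assumptions, not benchee's actual implementation) is to run a too-fast function n times, time the whole batch, and divide to estimate the time per call:

```elixir
defmodule RepeatedMeasurementSketch do
  # Estimate nanoseconds per call for functions faster than the timer's resolution.
  def estimate_nanoseconds(function, n) do
    {total_microseconds, _result} = :timer.tc(fn -> repeat(function, n) end)
    total_microseconds * 1000 / n
  end

  defp repeat(_function, 0), do: :ok

  defp repeat(function, n) do
    function.()
    repeat(function, n - 1)
  end
end
```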
Fixes #207