Measure with best available precision #219

Merged · 9 commits · May 21, 2018
1 change: 0 additions & 1 deletion .travis.yml
@@ -4,7 +4,6 @@ elixir:
- 1.5.3
- 1.6.4
otp_release:
- 18.3
Member Author

Scaling native time units to nanoseconds threw an error on OTP 18, so I'm inclined to believe it doesn't support it. As newer Elixir versions also drop support for OTP 18, I thought it was ok for us to do so as well.
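
For context, a minimal sketch of the kind of call that blows up there - assuming the failure comes from `:erlang.convert_time_unit/3` not accepting the `:nanosecond` unit on OTP 18:

  # measure something, then scale the native time units to nanoseconds
  start = :erlang.monotonic_time()
  Enum.sum(1..1_000)
  duration = :erlang.monotonic_time() - start

  # works on OTP 19+, but (presumably) raises an ArgumentError on OTP 18,
  # where :nanosecond isn't an accepted time unit
  :erlang.convert_time_unit(duration, :native, :nanosecond)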

- 19.3
- 20.3
matrix:
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,14 @@
## 0.14.0 (unreleased)

### Breaking Changes (User Facing)
* dropped support for Erlang 18.x

### Features (User Facing)
* benchee now uses the maximum precision available for measuring, which on Linux and OSX is nanoseconds instead of microseconds. Somewhat surprisingly, `:timer.tc/1` always cuts the result down to microseconds although better precision is available.

### Breaking Changes (Plugins)
* all reported times are now in nanoseconds instead of microseconds

## 0.13.1 (2018-05-02)

Mostly fixing memory measurement bugs and related issues :) Enjoy a better memory measurement experience from now on!
63 changes: 63 additions & 0 deletions lib/benchee/benchmark/hooks.ex
@@ -0,0 +1,63 @@
defmodule Benchee.Benchmark.Hooks do
@moduledoc false
# Non-benchee code should not rely on this module.

alias Benchee.Benchmark.{Scenario, ScenarioContext}

def run_before_scenario(
Collaborator

So, a formatting thing. I find these really tricky to read, and one thing that they're doing in the Elixir codebase itself is saying basically "if you don't need to pattern match in the function head, don't unless it fits on one line." Do you think you'd be cool with adopting a similar heuristic for benchee? So, this function would end up looking something like this:

  def run_before_scenario(scenario, scenario_context) do
    %Scenario{
      before_scenario: local_before_scenario,
      input: input
    } = scenario

    %ScenarioContext{
      config: %{before_scenario: global_before_scenario}
    } = scenario_context

    input
    |> run_before_function(global_before_scenario)
    |> run_before_function(local_before_scenario)
  end

What do you think?

Member Author

I've heard/seen this rule before... and I'm not sure. I sort of like just pattern matching on everything in the function "head" so that it's easy at a first glance to see what data the function needs.

Although to be fair, in your version it's fairly easy as well, as the matches just moved to the first couple of lines. I'll think on it, but yeah, probably a good idea.

Collaborator

Well, I also like that in this case, when you see pattern matching in a function, you know it's needed for functionality. Like, I expect to see multiple function heads if I see pattern matching in a function head now.

It's just a personal thing that I've sort of gotten used to. I'm sure I'd get used to the opposite with enough time as well 😉
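
A tiny illustration of that heuristic, with made-up names - pattern matching in the head when there are multiple clauses, destructuring in the body when there's only one:

  defmodule Example do
    # multiple clauses: the pattern match in the head is doing real work
    def label(%{name: name}), do: name
    def label(_other), do: "unknown"

    # single clause: destructure in the body instead
    def config_of(scenario_context) do
      %{config: config} = scenario_context
      config
    end
  end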

Member Author

I propose we postpone the discussion to a separate PR/discussion so as not to hold this up. I think I'll cook up a topic on elixirforum to see what the feelings in the community are :)

Collaborator

Yeah, totally fine! We can discuss more later.

%Scenario{
before_scenario: local_before_scenario,
input: input
},
%ScenarioContext{
config: %{before_scenario: global_before_scenario}
}
) do
input
|> run_before_function(global_before_scenario)
|> run_before_function(local_before_scenario)
end

defp run_before_function(input, nil), do: input
defp run_before_function(input, function), do: function.(input)

def run_before_each(
%{
before_each: local_before_each
},
%{
config: %{before_each: global_before_each},
scenario_input: input
}
) do
input
|> run_before_function(global_before_each)
|> run_before_function(local_before_each)
end

def run_after_each(
return_value,
%{
after_each: local_after_each
},
%{
config: %{after_each: global_after_each}
}
) do
if local_after_each, do: local_after_each.(return_value)
if global_after_each, do: global_after_each.(return_value)
end

def run_after_scenario(
%{
after_scenario: local_after_scenario
},
%{
config: %{after_scenario: global_after_scenario},
scenario_input: input
}
) do
if local_after_scenario, do: local_after_scenario.(input)
if global_after_scenario, do: global_after_scenario.(input)
end
end
18 changes: 18 additions & 0 deletions lib/benchee/benchmark/measure/native_time.ex
@@ -0,0 +1,18 @@
defmodule Benchee.Benchmark.Measure.NativeTime do
@moduledoc """
Collaborator

If we're thinking about what should be public and private API now, I'm not sure this should be public. The only reasons I can think of for making something public are if users need it to write their benchmarks or if formatters need it, and I don't think users or formatters will need these modules, right?

Member Author

They shouldn't, right 👍

Measure the time elapsed while executing a given function.

Uses only the time unit native to the platform. Used for determining how many times a function
should be repeated in `Benchee.Benchmark.Runner.determine_n_times/3` (a private function, though).
"""

@behaviour Benchee.Benchmark.Measure

def measure(function) do
start = :erlang.monotonic_time()
result = function.()
finish = :erlang.monotonic_time()

{finish - start, result}
end
end
16 changes: 14 additions & 2 deletions lib/benchee/benchmark/measure/time.ex
@@ -1,11 +1,23 @@
defmodule Benchee.Benchmark.Measure.Time do
@moduledoc """
Measure the time consumed by a executing function.
Measure the time elapsed while executing a given function.
Collaborator

Should this be private API as well?

Member Author

Here I'm... a bit torn. It's nothing I expect 99% of benchee users to ever need - so probably not. On the other hand, it can contain documentation on how benchee conducts its measurements, which might be interesting to the more inclined reader. It could also serve as an example if someone wants to implement their own measurer.
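
For illustration, a rough sketch of what such a custom measurer could look like - assuming the behaviour only asks for a `measure/1` callback returning `{measurement, return_value}` (the module name and wall-clock approach here are made up):

  defmodule MyApp.Measure.WallClock do
    # hypothetical example measurer - not part of benchee
    @behaviour Benchee.Benchmark.Measure

    def measure(function) do
      start = :erlang.system_time(:nanosecond)
      result = function.()
      finish = :erlang.system_time(:nanosecond)

      {finish - start, result}
    end
  end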

Collaborator

We don't allow for people to implement their own measurer yet - is that something you see coming in the future? That sort of strikes me as a thing we might not want to let people do, since you've put a lot of time and effort to make sure measurements are taken in the best way possible.

And we can still document inline with `#` comments to show how it's done if folks are curious. It's a little tricky since José has said that the only way a library can mark something as private API is to not document it, so we're sort of without options here. My preference is usually to make as much private as possible, and then later on make things public if a good reason comes up to do so.

Member Author

You're right, thanks Devon. Will change it!


In contrast to `:timer.tc/1` it always returns the result in nanoseconds instead of
microseconds. This helps us avoid losing precision, as both Linux and macOS seem to be able
to measure in nanoseconds. `:timer.tc/n`
[forfeits this precision](https://github.com/erlang/otp/blob/master/lib/stdlib/src/timer.erl#L164-L169).
"""

@behaviour Benchee.Benchmark.Measure

def measure(function) do
:timer.tc(function)
start = :erlang.monotonic_time()
result = function.()
finish = :erlang.monotonic_time()

duration_nano_seconds = :erlang.convert_time_unit(finish - start, :native, :nanosecond)

{duration_nano_seconds, result}
end
end
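
For the curious, a quick way to see the difference the moduledoc above describes - `:timer.tc/1` reports microseconds while the monotonic-time approach keeps nanosecond precision (a rough sketch; exact numbers vary by platform):

  fun = fn -> Enum.map(1..100, &(&1 * 2)) end

  {micro_seconds, _result} = :timer.tc(fun)

  start = :erlang.monotonic_time()
  fun.()
  duration = :erlang.monotonic_time() - start
  nano_seconds = :erlang.convert_time_unit(duration, :native, :nanosecond)

  IO.inspect({micro_seconds, nano_seconds}, label: "microseconds vs nanoseconds")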
143 changes: 143 additions & 0 deletions lib/benchee/benchmark/repeated_measurement.ex
@@ -0,0 +1,143 @@
defmodule Benchee.Benchmark.RepeatedMeasurement do
@moduledoc false
# This module is an internal implementation detail, and should absolutely not be relied upon
# from external clients.
#
# It is used when we can't measure precisely enough with the available measurement precision.
# E.g. if we can only measure in microseconds and a function call takes about 1 microsecond, then
# most measurements will be either 1 or 2 microseconds, which won't give us great results.
# Instead we repeat the function call n times until we measure at least ~10 time units, so
# that one time unit of difference amounts to at most ~10% of the measurement.
#
# Today this is mostly relevant on Windows, as we have nanosecond precision on Linux and
# Mac OS and we've failed to produce a measurable function call that takes less than
# 10 nanoseconds.
#
# That's also why this code lives in a separate module and not `Runner` - as it's rarely used
Collaborator

Thanks for the helpful documentation!

# and clutters that code + we need a way to test it even if we can't trigger its conditions.

# If a function executes way too fast, measurements are too unreliable and have too high
# a variance. Therefore we determine an n for how often it should be executed in the
# measurement cycle.

alias Benchee.Benchmark.{Hooks, Runner, Scenario, ScenarioContext, Measure}
alias Benchee.Utility.RepeatN

@minimum_execution_time 10
@times_multiplier 10
def determine_n_times(
scenario,
scenario_context = %ScenarioContext{
num_iterations: num_iterations,
printer: printer
},
fast_warning,
measurer \\ Measure.NativeTime
) do
run_time = measure_iteration(scenario, scenario_context, measurer)

if run_time >= @minimum_execution_time do
{num_iterations, adjust_for_iterations(run_time, num_iterations)}
else
if fast_warning, do: printer.fast_warning()

new_context = %ScenarioContext{
scenario_context
| num_iterations: num_iterations * @times_multiplier
}

determine_n_times(scenario, new_context, false, measurer)
end
end

defp adjust_for_iterations(measurement, 1), do: measurement
defp adjust_for_iterations(measurement, num_iterations), do: measurement / num_iterations

def measure(
scenario,
scenario_context = %ScenarioContext{
num_iterations: num_iterations
},
measurer
) do
measurement = measure_iteration(scenario, scenario_context, measurer)

adjust_for_iterations(measurement, num_iterations)
end

defp measure_iteration(
scenario,
scenario_context = %ScenarioContext{
num_iterations: 1
},
measurer
) do
Runner.measure(scenario, scenario_context, measurer)
Member Author

Not a fan of this personally. It's a circular dependency (Runner depends on us, we depend on Runner) and if things go very wrong we might even trigger an endless loop.

Extracting the function to another module might be the better call, but then it'd (likely) only have that one function, and I had trouble coming up with a name for it, which suggests that I haven't found the higher-level concept yet or that it really wouldn't be an extraction of a higher-level concept.

Collaborator

I think the thing we're seeing here is that these functions are really part of the Runner process, which, honestly, they are. I think that's the higher concept this dependency graph is trying to tell us. But I also don't think it's a problem to split this up as it is now.

end

defp measure_iteration(
scenario,
scenario_context = %ScenarioContext{
num_iterations: iterations
},
measurer
)
when iterations > 1 do
# When we have more than one iteration, then the repetition and calling
# of hooks is already included in the function, for reference/reasoning see
# `build_benchmarking_function/2`
function = build_benchmarking_function(scenario, scenario_context)

{measurement, _return_value} = measurer.measure(function)

measurement
end

# Builds the appropriate function to benchmark. Takes into account the
# combinations of the following cases:
#
# * an input is specified - creates a 0-argument function calling the original
# function with that input
# * number of iterations - when there's more than one iteration we repeat the
# benchmarking function during execution and measure the total run time.
# We only run multiple iterations if a function is so fast that we can't
# accurately measure it in one go. Hence, we can't split up the function
# execution and hooks anymore and sadly we also measure the time of the
# hooks.
defp build_benchmarking_function(
%Scenario{
function: function,
before_each: nil,
after_each: nil
},
%ScenarioContext{
num_iterations: iterations,
scenario_input: input,
config: %{after_each: nil, before_each: nil}
}
)
when iterations > 1 do
main = Runner.main_function(function, input)
# with no before/after each we can safely omit them and don't get the hit
# on run time measurements (see the PR discussion for more info: #127)
fn -> RepeatN.repeat_n(main, iterations) end
end

defp build_benchmarking_function(
scenario = %Scenario{function: function},
scenario_context = %ScenarioContext{num_iterations: iterations}
)
when iterations > 1 do
fn ->
RepeatN.repeat_n(
fn ->
new_input = Hooks.run_before_each(scenario, scenario_context)
main = Runner.main_function(function, new_input)
return_value = main.()
Hooks.run_after_each(return_value, scenario, scenario_context)
end,
iterations
)
end
end
end
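
To make the repetition logic concrete, a rough worked example under the assumption that the clock only resolves whole time units and a single call takes about 3 of them:

  # 1st attempt:  1 iteration  -> ~3 units, below @minimum_execution_time of 10
  # 2nd attempt: 10 iterations -> ~30 units, good enough
  # reported time per call: 30 / 10 = 3.0 units (see adjust_for_iterations/2)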