During #195 we learned about the differences in precision between different systems (some measure in nanoseconds, some in milliseconds). We'd like to always use the best precision available to us. We should double-check, for instance with fast function measurements, whether it really improves things.
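Just to illustrate the kind of check meant here (illustrative script, not Benchee code): time a very fast function a few times both via `timer:tc` (microseconds) and via raw `:erlang.monotonic_time/0` (native units) and compare what comes out.

```elixir
# Illustrative only: compare microsecond results against raw native-unit results
# for a very fast function, to see whether extra precision actually shows up.
fast_fun = fn -> 1 + 1 end

native_samples =
  for _ <- 1..5 do
    start_time = :erlang.monotonic_time()
    fast_fun.()
    :erlang.monotonic_time() - start_time
  end

micro_samples = for _ <- 1..5, do: elem(:timer.tc(fast_fun), 0)

IO.inspect(native_samples, label: "native units")
IO.inspect(micro_samples, label: "microseconds")
```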
This most likely includes reimplementing timer:tc in our new Measure.Time introduced in #204.
It's really simple though (see how it just always converts to microseconds):
```erlang
tc(F) ->
    T1 = erlang:monotonic_time(),
    Val = F(),
    T2 = erlang:monotonic_time(),
    Time = erlang:convert_time_unit(T2 - T1, native, microsecond),
    {Time, Val}.
```
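For comparison, a rough sketch of what keeping the raw measurement could look like on our side. Module and function names here are made up for illustration, not the actual Measure.Time API:

```elixir
defmodule MeasureTimeSketch do
  # Hypothetical variant of timer:tc that skips the microsecond conversion
  # and hands back the elapsed time in native units alongside the result.
  def measure(function) when is_function(function, 0) do
    start = :erlang.monotonic_time()
    return_value = function.()
    duration_native = :erlang.monotonic_time() - start
    {duration_native, return_value}
  end
end
```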
The easiest approach would be to always return the measured time in nanoseconds (as that's the maximum accuracy, afaik). However, that doesn't work with our determine_n_times code as it is right now: it checks whether we get a measurement bigger than 8, the intention being that when we measure values there is at least some small difference between them. But if the underlying precision is only microseconds, converting to nanoseconds just multiplies everything by 1000, so we'd always measure 1000 or 0, and 1000 is bigger than 8 even though the resolution hasn't actually improved.
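To make that 1000-vs-8 point concrete (illustrative numbers, assuming a clock that can only tell microseconds apart):

```elixir
# One microsecond expressed in nanoseconds is 1000, so on a microsecond-precision
# clock every fast measurement comes out as 0 or 1000. 1000 > 8, so the current
# style of check would be satisfied without any real gain in resolution.
smallest_nonzero_ns = :erlang.convert_time_unit(1, :microsecond, :nanosecond)
IO.inspect(smallest_nonzero_ns)      #=> 1000
IO.inspect(smallest_nonzero_ns > 8)  #=> true
```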
So the determination of n times likely has to work with the native measurements. For the real runtime measurements, we can probably always convert to nanoseconds without losing anything (except maybe some memory when the numbers get really big).
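Roughly that split could look like this (hedged sketch, not existing Benchee functions): work on native values internally and only convert when producing the reported runtimes.

```elixir
# Hypothetical helper: convert a native-unit measurement to nanoseconds
# only when producing the reported runtimes.
to_nanoseconds = fn native_time ->
  :erlang.convert_time_unit(native_time, :native, :nanosecond)
end

start = :erlang.monotonic_time()
Enum.sum(1..1_000)
raw = :erlang.monotonic_time() - start

IO.inspect(to_nanoseconds.(raw), label: "runtime in ns")
```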
We could also just store all of them in the native resolution but note the native resolution down in the configuration, so that formatters etc. know how to interpret whatever time they read there.
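One possible way to note that down (assumed shape, not the current configuration struct): record how many native time units make up a second, which `convert_time_unit` can tell us.

```elixir
# Native time units per second, e.g. 1_000_000_000 when native is nanoseconds.
native_units_per_second = :erlang.convert_time_unit(1, :second, :native)

# Hypothetical config entry a formatter could read to interpret stored native values.
config = %{measured_time_unit: :native, native_units_per_second: native_units_per_second}
IO.inspect(config)
```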
Soooo, do we want to handle this at our level, or try and submit a patch to Erlang? Based on the response to your report, it seems like they're open to it, but want someone else to do it.
I'm already on handling it :) There's a lot to do on our side, PR coming in soon. Helping patch up Erlang is of course not out of the question, but I guess that'd take a year to land and then only be available on the new OTP version.
Getting the native time isn't an official interface though.
It's an interesting problem. I'd like to solve it before 1.0 though, because people should get the maximum accuracy their machine is capable of.