Performance benchmarks #32
Conversation
Broken tests are those expressions that do not evaluate as fast as their unit-less counterparts.
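To make the benchmark-as-test arrangement concrete, here is a minimal sketch of what one such test could look like; the expressions, the variable names, and the 1.2x tolerance are illustrative, not the PR's actual code:

```julia
using Unitful, BenchmarkTools, Base.Test

# Compare a unitful expression against its unit-less counterpart and fail
# (i.e. mark the benchmark "broken") when the unitful version is slower.
t_unitful  = time(minimum(@benchmark $(1.0u"m") + $(2.0u"m")))
t_unitless = time(minimum(@benchmark 1.0 + 2.0))
@test t_unitful <= 1.2 * t_unitless
```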
Thank you so much for taking the time to implement benchmarking! For a package like this it is sorely needed. I'll review this PR carefully when I get a moment to do so. It's disappointing that some operations seem slow, but keep in mind that, at least with 0.5.0, there are some bugs in Julia that could be impacting performance, notably JuliaLang/julia#18465. Rational numbers are used wherever exact unit conversions are possible, and it could be that the type instability described in that issue is causing wide-ranging performance problems. Of course, this is just a guess, and I need to look into what you found a little more.
How about we do a little benchmarking case study? cc @timholy since he is a user of this package and is interested in its performance. Let's first consider addition when units are mixed, which was a problematic benchmark:
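Roughly, the problematic benchmark could have looked like this (a sketch with my own operands; the thread's exact expressions and timing output are not reproduced):

```julia
using Unitful, BenchmarkTools

# Mixed units with integer values: every addition must first convert cm to m.
@benchmark $(1u"cm") + $(2u"m")

# The unit-less counterpart, for reference.
@benchmark 1 + 2
```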
Yikes! Why is that taking so long? Well, since you specified integers, an exact conversion is possible, and Unitful carries it out with Rational arithmetic, which runs straight into the type instability from JuliaLang/julia#18465.
The return type of the addition therefore cannot be inferred as concrete, so every call pays for dynamic dispatch and allocation.
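One way to see this for yourself (my sketch, not necessarily the diagnostic used in the thread) is to ask the compiler; on Julia 0.5, `@code_warntype` prints non-concrete inferred types in red:

```julia
using Unitful

# Integer quantities: the exact (Rational) conversion path defeats inference.
@code_warntype 1u"cm" + 2u"m"

# Float quantities: the conversion factor is applied in Float64 arithmetic,
# so the return type should come out concrete.
@code_warntype 1.0u"cm" + 2.0u"m"
```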
Now, let's compare with the performance of SIUnits:
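A comparable SIUnits sketch (assuming `m` from `SIUnits.ShortUnits`; since SIUnits stores all lengths in meters, the cm-to-m conversion happens at construction time, before anything is measured):

```julia
using SIUnits, SIUnits.ShortUnits
using BenchmarkTools

x = 0.01m   # what SIUnits stores for "1 cm": already converted to meters
y = 2.0m

# Same-internal-unit addition: essentially a Float64 add in a thin wrapper.
@benchmark $x + $y
```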
So, why does SIUnits win here (2 ns vs 6 ns)? Well, the only length unit SIUnits can internalize is the meter. Other length units specified by the user are just converted to meters, as you see above. So SIUnits wins because the benchmark doesn't count the computation needed to convert centimeters to meters in the first place.

In the case of arrays, let's again consider floating-point numbers only for simplicity, and do another comparison:
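A sketch of that comparison (names are mine, and I assume array-times-unit multiplication as in current Unitful/SIUnits):

```julia
using Unitful, SIUnits, SIUnits.ShortUnits
using BenchmarkTools

n = 1000
plain = rand(n)          # unit-less Vector{Float64} baseline
uarr  = rand(n) * u"m"   # Unitful: concretely-typed vector of quantities
sarr  = rand(n) * m      # SIUnits: concretely-typed vector of SIQuantitys

# With concrete element types and a single shared unit, each sum should
# lower to the same Float64 loop.
@benchmark $plain + $plain
@benchmark $uarr + $uarr
@benchmark $sarr + $sarr
```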
For both Unitful and SIUnits, a concretely-typed array requires unit conversion before the benchmarking is done (no conversion is required for the plain floating-point array). Eventually (once I don't have to explain the type instability) I will write up some of this in the Unitful documentation, to emphasize differences between Unitful and SIUnits and enable users to choose the package that best suits their needs.
```julia
using Unitful
using BenchmarkTools
using Base.Test
using DataFrames
```
DataFrames no longer needed?
Indeed. Sorry, the pull request is a bit dirty. In part, it's because I'm not sure how benchmarks are integrated into packages. As part of the testing framework? Outside of it? As a report? Or not at all?
In any case, I'll correct that and remove the unused function.
```julia
using Base.Test
using DataFrames

function benchmark()
```
Is this function used anywhere? If I try it out I get the following:
```
julia> benchmark()
ERROR: UndefVarError: benchmark! not defined
 in benchmark() at /Users/ajkeller/.julia/v0.5/Unitful/test/benchmarks.jl:10
```
What about the case for arrays with a single unit type?
You'd need to avoid using integers in that comparison, due to JuliaLang/julia#18465. Once that bug is fixed, it shouldn't matter.
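Following that advice, a float-only version of the single-unit-array check might look like this (a sketch; names are mine):

```julia
using Unitful, BenchmarkTools

xs  = rand(1000)
uxs = xs * u"m"   # one concrete unit type throughout the array

@benchmark $xs + $xs     # unit-less baseline
@benchmark $uxs + $uxs   # unitful version; expected to match within noise
```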
I've tried this with arrays of floats, and verified that Unitful does not add much overhead, if any.
I've added a number of benchmarks in a file, to check whether operations between unitful objects are just as fast as their unit-less counterparts.
Unfortunately, this is not always the case (assuming the code in the pull request is correct :) ). Most notably, operations involving unitful arrays are quite expensive.
I'm not sure whether you want this in the code itself.
The benchmarks are arranged as tests, and can be run with:
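Presumably something like the following, given that the stack trace above places the file at test/benchmarks.jl (the exact invocation is my guess):

```julia
include(joinpath(Pkg.dir("Unitful"), "test", "benchmarks.jl"))
```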
Benchmarks can be trialed in the REPL with:
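For example (a sketch; any expression from the benchmark file could be substituted):

```julia
using Unitful, BenchmarkTools

# Inspect the full timing distribution for a single expression interactively.
@benchmark $(1.0u"cm") + $(2.0u"m")
```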