A suite of benchmarking tests for BrainGlobe #21
Just to flesh this out, I would say the ideal case initially would be to have:

- For cellfinder-core
- For brainreg

All of these jobs should fail if run time or peak memory usage exceeds 10% (?) of some baseline (TBC).

Stretch goals

What do you think @alessandrofelder?
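One way to realise the "fail if run time or peak memory exceeds the baseline by 10%" rule is a small regression check. A minimal sketch using only the standard library; the baseline numbers, the `TOLERANCE` value, and `toy_workload` are all hypothetical placeholders, not real BrainGlobe figures:

```python
import time
import tracemalloc

# Hypothetical baseline figures (seconds, bytes) -- in practice these
# would be recorded from a reference run on a known machine (TBC).
BASELINE = {"runtime_s": 0.5, "peak_mem_bytes": 50_000_000}
TOLERANCE = 0.10  # fail if we regress by more than 10%


def measure(func, *args, **kwargs):
    """Run func once, returning (runtime in seconds, peak memory in bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    func(*args, **kwargs)
    runtime = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return runtime, peak


def check_against_baseline(runtime, peak):
    """Return True if both metrics are within TOLERANCE of the baseline."""
    ok_time = runtime <= BASELINE["runtime_s"] * (1 + TOLERANCE)
    ok_mem = peak <= BASELINE["peak_mem_bytes"] * (1 + TOLERANCE)
    return ok_time and ok_mem


def toy_workload():
    # Stand-in for e.g. a cellfinder-core detection run.
    return sum(x * x for x in range(100_000))


runtime, peak = measure(toy_workload)
print(check_against_baseline(runtime, peak))
```

In CI, a `False` result from the check would simply become a non-zero exit code to fail the job.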
FYI - #23 If there's lots of shared benchmarking code it could live in
@sfmig one idea that isn't in this issue is to look at the timings from the original brainreg and cellfinder papers. The machines used to generate those benchmarks are still available, so we can run both the version released with each paper and the most recent version to get a performance baseline.
Some thoughts on benchmarking tools

After a bit of research I'd highlight these two:

- asv
- pyperf

Other comments

Suggested next steps
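For reference, asv discovers benchmarks via naming conventions: `time_*` methods are timed, and `peakmem_*` methods have their peak memory recorded, with `setup` run before each. A minimal sketch of such a suite; the workload (summing squares) is a hypothetical stand-in for a real BrainGlobe operation, not actual cellfinder/brainreg code:

```python
class BenchmarkSuiteExample:
    """Benchmarks discovered by asv via the time_*/peakmem_* naming convention.

    The workload here is a hypothetical stand-in for a real BrainGlobe
    operation such as a brainreg registration step.
    """

    def setup(self):
        # asv calls setup() before each benchmark method.
        self.data = list(range(100_000))

    def time_sum_of_squares(self):
        # asv reports the wall-clock time of this method.
        sum(x * x for x in self.data)

    def peakmem_materialised_list(self):
        # asv reports the peak memory while this method runs.
        [x * x for x in self.data]
```

`asv run` collects results for a repo configured with an `asv.conf.json`, and `asv compare` can diff results between commits, which is one way the "fail beyond 10% of baseline" condition could eventually be wired into CI.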
Profiling tools

We'd like to use profilers to identify bottlenecks, to narrow down what we need to benchmark. From having a look at the suggested tools here and in the top comment of the thread:
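To illustrate the kind of bottleneck-hunting described above, here is a minimal sketch using the standard library's `cProfile` (a stdlib analogue of tools like pyinstrument); `slow_step` and `fast_step` are hypothetical stand-ins for pipeline stages:

```python
import cProfile
import io
import pstats


def slow_step():
    # Hypothetical stand-in for an expensive stage (e.g. filtering a plane).
    return sum(x * x for x in range(200_000))


def fast_step():
    # Hypothetical stand-in for a cheap stage.
    return len([1, 2, 3])


def pipeline():
    slow_step()
    fast_step()


profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

# Sort by cumulative time so the dominant stage floats to the top.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(10)
report = buffer.getvalue()
print(report)
```

Whichever profiler we settle on, the point is the same: profile a representative run first, then write benchmarks around the stages that dominate the report.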
https://github.com/brainglobe/cellfinder-core/pull/184/files#r1269597227 so we don't forget that we'd like/expect the benchmarking instructions to become more BG-specific in the future.
A key requirement for all BrainGlobe tools is that they can run in a reasonable time (say hours to 1 day) on PhD students' laptops, to make the tools accessible to everyone. It is therefore important that we ensure future changes to the code don't cause significant performance regressions (especially considering brainglobe/cellfinder-core#170!)
Ultimately, we'd therefore like to have a suite of benchmarks that form part of the tests of all maintained BrainGlobe packages. A preliminary discussion in a developer meeting suggests airspeed-velocity/asv as a useful tool to achieve this (but this is not set in stone).

A naive, initial approach might be:

- set up asv in the bg_utils package (maybe a benchmarking submodule?)
- define shared benchmarks in bg_utils and re-use them in other repos

Resources
Profiling tools that have come in handy before, and may help us understand what to benchmark and deepen general understanding of our code's performance, are:

- pyinstrument
- memory_profiler