Heuristic Benchmarking #278
Conversation
Codecov Report
Coverage Diff (`main` vs. #278):

|            | main  | #278  | +/- |
|------------|-------|-------|-----|
| Coverage   | 92.0% | 92.0% |     |
| Files      | 45    | 45    |     |
| Lines      | 3813  | 3888  | +75 |
| Branches   | 638   | 650   | +12 |
| Hits (+)   | 3508  | 3580  | +72 |
| Misses (-) | 305   | 308   | +3  |
... and 3 files with indirect coverage changes.
Many thanks. I love the additional information this provides.
You can find a few small, nit-picky suggestions in the comments down below.
One bigger thing: have you tested how much the information tracking/computation affects performance, both in terms of runtime and memory? Would it make sense to "hide" these computations behind a "debug", "monitoring", or "benchmarking" flag in the configuration and only track the metrics if the flag is enabled? Or is the overhead negligible anyway, so we might as well include it all the time?
I guess one of my main concerns is the `Results` object being passed to Python, which requires translating the C++ vector of results to Python. If the vector is large, this could be costly.
Oh, I did not even think about Python here. Using C++ directly, I did not see any significant change in mapping time (however, I also did not check memory usage, though I do not think there is any potential for a huge memory overhead). But I guess it cannot hurt to hide the feature behind a debugging flag in the configuration. Thank you for the suggestion!
I agree. It's probably not the worst thing (the data structure is rather tiny), but in order to maximize performance I believe it could be helpful to deactivate the metric tracking.
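A minimal sketch of the flag-gated tracking pattern discussed above, assuming a hypothetical `Configuration::debug` flag and per-layer benchmark struct (all names here are illustrative, not the project's actual API):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-layer metrics; the real PR may track different quantities.
struct LayerBenchmark {
  std::size_t expandedNodes   = 0;
  double      secondsPerLayer = 0.;
};

// Hypothetical configuration with the proposed "debug"/"benchmarking" flag.
struct Configuration {
  bool debug = false;
};

struct Results {
  std::vector<LayerBenchmark> layerBenchmarks;
};

void mapLayer(const Configuration& config, Results& results) {
  if (config.debug) {
    // Only pay the tracking cost (and grow the results vector)
    // when the flag is explicitly enabled.
    results.layerBenchmarks.emplace_back();
  }

  // ... actual mapping work would go here ...

  if (config.debug) {
    ++results.layerBenchmarks.back().expandedNodes;
  }
}
```

Gating both the allocation and the later updates on the same flag also limits the translation cost mentioned above, since the vector handed back to Python stays empty when tracking is off.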
Signed-off-by: Lukas Burgholzer <[email protected]>
Signed-off-by: Lukas Burgholzer <[email protected]>
Calling `back()` on an empty vector is undefined behavior. This would happen whenever `debug=false`. This commit also restructures the debug code so that it is more compact. Signed-off-by: Lukas Burgholzer <[email protected]>
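A minimal, self-contained sketch of the undefined behavior described in the commit message above; the names `debug` and `LayerInfo` are assumptions and do not correspond to the project's actual identifiers:

```cpp
#include <vector>

struct LayerInfo {
  int expandedNodes = 0;
};

void trackNode(bool debug, std::vector<LayerInfo>& perLayer) {
  // If debug == false, no element is ever pushed, so perLayer stays empty
  // and calling perLayer.back() would be undefined behavior.
  if (!debug) {
    return;
  }
  if (perLayer.empty()) {
    perLayer.emplace_back();
  }
  ++perLayer.back().expandedNodes;
}
```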
Signed-off-by: Lukas Burgholzer <[email protected]>
Signed-off-by: Lukas Burgholzer <[email protected]>
Just pushed some commits with finishing touches here.
Most importantly, I took care of a case of undefined behavior (calling `back()` on an empty vector) and made the debug code in the mapper a little more compact and less spread out.
If the CI checks all turn green and you are okay with the changes I made, I'll proceed and merge.
LGTM. Thank you for the refinements!
Description
Implements some benchmarks in the heuristic mapper, both for the whole mapping process and for each individual layer.
All of this will help keep track of performance changes when implementing new heuristics or updating the existing heuristic.
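As a rough illustration of what per-layer and whole-process benchmarking could look like, here is a hedged sketch using `std::chrono`; the function and struct names are assumptions, not the interface actually added by this PR:

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical container for the collected timings.
struct BenchmarkInfo {
  double totalSeconds = 0.;
  std::vector<double> secondsPerLayer;
};

// Stand-in for the real per-layer mapping routine.
void mapLayer(std::size_t /*layer*/) {}

BenchmarkInfo benchmarkMapping(std::size_t nLayers) {
  using Clock = std::chrono::steady_clock;
  BenchmarkInfo info;
  const auto start = Clock::now();
  for (std::size_t layer = 0; layer < nLayers; ++layer) {
    const auto layerStart = Clock::now();
    mapLayer(layer);
    // Record the time spent on this individual layer.
    info.secondsPerLayer.push_back(
        std::chrono::duration<double>(Clock::now() - layerStart).count());
  }
  // Record the time spent on the whole mapping process.
  info.totalSeconds =
      std::chrono::duration<double>(Clock::now() - start).count();
  return info;
}
```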
Checklist: