Context
We use https://codecov.io for online reports of code coverage. We consider code coverage a critical tool for ensuring robust code, since it shows exactly which code paths are exercised.
Currently, coverage reporting is available by file and by line, but it reflects just one metric: which code was executed when running the unit tests. This is viewable at https://codecov.io/gh/mapbox/node-cpp-skel/tree/620a268920ca47ca3bc49dd96fd8839e80774411/src.
Opportunity
@GretaCB is working on benchmarking scripts in #61. The current code coverage answers the question "what code do I have unit tests for?" Once #61 lands, it could also be useful to answer the question "what code do I have benchmarks for?"
If the answer were "only some of the performance-critical code, not all of it," then we would have a problem. One of the easiest mistakes to make in performance optimization is spending time optimizing the wrong thing. Performance optimization is hard enough when you are focused on the right code, so we should use every tool we have to avoid this mistake.
Solution
Codecov has a feature called flags, which allows you to tag a specific coverage upload by name. We could use this to display coverage isolated to our unit tests vs. our benchmark scripts. See https://docs.codecov.io/docs/flags for details.