Blue sky: separate coverage reports for unit test + benchmarks #62

Open
springmeyer opened this issue Aug 11, 2017 · 0 comments

Context

We use https://codecov.io for online reports of code coverage. We consider code coverage a critical tool in ensuring robust code since it:

  • Lets you quickly see which code actually runs in your tests (helping you find dead code or code that needs tests)
  • Motivates you to write tests
  • Shows how many times each line is executed, which can be a really helpful indication of where performance bottlenecks might be

Currently, the coverage reporting we have (by file and by line) reflects just one metric: what code was executed when running the unit tests. This is viewable at https://codecov.io/gh/mapbox/node-cpp-skel/tree/620a268920ca47ca3bc49dd96fd8839e80774411/src.

Opportunity

@GretaCB is working on benchmarking scripts in #61. The current code coverage answers the question "what code do I have unit tests for?"; after #61 lands, it could also be useful to answer the question "what code do I have benchmarks for?".

If the answer were "only some of the performance-critical code, not all of it", we would have a problem. One of the easiest mistakes to make in performance optimization is spending time optimizing the wrong thing, and performance optimization is hard enough when you are focused on the right code. So we should use every tool we have to try to avoid this issue.

Solution

Codecov has a feature called flags, which lets you tag a specific coverage upload with a name. We could use this to display coverage for our unit tests separately from coverage for our benchmark scripts. See more at https://docs.codecov.io/docs/flags
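
As a rough sketch (the coverage file names and flag names below are assumptions, not something node-cpp-skel does today), the Codecov bash uploader accepts a `-F` option that tags an upload with a flag, so the unit-test run and the benchmark run could each be uploaded as a separate flagged report:

```bash
# Generate coverage from the unit-test run, then upload that report
# under a "unit_tests" flag. The report file name is a placeholder.
bash <(curl -s https://codecov.io/bash) -f unit-tests-coverage.info -F unit_tests

# Run the benchmark scripts from #61 against an instrumented build and
# upload that report under a separate "benchmarks" flag.
bash <(curl -s https://codecov.io/bash) -f benchmark-coverage.info -F benchmarks
```

Codecov then reports coverage per flag, so the dashboard could show "what do the unit tests cover?" and "what do the benchmarks cover?" side by side.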
