
include coverage instrumentation #797

Closed · wants to merge 14 commits
Conversation

@gonzalesMK (Contributor)

Pull Request Template

Checklist

  • [x] Confirm that the run-checks script has been executed.

Related Issues/PRs

#174

Changes

  • Created run_cargo_with_envs to set the environment variables needed for instrumentation
  • All tests now run with instrumentation
  • Included grcov, coverage, and reporting steps in the GitHub workflow

Testing

Tested locally with act: https://github.com/nektos/act

@antimora (Collaborator)

Thanks so much for adding this coverage capability.

The CI is failing for some reason: https://github.com/burn-rs/burn/actions/runs/6146093378/job/16686274619?pr=797

Let us know if an admin needs to do something to configure it.

@gonzalesMK (Contributor, Author)

It seems that GitHub changed some permission configurations. By default, there is no "write" permission for posting a comment on the PR from untrusted sources (in this case, me).

marocchino/sticky-pull-request-comment#930

They included documentation on how to fix this: the safe approach requires uploading the coverage report and having a separate workflow that runs on "workflow_run".

https://securitylab.github.com/research/github-actions-preventing-pwn-requests/

Quoting the relevant paragraph:

Together with the pull_request_target, a new trigger workflow_run was introduced to enable scenarios that require building the untrusted code and also need write permissions to update the PR with e.g. code coverage results or other test results. To do this in a secure manner, the untrusted code must be handled via the pull_request trigger so that it is isolated in an unprivileged environment. The workflow processing the PR should then store any results like code coverage or failed/passed tests in artifacts and exit. The following workflow then starts on workflow_run where it is granted write permission to the target repository and access to repository secrets, so that it can download the artifacts and make any necessary modifications to the repository or interact with third-party services that require repository secrets (e.g. API tokens).

If that is OK, I will proceed to create the necessary workflows (a sketch follows the list below):

In the existing workflow, add a step to:

  • upload the coverage report

New workflow to comment on the PR:

  • download the coverage report
  • write the PR comment
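
For illustration, here is a minimal sketch of that split; the workflow names, file names, artifact name, and action versions are assumptions, not taken from this PR:

    # coverage.yml — runs on the untrusted pull_request trigger; no write permissions.
    name: coverage
    on: pull_request
    jobs:
      coverage:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # ... build, test, and generate the coverage report here ...
          - name: Upload coverage report
            uses: actions/upload-artifact@v4
            with:
              name: coverage-report
              path: coverage.xml

    # coverage-comment.yml — triggered on workflow_run after "coverage" completes;
    # it has write permission but never executes the untrusted code.
    name: coverage-comment
    on:
      workflow_run:
        workflows: [coverage]
        types: [completed]
    jobs:
      comment:
        runs-on: ubuntu-latest
        permissions:
          pull-requests: write
        steps:
          - name: Download coverage report
            uses: actions/download-artifact@v4
            with:
              name: coverage-report
              run-id: ${{ github.event.workflow_run.id }}
              github-token: ${{ secrets.GITHUB_TOKEN }}
          # ... post or update the PR comment from the downloaded report ...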

@Luni-4 (Collaborator) left a comment:

Thanks a lot for your PR!

I propose to use the RUST_MATRIX environment variable and run the code coverage steps only on CI, not locally. That way:

  • We reduce the overhead, because code coverage runs are heavy in terms of resources
  • A developer is not obliged to install grcov locally

Here is an example of how we use that environment variable for clippy.
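
As a rough sketch (the step name and gating condition below are illustrative, not taken from the linked workflow), the coverage-only steps could be guarded like this:

      - name: Run grcov
        # Only produce coverage for a single matrix entry, so local runs
        # and the other CI jobs skip the heavy instrumentation work.
        if: matrix.rust == 'stable' && matrix.test == 'std'
        run: grcov . --binary-path target/debug/ -s . -t lcov -o lcov.info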

We can add the badge to the README like this:

[![CodeCov][codecov badge]][codecov]

<!-- Links -->
[codecov]: https://codecov.io/gh/burn-rs/burn/

<!-- Badges -->
[codecov badge]: https://codecov.io/gh/burn-rs/burn/branch/main/graph/badge.svg

@@ -52,11 +52,29 @@ jobs:
      - name: run checks & tests
        run: RUST_MATRIX=${{ matrix.rust }} cargo xtask run-checks ${{ matrix.test }}

      - name: Install grcov
        run: cargo install grcov
@Luni-4 (Collaborator) commented on Sep 11, 2023:

To speed up CI and cache the grcov binary, we can use the prebuilt binary:

    - name: Install grcov
      env:
        GRCOV_LINK: https://github.com/mozilla/grcov/releases/download
        GRCOV_VERSION: v0.8.18
      run: |
        curl -L "$GRCOV_LINK/$GRCOV_VERSION/grcov-x86_64-unknown-linux-musl.tar.bz2" |
        tar xj -C $HOME/.cargo/bin

@gonzalesMK (Contributor, Author)

👍

        run: cargo install grcov

      - name: Run grcov
        run: grcov . --binary-path target/debug/deps/ -s . -t cobertura --branch --ignore-not-existing --ignore 'target/**' --ignore '../**' --ignore '/*' -o coverage.xml
Collaborator

We can upload the coverage results to Codecov, which shows more clearly which lines of code are still not covered by tests. So we can replace coverage.xml with lcov.info and pass -t lcov instead of -t cobertura. Perhaps we can also replace target/debug/deps with just target/debug/. We could additionally exclude some directories from the coverage analysis by adding more --ignore path directives, which should also speed up CI.
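
Applied to the step above, the suggested changes would look roughly like this (a sketch; the ignore patterns are kept from the original command):

      - name: Run grcov
        run: grcov . --binary-path target/debug/ -s . -t lcov --branch --ignore-not-existing --ignore 'target/**' --ignore '../**' --ignore '/*' -o lcov.info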

@gonzalesMK (Contributor, Author)

Ok! I was going for a vendor-free approach, but Codecov is great.

Is the Burn repo already set up on Codecov?

Collaborator

It is not necessary to set up anything; the action alone is sufficient.

Comment on lines 61 to 72
      - name: Code Coverage Report
        uses: irongut/CodeCoverageSummary@v1.3.0
        with:
          filename: coverage.xml
          badge: true
          fail_below_min: false
          format: markdown
          hide_branch_rate: false
          hide_complexity: false
          indicators: true
          output: both
          thresholds: '60 80'
Collaborator

We can replace this action with the codecov action:

    - name: Codecov upload
      uses: codecov/codecov-action@v3
      with:
        files: lcov.info

  check-typos:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v4

Collaborator

We can re-insert this empty line; removing it seems to be a typo.

Comment on lines 74 to 80
      - name: Add Coverage PR Comment
        uses: marocchino/sticky-pull-request-comment@v2
        if: github.event_name == 'pull_request'
        with:
          recreate: true
          path: code-coverage-results.md

Collaborator

With Codecov we automatically get comments on the PR at the end of a CI run, so we can remove this action.

@gonzalesMK (Contributor, Author)

As of today, coverage instrumentation does not work on the wasm target; it looks like this is an open issue.

@nathanielsimard (Member) left a comment:

I think it would be very valuable if one could run the coverage locally and not only on CI. May I suggest putting the code inside an xtask, with proper error messages when a dependency isn't installed and instructions on how to install it?

Also, could we use this crate? https://github.com/xd009642/tarpaulin

@Luni-4 (Collaborator) commented on Sep 13, 2023:

@gonzalesMK

It seems there is a strange failure in the "stable - std" job; perhaps we can try re-running it.

@nathanielsimard Having the possibility to run code coverage locally could be helpful too, but I propose to do that through a specific command, not as the default behavior, because of the overhead of running tests. I do not know how the data should be visualized locally, though.

@nathanielsimard (Member)

@nathanielsimard Having the possibility to run code coverage locally could be helpful too, but I propose to do that through a specific command, not as the default behavior, because of the overhead of running tests. I do not know how the data should be visualized locally, though.

I agree, but having an xtask for it is important. It shouldn't be included with run-checks since this isn't a check.

@Luni-4 (Collaborator) commented on Sep 15, 2023:

Yep, sure! We can add an xtask file for code coverage in another PR, since they are separate concerns. @nathanielsimard, could you try re-running the CI of this PR, please? I suppose there is a spurious network problem; let's see if it works.

@gonzalesMK (Contributor, Author)

@Luni-4 @nathanielsimard I'm running some local tests with tarpaulin as the coverage tool. I'm still looking for the right configuration so that build/clippy/tarpaulin/doc don't recompile each time.

Anyway, to use tarpaulin I would implement a new task in xtask... does that sound good? It would require a change to this PR.
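
For reference, a bare-bones tarpaulin run in CI (flags per cargo-tarpaulin's CLI; the xtask wrapper itself is not sketched here) could look like:

    - name: Install tarpaulin
      run: cargo install cargo-tarpaulin
    - name: Run tarpaulin
      run: cargo tarpaulin --workspace --out Lcov --output-dir coverage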

@nathanielsimard (Member)

Anyway, to use tarpaulin I would implement a new task in xtask... does that sound good? It would require a change to this PR.

Yes, you can try! Maybe the coverage won't work with burn-wgpu and our templating method, so I think it's valuable to test it locally to see if the coverage makes sense.

@Luni-4 (Collaborator) commented on Sep 18, 2023:

Yes, you can try! Maybe the coverage won't work with burn-wgpu and our templating method, so I think it's valuable to test it locally to see if the coverage makes sense.

@nathanielsimard
We can try tarpaulin, and if it does not work we can fall back to -Cinstrument-coverage. My only concern is that tarpaulin is an external dependency not maintained by the Rust project, but we can revisit that later, so it is not a blocker.
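
For reference, the -Cinstrument-coverage fallback only needs compiler flags and a profile output path, along these lines (the profile file pattern is illustrative):

    - name: Run tests with coverage instrumentation
      env:
        RUSTFLAGS: -Cinstrument-coverage
        LLVM_PROFILE_FILE: burn-%p-%m.profraw
      run: cargo test --workspace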

@gonzalesMK (Contributor, Author) commented on Sep 19, 2023:

@Luni-4 I get out-of-memory errors on my PC with 15 GB of RAM (actually, my terminal just dies) when compiling with coverage. Could this skipped test be the same thing?

Limiting the number of jobs fixes the error in my case.
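
A sketch of one way to cap the build parallelism in CI (the value 2 is only an example; the same -j flag works locally):

    - name: Run tests with limited build parallelism
      # -j caps the number of parallel rustc invocations, which bounds peak memory.
      run: cargo test --workspace -j 2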

@gonzalesMK (Contributor, Author)

[screenshot: coverage report]

@Luni-4 (Collaborator) commented on Sep 19, 2023:

@Luni-4 I get out-of-memory errors on my PC with 15 GB of RAM (actually, my terminal just dies) when compiling with coverage. Could this skipped test be the same thing?

Limiting the number of jobs fixes the error in my case.

Oh, I see, yes, we can limit the number of jobs.

[screenshot: coverage report]

Wow, fantastic! Thank you @gonzalesMK! Perhaps we can exclude the xtask and examples directories from the analysis to reduce computation time.

@nathanielsimard (Member)

[screenshot: coverage report]

Thanks for the picture. We should exclude burn-derive from the coverage, since the generated code is tested within burn-core. burn-tensor-testgen is also tooling to run tests on tensor backends, so we can exclude it as well, since it runs within each backend. Like @Luni-4 mentioned, we can also exclude xtask.
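
Taken together, the exclusions discussed in the last few comments would become extra --ignore globs on the grcov call, roughly (the glob patterns are assumptions):

    - name: Run grcov
      run: |
        grcov . --binary-path target/debug/ -s . -t lcov --branch --ignore-not-existing \
          --ignore 'burn-derive/**' --ignore 'burn-tensor-testgen/**' \
          --ignore 'xtask/**' --ignore 'examples/**' \
          -o lcov.info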

@gonzalesMK (Contributor, Author)

How much RAM does the runner have?

@nathanielsimard (Member)

How much RAM does the runner have?

Probably less than 16 GB, since machines with that much memory are listed under large runners. I would guess 8 GB.

@nathanielsimard (Member)

@gonzalesMK @Luni-4 I think we can close this one, since we now have an integration with Codecov using grcov! Thanks @gonzalesMK for your PR; I'm sure it was really useful in coming up with the PR for the current solution :)
