
Add machine readable test reports #6733

Closed
TomAFrench opened this issue Dec 7, 2024 · 4 comments · Fixed by #6810
@TomAFrench (Member):

We currently run tests from external repositories only if the run-external-checks flag is set manually. This means that regressions can sneak in through PRs where the flag has not been set; however, we also cannot enable it by default, as any test failures caused by breaking changes would make our CI fail until the external repository has been updated.

One solution would be to add a machine-readable report option to nargo test. This would allow us to filter out expected failures when deciding whether or not to fail CI.

See https://nexte.st/docs/machine-readable/libtest-json/ for inspiration.
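As a rough sketch of how such a report could be consumed: assuming a libtest-json-style stream of one JSON object per test (the field names and file paths below are made up for illustration, not nargo's actual output), CI could filter the failures against a known-failures list with jq and grep:

```shell
# Hypothetical machine-readable report: one JSON object per test, loosely
# modelled on nextest's libtest-json stream (real field names may differ).
cat > /tmp/test_report.jsonl <<'EOF'
{"type":"test","event":"ok","name":"test_one"}
{"type":"test","event":"failed","name":"test_two"}
EOF

# Failures we already know about and want CI to tolerate:
printf '%s\n' test_two > /tmp/known_failures.txt

# Keep only failures that are NOT on the known list
# (grep exits non-zero when nothing survives the filter, hence `|| true`):
UNEXPECTED=$(jq -r 'select(.event == "failed") | .name' /tmp/test_report.jsonl \
  | grep -vxFf /tmp/known_failures.txt || true)

if [ -n "$UNEXPECTED" ]; then
  echo "Unexpected failures: $UNEXPECTED"
  exit 1
fi
echo "all failures were expected"
```

Here only `test_two` fails, and it is on the known list, so the script exits cleanly.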

github-project-automation bot moved this to 📋 Backlog in Noir on Dec 7, 2024
@TomAFrench (Member, Author):

Note that we'd want to have some mechanism to flag when ignored tests should be reactivated again.

@asterite (Collaborator):

I'm thinking about how to do this. For each repo, here's what can happen:

  1. The project compiles fine and all tests pass
  2. The project compiles fine but some tests fail
  3. The project doesn't compile at all

So, for each repo, we want to have a file that records its status, one of:

  1. We know the project compiles fine and all tests pass (this would be signaled by a lack of this file, or with an empty file)
  2. We know the project compiles but some tests fail. Here I guess we'll list the full names of the tests that fail, and the script should tell us if some of these started to pass again.
  3. We know the project doesn't compile (but let us know if it starts compiling again)

So maybe it can be a YAML file.

```yaml
# All good
compiles: true
known_failures: []
```

```yaml
# Some tests fail
compiles: true
known_failures:
  - test_one
  - test_two
```

```yaml
# Does not compile
compiles: false

# This list could be non-empty and is just ignored, but it's useful to keep
# around for when the project starts compiling again.
known_failures: []
```

But then this logic seems a bit too complex to have in a .sh file. What's the usual way we do these things? Could it be a Rust script? Or maybe it can be directly in nargo test as an extra option?
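For what it's worth, the comparison against such a status file stays small enough for shell. A sketch, assuming the flat YAML layout proposed above (the file names are made up, and a real script might use yq rather than sed):

```shell
# Status file in the proposed flat format:
cat > /tmp/status.yml <<'EOF'
compiles: true
known_failures:
  - test_one
  - test_two
EOF

# Tests that actually failed on this run (hypothetical input):
printf '%s\n' test_one > /tmp/actual_failures.txt

# Pull the known-failure names out of the YAML list
# (sed suffices for this two-key format):
sed -n 's/^  - //p' /tmp/status.yml > /tmp/known.txt

# Known failures that no longer fail are candidates for reactivation:
REACTIVATE=$(grep -vxFf /tmp/actual_failures.txt /tmp/known.txt || true)
echo "started passing again: $REACTIVATE"
```

With these inputs, `test_two` is reported as having started passing again, which addresses the reactivation concern raised earlier in the thread.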

@TomAFrench (Member, Author):

I don't think that it's necessary to add this into nargo (and would prefer we didn't). I think some use of jq would avoid this.

@TomAFrench (Member, Author):

```shell
LIBRARIES=$(grep -Po "^https://github.com/\K.+" ./CRITICAL_NOIR_LIBRARIES | jq -R -s -c 'split("\n") | map(select(. != "")) | map({ repo: ., path: ""})')
```

Something like this to get a JSON object containing the tests along with their success statuses, which we can then compare against the YAML/JSON file.
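Once both sides of that comparison are JSON, jq's array subtraction gives the unexpected failures directly. A minimal sketch (the file names and contents are assumptions for illustration):

```shell
# Failures observed in this run, and failures we already expect:
echo '["test_one","test_two"]' > /tmp/observed_failures.json
echo '["test_two"]'            > /tmp/known_failures.json

# observed - known = failures that should break CI.
# --slurpfile reads the known list as $known[0].
UNEXPECTED=$(jq -c --slurpfile known /tmp/known_failures.json \
  '. - $known[0]' /tmp/observed_failures.json)
echo "$UNEXPECTED"
```

An empty result (`[]`) would mean every failure was already on the known list; here `test_one` is left over and CI should fail.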

github-project-automation bot moved this from 📋 Backlog to ✅ Done in Noir on Dec 17, 2024