This has always been an issue on every project I've worked on that used Mocha. Once in a while someone accidentally commits a describe.only and kicks off the CI/CD pipeline. In the best case it fails during the code coverage check, but if the test suite is relatively small, or is one that isn't checked for coverage (integration, functional), the broken code can potentially make it to production.
What if Mocha kept track of every .only used during the test run and, when it finishes, printed the results plus an optional warning that .only was used, and exited the process with a non-zero code?
That way there would be less confusion around 'why is my build failing', and all the tests would be guaranteed to run.
I'm pretty sure this issue was raised before, and we decided not to break any API for it.
Please consider a pre-commit hook that greps for "only" in your test files and rejects the commit if present.
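A minimal sketch of that check as a small Node script, so the same file can back a pre-commit hook and a CI step. The file name (check-only.js), the test/ directory, and the .js-only filter are assumptions to adjust for your project:

```js
'use strict';

// check-only.js -- recursively scan the test directory for .only and exit
// non-zero if any is found. The file name, the test/ directory and the
// .js-only filter are assumptions; adjust them for your project layout.
var fs = require('fs');
var path = require('path');

var TEST_DIR = path.join(__dirname, 'test');
var ONLY_PATTERN = /\b(?:describe|context|it|specify)\.only\s*\(/;

var offenders = [];

function walk(dir) {
  fs.readdirSync(dir).forEach(function (entry) {
    var full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      walk(full);
    } else if (/\.js$/.test(full) && ONLY_PATTERN.test(fs.readFileSync(full, 'utf8'))) {
      offenders.push(full);
    }
  });
}

walk(TEST_DIR);

if (offenders.length) {
  console.error('.only found in:\n  ' + offenders.join('\n  '));
  process.exit(1); // rejects the commit / fails the CI step
}
```

From .git/hooks/pre-commit (or a CI script) you would run node check-only.js and let the non-zero exit reject the commit or fail the build.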
@mochajs/mocha please reopen if you think we need more controls around this. Ideally, in the future, we could emit an event when certain functions are called, which a custom reporter could consume.
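For what it's worth, here is roughly what the reporter side of that could look like. The 'only' event below is hypothetical -- Mocha does not emit it today; only the custom-reporter constructor shape and the 'end' Runner event are existing behavior:

```js
'use strict';

// Sketch of a custom reporter consuming a hypothetical 'only' event.
// Mocha calls the exported constructor with the Runner instance.
function OnlyWarningReporter(runner) {
  var exclusives = [];

  // Hypothetical: imagine Mocha emitting this whenever describe.only / it.only is called.
  runner.on('only', function (title) {
    exclusives.push(title);
  });

  // 'end' is a real Runner event, fired once the whole run has finished.
  runner.on('end', function () {
    if (exclusives.length) {
      console.warn('Warning: .only was used for: ' + exclusives.join(', '));
      process.exitCode = 1; // fail the build even if every test passed
    }
  });
}

module.exports = OnlyWarningReporter;
```

You would then point --reporter at that module (path and warning text are assumed for the sketch).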