Flaky tests #219
What would marking a test as "flaky" do? What's the value in keeping a flaky test?
Ah, I should've explained. Failure of such a test would not result in a non-0 exit code, but a warning.
I feel like this is an antipattern that tape shouldn't encourage. Flaky tests should be fixed immediately or deleted immediately, with prejudice. A test whose failure doesn't fail the build adds no value; it just trains people to ignore test failures.
That's what my gut tells me too. But I think there are valid cases that are simply out of one's control by design. I would still prefer to have the test than to not have it in those cases. Anyway, it's your call to keep this open or not. I would be happy with it; I'm not sure if I'm alone in this.
Do you have any strong examples of such cases?
I reckon if you want flaky tests, you can add them to a separate run and ignore the exit status. It does require a lot more involvement, but I feel like flaky tests are hazardous by definition. If a test fails: fix it. If you don't fix it straight away, chances are it'll never get fixed. Tape shouldn't encourage bad behavior.
I'm inclined to agree with you @yoshuawuyts, and I've personally never seen a flaky test that provided any value. But I also don't know everything, that's why I'm hoping @ronkorving can enlighten me. |
@ronkorving I think you're asking for the "SKIP" feature, which we support with `t.skip(str, block)`
You can always write code to make tests not flaky:

```js
test('some thingy', function t(assert) {
  doSomethingWeird(function (err, result) {
    if (result) {
      /* some asserts */
    } else {
      assert.ok(true, 'skipping error case because flap');
    }
    assert.end();
  });
});
```
Isn't a "flaky" test what the TODO directive is for? |
TODO directive? Well, the point was: you do run a flaky test, but consider it a warning if it fails, not fatal. So skip doesn't solve that, and TODO (you mean a code comment?) doesn't solve it either. But of course, as @Raynos pointed out, this can be done in userland. It does mean you can no longer really rely on the t.assert* functions, which is a shame. It affects the test's structure a lot, and if a test becomes non-flaky, you can't just turn it into a normal test by replacing t.flaky with t.test.
@ronkorving |
Wow, one can do that? (that options object is not documented in t.test btw, only in "global" test) |
@ronkorving the TAP spec has a SKIP "directive" for marking a test as skipped. It has another TODO "directive" for tests that are not necessarily expected to pass (features in progress, etc.). |
Ah, that sounds perfect! Now can tape generate that output? There's nothing in the tape readme on this. |
I had never tried to use it. If tape doesn't currently support todo, I think the TAP spec would be a pretty good argument for adding it. |
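For reference, a TAP directive is just a `# SKIP` or `# TODO` suffix appended to the test line; a TODO test that fails is reported but treated as expected-to-fail by conforming harnesses. A minimal formatter sketch (an illustration of the wire format, not tape's actual implementation):

```javascript
// Format a TAP test line, optionally with a SKIP or TODO directive,
// following the shape described in the TAP specification.
function tapLine(ok, num, description, directive, reason) {
  var line = (ok ? 'ok' : 'not ok') + ' ' + num + ' ' + description;
  if (directive) {
    line += ' # ' + directive + (reason ? ' ' + reason : '');
  }
  return line;
}

console.log(tapLine(true, 1, 'stable thing'));
// ok 1 stable thing
console.log(tapLine(false, 2, 'flaky thing', 'TODO', 'relies on 3rd-party service'));
// not ok 2 flaky thing # TODO relies on 3rd-party service
console.log(tapLine(true, 3, 'not run here', 'SKIP', 'device not attached'));
// ok 3 not run here # SKIP device not attached
```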
Agreed, and rather than naming it
Turns out it was already implemented as far as the TAP output goes. I just had to hook into it, so I made the option available. See the PR referenced above this comment. |
I support and welcome this change. Our use case: the alternative would be to create an emulated mock device that serves up faux data, but since we have a plethora of different devices to probe, the test suite will be qualitatively much better if we can simply use the actual devices. So from me and our team, a welcome +1
@axit-joost isolate those tests, and don't run them on the other device - this way tests are allowed to fail on your device too, which is not what you want |
It would be nice if tape would support the ability to mark a test (or assertion) as flaky. It can be quite a pain when a test that depends on a 3rd party online service blocks the deployment of my own software, because that 3rd party is not cooperating.