Add New Skip Feature #434
Conversation
As an occasional user of Mocha, I'm a big fan of this idea and have talked about implementing it before as well. IIRC, though, the other QUnit collaborators weren't big on the idea. 😕 Notes:
Known Prior Art:
|
I'm sure we had more discussion about this before, though the only reference I can currently find is #326. The skipIf has some charm, but isn't really needed: you can just wrap the test call in the condition. We do that for QUnit's own tests and have never had a problem with it. So while I appreciate the detailed pull request, a regular ticket to discuss the feature request would have saved a lot of time. I'll leave this open for now, but I'm generally against adding this feature. |
@jzaefferer Adding a condition to the test calls prevents them from running, but doesn't tell you anything. I really miss being able to see visually that we skipped a test, and why. |
So the feature-detection condition is one thing. Within QUnit we only need that when running tests in non-browser environments, and I don't really care which tests are skipped there. The other case is usually countered with: if you're going to implement the test in the next few minutes, make it fail; if not, don't write the test. What's the use case outside of that? |
I often stub out tests that I can come back to later. By later I may not mean minutes, I may mean months, by which time I will surely have forgotten whether I just commented out that test. And if I just leave it failing, my build is marked as broken for months when it really isn't broken, I just haven't finished the tests yet. It is a system I got used to with Mocha and really enjoyed, because I could easily switch back and forth between my tests and code without forgetting my place. Specifically with QUnit I have to have passing tests (which I will probably never look at again unless that changes later) or failing tests (which in this case mark a working build as failed). I want a third state of pending or skipped that isn't passed, but isn't failed either. I don't feel that pass/fail fully describes all the states a test can be in. That is how I usually work with testing, which probably grew from using Mocha so much. |
Fully agree with @englercj on the usefulness of stubbing out a test list with pending tests: it's not only useful for me alone, but also very helpful in the OSS world to show test cases that are not proven yet (as a disclaimer of sorts), as well as for any aspiring contributors to have an easy place to jump in and help. I also much prefer how the test author indicates a pending test in Mocha over this PR's implementation: by simply not providing a test callback function, e.g.

test('normal test', function(assert) {
  assert.ok(true);
});
test('pending test'); |
I can definitely get on board with that: having it that simple and marking a test as pending. |
TL;DR |
Can you expand on that a little further? To me this sounds like you're using unit tests for issue/task tracking, which seems like a bad idea. I appreciate the references to other frameworks, but feature parity is not a goal of QUnit, so that doesn't pull much weight. |
I think @JamesMGreene said it best: having a pending test is just a useful feature for stubbing out which tests you want to write without having to write them right then, while still having a reference to come back to and finish later. |
You will never create a task "create test for X feature when Y value is greater than Z". Let's say I need to create 5 tests for a task. I analyse them and decide which tests I should create. Today we create a test without any assertion, or something similar, but that is not technically an error, it is a reminder. A few times I have run into situations where a pending test would fit better than a red one. Still, the argument above is quite valid. It all depends on how much QUnit is willing to help the programmer test and control the software's behavior. |
Sorry for spamming, but I thought it wouldn't hurt to write down my opinion too, if I understood the above correctly. (*) Another example case is when you're using web workers, which (in Chrome only) can't run in a file:/// environment. |
My opinion on some of the mentioned use cases:

If you have tests for a feature that doesn't exist yet (so an unconditional skip for tests that don't yet pass), the tests should probably live in a feature branch instead; once the feature is implemented in the same branch, you merge it. There is no need to indicate this within the test suite. As long as the tests are incomplete, or complete but failing because the feature isn't there yet, that's good: if someone were to merge it too early, it's supposed to fail (not silently pass with a "skipped" message buried somewhere in the results). It also helps if your continuous integration environment supports testing branches, because you'll be able to see which branches still have work to do and which are ready to merge (at least from a testing point of view).

If you need a reminder to write tests for something (so an unconditional skip for tests that don't exist yet), then file an issue in your issue tracker. Whether the test is commented out or marked as skipped, I don't see how either works as a reminder: assuming the skipped tests don't affect the overall status of the suite as a whole, you'd only notice them if you look at the code or read through the results manually. Having an open issue (in a certain component, perhaps with a label) seems more appropriate. |
I disagree. If there's an important part of feature X not tested, I don't see why you wouldn't file an issue that a test should be created for it. Actually, if it is a regression test, I'd argue the commit fixing the bug shouldn't have been merged in the first place, but that's discipline. Basically, any test you know should be added should either be added when the bugfix/feature is merged or have an issue filed for it as a todo item. |
Long story short, it sounds like this one isn't going to get merged. On the other hand, I do think it is possible to implement as a [very hacky] plugin right now by duck-punching QUnit.test. For example, the following would give you the ability to skip tests and add them to the UI (but not its "counts"... you could implement that yourself with a little more time invested):

QUnit.test = (function() {
  var realTest = QUnit.test,
      slicer = [].slice;
  return function() {
    if (arguments.length && typeof arguments[arguments.length - 1] !== 'function') {
      // Add a "pending" test to the UI; check the qunit.js core code for more details on
      // what you need to mimic.
      // Also, add a "pending" count and/or add the pending test count to the total
      // count... may want to wait until `QUnit.done` to change the counters, though.
    }
    else {
      realTest.apply(this, slicer.call(arguments, 0));
    }
  };
})(); |
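As a hedged illustration of how the wrapper above would behave (the test names here are made up), a call to the patched QUnit.test without a trailing callback falls into the "pending" branch, while regular calls pass straight through to the real implementation:

// Usage sketch for the duck-punched QUnit.test above.
// A call without a trailing callback function hits the "pending" branch
// (left as comments in the wrapper); normal tests still run as usual.
QUnit.test("normal test", function (assert) {
  assert.ok(true, "goes through the real QUnit.test");
});
QUnit.test("pending test"); // no callback: would be rendered as pending by the plugin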
The main idea, from my point of view, is not to merge a topic branch with the pending tests, but to mark their need while developing. Let's say you have 5 tests and write 3; the next day you go and finish the remaining ones, which are already marked. Of course, you could mimic such a situation by failing (in red) those that are still needed, but then the main view (the red stripe at the top) would tell you that tests are failing, even though you haven't implemented them yet and the ones you have written are passing. The main problem could be that showing a green stripe at the top could lead the user to think that everything is ok when it actually isn't; visual inspection should indeed be avoided. But that argument goes against the OP's proposed behavior and usage, which is skipping tests in browsers lacking a certain feature (or maybe a yellow stripe in this case is not so bad?). |
👍 to @JamesMGreene and @FagnerMartinsBrack |
Let's go with that. I don't want this in QUnit, but it makes sense to have a plugin that provides the feature. |
Sad to see that this hasn't been merged. My use case is this: I have a single file containing my tests, and some of them can't pass in certain browsers because of the lack of a specific feature (web workers). I have to wrap them in a condition, but if I make a mistake and wrap another test in that condition, I have no way to know it. |
You have a clear point; this is really needed. But as @jzaefferer pointed out, it would be nice to have a plugin for that (I am not sure where that stands). |
I still don't see the need for a conditional skip. I do plan to eventually implement a JamesMGreene/qunit-pending plugin, but only to achieve functional parity with Mocha, i.e. pending tests declared by omitting the test callback. |
test( "testing feature", function() {
if ( unsupportedFeature() ) {
ok( true, "Unsupported browser" );
return;
}
expect(5);
...
});

You will never have more than one condition per test (otherwise your unit is doing too much work). A single condition makes the support check clear as the first statement in the function, and expect() always follows; at least this is how I do it. If you forget the return, it will complain. As @JamesMGreene pointed out, it really makes no sense, unless I am missing something. |
I think the difference is that your method will still mark it as pass/fail, but a skipped test shouldn't be passed or failed; it should be "skipped" or "pending" to denote the different state. It isn't that there is no way to skip a test now, there just isn't a good way to know that it was skipped. Having the third state is what this is all about. |
I agree with @englercj. This is not the same. Imagine we're back in time and IE8 is the latest version. I have my test suite; I either create a separate file that I skip for IE, or use the trick to make the test pass. It may seem a bit of an edge case, but a skipped test doesn't have the same semantics as a passed test (or a not-run-at-all test). |
Less time discussing, more time implementing a plugin, please! |
Such a shame this did not go through. Carrying the name "QUnit" and not supporting xUnit-style incomplete and skipped tests seems like a bit of a dud. Any news on the state of that plugin? |
@Potherca Not from me, I switched test runners instead, sorry! |
@englercj No worries, my current setup using this: https://gist.github.com/potherca/8579269 works fine for me and my team, for the time being. |
This is my interpretation (Mocha-style "pending" tests for QUnit): https://gist.github.com/JamesMGreene/8617691 I'll move it to JamesMGreene/qunit-pending when I get a chance. |
Also logs skipped tests on testDone callback that are handled on the html reporter. Fixes qunitjs#637 Fixes qunitjs#434
Why
I was missing a feature that would mark tests as "skipped", so that I can see tests that aren't implemented yet or that cover features lacking in the current browser, or temporarily skip a test with a reason so I remember to come back to it.
How
The general implementation is to make the final result a 3-state variable instead of a 2-state one. Right now there is true for passing and false for failing; this adds a third state of null for skipped. Calling skip() or skipIf() will mark that test as "skipped", which colors it yellow and adds to the "skipped" counter (note that skipIf() takes a test param and will only assert "skipped" if that param is truthy).

Use cases
Mocked Test
The most basic use case that I started with was me mocking out tests and implementing random ones out of order. I didn't want to have "failing" tests for unimplemented ones, nor did I want to use expect(0) and have a passing test, because I would forget to go back to them. So the most basic case was something like the sketch below, which gives me output with the test reported as skipped rather than passed or failed.
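A minimal sketch of what such a stubbed test could look like (the global skip(reason) signature here is an assumption for illustration, based on this PR's description rather than a confirmed API):

// Hypothetical stubbed test using the skip() call proposed in this PR.
test("feature Y handles value Z", function () {
  // Not implemented yet; mark the test as skipped instead of passing or failing.
  skip("TODO: implement once feature Y lands");
});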
SkipIf based on Feature Detection
This was brought up in #189: it would be nice if we could detect a browser feature and conditionally skip the assertions depending on whether the browser has it. Something like the sketch below:
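(The skipIf(condition, reason) signature and the web worker check below are assumptions for illustration; the PR only states that skipIf() takes a test param and skips when it is truthy.)

// Hypothetical feature-detection skip using the skipIf() call proposed in this PR.
test("runs work in a web worker", function () {
  // Assumed semantics: mark the test as skipped when the condition is truthy,
  // otherwise fall through and run the remaining assertions as usual.
  skipIf(typeof Worker === "undefined", "Web Workers are not supported in this browser");

  ok(typeof Worker === "function", "Worker constructor is available");
  // ...remaining worker assertions...
});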
which in a browser that supports the feature (e.g. FF19) runs the assertions and gives a passing result, but in a browser that lacks it (e.g. IE7) gives a result with the test marked as skipped.
Final Notes
Obviously there are many reasons why skipping a test is preferable to having it fail. I may have implemented it in a way that is not ideal; if so, please let me know and I will try to fix it, because this is a feature I would really like to see land. The final thing I want to note is that if a test contains failed assertions alongside a skip() call, the test is still marked as failed. If you have tests you know will fail when some condition isn't met, that is what skipIf() is for: you can check the condition and only run the rest of the test if you need to, otherwise you get a skip assertion.
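For illustration, a hedged sketch of that precedence (again using the assumed global skip() signature, not a confirmed API):

// Hypothetical example: a failed assertion takes precedence over skip(),
// so this test is reported as failed, not skipped (per the note above).
test("mixed skip and failure", function () {
  ok(false, "this assertion fails");
  skip("this skip call does not rescue the test");
});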