Should auto-generated test cases fail by default? #31
Generators producing failing tests by default is not something we will do; this will create excess pain/noise with little reward. I do see the value of red/green refactor cycles, though. I can think of a couple of approaches; one is to use something like https://github.com/JamesMGreene/qunit-pending and generate a pending test, a sort of yellow test, pending implementation. Typically these sorts of requests are to be provided via https://github.com/ember-cli/rfcs; I noticed the CONTRIBUTING.md doesn't say this clearly, so I have updated it accordingly: fbbb354. I welcome an issue (or fleshed-out RFC) on the RFC repo.
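As a sketch only (the `assert.pending` helper here is hypothetical; the real qunit-pending API may differ), a blueprint could emit something like:

```js
import { moduleFor, test } from 'ember-qunit';

moduleFor('service:example', 'Unit | Service | example');

test('it exists', function (assert) {
  // Hypothetical pending-style helper in the spirit of qunit-pending;
  // the actual plugin API may differ. The intent is a yellow "pending"
  // result rather than red or green until a real assertion is written.
  assert.pending('Test not yet implemented');
});
```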
I am a firm believer in the red/green refactor flow, but am unfortunately always in the minority. I suspect the masses would revolt. I believe my above recommendation, of a pending test, covers both camps.
"What's in a name? A rose by any other name would still smell as sweet." That is, I don't see much difference between alerting the developer with pending - This test is not implemented yet and alerting them with failed - this needs to be implemented (aside from perhaps color and aesthetics/wording). If the aim is just to draw attention to missing/not-implemented tests, then the failed mechanism does the job fine and would save us from having to package this extra plugin (I hope I understand correctly that this qunit-pending plugin would need to be referenced to make "pending" an option). If, on the other hand, the aim is really to introduce additional semantics to differentiate "not implemented" tests from "truly" failing implemented tests, then the pending option sounds ok. That said, call me a test Nazi ("opinionated" again), but a not-implemented test is a failure in my book...it's big and red and angry and deserves my time and attention. With respect to the revolting masses, again my gut doesn't really agree with this mindset. If an engineering concept/pattern has proven its worth in the industry over many years, why should a framework that's trying to be progressive ("ambitious") in that industry be afraid to follow it? The masses will always be resistant to any kind of change, but if we can help them to understand the reasons for that change then they "should" appreciate it in the long run. I'm no social scientist, but I suspect such moves also help to strengthen Ember's "culture" and "opinionatedness". Maybe I'm just being ueber-optimistic and utopian though. It's clear to me that we would need more input/consensus from the community on these broad issues before fleshing out any details on this suggestion. |
@samcic The reason I am thinking of introducing a new type (it's not really new in most test environments, but in the browser it is...) is that it allows us to ship code that works for all our users, while a user who wants red/green could easily opt in to the pending behavior.
OK, thanks for clarifying. I can of course understand your arguments for wanting to avoid a change that might display too much negative "red" stuff to users. Personally, a pending result by default would solve my problem in that it would alert me to work needing to be done, so if that's the happy middle ground then I'm all for it.
Now someone needs to figure out what actually needs to happen to land pending.
As much as I'd like to get this in, are we representative enough? I appreciate you're an owner, but are there no others whose additional views/consensus on this might be useful?
@samcic My opinion is that I don't really care whether the first test fails or passes, because I start changing it immediately. But for our newer users, red in the console might give the impression the tool is doing something wrong. We just rolled back our CSP addition for the same reason: the red in the console was confusing and reduced faith in the tool. Maybe if the freshest of users run their first generator and then run the tests for the first time, seeing a green test might do more good than bad?
Is there any advantage to using pending over skip? skip is already supported by QUnit out of the box.
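For reference, stock QUnit's skip looks like this; the callback is never run and the result is reported as "skipped" (grey) rather than red or green:

```js
// QUnit.skip is built in (since QUnit 1.16): the test is registered
// but its body is never executed, and the runner reports it as skipped.
QUnit.skip('it exists', function (assert) {
  assert.ok(false, 'never executed, so this cannot fail');
});
```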
@kellyselden Most of the time I'm with you... I start changing the test immediately. That said, I really do come across cases where I'm "scaffolding" the "feel" for the skeleton of a feature with a bunch of resources and don't want to flesh out a unit test for each one as I'm going. In such cases there is clear value in having tests "remind" me that they need to be completed. Maybe I get interrupted and don't get the chance to finish what I was doing for days. Can you appreciate this line of thought? @stefanpenner's suggestion of a yellow "pending" might be a healthy compromise between TDD idealism and newbie-friendliness. Surely the console message could be appropriately worded in these cases so as not to frighten people ("Warning: this test has not yet been written" or similar). If the freshest of users run the generator, perhaps code a bit, then run the tests and realize that they pass, I think we'd be holding their hand down Antipattern Street... it's at such an early stage that they need to at least think about their testing methodology. The CSP addition is also a slightly different situation: it's relevant to the actual application code, so even people who don't care about testing at all will be affected by it. For people that are just hacking around and don't care about tests, our suggested change here wouldn't affect them.
@csantero - The idea of pending is that it reads as "still to be implemented", whereas skip reads as "deliberately not run".
confirm |
When you're using RSpec in Rails and you generate a controller, you also get a test for it, much like with ember-cli. There, the test is pending and shows up yellow, with a default message along the lines of "Not yet implemented".
We feel this is a good way to go about it. Sometimes you just want to generate the thing, but it's good to also get a pending test as a kind of to-do list.
@kimroen Yep, this kind of "to-do" generation is also seen when generating an implementation for an interface in OO languages like Java and C# (ReSharper, for example, will generate each method with a body containing "throw new NotImplementedException();").
Another option for consideration: out of the box, if no assertions are made in a test, QUnit fails it with what I would consider to be an intuitive message: "Expected at least one assertion, but none were run - call expect(0) to accept zero assertions".
These failures still come up "red", but the message wording makes it clear that nothing useful has been done in the test, i.e. one could argue that it's a kind of "pending". This option wouldn't require any additional add-ons/packages. Concretely, it might simply involve commenting out (or removing... resulting in fewer lines of generated code!) the assertion that is currently emitted in the generated tests, something like:
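Taking the model blueprint as an example (a sketch of the blueprint output of the time; exact names and module style vary by version):

```js
import { moduleForModel, test } from 'ember-qunit';

moduleForModel('some-thing', 'Unit | Model | some thing', {
  // Specify the other units that are required for this test.
  needs: []
});

test('it exists', function (assert) {
  let model = this.subject();
  // With the assertion commented out, QUnit's default behaviour takes
  // over and the test fails with "Expected at least one assertion...".
  // assert.ok(!!model);
});
```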
This way we're simply exploiting QUnit's out-of-the-box (native) handling of an empty test case, which it defines to be a failure, and it reports an understandable and sensible message in this context.
Given the latest state of this discussion, does anybody have any objection to me submitting a pull request to ember-cli that simply comments out the assertions in all the test blueprints (model, service, component, mixin, etc.)? Practically this would mean that newly generated tests would not run assertions in their unedited state. This means that, based on QUnit's default behaviour, these tests would fail with the "Expected at least one assertion..." message (and hence nudge the project's recommended workflow toward TDD, red/green refactor cycles, etc.).
That would be a very large undertaking, as our test suite actually uses these generated tests to verify that the basic blueprints work. As mentioned above, if someone wants to introduce the first-class concept of pending, I believe we would settle on that. Pending is totally practical for all parties, both those who prefer TDD and those who do not.
I think this will get more difficult now that some of the test blueprints have been moved to ember-data, and the others are supposed to be moved to ember itself.
We are working on closing the ember-cli/rfcs repo in favor of using a single central RFCs repo for everything. This was laid out in https://emberjs.github.io/rfcs/0300-rfc-process-update.html. Sorry for the trouble, but would you mind reviewing to see if this is still something we need, and if so, migrating this over to emberjs/rfcs?
The suggestion is to change the auto-generated unit-test cases from something like this...
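Roughly, today's generated test looks like this (a sketch; the exact blueprint output varies by type and version):

```js
import { moduleFor, test } from 'ember-qunit';

moduleFor('controller:example', 'Unit | Controller | example');

// Replace this with your real tests.
test('it exists', function (assert) {
  let controller = this.subject();
  assert.ok(controller); // passes out of the box
});
```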
...to this...
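(Again a sketch; the exact failing assertion and message wording are illustrative, not a fixed proposal.)

```js
import { moduleFor, test } from 'ember-qunit';

moduleFor('controller:example', 'Unit | Controller | example');

test('it exists', function (assert) {
  // Fails until a real assertion replaces it, keeping the
  // red/green/refactor cycle honest from the moment of generation.
  assert.ok(false, 'Test not yet implemented');
});
```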
I originally posted this as an ember-cli issue. It was promptly closed. The comment was "generators producing failing tests by default is not something we will do; this will create excess pain/noise with little reward."
Although the issue was closed on ember-cli because it apparently didn't belong there (it belongs here instead), I found the comment above interesting. In particular, I have to say that it didn't convince me, because it wasn't clear. In what circumstances would such a change cause excess pain/noise? Are the benefits I mentioned in the original post not rewarding enough in comparison?
I can think of some factors that might have played a part in the decision, but I'm hoping for an elaboration or discussion of the above comment, because my personal experience tells me that the weighting is in the other direction (i.e. having failing tests by default would do more good than harm for the end quality of an app).
Sure, testing is one of those "delayed gratification" things that is painful at the start, and sure, the ignorant may be vociferous. But Ember is supposed to be an opinionated framework, so here I am offering an opinion: such behavior is, for me, a clear fit with the "convention over configuration" mantra that makes us all love Ember. Convince me (and yourselves) otherwise.