This repository has been archived by the owner on Jan 20, 2019. It is now read-only.

Should auto-generated test cases fail by default? #31

Closed
samcic opened this issue Jan 4, 2016 · 19 comments

Comments

@samcic

samcic commented Jan 4, 2016

The suggestion is to change the auto-generated unit-test cases from something like this...

// Replace this with your real tests.
test('it exists', function(assert) {
    var service = this.subject();
    assert.ok(service);
});

...to this...

test('it exists', function(assert) {
    var service = this.subject();
    assert.ok(false, 'The service "my-service" has not yet been tested. It is highly recommended that you test your code for reasons X, Y, Z...');
});

I originally posted this as an ember-cli issue. It was promptly closed. The comment was "generators producing failing tests by default is not something we will do, this will create excess pain/noise, with little rewards."

Although the issue was closed on ember-cli because it apparently didn't belong there (it rather belongs here), I found the comment above interesting. Particularly, I have to say that it didn't convince me because it wasn't clear. In what circumstances would such a change cause great excess pain/noise? Are the benefits I mentioned in the original post not rewarding enough in comparison?

Some factors I can think of that might have played a part in the decision:

  • "Such behaviour might annoy newbies and people just making prototypes (non-production code).": Here I'd argue that such people wouldn't ever run the test suite anyway, so I can't see much pain being caused here.
  • "People might not be using ember-qunit": If that's the case I'm guessing that these scaffolded test cases wouldn't be relevant anyway.
  • "People can just opt-in to a plugin for generating such tests if they want". Here I could equally argue the flip side: why not make the current behavior an opt-in, changing the default behavior to one which forces the developer to at least think about and question how they're testing their app?

I'm hoping for an elaboration or discussion of the above comment, because my personal experience tells me that the weighting is in the other direction (i.e. having failing tests by default would do more good than harm for the end quality of an app).

Sure, testing is one of those "delayed gratification" things that is painful at the start, and sure, the ignorant may be vociferous, but: Ember's supposed to be an opinionated framework, so here I am offering an opinion: such behavior is for me a clear fit to the "convention over configuration" mantra that makes us all love Ember. Convince me (and yourselves) otherwise.

@stefanpenner
Contributor

generators producing failing tests by default is not something we will do, this will create excess pain/noise, with little rewards.

I do see the value of red/green refactor cycles. I can think of two approaches.

1. Use something like https://github.com/JamesMGreene/qunit-pending and generate a pending test; this would be a sort of yellow test, pending implementation.
2. A community add-on that provides blueprints which generate failing tests, as an opt-in.
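The "yellow test" idea above can be sketched with a minimal, dependency-free harness. All names here (`runTest`, the `results` shape) are illustrative, not the actual qunit-pending API, which may differ:

```javascript
// Minimal test-harness sketch showing a third "pending" outcome
// alongside pass/fail. Hypothetical names, not qunit-pending's API.
const results = [];

function runTest(name, fn) {
  if (fn === undefined) {
    // No implementation supplied: record as pending ("yellow"),
    // not as a failure ("red").
    results.push({ name, outcome: 'pending' });
    return;
  }
  try {
    fn();
    results.push({ name, outcome: 'pass' });
  } catch (err) {
    results.push({ name, outcome: 'fail', message: err.message });
  }
}

// A generated blueprint could emit a body-less test...
runTest('it exists'); // recorded as pending
// ...while implemented tests pass or fail as usual.
runTest('addition works', () => {
  if (1 + 1 !== 2) throw new Error('math is broken');
});

console.log(results.map((r) => `${r.name}: ${r.outcome}`).join('\n'));
```

The point of the sketch is that "pending" is a distinct outcome the reporter can color differently, rather than an assertion that happens to fail.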
I am closing this, not because the discussion isn't good, but because this isn't quite the correct venue.

Typically these sorts of requests are to be provided via https://github.com/ember-cli/rfcs; I noticed the CONTRIBUTING.md doesn't say this clearly, so I have updated it accordingly: fbbb354

I welcome an issue (or fleshed-out RFC) on the RFC repo.

@stefanpenner
Contributor

Sure, testing is one of those "delayed gratification" things that is painful at the start, and sure, the ignorant may be vociferous, but: Ember's supposed to be an opinionated framework, so here I am offering an opinion: such behavior is for me a clear fit to the "convention over configuration" mantra that makes us all love Ember. Convince me (and yourselves) otherwise.

I am a firm believer in the red/green refactor flow, but am unfortunately always in the minority. I suspect the masses would revolt.

I believe my above recommendation of pending tests would be a helpful middle-ground. Thoughts?

@samcic
Author

samcic commented Jan 4, 2016

"What's in a name? A rose by any other name would still smell as sweet." That is, I don't see much difference between alerting the developer with pending - This test is not implemented yet and alerting them with failed - this needs to be implemented (aside from perhaps color and aesthetics/wording). If the aim is just to draw attention to missing/not-implemented tests, then the failed mechanism does the job fine and would save us from having to package this extra plugin (I hope I understand correctly that this qunit-pending plugin would need to be referenced to make "pending" an option).

If, on the other hand, the aim is really to introduce additional semantics to differentiate "not implemented" tests from "truly" failing implemented tests, then the pending option sounds ok. That said, call me a test Nazi ("opinionated" again), but a not-implemented test is a failure in my book...it's big and red and angry and deserves my time and attention.

With respect to the revolting masses, again my gut doesn't really agree with this mindset. If an engineering concept/pattern has proven its worth in the industry over many years, why should a framework that's trying to be progressive ("ambitious") in that industry be afraid to follow it? The masses will always be resistant to any kind of change, but if we can help them to understand the reasons for that change then they "should" appreciate it in the long run. I'm no social scientist, but I suspect such moves also help to strengthen Ember's "culture" and "opinionatedness". Maybe I'm just being ueber-optimistic and utopian though.

It's clear to me that we would need more input/consensus from the community on these broad issues before fleshing out any details on this suggestion.

@stefanpenner
Contributor

@samcic The reason I am thinking of introducing a new type (it's not really new in most test environments, but in the browser it is) is because it allows us to ship code that works for all our users, while a user could easily opt in to pending tests being failures or not.

I also believe it would allow pending to be a soft reminder that work needs to be done for those who do not opt-in to the stricter requirements.

@samcic
Author

samcic commented Jan 5, 2016

Ok, thanks for clarifying. I can of course understand your arguments for wanting to avoid a change that might display too much negative "red" stuff to users. Personally, a pending result by default would solve my problem in that it would alert me to work needing to be done, so if that's the happy middle-ground then I'm all for it.

@stefanpenner
Contributor

Now someone needs to figure out what all needs to happen to land pending support, and make it easy for people to opt in to pending === failure.

@samcic
Author

samcic commented Jan 5, 2016

As much as I'd like to get this in, are we representative enough? I appreciate you're an owner, but are there no others whose additional views/consensus on this might be useful?

@kellyselden
Member

@samcic My opinion is that I don't really care whether the first test fails or passes, because I start changing it immediately. But for our newer users, red in the console might give the impression the tool is doing something wrong. We just rolled back our CSP addition for the same reason: the red in the console was confusing and reduced faith in the tool.

Maybe when the freshest of users run their first generator and then run the tests for the first time, seeing a green test might do more good than bad?

@csantero

csantero commented Jan 5, 2016

Is there any advantage to using pending over skip? skip is already supported by QUnit out of the box.

@samcic
Author

samcic commented Jan 5, 2016

@kellyselden Most of the time I'm with you...I start changing the test immediately. That said, I really do come across cases where I'm "scaffolding" the "feel" for the skeleton of a feature with a bunch of resources and don't want to flesh out a unit test for each one as I'm going. In such cases there is clear value in having tests "remind" me that they need to be completed. Maybe I get interrupted and don't get the chance to finish what I was doing for days. Can you appreciate this line of thought?

@stefanpenner's suggestion of a yellow "pending" might be a healthy compromise between TDD idealism and newbie-friendliness. Surely the console message could be appropriately worded in these cases to not frighten people ("Warning: this test has not yet been written" or similar).

If the freshest of users run the generator, perhaps code a bit, then run the tests and realize that they pass, I think we'd be holding their hand down Antipattern street...it's at such an early stage that they need to at least think about their testing methodology.

The CSP addition is also a slightly different situation. It's relevant to the actual application code, so even people who don't care about testing at all will be affected by it. For people that are just hacking around and don't care about tests, our suggested change here wouldn't affect them.

@rwjblue
Member

rwjblue commented Jan 5, 2016

@csantero - The idea of pending is slightly different than skip. See qunitjs/qunit#787 for some additional rationale.
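The distinction between skip and pending can be illustrated with a small sketch (illustrative names, not QUnit's API): skip is a deliberate opt-out that never flips to red, while pending marks unimplemented work and can be escalated to a failure for those who opt in to stricter behavior:

```javascript
// Hypothetical classifier showing how skip and pending differ:
// - skip: intentionally not run; never becomes a failure.
// - pending: unimplemented; yellow by default, red under a strict flag.
function classify(test, { strictPending = false } = {}) {
  if (test.skip) return 'skipped';
  if (!test.impl) {
    return strictPending ? 'fail' : 'pending';
  }
  return test.impl() ? 'pass' : 'fail';
}

const generated = { name: 'it exists', impl: null };
console.log(classify(generated));                          // "pending"
console.log(classify(generated, { strictPending: true })); // "fail"
console.log(classify({ name: 'flaky test', skip: true })); // "skipped"
```

This is the middle-ground discussed above: a generated test stays a "soft reminder" unless the project opts in to pending === failure.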

@stefanpenner
Contributor

I think we'd be holding their hand down Antipattern street...

confirm

@kimroen

kimroen commented Jan 5, 2016

When you're using RSpec in Rails and generate a controller, you also get a test for it, much like with ember-cli. There, the test is pending and shows up yellow with a default message along the lines of

Please add tests for the something controller or remove this file. The test file can be found at path.

We feel this is a good way to go about it. Sometimes you just want to generate the thing, but it's good to also get a pending test as a kind of to-do list.

@samcic
Author

samcic commented Jan 5, 2016

@kimroen Yep, this kind of "to-do" generation is also seen when generating an implementation for an interface in OO languages like Java and C# (ReSharper, for example, will generate the method with a body that always contains "throw new NotImplementedException();").

@samcic
Author

samcic commented Jan 8, 2016

Another option for consideration: Out of the box, if no assertions are made in the test, the test fails with what I would consider to be an intuitive message:

Expected at least one assertion, but none were run - call expect(0) to accept zero assertions.

These failures still come up "red", but the message wording makes it clear that nothing useful has been done in the test, i.e. one could argue that it's a kind of "pending".

This option wouldn't require the use of any additional add-ons/packages. Concretely, it might simply involve commenting (or removing...resulting in fewer lines of generated code!) the existing code/assertion that is made in the generated tests, something like:

test('it exists', function(assert) {
    // e.g.
    // var service = this.subject();
    // assert.ok(service);
});

This way we're simply exploiting QUnit's out-of-the-box (native) handling of an empty test case, which it defines to be a failure, and which dumps an understandable and sensible message in this context.
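QUnit's zero-assertion check can be sketched as follows (a simplified model, not QUnit's internals; `runQUnitLikeTest` is a hypothetical name): after the test body runs, a test that made no assertions and never declared expect(0) is reported as a failure with the message quoted above.

```javascript
// Simplified model of QUnit's zero-assertion failure behavior.
function runQUnitLikeTest(name, body) {
  let assertions = 0;
  let expected = null;
  const assert = {
    ok(value) { assertions += 1; if (!value) throw new Error('not ok'); },
    expect(n) { expected = n; },
  };
  try {
    body(assert);
  } catch (err) {
    return { name, outcome: 'fail', message: err.message };
  }
  if (assertions === 0 && expected !== 0) {
    return {
      name,
      outcome: 'fail',
      message: 'Expected at least one assertion, but none were run - ' +
               'call expect(0) to accept zero assertions.',
    };
  }
  return { name, outcome: 'pass' };
}

// A commented-out blueprint body fails "out of the box"...
console.log(runQUnitLikeTest('it exists', function (assert) {
  // var service = this.subject();
  // assert.ok(service);
}).outcome); // "fail"

// ...while an explicit expect(0) accepts the empty test.
console.log(runQUnitLikeTest('empty ok', (a) => a.expect(0)).outcome); // "pass"
```

This is why simply commenting out the generated assertion would be enough to surface unfinished tests without any extra add-on.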

@samcic
Author

samcic commented Feb 9, 2016

Given the latest state of this discussion, does anybody have any objection to me submitting a pull-request to ember-cli that simply comments out the assertions in all the test blueprints (model, service, component, mixin etc)?

Practically this would mean that newly-generated tests would not run assertions in their unedited state. This means that, based on qunit's default behaviour, these tests would fail with the "Expected at least one assertion..." message (and hence increase the project's recommendation for developing with a mindset more along the lines of TDD, red-green refactor cycles etc).

@stefanpenner
Contributor

That would be a very large undertaking, as our test suite actually uses this to test that basic blueprints work.

As mentioned above, if someone wants to introduce the first-class concept of pending, I believe we would settle on that.

Pending is totally practical for all parties, both those who prefer TDD and those who do not.

@Turbo87
Member

Turbo87 commented Feb 19, 2016

I think this will get more difficult now that some of the test blueprints have been moved to ember-data and the others are supposed to be moved to ember itself.

@rwjblue
Member

rwjblue commented Jan 19, 2019

We are working on closing the ember-cli/rfcs repo in favor of using a single central RFC's repo for everything. This was laid out in https://emberjs.github.io/rfcs/0300-rfc-process-update.html.

Sorry for the troubles, but would you mind reviewing to see if this is still something we need, and if so migrating this over to emberjs/rfcs?

@rwjblue rwjblue closed this as completed Jan 19, 2019