Data-generated test cases (parameterized tests) #1454
This should be done like so, imo:

```js
describe('prime number checker', function() {
  [2,3,5,7].forEach(function(value) {
    it('returns true for prime number ' + value, function() {
      expect(isPrime(value)).to.be.true();
    });
  });
});
```

And it seems that chai could add, might even already have, a matcher that would take an array and apply a fn against it - in this case isPrime. Thanks for the issue, but I'm gonna close. I don't think an API for this should be in mocha; it should be in chai et al., if anything. |
With all my respect, the proposed solution sucks. First of all, the error message will be "expected false to be true", which is absolutely unhelpful; it can be worked around. Secondly, the first failure will stop the test case, therefore you won't know whether it is only one data point failing the test, or whether there are more of them. Is there a way to implement this feature as a mocha plugin? |
Also note that parameterized tests are a common feature of unit-testing frameworks.
|
This may help in simple examples like the one I described above, but it becomes cumbersome as the test method grows in length. Example:

```js
it('should return custom 404 response',
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    supertest(app).get(url)
      .expect(404)
      .expect('Content-Type', /html/)
      .end(function(err, res) {
        if (err) return done(err);
        expect(res.body).to.contain('<title>Lost your way?</title>');
        done();
      });
  }
);
```
|
Let me rephrase that. The error message "expected false to be true" is very unhelpful, as it lacks the context: what value was being tested? This can be worked around. However, it's the second problem (the first failure aborts the test case) which is most important. And again, most developers are not aware how important it is to get this right and will happily do things like:

```js
it('returns true for prime numbers', function() {
  expect(isPrime(2)).to.be.true();
  expect(isPrime(3)).to.be.true();
});
```

While my proposed solution will not prevent them from doing that, it will at least make it super easy to fix their test code to do the right thing:

```js
it('returns true for prime numbers', [2, 3], function(num) {
  expect(isPrime(num)).to.be.true();
});
```

@travisjeffery could you please elaborate more on why you are rejecting my idea, and perhaps offer advice on how I can get this feature without forking mocha, e.g. by modifying mocha to allow plugins to customize |
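For illustration, the proposed `it(title, dataPoints, fn)` signature can already be approximated as a userland wrapper. This is a hypothetical sketch, not mocha's API: `mochaIt` stands in for mocha's real `it`, and `isPrime` is a toy implementation.

```javascript
var registered = [];
function mochaIt(title, fn) {
  // Stand-in for mocha's real `it`: just records the test.
  registered.push({ title: title, fn: fn });
}

// Sugar: it(title, dataPoints, fn) expands into one test per data point.
function it(title, data, fn) {
  if (typeof data === 'function') {
    return mochaIt(title, data); // plain two-argument form, unchanged
  }
  data.forEach(function (value) {
    // The value goes into the title, so a failure names the offending
    // data point and the remaining points still run as separate tests.
    mochaIt(title + ' (' + value + ')', function () {
      return fn(value);
    });
  });
}

// Toy implementation for the sketch.
function isPrime(n) {
  if (n < 2) return false;
  for (var i = 2; i * i <= n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

it('returns true for prime numbers', [2, 3], function (num) {
  if (!isPrime(num)) throw new Error(num + ' reported as non-prime');
});
```

This addresses both objections at once: each generated test has a distinct, informative title, and one failing value does not abort the rest.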
Good points there @bajtos. I'm not sure I dig adding optional parameters to `it`. Some other options:

```js
var SAMPLES = [
  { scheme: 'http', host: 'localhost', path: '/', url: 'http://localhost/' },
  { scheme: 'https', host: '127.0.0.1', path: '/test', url: 'https://127.0.0.1/test' }
];

describe('url helper', function() {
  it('builds url from components', { using: SAMPLES, tags: [tag1, tag2, tag3] }, function(data) {
    var str = urlHelper.build({ scheme: data.scheme, host: data.host, path: data.path });
    expect(str).to.equal(data.url);
  });

  using(SAMPLES).it('builds url from components', function(data) {
    var str = urlHelper.build({ scheme: data.scheme, host: data.host, path: data.path });
    expect(str).to.equal(data.url);
  });
});
```
|
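The `using(SAMPLES).it(...)` option above can be sketched in userland without touching mocha core. This is a hypothetical sketch, not mocha's API: `registerTest` stands in for mocha's global `it`, and the URL building is a toy stand-in for `urlHelper.build`.

```javascript
var tests = [];
function registerTest(title, fn) {
  // Stand-in for mocha's global `it`: just records the generated tests.
  tests.push({ title: title, fn: fn });
}

// Hypothetical using(samples) helper: sugar over the forEach pattern.
function using(samples) {
  return {
    it: function (title, fn) {
      samples.forEach(function (data) {
        // Each data point becomes its own test with a distinct title,
        // so one failing sample does not hide the others.
        registerTest(title + ' ' + JSON.stringify(data), function () {
          return fn(data);
        });
      });
    }
  };
}

// Usage with the SAMPLES from the comment above.
var SAMPLES = [
  { scheme: 'http', host: 'localhost', path: '/', url: 'http://localhost/' },
  { scheme: 'https', host: '127.0.0.1', path: '/test', url: 'https://127.0.0.1/test' }
];

using(SAMPLES).it('builds url from components', function (data) {
  var str = data.scheme + '://' + data.host + data.path; // toy urlHelper.build
  if (str !== data.url) throw new Error('expected ' + data.url + ', got ' + str);
});
```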
@dasilvacontin Yeah, I was thinking about the tags more. I don't think adding more parameters is the answer to that one, but in a nutshell a syntax like:

```js
this.tags = ['foo', 'bar', 'baz'];
```

is much more palatable. Anyway, regarding this:

```js
it('returns true for prime numbers', [2,3,5,7], function(data) {
  expect(isPrime(data)).to.be.true();
});
```

What's the proposal for handling async tests in this manner? I'd imagine "more parameters".

```js
describe('prime number checker', function() {
  [2,3,5,7].forEach(function(value) {
    it('returns true for prime number ' + value, function() {
      expect(isPrime(value)).to.be.true();
    });
  });
  [4,6,8,9].forEach(function(value) {
    it('returns false for composite number ' + value, function() {
      expect(isPrime(value)).to.be.false();
    });
  });
});
```

The above seems pretty similar to param-test, though I don't read Ruby. @bajtos I'm not sure if it's possible to write a "plugin" that does this, but the correct way would be an interface that likely "extends" the BDD interface. If you can supply a third-party interface on the command line (I can't recall if you can), then you can use whatever you come up with. If you can't, then you should be able to, which would make a great PR. |
The issue here is that you have a huge fn defined in the midst of your test. Just create and use a named fn. E.g. you could write your own interface that supports an array parameter. |
In a nutshell: it's not essential for Mocha core. |
@dasilvacontin I like the `using(SAMPLES).it('builds url from components', function(data) {...})` option. See the supertest example in my comments above: when the test case is parameterized, an async test function takes two parameters.

```js
it('should return custom 404 response',
  ['/Products/unknown-id', '/unknown-root', '/images/unknown.jpg'],
  function(url, done) {
    // etc.
  });
```
In my experience, it is quite common for test functions to have 5-10 lines of code. There are two ways to clean up the test code: either extract a shared function, or write a parameterized test. Both approaches are valid. Your proposal involves too much unnecessary repeated boilerplate:

```js
it('should return custom 404 response for /Products/unknown-id', function(done) {
  assertUnknown('/Products/unknown-id', done);
});
it('should return custom 404 response for /unknown-root', function(done) {
  assertUnknown('/unknown-root', done);
});
it('should return custom 404 response for /images/unknown.jpg', function(done) {
  assertUnknown('/images/unknown.jpg', done);
});
```

Compare it with what I am proposing:
Anyways, your proposal to write a custom reporter is a reasonable workaround. Honestly, I am getting tired of fighting what I see as bad decisions on mocha's side, and I will most likely find another test framework that is closer to my mindset and contribute there. Here are some of the other issues I have with Mocha, in case you are interested: #1218, #1401, #1065. |
It's an interface, not a reporter. See bdd.js for a starting place.
What decisions? Feel free to email me, or join us in the Mocha Slack room to discuss (email Travis if you would like to join Slack), if you don't wish to do so here. I can't promise any resolutions, but statements like this beg for more information. None of these tickets are closed; they are pending review and/or action. Nothing has really been "decided" here. |
That was a typo on my side, thanks for correcting me and adding a link to the source file.
Well, maybe "decision" was not the right word. My impression is that issues that are important to a certain subset of mocha users, like me, are not given the same importance by mocha maintainers.
Ad #1065: I submitted pull request #949 first. Since it was rejected, I created an issue to discuss the best way of addressing the problem. The issue has been open for a year without any comment from project maintainers. In the meantime, a partial solution landed via #993. Now I know very well (from my own experience) that you can't comment on all issues. However, I would appreciate it if you could at least comment on issues that follow up on rejected pull requests, as such issues are likely to turn into another contribution (pull request). Ad #1218: This is super annoying for anybody who is using the Jenkins JUnit integration, as any Ad #1401: I see that you are going to merge this one soon, @boneskull, thank you for that. As I was thinking more about my feelings about mocha, I realised it may be the communication style I find most off-putting. When I try to scratch my mocha-related itch and contribute a patch, I usually end up with a nonconstructive rejection that does not give me any alternative solution for fixing my problem. This creates an impression that mocha maintainers don't understand mocha users and the real-world issues they are facing. |
I share @bajtos's feelings about mocha. For a long time, mocha served me really well, but at some point I started disagreeing with the project direction. It is barely usable now from my side, mostly related to #1401. I usually write tools/modules, but it seems that mocha is mostly focusing on applications now. I think I have more than 10k tests to maintain written in mocha, and I'm starting to think I should care more about this project, but it does not feel like a welcoming community. I wrote my own thing at least 10 times for this particular issue, but here is the least ugly one: https://github.com/nearform/nscale-planner/blob/master/test/integration/integration.js. IMHO it's superior to all the other proposals, but it's my taste. I also think it can be supported here:

```js
var custom = it.gen("should be generic with", function(a, b) {
  //....
})

custom(1) // the test is reported as "should be generic with 1".
custom.skip(2)
custom.only("a string", 3) // the test is reported as "should be generic with a string".
```

Also, it can be applied to |
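A rough userland approximation of the `it.gen` idea above. This is a hypothetical sketch, not a real implementation: `record` stands in for mocha's `it`, and the `only`/`skip` semantics are simplified (a real `only` would also filter the run to that test).

```javascript
var generated = [];
function record(title, fn) {
  // Stand-in for mocha's `it`; a real version would need a custom interface.
  generated.push({ title: title, fn: fn });
}

// Sketch of the proposed it.gen(title, body) factory.
function gen(baseTitle, body) {
  function custom() {
    var args = Array.prototype.slice.call(arguments);
    // The first argument is appended to the title, as in the proposal.
    record(baseTitle + ' ' + args[0], function () {
      return body.apply(null, args);
    });
  }
  custom.skip = function () {
    // Register nothing: the data point is skipped.
  };
  custom.only = custom; // simplified: no filtering in this sketch
  return custom;
}

var custom = gen('should be generic with', function (a, b) {
  if (typeof a === 'undefined') throw new Error('missing argument');
});

custom(1);                  // reported as "should be generic with 1"
custom.skip(2);             // skipped
custom.only('a string', 3); // reported as "should be generic with a string"
```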
@mcollina @bajtos Can you share some examples of where you've felt unwelcome?
@mcollina What makes you say that?
@bajtos If a critical issue (to you) has lingered for a long time, it's probably because we don't understand how many people it affects, and/or don't fully understand its severity. If something gets stale, it (often) gets closed. If something keeps getting bumped, it typically gets more attention, but that's not always the case (explanation follows). I'm going to ramble for a bit. There are two (2) active maintainers of this project: @travisjeffery and I. After TJ left, there was an influx of new collaborators, but many have dropped out. If either of us is terse when responding to issues, it's about all we can muster. I have maybe three (3) hours each week to devote to this project. Most of that time is taken up responding to issues. At present, my TO-DO list for Mocha is 158 items long, and I don't have a great idea about where to start if I do get a chance to implement something. Because we are so thin, we can't really afford to merge many features. Eventually those features will create more bugs. And we don't have time for more bugs. So, the priority right now for us is to ensure critical issues are addressed. But unless there's a flurry of activity on an issue, we won't necessarily know how critical it is. So, please excuse our reluctance to merge feature PRs. If there's any viable workaround, that's pretty much what's necessary right now. There are two main problems here, as I see it:
To address 1., we need more help. Presently we need help managing issues, reviewing PRs, and bringing critical issues to attention. This person (or persons) would have to actually enjoy "project management", or else they won't last long. If they can code, that's great too. This potential collaborator would need to understand that resources are very limited. I'm thinking maybe a splash on the README and a post to the Google Group announcing we're looking for help in this area. Any other ideas? Regarding 2.: I did add CONTRIBUTING.md recently to explain a bit about the project's status, but I think that was insufficient, or maybe not worded well. Perhaps when closing PRs, we can be more courteous and reiterate the current needs of the project. Instead of a public declaration, individual attention would probably alleviate some pain. Users get upset when you don't merge their PRs. I think part of what contributes to this is that users do not have a clear, documented way to develop a plugin, so they feel like the only way their feature will ever see the light of day is if it gets merged. If a well-documented plugin API were published, then users could write all the weird features they want for Mocha and keep them out of the core. Nobody gets hurt, everybody's happy. I think this is a great idea, but where to prioritize it next to the 158 other things? Any further comments or suggestions would be appreciated. |
@boneskull it's a feeling from my side of things. Every time I am writing a feature that is hard (like most of LevelGraph, nscale, Mosca, MQTT.js), I run into issues with Mocha, so that it is not simple, flexible and fun (as in the tagline) anymore. Every time I test an application with Mocha I feel "this is so great", and I recommend it in courses/etc. It is either missing a feature that I need, with no simple and easy way to publish that as a module, or it has some annoying bug (like #1401), or I have to resort to ugly and non-reusable hacks to make things work.
You wrote that yourself, in more in-depth way than I ever could
You state "we will not likely accept your PR" in the CONTRIBUTING.md file, but there is no easy way for people to develop the feature they need as a plugin. This makes people feel unwelcome. Funding OSS is hard, and I understand your problems, as these are often mine. However, so many companies and developers have an interest in mocha continuing to do well, and I believe finding new contributors is possible. I think that saying "we need help" out loud is definitely important. I will try my best to help. Side note: you might consider applying for GSoC as an org, and hopefully let Google sponsor a young fellow developer to build and document that plugin API. |
@boneskull Thank you for sharing this. I did not realise how short on time you are. In hindsight, I should have assumed from the start that the lack of time was the reason for terse comments, instead of suspecting some sort of intentional malice. I apologise for that.
I totally agree, now that I understand your situation.
Perhaps you can start encouraging people to write their features as plugins using the existing APIs, even though they are not great, and let an official plugin API emerge from that work? I sent a PR adding a note about plugins to CONTRIBUTING.md: #1459
+1000 for that. IMO even a generic comment that you copy & paste every time you reject an issue/pull request would help a lot. Something along the lines of: We appreciate your contribution. However, given how little time we have to maintain the project, we are accepting only critical bug fixes and absolutely essential features. We fully understand this is not great and you may be unhappy that your problem was not solved. Here are a few alternative ways to get what you need: 1) rewrite the feature as a mocha plugin; 2) if you are fixing a bug in a non-essential component, consider extracting the component into a standalone plugin and fixing the bug there; 3) help us with triaging and managing issues so that we have more time left to work on the code.
Add a banner to the website (http://mochajs.org/). Make sure the plea includes the information from your comment ("Presently we need help managing issues, reviewing PRs, and bringing critical issues to attention. This person or person(s) would have to actually enjoy "project management", or else they won't last long. If they can code, that's great too."). Frankly, I have other OSS projects where I have a hard time keeping up with issues and pull requests, so unfortunately I can't offer much more help, even though I would like to :( |
TL;DR: how can I use parametrized tests? The only solution I found is using |
Something like this can work :)

```js
var assert = require('assert');

describe('suite', function() {
  [1, 2, 3].forEach(function(n) {
    it('correctly handles ' + n, function() {
      assert(n);
    });
  });
});
```

Output:

```
suite
  ✓ correctly handles 1
  ✓ correctly handles 2
  ✓ correctly handles 3

3 passing (7ms)
```
|
Very important:
|
What problems?

```js
[1, 2, 3, 4].forEach(function (val) {
  describe('test ' + val, function () {
    it('bla', function () {
    })
  })
})
```
|
I agree with @dasilvacontin, this is the approach we use and it works reasonably well. Not sure if polluting mocha's API with parameterization is necessary at all, TBH. |
FWIW, @dasilvacontin's solution only appears to work, but the
|
@btelles What version of mocha are you using? Cause it works for me.

```
$ cat test.js
[1, 2, 3, 4].forEach(function (val) {
  describe('test ' + val, function () {
    it('bla', function () {
      console.log(val);
    })
  })
})

$ mocha --version
2.4.5

$ mocha test.js

  test 1
1
    ✓ bla

  test 2
2
    ✓ bla

  test 3
3
    ✓ bla

  test 4
4
    ✓ bla

  4 passing (10ms)
```
|
@danielstjules Works here as well. @btelles is that really the code you are using? From the output it looks as if you were referencing an iterator variable – once the |
FWIW, there's a module available I wrote called Unroll which provides parameterized tests like the above, but with named parameters which are also reflected in the test title. Useful if you want the test titles to be informative with regard to the parameters. |
Mocha lacks built-in support [1] for writing parameterized tests and the suggested solution [2] involves a bunch of boilerplate* which has IMO resulted in different styles of parameterized tests in our codebase and not having parameterized tests when they would be useful to attain more complete coverage. This adds a helper inspired by [3] for writing parameterized tests and switches several existing places in our code to use it. * Though less with ES2015 syntax. [1] mochajs/mocha#1454 [2] https://mochajs.org/#dynamically-generating-tests [3] https://github.com/lawrencec/Unroll
I know this is old but there is now a really simple npm package to make this easier: mocha-param |
@mikejsdev I don't understand the reasoning behind mocha-param when we can use: https://mochajs.org/#dynamically-generating-tests. The latter allows for unique names for each test, which is pretty huge. Presumably we can use this same dynamic technique to stamp out multiple describes too (although I have not tried that.) Can you help me understand the use cases where mocha-param is preferable to dynamically generating tests? |
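On the question of whether the dynamic technique can also stamp out multiple `describe`s: the same forEach approach does work one level up. A self-contained hypothetical sketch (the protocol data is invented for illustration, and the `describe`/`it` stand-ins run eagerly here, whereas real mocha defers execution to the runner):

```javascript
// Stand-ins for mocha's globals so the sketch is self-contained.
var titles = [];
function describe(title, fn) { titles.push(title); fn(); }
function it(title, fn) { titles.push('  ' + title); fn(); }

// Stamping out whole describe blocks from data, one suite per protocol.
[{ name: 'http', port: 80 }, { name: 'https', port: 443 }].forEach(function (proto) {
  describe(proto.name + ' defaults', function () {
    it('uses port ' + proto.port, function () {
      var expected = proto.name === 'http' ? 80 : 443;
      if (proto.port !== expected) throw new Error('unexpected default port');
    });
  });
});
```

Each suite and each test gets a unique, data-derived name, which is exactly the advantage over a fixed-name helper.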
I agree with @binduwavell. To all the people who are writing modules and publishing them, like @mikejsdev:
|
Hate to dig this one back up, but I wonder if being able to specify parameters in such a way that the mocha context is aware of them could improve support for use of |
Any way to do something like |
Just my two cents, but coming from the Python world, the syntax proposed in https://mochajs.org/#dynamically-generating-tests looks very poor to me:
A (hypothetical) syntax that I would find superior would be:
|
At the moment, generating multiple test cases that differ only by input data is rather cumbersome:
I am proposing to extend the current API with syntactic sugar for data-generated test cases by adding an optional second parameter to both `it` and `describe`; this parameter will be an array of data points to test. More advanced example:
I am happy to contribute the implementation.
The purpose of this issue is to discuss necessary details and to get maintainers' approval before I start writing code.
Related to #57.
/cc @mcollina