[tests] Improve the test suite #3330
Basic example test for a component (could do with style assertions where necessary too)
What do you think about adding some helpers that help confirm basic expectations? You know the library a lot better than I do: are there some common assertions throughout the lib that we'll want to make for components, and that might get tedious to repeat over and over in tests? We could provide helpers for these common assertions, so users could write things like this (contrived example): `it('renders children', rendersChildren(Avatar));`
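As a rough illustration (not from the branch discussed here), a shared helper along those lines might look like the following sketch, assuming mocha, chai, and enzyme's shallow rendering; the file path and helper internals are hypothetical:

```js
// test/utils/assertions.js -- hypothetical location for shared test helpers
import React from 'react';
import {shallow} from 'enzyme';
import {assert} from 'chai';

// Returns a mocha test body asserting that a component renders its children.
export function rendersChildren(Component) {
  return () => {
    const child = <span className="unique-child" />;
    const wrapper = shallow(<Component>{child}</Component>);
    assert.ok(wrapper.contains(child), 'expected the component to render its children');
  };
}
```

A test file could then reduce the repeated boilerplate to a single line, e.g. `it('renders children', rendersChildren(Avatar));`.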
Another note: some components are currently exported by default wrapped in a HoC. Would it be okay to add a named export for these modules so we can access the plain component for unit tests? It would significantly increase the ease of unit testing them.
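For concreteness, a minimal sketch of that named-export pattern (the HoC name here is a placeholder, not the library's actual wrapper; Subheader is one of the components mentioned later in this thread as being wrapped):

```js
// Subheader.js -- sketch: expose the plain component alongside the wrapped default
import React from 'react';
import wrapWithContext from './wrapWithContext'; // placeholder for whatever HoC is applied

// Named export: the plain component, easy to unit test in isolation.
export class Subheader extends React.Component {
  render() {
    return <div {...this.props} />;
  }
}

// Default export: the wrapped component that consumers import today.
export default wrapWithContext(Subheader);
```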
I updated the example above to use JSX.
@nathanmarks This looks really really good!
Yeah, that would be useful. My gut feeling is to hold off on creating too many helpers until we start writing tests (and seeing patterns). The project is going through so much refactoring that I'd prefer to consider it with a fresh perspective. That being said, the first area that popped into my mind where a helper could be useful is that kind of common check.
That's a good point and a good idea. /cc @oliviertassinari @alitaheri @mbrookes -- this would impact the migration work as well.
@newoga Agree with what you said (hold off and wait for patterns to emerge). FYI, I ran into a frustrating problem along the way. @mbrookes The browser tests (where the current tests are sitting in my branch) would be a good place to flesh out some acceptance testing with a real DOM.
@nathanmarks That's an interesting question. My gut says for now to include those types of "integration" tests with the most closely related top-level component (in this case, Table). I'm open to other approaches, though; I'm thinking this shouldn't affect too many of our components (and if it does, maybe there's a design problem with those components).
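To make that idea concrete, here is a sketch of keeping such an "integration" test on the top-level component; the import paths and child components are illustrative, and it assumes enzyme:

```js
import React from 'react';
import {shallow} from 'enzyme';
import {assert} from 'chai';
import Table from './Table';       // illustrative paths
import TableRow from './TableRow';

describe('<Table /> integration', () => {
  it('renders the rows it is given', () => {
    const wrapper = shallow(
      <Table>
        <TableRow />
        <TableRow />
      </Table>
    );
    // The cross-component expectation lives on Table, the top-level component.
    assert.strictEqual(wrapper.find(TableRow).length, 2);
  });
});
```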
@newoga Sounds good to me. I am happy with that approach. One thing RE browser testing: it would be great if someone more familiar with the library could identify the parts of the codebase (especially UI interactions) that would really benefit from real browser testing. (Side note -- leveraging the already-maintained docs site for some basic browser acceptance testing might be food for thought.)
@newoga @mbrookes @oliviertassinari @alitaheri What would be great is if you guys could decide on some goals or examples you'd like to see achieved (or anything you don't like that you want to see changed) before I start preparing a PR. 100% understand if you need time to review and think -- I'm just eager to get the unit testing situation in a better position for success! Also wondering if anything should/needs to wait until certain refactoring decisions are set in stone. Thoughts?
Honestly, I'm very happy with where we're at with it so far. I think you're close enough to open up a PR, and we can always come back to it after merging and improve it more as we learn what's lacking. @oliviertassinari @alitaheri @mbrookes Any thoughts about https://github.com/nathanmarks/material-ui/tree/test-suite-overhaul before @nathanmarks opens a PR?
@newoga I definitely have a couple of things to verify and clean up, and I intend to add some more examples. But my thought process is: the sooner we get this in here, the sooner others can contribute to tests. I'm really hoping that being able to do `$ npm run test:unit:watch -- Avatar` will encourage use, even if just to verify the basics to begin with! We should definitely write a very strong test (with style assertions and examples of testing click event callbacks, etc.) with some notes/justifications, and mention it in the contributing guide as an example of how to use the unit tests to their full potential. There's also the matter of conventions. With all the refactoring of the file structure going on, what are your thoughts? I'd like to name unit test files after their component names, like I did in the branch.
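For reference, a sketch of what such a "strong" test could cover; the component, import path, and default values here are illustrative, assuming enzyme and sinon:

```js
import React from 'react';
import {shallow} from 'enzyme';
import {spy} from 'sinon';
import {assert} from 'chai';
import Avatar from './Avatar'; // illustrative path

describe('<Avatar />', () => {
  it('merges user styles over the defaults', () => {
    const wrapper = shallow(<Avatar style={{backgroundColor: 'red'}} />);
    // Style assertion: the user-supplied value should win over the default.
    assert.strictEqual(wrapper.prop('style').backgroundColor, 'red');
  });

  it('calls the onClick callback when clicked', () => {
    const handleClick = spy();
    const wrapper = shallow(<Avatar onClick={handleClick} />);
    wrapper.simulate('click');
    assert.strictEqual(handleClick.callCount, 1);
  });
});
```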
I agree and would like to get that process started as well.
Yeah, I really wish we were finished with the directory changes. If we were, my vote would be to put the tests alongside the components.
As much as I'm looking forward to getting the directories cleaned up and consistent, I'm leaning towards option 1 for now. Note: for the command-line options, one approach would be to handle the filtering in the test runner itself.
Actually, looking at your setup again: if we're using mocha's grep option, where the test file actually lives shouldn't matter, since it should be matching against the test name.
@newoga Yup, so we can pretty much set up the tests however we want. I just want some sort of convention to follow so there is consistency (which makes it easier to change later, too). I'm anticipating having to add a little more sophistication to what I'm passing to mocha, but I also have the option of just doing that in the glob very easily, since mocha is being run programmatically rather than via the mocha CLI command. That option may be easier, and if that's how you anticipate the refactoring will end up changing the main lib, the tests will also drop nicely into place.
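A sketch of what that programmatic runner could look like; the `*.spec.js` naming, file paths, and glob root are assumptions rather than the branch's actual setup:

```js
// test/run-unit.js -- hypothetical programmatic mocha runner
const Mocha = require('mocha');
const glob = require('glob');

// Optional component filter from the CLI, e.g. `node test/run-unit.js Avatar`.
const filter = process.argv[2] || '*';

const mocha = new Mocha();
// Narrow the file set via the glob instead of mocha's grep option.
glob.sync(`src/**/${filter}.spec.js`).forEach((file) => mocha.addFile(file));

mocha.run((failures) => {
  process.exitCode = failures ? 1 : 0; // non-zero exit when any test fails
});
```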
That makes sense! Let's go with that.
@nathanmarks Feel free to open a PR when you're happy with the implementation. I think this is ready (or close to it). 👍
@newoga Awesome. I will do so soon then. Once/if we merge, would you consider enabling Coveralls for the repo so we can have a nice GUI for looking at the current unit test coverage on master? I can configure the .travis.yml to submit the lcov report to Coveralls. (Will need someone with access to add a couple of environment variables to the Travis settings, though.) Also -- this is where you may want to think about my Q earlier RE combining the reports. Otherwise, in something like Coveralls we can only choose a single report (unit, browser, etc.) to submit.
Yeah, that's a good point; I didn't see the impact on those coverage services. Let's combine them, but I'll leave it up to you whether you want to figure it out for this PR or submit it as a new one. I'm fine with either 👍 After the testing is in place, we'll sign up for one. @oliviertassinari showed me an example of the PR coverage report comments that Codecov can make when developers submit PRs. I think that feature is pretty nifty.
Nice, yeah, Coveralls has a similar thing too. We can leave that until there is actually some coverage to maintain for most of the components, as you mentioned 👍 I'm going to try to get the combined output generated for this PR as well.
I'm not that familiar with the code base, but I see some stuff that would benefit from real browser testing:
[screenshot of a visual regression in a webview]
@oliviertassinari More so than I am ;) Is that a Chrome webview (Cordova, PhoneGap)? RE visual regressions: are they visual regressions with the same CSS, or with different CSS in the output? If so, this is where some acceptance testing (perhaps using the documentation site) could be useful. IIRC BrowserStack offers some free automation plans for open source.
Yes, this is a Chrome webview.
Indeed, I have seen some cool tools that can help. For instance, http://webdriver.io/guide/plugins/webdrivercss.html.
I'm not sure I understand. That could be linked to inline style changes or to component structure changes.
I agree. I think all our documentation examples are great candidates to write tests for.
@oliviertassinari Sorry, let me clarify. We can make assertions on certain core styles without webdriver/screenshots/etc. These types of assertions are useful so that users don't have to run the whole E2E visual screenshot-diff test setup to catch core styling screwups. However, having a visual diff on the documentation site + examples sounds like a great way to ensure E2E visual consistency across the whole lib. (And this ensures that structural changes aren't causing unwanted side effects, too.) As an added bonus, it will help encourage keeping the docs site full of rich examples ;)
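As a tiny illustration of a core-style assertion that needs no browser (the default value asserted here is made up, and it assumes enzyme):

```js
import React from 'react';
import {shallow} from 'enzyme';
import {assert} from 'chai';
import Avatar from './Avatar'; // illustrative path

it('keeps the core circular shape', () => {
  const wrapper = shallow(<Avatar />);
  // '50%' is an assumed default; the point is that core inline styles
  // are assertable without screenshots or a real browser.
  assert.strictEqual(wrapper.prop('style').borderRadius, '50%');
});
```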
@newoga Working to a Friday deadline, then I'll prep the PR ;) Side note: I've been using this lib for an internal tool and I've found a number of performance problems (some of which I've seen mentioned in various issues). So after this PR, that will be my priority rather than expanding lists. I've also got some ideas for how to integrate certain performance benchmarks into the test suite.
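Since the specific tooling isn't spelled out above, here is only a generic sketch of the benchmark idea: time a batch of renders and report it alongside the test run. The component and counts are illustrative, and it requires a DOM (e.g. the browser test run):

```js
// A generic render benchmark sketch -- not the tooling referenced above.
import React from 'react';
import ReactDOM from 'react-dom';
import Avatar from './Avatar'; // illustrative

const node = document.createElement('div');
const start = Date.now();
for (let i = 0; i < 1000; i += 1) {
  ReactDOM.render(<Avatar key={i} />, node); // changing the key forces a full remount
}
console.log(`1000 Avatar mounts in ${Date.now() - start}ms`);
```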
That's an excellent idea 👍
Well, that's not a surprise 😕.
@nathanmarks Thanks much, looking forward to it!
Sounds good, my story here is similar in that I was originally looking to upgrade a component and ended up working on a ton of other things instead 😆. When you get a chance, could you still open a new issue for the list animation implementation? It'd be good to document, and we might have someone who could tackle it, and you could provide some high-level guidance for it.
Awesome, glad to hear it! That will be crucial in evaluating how our code changes are impacting performance (positively or negatively). Let's do it! 😄
@newoga Did you get feedback RE exposing the plain components via named exports for unit testing?
@nathanmarks Yes, I responded here, unless I am missing something. 😄 I also touched on this topic in another issue, and we agreed that we will export plain components one way or another. We'll probably address this while reorganizing directories, but right now it should only impact a handful of components (Divider and Subheader are the only two I can think of at the moment, and they're pretty simple components).
@newoga Sorry -- I know you had mentioned it. I just noticed that the other guys didn't respond to your @-mention.
Oh okay, no worries! I spoke with them earlier about your testing branch and they are all fully in support of it (and I think they're just letting us take the lead on this 😝). As it relates to the plain exports, we briefly touched on the topic here. There are a lot of moving parts with the work in the project right now, but we're all heading in the same direction. Right now, I'm focusing on removing some React anti-patterns (or things that may be deprecated in the future) to help future-proof the project. Though I'm really looking forward to getting the testing changes in so we can start working on getting coverage for the components. If you have any blockers related to the project or anything that's holding you up, let me know and we can see if we can get that out of the way. Thanks @nathanmarks!
@newoga Sounds good! No blockers now -- will issue a PR tonight. Can you point me to the issues RE the anti-patterns you are focusing on removing? I'm interested in checking them out.
Generally speaking, the ones that I'm currently focused on (the ones on the ROADMAP) are:
After those first two items are done, we should be able to convert the codebase to ES6 classes when we're ready to do so. Though I may try to take a stab at coming up with a plan for the directory reorg before taking that on.
@newoga Quick one: I had initially set this up with split coverage. The browser (PhantomJS) tests automatically generate coverage on every run, and the unit tests have a separate command to generate coverage by instrumenting with isparta. At the moment, both are dumping html and lcov output to their own folders. I'm about to work on combining the coverage reports. Should we shift the individual coverage reports to just output the summary in the console, and use the combined report for the full output?
Alternatively, we can still output to the individual folders. The suggestion to just ditch the full output for the individual reports was to avoid noise, but it doesn't really matter, since the folders aren't checked into git anyway. I'll go with this for now; we can always remove them later if it confuses people (it shouldn't, though).
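One detail worth noting: lcov's `.info` output is a plain line-oriented text format, so per-suite reports can simply be concatenated before submission. A sketch, with hypothetical report paths:

```js
// scripts/combine-coverage.js -- paths are assumptions, not the branch's actual layout
const fs = require('fs');

const parts = [
  'test/coverage-unit/lcov.info',    // hypothetical unit report
  'test/coverage-browser/lcov.info', // hypothetical browser report
];

// Concatenate the per-suite lcov reports into a single combined report.
const combined = parts.map((p) => fs.readFileSync(p, 'utf8')).join('\n');
fs.writeFileSync('test/coverage/lcov.info', combined);
```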
@nathanmarks Yeah, that sounds like a good plan to me! And I think the names of those folders make it pretty clear too. 👍
This was resolved by #3405.
As per the discussion in #3241, this is my plan (to help reach the goal of unit test coverage for most components):
You can take a look at my progress here: https://github.com/nathanmarks/material-ui/tree/test-suite-overhaul
So far I have:
The test folder now looks like this (if the coverage has been generated, which is `.gitignore`'d):

Currently, I have these commands set up for running the various tests: