
[tests] Improve the test suite #3330

Closed
nathanmarks opened this issue Feb 13, 2016 · 36 comments
@nathanmarks
Member

As per the discussion in #3241, this is my plan (to help reach the goal of unit test coverage for most components):

  • Make testing quicker and easier. This will remove part of the barrier to testing and will help encourage contributors to get on board with testing. As a result of having better testing, the quality of this library will increase.
    • Provide a way to unit test components by using shallow rendering (and basic integration testing via "full" rendering using a virtual DOM renderer)
    • Unit tests should be quick and easy to run for individual components
  • Move the existing phantomjs tests to a browser test folder. Keep the current karma setup and use the browser test suite as a way to confirm important browser behaviour. This also leaves the door open to cross-browser compatibility tests as part of the test suite.
    • Update karma to use webpack instead of browserify
    • Fix the coverage reporting

You can take a look at my progress here: https://github.com/nathanmarks/material-ui/tree/test-suite-overhaul

So far I have:

  • Restructured the test folder to accommodate each part of the suite (unit, browser, etc)...
  • Swapped out browserify for webpack
  • Fixed the karma coverage report
  • Added a unit test setup (Mocha, no Karma) and coverage for it
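For the "full" rendering part of the plan (a virtual DOM instead of a real browser), the Mocha environment could be wired up along these lines. This is a minimal sketch only, assuming jsdom and a setup file loaded before the specs; the actual setup in the branch may differ.

```js
// test/unit/dom.js -- hypothetical setup file, loaded before the specs run.
// Creates a virtual DOM so "full" rendering tests can run under plain Mocha.
import { jsdom } from 'jsdom';

global.document = jsdom('<!doctype html><html><body></body></html>');
global.window = document.defaultView;
global.navigator = { userAgent: 'node.js' };
```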

The test folder now looks like this (once coverage has been generated; the coverage output itself is .gitignore'd):

[screenshot: the restructured test folder]

Currently, I have these commands set up for running the various tests:

# All the things!
$ npm run test

# Runs the browser tests once and generates coverage
$ npm run test:browser

# Runs the browser tests in watch mode
$ npm run test:browser:watch

# Runs the unit tests once
$ npm run test:unit

# Runs the <Avatar /> unit tests once
$ npm run test:unit -- Avatar

# Runs all unit tests in watch mode (requires nodemon)
$ npm run test:unit:watch

# Runs the <Avatar /> unit test in watch mode
$ npm run test:unit:watch -- Avatar

# Generates unit test coverage
$ npm run test:unit:coverage
@nathanmarks
Member Author

Basic example test for <Avatar />: https://github.com/nathanmarks/material-ui/blob/test-suite-overhaul/test/unit/Avatar.spec.js

(could do with style assertions where necessary too)

@nathanmarks
Member Author

@newoga

What do you think about adding some helpers that help confirm basic expectations?

You know the library a lot better than I do. Are there some common assertions throughout the lib that we'll want to make for most components, ones that might get tedious to repeat over and over in tests?

We could provide helpers for these common assertions so contributors could write things like this (contrived example):

it('renders children', rendersChildren(Avatar));
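A helper like that might look roughly as follows. This is a sketch only, assuming enzyme's `shallow` and chai; `rendersChildren` is a hypothetical name, not something in the branch yet.

```js
// test/unit/helpers.js (hypothetical)
import React from 'react';
import { shallow } from 'enzyme';
import { assert } from 'chai';

// Returns a mocha test body asserting that the component renders its children.
export function rendersChildren(Component) {
  return () => {
    const child = <span className="unique-child" />;
    const wrapper = shallow(<Component>{child}</Component>);
    assert.ok(wrapper.contains(child), 'should render its children');
  };
}
```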

@nathanmarks
Member Author

Another note: some components are currently exported by default wrapped in a HoC.

Would it be okay to add a named export for these modules so we can access the plain component for unit tests?

It significantly increases the ease of unit testing them.

@nathanmarks
Member Author

I updated the example above to use JSX in shallow() instead of React.createElement. Makes it more approachable.
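For illustration, the change amounts to this (assuming enzyme's `shallow`; the `src` prop is just an example):

```jsx
// Before: plain createElement
shallow(React.createElement(Avatar, { src: 'face.jpg' }));

// After: JSX reads like normal component usage
shallow(<Avatar src="face.jpg" />);
```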

@newoga
Contributor

newoga commented Feb 13, 2016

@nathanmarks This looks really really good!

What do you think about adding some helpers that help confirm basic expectations?

Yeah, that would be useful. My gut feeling is to hold off on creating too many helpers until we start writing tests (and seeing patterns). The project is going through so much refactoring that I'd prefer to consider it with a fresh perspective.

That being said, the first area that popped into my mind where a helper could be useful is checking muiTheme: for example, that theme variables are properly applied to certain keys in components' style objects.
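Something along these lines could be a starting point. A rough sketch only, assuming enzyme's `shallow` with `muiTheme` passed via context; the helper name and exact theming wiring are assumptions that depend on the ongoing refactor.

```js
import React from 'react';
import { shallow } from 'enzyme';
import { assert } from 'chai';

// Hypothetical helper: asserts that a muiTheme value ends up in a component's
// inline style. muiTheme would be built with the lib's theme utilities.
function assertThemeApplied(Component, muiTheme, selector, styleKey, expected) {
  const wrapper = shallow(<Component />, { context: { muiTheme } });
  assert.strictEqual(wrapper.find(selector).prop('style')[styleKey], expected);
}
```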

Would it be okay to add a named export for these modules so we can access the plain component for unit tests?

That's a good point and good idea.

/cc @oliviertassinari @alitaheri @mbrookes This would impact the migration to muiThemeable because we would need to export the wrapped and plain components.

@nathanmarks
Member Author

@newoga Agree with what you said (hold off and wait for patterns to emerge). FYI, I ran into a frustrating problem with <Table /> the other day related to rows and other props on the various table components. There are plenty of good assertions to be made for components like this with the virtual renderers, without needing a real DOM, to identify important behaviour issues. How do you want to categorize these sorts of "integration" tests in the file system, the ones that test the behaviour between the smaller, composable components that make up a feature-heavy "component" like Table?

@mbrookes The browser tests (where the current tests are sitting in my branch) would be a good place to flesh out some acceptance testing with a real DOM

@newoga
Contributor

newoga commented Feb 13, 2016

@nathanmarks That's an interesting question. My gut says for now to include those types of "integration" tests with the most closely related top-level component (in this case Table). If Table.spec.js got out of control we could probably explore splitting it afterwards. It seems react-bootstrap follows that or a similar approach: they put more independent tests near the top and more integrative tests near the bottom.

I'm open to other approaches, though. I'm thinking this shouldn't affect too many of our components (and if it does, maybe there's a design problem with those components).

mbrookes added the Core label Feb 13, 2016
@nathanmarks
Member Author

@newoga Sounds good to me. I am happy with that approach. <Table /> was the component that triggered the thought to begin with.

One thing RE browser testing: it would be great if someone more familiar with the library could identify the parts of the codebase (especially UI interactions) that would really benefit from real browser testing.

(side note -- leveraging the already maintained docs site for some basic browser acceptance testing might be some food for thought)

@nathanmarks
Member Author

@newoga @mbrookes @oliviertassinari @alitaheri

What would be great is if you guys could decide on some goal(s) or examples you'd like to see achieved (or anything you don't like that you want to see changed) before I start preparing for a PR. 100% understand if you need time to review and think -- I'm just eager to get the unit testing situation in a better position for success!

Also wondering if anything should/needs to wait until certain refactoring decisions are set in stone. Thoughts?

@newoga
Contributor

newoga commented Feb 14, 2016

Honestly, I'm very happy with where we're at with it so far. I think you're close enough to open a PR, and we can always come back to it after merging and improve it once we learn more about what's lacking.

@oliviertassinari @alitaheri @mbrookes Any thoughts about https://github.com/nathanmarks/material-ui/tree/test-suite-overhaul before @nathanmarks opens a PR?

@nathanmarks
Member Author

@newoga I definitely have a couple things to verify and clean up, and intend to add some more examples. But my thought process is the sooner we get this in here, the sooner others can contribute to tests.

I'm really hoping that being able to do:

$ npm run test:unit:watch -- Avatar

will encourage use, even if just to verify the basics to begin with!

We should definitely write a very strong test (with style assertions and examples of testing click event callbacks, etc.) with some notes/justifications, and mention it in the contributing guide as an example of how to use the unit tests to their full potential.

There's also the matter of conventions. With all the refactoring in the FS going on, what are your thoughts?

I'd like to name unit test files by their component names like I did with Avatar, and put complex components like Table in a subfolder as in the main lib. Maintaining describe('<Avatar />') as a convention is also important: the mocha grep option that we're using behind the scenes of npm run test:unit -- Avatar relies on it to run only the specified component's tests.
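For reference, the convention would look something like this. It's a sketch; the real Avatar.spec.js in the branch is the source of truth, and the import path and assertion here are only illustrative.

```js
// test/unit/Avatar.spec.js
import React from 'react';
import { shallow } from 'enzyme';
import { assert } from 'chai';
import Avatar from '../../src/avatar'; // import path is an assumption

// The describe title is what mocha's grep matches when running
// `npm run test:unit -- Avatar`.
describe('<Avatar />', () => {
  it('renders an img when a src is provided', () => {
    const wrapper = shallow(<Avatar src="face.jpg" />);
    assert.strictEqual(wrapper.find('img').prop('src'), 'face.jpg');
  });
});
```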

@newoga
Contributor

newoga commented Feb 14, 2016

@newoga I definitely have a couple things to verify and clean up, and intend to add some more examples. But my thought process is the sooner we get this in here, the sooner others can contribute to tests.

I agree and would like to get that process started as well.

There's also the matter of conventions. With all the refactoring in the FS going on, what are your thoughts?

Yeah, I really wish we were finished with the directory changes. If we were, my vote would be to put tests like Avatar.spec.js in the same folder as the component. If we get tests merged before the directory changes (very likely), I think we have two options:

  1. Follow the conventions of the component being tested: avatar.jsx gets avatar.spec.js, DropdownMenu/DropdownMenu.jsx gets DropdownMenu/DropdownMenu.spec.js. When the directories change, we clean up and combine both src and tests.
  2. Put the tests in the proper directories from the get-go, in anticipation of components moving: avatar.jsx gets Avatar/Avatar.spec.js.

As much as I'm looking forward to getting the directories cleaned up and consistent, I'm leaning towards option 1 for now.

Note: for the command-line option, one approach would be to .toLowerCase() the name that was passed and temporarily maintain a map of components to paths so that case sensitivity doesn't matter.
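A rough sketch of that idea (the names and paths here are made up):

```js
// Hypothetical lookup used by the test runner script.
const specPaths = {
  avatar: 'test/unit/Avatar.spec.js',
  dropdownmenu: 'test/unit/DropDownMenu/DropDownMenu.spec.js',
};

function resolveSpec(name) {
  // Case-insensitive: `-- avatar` and `-- Avatar` resolve to the same file.
  return specPaths[name.toLowerCase()];
}
```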

@newoga
Contributor

newoga commented Feb 14, 2016

Actually, looking at your setup again: if we're using mocha's grep option, where the test actually lives shouldn't matter, since it's matching against the test name.

@nathanmarks
Member Author

@newoga Yup, so we can pretty much set up the tests however we want. I just want some sort of convention to follow so there is consistency (which makes it easier to change later too).

The reason I went with Avatar.spec.js was to make the intention of the per component test argument clear in the filesystem.

I'm anticipating having to add a little more sophistication to what I'm passing to mocha, but I also have the option of just doing that in the glob very easily, since mocha is being run programmatically, not via the mocha CLI command. This option may be easier and would work great if ComponentName => ComponentName.spec.js.

If that's how you anticipate the refactoring will end up changing the main lib, the tests will also drop nicely into __tests__ folder(s).
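Roughly how the programmatic runner could map the component argument onto a glob. This is a sketch under those assumptions; the actual runner in the branch may look different.

```js
// test/unit/run.js (hypothetical programmatic runner)
import Mocha from 'mocha';
import glob from 'glob';

const component = process.argv[2]; // e.g. "Avatar" from `npm run test:unit -- Avatar`
const pattern = component
  ? `test/unit/**/${component}.spec.js`
  : 'test/unit/**/*.spec.js';

const mocha = new Mocha();
glob.sync(pattern).forEach((file) => mocha.addFile(file));
mocha.run((failures) => process.exit(failures ? 1 : 0));
```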

@newoga
Contributor

newoga commented Feb 14, 2016

The reason I went with Avatar.spec.js was to make the intention of the per component test argument clear in the filesystem.

That makes sense! Let's go with that: ComponentName.spec.js 👍

@newoga
Contributor

newoga commented Feb 15, 2016

@nathanmarks Feel free to open a PR when you're happy with the implementation. I think this is ready (if not close). 👍

@nathanmarks
Member Author

@newoga Awesome. I will do so soon then.

Once/if we merge, would you consider enabling coveralls for the repo so we can have a nice GUI for looking at the current unit test coverage at master? I can configure the travis.yml to submit the lcov report to coveralls. (Someone with access will need to add a couple of environment variables to the Travis settings though.)

Also -- this is where you may want to think about my Q earlier RE combining the reports. Otherwise in something like coveralls we can only choose a single report (unit, browser, etc) to submit.
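The Travis side would boil down to something like this (a sketch only; the lcov path assumes the combined report discussed below, and the coveralls repo token is one of those environment variables):

# After the coverage run on Travis, pipe the lcov report to coveralls
$ cat test/coverage/lcov.info | ./node_modules/.bin/coveralls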

@newoga
Contributor

newoga commented Feb 15, 2016

Also -- this is where you may want to think about my Q earlier RE combining the reports. Otherwise in something like coveralls we can only choose a single report (unit, browser, etc) to submit.

Yeah, that's a good point; I didn't consider the impact on those coverage services. Let's combine them, but I'll leave it up to you whether you want to figure it out for this PR or submit it as a new one. I'm fine with either 👍 After the testing is in place, we'll sign up for one.

@oliviertassinari showed me an example of PR coverage report comments that codecov can make when developers submit PRs. I think that feature is pretty nifty.

@nathanmarks
Member Author

Nice, yeah, coveralls has a similar thing too.

Can leave that until there is actually some coverage to maintain for most of the components, as you mentioned 👍

I'm going to try and get the combined output generated for this PR as well.

@oliviertassinari
Member

One thing RE browser testing, it would be great if someone more familiar with the library could identify parts of the codebase (esp UI interactions) that really would benefit from real browser testing.

I'm not that familiar with the code base. But I see some stuff that would benefit from real browser testing:

  • The focus/blur logic is kind of complex and depends on the browser. For instance, I have noticed that the autocomplete doesn't work in a Cordova Android browser (I'm working on it). That's affecting the Menu, List, IconMenu and probably more components
  • The Popover absolute positioning
  • More generally, it would be awesome to have a way to easily assert the visual rendering of all the components. For instance, I have seen many regressions with the styling of the TextField.

@nathanmarks
Member Author

@oliviertassinari more so than I am ;)

Is that a chrome webview? (Cordova, phonegap).

RE visual regressions. Are they visual regressions with the same CSS or different CSS in the output?

If so, this is where some acceptance testing (perhaps using the documentation site) could be useful. IIRC BrowserStack offers some free automation plans for open source.

@oliviertassinari
Member

Yes, this is a chrome webview.

RE visual regressions

Indeed, I have seen some cool tools that can help. For instance http://webdriver.io/guide/plugins/webdrivercss.html.

Are they visual regressions with the same CSS or different CSS in the output?

I'm not sure I understand. That could be linked to inline style changes or component structure changes.

(perhaps using the documentation site)

I agree, I think that all our documentation examples are great candidates to write tests for.

@nathanmarks
Member Author

@oliviertassinari Sorry, let me clarify.

We can make assertions on certain core styles without webdriver/screenshots/etc. These types of assertions are useful so that users don't have to run the whole E2E visual screenshot diff test setup to catch core styling screwups.

However, having a visual diff on the documentation site + examples sounds like a great way to ensure E2E visual consistency across the whole lib. (And this ensures that structural changes aren't causing unwanted side effects too)

As an added bonus, it will help encourage keeping the docs site full of rich examples ;)

@nathanmarks
Member Author

@newoga working on a Friday deadline, then I'll prep the PR ;)

Side note: I've been using this lib for an internal tool and I've found a number of performance problems (some of which I've seen mentioned in various issues). So after this PR, that will be my priority rather than expanding lists.

I've also got some ideas for how to integrate certain performance benchmarks into the test suite -- it involves using react-addons-perf and measuring wasted time after state thrashing via a loop.
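The rough shape of what I mean is below. This is a sketch only; react-addons-perf provides these calls, but the component, import path, and render loop are placeholders, and it would run in the browser/jsdom suite since it needs a DOM.

```js
import React from 'react';
import ReactDOM from 'react-dom';
import Perf from 'react-addons-perf';
import List from '../../src/lists/list'; // placeholder component, path is an assumption

// Render once, then re-render repeatedly with identical props: any time
// reported by printWasted() is work the component should have skipped.
const node = document.createElement('div');
ReactDOM.render(<List />, node);

Perf.start();
for (let i = 0; i < 100; i++) {
  ReactDOM.render(<List />, node);
}
Perf.stop();
Perf.printWasted(Perf.getLastMeasurements());
```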

@oliviertassinari
Member

integrate certain performance benchmarks into the test suite

That's an excellent idea 👍

I've found a number of performance problems

Well, that's not a surprise 😕.

@newoga
Contributor

newoga commented Feb 18, 2016

@newoga working on a Friday deadline, then I'll prep the PR ;)

@nathanmarks Thanks much, looking forward to it!

So after this PR, that will be my prio rather than expanding lists.

Sounds good, my story here is similar in that I was originally looking to upgrade a component and ended up working on a ton of other things instead 😆. When you get a chance, could you still open a new issue for the list animation implementation? It'd be good to document, and we might have someone who could tackle it with some high-level guidance from you.

I've also got some ideas for how to integrate certain performance benchmarks into the test suite

Awesome, glad to hear! That will be crucial in evaluating how our code changes are impacting performance (positively or negatively). Let's do it! 😄

@nathanmarks
Member Author

@newoga did you get feedback RE exposing the plain components via named exports for unit testing?

@newoga
Contributor

newoga commented Feb 19, 2016

@nathanmarks Yes I responded here unless I am missing something. 😄

I also touched on this topic in another issue and we agreed that we will export plain components one way or another. We'll probably address this while reorganizing directories but right now it should only impact a handful of components (Divider and Subheader are the only two I can think of at the moment and they're pretty simple components).

@nathanmarks
Member Author

@newoga Sorry -- I know you had mentioned it. I just noticed that the other guys didn't respond to your @

@newoga
Contributor

newoga commented Feb 20, 2016

@newoga Sorry -- I know you had mentioned it. I just noticed that the other guys didn't respond to your @

Oh okay, no worries! I spoke with them earlier about your testing branch and they are all fully in support of it (and I think they're just letting us take the lead on this 😝). As it relates to the plain exports, we briefly touched on the topic here.

There are a lot of moving parts with the work in the project now, but we're all heading in the same direction. Right now, I'm focusing on removing some React anti-patterns (or things that may be deprecated in the future) to help future-proof the project. That said, I'm really looking forward to getting the testing changes in so we can start working on getting coverage for the components. If you have any blockers related to the project or anything that's holding you up, let me know and we can see if we can get it out of the way.

Thanks @nathanmarks!

@nathanmarks
Member Author

@newoga Sounds good! No blockers now -- will issue a PR tonight.

Can you point me to the issues RE the anti-patterns you are focusing on removing? I'm interested in checking them out

@newoga
Contributor

newoga commented Feb 20, 2016

Generally speaking, the ones that I'm currently focused on (that are on the ROADMAP) are:

After those first two items are done, we should be able to convert the codebase to ES6 classes when we're ready to do so. Though I may try to take a stab at coming up with a plan for directory reorg before taking on valueLink. Depends on how ambitious I'm feeling.

@nathanmarks
Member Author

@newoga Quick one.

I had initially set this up with split coverage. The browser (phantomjs) tests automatically generate coverage every run, and the unit tests have a separate command to generate coverage by instrumenting with isparta.

At the moment, both are dumping html and lcov output to test/coverage/browser and test/coverage/unit.

I'm about to work on combining the coverage report. Should we shift the individual coverage reports to just output a summary in the console and use the test/coverage/ directory only for saving the combined coverage report? (Run via npm run coverage is what I'm thinking.)

@nathanmarks
Member Author

Alternatively we can still output to test/coverage/browser and test/coverage/unit, but then have test/coverage/combined for both.

The suggestion to ditch the full output for the individual reports was to avoid noise, but it doesn't really matter since the folders aren't checked into git anyway.

I'll go with this for now, we can always remove later if it confuses people (it shouldn't though).
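For the combining step itself, something like this should work (assuming istanbul's report command, which aggregates the coverage JSON it finds under --root; exact flags and paths may shift):

# Aggregate the per-suite coverage output into one combined report
$ ./node_modules/.bin/istanbul report --root test/coverage --dir test/coverage/combined lcov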

@newoga
Contributor

newoga commented Feb 21, 2016

but it doesn't really matter since the folders aren't checked into git anyways.

I'll go with this for now, we can always remove later if it confuses people (it shouldn't though).

@nathanmarks Yeah, that sounds like a good plan to me! And I think the names of those folders make it pretty clear too. 👍

newoga changed the title from "[tests] Improve the test suite" to "[Test] Improve the test suite" Feb 21, 2016
@newoga
Contributor

newoga commented Feb 21, 2016

This was resolved by #3405

newoga closed this as completed Feb 21, 2016
newoga changed the title from "[Test] Improve the test suite" to "[tests] Improve the test suite" Feb 22, 2016
zannager added the test label Dec 22, 2022