
Set up testing #4465

Closed
3 tasks done
sgomes opened this issue Jun 29, 2016 · 10 comments
@sgomes
Contributor

sgomes commented Jun 29, 2016

  • Investigate alternatives
  • Investigate Chrome headless
  • Choose something
@sgomes sgomes added this to the V2 setup milestone Jun 29, 2016
@Garbee
Collaborator

Garbee commented Jun 29, 2016

AVA is an alternative to Mocha. I really like how it is kept simple and runs test files concurrently. Tests shouldn't take forever to run, and concurrency helps speed them up even when you have tons of files.
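The concurrency point can be illustrated without AVA itself. This is a plain-Node sketch (the file names and timings are invented) of why running test files in parallel cuts wall-clock time:

```javascript
// Not AVA itself: a plain-Node illustration of concurrent test files.
// Two simulated 100 ms "test files" finish together in ~100 ms of
// wall-clock time, rather than ~200 ms when run one after the other.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runFile(name, ms) {
  await delay(ms); // stand-in for the file's actual test work
  return name;
}

async function main() {
  const start = Date.now();
  await Promise.all([runFile('a.test.js', 100), runFile('b.test.js', 100)]);
  return Date.now() - start;
}

main().then((elapsed) => console.log(`suite took ~${elapsed} ms`));
```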

@traviskaufman
Contributor

I would love to use AVA, but the problem with it is that browser testing is unsupported. Normally I'd be fine with using JSDom, but given that we're building user interface components in a vanilla manner, I feel like we really should test in all of the browsers we support. I'm subscribed to avajs/ava#24, but it's marked as low priority.

@Garbee
Collaborator

Garbee commented Jun 29, 2016

Ah yeah, if we start testing across multiple browsers, AVA specifically won't work right now. And I'm a big fan of testing across platforms. I haven't personally used any tooling for that, though, so no immediate recommendations come to mind.

@traviskaufman
Contributor

traviskaufman commented Jul 12, 2016

Testing Options

Karma + Mocha + Isparta

In the wild

Pros

  • Highly configurable
  • Robust, mature tools that other UI frameworks rely on (e.g. Angular)
  • Works across all browsers when combined with BrowserStack / SauceLabs
  • Gives us pretty much everything we'll ever need in terms of tooling between sinon, chai,
    mocha, and isparta

Cons

  • Complex setup - lots of interdependent moving parts
  • Karma and Mocha are a bit crufty and slower than some leaner alternatives

Karma + Tape + Isparta

In the wild

Pros

  • People seem to like tape more
  • Comes with a powerful built-in assertion library, removing the need for chai.
  • Simpler to understand and therefore easier to use.
  • Modular; can be easily extended to facilitate things such as Promises (see blue-tape), esnext (see babel-tape-runner), etc.
  • No global pollution
  • No shared state

Cons

  • Modularity comes at the cost of having more dev dependencies for tape
  • Newer and (arguably?) less battle-tested than mocha
  • ???

Ava + JSDom + NYC

In the wild

Pros

  • Simple and ergonomic
  • Extremely fast - runs tests in subprocesses
  • Isolates global state
  • Supports ES2015 out-of-the-box - no need to run webpack
  • Supports async tests out-of-the-box
  • Excellent watch mode support

Cons

  • No real-browser support (see avajs/ava#24); tests run against JSDom rather than the actual browsers we support

@traviskaufman
Contributor

traviskaufman commented Jul 12, 2016

In terms of cross-browser testing for CI, here's a solid comparison of BrowserStack vs. SauceLabs vs. Rainforest QA.

BrowserStack has the highest Google Trends interest, but SauceLabs looks like it's the favorite.

Headless Chrome looks awesome, but it also looks like there's still work to be done before it's stable.

@Garbee
Collaborator

Garbee commented Jul 12, 2016

Yeah, AVA is out right away due to no browser support. We need something that is cross-browser.

I also think that, given our new direction, we shouldn't worry as much about whether the tests run super fast or take some time. As long as they are accurate and give us wide coverage, that is what matters.

I believe we'd be better off with option 2 since it doesn't have shared global state. It is fairly critical that one test can't cause an accidental collision in another.

@Garbee
Collaborator

Garbee commented Jul 12, 2016

We also need to think about how running the Closure Compiler to check the comments is going to interact with our testing. Should we continue with Closure comments? That is another thing to consider.
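For anyone unfamiliar, the Closure comments in question are JSDoc type annotations that the compiler checks; a hypothetical example (the `clamp` function is invented for illustration):

```javascript
// Hypothetical example of Closure Compiler JSDoc annotations: the type
// information lives entirely in comments, which the compiler verifies.
/**
 * Clamps a value into the range [min, max].
 * @param {number} value
 * @param {number} min
 * @param {number} max
 * @return {number}
 */
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

console.log(clamp(15, 0, 10)); // → 10
```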

@traviskaufman
Contributor

@Garbee I agree; I'm partial to option 2 as well, since tape is leaner and seems to require a lot less configuration song and dance.

re: Closure, I spoke with the Angular team, and apparently they're working on having Tsickle create externs files from .d.ts files. Externs files mean we can forgo the comments, and since we'll probably want to write .d.ts files for the TypeScript community anyway, this could be a good solution that doesn't clutter our code.
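To make the externs idea concrete, here is a hypothetical sketch of the kind of externs file a tool like Tsickle could emit from a declaration file; the `mdl.MDLComponent` names are invented for illustration. Externs only declare shapes for the compiler and carry no implementation:

```javascript
// Hypothetical externs sketch: declarations only, no implementation.
// A Closure project compiles against these instead of relying on JSDoc
// comments inside the library source itself.
/** @const */
var mdl = {};

/**
 * @constructor
 * @param {!Element} root
 */
mdl.MDLComponent = function(root) {};

/** @return {void} */
mdl.MDLComponent.prototype.upgrade = function() {};
```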

@sgomes
Contributor Author

sgomes commented Jul 19, 2016

Option 2 SGTM, as it gives us a modern architecture that we can easily integrate with CIs, and use for both headless and cross-browser testing.

Using externs means our code wouldn't be compiled by Closure, but it would at least present an interface to Closurized code. I'm OK with that, as our main goal for Closure support is to make sure that Closure code can use MDL, and it sounds like .d.ts files are becoming a way of providing that.

traviskaufman added a commit that referenced this issue Jul 19, 2016
* `npm test` lints files, runs karma, reports coverages, checks coverage thresholds
* `npm run test:watch` runs karma in auto-watch mode, with source maps
   for both source and test files.
* [karma](https://karma-runner.github.io/1.0/index.html) is used for running tests.
* [tape](https://github.com/substack/tape) is used as the actual test runner.
* [bel](https://github.com/shama/bel) is used for easy DOM fixtures.
* [testdouble](https://github.com/testdouble/testdouble.js) is used for mocking/doubles.
* [isparta](https://github.com/douglasduteil/isparta) is used to instrument source files for coverage.
* [istanbul](https://github.com/gotwarlost/istanbul) is used to check and report coverage.

resolves #4465
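A karma.conf.js wiring these pieces together might look roughly like the following. This is a hedged sketch: the `karma-tap` framework name, file globs, and reporter options are assumptions for illustration, not the repo's actual configuration:

```javascript
// Illustrative karma.conf.js for a karma + tape + coverage setup.
module.exports = function(config) {
  config.set({
    frameworks: ['tap'],               // karma-tap adapter runs tape tests
    files: ['test/unit/**/*.js'],      // hypothetical test file glob
    preprocessors: {
      'test/unit/**/*.js': ['webpack', 'sourcemap'],
    },
    reporters: ['dots', 'coverage'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [{type: 'lcovonly'}, {type: 'text-summary'}],
    },
    browsers: ['Chrome'],
    singleRun: true,
  });
};
```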
traviskaufman added a commit that referenced this issue Jul 19, 2016
* `npm test` lints files, runs karma, reports coverages, checks coverage thresholds
* `npm run test:watch` runs karma in auto-watch mode, with source maps
   for both source and test files.
* [karma](https://karma-runner.github.io/1.0/index.html) is used for running tests.
* [tape](https://github.com/substack/tape) is used as the actual test runner.
* [bel](https://github.com/shama/bel) is used for easy DOM fixtures.
* [testdouble](https://github.com/testdouble/testdouble.js) is used for mocking/doubles.
* [isparta](https://github.com/douglasduteil/isparta) is used to instrument source files for coverage.
* [istanbul](https://github.com/gotwarlost/istanbul) is used to check and report coverage.
* [eslint-plugin-tape](https://github.com/atabel/eslint-plugin-tape) is used for tape-specific test linting.

resolves #4465
traviskaufman added a commit that referenced this issue Jul 20, 2016
* `npm test` lints files, runs karma, reports coverages, checks coverage thresholds
* `npm run test:watch` runs karma in auto-watch mode, with source maps
   for both source and test files.
* `npm run fix` attempts to fix both CSS and JS lint issues in parallel
* `npm run lint` lints JS and CSS in parallel
* [karma](https://karma-runner.github.io/1.0/index.html) is used for running tests.
* [tape](https://github.com/substack/tape) is used as the actual test runner.
* [bel](https://github.com/shama/bel) is used for easy DOM fixtures.
* [testdouble](https://github.com/testdouble/testdouble.js) is used for mocking/doubles.
* [isparta](https://github.com/douglasduteil/isparta) is used to instrument source files for coverage.
* [istanbul](https://github.com/gotwarlost/istanbul) is used to check and report coverage.
* [eslint-plugin-tape](https://github.com/atabel/eslint-plugin-tape) is used for tape-specific test linting.

resolves #4465
@traviskaufman
Contributor

Resolved by #4567
