Get coverage information for integration tests with nyc #548

Closed

juancarlosfarah opened this issue Apr 6, 2017 · 23 comments
@juancarlosfarah

I'm trying to use nyc to get the coverage of some integration tests I'm running against an express.js server and I'm a bit confused as to what the process should be. Currently I'm starting up the server and then running something along the lines of:

NODE_ENV=test nyc _mocha --compilers js:babel-register test/**/*.spec.js

This runs the tests against the server mainly by calling the API endpoints it exposes, but I'm getting 0% coverage. I'm assuming this is because the code running the server is not instrumented. I imagine that I'm supposed to instrument the code first before starting up the server and then run the tests against that, but I'm not 100% sure if this is the case either.

If it is the case, how do you go about this using nyc? Also, I wanted to know whether I should be using nyc or Istanbul to do the instrumentation.

My .nycrc looks like this:

{
  "include": [
    "app/**/*.js",
    "server.js"
  ],
  "cache": true,
  "all": true
}

And my .babelrc looks like this:

{
  "presets": ["env", "es2015"],
  "env": {
    "test": {
      "plugins": ["istanbul"]
    }
  }
}

Any answers or suggestions would be very helpful! Thanks.

@JaKXz
Member

JaKXz commented Apr 6, 2017

You're almost there! Take a look at: https://github.com/istanbuljs/nyc#use-with-babel-plugin-istanbul-for-es6es7es2015-support

I also recommend using the mocha executable instead of _mocha, and you could optionally use a mocha.opts file for simplicity.
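In case it helps anyone following along, here is a minimal sketch of that setup; the file locations and script name are assumptions on my part, not something from the original comment:

test/mocha.opts

--compilers js:babel-register
--recursive

package.json

"scripts": {
  "test": "NODE_ENV=test nyc mocha test/**/*.spec.js"
}

With a mocha.opts in place, the flags no longer need to be repeated on the command line, and running mocha through nyc lets nyc wrap the whole test process.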

Finally, try poking around http://istanbul.js.org and let us know if information you need is missing :)

@juancarlosfarah
Author

Thanks for the tips @JaKXz. I followed your advice regarding mocha and the mocha.opts file. Unfortunately, it looks like for my project I cannot do the ES2015 transform with the all flag, as I have some files that cannot be transpiled out of the box, and the exclude/include options for babel-plugin-istanbul don't seem to work as expected when the all flag is set.

Leaving that for another issue, I don't think it is the source of the problem I described above. I removed all the ES2015 transforms, leaving my config as follows:

.nycrc

{
  "include": [
    "app/**/*.js",
    "lib/**/*.js",
    "server.js"
  ],
  "cache": true,
  "all": true
}

.babelrc

{
  "presets": ["env"]
}

Running NODE_ENV=test nyc mocha test/**/*.spec.js, the tests run fine and coverage is recorded for the unit tests, but for the integration tests I still get zero coverage.

I think it has something to do with instrumenting the code before starting up the server, but I'm not sure how to go about it using nyc.

@juancarlosfarah
Author

I was able to get it to work as expected following @gotwarlost's steps here.

Is there an equivalent for nyc?

@JaKXz
Member

JaKXz commented Apr 6, 2017

@juancarlosfarah try taking a look at https://github.com/Stupidism/react-starter-kit/pull/1/commits/bc159056a75e1ea06bacc8e8bf43fb0055c5aeb2

It's clear to me that our docs are not very obvious about how to do this; I've just done this setup so many times that it's become second nature... How can we improve the docs to make the instructions clearer?

I was hoping you'd see and follow https://istanbul.js.org/docs/tutorials/es2015/ which covers this pretty well, but is there some way we can improve that doc?

@almithani

@juancarlosfarah, I was able to get my integration test coverage by running my server using nyc:
./node_modules/.bin/nyc npm start

Then I ran my integration tests as I normally would:
mocha test/integration

Then, quitting the server with SIGINT (ctrl+c), the code coverage is spit out.
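Translated into package.json scripts, that workflow might look roughly like this (the script names and the server entry point are illustrative, not taken from @almithani's setup):

"scripts": {
  "start": "node server.js",
  "start:cover": "nyc npm start",
  "test:integration": "mocha test/integration"
}

Run npm run start:cover in one terminal, npm run test:integration in another, then stop the server with ctrl+c so nyc can write its report.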

As for @JaKXz's question on the docs: there's no indication in the docs you posted that the tutorial will help set up integration tests. In fact, I find no reference at all in your documentation to the fact that you can cover a server using nyc.

Since you were asking for feedback, I think some more background on how nyc actually works would be useful in your documentation - talking about how it consumes the processes it is passed in and even having a separate piece of documentation for this workflow.

@vinayathimmappa

The documentation is just not up to the mark. I kept trying out nyc mocha and I always got:

> ----------|----------|----------|----------|----------|----------------|
> File      |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
> ----------|----------|----------|----------|----------|----------------|
> All files |  Unknown |  Unknown |  Unknown |  Unknown |                |
> ----------|----------|----------|----------|----------|----------------|

The only way I could get this going was by following @almithani's comment, and it works beautifully. But I cannot integrate this into my CI/CD build because I have to send SIGINT to get the result.

To summarize:

  1. The documentation needs to say that instrumentation happens at the server level, i.e. nyc yarn start. (I was just starting the server and expecting magic to happen, then realized the documentation had led me astray.)

@juancarlosfarah
Author

Hi @vinayathimmappa, @almithani, @JaKXz, thanks for the feedback. I couldn't get it to work following @JaKXz's suggestions, so I implemented a solution using istanbul directly. It works something like the following (so that it can integrate with CI/CD solutions such as Jenkins):

#!/bin/bash

# run and fork instrumented server
istanbul cover --handle-sigint --report lcovonly server.js &
PID=$!
sleep 30

# run tests
mocha
TEST_EXIT_CODE=$?

# wait for istanbul to finish dumping the reports
kill -INT ${PID}
wait ${PID}
KILL_EXIT_CODE=$?

# set the exit code as appropriate
if [[ ${TEST_EXIT_CODE} -eq 0 && ${KILL_EXIT_CODE} -eq 0 ]]; then
  EXIT_CODE=0
else
  EXIT_CODE=1
fi

# return the combined exit code to the shell
exit ${EXIT_CODE}

I then call this script in my package.json with npm run test:cover, which runs something like this: "test:cover": "./scripts/test-cover.sh".
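For reference, the corresponding package.json fragment is simply:

"scripts": {
  "test:cover": "./scripts/test-cover.sh"
}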

Hope that helps!

Juan Carlos

@nadavye

nadavye commented Jun 13, 2017

Guys - thanks for the tips. Highly appreciated!

You can always check http://www.sealights.io, as it provides an awesome dashboard & APIs that will help you see & improve your test coverage and overall build quality.

We (I work there) support integration test coverage for several technologies, including Node.js, Java, Kotlin, browser tests (Selenium), Python and .NET.

We also use nyc, but we handle all the issues needed to allow it to run easily in a CI/CD pipeline.

Sorry for promoting SeaLights, but I think most of you can benefit from it while spending your time developing features instead of tools.

@bcoe
Member

bcoe commented Aug 1, 2017

@juancarlosfarah as you noticed, we haven't really put much thought into how you'd use nyc to instrument a long-lived process.

What I've tended to do in my own unit tests is to start a server in the unit tests themselves, generally in the beforeAll block. Because nyc hooks the require statement and subprocesses, this results in appropriate coverage for tests that exercise the server itself.

Another option, as you state initially in this issue, is to pre-instrument files. You can do this by running nyc instrument foo.js. You do, however, need to make sure that before the process running the instrumented code exits, you write the global __coverage__ object to disk, so that you can run reports against it.

I don't know if this helps at all? What would the ideal solution look like for you?
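A minimal sketch of that pre-instrumenting option, assuming the instrumented build is what gets served and that a small shutdown hook can be added to it; the paths, file names and the SIGINT handler here are assumptions, not an official nyc workflow:

# write an instrumented copy of the source
nyc instrument app instrumented/app

// dump-coverage.js -- hypothetical helper required by the instrumented server
const fs = require('fs');
const path = require('path');

process.on('SIGINT', () => {
  // the instrumented code populates global.__coverage__ as it runs
  if (global.__coverage__) {
    if (!fs.existsSync('.nyc_output')) fs.mkdirSync('.nyc_output');
    fs.writeFileSync(
      path.join('.nyc_output', 'integration-coverage.json'),
      JSON.stringify(global.__coverage__)
    );
  }
  process.exit(0);
});

After the server has been stopped, nyc report should be able to pick the file up from .nyc_output (nyc's default temp directory) and produce a report.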

@juancarlosfarah
Author

@bcoe, thanks for your reply. I'm looking into the second option you presented. As you noted, my approach is to instrument the code, fire up the instrumented server, run multiple integration tests against that server and then run reports against the coverage results. In my opinion the ideal solution should somehow make that second approach as straightforward as possible. I know that it can be complex, but if it can be done with a special setting on the .nycrc file and one command, that would be fantastic.

@bcoe
Member

bcoe commented Aug 8, 2017

@juancarlosfarah, @miloss had almost exactly the same requirement and put together these rough docs:

https://istanbul.js.org/docs/advanced/coverage-object-report/

It would be cool to work together and figure out a better workflow, and then we could work backwards (and update the docs so that we don't have to perform quite such a manual step :p).

@juancarlosfarah
Author

@bcoe, I'm happy to help. Just got back from holidays so I'm focusing on this now. I'm more than willing to contribute to documenting the workflow as this is something that I'd like to get working smoothly.

@vinayathimmappa

@bcoe, please let me know how I can pitch in. I would be happy to do this. I am currently trying out Jenkins integration, and the SIGINT requirement is preventing code coverage from working.

@alekbarszczewski

I had a similar problem; in my case, removing NODE_ENV=test "solved" it. With NODE_ENV=test I was getting:

----------|----------|----------|----------|----------|----------------|
File      |  % Stmts | % Branch |  % Funcs |  % Lines |Uncovered Lines |
----------|----------|----------|----------|----------|----------------|
All files |  Unknown |  Unknown |  Unknown |  Unknown |                |
----------|----------|----------|----------|----------|----------------|

After removing NODE_ENV=test I get a correct coverage report.

@bodinsamuel

Hi, sorry for chiming in, but no matter what I do, I can't make it work.

I have a codebase that does not use Babel at all. All my tests are integration tests, because it's an API, using mocha/chai/chai-http. I wonder if someone actually has a POC that works, or is it still a WIP?

I currently have the full codebase at 0% with this config, which is probably a good start.
package.json

"scripts": {
  "test:e2e": "nyc mocha tests/mocha/*.js --no-timeouts --all"
},
"nyc": {
    "reporter": [
      "lcov",
      "text"
    ],
    "include": [
      "app"
    ],
    "require": [
    ],
    "sourceMap": false,
    "instrument": true,
    "all": true
  },

Does anyone have a clue? :)
Thanks
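One thing that might be worth double-checking in a config like the one above (only a guess, not a confirmed diagnosis): nyc's include/exclude options are glob patterns, so spelling the pattern out explicitly removes one variable. A sketch of a more explicit version:

"nyc": {
  "reporter": ["lcov", "text"],
  "include": ["app/**/*.js"],
  "sourceMap": false,
  "instrument": true,
  "all": true
}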

@ezeasharma

I believe something equivalent to this is needed to get code coverage of a long-lived process:
https://github.com/gotwarlost/istanbul-middleware
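For anyone exploring that route, the general shape (reconstructed from memory of that project's README, so treat the exact API names as assumptions to verify) is to hook the module loader before the app code is required and to mount a coverage endpoint:

// coverage-server.js -- sketch based on istanbul-middleware's README; verify the API before relying on it
var im = require('istanbul-middleware');
im.hookLoader(__dirname); // instrument modules as they get required from here on

var express = require('express');
var app = express();
app.use('/coverage', im.createHandler()); // exposes the coverage collected so far
app.use(require('./app'));                // the real application, required after hooking
app.listen(3000);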

@akivajgordon

Elaborating on @bcoe's comment about starting the server in your tests, I ended up with something like this, and it seems to be giving accurate results:

start-server.js

import { spawn } from 'child_process'

let server

before(function (done) { // note: cannot use arrow function here if using `this.timeout()` in body
  this.timeout(30000) // give server some time to start up

  server = spawn('node', ['server.js']) // or however you want to start up your server

  doSomethingToPollIfServerIsListeningYet() // consider `server.stdout.on('data', (data) => {...})` as one possibility.
    .then(() => done())
})

after(() => {
  server.kill()
})

This will start the server before any of the tests are run, and kill the server after the tests are done. Then run:

npx nyc mocha --file start-server.js

Mocha will run that file before your tests. Starting the server within the mocha process that nyc creates will enable nyc to instrument your code to accurately determine the code coverage.

Not too much hassle, all things considered.
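Purely as an illustration of the placeholder above (the helper name is the commenter's, and this implementation is an assumption), polling could be as simple as retrying a TCP connection to the server's port:

// one possible doSomethingToPollIfServerIsListeningYet(): retry a TCP connect until it succeeds
import { connect } from 'net'

const doSomethingToPollIfServerIsListeningYet = (port = 3000, retries = 50) =>
  new Promise((resolve, reject) => {
    const attempt = (left) => {
      const socket = connect(port, () => {
        socket.end()
        resolve()
      })
      socket.on('error', () => {
        socket.destroy()
        if (left === 0) return reject(new Error('server never started listening'))
        setTimeout(() => attempt(left - 1), 200) // wait 200ms between attempts
      })
    }
    attempt(retries)
  })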

@Morikko

Morikko commented Sep 13, 2018

I launched the server separately from the tests and covered it as a long-lived server, but I ran into the problems I explained in this comment.

At our company we have switched to using supertest to wrap the Express app and launch the code from within Jest. Coverage is now fully working and is better merged between integration and unit tests.
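For anyone considering the same switch, the in-process pattern looks roughly like this (a sketch that assumes the Express app is exported as a module without calling listen(), which is what lets the coverage tool see the server code):

// app.test.js -- run the exported Express app in-process with supertest under Jest
const request = require('supertest');
const app = require('../app'); // module.exports = the express() app, no app.listen() here

describe('GET /health', () => {
  it('responds with 200', async () => {
    await request(app).get('/health').expect(200);
  });
});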

@stale stale bot added the wontfix label Jan 5, 2019
@JaKXz JaKXz added stale and removed wontfix labels Feb 8, 2019
@stale stale bot removed the stale label Feb 8, 2019
@stale

stale bot commented Apr 9, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Apr 9, 2019
@JaKXz JaKXz closed this as completed Apr 9, 2019
@istanbuljs istanbuljs deleted a comment from stale bot Apr 9, 2019
@JaKXz
Member

JaKXz commented Apr 9, 2019

Closing this because it seems to be inactive... If there are still problems, let's make new separate & specific issues with the latest version of nyc.

@coreyfarrell
Member

@thegrims please create a new issue as @JaKXz has suggested; be sure to provide all the information required by the new issue template.

@michaelsensibill

> @juancarlosfarah, I was able to get my integration test coverage by running my server using nyc:
> ./node_modules/.bin/nyc npm start
>
> Then I ran my integration tests as I normally would:
> mocha test/integration
>
> Then, quitting the server with SIGINT (ctrl+c), the code coverage is spit out.
> [...]

Hello, could you share a portion of your package.json? I followed this but could not get a coverage report for the integration tests.

@cuzzlor

cuzzlor commented May 27, 2020

@michaelsensibill I was able to get integration test coverage with pretty minimal effort (it seems to be working at least - I'm getting results showing coverage from code executed by the running server, not the test process).

  1. installed + configured https://www.npmjs.com/package/@istanbuljs/nyc-config-typescript
  2. added -r source-map-support/register to existing node commands that run typescript files using ts-node
  3. used nyc to call the existing script that uses concurrently + wait-on to start up a server and wait for it to become available before running the integration tests

example:

{
    "scripts": {
        "start-ts": "node -r dotenv/config -r ts-node/register -r source-map-support/register ./src/app.ts",
        "integration-test": "npx mocha --no-timeouts --colors --reporter mocha-multi-reporters --reporter-options configFile=reporters.json -r dotenv/config -r ts-node/register -r source-map-support/register test/integration/**/*.spec.ts",
        "start-then-integration-test": "concurrently -k -s first \"npm run start-ts\" \"wait-on tcp:3000 -t 15000 && npm run integration-test\"",
        "integration-test-with-coverage": "nyc npm run start-then-integration-test",
    }
}
