
Further investigate flaky sequential/test-performance #25719

Closed
gireeshpunathil opened this issue Jan 26, 2019 · 3 comments
Labels
flaky-test Issues and PRs related to the tests with unstable failures on the CI.

Comments

@gireeshpunathil
Member

  • Version: master
  • Platform: arm
  • Subsystem: test

#23291 reported a flake in sequential/test-performance; the problem was subsequently traced to the test itself, which was improved and fixed in #25695. However, the fact that the test started failing all of a sudden, and very frequently at that, warrants an investigation into the root cause: what changed the timing of bootup? Commit d3806f9 has been identified as the change agent, and the change envisaged in that commit was a positive one. Effort is required to analyse that commit and draw conclusions about its effects.

@gireeshpunathil
Member Author

./node -e "console.log(require('perf_hooks').performance.nodeTiming.bootstrapComplete)"

without d3806f9: 51.95341110229492
with d3806f9:    38.98078536987305

So this commit actually improved the bootup time rather than regressing it!
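(Editorial sketch, not part of the original comment: a single bootstrapComplete sample on a loaded ARM board can be noisy, so one might average a few runs. This only assumes the documented perf_hooks.performance.nodeTiming.bootstrapComplete property and child_process.execFileSync.)

'use strict';
const { execFileSync } = require('child_process');

const script =
  "console.log(require('perf_hooks').performance.nodeTiming.bootstrapComplete)";

const samples = [];
for (let i = 0; i < 5; i++) {
  // Launch the same node binary that is running this script.
  const out = execFileSync(process.execPath, ['-e', script], { encoding: 'utf8' });
  samples.push(Number(out));
}

const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
console.log(`bootstrapComplete (ms): ${samples.map((s) => s.toFixed(2)).join(', ')}`);
console.log(`mean: ${mean.toFixed(2)} ms`);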

Looking at what happens in the test, though, I don't fully understand it yet, but I can see at least one inconsistency:

assert(Math.abs(performance.timeOrigin - Date.now()) < 15000);

where performance.timeOrigin - Date.now() is always going to be a small negative number, so the assertion always passes.
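(To make the arithmetic concrete, an editorial sketch rather than code from the test: performance.timeOrigin is a wall-clock timestamp in milliseconds since the epoch, so its distance from Date.now() is roughly the process's age, far below the 15000 ms bound in any realistic run.)

// timeOrigin is an epoch-relative timestamp in milliseconds, so its gap to
// Date.now() is roughly how long the process has been alive.
const { performance } = require('perf_hooks');

const gap = Date.now() - performance.timeOrigin; // small positive number (ms)
console.log(`timeOrigin: ${performance.timeOrigin}`);
console.log(`Date.now(): ${Date.now()}`);
console.log(`gap (ms):   ${gap.toFixed(2)}`);
console.log(`assertion:  ${Math.abs(performance.timeOrigin - Date.now()) < 15000}`);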

In short, the commit's purpose was to improve the bootup time, and it is confirmed to do so. No further action is needed here, so closing.

@refack added the flaky-test label on Jan 30, 2019
@ofrobots
Contributor

FYI, I am continuing to see a flake in this test: https://ci.nodejs.org/job/node-test-binary-arm/5816/RUN_SUBSET=3,label=pi1-docker/testReport/junit/(root)/test/sequential_test_performance/

IIRC, the test asserts that Node.js can complete bootstrap in 1110ms. I don't know if this is a reasonable assertion to have in the CI. It will always depend on machine capability and load.
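(For reference, the check being questioned has roughly this shape; this is a hedged reconstruction based on the comment above, not the actual test source, and the 1110 ms figure is the one quoted here.)

// Rough illustration of a hard bootstrap-time bound of the kind discussed in
// this comment; the 1110 ms threshold is taken from the comment above, not
// from the actual sequential/test-performance source.
const assert = require('assert');
const { performance } = require('perf_hooks');

// bootstrapComplete is milliseconds since process start; a fixed upper bound
// like this is inherently sensitive to machine capability and load.
assert(performance.nodeTiming.bootstrapComplete < 1110);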

@Trott
Member

Trott commented Jan 31, 2019

> FYI, I am continuing to see a flake in this test: https://ci.nodejs.org/job/node-test-binary-arm/5816/RUN_SUBSET=3,label=pi1-docker/testReport/junit/(root)/test/sequential_test_performance/

That CI run was triggered by https://ci.nodejs.org/job/node-test-pull-request/20435/ which is itself a Resume Build of https://ci.nodejs.org/job/node-test-pull-request/20310/ started on January 24, before the fix landed on January 25. So that's a failure from before the fix. Use 'Rebuild' instead of 'Resume Build', or start a whole new CI run, to pick up the fix.
