
Run tachometer on Travis and report as a GitHub Check #887

Merged 1 commit into master from tachometer on Apr 24, 2019
Conversation

@aomarks (Member) commented Apr 18, 2019

Test and report benchmark results for commits and PRs.

  • Runs https://github.com/PolymerLabs/tachometer on Travis.

  • For branch pushes and local PRs, the results show up as a GitHub Check, alongside the Travis build status.

  • For PRs from forks, we don't get access to Travis secure variables, so we can't report a GitHub Check. The results still show up in the Travis logs, and once the commit is merged, they will appear as a Check on that commit.

  • We round-robin between three versions of lit-html: the commit being tested, its first parent (e.g. for a pull request to master, the tip of master), and the latest version of lit-html published to NPM (see the sketch after this list).

  • The report panel looks like this:

    [screenshot: tachometer report panel]

  • I'll be working on making this more readable soon, by reporting the differences in the opposite direction, and with markdown instead of ASCII.
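As a rough illustration of how those three versions might be resolved, here is a minimal TypeScript sketch. `BenchmarkVariant` and `resolveVariants` are hypothetical names for this writeup, not tachometer's actual configuration API:

```ts
// Hypothetical sketch: resolve the three lit-html variants to benchmark.
// Names and shapes here are illustrative, not tachometer's real config API.
import {execSync} from 'child_process';

interface BenchmarkVariant {
  label: string;
  /** A git ref to check out, or an NPM spec to install, before measuring. */
  source: string;
}

function resolveVariants(commitUnderTest: string): BenchmarkVariant[] {
  // First parent of the commit under test; for a PR against master this is
  // effectively the tip of master at the time of the merge commit.
  const firstParent = execSync(`git rev-parse ${commitUnderTest}^1`)
    .toString()
    .trim();
  // Latest release published to NPM.
  const latestNpm = execSync('npm view lit-html version').toString().trim();
  return [
    {label: 'this-change', source: commitUnderTest},
    {label: 'tip-of-tree', source: firstParent},
    {label: 'npm-latest', source: `lit-html@${latestNpm}`},
  ];
}
```

The key point is that all three variants are chosen relative to the commit under test, so every benchmark run is self-contained and never depends on results recorded elsewhere.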

@aomarks requested a review from justinfagnani as a code owner on April 18, 2019
@aomarks changed the title from "DO NOT MERGE Test PR for tachometer integration" to "Run tachometer on Travis and report as a GitHub Check" on Apr 19, 2019
@justinfagnani (Collaborator) left a comment


🎉

@aomarks merged commit 4f93e35 into master on Apr 24, 2019
@aomarks deleted the tachometer branch on Apr 24, 2019
@blikblum commented
When I added a benchmark to a library, I considered running it on Travis, but I had doubts about the reliability of the results. Travis and other cloud services provision an amount of server (CPU/memory) resources for each build that may vary depending on several factors. Couldn't this difference between runs influence the benchmark results?

@justinfagnani (Collaborator) commented

@blikblum: @aomarks has done a lot of work to ensure we can reliably use the results even on Travis.

@aomarks (Member, Author) commented Apr 25, 2019

> When I added a benchmark to a library, I considered running it on Travis, but I had doubts about the reliability of the results. Travis and other cloud services provision an amount of server (CPU/memory) resources for each build that may vary depending on several factors. Couldn't this difference between runs influence the benchmark results?

Yes, that's definitely a valid concern! Because of that, the way we decided to integrate these benchmarks is that for each PR/commit, we round-robin between 50+ samples each of the PR/commit itself, its previous commit, and the latest version published to NPM. This way, the differences we report for a PR/commit all come from measurements made on the same Travis instance at the same time, instead of being based on historical data that may have been measured on a system with different performance characteristics.
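To make the interleaving concrete, here is a minimal sketch of the round-robin idea, assuming a hypothetical `measureOnce` that takes a single timing sample of one variant. Tachometer's real implementation differs, but the same-machine, same-time property is the point:

```ts
// Minimal sketch of round-robin sampling. `measureOnce` is a hypothetical
// stand-in for taking one timing sample (in ms) of a given variant.
declare function measureOnce(variant: string): Promise<number>;

async function roundRobin(
  variants: string[],
  samplesPerVariant = 50,
): Promise<Map<string, number[]>> {
  const samples = new Map<string, number[]>(
    variants.map((v): [string, number[]] => [v, []]),
  );
  // Interleave: every round takes one sample of every variant, so slow or
  // fast phases of the Travis instance hit all variants roughly equally.
  for (let round = 0; round < samplesPerVariant; round++) {
    for (const variant of variants) {
      samples.get(variant)!.push(await measureOnce(variant));
    }
  }
  return samples;
}

// Results are reported as differences between variants measured in the same
// run, never as comparisons against historical numbers from other machines.
function meanDifference(a: number[], b: number[]): number {
  const mean = (xs: number[]) => xs.reduce((sum, x) => sum + x, 0) / xs.length;
  return mean(a) - mean(b);
}
```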

neuronetio pushed a commit to neuronetio/lit-html that referenced this pull request Dec 2, 2019