
Run benchmark automatically for performance pull requests #1297

Merged
merged 3 commits into from
Jun 14, 2020

Conversation

siku2
Member

@siku2 siku2 commented Jun 9, 2020

Description

Right now the process of benchmarking changes is rather tedious and possibly even intimidating for newcomers.
This PR aims to help with this by automatically generating a benchmark report for pull requests labelled with "performance".

You can see it in action here: siku2#2
(Please ignore all of the deleted messages and force pushes. I was using the PR during development :D)

I wanted to use a /benchmark command to run the benchmarks manually, but GitHub Actions doesn't really support that right now...

Comment on lines 42 to 50
search="{ git = \"https://github.com/jstarry/yew\", branch = \"keyed-list\" }"
replace="{ git = \"${{ github.event.pull_request.head.html_url }}\", branch = \"${{ github.event.pull_request.head.ref }}\" }"
input=$(cat frameworks/keyed/yew/Cargo.toml)
output=${input//"$search"/"$replace"}
if [[ "$input" == "$output" ]]; then
echo "ERROR: failed to configure Cargo.toml"
exit 1
fi
echo "$output" > frameworks/keyed/yew/Cargo.toml
Member Author
This is very hacky but it does the trick. If we want to make this cleaner I think we should tackle it over at yewstack/js-framework-benchmark.
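For comparison, the same one-line patch could be sketched with sed instead of bash string replacement. This is a hypothetical variant, not what the workflow uses: the repo URL and branch in `replace` are placeholders, and the demo writes to a stand-in file rather than the real Cargo.toml. `|` is used as the sed delimiter because the git URLs contain `/`.

```shell
# Hypothetical sed-based variant of the substitution above (placeholder values).
search='{ git = "https://github.com/jstarry/yew", branch = "keyed-list" }'
replace='{ git = "https://github.com/example/yew", branch = "my-branch" }'

# Stand-in for frameworks/keyed/yew/Cargo.toml
printf 'yew = %s\n' "$search" > Cargo.toml.demo

# '|' as delimiter avoids escaping the '/' characters in the URLs.
sed -i.bak "s|$search|$replace|" Cargo.toml.demo

# Fail loudly if the expected line was not found, as the original step does.
grep -qF "$replace" Cargo.toml.demo || { echo "ERROR: failed to configure Cargo.toml" >&2; exit 1; }
```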

(cd webdriver-ts && npm run build-prod)

- name: Benchmark
run: npm run bench -- --headless
Contributor

I'm unsure how reliable/accurate the benchmarks will be if run on GitHub Actions – are all containers in which workflows run guaranteed to have the same performance?

Member

Yeah, this is a good point, but my hope was that this wouldn't be an issue due to the way we compare against the baseline. We aren't comparing run #1 against run #2; we're comparing 3 different frameworks within a single benchmark run.
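The within-run comparison idea can be illustrated with a toy calculation (the timings below are invented, not real benchmark numbers): a uniform VM slowdown scales every framework's time by the same factor, so the PR-to-baseline ratio is unaffected.

```shell
# Made-up timings (ms): yew baseline = 100, yew PR = 90. If the VM runs 1.3x
# slower, both absolute times grow, but the PR/baseline ratio stays the same.
awk 'BEGIN {
  base = 100; pr = 90; slowdown = 1.3
  printf "fast VM ratio: %.2f\n", pr / base
  printf "slow VM ratio: %.2f\n", (pr * slowdown) / (base * slowdown)
}'
```

Both lines print 0.90, which is why comparing frameworks against each other within one run is more robust than comparing absolute times across runs.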

Contributor

Ah. I think GitHub Actions spins jobs up in containers, and it may be that even during a run the resources available to a container change as GitHub balances supply and demand across all jobs. I'll look at the documentation and see what they say.

Member Author

I don't think this is valid criticism of this PR, since the current process already asks users to open a PR over at js-framework-benchmark, which also runs the benchmark on GitHub Actions.

You still make a good point, but I don't think it's that much of a problem. The workflow isn't running in a Docker container; it runs in a virtual machine, and if the documentation is to be trusted, these have consistent hardware resources.

Contributor

@teymour-aldridge teymour-aldridge commented Jun 10, 2020

That's cool then; thanks!

Member Author

Continuous Benchmark has a paragraph about "Stability of Virtual Environment" which says:

As far as watching the benchmark results of examples in this repository, the amplitude of the benchmarks is about +- 10~20%.
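Taken at face value, that amplitude means a report should treat small deltas as noise. A minimal sketch of such a threshold check follows; the 20% cutoff is read off the quote above and the `classify` helper is hypothetical, not something this workflow actually implements.

```shell
# Classify a before/after timing pair against the quoted +-10~20% noise band.
# Only deltas beyond the worst-case 20% amplitude are flagged at all.
classify() {
  awk -v before="$1" -v after="$2" 'BEGIN {
    change = (after - before) / before
    if (change > 0.20)       print "possible regression"
    else if (change < -0.20) print "possible improvement"
    else                     print "within noise"
  }'
}

classify 100 105   # 5% slower  -> within noise
classify 100 130   # 30% slower -> possible regression
```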

@jstarry
Member

jstarry commented Jun 14, 2020

Sweet, this looks good for now. We'll likely need to tweak it over time, but that's fine with me.

@jstarry jstarry merged commit ab9bb1b into yewstack:master Jun 14, 2020
@siku2 siku2 deleted the benchmark branch July 8, 2020 11:42
3 participants