
Add std-libs benchmarks GH workflow #7597

Merged

Conversation


@Akirathan Akirathan commented Aug 17, 2023

Closes #7490

Pull Request Description

Adds a "Benchmark Standard Libraries" workflow that runs the benchmarks invoked via sbt std-benchmarks/bench.

The documentation for the workflow is in benchmarks.md.

Important Notes

  • Tweak the warmup / measurement configurations for some benchmarks so that they have enough time to warm up.

Checklist

Please ensure that the following checklist has been satisfied before submitting the PR:

  • The documentation has been updated, if necessary.
  • Screenshots/screencasts have been attached, if there are any visual changes. For interactive or animated visual changes, a screencast is preferred.
  • All code follows the Scala, Java, and Rust style guides. In case you are using a language not listed above, follow the Rust style guide.
  • All code has been tested:
    • Unit tests have been written where possible.
    • If the GUI codebase was changed, the GUI was tested when built using ./run ide build.

@Akirathan Akirathan self-assigned this Aug 17, 2023
@Akirathan Akirathan requested a review from mwu-tow as a code owner August 17, 2023 10:39
Member Author


This is essentially a clone of engine-benchmark.yml (which used to be named benchmark.yml); the only difference is that instead of ./run backend benchmark runtime, it invokes ./run backend benchmark enso-jmh.

@@ -100,6 +101,9 @@
docker-entrypoint.sh:
Dockerfile:
simple-library-server/:
std-bits/:
benchmarks/:
bench-report.xml:
Member Author


This is where sbt std-benchmarks/bench saves its output, in the same format as sbt runtime/bench. After the benchmarks have run, I would like to upload this file as an artifact to the GH run.

Contributor


Can you add this to the docs?

Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Which docs do you mean? This is CI-internal stuff that just instructs the CI job where to find the files that should be uploaded as artifacts.

@Akirathan
Member Author

@mwu-tow I would like to try manually running the new "Benchmark Standard libraries" action to ensure that it does what it should do - run benchmarks and upload an XML report artifact. For that, we should probably merge this PR first, right?

@mwu-tow
Contributor

mwu-tow commented Aug 17, 2023

> @mwu-tow I would like to try manually running the new "Benchmark Standard libraries" action to ensure that it does what it should do - run benchmarks and upload an XML report artifact. For that, we should probably merge this PR first, right?

I think it should be possible to manually dispatch a workflow using the API or their CLI wrapper: https://cli.github.com/manual/gh_workflow_run
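For reference, a manual dispatch with the gh CLI could look roughly like this (the workflow file name here is an assumption; the actual name in the repository may differ):

```sh
# Trigger the workflow on a given branch (workflow file name assumed).
gh workflow run std-libs-benchmark.yml --ref develop

# List recent runs of that workflow to check the result.
gh run list --workflow=std-libs-benchmark.yml
```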

@Akirathan
Member Author

Akirathan commented Aug 18, 2023

According to https://stackoverflow.com/a/67840292/4816269, one first has to, at least temporarily, add an on: pull_request trigger to the new workflow definition so that the workflow runs at least once. After that, gh workflow list will recognize the new workflow.
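A sketch of what that trigger section might look like (these are standard GitHub Actions keys; the pull_request trigger is only needed temporarily until the workflow has been registered):

```yaml
on:
  workflow_dispatch:  # allows manual runs via the UI or `gh workflow run`
  pull_request:       # temporary: ensures the workflow runs once and gets registered
```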


@Akirathan Akirathan added the CI: Clean build required CI runners will be cleaned before and after this PR is built. label Aug 21, 2023
@Akirathan Akirathan added the CI: No changelog needed Do not require a changelog entry for this PR. label Aug 21, 2023
Member

@JaroslavTulach JaroslavTulach left a comment


Looks like everything is working, and the results of the workflow run seem fine too.

@Akirathan Akirathan removed the CI: Clean build required CI runners will be cleaned before and after this PR is built. label Aug 22, 2023
@Akirathan
Member Author

@mwu-tow I believe this PR is ready to merge, since the last "Benchmark Standard Libraries" workflow run was successful and its output seems reasonable. Please take a look at the build-related modifications.

Comment on lines +61 to +66
- if: failure() && runner.os == 'Windows'
name: List files if failed (Windows)
run: Get-ChildItem -Force -Recurse
- if: failure() && runner.os != 'Windows'
name: List files if failed (non-Windows)
run: ls -lAR
Member


Btw. @mwu-tow, are we ever using the results of this?

Member Author


Good point. If you accidentally click on the output of this step, the UI freezes for a while - that is all I know. But I guess it might be useful sometimes when debugging CI jobs.

Comment on lines 34 to +35
  collect_benches = Bench.build builder->
-     vector_size = 10000
+     vector_size = 1000 * 1000
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

What was dictating these changes?

Member Author


From the PR description:

> Tweak the warmup / measurement configurations for some benchmarks so that they have enough time to warm up.

Ideally, we want roughly 5 seconds of warmup and 5 seconds of measurement for each benchmark, because we want the whole job to run no longer than 90 minutes while still keeping a reasonable number of benchmarks. From the perspective of performance measurement, 5 seconds of warmup is far too little, but it is sufficient for our purposes - we do not strive to publish a paper about the Engine performance, after all.

I was driven by a rule of thumb: the score of each benchmark should be roughly between 0.5 ms/op and 500 ms/op. That ensures that the benchmark is invoked roughly at least 20 times in the warmup phase.

In this particular case, the benchmarks in Array_Proxy_Bench were too fast - their score was lower than 0.005 ms/op - so I increased the data size.
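As a back-of-the-envelope illustration of that rule of thumb, using only the numbers mentioned above (a ~10-second budget covering the 5 s warmup plus 5 s measurement; this accounting is my assumption, not stated in the comment):

```python
# Rough estimate of how many times a benchmark body runs within the time
# budget, given its per-operation score in ms/op.
BUDGET_MS = 10_000  # ~5 s warmup + ~5 s measurement, as described above

def invocations_in_budget(score_ms_per_op: float) -> float:
    """Approximate number of invocations that fit into the time budget."""
    return BUDGET_MS / score_ms_per_op

# At the upper bound of the rule of thumb, 500 ms/op, the benchmark still
# runs about 20 times.
print(invocations_in_budget(500))    # 20.0

# At the lower bound, 0.5 ms/op, it runs 20,000 times - plenty; a score
# below 0.005 ms/op (as in Array_Proxy_Bench) means each invocation is so
# short that per-call overhead dominates, hence the larger data size.
print(invocations_in_budget(0.5))    # 20000.0
```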

@Akirathan Akirathan force-pushed the wip/akirathan/7490-Create-a-GitHub-action-for-std-benchmarks branch from d1e7550 to e3ca5c9 Compare August 22, 2023 17:02
@Akirathan Akirathan merged commit c32bfad into develop Aug 23, 2023
@Akirathan Akirathan deleted the wip/akirathan/7490-Create-a-GitHub-action-for-std-benchmarks branch August 23, 2023 10:18
Labels
CI: No changelog needed
Development

Successfully merging this pull request may close these issues.

Create a GitHub action for std-benchmarks
8 participants