Merge the benchmark fixes and enhancements to main #5437

Merged · 10 commits · Aug 21, 2023
83 changes: 83 additions & 0 deletions .github/workflows/benchmarks_report.yml
@@ -0,0 +1,83 @@
# Post any reports generated by benchmarks_run.yml .
# Separated for security:
# https://securitylab.github.com/research/github-actions-preventing-pwn-requests/

name: benchmarks-report
run-name: Report benchmark results

on:
  workflow_run:
    workflows: [benchmarks-run]
    types:
      - completed

jobs:
  download:
    runs-on: ubuntu-latest
    outputs:
      reports_exist: ${{ steps.unzip.outputs.reports_exist }}
    steps:
      - name: Download artifact
        id: download-artifact
        # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#using-data-from-the-triggering-workflow
        uses: actions/github-script@v6
        with:
          script: |
            let allArtifacts = await github.rest.actions.listWorkflowRunArtifacts({
              owner: context.repo.owner,
              repo: context.repo.repo,
              run_id: context.payload.workflow_run.id,
            });
            let matchArtifact = allArtifacts.data.artifacts.filter((artifact) => {
              return artifact.name == "benchmark_reports"
            })[0];
            if (typeof matchArtifact != 'undefined') {
              let download = await github.rest.actions.downloadArtifact({
                owner: context.repo.owner,
                repo: context.repo.repo,
                artifact_id: matchArtifact.id,
                archive_format: 'zip',
              });
              let fs = require('fs');
              fs.writeFileSync(`${process.env.GITHUB_WORKSPACE}/benchmark_reports.zip`, Buffer.from(download.data));
            };

      - name: Unzip artifact
        id: unzip
        run: |
          if test -f "benchmark_reports.zip"; then
            reports_exist=1
            unzip benchmark_reports.zip -d benchmark_reports
          else
            reports_exist=0
          fi
          echo "reports_exist=$reports_exist" >> "$GITHUB_OUTPUT"

      - name: Store artifact
        uses: actions/upload-artifact@v3
        with:
          name: benchmark_reports
          path: benchmark_reports

  post_reports:
    runs-on: ubuntu-latest
    needs: download
    if: needs.download.outputs.reports_exist == 1
    steps:
      - name: Checkout repo
        uses: actions/checkout@v3

      - name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: benchmark_reports
          path: .github/workflows/benchmark_reports

      - name: Set up Python
        # benchmarks/bm_runner.py only needs builtins to run.
        uses: actions/setup-python@v3

      - name: Post reports
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: python benchmarks/bm_runner.py _gh_post
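The posting itself is delegated to the _gh_post subcommand of benchmarks/bm_runner.py. As a rough, hypothetical sketch only (the real subcommand's logic lives in bm_runner.py and may differ), such a step could simply walk the downloaded report directory and comment each report onto its target PR via the gh CLI, which picks up GITHUB_TOKEN from the environment:

# Hypothetical sketch of a `_gh_post`-style step; the file naming and target
# selection here are assumptions, not bm_runner.py's actual implementation.
import subprocess
from pathlib import Path

REPORT_DIR = Path(".github/workflows/benchmark_reports")


def post_reports() -> None:
    for report in sorted(REPORT_DIR.glob("*.md")):
        # Assumption: each report's file stem encodes the PR number it targets.
        pr_number = report.stem
        subprocess.run(
            ["gh", "pr", "comment", pr_number, "--body-file", str(report)],
            check=True,
        )


if __name__ == "__main__":
    post_reports()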
.github/workflows/benchmarks_run.yml
@@ -1,6 +1,9 @@
-# Use ASV to check for performance regressions in the last 24 hours' commits.
+# Use ASV to check for performance regressions, either:
+# - In the last 24 hours' commits.
+# - Introduced by this pull request.

-name: benchmark-check
+name: benchmarks-run
+run-name: Run benchmarks

 on:
   schedule:
@@ -9,7 +12,7 @@ on:
   workflow_dispatch:
     inputs:
       first_commit:
-        description: "Argument to be passed to the overnight benchmark script."
+        description: "First commit to benchmark (see bm_runner.py > Overnight)."
         required: false
         type: string
   pull_request:
@@ -74,12 +77,17 @@ jobs:

       - name: Benchmark this pull request
         if: ${{ github.event.label.name == 'benchmark_this' }}
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          PR_NUMBER: ${{ github.event.number }}
         run: |
           git checkout ${{ github.head_ref }}
           python benchmarks/bm_runner.py branch origin/${{ github.base_ref }}

       - name: Run overnight benchmarks
+        id: overnight
         if: ${{ github.event_name != 'pull_request' }}
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
           first_commit=${{ inputs.first_commit }}
           if [ "$first_commit" == "" ]
@@ -92,57 +100,27 @@ jobs:
             python benchmarks/bm_runner.py overnight $first_commit
           fi

-      - name: Create issues for performance shifts
-        if: ${{ github.event_name != 'pull_request' }}
+      - name: Warn of failure
+        if: >
+          failure() &&
+          steps.overnight.outcome == 'failure'
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
-          if [ -d benchmarks/.asv/performance-shifts ]
-          then
-            cd benchmarks/.asv/performance-shifts
-            for commit_file in *
-            do
-              commit="${commit_file%.*}"
-              pr_number=$(git log "$commit"^! --oneline | grep -o "#[0-9]*" | tail -1 | cut -c 2-)
-              author=$(gh pr view $pr_number --json author -q '.["author"]["login"]' --repo $GITHUB_REPOSITORY)
-              merger=$(gh pr view $pr_number --json mergedBy -q '.["mergedBy"]["login"]' --repo $GITHUB_REPOSITORY)
-              # Find a valid assignee from author/merger/nothing.
-              if curl -s https://api.github.com/users/$author | grep -q '"type": "User"'; then
-                assignee=$author
-              elif curl -s https://api.github.com/users/$merger | grep -q '"type": "User"'; then
-                assignee=$merger
-              else
-                assignee=""
-              fi
-              title="Performance Shift(s): \`$commit\`"
-              body="
-          Benchmark comparison has identified performance shifts at
-
-          * commit $commit (#$pr_number).
-
-          Please review the report below and \
-          take corrective/congratulatory action as appropriate \
-          :slightly_smiling_face:
-
-          <details>
-          <summary>Performance shift report</summary>
-
-          \`\`\`
-          $(cat $commit_file)
-          \`\`\`
-
-          </details>
-
-          Generated by GHA run [\`${{github.run_id}}\`](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})
-          "
-              gh issue create --title "$title" --body "$body" --assignee $assignee --label "Bot" --label "Type: Performance" --repo $GITHUB_REPOSITORY
-            done
-          fi
+          title="Overnight benchmark workflow failed: \`${{ github.run_id }}\`"
+          body="Generated by GHA run [\`${{github.run_id}}\`](https://github.com/${{github.repository}}/actions/runs/${{github.run_id}})"
+          gh issue create --title "$title" --body "$body" --label "Bot" --label "Type: Performance" --repo $GITHUB_REPOSITORY
+
+      - name: Upload any benchmark reports
+        if: success() || steps.overnight.outcome == 'failure'
+        uses: actions/upload-artifact@v3
+        with:
+          name: benchmark_reports
+          path: .github/workflows/benchmark_reports

       - name: Archive asv results
         if: ${{ always() }}
         uses: actions/upload-artifact@v3
         with:
-          name: asv-report
-          path: |
-            benchmarks/.asv/results
+          name: asv-raw-results
+          path: benchmarks/.asv/results
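For orientation, the overnight mode's default window ("the last 24 hours' commits") can be recovered from git directly. A minimal sketch, assuming bm_runner.py derives its default starting commit roughly this way (its actual logic may differ):

# Illustration only: find the oldest commit on a branch within the last
# 24 hours, i.e. a default starting point when no first_commit is given.
import subprocess


def first_commit_in_last_24h(ref: str = "upstream/main") -> str | None:
    hashes = subprocess.run(
        ["git", "log", ref, "--since=24 hours ago", "--format=%H"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.split()
    # `git log` lists newest first, so the last entry is the oldest commit.
    return hashes[-1] if hashes else None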
1 change: 1 addition & 0 deletions .gitignore
@@ -32,6 +32,7 @@ pip-cache
 # asv data, environments, results
 .asv
 benchmarks/.data
+.github/workflows/benchmark_reports

 #Translations
 *.mo
1 change: 0 additions & 1 deletion benchmarks/asv.conf.json
@@ -4,7 +4,6 @@
     "project_url": "https://github.com/SciTools/iris",
     "repo": "..",
     "environment_type": "conda-delegated",
-    "conda_channels": ["conda-forge", "defaults"],
     "show_commit_url": "http://github.com/scitools/iris/commit/",
     "branches": ["upstream/main"],

4 changes: 4 additions & 0 deletions benchmarks/asv_delegated_conda.py
@@ -66,6 +66,8 @@ def __init__(
             ignored.append("`requirements`")
         if tagged_env_vars:
             ignored.append("`tagged_env_vars`")
+        if conf.conda_channels:
+            ignored.append("conda_channels")
         if conf.conda_environment_file:
             ignored.append("`conda_environment_file`")
         message = (
@@ -75,6 +77,8 @@
         log.warning(message)
         requirements = {}
         tagged_env_vars = {}
+        # All that is required to create ASV's bare-bones environment.
+        conf.conda_channels = ["defaults"]
         conf.conda_environment_file = None

         super().__init__(conf, python, requirements, tagged_env_vars)
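The two hunks above follow one pattern: the delegated environment warns about standard ASV settings it cannot honour, then substitutes bare-bones values (a single "defaults" channel, no environment file) so the parent class can still construct a minimal placeholder environment. A condensed sketch of that pattern; this is not Iris's actual class, and the asv.plugins.conda.Conda base and constructor signature are assumptions:

from asv.plugins.conda import Conda  # assumed base class


class DelegatedCondaSketch(Conda):
    def __init__(self, conf, python, requirements, tagged_env_vars):
        # Collect the settings this environment will not honour, then warn.
        ignored = []
        if requirements:
            ignored.append("requirements")
        if tagged_env_vars:
            ignored.append("tagged_env_vars")
        if conf.conda_channels:
            ignored.append("conda_channels")
        if conf.conda_environment_file:
            ignored.append("conda_environment_file")
        if ignored:
            print(f"Ignored by delegated environment: {', '.join(ignored)}")
        # Bare-bones substitutes: just enough for ASV to build a valid
        # (but unused) environment, nothing more.
        conf.conda_channels = ["defaults"]
        conf.conda_environment_file = None
        super().__init__(conf, python, {}, {})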
@@ -30,7 +30,7 @@
 class MixinCombineRegions:
     # Characterise time taken + memory-allocated, for various stages of combine
     # operations on cubesphere-like test data.
-    params = [4, 500]
+    params = [50, 500]
     param_names = ["cubesphere-N"]

     def _parametrised_cache_filename(self, n_cubesphere, content_name):
2 changes: 1 addition & 1 deletion benchmarks/benchmarks/load/__init__.py
@@ -27,7 +27,7 @@ class LoadAndRealise:
     # For data generation
     timeout = 600.0
     params = [
-        [(2, 2, 2), (1280, 960, 5), (2, 2, 1000)],
+        [(50, 50, 2), (1280, 960, 5), (2, 2, 1000)],
         [False, True],
         ["FF", "PP", "NetCDF"],
     ]
2 changes: 1 addition & 1 deletion benchmarks/benchmarks/load/ugrid.py
@@ -77,7 +77,7 @@ class DataRealisation:
     warmup_time = 0.0
     timeout = 300.0

-    params = [1, int(2e5)]
+    params = [int(1e4), int(2e5)]
     param_names = ["number of faces"]

     def setup_common(self, **kwargs):
2 changes: 1 addition & 1 deletion benchmarks/benchmarks/save.py
@@ -21,7 +21,7 @@


 class NetcdfSave:
-    params = [[1, 600], [False, True]]
+    params = [[50, 600], [False, True]]
     param_names = ["cubesphere-N", "is_unstructured"]

     def setup(self, n_cubesphere, is_unstructured):
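All four parameter changes above raise the smallest value in each grid, so even the "small" case exercises a realistically sized problem. For readers unfamiliar with ASV: params and param_names are plain class attributes, and ASV runs each setup/time_* method once per combination of the parameter lists, passing the values as arguments. A minimal self-contained sketch (names are illustrative, not from the PR):

import numpy as np


class ParamsSketch:
    # ASV benchmarks every combination: 2 sizes x 2 flags = 4 runs.
    params = [[50, 600], [False, True]]
    param_names = ["cubesphere-N", "is_unstructured"]

    def setup(self, n, is_unstructured):
        # Stand-in for building cubesphere-like test data of size N.
        self.data = np.zeros((6, n, n))

    def time_save(self, n, is_unstructured):
        # Stand-in for the operation under test; ASV times this body.
        self.data.sum()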