Bump actions/upload-artifact from 4.3.6 to 4.4.0 #1491
Workflow file for this run
name: "Serverless Benchmarks" | |
on: | |
pull_request: | |
paths: | |
- 'cmd/serverless/**' | |
- 'pkg/serverless/**' | |
- '.github/workflows/serverless-benchmarks.yml' | |
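
# DD_API_KEY appears to be a placeholder: the benchmarks only need the
# variable to be set, not a real key.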
env:
  DD_API_KEY: must-be-set
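
# One concurrency group per PR: a new push cancels any benchmark run still in
# progress for the same pull request.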
concurrency:
  group: ${{ github.workflow }}/PR#${{ github.event.pull_request.number }}
  cancel-in-progress: true
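
# Start from zero GITHUB_TOKEN permissions; the summary job re-requests
# pull-requests: write, the only permission this workflow needs.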
permissions: {}

jobs:
  baseline:
    name: Baseline
    runs-on: ubuntu-latest
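    # Expose the resolved base SHA so the summary job can name it in the PR comment.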
    outputs:
      sha: ${{ steps.prepare.outputs.sha }}
    steps:
      - name: Checkout ${{ github.base_ref }}
        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4
        with:
          ref: ${{ github.base_ref }}
          persist-credentials: false
      - name: Install Go
        uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
        with:
          go-version: stable
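      # Record the exact commit under test and fetch module dependencies up front.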
      - name: Prepare working tree
        id: prepare
        run: |
          echo "sha=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT
          go get ./...
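      # -run='^$' matches no test names, so only benchmarks execute;
      # -count=10 repeats each benchmark so benchstat can test for significance.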
      - name: Run benchmark
        env:
          TEMP_RUNNER: ${{runner.temp}}
        run: |
          go test -tags=test -run='^$' -bench=StartEndInvocation -count=10 -benchtime=500ms -timeout=60m \
            ./pkg/serverless/... | tee "$TEMP_RUNNER"/benchmark.log
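      # Hand the raw log to the summary job; fail loudly if the benchmark produced nothing.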
      - name: Upload result artifact
        uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
        with:
          name: baseline.log
          path: ${{runner.temp}}/benchmark.log
          if-no-files-found: error
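
  # Mirrors the baseline job, but benchmarks the PR commit (github.sha)
  # instead of the base branch.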
  current:
    name: Current
    runs-on: ubuntu-latest
    outputs:
      sha: ${{ steps.prepare.outputs.sha }}
    steps:
      - name: Checkout ${{ github.ref }}
        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4
        with:
          ref: ${{ github.sha }}
          persist-credentials: false
      - name: Install Go
        uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
        with:
          go-version: stable
      - name: Prepare working tree
        id: prepare
        run: |
          echo "sha=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT
          go get ./...
      - name: Run benchmark
        env:
          TEMP_RUNNER: ${{runner.temp}}
        run: |
          go test -tags=test -run='^$' -bench=StartEndInvocation -count=10 -benchtime=500ms -timeout=60m \
            ./pkg/serverless/... | tee "$TEMP_RUNNER"/benchmark.log
      - name: Upload result artifact
        uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
        with:
          name: current.log
          path: ${{runner.temp}}/benchmark.log
          if-no-files-found: error
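
  # Fan-in job: waits for both benchmark runs, diffs them with benchstat, and
  # posts the result as a sticky PR comment.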
  summary:
    name: Summary
    runs-on: ubuntu-latest
    needs: [baseline, current]
    permissions:
      pull-requests: write
    steps:
      - name: Install Go
        uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
        with:
          go-version: stable
          cache: false
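      # benchstat compares repeated benchmark runs and reports which deltas are
      # statistically significant. Note it is installed @latest, not pinned
      # like the SHA-pinned actions above.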
      - name: Install benchstat
        run: |
          go install golang.org/x/perf/cmd/benchstat@latest
      - name: Download baseline artifact
        uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
        with:
          name: baseline.log
          path: baseline
      - name: Download current artifact
        uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7
        with:
          name: current.log
          path: current
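      # -row /event gives one table row per payload type (the 'event'
      # sub-benchmark key). The analyze<<EOF ... EOF pair is GitHub's delimiter
      # syntax for writing a multi-line value to $GITHUB_OUTPUT.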
      - name: Analyze results
        id: analyze
        run: |
          benchstat -row /event baseline/benchmark.log current/benchmark.log | tee analyze.txt
          echo "analyze<<EOF" >> $GITHUB_OUTPUT
          cat analyze.txt >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT
      - name: Post comment
        uses: marocchino/sticky-pull-request-comment@331f8f5b4215f0445d3c07b4967662a32a2d3e31 # v2.9.0
        with:
          header: serverless-benchmarks
          recreate: true
          message: |
            ## Serverless Benchmark Results

            `BenchmarkStartEndInvocation` comparison between ${{ needs.baseline.outputs.sha }} and ${{ needs.current.outputs.sha }}.

            <details>
            <summary>tl;dr</summary>

            Use these benchmarks as an insight tool during development.

            1. Skim down the `vs base` column in each chart. If there is a `~`, there was no statistically significant change to that benchmark. Otherwise, ensure the estimated percent change is either negative or very small.
            2. The last row of each chart is the `geomean`. Ensure this percentage is either negative or very small.

            </details>

            <details>
            <summary>What is this benchmarking?</summary>

            The [`BenchmarkStartEndInvocation`](https://github.com/DataDog/datadog-agent/blob/main/pkg/serverless/daemon/routes_test.go) benchmark measures the time it takes to call the `start-invocation` and `end-invocation` endpoints. For universal instrumentation languages (Dotnet, Golang, Java, Ruby), this represents the majority of the duration overhead added by our tracing layer.

            The benchmark is run with a large variety of Lambda request payloads. In the charts below, there is one row for each event payload type.

            </details>

            <details>
            <summary>How do I interpret these charts?</summary>

            The charts below come from [`benchstat`](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat). They represent the statistical change in _duration (sec/op)_, _memory overhead (B/op)_, and _allocations (allocs/op)_.

            The benchstat docs explain how to interpret these charts.

            > Before the comparison table, we see common file-level configuration. If there are benchmarks with different configuration (for example, from different packages), benchstat will print separate tables for each configuration.
            >
            > The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base". ... The p-value measures how likely it is that any differences were due to random chance (i.e., noise). The "~" means benchstat did not detect a statistically significant difference between the two inputs. ...
            >
            > Note that "statistically significant" is not the same as "large": with enough low-noise data, even very small changes can be distinguished from noise and considered statistically significant. It is, of course, generally easier to distinguish large changes from noise.
            >
            > Finally, the last row of the table shows the geometric mean of each column, giving an overall picture of how the benchmarks changed. Proportional changes in the geomean reflect proportional changes in the benchmarks. For example, given n benchmarks, if sec/op for one of them increases by a factor of 2, then the sec/op geomean will increase by a factor of ⁿ√2.

            </details>

            <details>
            <summary>I need more help</summary>

            First off, do not worry if the benchmarks are failing. They are not tests. The intention is for them to be a tool for you to use during development.

            If you would like a hand interpreting the results, come chat with us in `#serverless-agent` in the internal DataDog Slack or in `#serverless` in the [public DataDog Slack](https://chat.datadoghq.com/). We're happy to help!

            </details>

            <details>
            <summary>Benchmark stats</summary>

            ```
            ${{ steps.analyze.outputs.analyze }}
            ```

            </details>
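
To reproduce the comparison locally, something like the following sketch should mirror the workflow's commands. It assumes a datadog-agent checkout with Go installed; `main` and `my-branch` are placeholder refs, so substitute your actual base branch and PR branch.

```
# Benchmark the base ref ("main" is a placeholder).
git checkout main
go test -tags=test -run='^$' -bench=StartEndInvocation -count=10 -benchtime=500ms \
  -timeout=60m ./pkg/serverless/... | tee /tmp/baseline.log

# Benchmark your change ("my-branch" is a placeholder).
git checkout my-branch
go test -tags=test -run='^$' -bench=StartEndInvocation -count=10 -benchtime=500ms \
  -timeout=60m ./pkg/serverless/... | tee /tmp/current.log

# Compare the two runs, one row per event payload type.
go install golang.org/x/perf/cmd/benchstat@latest
benchstat -row /event /tmp/baseline.log /tmp/current.log
```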