Add Benchmarking CI #420
Conversation
Force-pushed from 3debcc9 to 6997e40.
The culprit was whitespace... This is (one of the many reasons) why Julia > Python.
Force-pushed from f8c7b21 to 8b19a91.
@maleadt Do you know how I can set up the Buildkite token on the GitHub Actions side?
I can add a token. Which permissions do you need?
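For illustration only (not necessarily how this PR wires things up): such a token would typically be stored as a GitHub Actions repository secret and read from the environment by whatever triggers the Buildkite build. A minimal Julia sketch against Buildkite's REST API, with placeholder organization/pipeline slugs and a placeholder secret name:

```julia
# Hedged sketch: trigger a Buildkite build from a GitHub Actions step.
# The token is assumed to be exposed as the BUILDKITE_API_TOKEN secret;
# "example-org" and "example-pipeline" are placeholders, not this repo's setup.
using HTTP, JSON3

token = ENV["BUILDKITE_API_TOKEN"]
url = "https://api.buildkite.com/v2/organizations/example-org/pipelines/example-pipeline/builds"

body = JSON3.write(Dict(
    "commit"  => ENV["GITHUB_SHA"],   # provided automatically by GitHub Actions
    "branch"  => "main",
    "message" => "Benchmark run triggered from GitHub Actions",
))

resp = HTTP.post(url,
                 ["Authorization" => "Bearer $token",
                  "Content-Type"  => "application/json"],
                 body)
@info "Buildkite responded" status = resp.status
```

The required API permission then follows from the call being made; a token that can only create builds on the benchmark pipeline would be enough for a sketch like this.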
Force-pushed from 721ef62 to b40d7f1.
Do we want to save the […]? For testing I'll use the median for now, since that's what https://github.com/LuxDL/LuxLib.jl/pull/128/files is doing.
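For reference, a minimal sketch of how the median estimate can be pulled out of a BenchmarkTools.jl run and saved; the benchmark contents and the output file name are illustrative, not taken from this PR:

```julia
# Minimal sketch, assuming BenchmarkTools.jl; the benchmark and file name are made up.
using BenchmarkTools

suite = BenchmarkGroup()
suite["copy"] = @benchmarkable copy(x) setup = (x = rand(Float32, 1024))

tune!(suite)
results = run(suite; verbose = false)

# Reduce every trial to its median estimate, as in the LuxLib PR linked above.
medians = median(results)
for (name, est) in leaves(medians)
    println(join(name, "/"), ": ", BenchmarkTools.prettytime(time(est)))
end

# BenchmarkTools can serialize the estimates to JSON for a CI job to pick up later.
BenchmarkTools.save("benchmarkresults.json", medians)
```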
Force-pushed from dbc3eb2 to f367f16.
This is mostly ready. I assume benchmarks will be posted to new PRs after this is merged to master? A few uncertainties:
Force-pushed from 51fcec4 to 680a134.
Force-pushed from b8e2424 to e97d4d1.
Force-pushed from e97d4d1 to cfd9499.
I guess we'll only see this in action on subsequent PRs?
```yaml
queue: "juliaecosystem"
os: "macos"
arch: "aarch64"
macos_version: "15.0"
```
Why do you need macOS 15 for the benchmarks?
I initially wanted to see the performance impact of logging. It seems that it's now only enabled when actually used, so it might not be worth it.
Looking at the Lux PR I based this on, it seems like it.
Using a JuliaGPU branch instead of my fork. See #419 for the start of this branch.
Original text:
Copying over the benchmarks from CUDA.jl.
I'm not sure if I converted them properly. Some functions seem to be missing Metal implementations (like `reverse`).

The final (and biggest) problem is how inconsistent these results have been. Simply rerunning the benchmarks on the same code gives some huge performance differences: there is always at least one benchmark that is >20% slower or faster. The benchmarks seem to be much more consistent on the runners; only a few show large variance.
See #418 (comment)
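For context, a rough sketch of what one ported benchmark group could look like, assuming Metal.jl's `MtlArray` and `Metal.@sync` together with BenchmarkTools.jl; the group names, array sizes, and run parameters are illustrative, not what this PR actually uses:

```julia
# Illustrative sketch: a CUDA.jl-style array benchmark ported to Metal.jl.
# Assumes Metal.jl (MtlArray, Metal.@sync) and BenchmarkTools.jl are available.
using Metal, BenchmarkTools

group = BenchmarkGroup()
n = 1_000_000

group["broadcast"] = @benchmarkable Metal.@sync(y .= x .+ 1f0) setup = (
    x = MtlArray(rand(Float32, $n)); y = similar(x))

# `reverse` appears to lack a Metal implementation, so it is left out here:
# group["reverse"] = @benchmarkable Metal.@sync(reverse(x)) setup = (x = MtlArray(rand(Float32, $n)))

# A longer time budget per benchmark may help with the run-to-run variance seen
# locally; this number is a placeholder, not what the CI is configured with.
results = run(group; seconds = 5, verbose = false)
```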
Todo: