Includes simulation tips section in the docs #1543
Conversation
using Oceananigans.Grids: Center, Face
using Oceananigans.Fields: KernelComputedField

@inline ψ²(i, j, k, grid, ψ) = @inbounds ψ[i, j, k]^2
I don't know much about `@inbounds` in Julia, but from what I can tell the user doesn't need to use it that much (except when creating `KernelComputedField`s, which has many other difficulties). So I'd say we don't need to touch on that here (maybe in the docs for `KernelComputedField`). But I might be missing something.
Sadly, I don't remember when to use `@inbounds` and when not to use it. @ali-ramadhan explained it to me once. Maybe he could help by adding a sentence? I would certainly use such a sentence when writing code, as at the moment it seems mysterious to me when to use it and when not to.
It just tells Julia not to bother checking whether the index is beyond the array's limits. At least that's what I think it does...
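For what it's worth, a minimal sketch of what that looks like outside of any kernel code (not from the PR; the names are made up for illustration):

```julia
# @inbounds skips Julia's bounds checking for the annotated expression.
# Only safe when the indices are guaranteed valid, as they are here.
function unsafe_sum(a)
    s = zero(eltype(a))
    for i in eachindex(a)        # eachindex only yields valid indices of `a`
        @inbounds s += a[i]      # no bounds check on this access
    end
    return s
end
```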
I think this is a really nice contribution and I learned some things along the way, so thanks.
I still have a lot to learn about efficiency, so I'll let someone else who is more knowledgeable on the topic approve and add more feedback.
docs/src/simulation_tips.md (Outdated)
In practice it's hard to say whether inlining a function will bring runtime benefits _with certainty_, since Julia already inlines some small functions automatically. However, it is generally a good idea to at least investigate this aspect in your code as the benefits can potentially be significant.
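As an aside (not part of the quoted docs), one way to investigate this is to inspect the generated code of a caller and check whether the call to the small helper has disappeared; a rough sketch with purely illustrative names:

```julia
using InteractiveUtils  # provides @code_llvm

@inline ϕ²(i, j, k, grid, ϕ) = @inbounds ϕ[i, j, k]^2

caller(ϕ) = ϕ²(1, 1, 1, nothing, ϕ)

# If ϕ² was inlined, no separate call to it should appear in the emitted code.
@code_llvm caller(rand(2, 2, 2))
```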
It may be worth mentioning that KernelAbstractions "force inlines" all GPU code. I can't remember if CPU code is also force inlined. @vchuravy and @jakebolewski have given advice regarding the use of `@inline` in the past. Force-inlining suggests that we don't always need the `@inline` annotation, but for some reason it still pervades all our code... ?
I added a suggestion to mention KernelAbstractions.jl, but it might not be useful to discuss inlining behavior with new users (many of whom are completely new to Julia).

My (probably unpopular) opinion is that I like to explicitly `@inline` functions to tell people reading my code that this is a performance-critical function I intend to have inlined whenever possible (even if it'll be force-inlined by the compiler). But this is probably a debate for another place.
I think the docs on inlining in Julia are a bit scarce. There's a lot spread out in Discourse comments. For the time being I think this is the limit of my knowledge. But feel free to adapt the text if you think it needs more details.
I think it's fine to recommend `@inline` for style / clarity. But we shouldn't mislead people in the docs by saying that `@inline` is necessary for performance if it has no effect.
@glwagner are you suggesting that `@inline` doesn't impact performance?
I don't think it does for GPU, but it might for CPU!
Could be fun to illustrate these points with code snippets that do some benchmarking that users can use for themselves (some day).
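In that spirit, a minimal sketch of the kind of benchmark that could go in the docs someday (illustrative only; assumes BenchmarkTools.jl, and all names are made up):

```julia
using BenchmarkTools

@inline   square_inlined(x)  = x^2   # explicitly request inlining
@noinline square_noinline(x) = x^2   # explicitly forbid inlining

sum_inlined(a)  = sum(square_inlined,  a)
sum_noinline(a) = sum(square_noinline, a)

a = rand(10^6)
@btime sum_inlined($a)    # interpolate `a` so setup isn't timed
@btime sum_noinline($a)
```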
Thanks for starting this @tomchor.
Docs should show up at https://clima.github.io/OceananigansDocumentation/previews/PR1543/ once a commit has been made after the PR has been opened.
docs/src/simulation_tips.md (Outdated)
### Arrays in GPUs are usually different from arrays in CPUs

Talk about converting to CuArrays and viewing CuArrays as well!
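For reference, a rough sketch of the kind of conversion and viewing that note is pointing at (illustrative only, using CUDA.jl directly rather than anything Oceananigans-specific):

```julia
using CUDA

a_cpu = rand(Float64, 16, 16)
a_gpu = CuArray(a_cpu)        # copy the data to GPU memory
b_gpu = a_gpu .^ 2            # broadcasting executes as a GPU kernel
v_gpu = view(a_gpu, :, 1)     # views of CuArrays stay on the GPU
b_cpu = Array(b_gpu)          # copy the result back to the CPU
```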
Should we also talk about CUDA scalar operations (https://juliagpu.github.io/CUDA.jl/dev/usage/workflow/#UsageWorkflowScalar) and how/when to use `CUDA.allowscalar` and `CUDA.@allowscalar`?
My guess is yes, but I know nothing about this topic, so some collaboration would be very much appreciated :)
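To make the question concrete, a minimal sketch of what such an example might show (illustrative only):

```julia
using CUDA

a = CuArray(rand(10))
CUDA.allowscalar(false)    # keep (slow) scalar indexing disallowed globally
# a[3]                     # would now throw an error on the GPU array
CUDA.@allowscalar a[3]     # explicitly permit a single scalar read, e.g. for a quick check
```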
@ali-ramadhan's suggestion Co-authored-by: Ali Ramadhan <[email protected]>
Co-authored-by: Gregory L. Wagner <[email protected]>
Co-authored-by: Ali Ramadhan <[email protected]>
This gives me a 404 error!
I believe you have to open the PR (not have it as "Draft") and then it'll push the preview...
I finished the first draft (of the topics I know about at least). I haven't written the very last subsection though, which is about viewing/using arrays in GPU runs because honestly I don't know enough to write about it. I know there's a function called
Thoughts?
I'm wondering if we should provide a separate page on "Using GPUs"? While the simulation tips for CPUs are really performance optimizations that are optional, the GPU simulation tips are mostly required to run without errors. There are a few other things that are required to get things working on GPUs --- for example, Oceananigans must be built (not just run) with a GPU / CUDA installation available; this is a common pitfall on clusters.
That's a good point. Although I think we could avoid creating another page and put that information in the "Using GPUs" page, so that things are more condensed. |
🤦 there's already Using GPUs of course, silly me...
We have a preview!
It may be useful to know that there are some kernels already defined for commonly-used diagnostics in packages that are companions to Oceananigans. For example [Oceanostics.jl](https://github.com/tomchor/Oceanostics.jl/blob/13d2ba5c48d349c5fce292b86785ce600cc19a88/src/TurbulentKineticEnergyTerms.jl#L23-L30)
I'm a bit hesitant to point users to a package that is neither registered nor tested (mostly the latter, tbh). What do others think?
Oceanostics.jl is registered. It could do with some actual tests but plenty of Oceananigans.jl functionality is not tested.
I think it's good to link to packages outside of Oceananigans.jl to give readers the idea that creating their own packages for extra functionality is okay (and actually encouraged!).
I would even advocate for creating an extra page in the docs to link to other packages related to Oceananigans.jl and being liberal with the packages we include.
Co-authored-by: Navid C. Constantinou <[email protected]>
Co-authored-by: Navid C. Constantinou <[email protected]>
Co-authored-by: Navid C. Constantinou <[email protected]>
Thanks for the help with this PR, @navidcy! Much appreciated
Ah, that's the limit of my knowledge on GPUs. We could either drop that section for now or someone else could write it. I really don't think I know enough to write that section.
I added some quick text under the last section. Should be better than nothing.
Happy to merge this as-is now since it's already a great addition to the docs that we can continue expanding in the future as common GPU issues pop up.
I think GPU tests randomly crapped out (not important for this PR) but docs are still building, so as long as the PR docs preview looks good, then this should be good to merge!
@ali-ramadhan currently waiting for the preview to load so that I can merge, but I don't think it'll do that with the GPU test failing. How do we fix that?
@tomchor Ah are you okay with committing my suggestion before merging? I think docs did get built and deployed: Working on getting you guys access to Buildkite so you can control it as well. I'm not a Buildkite admin so I have to ask one to invite other people...
Co-authored-by: Ali Ramadhan <[email protected]>
Sorry, I forgot to do that!
Everything passed 🎉 Thanks so much for adding this great information to the docs @tomchor!
Thanks, everyone. I'll merge this for now and we can improve on it later based on feedback from users. @glwagner I'm thinking of opening another PR soon to address your comment about using GPUs. Like I mentioned, some of this info is already available in the "Using GPUs" page, but maybe it's useful to expand a bit on it and link this newly created page there.
Closes #1478
As mentioned in #1542, I couldn't build the docs locally, so I probably made some markdown mistakes along the way, which is why this is a draft pull request. This first draft is also somewhat incomplete.
I figured I'd create the PR early to get some feedback along the way though. I'm especially unfamiliar with the part about converting arrays to CUDA etc., so some help/collaboration there is much appreciated.
CC @ali-ramadhan @glwagner @navidcy @francispoulin