
Use KernelAbstractions to accelerate MultilayerQG.streamfunctionfrompv! #112

Open
glwagner opened this issue Sep 15, 2020 · 23 comments
Labels: 🎮 gpu · 🚑 help wanted · optimization 🏎 · ❓ question

Comments

@glwagner (Member)

KernelAbstractions.jl can be used to accelerate the function

function streamfunctionfrompv!(ψh, qh, params, grid)
  for j=1:grid.nl, i=1:grid.nkr
    CUDA.@allowscalar @views ψh[i, j, :] .= params.invS[i, j] * qh[i, j, :]
  end
  return nothing
end

A simple example showing how to use KernelAbstractions is the "Naive Transpose":

https://juliagpu.gitlab.io/KernelAbstractions.jl/examples/naive_transpose/
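The operation this loop performs, an independent small matrix-vector product at every horizontal wavenumber (i, j), can be sketched as a NumPy reference (array shapes and names here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical sizes: nkr x nl horizontal wavenumbers, nz layers
nkr, nl, nz = 4, 4, 3
rng = np.random.default_rng(0)

# invS[i, j] plays the role of params.invS[i, j]: an nz-by-nz matrix per wavenumber
invS = rng.standard_normal((nkr, nl, nz, nz))
qh = rng.standard_normal((nkr, nl, nz))
psih = np.empty_like(qh)

# The serial loop in streamfunctionfrompv!: one matrix-vector product per (i, j)
for i in range(nkr):
    for j in range(nl):
        psih[i, j, :] = invS[i, j] @ qh[i, j, :]

# Every (i, j) is independent, which is exactly the data parallelism a GPU
# kernel over a 2D index space can exploit; einsum does the same batched product
batched = np.einsum('ijkm,ijm->ijk', invS, qh)
assert np.allclose(psih, batched)
```

Because each (i, j) pair is independent, the loop maps naturally onto a 2D GPU index space.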

@glwagner (Member, Author) commented Sep 15, 2020

The first step is to write a kernel, which will look something like

@kernel function invert_column!(ψh, qh, S⁻¹)
    i, j = @index(Global, NTuple)
    @inbounds ψh[i, j] .= S⁻¹[i, j] * qh[i, j]
end

The next step is to create a work layout over which the kernel is launched. If we restrict attention to models that always have more than 32 grid points, we can use something like

# Larger workgroups are generally more efficient. For more generality, we could add an if statement
# with different behavior when either grid.nkr or grid.nl is less than 16
workgroup = 16, 16

# The size determines how many times the kernel is run
worksize = grid.nkr, grid.nl

# This (and its usage below) will ensure the kernel is not run _before_ the data in qh is available
barrier = Event(dev)

# Creates a loop over the specified worksize, using workgroup to organize the computation
loop_invert_column! = invert_column!(dev, workgroup, worksize)

# Launch the kernel
event = loop_invert_column!(ψh, qh, params.invS, dependencies=barrier)

# This will ensure that no other operations occur until the kernel has finished
wait(dev, event)

@glwagner (Member, Author) commented Sep 15, 2020

One thing I am not totally sure about is whether KernelAbstractions will compile away the matrix multiplication in `@inbounds ψh[i, j] .= S⁻¹[i, j] * qh[i, j]`. I think it will. If not, we may have to write our own unrolled loop.

@glwagner (Member, Author)

By the way, I think this optimization also requires the columns of ψh to be stored as StaticArrays. It looks like ψh is a 3D array right now. Other parts of the code may also have to be converted to kernels if this change is made, since broadcasting over the 3D array would no longer work.

@navidcy (Member) commented Sep 15, 2020

With this last suggestion would x, y FFTs work nicely?

@glwagner (Member, Author)

> With this last suggestion would x, y FFTs work nicely?

Oof, good point.

Hmm, maybe we need to hand-write the matrix matrix multiply then. Not sure.

@navidcy (Member) commented Sep 15, 2020

Yes, it keeps coming back to haunt us either way...
(I remember a similar discussion some months ago...)

@glwagner (Member, Author)

Something like

@kernel function invert_column!(ψh, qh, S⁻¹)
    i, j = @index(Global, NTuple)
    ψh_column = view(ψh, i, j, :)
    qh_column = view(qh, i, j, :)
    @inbounds ψh_column .= S⁻¹[i, j] * qh_column
end

might work.

@glwagner (Member, Author) commented Sep 16, 2020

Otherwise a kernel along the lines of

using KernelAbstractions.Extras.LoopInfo: @unroll

@kernel function invert_column!(ψh, qh, S⁻¹, nz)
    i, j = @index(Global, NTuple)

    @unroll for k = 1:nz
        @inbounds ψh[i, j, k] = 0

        @unroll for m = 1:nz
            @inbounds ψh[i, j, k] += S⁻¹[i, j][k, m] * qh[i, j, m]
        end
    end
end

might work, alternatively. Or maybe my indices are screwed up --- whichever is correct.

Nothing is too difficult, it's just a matter of trying it out.
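The index pattern is easy to sanity-check on the CPU. A NumPy reference with hypothetical sizes confirms that the double loop above computes the same thing as a per-(i, j) matrix-vector product:

```python
import numpy as np

# Hypothetical sizes, only to exercise the index pattern
nkr, nl, nz = 3, 3, 2
rng = np.random.default_rng(1)
Sinv = rng.standard_normal((nkr, nl, nz, nz))  # S⁻¹[i, j] as an nz-by-nz block
qh = rng.standard_normal((nkr, nl, nz))
psih = np.zeros((nkr, nl, nz))

# Same structure as the proposed kernel body, for one (i, j) pair at a time
for i in range(nkr):
    for j in range(nl):
        for k in range(nz):
            psih[i, j, k] = 0.0
            for m in range(nz):
                psih[i, j, k] += Sinv[i, j, k, m] * qh[i, j, m]

# Agrees with the direct matrix-vector product, so the k, m indices are in the right order
expected = np.einsum('ijkm,ijm->ijk', Sinv, qh)
assert np.allclose(psih, expected)
```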

@navidcy (Member) commented Dec 2, 2020

I should resurrect this..

@navidcy navidcy added 🎮 gpu optimization 🏎 ❓ question Further information is requested labels Dec 2, 2020
@navidcy navidcy added the 🚑 help wanted Extra attention is needed label Jan 28, 2021
@navidcy (Member) commented Mar 18, 2021

What about https://github.com/mcabbott/Tullio.jl to the rescue? (just a random thought)

@glwagner (Member, Author)

There's probably a lot of solutions! I think I gave two, but there might be more.

@mpudig (Collaborator) commented Jul 25, 2024

Hi @navidcy @glwagner – I am hoping to resurrect this issue. I'd like to run a significant number of high vertical resolution simulations and leveraging GPU capabilities would be hugely beneficial...

I am CUDA/GPU-literate but by no means fluent, and am trying to understand this thread and your thinking behind what a possible fix would be. For my own understanding, the main issue is the scalar indexing of S in the PV inversion step, right?

"""
pvfromstreamfunction!(qh, ψh, params, grid)
Obtain the Fourier transform of the PV from the streamfunction `ψh` in each layer using
`qh = params.S * ψh`.
"""
function pvfromstreamfunction!(qh, ψh, params, grid)
for j=1:grid.nl, i=1:grid.nkr
CUDA.@allowscalar @views qh[i, j, :] .= params.S[i, j] * ψh[i, j, :]
end
return nothing
end

It seems you've both thought of a few ways to rewrite the calculation of S and the PV-inversion matrix-vector multiply so that performance is optimized for GPUs. I'd be more than happy to have a go at changing the code and running the tests myself, but I might need a bit of guidance on which of the methods proposed in this thread would be a fruitful initial direction...!

Thanks
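For reference, the relationship between the two functions (pvfromstreamfunction! applies S, streamfunctionfrompv! applies its inverse) can be verified with a small NumPy sketch; the shapes and conditioning trick here are hypothetical:

```python
import numpy as np

# Hypothetical per-wavenumber operators; adding 3*I keeps the blocks well conditioned
nkr, nl, nz = 3, 3, 4
rng = np.random.default_rng(2)
S = rng.standard_normal((nkr, nl, nz, nz)) + 3.0 * np.eye(nz)
invS = np.linalg.inv(S)  # batched inverse, one nz-by-nz block per (i, j)

psih = rng.standard_normal((nkr, nl, nz))

# pvfromstreamfunction!: qh[i, j, :] = S[i, j] * ψh[i, j, :]
qh = np.einsum('ijkm,ijm->ijk', S, psih)

# streamfunctionfrompv! applies invS and should recover the streamfunction
psih_back = np.einsum('ijkm,ijm->ijk', invS, qh)
assert np.allclose(psih, psih_back)
```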

@glwagner (Member, Author)

The issue is the explicit loop over i, j. To utilize the GPU you have to write a kernel, hopefully using KernelAbstractions like we do in Oceananigans. Check out the docs for KernelAbstractions and write/run a few kernels for the GPU to test your skills. This is a fairly simple kernel so hopefully it will be fairly straightforward.

@glwagner (Member, Author)

Just want to emphasize that it is easy to learn KernelAbstractions. You can start by running the docs examples, which should take a few minutes, and then spend an hour or two learning to write your own. At that point you're ready to solve this problem and run high-res simulations.

@mpudig (Collaborator) commented Jul 25, 2024

Okay, awesome, thanks for the direction, Greg! I'll give this a go and hopefully will run into few difficulties. Famous last words...!

@glwagner (Member, Author)

(image: meme)

@navidcy (Member) commented Jul 25, 2024

you made this meme???

@glwagner (Member, Author)

yes

@navidcy (Member) commented Jul 26, 2024

It's a great one ;)

mpudig added a commit to mpudig/GeophysicalFlows.jl that referenced this issue Jul 26, 2024
@mpudig (Collaborator) commented Jul 29, 2024

I had a go at writing the kernel using KernelAbstractions.jl and modifying the MultiLayerQG.pvfromstreamfunction! call:

@kernel function pvfromstreamfunction_kernel!(qh, ψh, S, nlayers)
  i, j = @index(Global, NTuple)

  @unroll for k = 1:nlayers
    @inbounds qh[i, j, k] = 0

    @unroll for m = 1:nlayers
      @inbounds qh[i, j, k] += S[i, j][k, m] * ψh[i, j, m]
    end
  end
end

and

function pvfromstreamfunction!(qh, ψh, params, grid)
  # Larger workgroups are generally more efficient. For more generality, we could add an
  # if statement with different behavior when either grid.nkr or grid.nl is less than 8
  workgroup = 8, 8

  # The worksize determines how many times the kernel is run
  worksize = grid.nkr, grid.nl

  # Instantiates the kernel for relevant backend device
  backend = KernelAbstractions.get_backend(qh)
  kernel! = pvfromstreamfunction_kernel!(backend, workgroup, worksize)

  # Launch the kernel
  S, nlayers = params.S, params.nlayers
  kernel!(qh, ψh, S, nlayers)

  # This will ensure that no other operations occur until the kernel has finished
  KernelAbstractions.synchronize(backend)

  return nothing
end

I ran some simple benchmark tests on a GPU (rtx8000), a CPU with 16 threads and a CPU with 1 thread. For example, with nlayers = 3 and nx = 128 the improvement on the GPU is significant:

  • GPU (new code)
nlayers = 3; nx = 128; prob = MultiLayerQG.Problem(nlayers, GPU(); nx); @btime stepforward!(prob)
1.276 ms (2345 allocations: 179.81 KiB)
  • GPU (old code)
nlayers = 3; nx = 128; prob = MultiLayerQG.Problem(nlayers, GPU(); nx); @btime stepforward!(prob)
2.986 s (867442 allocations: 199.74 MiB)
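For scale, the ratio of the two timings above can be computed directly:

```python
# Timings copied from the benchmark above
old_code = 2.986      # seconds per step, old scalar-indexing code
new_code = 1.276e-3   # seconds per step, new KernelAbstractions kernel

speedup = old_code / new_code
assert speedup > 1000  # the ratio is roughly 2300x
```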

Something I was surprised by was that a multi-threaded CPU performed better than the GPU in some cases. For example, with nlayers = 12 and nx = 512:

  • GPU (new code)
nlayers = 12; nx = 512; prob = MultiLayerQG.Problem(nlayers, GPU(); nx); @btime stepforward!(prob)
1.179 s (2541 allocations: 191.94 KiB)
  • CPU with 16 threads (new code)
nlayers = 12; nx = 512; prob = MultiLayerQG.Problem(nlayers, CPU(); nx); @btime stepforward!(prob)
525.084 ms (49375 allocations: 8.28 MiB)

Does this surprise you?

In any case, the rewrite definitely accelerates MultiLayerQG.pvfromstreamfunction! on GPUs for arbitrary nlayers. Let me know if you would like me to open a PR to add the changes!

@mpudig (Collaborator) commented Oct 24, 2024

@navidcy @glwagner Just following up on the above in case you missed it. Let me know if you think I should open a PR to add this to the MultiLayerQG code.

@navidcy (Member) commented Oct 24, 2024

Open a PR I think! Yeah!

@glwagner (Member, Author)

> For example, with nlayers = 3 and nx = 128 the improvement on the GPU is significant:

The speedup is over 1000x, so "significant" is an understatement

> Does this surprise you?

Yes, if you can show the code or open a PR maybe we will spot something.
