
Add GPU support for LAMMPS_jll #61

Closed
jmeziere opened this issue Aug 4, 2024 · 3 comments
@jmeziere
Contributor

jmeziere commented Aug 4, 2024

Some of my simulations would benefit from a LAMMPS library built with CUDA support. I was thinking of something like build_tarballs.txt. Of course, this would require adding CUDA.jl as a dependency.

@vchuravy
Member

vchuravy commented Aug 5, 2024

CUDA software has always been a bit challenging to support in a cross compilation environment like Yggdrasil/BinaryBuilder and my assumption was that folks would bring their own lammps installation for more complex scenarios.

This wouldn't add a CUDA.jl dependency, just a dependency to the hill to try and detect CUDA support.

@jmeziere
Contributor Author

jmeziere commented Aug 5, 2024

I see. Having my own LAMMPS installation was my first go-to as well, but it requires the CUDA toolkit to be on LD_LIBRARY_PATH or in the rpath of the executable. Unfortunately, CUDA.jl warns when it finds a local toolkit it was not expecting, and I was hoping to keep everything self-contained.
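For the bring-your-own-LAMMPS route, the toolkit location can be supplied either at run time or at link time. A minimal sketch, assuming the toolkit lives at /usr/local/cuda (a hypothetical path; adjust to your install):

```shell
# Hypothetical toolkit location; substitute your own.
CUDA_LIB=/usr/local/cuda/lib64

# Option 1: extend the runtime search path before launching Julia/LAMMPS.
export LD_LIBRARY_PATH="$CUDA_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Option 2: bake an rpath into liblammps when building it yourself, e.g.
#   cmake ../cmake -D PKG_GPU=on -D GPU_API=cuda \
#         -D CMAKE_SHARED_LINKER_FLAGS="-Wl,-rpath,$CUDA_LIB"

echo "$LD_LIBRARY_PATH"
```

Option 2 avoids polluting the environment of every process, at the cost of rebuilding if the toolkit moves.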

I think, though, that I could probably add something like a cuLAMMPS_jll and link it myself. That way, you guys wouldn't have to worry about it.

The build I was basing my build_tarballs file off of was the cufinufft_jll one. Perhaps they have a good way of handling the difficulties in a cross compilation environment? But this is my first foray into this type of thing, so I'm probably ignorant of much of the complexity.

On a side note, what does adding a dependency to the hill mean?

@vchuravy
Member

vchuravy commented Aug 6, 2024

Do open a PR on Yggdrasil to see how hard CUDA support is. If it just works I am fine with it.

You can tell CUDA.jl to use a local toolkit as well.
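Pointing CUDA.jl at a system toolkit is a one-time preference; a sketch, assuming a recent CUDA.jl (v4 or later) is in the active environment:

```shell
# Writes a LocalPreferences.toml entry so CUDA.jl uses the locally
# installed CUDA toolkit instead of downloading an artifact.
julia -e 'using CUDA; CUDA.set_runtime_version!(local_toolkit=true)'
```

Restart Julia afterwards for the preference to take effect.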
