Optional cudatoolkit dependency #14
Comments
So one option would be to add […]. Though there are some interesting questions that come out of this. First, what CUDA version should that package use? Second, how does one ensure it picks up the CUDA libraries once installed? Third, how do you warn/error when a CUDA version mismatch between the package and the system occurs? Fourth, as a community we have decided to require […]. It's also worth replacing the word […]. Hope that helps 🙂
Let me describe how our current build system works, how we handle these issues, and the potential problems that might arise with the approach currently used in conda-forge.

Right now we create builds for many different versions of CUDA: 7.5, 8.0, 9.0, 9.1, 9.2, 10.0, and 10.1. Users are expected to have installed the toolkit and driver already, and to have made sure they're in the library path for runtime linking. On HPC clusters the administrator will have taken care of this, so users just execute […]. Users can select which build they want using a label, for example […].

Now let's consider the conda-forge approach. For this we just pick one CUDA version, build against it, and have the toolkit installed automatically. For end-user computers that's very convenient, since they don't have to worry about downloading a toolkit and setting up paths. For cluster users it's less clearly a benefit: it just saves them putting one extra […].

One other complication: we have multiple computational backends, including CUDA, OpenCL, and CPU. All of them are normally included in all packages, and OpenMM figures out at runtime which ones are actually available based on the installed software and hardware. So we certainly don't want to make people download a large CUDA toolkit if they have an AMD GPU. But I'm also nervous about creating a package that doesn't include the CUDA libraries, since it gives people an extra way to make a mistake and end up with a package that has inferior support for their hardware.

So here are the goals we want to achieve: […]
The first one is essential. The second is a "nice to have".
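For context, the label-based workflow described above looks roughly like the following. This is an illustrative sketch only: the channel name and label are assumptions, not quoted from the thread.

```shell
# Illustrative only: select the OpenMM build compiled against a specific
# CUDA version by installing from a channel label (channel and label
# names here are assumptions, not taken from this thread).
conda install -c omnia/label/cuda92 openmm
```

The label acts as a manual variant selector: the user must know their system's CUDA version and pick the matching label themselves.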
@peastman, I think some things got lost here. The way conda-forge proposes to do the builds does support multiple CUDA versions. So 1 is already handled. 2 is not.
How does the user specify which version they want?
How do you mean? In a recipe performing the build? Or when installing the packages?
If you mean at the recipe level, @jaimergp's PR ( conda-forge/openmm-feedstock#1 ) should give you multiple builds against different CUDA versions. Alternatively, if you mean installing the packages, the user should run […].
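Concretely, the selection step being discussed would look something like this. The version number below is a placeholder for illustration, not a value taken from the thread:

```shell
# Pin cudatoolkit to the version already present on the system; the
# conda solver then selects the openmm build variant built against it.
# (Version number is illustrative.)
conda install -c conda-forge openmm cudatoolkit=10.1
```

Because each build variant is constrained to a particular `cudatoolkit` version, pinning the toolkit is enough to steer the solver to the matching build.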
So if the user manually installs a particular toolkit version, then installs OpenMM, it will get the build for that CUDA version? It won't just download a new toolkit and then install the default OpenMM build?
Right. As the […].
Did you have any more questions about this @peastman?
Hello! First, thanks for all the effort made towards having GPU-enabled builds in the conda-forge ecosystem. We are very excited about being able to provide our packages here now!
Currently, we are building packages for several CUDA versions, using a label for each one. We expect users to select the label that matches their CUDA installation.
Moving to conda-forge will mean that users won't need to worry about having a CUDA installation to begin with, or about selecting the appropriate version, because `cudatoolkit` is listed as a dependency. This is nice and a great step forward in usability.

However, there might be some users who would like to stick to the old behavior: "just give me the package and I will handle CUDA". This might be the case for, e.g., HPC sysadmins who prefer to manage a single system-wide CUDA installation because that's what works best for them. Some people might also not want to download several hundred MB if they already have CUDA on their systems.
My question is: how can we provide a GPU-enabled package (that is, we would still need `nvcc` and `cudatoolkit` at build time) that does not list `cudatoolkit` as a runtime dependency? Is there any way to override the `run_exports.strong` configuration?

Pinging @jchodera and @peastman so they can follow this as well.
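For reference, conda-build does provide an escape hatch for exactly this situation: a consuming recipe can list `ignore_run_exports` under its `build:` section to drop a dependency that would otherwise be injected by an upstream package's `run_exports`. A minimal sketch of such a recipe fragment (package layout is illustrative, not the actual openmm-feedstock recipe):

```yaml
# Illustrative meta.yaml fragment: build against cudatoolkit, but do not
# let its run_exports add cudatoolkit as a runtime dependency.
build:
  ignore_run_exports:
    - cudatoolkit

requirements:
  build:
    - {{ compiler('cuda') }}
  host:
    - cudatoolkit
```

Whether this is appropriate for a conda-forge feedstock is a policy question rather than a technical one, since the ecosystem relies on `run_exports` to keep ABI constraints consistent.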