[CUDA] drop CUDA 10 support, start supporting CUDA 12 (fixes #5789) #6099
Conversation
@@ -25,7 +25,7 @@ if [ $PY_MINOR_VER -gt 7 ]; then
     pydistcheck \
         --inspect \
         --ignore 'compiled-objects-have-debug-symbols,distro-too-large-compressed' \
-        --max-allowed-size-uncompressed '60M' \
+        --max-allowed-size-uncompressed '70M' \
Now that we generate code for 2 more CUDA architectures, the Linux CUDA wheels are around 63 MB uncompressed.
==================== running pydistcheck ====================
checking '/LightGBM/dist/lightgbm-4.1.0.99-py3-none-manylinux_2_27_x86_64.whl'
----- package inspection summary -----
file size
* compressed size: 40.6M
* uncompressed size: 63.3M
* compression space saving: 35.9%
contents
* directories: 0
* files: 17 (1 compiled)
size by extension
* .so - 64423.5K (99.3%)
* .py - 404.7K (0.6%)
* no-extension - 21.5K (0.0%)
* .txt - 0.0K (0.0%)
* .typed - 0.0K (0.0%)
Build link: https://github.com/microsoft/LightGBM/actions/runs/6186545556/job/16794443363?pr=6099
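As a quick sanity check on the summary above, the reported percentages can be recomputed from the raw sizes. The numbers are copied from the pydistcheck log; this snippet is illustrative only and not part of the CI job:

```python
# Recompute the percentages from the pydistcheck summary above.
# Sizes are copied from the log output; rounding matches its 1-decimal display.
compressed_mb = 40.6
uncompressed_mb = 63.3

# "compression space saving" = 1 - (compressed / uncompressed)
saving = 1.0 - compressed_mb / uncompressed_mb
print(f"compression space saving: {saving:.1%}")  # -> 35.9%

# share of the uncompressed size taken by the compiled .so
sizes_k = {".so": 64423.5, ".py": 404.7, "no-extension": 21.5, ".txt": 0.0, ".typed": 0.0}
so_share = sizes_k[".so"] / sum(sizes_k.values())
print(f".so share: {so_share:.1%}")  # -> 99.3%
```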
We could start dropping older architectures at some point if we think that's a problem, but I don't think that should even be considered until we start possibly distributing precompiled CUDA wheels via a repository like PyPI.
Until then, I think building for more architectures to support a wider range of GPUs is desirable, and a few more MB on disk for the wheel isn't a big concern.
I'd specifically like a review from @shiyu1994 before we merge this one, to be sure I'm not missing something. I don't know a lot about CUDA 😅
@jameslamb Thanks for working on this! This is very helpful. We definitely need to support CUDA 12. I'll review this in the next few days.
Sure, no problem, take your time!
Thanks @shiyu1994!
This pull request has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this one.
Fixes #5789.
Notes for Reviewers
CUDA 12.2.2 is the latest release of the CUDA toolkit (https://developer.nvidia.com/cuda-toolkit-archive), but the latest official Docker image is 12.2.0 (https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda), so I made the new job here use 12.2.0.
I think it's important that the `method: source` CI job be tested against the latest version of CUDA, since as of now installing from source is the only way users can work with a CUDA-enabled version of the Python package.
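For context, the source-install path referred to above looks roughly like the following sketch. The `build-python.sh` script and its `--cuda` flag are taken from the LightGBM repository as of this PR's timeframe; verify against the current installation docs before relying on them, and note that a CUDA toolkit and compatible compiler must already be installed:

```shell
# Sketch of installing the CUDA-enabled Python package from source.
# Assumes the CUDA toolkit, CMake, and a compatible C++ compiler are present.
git clone --recursive https://github.com/microsoft/LightGBM.git
cd LightGBM
sh ./build-python.sh install --cuda
```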