Rebuild for CUDA 12 #1079
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
@conda-forge-admin, please re-render
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do. This message was generated by GitHub Actions workflow run https://github.com/conda-forge/arrow-cpp-feedstock/actions/runs/5171072276.
@conda-forge-admin, please re-render
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do. This message was generated by GitHub Actions workflow run https://github.com/conda-forge/arrow-cpp-feedstock/actions/runs/5171125185.
This logic may also need to be limited to CUDA 11 or earlier (arrow-cpp-feedstock/recipe/build-arrow.sh, lines 30 to 48 at 5079e41).
Here's how another maintainer approached their case. Edit: it's also possible this workaround is no longer needed and so can simply be dropped.
Feel free to update this. AFAIU, we only need to build with one CUDA version anyway, so we can just remove those workarounds.
Thanks for the PR! Pleasantly surprised to see this passing[1] right off the bat - great work!
Aside from removing the workarounds (which would be great), I just have a comment about the run-constraints.
Footnotes
[1] The aarch+CUDA failure is "Skip some more crashy tests on aarch+CUDA" (#1081), and the Windows failure is infra flakiness.
@conda-forge-admin, please re-render
Hmm... it looks like the CUDA builds are being skipped on CI altogether?
Also re-rendering removed …
Makes a few changes that will generalize better over CUDA 11 / 12, like using `cuda-version` and referring to the `compiler("cuda")` package directly (since there are differences in what is in its `run_exports`).
Only the CUDA driver version matters for Arrow CUDA components.
Co-authored-by: h-vetinari <[email protected]>
Co-authored-by: Keith Kraus <[email protected]>
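For illustration, a minimal sketch of how a recipe might express this with `cuda-version`, `{{ compiler("cuda") }}`, and a driver-version run constraint; the exact pins and the `run_constrained` entry below are assumptions for the example, not this feedstock's actual meta.yaml:

```yaml
# Hypothetical sketch only; not the actual arrow-cpp recipe.
requirements:
  build:
    - {{ compiler("cuda") }}                     # its run_exports differ between CUDA 11 and CUDA 12
  host:
    - cuda-version {{ cuda_compiler_version }}   # assumed pin via the cuda-version metapackage
  run_constrained:
    - __cuda >={{ cuda_compiler_version }}       # assumed constraint: only the driver version matters at runtime
```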
This looks for `$CUDA_HOME` when the `nvcc` wrapper script is installed, by searching for `cuda-gdb`, which is in the CUDA 11 Docker images. However, this isn't available on CUDA 12 (unless `cuda-gdb` is installed). That said, this logic was added to the `nvcc` wrapper script a while ago, so it can just be dropped here in favor of relying on the `nvcc` wrapper script to set this correctly.
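For context, a rough sketch of the sort of detection being described; the variable names and paths are assumptions for illustration, not the actual lines 30 to 48 of build-arrow.sh:

```bash
# Hypothetical sketch of the old detection logic; not the actual build-arrow.sh.
# Derive CUDA_HOME from the location of cuda-gdb, which ships in the CUDA 11
# Docker images but is not installed by default on CUDA 12.
if [[ -z "${CUDA_HOME:-}" ]]; then
    cuda_gdb="$(command -v cuda-gdb || true)"
    if [[ -n "${cuda_gdb}" ]]; then
        # e.g. /usr/local/cuda/bin/cuda-gdb -> /usr/local/cuda
        export CUDA_HOME="$(dirname "$(dirname "${cuda_gdb}")")"
    fi
fi
```

Since newer versions of the `nvcc` wrapper script set `CUDA_HOME` themselves, a guard like this can simply be removed rather than extended to CUDA 12.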
Force-pushed from 565a87b to 06c46c8 (re-rendered with conda-forge-pinning 2023.06.08.00.22.40).
I took you at your word on this, hope that's OK. :) Rather than asking you to do the rebasing work I wanted (to clean up the back-and-forth here, which messes up the git history), I did it myself. Let me know if you want to do it differently. (Sorry for the force-pushes.)
LGTM from my POV now!
Yep, that's OK. Thanks for taking the lead! 🙏 Would just suggest wrapping up the discussion in this thread (#1079 (comment)). Perhaps that is just acknowledging there that users need new enough Conda & Mamba versions for this to work.
@kkraus14, anything else or is this good to go?
EDIT: This is already happening, apologies.
All good. It is important that we get this right. So thanks for looking more closely and confirming 😄
Asking the bot to merge when it is done, since it seems we are good with the changes here. Do we want to start backporting this now, or are there other changes we want to combine this with on the older versions?
Aarch+CUDA is inexplicably broken on some tests. I'm debugging this in #1081.
Ah ok, no worries. Had tried restarting, but I guess the same issues came back.
Any thoughts on this question?
Yes, we want to backport. I usually do this for the maintenance branches whenever some relevant changes accumulate (or the next migrator drops by), but feel free to open PRs already if you want!
Submitted PRs to backport these changes to the respective version branches that appear to still be supported:
Thanks! It's a bit weird to me that the render commit is ordered before adding the migration, but whatever...
It's not just appearance. Policy is: support for the last 4 released Arrow versions.
Ah, was trying to have a re-render in the beginning that refreshed things before applying changes; re-rendering at the end was a no-op locally. Makes sense. Was just noting that in case I missed something.
While highly appreciated, there's no point in restarting the merge of this PR on main. There hasn't been a passing run on aarch+CUDA for ~20 runs across various PRs & merges, but I seem to finally have cracked which tests are responsible in #1081.
Hi! This is the friendly automated conda-forge-webservice.
I've started rerendering the recipe as instructed in #1078.
If I find any needed changes to the recipe, I'll push them to this PR shortly. Thank you for waiting!
Here's a checklist to do before merging.
Fixes #1080
Fixes #1078
Fixes #974
Fixes #942