
Refine _cuda_device_count logic #222

Merged
merged 4 commits into main from fix/cuda on Mar 6, 2024
Conversation

drcege
Collaborator

@drcege drcege commented Feb 29, 2024

  • When torch is installed, we can utilize torch.cuda.device_count() to respect CUDA_VISIBLE_DEVICES.
  • Otherwise, manually parsing CUDA_VISIBLE_DEVICES is non-trivial and error-prone, so it is ignored for now (see the sketch after this list).
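
A minimal sketch of the logic the two bullets describe, assuming a standalone helper and an `nvidia-smi`-based fallback when torch is absent; the actual implementation in the merged commit may differ:

```python
import subprocess


def _cuda_device_count():
    """Count usable CUDA devices, preferring torch when it is installed."""
    try:
        import torch
        # torch.cuda.device_count() already respects CUDA_VISIBLE_DEVICES,
        # so no manual parsing of the variable is needed.
        return torch.cuda.device_count()
    except ImportError:
        pass

    # Without torch, count physical GPUs via nvidia-smi and ignore
    # CUDA_VISIBLE_DEVICES rather than trying to parse it by hand.
    try:
        output = subprocess.check_output(['nvidia-smi', '--list-gpus'],
                                         text=True)
        return len(output.strip().splitlines())
    except (FileNotFoundError, subprocess.CalledProcessError):
        return 0
```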

@drcege drcege added the bug Something isn't working label Feb 29, 2024
@drcege drcege self-assigned this Feb 29, 2024
@drcege drcege linked an issue Feb 29, 2024 that may be closed by this pull request

@zhijianma zhijianma left a comment

LGTM

@drcege drcege merged commit e2238e5 into main Mar 6, 2024
5 checks passed
@drcege drcege deleted the fix/cuda branch March 11, 2024 07:37
Labels
bug Something isn't working
Development

Successfully merging this pull request may close these issues:

[MM] speed up OPs using hf models (clip, ...)