
Check for _is_cuda() in compute_num_jobs #3481

Merged (1 commit) Mar 20, 2024

Conversation

bnellnm (Contributor) commented Mar 18, 2024

Check _is_cuda() before trying to determine number of nvcc threads in compute_num_jobs.

This will avoid an error where CUDA_HOME is not defined on machines without CUDA/nvcc installed, e.g. AMD.
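The guard described above can be sketched as follows. This is a simplified stand-in, not vLLM's actual setup.py: the `cuda_version`/`is_neuron` parameters and the `NVCC_THREADS` environment fallback are hypothetical placeholders for the real checks.

```python
import os


def _is_cuda(cuda_version, is_neuron=False):
    # Stand-in for vLLM's check: CUDA is available and this is not
    # a Neuron build.
    return cuda_version is not None and not is_neuron


def compute_num_jobs(cuda_version=None, is_neuron=False, cpu_count=8):
    # Only consult nvcc thread settings when CUDA is actually present,
    # so ROCm/AMD machines (where CUDA_HOME is unset) skip this path
    # entirely instead of raising an error.
    num_jobs = cpu_count
    if _is_cuda(cuda_version, is_neuron):
        nvcc_threads = int(os.environ.get("NVCC_THREADS", "1"))
        num_jobs = max(1, num_jobs // nvcc_threads)
    return num_jobs
```

The key point is the order of checks: the nvcc-thread lookup is never reached unless the CUDA predicate passes first.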

hliuca (Contributor) commented Mar 18, 2024

The _is_cuda check isn't correct... it returns True on HIP. The old implementation should be restored.

hliuca (Contributor) commented Mar 18, 2024

Old:

    def _is_cuda() -> bool:
        return (torch.version.cuda is not None) and not _is_neuron()

New:

    def _is_cuda() -> bool:
        return torch.version.cuda is not None

The new version is wrong.

simon-mo (Collaborator) commented

@hliuca is the suggested fix adding back `and not _is_neuron()`? I'm confused how this is related to AMD.

hliuca (Contributor) commented Mar 18, 2024

False alarm... please ignore. Thank you.

jamestwhedbee (Contributor) commented

I have run into this too; do we expect this to merge before v0.3.4 is released?

hliuca (Contributor) commented Mar 20, 2024

As this blocks ROCm/HIP, this PR should be merged as soon as possible. Thank you :-)

simon-mo merged commit ba8ae1d into vllm-project:main on Mar 20, 2024
31 checks passed
hliuca (Contributor) commented Mar 21, 2024

_is_cuda() has been restored in #3537 :-)

    def _is_cuda() -> bool:
    -   return torch.version.cuda is not None
    +   return torch.version.cuda is not None and not _is_neuron()

bnellnm deleted the fix-no-cuda branch on July 9, 2024
Temirulan pushed a commit to Temirulan/vllm-whisper that referenced this pull request Sep 6, 2024