[BugFix] Avoid initializing CUDA too early #3487
Conversation
Care is taken in the code to avoid initializing CUDA prior to CUDA_VISIBLE_DEVICES being set in the worker, but an instance of this was inadvertently introduced in vllm-project#2569.
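To illustrate the failure mode the description refers to, here is a minimal, hypothetical sketch (the function names are invented for this example and are not vLLM's): once a process has touched the CUDA runtime, e.g. via `torch.cuda.is_available()`, later changes to `CUDA_VISIBLE_DEVICES` are generally not picked up, so a worker that sets the variable afterwards can end up seeing the wrong set of GPUs.

```python
import os
import torch

# Hypothetical driver/worker split, for illustration only; names are invented.
def build_device_config() -> dict:
    # Probing the GPU here can initialize CUDA state in this process before
    # any worker has had a chance to set CUDA_VISIBLE_DEVICES.
    return {"device": "cuda" if torch.cuda.is_available() else "cpu"}

def worker_main(rank: int, config: dict) -> None:
    # The worker wants to restrict itself to a single GPU *before* CUDA is
    # initialized. If CUDA was already initialized above, this restriction is
    # generally ignored by the already-initialized runtime.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(rank)
    torch.cuda.set_device(0)  # expects device 0 to be the rank-th physical GPU
```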
Actually, is it possible to somehow validate DeviceConfig inside the worker, after we have set CUDA_VISIBLE_DEVICES?
Can we use something like the following?

```python
def _is_cuda() -> bool:
    return torch.version.cuda is not None
```

This should not initialize the CUDA context either. It is not safe to assume CUDA if it is not neuron.
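As a quick side-by-side of the two checks being discussed (a standalone sketch, not vLLM code): `torch.version.cuda` only reports how PyTorch was built and does not touch the driver, while `torch.cuda.is_available()` actually probes for usable devices and, depending on the PyTorch version, may initialize CUDA state in the process.

```python
import torch

def is_cuda_build() -> bool:
    # True for any CUDA build of PyTorch, even on a machine with no GPU and
    # regardless of CUDA_VISIBLE_DEVICES; reading the version string does not
    # create a CUDA context.
    return torch.version.cuda is not None

def is_cuda_usable() -> bool:
    # Probes the driver for usable devices; depending on the PyTorch version
    # this may initialize CUDA state in the current process, which is what
    # the worker needs to avoid before CUDA_VISIBLE_DEVICES is set.
    return torch.cuda.is_available()

if __name__ == "__main__":
    print("built with CUDA:", is_cuda_build())
    print("CUDA usable here:", is_cuda_usable())
```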
@youkaichao I assume this just indicates whether a CUDA build of PyTorch is in use, and so it would always return true here.

I'm not sure what "safe" means here. If CUDA/a GPU isn't found then the server will fail to start either way; checking here would just make it fail slightly earlier with a nicer message.

@Yard1 I'm not sure of an easy way to do that without nontrivial restructuring. Here all I'm doing is reverting something introduced when the neuron changes were added. We could contemplate that further as a separate improvement?
Ok, sounds good, no blockers from my side.
I'm not familiar with neuron. When we use neuron, is …
Also @njhill, do you happen to know how this DeviceConfig was initialized before forking happens in the CI? I wonder if there's a way to restructure the CI to avoid the same problem.
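For reference, the CI symptom this alludes to can be reproduced outside vLLM with a short, hypothetical script (it requires a machine with a CUDA GPU): once the parent process has initialized CUDA, a forked child that tries to use the GPU fails with PyTorch's "Cannot re-initialize CUDA in forked subprocess" error.

```python
import multiprocessing as mp
import torch

def child() -> None:
    # With CUDA already initialized in the parent, this raises
    # "RuntimeError: Cannot re-initialize CUDA in forked subprocess ...".
    print(torch.zeros(1, device="cuda"))

if __name__ == "__main__":
    torch.zeros(1, device="cuda")    # parent initializes CUDA before forking
    ctx = mp.get_context("fork")     # fork is the default start method on Linux
    p = ctx.Process(target=child)
    p.start()
    p.join()
```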
LGTM! I think this is a good temporary fix. Should be changed if we would like to use CPU in the future.
Let me know when this is ready to be merged!
@rkooo567 I was just speculating that this bug might be the cause of that, but if you're referring to the 19 failing tests in …

@zhuohan123 from my pov it's ready to be merged, thanks!