[Bug]: Producer process has been terminated before all shared CUDA tensors released (v 0.5.0 post1, v 0.4.3) #6025
Comments
Can you follow https://docs.vllm.ai/en/latest/getting_started/debugging.html to figure out what is happening here?
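For reference, the debugging guide linked above recommends enabling more verbose logging before launching vLLM. A minimal sketch of that setup in Python, assuming the environment variables documented in the guide (VLLM_LOGGING_LEVEL, CUDA_LAUNCH_BLOCKING, NCCL_DEBUG, VLLM_TRACE_FUNCTION) and a placeholder model name:

```python
import os

# Debug settings from the vLLM debugging guide; set them before importing vllm.
os.environ["VLLM_LOGGING_LEVEL"] = "DEBUG"   # verbose vLLM logging
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"     # synchronous CUDA calls for clearer tracebacks
os.environ["NCCL_DEBUG"] = "TRACE"           # trace NCCL communication (multi-GPU setups)
os.environ["VLLM_TRACE_FUNCTION"] = "1"      # log every vLLM function call (very verbose)

from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # placeholder model for illustration
```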
@youkaichao I am seeing the same log message as well. What is the general recommendation to remedy it if it's a critical issue? In my case, the program runs fine despite that message.
Then it's just a warning you can ignore.
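If the shutdown message is bothersome even as a warning, explicitly releasing the engine before the interpreter exits usually avoids it, since the shared CUDA tensors are then freed while the producer process is still alive. A minimal sketch, assuming a plain offline LLM object; the teardown order here is an assumption, not an official vLLM API contract:

```python
import gc

import torch
from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # placeholder model
# ... run inference ...

# Drop the engine (and its worker processes) before interpreter shutdown.
del llm
gc.collect()
torch.cuda.empty_cache()
```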
Same problem with llm = LLM(...); getting the same message on an instance with 2 Nvidia L4 GPUs.
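For context, a two-GPU setup like the L4 pair above would typically be driven through tensor parallelism. A minimal reproduction sketch; the tensor_parallel_size=2 setting and the model name are assumptions, since the arguments in the comment above were truncated:

```python
from vllm import LLM, SamplingParams

# Hypothetical reconstruction of the truncated snippet above:
# two L4 GPUs suggest tensor_parallel_size=2; the model name is a placeholder.
llm = LLM(
    model="facebook/opt-125m",  # placeholder; the original model is unknown
    tensor_parallel_size=2,     # shard the model across the 2 GPUs
)

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```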
I upgraded my vLLM after reading #9774 and it fixed this issue, although it still crashes for another reason.
Your current environment
Docker Image: vllm/vllm-openai:v0.4.3, as well as v0.5.0.post1
Params:
🐛 Describe the bug
The container freezes (does nothing) after emitting the exception in the log (the message quoted in the issue title).