🐛 Describe the bug

I've encountered an issue with the code in the vllm project on GitHub. At line 170 of vllm/entrypoints/llm.py (https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/llm.py#L170), multi_modal_data.data is unconditionally converted to torch.float16 via multi_modal_data.data = multi_modal_data.data.to(torch.float16). This automatic conversion may not be suitable for all use cases, especially when the model is designed to run in bfloat16 or another numerical precision. If this is an issue that needs to be addressed, I would be happy to submit a pull request with a fix.
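For illustration, a dtype-aware conversion might look like the sketch below. This is not the actual vllm code path; `convert_multi_modal_data` and the idea of passing in the model's configured dtype are assumptions about how a fix could work, not the project's API:

```python
import torch

def convert_multi_modal_data(data: torch.Tensor, model_dtype: torch.dtype) -> torch.Tensor:
    """Hypothetical helper: cast multi-modal tensor data to the model's dtype.

    Unlike a hard-coded .to(torch.float16), this respects models that run in
    bfloat16, float32, etc. Non-floating tensors (e.g. integer pixel values
    or token IDs) are left untouched.
    """
    if data.is_floating_point() and data.dtype != model_dtype:
        return data.to(model_dtype)
    return data

# Example: a bfloat16 model keeps bfloat16 inputs instead of being
# silently downcast to float16.
pixels = torch.rand(1, 3, 224, 224, dtype=torch.bfloat16)
assert convert_multi_modal_data(pixels, torch.bfloat16).dtype == torch.bfloat16
```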
I have refactored the processing logic in #3978, which removes the float16 assumption. Could you suggest a test case to determine whether the conversion problem still exists?
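As a starting point, one could parametrize over floating dtypes and assert that multi-modal data keeps its dtype on the way to the model. The sketch below exercises the hypothetical helper from above rather than the real vllm processing path, which would need to be wired in; the test name and shapes are illustrative only:

```python
import pytest
import torch

def convert_multi_modal_data(data: torch.Tensor, model_dtype: torch.dtype) -> torch.Tensor:
    # Same hypothetical dtype-aware helper as in the sketch above.
    return data.to(model_dtype) if data.is_floating_point() else data

@pytest.mark.parametrize("dtype", [torch.float16, torch.bfloat16, torch.float32])
def test_multi_modal_dtype_is_preserved(dtype):
    # A path that hard-codes .to(torch.float16) fails this for bfloat16 and
    # float32 inputs; a dtype-aware path passes for all three.
    data = torch.rand(1, 3, 224, 224, dtype=dtype)
    assert convert_multi_modal_data(data, model_dtype=dtype).dtype == dtype
```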