[Bugfix] Disable the post_norm layer of the vision encoder for LLaVA models #9653
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can trigger it using one of the available options. 🚀
Does this issue occur for the other LLaVA models as well?
Other LLaVA models can still use the post_norm layer.
Let's update the other models then, since from a quick look at the HF implementation, they don't use the post_norm layer either.
OK, all LLaVA models don't use the post_norm layer now.
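For illustration, a minimal sketch of what making the post-norm optional could look like; the class and argument names here (`VisionTowerSketch`, `require_post_norm`, `feature_layer`) are hypothetical and not the actual vLLM API:

```python
import torch
import torch.nn as nn


class VisionTowerSketch(nn.Module):
    """Hypothetical vision encoder with an optional post-norm layer."""

    def __init__(self, hidden_size: int, num_layers: int, num_heads: int,
                 require_post_norm: bool = True) -> None:
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
            for _ in range(num_layers)
        )
        # LLaVA consumes an intermediate hidden state, so the final norm is
        # dead weight; skip building it when it is not required.
        self.post_layernorm = (
            nn.LayerNorm(hidden_size) if require_post_norm else nn.Identity()
        )

    def forward(self, x: torch.Tensor, feature_layer: int = -2) -> torch.Tensor:
        hidden_states = [x]
        for layer in self.layers:
            x = layer(x)
            hidden_states.append(x)
        # The feature layer is selected before the post-norm would run,
        # which is why disabling the layer does not change the output.
        return hidden_states[feature_layer]
```

With `require_post_norm=False`, the norm's weights are never allocated or loaded, which mirrors the intent of this PR.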
Thanks for fixing!
Disable the post_norm layer of the vision encoder for LLaVA-Onevision models, based on #9217.
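As a sanity check, here is a small self-contained snippet (using the HF transformers CLIP classes with a tiny, randomly initialized config whose sizes are purely illustrative) showing why dropping the post-norm is safe: LLaVA-style feature extraction reads an intermediate hidden state that is recorded before `post_layernorm` runs, so replacing the layer with `nn.Identity` leaves the selected features unchanged.

```python
import torch
import torch.nn as nn
from transformers import CLIPVisionConfig, CLIPVisionModel

# Tiny random CLIP vision tower so the check runs without downloading weights.
config = CLIPVisionConfig(hidden_size=64, intermediate_size=128,
                          num_hidden_layers=4, num_attention_heads=4,
                          image_size=32, patch_size=8)
model = CLIPVisionModel(config).eval()

pixel_values = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    out = model(pixel_values=pixel_values, output_hidden_states=True)
# LLaVA-style feature selection: an intermediate hidden state (e.g. -2),
# which is captured before post_layernorm is applied.
features_before = out.hidden_states[-2]

# Disable the post-norm and run again: the selected features do not change.
model.vision_model.post_layernorm = nn.Identity()
with torch.no_grad():
    out = model(pixel_values=pixel_values, output_hidden_states=True)
features_after = out.hidden_states[-2]

assert torch.equal(features_before, features_after)
```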