[ROCm] support Radeon™ 7900 series (gfx1100) without using flash-attention #2768
Conversation
The current head of vLLM still cannot compile successfully on ROCm. See issues #2725 and #2646.
I am working on fixing the build now.
LGTM! Thanks for the fix!
Force-pushed from 1ef7c67 to a660ad7
Have you tried this repo from AMD? Part of
Thanks @zhuohan123. I have updated the branch and resolved the conflict.
- [ROCm] Fix build problem resulted from previous commit related to FP8 kv-cache support (vllm-project#2790)
- Add documentation on how to do incremental builds (vllm-project#2796)
- [Ray] Integration compiled DAG off by default (vllm-project#2471)
- Disable custom all reduce by default (vllm-project#2808)
- add usage context
- removed usage_context from Engine_args
- Move IO to another process
- added http request
- [ROCm] support Radeon™ 7900 series (gfx1100) without using flash-attention (vllm-project#2768)
- Add documentation section about LoRA (vllm-project#2834)
- Refactor 2 awq gemm kernels into m16nXk32 (vllm-project#2723) Co-authored-by: Chunan Zeng <[email protected]>
- Added additional arg for from_engine_args
- comments
This pull request adds vLLM support for the AMD Radeon™ 7900 series GPUs (gfx1100) without using flash-attention.
Flash-attention does not currently fully support gfx1100, so we use the vLLM reference attention implementation instead.
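To make that fallback concrete, here is a minimal sketch of the pattern described above: prefer flash-attention when it imports, and otherwise fall back to a naive attention written in plain PyTorch. The import guard and the `ref_attention` helper are illustrative, not vLLM's actual module layout.

```python
import torch

# Illustrative guard: flash-attention is not fully supported on gfx1100,
# so treat it as optional and fall back to a reference implementation.
try:
    import flash_attn  # noqa: F401
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def ref_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  scale: float) -> torch.Tensor:
    # Naive scaled dot-product attention.
    # q, k, v: [batch, num_heads, seq_len, head_dim]
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, v)
```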
Note: when building the Docker image, pass `--build-arg BUILD_FA="0"` to the `docker build` command.
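For reference, a complete invocation might look like the sketch below; the `Dockerfile.rocm` file name and the `vllm-rocm` image tag are assumptions, so adjust them to match your checkout:

```sh
# Build the ROCm image without flash-attention (required for gfx1100).
docker build --build-arg BUILD_FA="0" -f Dockerfile.rocm -t vllm-rocm .
```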