Feature request: support ExLlama #296

ExLlama (https://github.com/turboderp/exllama) is currently the fastest and most memory-efficient executor of models that I'm aware of. Is there interest from the maintainers in adding this support?
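For readers new to the project, here is a minimal generation sketch, paraphrased from the example script in the exllama repository. Exact module paths, class names, and sampler settings may differ between versions, so treat it as illustrative rather than canonical; the model directory path is a placeholder.

```python
# Minimal ExLlama generation example, paraphrased from the repo's
# example_basic.py; module layout and setting names may vary by version.
import glob
import os

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_directory = "/models/llama-13b-4bit-128g/"  # placeholder path

# Locate the files ExLlama needs inside the model directory.
tokenizer_path = os.path.join(model_directory, "tokenizer.model")
model_config_path = os.path.join(model_directory, "config.json")
model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]

config = ExLlamaConfig(model_config_path)      # read config.json
config.model_path = model_path                 # point at the quantized weights
model = ExLlama(config)                        # load the weights onto the GPU
tokenizer = ExLlamaTokenizer(tokenizer_path)
cache = ExLlamaCache(model)                    # KV cache for inference
generator = ExLlamaGenerator(model, tokenizer, cache)

generator.settings.temperature = 0.95
generator.settings.top_p = 0.65

print(generator.generate_simple("Once upon a time,", max_new_tokens=200))
```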
Comments

How do you plan on adding batched support for ExLlama? I am very interested in your approach, as I am trying to work on that too.
ExLlamaV2 has overtaken ExLlama in quantization performance for most cases. I hope we can get it implemented in vLLM, because it is also an incredible quantization technique. Benchmarks across the major quantization techniques indicate ExLlamaV2 is the best of them all. Have there been any new developments since it was added to the roadmap?
Please, having ExLlamaV2 with paged attention and continuous batching would be a big win for the LLM world.
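Purely as a sketch of what the requested integration could look like from the user's side, reusing vLLM's existing `LLM`/`SamplingParams` API: the `quantization="exl2"` value and the checkpoint name below are hypothetical; no such backend existed in vLLM at the time of this thread.

```python
# Hypothetical sketch only: what EXL2 support might look like if vLLM exposed
# it the way it exposes its existing quantization backends (e.g. AWQ).
# The LLM and SamplingParams APIs are real; quantization="exl2" and the
# model name are assumptions for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="someuser/Mixtral-8x7B-exl2",  # hypothetical EXL2 checkpoint
    quantization="exl2",                 # hypothetical backend selector
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain paged attention in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

The appeal of the combination is that EXL2 would cut the weight footprint while paged attention and continuous batching keep throughput high under concurrent requests.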
Also looking forward to ExLlamaV2 support.
I was hoping this would be possible, too. I recently worked with the Mixtral-8x7B model: AWQ 4-bit had significant OOM/memory overhead compared to ExLlamaV2 in 4-bit, so I ended up just running the model in 8-bit with ExLlamaV2, since that turned out to be the best compromise between model capability and VRAM footprint. I can run it in 8-bit on 3x3090 and use the full 32k context with ExLlamaV2, but I need 4x3090 to even be able to load it in 16-bit with vLLM, and I hit OOM when I try to use the full context. So this would definitely be an amazing addition, giving more flexibility in VRAM resources.
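As a rough sanity check on those numbers, here is a back-of-the-envelope sketch. It assumes Mixtral-8x7B's roughly 46.7B total parameters (the figure from Mistral's release) and ignores KV cache, activations, and framework overhead:

```python
# Back-of-the-envelope weight-memory estimate for Mixtral-8x7B at several
# precisions. Assumes ~46.7e9 total parameters (all experts must be resident
# in VRAM even though only two are active per token); ignores KV cache,
# activations, and framework overhead.
PARAMS = 46.7e9  # approximate total parameter count of Mixtral-8x7B

def weight_gb(bits_per_weight: float) -> float:
    """Raw weight storage in GB at a given average bits per weight."""
    return PARAMS * bits_per_weight / 8 / 1e9

for label, bpw in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label:>5}: {weight_gb(bpw):5.1f} GB")

# FP16  ~93 GB: exceeds 3x3090 (72 GB); just loads on 4x3090 (96 GB), leaving
#               almost no room for a 32k-context KV cache -> OOM at full context.
# 8-bit ~47 GB: fits on 3x3090 with ~25 GB of headroom for a long-context
#               KV cache, matching the experience described above.
```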
+1
+1
+1
+1
+1
ExLlamaV2 support would be the biggest release for vLLM. +1
+1
+1
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!