[RFC]: Reimplement and separate beam search on top of vLLM core #8306
Comments
This is a good goal to work toward, as ensuring that API interfaces (OpenAI, beam search, or otherwise) can efficiently and reliably schedule new sequences benefits all consumers.
The flood of vLLM notifications is hard to keep up with, so I may be out of date. My understanding was that prefix caching was not precise and was block-based, resulting in some amount of excess computation. Is there an issue to allow APIs to specify the "prefix length" that should be cached? This new approach could see performance degrade when the sequence length approaches a multiple of the KV block length, if each arm of the beam search schedules a new sequence and must prefill O(kv_block_size) tokens in addition to decoding O(1) tokens. Ideally both would be O(1), with a hint to allow beam search to cache the entire prefix.
we can set block size to 1 for the vLLM instance when we use beam search; then we don't have to waste any computation.
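For concreteness, a minimal configuration sketch along these lines (parameter names follow vLLM's `EngineArgs`; whether a given attention backend actually supports very small block sizes is an assumption that needs to be measured, as discussed above):

```python
from vllm import LLM

# Illustrative setup for beam-search-heavy workloads: a small KV block size plus
# prefix caching, so each new beam arm re-prefills at most a few tokens instead
# of O(kv_block_size) tokens. block_size=1 may not be supported by every
# attention backend, so this shows 8 as a conservative example.
llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # example model
    block_size=8,                # smaller blocks -> finer-grained prefix reuse
    enable_prefix_caching=True,  # cache the shared beam prefix across requests
)
```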
There are also alternative implementations of this, e.g. moving this functionality to a special class of Worker or Executor, which can be configured when beam search is turned on for any engine that needs it.
@youkaichao How well does the KV cache handle a block size of 1, in terms of compute or memory overhead?
@AaronFriel I don't think setting the block size to 1 will affect performance a lot, but we need to test and measure the impact.
@simon-mo can you explain more? What special functions / interfaces would these new Worker or Executor classes need?
Hello, thanks for your work. I am not sure whether I should create a new issue for this; technically these are still comments! I have done some manual testing of your new beam search implementation, and here are some observations, together with a late RFC response:
The tests were run on Llama 3.1 8B AWQ / RTX 3090 / three basic completion prompts like "Today is a good day". I can provide more details if it is of any use.
@nFunctor it's great to hear that you find the new implementation faster! We do have a plan to add beam search back in the OpenAI server, with a similar implementation. Regarding exact equivalence with the old implementation, we cannot guarantee that generating 500 tokens gives exactly the same output as the Hugging Face implementation (or the old one). As long as the algorithm still follows beam search, it should be fine, and we have checked that the first 64 tokens are the same. That should be enough for practical usage.
Thanks for your response @youkaichao. What do you think about the temperature implementation? The new method can still be slower if generation is done with fewer beams and fewer tokens. As you say, the block_size parameter is what is holding the method back. I tried activating FlashInfer with block_size 8 and it indeed gave some speedup (12s -> 10s in one experiment). What I found, as a byproduct, is that the mentioned Llama becomes completely insane on FlashInfer with the old beam method, repeating the same phrase (I am running an instruct-tuned model outside of its chat template format, but still). So, with regard to what you said about the results being similar, maybe the new results will actually be more "numerically stable" in some cases.
beam search is a search algorithm. I don't see how it is related to temperature.
glad to hear that.
@youkaichao If I understand correctly, we don't change the logic of the algorithm, but since the next-token distribution changes (e.g. with temperature), so might the results.
sorry, I don't get it. what is your ask?
@youkaichao hi, it's great to see your design. Does it support multi-batch beam search or not? I mean at the op level, not a loop over the prompts list.
what is multi-batch beam-search? |
@youkaichao The beam search docs for …
I see you mentioned that beam search conflicts with the sampling algorithm. However, logits processors (currently an argument of the constructor of the …)
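For context, in the pre-existing `SamplingParams` API a logits processor is a callable that receives the generated token ids and the next-token logits and returns modified logits. A minimal sketch (the processor itself is illustrative, and how such processors would interact with the new beam-search wrapper is exactly the open question here):

```python
import torch
from vllm import SamplingParams

def eos_length_bonus(token_ids: list[int], logits: torch.Tensor) -> torch.Tensor:
    # Illustrative only: nudge the EOS logit upward as the sequence grows,
    # emulating a simple length penalty without touching beam-search logic.
    eos_token_id = 2  # placeholder; use tokenizer.eos_token_id in practice
    logits[eos_token_id] += 0.01 * len(token_ids)
    return logits

params = SamplingParams(temperature=0.0, logits_processors=[eos_length_bonus])
```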
Inference in a batch, say 16?
How can we enable beam search with the vLLM OpenAI server? It seems like there is no engine arg available for it.
As mentioned in #9253, the current implementation does not stop generating when the EOS token is encountered, and continues until it reaches the maximum token limit. This appears to be the major issue.
@HeegonJin yes, and I tried my workaround in #9264 (we will see if the team approves). As an external contributor I lack a deeper understanding of what's going on, but it seems to me that a beam gets completed yet never gets pushed to the completed beams, due to insufficient execution of the …
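For readers following along, the behavior being discussed amounts to retiring a beam once it emits EOS instead of extending it further; a simplified sketch of that logic (hypothetical helpers, not the actual patch in #9264):

```python
def step_beams(beams, completed, topk_next, eos_token_id):
    """One beam-search step: extend each beam and retire beams that hit EOS.

    topk_next(beam) stands in for the engine call returning (token_id, logprob)
    candidates; beam.extend(...) is a hypothetical helper.
    """
    new_beams = []
    for beam in beams:
        for token_id, logprob in topk_next(beam):
            candidate = beam.extend(token_id, logprob)
            if token_id == eos_token_id:
                completed.append(candidate)  # finished: do not grow it further
            else:
                new_beams.append(candidate)
    # keep only the best continuations by cumulative logprob
    new_beams.sort(key=lambda b: b.cum_logprob, reverse=True)
    return new_beams[: len(beams)]
```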
we plan to improve beam search so that all the sampling parameters work.
to all: I added a feature request for a more powerful beam search (as it was in the old vllm) here #10754 |
Motivation.
A rework of #6226
After discussing further with the community, we find that the common use case for beam search is: …
After discussing with many contributors, we find:
- Because beam search is a search algorithm, it conflicts with all the other sampling algorithms. As a result, many features in vLLM already directly assert that beam search is not used, e.g.:
  - vllm/vllm/spec_decode/batch_expansion.py, lines 303 to 305 (at 6e36f4f)
  - vllm/vllm/engine/output_processor/multi_step.py, lines 100 to 103 (at 6e36f4f)
- Keeping beam search as-is in the codebase will not benefit current beam search users, as no optimization will target better beam search performance. What's worse, very few developers understand beam search. Keeping it as-is will not only increase the number of beam search bugs as the codebase evolves, but also increase the maintenance cost for all contributors.
In search of a win-win solution, on behalf of the vLLM team, I propose to separate and reimplement beam search on top of the vLLM core code.
To be specific, we can implement an `LLM.beam_search` interface that calls the engine to generate one token with logprobs every step, and maintain the beam-search logic only in the `LLM.beam_search` function.
From the initial discussion, one concern is the efficiency of such an implementation, as requests will come and go again and again from the vLLM core's perspective. This should be solvable in two ways: …
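As a concrete illustration of the proposal, here is a sketch of how such a wrapper could look when layered on the public `LLM.generate` API (hypothetical function, not the final interface; EOS handling, length penalties, and stop conditions are omitted):

```python
from vllm import LLM, SamplingParams

def beam_search_sketch(llm: LLM, prompt_token_ids: list[int],
                       beam_width: int, max_tokens: int):
    """Beam search maintained entirely outside the vLLM core.

    Every step issues one-token generation requests with logprobs for each
    live beam, so the scheduler never needs to know about beams or forking.
    """
    beams = [(list(prompt_token_ids), 0.0)]  # (token_ids, cumulative logprob)
    step_params = SamplingParams(max_tokens=1, logprobs=2 * beam_width,
                                 temperature=0.0)
    for _ in range(max_tokens):
        prompts = [{"prompt_token_ids": tokens} for tokens, _ in beams]
        outputs = llm.generate(prompts, step_params)
        candidates = []
        for (tokens, score), out in zip(beams, outputs):
            # logprobs[0] maps candidate token ids to Logprob objects for this step
            for token_id, lp in out.outputs[0].logprobs[0].items():
                candidates.append((tokens + [token_id], score + lp.logprob))
        # keep the best beam_width continuations
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams
```

With prefix caching enabled (and ideally a small block size, per the discussion above), the repeated one-token requests mostly hit cached KV blocks, which is what keeps this layered approach efficient.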
vLLM is a community project, and we'd like not only to seek opinions from beam-search users, but also to seek contributions from them. Your help is truly needed to shape the future of beam-search support in vLLM.
Proposed Change.
Summary of the change: implement beam search on top of the vLLM core and add wrappers for users; remove beam search from the vLLM core (scheduler).
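From the user's perspective, the wrapper might then be invoked roughly as follows (hypothetical call signature; the final interface may differ):

```python
from vllm import LLM

llm = LLM(model="meta-llama/Meta-Llama-3.1-8B-Instruct", enable_prefix_caching=True)
# Hypothetical wrapper corresponding to the proposed LLM.beam_search interface.
outputs = llm.beam_search(["Today is a good day"], beam_width=4, max_tokens=64)
for out in outputs:
    print(out)
```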
Feedback Period.
1 week, from 9/9 to 9/15 (both inclusive)
CC List.
@hrsmanian @zhouyuan @lanking520 @nightflight-dk @HeegonJin @SemMulder @darabos @DhruvaBansal00 @tmostak @physicsrob @YooSungHyun @denadai2 @sjmielke @Reichenbachian @AaronFriel @hinnefe2 @mflaxman10
@WoosukKwon @zhuohan123 @simon-mo
Any Other Things.
No response