
[Performance]: Clarification on Base Model Inference Count with Multiple LoRA Models in vLLM Deployment #8228

Closed
zhangyuqi-1 opened this issue Sep 6, 2024 · 4 comments
Labels
performance Performance-related issues

Comments

@zhangyuqi-1

Proposal to improve performance

No response

Report of performance regression

No response

Misc discussion on performance

Question:

When deploying LoRA with vLLM, suppose I have 1000 different LoRA models, and each LoRA receives a separate request with a different input. In this scenario, how many times does the base model actually perform inference? Is it only once, or does it perform 1000 inferences?

I understand that the LoRA part will run 1000 times, but its computational cost is relatively small. I'm mainly concerned with how many times the base model runs inference in this case. If the base model runs only once, that would be very efficient: its cost would be amortized across all LoRA requests, so overall throughput would scale well as the number of LoRA models grows. Is this possible?
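For context, here is my rough back-of-envelope (the sizes below are hypothetical, not from any real deployment) for why I call the LoRA part cheap relative to the base model:

```python
# Back-of-envelope only; d and r are hypothetical example sizes.
d = 4096   # hidden size of one d x d base-model projection
r = 16     # LoRA rank

base_flops = 2 * d * d      # y = W x           : ~2*d^2 FLOPs per token
lora_flops = 2 * d * r * 2  # y += B @ (A @ x)  : two thin matmuls, ~4*d*r FLOPs

print(f"LoRA overhead per projection: {lora_flops / base_flops:.2%}")  # ~0.78%
```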

Your current environment (if you think it is necessary)

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@zhangyuqi-1 zhangyuqi-1 added the performance Performance-related issues label Sep 6, 2024
@jeejeelee
Collaborator

> When deploying LoRA with vLLM, suppose I have 1000 different LoRA models, and each LoRA receives a separate request with a different input. In this scenario, how many times does the base model actually perform inference? Is it only once, or does it perform 1000 inferences?

Only once. If you want to delve deeper, see: #1804
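To add some intuition: the scheduler can place requests for many different adapters into one batch. The dense base-model matmuls then run once over the whole batch, and only the small low-rank correction is indexed per request. A minimal sketch of the idea (illustrative only, not vLLM's actual code):

```python
import torch

d, r, n_adapters, batch = 4096, 16, 1000, 8   # hypothetical sizes

W = torch.randn(d, d)              # shared base weight
A = torch.randn(n_adapters, r, d)  # stacked per-adapter LoRA A matrices
B = torch.randn(n_adapters, d, r)  # stacked per-adapter LoRA B matrices

x = torch.randn(batch, d)                          # one token per request
adapter_ids = torch.randint(n_adapters, (batch,))  # which LoRA each request uses

y = x @ W.T                 # base model: ONE matmul for the whole mixed batch
Ax = torch.einsum("bd,brd->br", x, A[adapter_ids])   # per-request A x   -> (batch, r)
y += torch.einsum("br,bdr->bd", Ax, B[adapter_ids])  # per-request B(Ax) -> (batch, d)
```

The indexing `A[adapter_ids]` here materializes per-request weight copies; Punica-style kernels avoid that by fusing the gather into the matmul, but the semantics are the same.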

@zhangyuqi-1
Author

That's fascinating! Does it batch requests for different LoRA models into a single batch, so the base model performs inference only once? I'm curious how this is achieved.

@jeejeelee
Collaborator

> That's fascinating! Does it batch requests for different LoRA models into a single batch, so the base model performs inference only once? I'm curious how this is achieved.

See the Punica paper or its blog.
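In case it helps: the core primitive Punica introduces, BGMV (batched gather matrix-vector multiply), has very simple semantics. A naive reference loop (for clarity only; the paper fuses this gather-plus-matvec into a single CUDA kernel) would look like:

```python
import torch

def bgmv_reference(y: torch.Tensor, x: torch.Tensor,
                   w_all: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # y: (batch, out), x: (batch, in), w_all: (n_adapters, out, in)
    # Each request i is multiplied by the weight of ITS OWN adapter.
    for i in range(x.shape[0]):
        y[i] += w_all[indices[i]] @ x[i]
    return y
```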

@zhangyuqi-1
Author

> > That's fascinating! Does it batch requests for different LoRA models into a single batch, so the base model performs inference only once? I'm curious how this is achieved.
>
> See the Punica paper or its blog.

thanks!
