
[Feature]: Enhance integration with advanced LB/gateways with better load/cost reporting and LoRA management #10086

Open
4 of 7 tasks
liu-cong opened this issue Nov 6, 2024 · 3 comments

Comments


liu-cong commented Nov 6, 2024

🚀 The feature, motivation and pitch

There is huge potential in more advanced load-balancing strategies tailored to the unique characteristics of AI inference, compared to basic strategies such as round robin. The llm instance gateway is one such effort and is already demonstrating large performance wins. vLLM can demonstrate leadership in this space by providing better integration with advanced LBs/gateways.

This doc captures the overall requirements for model servers to better support the llm instance gateway. Luckily, vLLM already has many features/metrics that enable more efficient load balancing, such as exposing the KVCacheUtilization metric.
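For context, one way to observe that signal today is to scrape the server's Prometheus endpoint. Below is a minimal sketch, assuming the gauge is exposed as vllm:gpu_cache_usage_perc; the exact metric name may differ across vLLM versions:

```python
# Minimal sketch: read KV-cache utilization from a running vLLM server's
# Prometheus /metrics endpoint. The gauge name below is an assumption
# (vllm:gpu_cache_usage_perc); check your vLLM version for the exact name.
import requests
from prometheus_client.parser import text_string_to_metric_families

resp = requests.get("http://localhost:8000/metrics", timeout=5)
for family in text_string_to_metric_families(resp.text):
    if family.name == "vllm:gpu_cache_usage_perc":
        for sample in family.samples:
            print(f"KV cache utilization: {sample.value:.2%}")
```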

This is a high-level breakdown of the feature requests:

Dynamic LoRA Load/unload

Load/cost reporting in metrics

Load/cost reporting in response headers in ORCA format

Open Request Cost Aggregation (ORCA) is a lightweight open protocol for reporting load/cost information to LBs and is already integrated with Envoy and gRPC.

This feature will be controlled by a new engine argument --orca_formats (default [], meaning ORCA is disabled; available values are one or more of [BIN, TEXT, JSON]). If the feature is enabled, vLLM will report the metrics defined in the doc as HTTP response headers in the OpenAI-compatible APIs; a rough sketch of what such a header could look like follows the task list below.

  • Initial ORCA reporting feature integration (add helpers, add engine argument, plumb metrics source to API responses)
  • Add the required metrics; this can be broken down per metric
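As a hedged illustration of the per-response reporting referenced above, the sketch below renders a couple of load metrics into an ORCA-style response header. The header name (endpoint-load-metrics, as used by Envoy's ORCA support) and the metric field names are assumptions here, not the final vLLM design:

```python
# Sketch of per-response ORCA reporting, assuming the Envoy ORCA header name
# (endpoint-load-metrics) and the TEXT/JSON formats listed above; the actual
# header and field names vLLM adopts may differ.
import json


def build_orca_headers(kv_cache_util: float, waiting_requests: int,
                       formats: list[str]) -> dict[str, str]:
    """Render current load metrics into an ORCA-style response header."""
    metrics = {
        "kv_cache_utilization": kv_cache_util,
        "num_requests_waiting": waiting_requests,
    }
    headers = {}
    if "TEXT" in formats:
        headers["endpoint-load-metrics"] = "TEXT " + ", ".join(
            f"{k}={v}" for k, v in metrics.items())
    elif "JSON" in formats:
        headers["endpoint-load-metrics"] = "JSON " + json.dumps(metrics)
    return headers


# Example: headers attached to an OpenAI-compatible completion response.
print(build_orca_headers(0.42, 3, formats=["TEXT"]))
# {'endpoint-load-metrics': 'TEXT kv_cache_utilization=0.42, num_requests_waiting=3'}
```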

Out-of-band load/cost reporting API in ORCA format

vLLM will expose a lightweight API to report the same metrics in ORCA format. This enables LBs to proactively probe the API and get real-time load information. This is a long-term vision; more details will be shared later.
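Purely as an illustration of how a gateway could consume such an API, here is a minimal polling sketch; the /load path and the response shape are hypothetical, since the actual API is still to be designed:

```python
# Hypothetical sketch: an LB/gateway proactively probing each vLLM replica's
# out-of-band load endpoint. The "/load" path and JSON payload are assumptions;
# the real API design is left for a later proposal.
import requests


def probe_load(endpoints: list[str]) -> dict[str, dict]:
    """Collect ORCA-style load reports from each replica."""
    reports = {}
    for url in endpoints:
        reports[url] = requests.get(f"{url}/load", timeout=1).json()
    return reports


# A gateway could then route to the replica with, e.g., the lowest
# kv_cache_utilization instead of using plain round robin.
# probe_load(["http://replica-0:8000", "http://replica-1:8000"])
```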

cc @simon-mo

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

ahg-g commented Nov 12, 2024

/cc

simon-mo (Collaborator) commented

This sounds great! In general, in vLLM we want to ensure we are compatible with open formats for load balancing and observability. This helps people actually run vLLM in production.

As long as the overhead in the default case is minimal, I'm in full support.

coolkp (Contributor) commented Nov 18, 2024

/cc
