🚀 The feature, motivation and pitch

There is huge potential in more advanced load-balancing strategies tailored to the unique characteristics of AI inference, compared to basic strategies such as round robin. The llm instance gateway is one such effort and is already demonstrating huge performance wins. vLLM can demonstrate leadership in this space by providing better integration with advanced LBs/gateways.
This doc captures the overall requirements for model servers to better support the llm instance gateway. Luckily, vLLM already has many features/metrics that enable more efficient load balancing, such as exposing the KVCacheUtilization metric.
This is a high-level breakdown of the feature requests:

Dynamic LoRA Load/unload

Load/cost reporting in metrics
Add num_tokens_running and num_tokens_waiting metrics. vLLM already has running and waiting request counts; exposing token-level metrics will further enhance the LB algorithms.
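As a rough illustration, here is a minimal sketch of what such token-level gauges could look like if exposed through prometheus_client. The metric names (vllm_num_tokens_running / vllm_num_tokens_waiting) and the update hook are assumptions for this sketch, not vLLM's actual metrics code.

```python
# Sketch only: token-level gauges alongside the existing request-level counts.
from prometheus_client import Gauge, start_http_server

# Hypothetical metric names mirroring the existing running/waiting request gauges.
NUM_TOKENS_RUNNING = Gauge(
    "vllm_num_tokens_running",
    "Total number of tokens across requests currently being processed.")
NUM_TOKENS_WAITING = Gauge(
    "vllm_num_tokens_waiting",
    "Total number of tokens across requests waiting in the queue.")

def record_token_counts(running_token_counts, waiting_token_counts):
    """Update the gauges from per-request token counts collected each scheduler step."""
    NUM_TOKENS_RUNNING.set(sum(running_token_counts))
    NUM_TOKENS_WAITING.set(sum(waiting_token_counts))

if __name__ == "__main__":
    start_http_server(9090)  # scrape endpoint for Prometheus / the LB
    record_token_counts([512, 128], [2048])
```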
Load/cost reporting in response headers in ORCA format
Open Request Cost Aggregation (ORCA) is a light-weight open protocol for reporting load/cost info to LBs and is already integrated with Envoy and gRPC.
This feature will be controlled by a new engine argument --orca_formats (default [], meaning ORCA is disabled; available values are one or more of [BIN, TEXT, JSON]). If the feature is enabled, vLLM will report the metrics defined in the doc as HTTP response headers in the OpenAI-compatible APIs.
Initial ORCA reporting feature integration (add helpers, add engine argument, plumb metrics source to API responses)
Add the required metrics; this can be broken down per metric
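To make the header reporting concrete, below is a hedged sketch of how an OpenAI-compatible endpoint could attach an ORCA-style load report to each response. The header name (endpoint-load-metrics), the TEXT/JSON encodings, and the metric keys are illustrative assumptions that would need to be checked against the ORCA spec; the --orca_formats plumbing is stubbed out as a module constant, and the handler is a stand-in rather than vLLM's actual server code.

```python
import json

from fastapi import FastAPI, Response

app = FastAPI()
ORCA_FORMAT = "TEXT"  # stand-in for a hypothetical --orca_formats engine argument


def current_load() -> dict:
    # Placeholder values; in vLLM these would come from the scheduler/metrics source.
    return {"kv_cache_utilization": 0.42, "num_requests_waiting": 3}


def orca_header(metrics: dict, fmt: str) -> str:
    # Illustrative encodings; the real wire format is defined by the ORCA spec.
    if fmt == "JSON":
        return json.dumps({"named_metrics": metrics})
    return ", ".join(f"named_metrics.{k}={v}" for k, v in metrics.items())


@app.post("/v1/completions")
async def completions(response: Response):
    result = {"choices": [{"text": "..."}]}  # normally the model output
    if ORCA_FORMAT:  # ORCA reporting disabled when no format is configured
        response.headers["endpoint-load-metrics"] = orca_header(
            current_load(), ORCA_FORMAT)
    return result
```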
Out-of-band load/cost reporting API in ORCA format
vLLM will expose a lightweight API to report the same metrics in ORCA format. This enables LBs to proactively probe the API and get real-time load information. This is a long-term vision; more details will be shared later.
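Along the same lines, here is a hedged sketch of the out-of-band variant: a tiny endpoint an LB could poll without issuing inference requests. The path (/orca/load_report) and the payload shape are assumptions; the idea is simply that it reuses the same metrics source as the response headers.

```python
from fastapi import FastAPI

app = FastAPI()


def current_load() -> dict:
    # Placeholder values; in vLLM these would come from the same metrics source
    # that feeds the per-response ORCA headers.
    return {"kv_cache_utilization": 0.42, "num_tokens_waiting": 2048}


@app.get("/orca/load_report")  # hypothetical path, not an existing vLLM route
async def load_report():
    # JSON rendering of an ORCA-style load report; TEXT/BIN variants could sit
    # behind the same hypothetical --orca_formats switch.
    return {"named_metrics": current_load()}
```

An LB could then poll this endpoint (e.g. GET /orca/load_report) on its own schedule to get near-real-time load information.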
cc @simon-mo

Alternatives

No response

Additional context

No response

Before submitting a new issue...

Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
This sounds great! In general, in vLLM we want to ensure we are compatible with open formats for load balancing and observability. This helps people actually run vLLM in production.
As long as the overhead in the default case is minimal, I'm in full support.