[Feature]: Benchmark script with speculative decode metrics #7586
Comments
@cadedaniel I'm wondering if you can provide me with some feedback and suggestions. I'd be glad to contribute.
The idea is good and a contribution here is welcome. My primary concern is the latency overhead from metrics collection, i.e. the additional logic required to parse the acceptance rate into per-sequence acceptance info. Suggestion (either/or):
BTW, a similar discussion is happening here: #7522
Also, you can enable the metrics to be printed in benchmark_latency via:

```diff
diff --git a/benchmarks/benchmark_latency.py b/benchmarks/benchmark_latency.py
index 97afd301c8f..0ee2bfabb82 100644
--- a/benchmarks/benchmark_latency.py
+++ b/benchmarks/benchmark_latency.py
@@ -47,6 +47,7 @@ def main(args: argparse.Namespace):
         distributed_executor_backend=args.distributed_executor_backend,
         otlp_traces_endpoint=args.otlp_traces_endpoint,
         enable_prefix_caching=args.enable_prefix_caching,
+        disable_log_stats=False,
     )
     sampling_params = SamplingParams(
```
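For an offline check, something like the following should surface the same periodic spec-decode metrics log line. This is only a minimal sketch, assuming the EngineArgs-style keyword arguments that LLM accepted around v0.5.x (speculative_model, num_speculative_tokens, ngram_prompt_lookup_max, use_v2_block_manager); the model name is just a placeholder.

```python
# Minimal offline sketch: keep stats logging enabled (disable_log_stats=False)
# so the engine periodically prints its spec-decode metrics (acceptance rate,
# system efficiency, etc.) while generating.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder target model
    speculative_model="[ngram]",                  # or a small draft model checkpoint
    num_speculative_tokens=5,
    ngram_prompt_lookup_max=4,
    use_v2_block_manager=True,                    # needed for spec decode on 0.5.x
    disable_log_stats=False,                      # the same switch as the diff above
)

sampling_params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Summarize the following paragraph: ..."], sampling_params)
print(outputs[0].outputs[0].text)
```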
Hi, I'm evaluating speculative decoding and I'm not able to get any gain from it. I tested opt 2.7B / llama 3.1 8B / llama 3 8B with the following server configurations:
- Using a draft model
- Using n-gram
- Speculative decoding disabled

The overall behavior is that the least performant approach is with the draft model, then n-gram, and the best case is with speculative decoding off. These results are with an A100-40GB and vLLM 0.5.3.post1. Is there any guide on the best configuration or scenarios where we can get the most out of this feature? Thanks!
Hey! Thanks for the interest. What's the draft model you are using for opt 2.7B / llama 3.1 8B / llama 3 8B? ngram is normally good for document QA or summarization; it's not good for online chatting. The perf of SD is workload-, model-, and hardware-dependent.
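To illustrate the workload point, here is a toy sketch of prompt-lookup (ngram) speculation; it is not vLLM's ngram worker, just the underlying idea: proposals are copied from the prompt wherever the last few generated tokens match, so they only get accepted when the output re-uses spans of the input, as in extractive QA or summarization.

```python
# Toy prompt-lookup proposer: find the most recent occurrence of the last
# `n` generated tokens inside the prompt and propose the tokens that follow.
# Purely illustrative; vLLM's ngram worker is more involved.
from typing import List


def propose_from_prompt(prompt: List[int], generated: List[int],
                        n: int = 3, k: int = 5) -> List[int]:
    """Return up to k proposed token ids by matching the last n generated tokens."""
    if len(generated) < n:
        return []
    pattern = generated[-n:]
    # Scan the prompt right-to-left for the most recent match.
    for start in range(len(prompt) - n, -1, -1):
        if prompt[start:start + n] == pattern:
            return prompt[start + n:start + n + k]
    return []  # no match -> nothing to speculate, fall back to normal decoding


# Output that copies the prompt (e.g. extractive QA) gets useful proposals;
# free-form chat usually does not match and yields empty proposals.
prompt_ids = [10, 11, 12, 13, 14, 15, 16]
print(propose_from_prompt(prompt_ids, generated=[9, 12, 13, 14]))  # -> [15, 16]
print(propose_from_prompt(prompt_ids, generated=[1, 2, 3, 4]))     # -> []
```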
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
🚀 The feature, motivation and pitch
I am looking to assess the performance of vllm for speculative decode, but I have been unable to find an offline benchmark script similar to benchmark_latency.py that would allow me to test speculative decode performance. While I can use benchmark_latency.py to obtain e2e latency, it does not provide all of the spec-decode metrics such as the time spent on scoring, verifying, and proposing, as well as the acceptance rate.
Thanks to @cadedaniel's excellent contributions such as #6963 and #3103, we are now able to display spec-decode metrics, including scoring time, verification time, proposal time, and acceptance rate, in the server logging.
However, these metrics can only be viewed in online server logs and are implemented through an asynchronous collector, which could result in inaccuracies. I am considering adding a script called 'benchmark_spec_decode.py' for spec-decode benchmarking in order to capture more spec-decode-related metrics.
Proposal
- Add a new field spec_decode_metrics to RequestMetrics (vllm/vllm/sequence.py, lines 87 to 112 in 9587b05), holding a SpecDecodeWorkerMetrics (vllm/vllm/spec_decode/metrics.py, line 13 in 9587b05), to expose more metrics related to spec-decode.
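To make the proposal concrete, here is a rough sketch of the shape it could take and of what a benchmark_spec_decode.py might aggregate per run. All class and field names below are hypothetical stand-ins, not vLLM's actual RequestMetrics / SpecDecodeWorkerMetrics definitions.

```python
# Hypothetical sketch of the proposal: attach per-request spec-decode metrics
# to the request-level metrics object and aggregate them in a benchmark script.
# These dataclasses only mirror the idea; they are not vLLM's definitions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SpecDecodeMetricsSnapshot:
    """Per-request snapshot (hypothetical field names)."""
    draft_tokens: int           # tokens proposed by the draft/ngram proposer
    accepted_tokens: int        # proposals accepted by the target model
    proposal_time_s: float      # time spent generating proposals
    scoring_time_s: float       # time spent scoring proposals
    verification_time_s: float  # time spent in rejection sampling / verification


@dataclass
class RequestMetricsWithSpec:
    """RequestMetrics extended with an optional spec-decode field, as proposed."""
    e2e_latency_s: float
    spec_decode_metrics: Optional[SpecDecodeMetricsSnapshot] = None


def summarize(requests: List[RequestMetricsWithSpec]) -> None:
    """Aggregate what a benchmark_spec_decode.py might report for one run."""
    if not requests:
        return
    snaps = [r.spec_decode_metrics for r in requests if r.spec_decode_metrics]
    print(f"mean e2e latency: "
          f"{sum(r.e2e_latency_s for r in requests) / len(requests):.3f}s")
    drafted = sum(s.draft_tokens for s in snaps)
    accepted = sum(s.accepted_tokens for s in snaps)
    if drafted:
        print(f"draft acceptance rate:  {accepted / drafted:.2%}")
        print(f"mean proposal time:     {sum(s.proposal_time_s for s in snaps) / len(snaps):.4f}s")
        print(f"mean scoring time:      {sum(s.scoring_time_s for s in snaps) / len(snaps):.4f}s")
        print(f"mean verification time: {sum(s.verification_time_s for s in snaps) / len(snaps):.4f}s")
```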
Alternatives
No response
Additional context
No response