diff --git a/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst b/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst
index 558ae0378d7625..67f702848b043c 100644
--- a/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst
+++ b/docs/articles_en/learn-openvino/openvino-samples/benchmark-tool.rst
@@ -14,10 +14,7 @@ This page demonstrates how to use the Benchmark Tool to estimate deep learning i
 
 .. note::
 
-   The Python version is recommended for benchmarking models that will be used
-   in Python applications, and the C++ version is recommended for benchmarking
-   models that will be used in C++ applications. Both tools have a similar
-   command interface and backend.
+   Use either the Python or the C++ version, depending on the language of your application.
 
 
 Basic Usage
@@ -226,8 +223,12 @@ should be used purposefully. For more information, see the
 
 .. note::
 
-   If the latency or throughput hint is set, it will automatically configure streams
-   and batch sizes for optimal performance based on the specified device.)
+   * If either the latency or the throughput hint is set, streams, batch sizes, and the
+     number of parallel infer requests are configured automatically for optimal performance on the specified device.
+
+   * Optionally, you can specify the number of parallel infer requests with the ``-nireq``
+     option. Setting a high value may improve throughput at the expense
+     of latency, while a low value may have the opposite effect.
 
 Number of iterations
 ++++++++++++++++++++