From fae772f89b6e3924eec2204f563898fd272484c1 Mon Sep 17 00:00:00 2001 From: Helena Kloosterman Date: Wed, 21 Jul 2021 20:39:15 +0200 Subject: [PATCH] Add note about using benchmark_app in notebook to 104 (#178) * Add note about using benchmark_app in notebook to 104 * Update notebooks/104-model-tools/104-model-tools.ipynb Co-authored-by: Ryan Loney --- notebooks/104-model-tools/104-model-tools.ipynb | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/notebooks/104-model-tools/104-model-tools.ipynb b/notebooks/104-model-tools/104-model-tools.ipynb index cf78282edd7..8cbc1aceb89 100644 --- a/notebooks/104-model-tools/104-model-tools.ipynb +++ b/notebooks/104-model-tools/104-model-tools.ipynb @@ -317,15 +317,17 @@ "\n", "The following cells show some examples of `benchmark_app` with different parameters. Some useful parameters are:\n", "\n", - "- `-d` Device to use for inference. For example: CPU, GPU, MULTI\n", - "- `-t` Time in number of seconds to run inference\n", - "- `-api` Use asynchronous (async) or synchronous (sync) inference\n", - "- `-b` Batch size\n", + "- `-d` Device to use for inference. For example: CPU, GPU, MULTI. Default: CPU\n", + "- `-t` Time in number of seconds to run inference. Default: 60\n", + "- `-api` Use asynchronous (async) or synchronous (sync) inference. Default: async\n", + "- `-b` Batch size. Default: 1\n", "\n", "\n", "Run `! benchmark_app --help` to get an overview of all possible command line parameters.\n", "\n", - "In the next cell, we define a `benchmark_model()` function that calls `benchmark_app`. This makes it easy to try different combinations. In the cell below that, we display the available devices on the system." + "In the next cell, we define a `benchmark_model()` function that calls `benchmark_app`. This makes it easy to try different combinations. 
In the cell below that, we display the available devices on the system.\n", "\n", "> **NOTE**: In this notebook, we run `benchmark_app` for 15 seconds to give a quick indication of performance. For more accurate performance measurements, we recommend running inference for at least one minute by setting the `-t` parameter to 60 or higher, and running `benchmark_app` in a terminal/command prompt after closing other applications. You can copy the _benchmark command_ and paste it into a command prompt where you have activated the `openvino_env` environment. " ] }, {
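The `benchmark_model()` wrapper mentioned in the patched cell is not shown in this diff; the sketch below illustrates what such a wrapper might look like. The function name `benchmark_command`, its defaults, and the `model.xml` path are illustrative assumptions, not the notebook's actual implementation.

```python
# Hypothetical sketch of a wrapper that assembles a benchmark_app command
# line from the parameters discussed above (-d, -t, -api, -b).
# All names and defaults here are assumptions for illustration only.
def benchmark_command(model_path, device="CPU", seconds=15, api="async", batch=1):
    """Build a benchmark_app command string for a given model and options."""
    return (
        f"benchmark_app -m {model_path} -d {device} "
        f"-api {api} -t {seconds} -b {batch}"
    )

# The resulting string can be run in a notebook cell with `!`, or copied
# into a terminal where the openvino_env environment is activated.
print(benchmark_command("model.xml", device="GPU", seconds=15))
```

Keeping the wrapper a pure string builder makes it easy to print the command for copy-pasting into a terminal, which is exactly what the note above recommends for accurate measurements.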