diff --git a/docs/HOWTO/Custom_Layers_Guide.md b/docs/HOWTO/Custom_Layers_Guide.md
index 0cacca13451ad7..8037e6e95a29ee 100644
--- a/docs/HOWTO/Custom_Layers_Guide.md
+++ b/docs/HOWTO/Custom_Layers_Guide.md
@@ -369,7 +369,6 @@ python3 mri_reconstruction_demo.py \
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
## Converting Models:
diff --git a/docs/IE_DG/API_Changes.md b/docs/IE_DG/API_Changes.md
index 41681e58d8a3ad..c23c427e6edf38 100644
--- a/docs/IE_DG/API_Changes.md
+++ b/docs/IE_DG/API_Changes.md
@@ -156,7 +156,7 @@ The sections below contain detailed list of changes made to the Inference Engine
### Deprecated API
- **Myriad Plugin API:**
+ **MYRIAD Plugin API:**
* VPU_CONFIG_KEY(IGNORE_IR_STATISTIC)
diff --git a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
index 0999679ae0caa2..b6728e65bc402d 100644
--- a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
+++ b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
@@ -24,11 +24,11 @@ The `ngraph::onnx_import::Node` class represents a node in ONNX model. It provid
New operator registration must happen before an ONNX model is read. For example, if an ONNX model uses the 'CustomRelu' operator, `register_operator("CustomRelu", ...)` must be called before InferenceEngine::Core::ReadNetwork.
Re-registering ONNX operators within the same process is supported. During registration of an existing operator, a warning is printed.
-The example below demonstrates an examplary model that requires previously created 'CustomRelu' operator:
+The example below demonstrates a model that requires the previously created 'CustomRelu' operator:
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model
-For a reference on how to create a graph with nGraph operations, visit [nGraph tutorial](../nGraphTutorial.md).
+For a reference on how to create a graph with nGraph operations, visit [Custom nGraph Operations](AddingNGraphOps.md).
For a complete list of predefined nGraph operators, visit [available operations sets](../../ops/opset.md).
If an operator is no longer needed, it can be unregistered by calling `unregister_operator`. The function takes three arguments: `op_type`, `version`, and `domain`.
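As a hedged illustration of the registration and unregistration calls described above, the sketch below registers a hypothetical 'CustomRelu' from the made-up domain `com.example` (opset version 1) and maps it to a standard nGraph `Relu`; the header path and the Relu mapping are assumptions made for this example, not the snippet referenced above.

```cpp
#include <memory>
#include <onnx_import/onnx_utils.hpp>   // header location may differ between releases
#include <ngraph/opsets/opset5.hpp>

void register_custom_relu() {
    // Must run before InferenceEngine::Core::ReadNetwork is called on the model.
    ngraph::onnx_import::register_operator(
        "CustomRelu", 1, "com.example",
        [](const ngraph::onnx_import::Node& node) -> ngraph::OutputVector {
            ngraph::OutputVector inputs = node.get_ng_inputs();
            // For illustration only: implement the custom operator with a standard Relu.
            return {std::make_shared<ngraph::opset5::Relu>(inputs.at(0))};
        });
}

void unregister_custom_relu() {
    // op_type, version, domain — removes the operator once it is no longer needed.
    ngraph::onnx_import::unregister_operator("CustomRelu", 1, "com.example");
}
```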
diff --git a/docs/IE_DG/InferenceEngine_QueryAPI.md b/docs/IE_DG/InferenceEngine_QueryAPI.md
index 788c2d580324a9..60497bbebdf362 100644
--- a/docs/IE_DG/InferenceEngine_QueryAPI.md
+++ b/docs/IE_DG/InferenceEngine_QueryAPI.md
@@ -32,7 +32,8 @@ MYRIAD.1.4-ma2480
FPGA.0
FPGA.1
CPU
-GPU
+GPU.0
+GPU.1
...
```
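For illustration, a minimal sketch of retrieving this list programmatically is shown below; it simply prints whatever device names the runtime discovers on the current machine.

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core core;
    // Prints one device name per line, e.g. CPU, GPU.0, GPU.1, MYRIAD.x.x-..., FPGA.0
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```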
diff --git a/docs/IE_DG/Introduction.md b/docs/IE_DG/Introduction.md
index efab88d5dd95e2..6d3d5be66c608b 100644
--- a/docs/IE_DG/Introduction.md
+++ b/docs/IE_DG/Introduction.md
@@ -122,7 +122,4 @@ The open source version is available in the [OpenVINO™ toolkit GitHub reposito
- [Intel® Deep Learning Deployment Toolkit Web Page](https://software.intel.com/en-us/computer-vision-sdk)
-[scheme]: img/workflow_steps.png
-
-#### Optimization Notice
-For complete information about compiler optimizations, see our [Optimization Notice](https://software.intel.com/en-us/articles/optimization-notice#opt-en).
+[scheme]: img/workflow_steps.png
\ No newline at end of file
diff --git a/docs/IE_DG/Optimization_notice.md b/docs/IE_DG/Optimization_notice.md
deleted file mode 100644
index 3c128d95b6c5bc..00000000000000
--- a/docs/IE_DG/Optimization_notice.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Optimization Notice {#openvino_docs_IE_DG_Optimization_notice}
-
-![Optimization_notice](img/opt-notice-en_080411.gif)
\ No newline at end of file
diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md
index 245fa68e900e80..1eeedca35b9f52 100644
--- a/docs/IE_DG/Samples_Overview.md
+++ b/docs/IE_DG/Samples_Overview.md
@@ -43,7 +43,7 @@ To run the sample applications, you can use images and videos from the media fil
## Samples that Support Pre-Trained Models
-You can download the [pre-trained models](@ref omz_models_intel_index) using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
## Build the Sample Applications
@@ -127,7 +127,7 @@ You can also build a generated solution manually. For example, if you want to bu
Microsoft Visual Studio and open the generated solution file from the `C:\Users\\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\Samples.sln`
directory.
-### Build the Sample Applications on macOS*
+### Build the Sample Applications on macOS*
The officially supported macOS* build environment is the following:
diff --git a/docs/IE_DG/protecting_model_guide.md b/docs/IE_DG/protecting_model_guide.md
index 99b7836b1b25d1..2074d2230146cb 100644
--- a/docs/IE_DG/protecting_model_guide.md
+++ b/docs/IE_DG/protecting_model_guide.md
@@ -59,5 +59,4 @@ should be called with `weights` passed as an empty `Blob`.
- Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
diff --git a/docs/IE_DG/supported_plugins/CL_DNN.md b/docs/IE_DG/supported_plugins/CL_DNN.md
index a25012bf0732a0..a8cfbc579128f9 100644
--- a/docs/IE_DG/supported_plugins/CL_DNN.md
+++ b/docs/IE_DG/supported_plugins/CL_DNN.md
@@ -1,9 +1,30 @@
GPU Plugin {#openvino_docs_IE_DG_supported_plugins_CL_DNN}
=======
-The GPU plugin uses the Intel® Compute Library for Deep Neural Networks ([clDNN](https://01.org/cldnn)) to infer deep neural networks.
-clDNN is an open source performance library for Deep Learning (DL) applications intended for acceleration of Deep Learning Inference on Intel® Processor Graphics including Intel® HD Graphics and Intel® Iris® Graphics.
-For an in-depth description of clDNN, see: [clDNN sources](https://github.com/intel/clDNN) and [Accelerate Deep Learning Inference with Intel® Processor Graphics](https://software.intel.com/en-us/articles/accelerating-deep-learning-inference-with-intel-processor-graphics).
+The GPU plugin uses the Intel® Compute Library for Deep Neural Networks (clDNN) to infer deep neural networks.
+clDNN is an open source performance library for Deep Learning (DL) applications intended for acceleration of Deep Learning Inference on Intel® Processor Graphics, including Intel® HD Graphics, Intel® Iris® Graphics, Intel® Iris® Xe Graphics, and Intel® Iris® Xe MAX Graphics.
+For an in-depth description of clDNN, see [Inference Engine source files](https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/src/cldnn_engine) and [Accelerate Deep Learning Inference with Intel® Processor Graphics](https://software.intel.com/en-us/articles/accelerating-deep-learning-inference-with-intel-processor-graphics).
+
+## Device Naming Convention
+* Devices are enumerated as "GPU.X" where `X={0, 1, 2,...}`. Only Intel® GPU devices are considered.
+* If the system has an integrated GPU, it always has id=0 ("GPU.0").
+* Other GPUs are enumerated in an undefined order that depends on the GPU driver.
+* "GPU" is an alias for "GPU.0".
+* If the system does not have an integrated GPU, devices are enumerated starting from 0.
+
+For demonstration purposes, see the [Hello Query Device C++ Sample](../../../inference-engine/samples/hello_query_device/README.md), which prints the list of available devices with their associated indices. Below is an example output (truncated to device names only), followed by a short usage sketch:
+
+```sh
+./hello_query_device
+Available devices:
+ Device: CPU
+...
+ Device: GPU.0
+...
+ Device: GPU.1
+...
+ Device: HDDL
+```
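The following is a minimal, illustrative sketch of targeting a specific GPU by its enumerated name; the `model.xml` path is a placeholder and the second GPU is assumed to be present.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");        // placeholder IR path
    auto on_igpu = core.LoadNetwork(network, "GPU.0");    // integrated GPU; "GPU" refers to the same device
    auto on_dgpu = core.LoadNetwork(network, "GPU.1");    // another GPU, if one is present
    return 0;
}
```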
## Optimizations
@@ -92,7 +113,7 @@ When specifying key values as raw strings (that is, when using Python API), omit
| `KEY_CLDNN_PLUGIN_THROTTLE` | `<0-3>` | `0` | OpenCL queue throttling (before usage, make sure your OpenCL driver supports appropriate extension)
Lower value means lower driver thread priority and longer sleep time for it. 0 disables the setting. |
| `KEY_CLDNN_GRAPH_DUMPS_DIR` | `""` | `""` | clDNN graph optimizer stages dump output directory (in GraphViz format) |
| `KEY_CLDNN_SOURCES_DUMPS_DIR` | `""` | `""` | Final optimized clDNN OpenCL sources dump output directory |
-| `KEY_GPU_THROUGHPUT_STREAMS` | `KEY_GPU_THROUGHPUT_AUTO`, or positive integer| 1 | Specifies a number of GPU "execution" streams for the throughput mode (upper bound for a number of inference requests that can be executed simultaneously).
This option is can be used to decrease GPU stall time by providing more effective load from several streams. Increasing the number of streams usually is more effective for smaller topologies or smaller input sizes. Note that your application should provide enough parallel slack (e.g. running many inference requests) to leverage full GPU bandwidth. Additional streams consume several times more GPU memory, so make sure the system has enough memory available to suit parallel stream execution. Multiple streams might also put additional load on CPU. If CPU load increases, it can be regulated by setting an appropriate `KEY_CLDNN_PLUGIN_THROTTLE` option value (see above). If your target system has relatively weak CPU, keep throttling low.
The default value is 1, which implies latency-oriented behaviour.
`KEY_GPU_THROUGHPUT_AUTO` creates bare minimum of streams to improve the performance; this is the most portable option if you are not sure how many resources your target machine has (and what would be the optimal number of streams).
A positive integer value creates the requested number of streams. |
+| `KEY_GPU_THROUGHPUT_STREAMS` | `KEY_GPU_THROUGHPUT_AUTO`, or positive integer | 1 | Specifies a number of GPU "execution" streams for the throughput mode (upper bound for a number of inference requests that can be executed simultaneously).<br>This option can be used to decrease GPU stall time by providing a more effective load from several streams. Increasing the number of streams is usually more effective for smaller topologies or smaller input sizes. Note that your application should provide enough parallel slack (e.g. running many inference requests) to leverage full GPU bandwidth. Additional streams consume several times more GPU memory, so make sure the system has enough memory available to suit parallel stream execution. Multiple streams might also put additional load on the CPU. If CPU load increases, it can be regulated by setting an appropriate `KEY_CLDNN_PLUGIN_THROTTLE` option value (see above). If your target system has a relatively weak CPU, keep throttling low.<br>The default value is 1, which implies latency-oriented behavior.<br>`KEY_GPU_THROUGHPUT_AUTO` creates a bare minimum of streams to improve the performance; this is the most portable option if you are not sure how many resources your target machine has (and what the optimal number of streams would be).<br>A positive integer value creates the requested number of streams (see the usage sketch after this table). |
| `KEY_EXCLUSIVE_ASYNC_REQUESTS` | `YES` / `NO` | `NO` | Forces async requests (also from different executable networks) to execute serially.|
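Below is a minimal sketch of setting the streams option when loading a network, using the raw string form of the key (the string value of `KEY_GPU_THROUGHPUT_STREAMS`); the `model.xml` path and the value `2` are placeholders chosen for the example.

```cpp
#include <inference_engine.hpp>
#include <map>
#include <string>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");   // placeholder IR path
    // "GPU_THROUGHPUT_AUTO" can be passed instead of an explicit stream count.
    std::map<std::string, std::string> config = {{"GPU_THROUGHPUT_STREAMS", "2"}};
    auto exec_network = core.LoadNetwork(network, "GPU", config);
    return 0;
}
```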
## Note on Debug Capabilities of the GPU Plugin
diff --git a/docs/IE_DG/supported_plugins/HDDL.md b/docs/IE_DG/supported_plugins/HDDL.md
index f935c42cc21a3e..9154f1d3f3039a 100644
--- a/docs/IE_DG/supported_plugins/HDDL.md
+++ b/docs/IE_DG/supported_plugins/HDDL.md
@@ -21,7 +21,7 @@ For the "Supported Networks", please reference to [MYRIAD Plugin](MYRIAD.md)
See VPU common configuration parameters for the [VPU Plugins](VPU.md).
When specifying key values as raw strings (that is, when using Python API), omit the `KEY_` prefix.
-In addition to common parameters for Myriad plugin and HDDL plugin, HDDL plugin accepts the following options:
+In addition to the common parameters for the MYRIAD and HDDL plugins, the HDDL plugin accepts the following options:
| Parameter Name | Parameter Values | Default | Description |
| :--- | :--- | :--- | :--- |
diff --git a/docs/IE_DG/supported_plugins/MULTI.md b/docs/IE_DG/supported_plugins/MULTI.md
index a3166c3de8e956..a6b4aaefc9f1c9 100644
--- a/docs/IE_DG/supported_plugins/MULTI.md
+++ b/docs/IE_DG/supported_plugins/MULTI.md
@@ -47,11 +47,13 @@ Inference Engine now features a dedicated API to enumerate devices and their cap
```sh
./hello_query_device
Available devices:
- Device: CPU
+ Device: CPU
...
- Device: GPU
+ Device: GPU.0
...
- Device: HDDL
+ Device: GPU.1
+...
+ Device: HDDL
```
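As an illustration of how these explicit names can be combined under the Multi-Device plugin, the hedged sketch below loads one network across both GPUs and the HDDL card; the `model.xml` path and the availability of all three devices are assumptions.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");   // placeholder IR path
    // Devices are listed explicitly, in priority order, after the "MULTI:" prefix.
    auto exec_network = core.LoadNetwork(network, "MULTI:GPU.0,GPU.1,HDDL");
    return 0;
}
```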
A simple programmatic way to enumerate the devices and use them with the Multi-Device plugin is as follows:
diff --git a/docs/IE_DG/supported_plugins/VPU.md b/docs/IE_DG/supported_plugins/VPU.md
index 7c04290f7dd16d..189a23b5a94f20 100644
--- a/docs/IE_DG/supported_plugins/VPU.md
+++ b/docs/IE_DG/supported_plugins/VPU.md
@@ -9,12 +9,12 @@ This chapter provides information on the Inference Engine plugins that enable in
## Known Layers Limitations
-* `'ScaleShift'` layer is supported for zero value of `'broadcast'` attribute only.
-* `'CTCGreedyDecoder'` layer works with `'ctc_merge_repeated'` attribute equal 1.
-* `'DetectionOutput'` layer works with zero values of `'interpolate_orientation'` and `'num_orient_classes'` parameters only.
-* `'MVN'` layer uses fixed value for `'eps'` parameters (1e-9).
-* `'Normalize'` layer uses fixed value for `'eps'` parameters (1e-9) and is supported for zero value of `'across_spatial'` only.
-* `'Pad'` layer works only with 4D tensors.
+* `ScaleShift` layer is supported for a zero value of the `broadcast` attribute only.
+* `CTCGreedyDecoder` layer works with the `ctc_merge_repeated` attribute equal to 1.
+* `DetectionOutput` layer works with zero values of the `interpolate_orientation` and `num_orient_classes` parameters only.
+* `MVN` layer uses a fixed value for the `eps` parameter (1e-9).
+* `Normalize` layer uses a fixed value for the `eps` parameter (1e-9) and is supported for a zero value of `across_spatial` only.
+* `Pad` layer works only with 4D tensors.
## Optimizations
diff --git a/docs/Legal_Information.md b/docs/Legal_Information.md
index 00c6cd968357e6..2f3526f2902677 100644
--- a/docs/Legal_Information.md
+++ b/docs/Legal_Information.md
@@ -4,9 +4,7 @@ This software and the related documents are Intel copyrighted materials, and you
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [www.intel.com/design/literature.htm](https://www.intel.com/design/literature.htm).
-Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
-
-Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit [www.intel.com/benchmarks](https://www.intel.com/benchmarks).
+Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
@@ -14,7 +12,7 @@ Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
-© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.
+© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.
## OpenVINO™ Logo
To build equity around the project, the OpenVINO logo was created for both Intel and community usage. The logo may only be used to represent the OpenVINO toolkit and offerings built using the OpenVINO toolkit.
diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
index 8ce80da1d1579b..cd9245c3e69646 100644
--- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
+++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -12,6 +12,13 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
* .bin
- Contains the weights and biases binary data.
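To make the relationship between the two files concrete, here is a minimal, illustrative sketch of how the Inference Engine consumes an IR pair; `model.xml` and `model.bin` are placeholder names, and the weights file is picked up automatically when it sits next to the `.xml` with the same base name.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Reads model.xml (topology) and the matching model.bin (weights and biases) from the same directory.
    auto network = core.ReadNetwork("model.xml");
    auto exec_network = core.LoadNetwork(network, "CPU");
    return 0;
}
```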
+> **TIP**: You can also work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
+> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ that provides a web-based graphical environment enabling you to optimize, fine-tune, analyze, visualize, and compare
+> the performance of deep learning models on various Intel® architecture
+> configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components.
+>
+> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
+
## What's New in the Model Optimizer in this Release?
* Common changes:
diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
index cb111e9004bc4d..06ae438d9cd3c6 100644
--- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
@@ -23,7 +23,7 @@ A summary of the steps for optimizing and deploying a model that was trained wit
* **Object detection models:**
* SSD300-VGG16, SSD500-VGG16
* Faster-RCNN
- * RefineDet (Myriad plugin only)
+ * RefineDet (MYRIAD plugin only)
* **Face detection models:**
* VGG Face
diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
index 077e35db9d1569..7748206c36d09e 100644
--- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
@@ -280,7 +280,7 @@ python3 mo_tf.py --input_model inception_v1.pb -b 1 --tensorflow_custom_operatio
* Launching the Model Optimizer for Inception V1 frozen model and use custom sub-graph replacement file `transform.json` for model conversion. For more information about this feature, refer to [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
```sh
-python3 mo_tf.py --input_model inception_v1.pb -b 1 --tensorflow_use_custom_operations_config transform.json
+python3 mo_tf.py --input_model inception_v1.pb -b 1 --transformations_config transform.json
```
* Launching the Model Optimizer for Inception V1 frozen model and dump information about the graph to TensorBoard log dir `/tmp/log_dir`
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
index c58de18d8479d5..7d9aac14dbb0b4 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
@@ -46,7 +46,7 @@ To generate the IR of the EfficientDet TensorFlow model, run:
```sh
python3 $MO_ROOT/mo.py \
--input_model savedmodeldir/efficientdet-d4_frozen.pb \
---tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/automl_efficientdet.json \
+--transformations_config $MO_ROOT/extensions/front/tf/automl_efficientdet.json \
--input_shape [1,$IMAGE_SIZE,$IMAGE_SIZE,3] \
--reverse_input_channels
```
@@ -56,7 +56,7 @@ EfficientDet models were trained with different input image sizes. To determine
dictionary in the [hparams_config.py](https://github.com/google/automl/blob/96e1fee/efficientdet/hparams_config.py#L304) file.
The attribute `image_size` specifies the input shape to be used for the model conversion.
-The `tensorflow_use_custom_operations_config` command line parameter specifies the configuration json file containing hints
+The `transformations_config` command line parameter specifies the configuration json file containing hints
to the Model Optimizer on how to convert the model and trigger transformations implemented in the
`$MO_ROOT/extensions/front/tf/AutomlEfficientDet.py`. The json file contains parameters that must be changed if you
trained the model yourself and modified the `hparams_config` file, or if the parameters differ from the ones used for EfficientDet-D4.
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
index 0073ac2f5490ca..99748b7b18f61a 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
@@ -91,7 +91,7 @@ To generate the IR of the YOLOv3 TensorFlow model, run:
```sh
python3 mo_tf.py
--input_model /path/to/yolo_v3.pb
---tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/yolo_v3.json
+--transformations_config $MO_ROOT/extensions/front/tf/yolo_v3.json
--batch 1
```
@@ -99,18 +99,18 @@ To generate the IR of the YOLOv3-tiny TensorFlow model, run:
```sh
python3 mo_tf.py
--input_model /path/to/yolo_v3_tiny.pb
---tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/yolo_v3_tiny.json
+--transformations_config $MO_ROOT/extensions/front/tf/yolo_v3_tiny.json
--batch 1
```
where:
* `--batch` defines the shape of the model input. In the example, `--batch` is equal to 1, but you can also specify other integers larger than 1.
-* `--tensorflow_use_custom_operations_config` adds missing `Region` layers to the model. In the IR, the `Region` layer has name `RegionYolo`.
+* `--transformations_config` adds missing `Region` layers to the model. In the IR, the `Region` layer has name `RegionYolo`.
> **NOTE:** The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../Converting_Model_General.md).
-OpenVINO™ toolkit provides a demo that uses YOLOv3 model. For more information, refer to [Object Detection YOLO* V3 Demo, Async API Performance Showcase](@ref omz_demos_object_detection_demo_yolov3_async_README).
+The OpenVINO™ toolkit provides a demo that uses the YOLOv3 model. For more information, refer to [Object Detection C++ Demo](@ref omz_demos_object_detection_demo_ssd_async_README).
## Convert YOLOv1 and YOLOv2 Models to the IR
@@ -167,14 +167,14 @@ python3 ./mo_tf.py
--input_model /.pb \
--batch 1 \
--scale 255 \
---tensorflow_use_custom_operations_config /deployment_tools/model_optimizer/extensions/front/tf/.json
+--transformations_config /deployment_tools/model_optimizer/extensions/front/tf/.json
```
where:
* `--batch` defines the shape of the model input. In the example, `--batch` is equal to 1, but you can also specify other integers larger than 1.
* `--scale` specifies the scale factor that input values will be divided by.
The model was trained with input values in the range `[0,1]`. OpenVINO™ toolkit samples read input images as values in the `[0,255]` range, so the scale of 255 must be applied.
-* `--tensorflow_use_custom_operations_config` adds missing `Region` layers to the model. In the IR, the `Region` layer has name `RegionYolo`.
+* `--transformations_config` adds missing `Region` layers to the model. In the IR, the `Region` layer has name `RegionYolo`.
For other applicable parameters, refer to [Convert Model from TensorFlow](../Convert_Model_From_TensorFlow.md).
> **NOTE:** The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../Converting_Model_General.md).
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md
index a3e6eda7756ad7..70bec8bdb4f91c 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md
@@ -1,4 +1,4 @@
# Sub-Graph Replacement in the Model Optimizer {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Subgraph_Replacement_Model_Optimizer}
-The document has been deprecated. Refer to the [Model Optimizer Extensibility](Subgraph_Replacement_Model_Optimizer.md)
+The document has been deprecated. Refer to the [Model Optimizer Extensibility](Customize_Model_Optimizer.md)
for the up-to-date documentation.
diff --git a/docs/Optimization_notice.md b/docs/Optimization_notice.md
deleted file mode 100644
index 99f71b905cc6b5..00000000000000
--- a/docs/Optimization_notice.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Optimization Notice {#openvino_docs_Optimization_notice}
-
-![Optimization_notice](img/opt-notice-en_080411.gif)
\ No newline at end of file
diff --git a/docs/benchmarks/performance_benchmarks.md b/docs/benchmarks/performance_benchmarks.md
index 9247d63541ba28..169c83c9bea947 100644
--- a/docs/benchmarks/performance_benchmarks.md
+++ b/docs/benchmarks/performance_benchmarks.md
@@ -26,127 +26,174 @@ Measuring inference performance involves many variables and is extremely use-cas
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
\htmlonly
-
+
\endhtmlonly
## Platform Configurations
-Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.1.
+Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.2.
-Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of September 25, 2020 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
+Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 9, 2020 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
-Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, see [Performance Benchmark Test Disclosure](https://www.intel.com/content/www/us/en/benchmarks/benchmark.html).
+Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
-Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. [Notice Revision #2010804](https://software.intel.com/articles/optimization-notice).
+Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products.
Testing by Intel done on: see test date for each HW platform below.
**CPU Inference Engines**
-| | Intel® Xeon® E-2124G | Intel® Xeon® Silver 4216R | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 |
-| ------------------------------- | ----------------------| ---------------------------- | ---------------------------- | ---------------------------- |
-| Motherboard | ASUS* WS C246 PRO | Intel® Server Board S2600STB | Intel® Server Board S2600STB | Intel® Server Board S2600STB |
-| CPU | Intel® Xeon® E-2124G CPU @ 3.40GHz | Intel® Xeon® Silver 4216R CPU @ 2.20GHz | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz |
-| Hyper Threading | OFF | ON | ON | ON |
-| Turbo Setting | ON | ON | ON | ON |
-| Memory | 2 x 16 GB DDR4 2666MHz| 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz |
-| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
-| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
-| BIOS Vendor | American Megatrends Inc.* | Intel Corporation | Intel Corporation | Intel Corporation |
-| BIOS Version | 0904 | SE5C620.86B.02.01.
0009.092820190230 | SE5C620.86B.02.01.
0009.092820190230 | SE5C620.86B.02.01.
0009.092820190230 |
-| BIOS Release | April 12, 2019 | September 28, 2019 | September 28, 2019 | September 28, 2019 |
-| BIOS Settings | Select optimized default settings,
save & exit | Select optimized default settings,
change power policy
to "performance",
save & exit | Select optimized default settings,
change power policy to "performance",
save & exit | Select optimized default settings,
change power policy to "performance",
save & exit |
-| Batch size | 1 | 1 | 1 | 1 |
-| Precision | INT8 | INT8 | INT8 | INT8 |
-| Number of concurrent inference requests | 4 | 32 | 32 | 52 |
-| Test Date | September 25, 2020 | September 25, 2020 | September 25, 2020 | September 25, 2020 |
-| Power dissipation, TDP in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) |
-| CPU Price on September 29, 2020, USD
Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) |
+| | Intel® Xeon® E-2124G | Intel® Xeon® W1290P | Intel® Xeon® Silver 4216R |
+| ------------------------------- | ---------------------- | --------------------------- | ---------------------------- |
+| Motherboard | ASUS* WS C246 PRO | ASUS* WS W480-ACE | Intel® Server Board S2600STB |
+| CPU | Intel® Xeon® E-2124G CPU @ 3.40GHz | Intel® Xeon® W-1290P CPU @ 3.70GHz | Intel® Xeon® Silver 4216R CPU @ 2.20GHz |
+| Hyper Threading | OFF | ON | ON |
+| Turbo Setting | ON | ON | ON |
+| Memory | 2 x 16 GB DDR4 2666MHz | 4 x 16 GB DDR4 @ 2666MHz |12 x 32 GB DDR4 2666MHz |
+| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
+| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
+| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc. | Intel Corporation |
+| BIOS Version | 0904 | 607 | SE5C620.86B.02.01.<br>0009.092820190230 |
+| BIOS Release | April 12, 2019 | May 29, 2020 | September 28, 2019 |
+| BIOS Settings | Select optimized default settings,<br>save & exit | Select optimized default settings,<br>save & exit | Select optimized default settings,<br>change power policy<br>to "performance",<br>save & exit |
+| Batch size | 1 | 1 | 1
+| Precision | INT8 | INT8 | INT8
+| Number of concurrent inference requests | 4 | 5 | 32
+| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020
+| Power dissipation, TDP in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) |
+| CPU Price on September 29, 2020, USD<br>Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [539](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) |
**CPU Inference Engines (continue)**
-| | Intel® Core™ i5-8500 | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | 11th Gen Intel® Core™ i5-1145G7E |
-| -------------------- | ---------------------------------- | ----------------------------------- |--------------------------------------|-----------------------------------|
-| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | Intel Corporation
internal/Reference Validation Platform |
-| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | 11th Gen Intel® Core™ i5-1145G7E @ 2.60GHz |
-| Hyper Threading | OFF | ON | ON | ON |
-| Turbo Setting | ON | ON | ON | ON |
-| Memory | 2 x 16 GB DDR4 2666MHz | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz | 2 x 8 GB DDR4 3200MHz |
-| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
-| Kernel Version | 5.3.0-24-generic | 5.0.0-23-generic | 5.0.0-23-generic | 5.8.0-05-generic |
-| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* | Intel Corporation |
-| BIOS Version | 2401 | F11 | 505 | TGLIFUI1.R00.3243.A04.2006302148 |
-| BIOS Release | July 12, 2019 | March 13, 2019 | December 17, 2019 | June 30, 2020 |
-| BIOS Settings | Select optimized default settings,
save & exit | Select optimized default settings,
set OS type to "other",
save & exit | Default Settings | Default Settings |
-| Batch size | 1 | 1 | 1 | 1 |
-| Precision | INT8 | INT8 | INT8 | INT8 |
-| Number of concurrent inference requests | 3 | 4 | 24 | 4 |
-| Test Date | September 25, 2020 | September 25, 2020 | September 25, 2020 | September 25, 2020 |
-| Power dissipation, TDP in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1) | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [28](https://ark.intel.com/content/www/us/en/ark/products/208081/intel-core-i5-1145g7e-processor-8m-cache-up-to-4-10-ghz.html) |
-| CPU Price on September 29, 2020, USD
Prices may vary | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [309](https://mysamples.intel.com/SAM_U_Product/ProductDetail.aspx?InputMMID=99A3D1&RequestID=0&ProductID=1213750) |
+| | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 |
+| ------------------------------- | ---------------------------- | ---------------------------- |
+| Motherboard | Intel® Server Board S2600STB | Intel® Server Board S2600STB |
+| CPU | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz |
+| Hyper Threading | ON | ON |
+| Turbo Setting | ON | ON |
+| Memory | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz |
+| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
+| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
+| BIOS Vendor | Intel Corporation | Intel Corporation |
+| BIOS Version | SE5C620.86B.02.01.<br>0009.092820190230 | SE5C620.86B.02.01.<br>0009.092820190230 |
+| BIOS Release | September 28, 2019 | September 28, 2019 |
+| BIOS Settings | Select optimized default settings,<br>change power policy to "performance",<br>save & exit | Select optimized default settings,<br>change power policy to "performance",<br>save & exit |
+| Batch size | 1 | 1 |
+| Precision | INT8 | INT8 |
+| Number of concurrent inference requests |32 | 52 |
+| Test Date | December 9, 2020 | December 9, 2020 |
+| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) |
+| CPU Price on September 29, 2020, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) |
+
+
+**CPU Inference Engines (continue)**
+
+| | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | Intel® Core™ i9-10900TE<br>(iEi Flex BX210AI) | 11th Gen Intel® Core™ i7-1185G7 |
+| -------------------- | ----------------------------------- |--------------------------------------| ---------------------------------------------|---------------------------------|
+| Motherboard | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | iEi / B595 | Intel Corporation<br>internal/Reference<br>Validation Platform |
+| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | Intel® Core™ i9-10900TE CPU @ 1.80GHz | 11th Gen Intel® Core™ i7-1185G7 @ 3.00GHz |
+| Hyper Threading | ON | ON | ON | ON |
+| Turbo Setting | ON | ON | ON | ON |
+| Memory | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz | 2 x 8 GB DDR4 @ 2400MHz | 2 x 8 GB DDR4 3200MHz |
+| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
+| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.8.0-05-generic | 5.8.0-05-generic |
+| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* | Intel Corporation |
+| BIOS Version | F11 | 505 | Z667AR10 | TGLSFWI1.R00.3425.<br>A00.2010162309 |
+| BIOS Release | March 13, 2019 | December 17, 2019 | July 15, 2020 | October 16, 2020 |
+| BIOS Settings | Select optimized default settings,<br>set OS type to "other",<br>save & exit | Default Settings | Default Settings | Default Settings |
+| Batch size | 1 | 1 | 1 | 1 |
+| Precision | INT8 | INT8 | INT8 | INT8 |
+| Number of concurrent inference requests |4 | 24 | 5 | 4 |
+| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 | December 9, 2020 |
+| Power dissipation, TDP in Watt | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [28](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-1) |
+| CPU Price on September 29, 2020, USD<br>Prices may vary | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [444](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [426](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-0) |
+
+
+**CPU Inference Engines (continue)**
+
+| | Intel® Core™ i5-8500 | Intel® Core™ i5-10500TE | Intel® Core™ i5-10500TE<br>(iEi Flex-BX210AI) |
+| -------------------- | ---------------------------------- | ----------------------------------- |-------------------------------------- |
+| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z490 AORUS PRO AX | iEi / B595 |
+| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz |
+| Hyper Threading | OFF | ON | ON |
+| Turbo Setting | ON | ON | ON |
+| Memory | 2 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 @ 2666MHz | 1 x 8 GB DDR4 @ 2400MHz |
+| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
+| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
+| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* |
+| BIOS Version | 2401 | F3 | Z667AR10 |
+| BIOS Release | July 12, 2019 | March 25, 2020 | July 17, 2020 |
+| BIOS Settings | Select optimized default settings,<br>save & exit | Select optimized default settings,<br>set OS type to "other",<br>save & exit | Default Settings |
+| Batch size | 1 | 1 | 1 |
+| Precision | INT8 | INT8 | INT8 |
+| Number of concurrent inference requests | 3 | 4 | 4 |
+| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 |
+| Power dissipation, TDP in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1)| [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
+| CPU Price on September 29, 2020, USD<br>Prices may vary | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
+
**CPU Inference Engines (continue)**
@@ -166,7 +213,7 @@ Testing by Intel done on: see test date for each HW platform below.
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 |
-| Test Date | September 25, 2020 | September 25, 2020 |
+| Test Date | December 9, 2020 | December 9, 2020 |
| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)|
| CPU Price on September 29, 2020, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) |
@@ -174,7 +221,7 @@ Testing by Intel done on: see test date for each HW platform below.
**Accelerator Inference Engines**
-| | Intel® Neural Compute Stick 2 | Intel® Vision Accelerator Design<br>with Intel® Movidius™ VPUs (Uzel* UI-AR8) |
+| | Intel® Neural Compute Stick 2 | Intel® Vision Accelerator Design<br>with Intel® Movidius™ VPUs (Mustang-V100-MX8) |
| --------------------------------------- | ------------------------------------- | ------------------------------------- |
| VPU | 1 X Intel® Movidius™ Myriad™ X MA2485 | 8 X Intel® Movidius™ Myriad™ X MA2485 |
| Connection | USB 2.0/3.0 | PCIe X4 |
@@ -182,7 +229,7 @@ Testing by Intel done on: see test date for each HW platform below.
| Precision | FP16 | FP16 |
| Number of concurrent inference requests | 4 | 32 |
| Power dissipation, TDP in Watt | 2.5 | [30](https://www.mouser.com/ProductDetail/IEI/MUSTANG-V100-MX8-R10?qs=u16ybLDytRaZtiUUvsd36w%3D%3D) |
-| CPU Price, USD<br>Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) (from September 29, 2020) | [768](https://www.mouser.com/ProductDetail/IEI/MUSTANG-V100-MX8-R10?qs=u16ybLDytRaZtiUUvsd36w%3D%3D) (from May 15, 2020) |
+| CPU Price, USD<br>Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) (from December 9, 2020) | [214](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) (from December 9, 2020) |
| Host Computer | Intel® Core™ i7 | Intel® Core™ i5 |
| Motherboard | ASUS* Z370-A II | Uzelinfo* / US-E1300 |
| CPU | Intel® Core™ i7-8700 CPU @ 3.20GHz | Intel® Core™ i5-6600 CPU @ 3.30GHz |
@@ -194,9 +241,9 @@ Testing by Intel done on: see test date for each HW platform below.
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 411 | 5.12 |
| BIOS Release | September 21, 2018 | September 21, 2018 |
-| Test Date | September 25, 2020 | September 25, 2020 |
+| Test Date | December 9, 2020 | December 9, 2020 |
-Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.1.html)
+Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.2.html)
+ [SVG diagram: face beautification pipeline — Input, Face detector, Landmarks detector, Generate BG mask, Generate sharp mask, Generate blur mask, Unsharp mask, Bilateral filter, Output; the mask-generation and filtering steps run for each detected face]
diff --git a/docs/img/gapi_face_beautification_example.jpg b/docs/img/gapi_face_beautification_example.jpg
new file mode 100644
index 00000000000000..eb3df6b58785bf
--- /dev/null
+++ b/docs/img/gapi_face_beautification_example.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb32d3db8768ff157daeff999cc7f4361d2bca866ed6dc95b8f78d8cc62ae208
+size 176525
diff --git a/docs/img/gapi_kernel_implementation_hierarchy.png b/docs/img/gapi_kernel_implementation_hierarchy.png
new file mode 100644
index 00000000000000..f910caa840d191
--- /dev/null
+++ b/docs/img/gapi_kernel_implementation_hierarchy.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f291422f562825d4c5eee718b7c22e472b02a5a0a9c0be01d59b6b7cd8d756b1
+size 14603
diff --git a/docs/img/gapi_programming_model.png b/docs/img/gapi_programming_model.png
new file mode 100644
index 00000000000000..2ac10dcc82c13f
--- /dev/null
+++ b/docs/img/gapi_programming_model.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:925f70ede92d71e16733d78e003f62cd8bfdee0790bddbf2b7ce4fc8ef3f44bf
+size 171518
diff --git a/docs/img/int8vsfp32.png b/docs/img/int8vsfp32.png
index a47ffa2f1c96ff..b4889ea2252a97 100644
--- a/docs/img/int8vsfp32.png
+++ b/docs/img/int8vsfp32.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:304869bcbea000f6dbf46dee7900ff01aa61a75a3787969cc307f2f54d57263c
-size 32185
+oid sha256:0109b9cbc2908f786f6593de335c725f8ce5c800f37a7d79369408cc47eb8471
+size 25725
diff --git a/docs/install_guides/PAC_Configure_2018R5.md b/docs/install_guides/PAC_Configure_2018R5.md
index 8177adb315d2b5..1378c0c6f2cb09 100644
--- a/docs/install_guides/PAC_Configure_2018R5.md
+++ b/docs/install_guides/PAC_Configure_2018R5.md
@@ -236,11 +236,7 @@ classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png -d
classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png -d HETERO:FPGA,CPU -ni 100
```
-Congratulations, You are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and are other resources are provided below.
-
-## Hello World Face Detection Tutorial
-
-Use the [Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise](https://github.com/fritzboyle/openvino-with-fpga-hello-world-face-detection) to learn more about how the software and hardware work together.
+Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA.
## Additional Resources
diff --git a/docs/install_guides/PAC_Configure_2019RX.md b/docs/install_guides/PAC_Configure_2019RX.md
index 867215540e4881..5e43876ec20e00 100644
--- a/docs/install_guides/PAC_Configure_2019RX.md
+++ b/docs/install_guides/PAC_Configure_2019RX.md
@@ -237,12 +237,7 @@ classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png
classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png -d HETERO:FPGA,CPU
```
-Congratulations, You are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and are other resources are provided below.
-
-## Hello World Face Detection Tutorial
-
-Use the [Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise](https://github.com/fritzboyle/openvino-with-fpga-hello-world-face-detection) to learn more about how the software and hardware work together.
-
+Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA.
## Additional Resources
Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
diff --git a/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md b/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
index c0082ef86f62a2..328c824fa35967 100644
--- a/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
+++ b/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
@@ -319,11 +319,7 @@ The throughput on FPGA is listed and may show a lower FPS. This is due to the in
./classification_sample_async -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU -ni 100
```
-Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and are other resources are provided below.
-
-## Hello World Face Detection Tutorial
-
-Use the [Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise](https://github.com/fritzboyle/openvino-with-fpga-hello-world-face-detection) to learn more about how the software and hardware work together.
+Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA.
## Additional Resources
diff --git a/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R1.md b/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R1.md
index 640f5387c38fa7..8de131e8c45161 100644
--- a/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R1.md
+++ b/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R1.md
@@ -270,11 +270,7 @@ The throughput on FPGA is listed and may show a lower FPS. This is due to the in
./classification_sample_async -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU -ni 100
```
-Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and are other resources are provided below.
-
-## Hello World Face Detection Tutorial
-
-Use the [Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise](https://github.com/fritzboyle/openvino-with-fpga-hello-world-face-detection) to learn more about how the software and hardware work together.
+Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA.
## Additional Resources
diff --git a/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R3.md b/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R3.md
index 369555f35f2f8a..06d8ebbc86939a 100644
--- a/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R3.md
+++ b/docs/install_guides/VisionAcceleratorFPGA_Configure_2019R3.md
@@ -270,11 +270,7 @@ Note the CPU throughput in Frames Per Second (FPS). This tells you how quickly t
```
The throughput on FPGA is listed and may show a lower FPS. This may be due to the initialization time. To account for that, increase the number of iterations or batch size when deploying to get a better sense of the speed the FPGA can run inference at.
-Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and are other resources are provided below.
-
-## Hello World Face Detection Tutorial
-
-Use the [Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise](https://github.com/fritzboyle/openvino-with-fpga-hello-world-face-detection) to learn more about how the software and hardware work together.
+Congratulations, you are done with the Intel® Distribution of OpenVINO™ toolkit installation for FPGA.
## Additional Resources
diff --git a/docs/install_guides/installing-openvino-apt.md b/docs/install_guides/installing-openvino-apt.md
index 08249588623ac6..812c6195f2c9a5 100644
--- a/docs/install_guides/installing-openvino-apt.md
+++ b/docs/install_guides/installing-openvino-apt.md
@@ -129,6 +129,5 @@ sudo apt autoremove intel-openvino--ubuntu-.<
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic).
- For IoT Libraries & Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
diff --git a/docs/install_guides/installing-openvino-conda.md b/docs/install_guides/installing-openvino-conda.md
index c491c862a682dc..a53997c4901fb5 100644
--- a/docs/install_guides/installing-openvino-conda.md
+++ b/docs/install_guides/installing-openvino-conda.md
@@ -49,7 +49,7 @@ Now you can start to develop and run your application.
## Known Issues and Limitations
- You cannot use Python bindings included in Intel® Distribution of OpenVINO™ toolkit with [Anaconda* distribution](https://www.anaconda.com/products/individual/)
-- You cannot use Python OpenVINO™ bindings included in Anaconda* package with official [Python distribution](https://https://www.python.org/).
+- You cannot use the Python OpenVINO™ bindings included in the Anaconda* package with the official [Python distribution](https://www.python.org/).
## Additional Resources
@@ -59,6 +59,5 @@ Now you can start to develop and run your application.
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic).
- Intel® Distribution of OpenVINO™ toolkit Anaconda* home page: [https://anaconda.org/intel/openvino-ie4py](https://anaconda.org/intel/openvino-ie4py)
diff --git a/docs/install_guides/installing-openvino-docker-linux.md b/docs/install_guides/installing-openvino-docker-linux.md
index 9d73e742d8aaae..ff5acfbe0635b2 100644
--- a/docs/install_guides/installing-openvino-docker-linux.md
+++ b/docs/install_guides/installing-openvino-docker-linux.md
@@ -59,7 +59,7 @@ RUN apt-get update && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-core_1.0.2597_amd64.deb" --output "intel-igc-core_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-opencl_1.0.2597_amd64.deb" --output "intel-igc-opencl_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-opencl_19.41.14441_amd64.deb" --output "intel-opencl_19.41.14441_amd64.deb" && \
- curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-ocloc_19.04.12237_amd64.deb" --output "intel-ocloc_19.04.12237_amd64.deb" && \
+ curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-ocloc_19.41.14441_amd64.deb" --output "intel-ocloc_19.41.14441_amd64.deb" && \
dpkg -i /tmp/opencl/*.deb && \
ldconfig && \
rm /tmp/opencl
diff --git a/docs/install_guides/installing-openvino-linux.md b/docs/install_guides/installing-openvino-linux.md
index 9ceed341bda9b8..df4c0413152a97 100644
--- a/docs/install_guides/installing-openvino-linux.md
+++ b/docs/install_guides/installing-openvino-linux.md
@@ -31,6 +31,19 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo). |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
+**Could Be Optionally Installed**
+
+[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a web-based graphical environment built on OpenVINO™ that enables you to optimize, fine-tune, analyze, visualize, and compare the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components:
+* [Model Downloader](@ref omz_tools_downloader_README)
+* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+* [Post-training Optimization Tool](@ref pot_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
+
+Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
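+
+As a minimal sketch of that Docker-based start (the `openvino/workbench` image name and port `5665` are assumptions here; the linked guide has the authoritative, up-to-date command):
+```sh
+# Pull the DL Workbench image and expose its web UI locally (image name and port are assumptions)
+docker pull openvino/workbench:latest
+docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest
+```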
+
## System Requirements
**Hardware**
@@ -65,14 +78,12 @@ This guide provides step-by-step instructions on how to install the Intel® Dist
2. Install External software dependencies
3. Set the OpenVINO™ Environment Variables: Optional Update to .bashrc.
4. Configure the Model Optimizer
-5. Run the Verification Scripts to Verify Installation and Compile Samples
-6. Steps for Intel® Processor Graphics (GPU)
-7. Steps for Intel® Neural Compute Stick 2
-8. Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPU
+5. Steps for Intel® Processor Graphics (GPU)
+6. Steps for Intel® Neural Compute Stick 2
+7. Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPU
After installing your Intel® Movidius™ VPU, you will return to this guide to complete OpenVINO™ installation.
-9. Run a Sample Application
-10. Uninstall the Intel® Distribution of OpenVINO™ Toolkit.
-11. Use the Face Detection Tutorial
+8. Get Started with Code Samples and Demo Applications
+9. Steps to uninstall the Intel® Distribution of OpenVINO™ Toolkit.
## Install the Intel® Distribution of OpenVINO™ Toolkit Core Components
@@ -98,15 +109,10 @@ cd l_openvino_toolkit_p_
```
If you have a previous version of the Intel Distribution of OpenVINO
toolkit installed, rename or delete these two directories:
-- `~/inference_engine_samples_build`
-- `~/openvino_models`
-
- **Installation Notes:**
- - Choose an installation option and run the related script as root.
- - You can use either a GUI installation wizard or command line instructions (CLI).
- - Screenshots are provided for the GUI, but not for CLI. The following information also applies to CLI and will be helpful to your installation where you will be presented with the same choices and tasks.
-
-5. Choose your installation option:
+ - `~/inference_engine_samples_build`
+ - `~/openvino_models`
+5. Choose your installation option and run the related script as root. You can use either a GUI installation wizard or command-line instructions (CLI).
+ Screenshots are provided for the GUI only, but the information below also applies to the CLI, where you will be presented with the same choices and tasks.
- **Option 1:** GUI Installation Wizard:
```sh
sudo ./install_GUI.sh
@@ -120,27 +126,22 @@ sudo ./install.sh
sudo sed -i 's/decline/accept/g' silent.cfg
sudo ./install.sh -s silent.cfg
```
-You can select which OpenVINO components will be installed by modifying the `COMPONENTS` parameter in the `silent.cfg` file. For example, to install only CPU runtime for the Inference Engine, set
-`COMPONENTS=intel-openvino-ie-rt-cpu__x86_64` in `silent.cfg`.
-To get a full list of available components for installation, run the `./install.sh --list_components` command from the unpacked OpenVINO™ toolkit package.
-
-6. Follow the instructions on your screen. Watch for informational
-messages such as the following in case you must complete additional
-steps:
-![](../img/openvino-install-linux-01.png)
-
+ You can select which OpenVINO components will be installed by modifying the `COMPONENTS` parameter in the `silent.cfg` file. For example, to install only CPU runtime for the Inference Engine, set `COMPONENTS=intel-openvino-ie-rt-cpu__x86_64` in `silent.cfg`. To get a full list of available components for installation, run the `./install.sh --list_components` command from the unpacked OpenVINO™ toolkit package.
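+   For example, here is a hedged sketch of a CPU-runtime-only silent installation, assuming `silent.cfg` contains an uncommented `COMPONENTS=` line (component names can change between releases, so always check `--list_components` first):
+   ```sh
+   # List all installable components shipped in this package
+   sudo ./install.sh --list_components
+   # Accept the EULA and restrict the install to the CPU runtime only
+   sudo sed -i 's/decline/accept/g' silent.cfg
+   sudo sed -i 's/^COMPONENTS=.*/COMPONENTS=intel-openvino-ie-rt-cpu__x86_64/' silent.cfg
+   sudo ./install.sh -s silent.cfg
+   ```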
+6. Follow the instructions on your screen. Watch for informational messages such as the following in case you must complete additional steps:
+ ![](../img/openvino-install-linux-01.png)
7. If you select the default options, the **Installation summary** GUI screen looks like this:
-![](../img/openvino-install-linux-02.png)
-**Optional:** You can choose **Customize** to change the installation directory or the components you want to install:
-![](../img/openvino-install-linux-03.png)
-By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as ``:
- - For root or administrator: `/opt/intel/openvino_/`
- - For regular users: `/home//intel/openvino_/`
-For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino_2021/`.
+ ![](../img/openvino-install-linux-02.png)
+ By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as ``:
+ * For root or administrator: `/opt/intel/openvino_/`
+ * For regular users: `/home//intel/openvino_/`
+ For simplicity, a symbolic link to the latest installation is also created: `/opt/intel/openvino_2021/`.
+
+8. **Optional**: You can choose **Customize** to change the installation directory or the components you want to install:
+> **NOTE**: If there is an OpenVINO™ toolkit version previously installed on your system, the installer will use the same destination directory for next installations. If you want to install a newer version to a different directory, you need to uninstall the previously installed versions.
+ ![](../img/openvino-install-linux-03.png)
> **NOTE**: The Intel® Media SDK component is always installed in the `/opt/intel/mediasdk` directory regardless of the OpenVINO installation path chosen.
-8. A Complete screen indicates that the core components have been installed:
-
-![](../img/openvino-install-linux-04.png)
+9. A Complete screen indicates that the core components have been installed:
+ ![](../img/openvino-install-linux-04.png)
The first core components are installed. Continue to the next section to install additional dependencies.
@@ -266,51 +267,15 @@ cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisit
```
The Model Optimizer is configured for one or more frameworks.
-You are ready to compile the samples by running the verification scripts.
-
-## Run the Verification Scripts to Verify Installation
-
-> **IMPORTANT**: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
+You have completed all the required installation, configuration, and build steps in this guide and can now use your CPU to work with your trained models.
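+
+For example, a hedged sketch of converting a trained model to IR with the configured Model Optimizer (the input model path and output directory are placeholders, not files shipped with the toolkit):
+```sh
+cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
+# /path/to/your_model.pb and ~/openvino_ir are placeholders for your own model and output folder
+python3 mo.py --input_model /path/to/your_model.pb --output_dir ~/openvino_ir
+```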
-To verify the installation and compile two samples, use the steps below to run the verification applications provided with the product on the CPU.
-
-> **NOTE:** To run the demo applications on Intel® Processor Graphics or Intel® Neural Compute Stick 2 devices, make sure you first completed the additional Steps for Intel® Processor Graphics (GPU) or Steps for Intel® Neural Compute Stick 2.
-
-1. Go to the **Inference Engine demo** directory:
-```sh
-cd /opt/intel/openvino_2021/deployment_tools/demo
-```
-
-2. Run the **Image Classification verification script**:
-```sh
-./demo_squeezenet_download_convert_run.sh
-```
- This verification script downloads a SqueezeNet model, uses the Model Optimizer to convert the model to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
- This verification script builds the [Image Classification Sample Async](../../inference-engine/samples/classification_sample_async/README.md) application and run it with the `car.png` image located in the demo directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
- ![](../img/image_classification_script_output_lnx.png)
-
-3. Run the **Inference Pipeline verification script**:
-```sh
-./demo_security_barrier_camera.sh
-```
- This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
- First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
- When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
- ![](../img/inference_pipeline_script_lnx.png)
-
-4. Close the image viewer window to complete the verification script.
-
-
-To learn about the verification scripts, see the `README.txt` file in `/opt/intel/openvino_2021/deployment_tools/demo`.
-
-For a description of the Intel Distribution of OpenVINO™ pre-trained object detection and object recognition models, see [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index).
-
-You have completed all required installation, configuration and build steps in this guide to use your CPU to work with your trained models.
-To use other hardware, see;
+To enable inference on other hardware, see:
- Steps for Intel® Processor Graphics (GPU)
- Steps for Intel® Neural Compute Stick 2
- Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
+Or proceed to the Get Started section to run code samples and demo applications.
+
## Steps for Intel® Processor Graphics (GPU)
The steps in this section are required only if you want to enable the toolkit components to use processor graphics (GPU) on your system.
@@ -323,11 +288,10 @@ cd /opt/intel/openvino_2021/install_dependencies/
```sh
sudo -E su
```
-3. Install the **Intel® Graphics Compute Runtime for OpenCL™** driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics. Run the installation script:
+3. Install the **Intel® Graphics Compute Runtime for OpenCL™** driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics. The drivers are not included in the package. To install them, make sure you have an Internet connection and run the installation script:
```sh
./install_NEO_OCL_driver.sh
```
- The drivers are not included in the package and the script downloads them. Make sure you have the internet connection for this step.
The script compares the driver version on the system to the current version. If the driver version on the system is higher or equal to the current version, the script does
not install a new driver. If the version of the driver is lower than the current version, the script uninstalls the lower and installs the current version with your permission:
![](../img/NEO_check_agreement.png)
@@ -335,9 +299,13 @@ not install a new driver. If the version of the driver is lower than the current
```sh
Add OpenCL user to video group
```
- Ignore this suggestion and continue.
+ Ignore this suggestion and continue.
You can also find the most recent version of the driver, installation procedure and other information in the [https://github.com/intel/compute-runtime/](https://github.com/intel/compute-runtime/) repository.
+
4. **Optional** Install header files to allow compiling a new code. You can find the header files at [Khronos OpenCL™ API Headers](https://github.com/KhronosGroup/OpenCL-Headers.git).
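+
+As an optional sanity check (a sketch, not an official step; the `clinfo` package name below assumes an Ubuntu-based system), you can confirm that OpenCL now reports your GPU device:
+```sh
+sudo apt install clinfo
+# Lists the OpenCL platforms and devices visible to the runtime
+clinfo | grep -i "device name"
+```
+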
+You've completed all required configuration steps to perform inference on processor graphics.
+Proceed to the Get Started section to run code samples and demo applications.
+
## Steps for Intel® Neural Compute Stick 2
These steps are only required if you want to perform inference on Intel® Movidius™ NCS powered by the Intel® Movidius™ Myriad™ 2 VPU or Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the [Get Started page for Intel® Neural Compute Stick 2:](https://software.intel.com/en-us/neural-compute-stick/get-started)
@@ -348,20 +316,23 @@ sudo usermod -a -G users "$(whoami)"
```
Log out and log in for it to take effect.
2. To perform inference on Intel® Neural Compute Stick 2, install the USB rules as follows:
-```sh
-sudo cp /opt/intel/openvino_2021/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
-```
-```sh
-sudo udevadm control --reload-rules
-```
-```sh
-sudo udevadm trigger
-```
-```sh
-sudo ldconfig
-```
+ ```sh
+ sudo cp /opt/intel/openvino_2021/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
+ ```
+ ```sh
+ sudo udevadm control --reload-rules
+ ```
+ ```sh
+ sudo udevadm trigger
+ ```
+ ```sh
+ sudo ldconfig
+ ```
> **NOTE**: You may need to reboot your machine for this to take effect.
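+
+As an optional check (a hedged sketch; `03e7` is the USB vendor ID commonly reported for Intel® Movidius™ devices, so treat it as an assumption and compare it against your own `lsusb` output), confirm the device is visible:
+```sh
+# The stick should appear as a device with vendor ID 03e7 when plugged in
+lsusb | grep -i "03e7"
+```
+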
+You've completed all required configuration steps to perform inference on Intel® Neural Compute Stick 2.
+Proceed to the Get Started section to run code samples and demo applications.
+
## Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
To install and configure your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, see the [Intel® Vision Accelerator Design with Intel® Movidius™ VPUs Configuration Guide](installing-openvino-linux-ivad-vpu.md).
@@ -385,61 +356,14 @@ cd /opt/intel/openvino_2021/deployment_tools/demo
./demo_security_barrier_camera.sh -d HDDL
```
-## Run a Sample Application
-
-> **IMPORTANT**: This section requires that you have [Run the Verification Scripts to Verify Installation](#run-the-demos). This script builds the Image Classification sample application and downloads and converts the required Caffe* Squeezenet model to an IR.
-
-In this section you will run the Image Classification sample application, with the Caffe* Squeezenet1.1 model on three types of Intel® hardware: CPU, GPU and VPUs.
-
-Image Classification sample application binary file was automatically built and the FP16 model IR files are created when you [Ran the Image Classification Verification Script](#run-the-image-classification-verification-script).
-
-The Image Classification sample application binary file located in the `/home//inference_engine_samples_build/intel64/Release` directory.
-The Caffe* Squeezenet model IR files (`.bin` and `.xml`) are located in the `/home//openvino_models/ir/public/squeezenet1.1/FP16/` directory.
-
-> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ to the non-default install directory, replace `/opt/intel` with the directory in which you installed the software.
-
-To run the sample application:
-
-1. Set up environment variables:
-```sh
-source /opt/intel/openvino_2021/bin/setupvars.sh
-```
-2. Go to the samples build directory:
-```sh
-cd ~/inference_engine_samples_build/intel64/Release
-```
-3. Run the sample executable with specifying the `car.png` file from the `demo` directory as an input image, the IR of your FP16 model and a plugin for a hardware device to perform inference on.
-> **NOTE**: Running the sample application on hardware other than CPU requires performing [additional hardware configuration steps](#optional-steps).
-
- - **For CPU**:
- ```sh
- ./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d CPU
- ```
-
- - **For GPU**:
- ```sh
- ./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d GPU
- ```
-
- - **For MYRIAD**:
- > **NOTE**: Running inference on Intel® Neural Compute Stick 2 with the MYRIAD plugin requires performing [additional hardware configuration steps](#additional-NCS-steps).
- ```sh
- ./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d MYRIAD
- ```
-
- - **For HDDL**:
- > **NOTE**: Running inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with the HDDL plugin requires performing [additional hardware configuration steps](installing-openvino-linux-ivad-vpu.md)
- ```sh
- ./classification_sample_async -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d HDDL
- ```
-
-For information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
-
-Congratulations, you have finished the installation of the Intel® Distribution of OpenVINO™ toolkit for Linux*. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.
+You've completed all required configuration steps to perform inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
+Proceed to the Get Started section to run code samples and demo applications.
-## Hello World Face Detection Tutorial
+## Get Started
-See the [OpenVINO™ Hello World Face Detection Exercise](https://github.com/intel-iot-devkit/inference-tutorials-generic).
+Now you are ready to get started. To continue, see the following pages:
+* [OpenVINO™ Toolkit Overview](../index.md)
+* [Get Started Guide for Linux](../get_started/get_started_linux.md) to learn the basic OpenVINO™ toolkit workflow and run code samples and demo applications with pre-trained models on different inference devices.
## Uninstall the Intel® Distribution of OpenVINO™ Toolkit
Choose one of the options provided below to uninstall the Intel® Distribution of OpenVINO™ Toolkit from your system.
@@ -492,7 +416,6 @@ trusted-host = mirrors.aliyun.com
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
To learn more about converting models, go to:
diff --git a/docs/install_guides/installing-openvino-macos.md b/docs/install_guides/installing-openvino-macos.md
index a3e081e7c8e212..9489d3a3732a69 100644
--- a/docs/install_guides/installing-openvino-macos.md
+++ b/docs/install_guides/installing-openvino-macos.md
@@ -31,6 +31,19 @@ The following components are installed by default:
| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
+**Could Be Optionally Installed**
+
+[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a web-based graphical environment built on OpenVINO™ that enables you to optimize, fine-tune, analyze, visualize, and compare the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components:
+* [Model Downloader](@ref omz_tools_downloader_README)
+* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+* [Post-training Optimization Tool](@ref pot_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
+
+Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
+
## Development and Target Platform
The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.
@@ -64,9 +77,9 @@ The following steps will be covered:
1. Install the Intel® Distribution of OpenVINO™ Toolkit .
2. Set the OpenVINO environment variables and (optional) Update to .bash_profile
.
-4. Configure the Model Optimizer.
-5. Run verification scripts to verify installation and compile samples.
-6. Uninstall the Intel® Distribution of OpenVINO™ Toolkit.
+3. Configure the Model Optimizer.
+4. Get Started with Code Samples and Demo Applications.
+5. Uninstall the Intel® Distribution of OpenVINO™ Toolkit.
## Install the Intel® Distribution of OpenVINO™ toolkit Core Components
@@ -93,7 +106,7 @@ The disk image is mounted to `/Volumes/m_openvino_toolkit_p_` and autom
![](../img/openvino-install-macos-01.png)
- The default installation directory path depends on the privileges you choose for the installation.
+ The default installation directory path depends on the privileges you choose for the installation.
5. Click **Next** and follow the instructions on your screen.
@@ -104,18 +117,16 @@ The disk image is mounted to `/Volumes/m_openvino_toolkit_p_` and autom
8. The **Installation summary** screen shows you the default component set to install:
![](../img/openvino-install-macos-03.png)
+ By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as ``:
- By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as ``:
-
-* For root or administrator: `/opt/intel/openvino_/`
-* For regular users: `/home//intel/openvino_/`
-
-For simplicity, a symbolic link to the latest installation is also created: `/home//intel/openvino_2021/`.
+ * For root or administrator: `/opt/intel/openvino_/`
+ * For regular users: `/home//intel/openvino_/`
+ For simplicity, a symbolic link to the latest installation is also created: `/home//intel/openvino_2021/`.
9. If needed, click **Customize** to change the installation directory or the components you want to install:
- ![](../img/openvino-install-macos-04.png)
-
- Click **Next** to save the installation options and show the Installation summary screen.
+ ![](../img/openvino-install-macos-04.png)
+ > **NOTE**: If there is an OpenVINO™ toolkit version previously installed on your system, the installer will use the same destination directory for next installations. If you want to install a newer version to a different directory, you need to uninstall the previously installed versions.
+ Click **Next** to save the installation options and show the Installation summary screen.
10. On the **Installation summary** screen, press **Install** to begin the installation.
@@ -228,55 +239,11 @@ Configure individual frameworks separately **ONLY** if you did not select **Opti
The Model Optimizer is configured for one or more frameworks.
-You are ready to verify the installation by running the verification scripts.
-
-## Run the Verification Scripts to Verify Installation and Compile Samples
-
-> **NOTES**:
-> - The steps shown here assume you used the default installation directory to install the OpenVINO toolkit. If you installed the software to a directory other than `/opt/intel/`, update the directory path with the location where you installed the toolkit.
-> - If you installed the product as a root user, you must switch to the root mode before you continue: `sudo -i`.
-
-To verify the installation and compile two Inference Engine samples, run the verification applications provided with the product on the CPU:
-
-### Run the Image Classification Verification Script
-
-1. Go to the **Inference Engine demo** directory:
- ```sh
- cd /opt/intel/openvino_2021/deployment_tools/demo
- ```
-
-2. Run the **Image Classification verification script**:
- ```sh
- ./demo_squeezenet_download_convert_run.sh
- ```
-
-The Image Classification verification script downloads a public SqueezeNet Caffe* model and runs the Model Optimizer to convert the model to `.bin` and `.xml` Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
-
-This verification script creates the directory `/home//inference_engine_samples/`, builds the [Image Classification Sample](../../inference-engine/samples/classification_sample_async/README.md) application and runs with the model IR and `car.png` image located in the `demo` directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
+You have completed all the required installation, configuration, and build steps in this guide and can now use your CPU to work with your trained models.
-![](../img/image_classification_script_output_lnx.png)
+To enable inference on Intel® Neural Compute Stick 2, see the Steps for Intel® Neural Compute Stick 2.
-For a brief description of the Intermediate Representation `.bin` and `.xml` files, see [Configuring the Model Optimizer](#configure-the-model-optimizer).
-
-This script is complete. Continue to the next section to run the Inference Pipeline verification script.
-
-### Run the Inference Pipeline Verification Script
-
-While still in `/opt/intel/openvino_2021/deployment_tools/demo/`, run the Inference Pipeline verification script:
- ```sh
- ./demo_security_barrier_camera.sh
- ```
-
-This verification script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
-
-First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
-
-When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
-![](../img/inference_pipeline_script_mac.png)
-
-Close the image viewer screen to end the demo.
-
-**Congratulations**, you have completed the Intel® Distribution of OpenVINO™ 2020.1 installation for macOS. To learn more about what you can do with the Intel® Distribution of OpenVINO™ toolkit, see the additional resources provided below.
+Or proceed to the Get Started section to run code samples and demo applications.
## Steps for Intel® Neural Compute Stick 2
@@ -291,9 +258,14 @@ For example, to install the `libusb` library using Homebrew\*, use the following
brew install libusb
```
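+
+As an optional check (a sketch; the "Movidius" product string is an assumption, so compare it with what your device actually reports), you can confirm macOS sees the stick over USB:
+```sh
+# Searches the USB device tree for the Intel® Neural Compute Stick 2
+system_profiler SPUSBDataType | grep -i movidius
+```
+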
-## Hello World Tutorials
+You've completed all required configuration steps to perform inference on your Intel® Neural Compute Stick 2.
+Proceed to the Get Started section to run code samples and demo applications.
+
+## Get Started
-Visit the Intel Distribution of OpenVINO Toolkit [Inference Tutorials for Face Detection and Car Detection Exercises](https://github.com/intel-iot-devkit/inference-tutorials-generic/tree/openvino_toolkit_r3_0)
+Now you are ready to get started. To continue, see the following pages:
+* [OpenVINO™ Toolkit Overview](../index.md)
+* [Get Started Guide for macOS](../get_started/get_started_macos.md) to learn the basic OpenVINO™ toolkit workflow and run code samples and demo applications with pre-trained models on different inference devices.
## Uninstall the Intel® Distribution of OpenVINO™ Toolkit
diff --git a/docs/install_guides/installing-openvino-windows.md b/docs/install_guides/installing-openvino-windows.md
index af6c16247cb234..8de98761d15781 100644
--- a/docs/install_guides/installing-openvino-windows.md
+++ b/docs/install_guides/installing-openvino-windows.md
@@ -26,9 +26,7 @@ Your installation is complete when these are all completed:
4. Configure the Model Optimizer
-5. Run two Verification Scripts to Verify Installation
-
-6. Optional:
+5. Optional:
- Install the Intel® Graphics Driver for Windows*
@@ -36,7 +34,9 @@ Your installation is complete when these are all completed:
- Update Windows* environment variables
-7. Uninstall the Intel® Distribution of OpenVINO™ Toolkit
+Also, the following steps will be covered in the guide:
+- Get Started with Code Samples and Demo Applications
+- Uninstall the Intel® Distribution of OpenVINO™ Toolkit
### About the Intel® Distribution of OpenVINO™ toolkit
@@ -65,6 +65,19 @@ The following components are installed by default:
| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](@ref omz_tools_accuracy_checker_README), [Post-Training Optimization Tool Guide](@ref pot_README), [Model Downloader](@ref omz_tools_downloader_README) and other |
| [Documentation for Pre-Trained Models ](@ref omz_models_intel_index) | Documentation for the pre-trained models available in the [Open Model Zoo repo](https://github.com/opencv/open_model_zoo) |
+**Could Be Optionally Installed**
+
+[Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench) is a web-based graphical environment built on OpenVINO™ that enables you to optimize, fine-tune, analyze, visualize, and compare the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components:
+* [Model Downloader](@ref omz_tools_downloader_README)
+* [Intel® Open Model Zoo](@ref omz_models_intel_index)
+* [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+* [Post-training Optimization Tool](@ref pot_README)
+* [Accuracy Checker](@ref omz_tools_accuracy_checker_README)
+* [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md)
+
+Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
+
### System Requirements
**Hardware**
@@ -99,29 +112,20 @@ The following components are installed by default:
### Install the Intel® Distribution of OpenVINO™ toolkit Core Components
-1. If you have not downloaded the Intel® Distribution of OpenVINO™ toolkit, [download the latest version](http://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-windows). By default, the file is saved to the `Downloads` directory as `w_openvino_toolkit_p_.exe`.
-
-2. Go to the `Downloads` folder and double-click `w_openvino_toolkit_p_.exe`. A window opens to let you choose your installation directory and components. The default installation directory is `C:\Program Files (x86)\Intel\openvino_`, for simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\Intel\openvino_2021`. If you choose a different installation directory, the installer will create the directory for you:
-
+1. If you have not downloaded the Intel® Distribution of OpenVINO™ toolkit, [download the latest version](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html). By default, the file is saved to the `Downloads` directory as `w_openvino_toolkit_p_.exe`.
+2. Go to the `Downloads` folder and double-click `w_openvino_toolkit_p_.exe`. A window opens to let you choose your installation directory and components.
![](../img/openvino-install-windows-01.png)
-
+ The default installation directory is `C:\Program Files (x86)\Intel\openvino_`. For simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\Intel\openvino_2021`. If you choose a different installation directory, the installer will create the directory for you.
+ > **NOTE**: If there is an OpenVINO™ toolkit version previously installed on your system, the installer will use the same destination directory for next installations. If you want to install a newer version to a different directory, you need to uninstall the previously installed versions.
3. Click **Next**.
-
4. You are asked if you want to provide consent to gather information. Choose the option of your choice. Click **Next**.
-
5. If you are missing external dependencies, you will see a warning screen. Write down the dependencies you are missing. **You need to take no other action at this time**. After installing the Intel® Distribution of OpenVINO™ toolkit core components, install the missing dependencies.
The screen example below indicates you are missing two dependencies:
-
![](../img/openvino-install-windows-02.png)
-
6. Click **Next**.
-
7. When the first part of installation is complete, the final screen informs you that the core components have been installed and additional steps still required:
-
- ![](../img/openvino-install-windows-03.png)
-
+ ![](../img/openvino-install-windows-03.png)
8. Click **Finish** to close the installation wizard. A new browser window opens to the next section of the installation guide to set the environment variables. You are in the same document. The new window opens in case you ran the installation without first opening this installation guide.
-
9. If the installation indicated you must install dependencies, install them first. If there are no missing dependencies, you can go ahead and set the environment variables.
### Set the Environment Variables
@@ -139,14 +143,14 @@ setupvars.bat
(Optional): OpenVINO toolkit environment variables are removed when you close the Command Prompt window. As an option, you can permanently set the environment variables manually.
+> **NOTE**: If you see an error indicating Python is not installed when you know you installed it, your computer might not be able to find the program. For the instructions to add Python to your system environment variables, see Update Your Windows Environment Variables.
+
The environment variables are set. Continue to the next section to configure the Model Optimizer.
## Configure the Model Optimizer
> **IMPORTANT**: These steps are required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.
-> **NOTE**: If you see an error indicating Python is not installed when you know you installed it, your computer might not be able to find the program. For the instructions to add Python to your system environment variables, see Update Your Windows Environment Variables.
-
The Model Optimizer is a key component of the Intel® Distribution of OpenVINO™ toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
@@ -234,89 +238,25 @@ The Model Optimizer is configured for one or more frameworks. Success is indicat
![](../img/Configure-MO.PNG)
-You are ready to use two short demos to see the results of running the Intel Distribution of OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.
-
-If you want to use a GPU or VPU, or update your Windows* environment variables, read through the Optional Steps section.
-
-
-## Use Verification Scripts to Verify Your Installation
-
-> **IMPORTANT**: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
-
-> **NOTE**:
-> The paths in this section assume you used the default installation directory. If you used a directory other than `C:\Program Files (x86)\Intel`, update the directory with the location where you installed the software.
-To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
-
-1. Open a command prompt window.
-
-2. Go to the Inference Engine demo directory:
- ```sh
- cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
- ```
-
-3. Run the verification scripts by following the instructions in the next section.
-
-
-### Run the Image Classification Verification Script
-
-To run the script, start the `demo_squeezenet_download_convert_run.bat` file:
-```sh
-demo_squeezenet_download_convert_run.bat
-```
-
-This script downloads a SqueezeNet model, uses the Model Optimizer to convert the model to the `.bin` and `.xml` Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
-This verification script builds the [Image Classification Sample Async](../../inference-engine/samples/classification_sample_async/README.md) application and run it with the `car.png` image in the demo directory. For a brief description of the Intermediate Representation, see Configuring the Model Optimizer.
-
-When the verification script completes, you will have the label and confidence for the top-10 categories:
-![](../img/image_classification_script_output_win.png)
-
-This demo is complete. Leave the console open and continue to the next section to run the Inference Pipeline demo.
-
-
-### Run the Inference Pipeline Verification Script
-
-To run the script, start the `demo_security_barrier_camera.bat` file while still in the console:
-```sh
-demo_security_barrier_camera.bat
-```
-
-This script downloads three pre-trained model IRs, builds the [Security Barrier Camera Demo](@ref omz_demos_security_barrier_camera_demo_README) application, and runs it with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
-
-First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
-
-When the demo completes, you have two windows open:
-
- * A console window that displays information about the tasks performed by the demo
- * An image viewer window that displays a resulting frame with detections rendered as bounding boxes, similar to the following:
-
- ![](../img/inference_pipeline_script_win.png)
-
-Close the image viewer window to end the demo.
-
-To learn more about the verification scripts, see `README.txt` in `C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo`.
-
-For detailed description of the OpenVINO™ pre-trained object detection and object recognition models, see the [Overview of OpenVINO™ toolkit Pre-Trained Models](@ref omz_models_intel_index) page.
-
-In this section, you saw a preview of the Intel® Distribution of OpenVINO™ toolkit capabilities.
-
-Congratulations. You have completed all the required installation, configuration, and build steps to work with your trained models using CPU.
+You have completed all the required installation, configuration, and build steps in this guide and can now use your CPU to work with your trained models.
-If you want to use Intel® Processor graphics (GPU), Intel® Neural Compute Stick 2 or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, or add CMake* and Python* to your Windows* environment variables, read through the next section for additional steps.
+If you want to use a GPU or VPU, or update your Windows* environment variables, read through the Optional Steps section:
-If you want to continue and run the Image Classification Sample Application on one of the supported hardware device, see the [Run the Image Classification Sample Application](#run-the-image-classification-sample-application) section.
+- Steps for Intel® Processor Graphics (GPU)
+- Steps for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
+- Add CMake* or Python* to your Windows* environment variables
+Or proceed to the Get Started section to run code samples and demo applications.
## Optional Steps
-Use the optional steps below if you want to:
-* Infer models on Intel® Processor Graphics
-* Infer models on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
-* Add CMake* or Python* to your Windows* environment variables.
-
### Optional: Additional Installation Steps for Intel® Processor Graphics (GPU)
> **NOTE**: These steps are required only if you want to use a GPU.
-If your applications offload computation to Intel® Integrated Graphics, you must have the Intel Graphics Driver for Windows version 15.65 or higher. To see if you have this driver installed:
+If your applications offload computation to **Intel® Integrated Graphics**, you must have the latest version of the Intel Graphics Driver for Windows installed for your hardware.
+If the driver is missing or outdated, [download and install the latest version](http://downloadcenter.intel.com/product/80939/Graphics-Drivers).
+
+To check if you have this driver installed:
1. Type **device manager** in your **Search Windows** box. The **Device Manager** opens.
@@ -326,14 +266,13 @@ If your applications offload computation to Intel® Integrated Graphics, you mus
3. Right-click the adapter name and select **Properties**.
-4. Click the **Driver** tab to see the driver version. Make sure the version number is 15.65 or higher.
+4. Click the **Driver** tab to see the driver version.
![](../img/DeviceDriverVersion.PNG)
-5. If your device driver version is lower than 15.65, [download and install a higher version](http://downloadcenter.intel.com/product/80939/Graphics-Drivers).
-
-You are done updating your device driver and are ready to use your GPU.
+> **NOTE**: To use the **Intel® Iris® Xe MAX Graphics**, see the [Drivers & Software](https://downloadcenter.intel.com/download/29993/Intel-Iris-Xe-MAX-Dedicated-Graphics-Drivers?product=80939) page for driver downloads and installation instructions.
+You are done updating your device driver and are ready to use your GPU. Proceed to the Get Started section to run code samples and demo applications.
### Optional: Additional Installation Steps for the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
@@ -354,22 +293,7 @@ See also:
* After you've configurated your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, see [Intel® Movidius™ VPUs Programming Guide for Use with Intel® Distribution of OpenVINO™ toolkit](movidius-programming-guide.md) to learn how to distribute a model across all 8 VPUs to maximize performance.
-After configuration is done, you are ready to run the verification scripts with the HDDL Plugin for your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
-
-1. Open a command prompt window.
-
-2. Go to the Inference Engine demo directory:
- ```sh
- cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\
- ```
-3. Run the Image Classification verification script. If you have access to the Internet through the proxy server only, please make sure that it is configured in your environment.
- ```sh
- demo_squeezenet_download_convert_run.bat -d HDDL
- ```
-4. Run the Inference Pipeline verification script:
- ```sh
- demo_security_barrier_camera.bat -d HDDL
- ```
+After configuration is done, proceed to the Get Started section to run code samples and demo applications.
### Optional: Update Your Windows Environment Variables
@@ -396,55 +320,11 @@ Use these steps to update your Windows `PATH` if a command you execute returns a
Your `PATH` environment variable is updated.
-## Run the Image Classification Sample Application
-
-> **IMPORTANT**: This section requires that you have [Run the Verification Scripts to Verify Installation](#run-the-demos). This script builds the Image Classification sample application and downloads and converts the required Caffe* Squeezenet model to an IR.
-
-In this section you will run the Image Classification sample application, with the Caffe* Squeezenet1.1 model on three types of Intel® hardware: CPU, GPU and VPUs.
-
-Image Classification sample application binary file was automatically built and the FP16 model IR files are created when you [Ran the Image Classification Verification Script](#run-the-image-classification-verification-script).
-
-The Image Classification sample application binary file located in the `C:\Users\\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release\` directory.
-The Caffe* Squeezenet model IR files (`.bin` and `.xml`) are located in the in the `C:\Users\\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\` directory.
-
-> **NOTE**: If you installed the Intel® Distribution of OpenVINO™ toolkit to the non-default installation directory, replace `C:\Program Files (x86)\Intel` with the directory where you installed the software.
-
-To run the sample application:
-
-1. Set up environment variables:
-```sh
-cd C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat
-```
-2. Go to the samples build directory:
-```sh
-cd C:\Users\\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release
-```
-3. Run the sample executable with specifying the `car.png` file from the `demo` directory as an input image, the IR of your FP16 model and a plugin for a hardware device to perform inference on.
-> **NOTE**: Running the sample application on hardware other than CPU requires performing [additional hardware configuration steps](#optional-steps).
-
- - For CPU:
- ```sh
- classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d CPU
- ```
-
- - For GPU:
- ```sh
- classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d GPU
- ```
-
- - For VPU (Intel® Neural Compute Stick 2):
- ```sh
- classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d MYRIAD
- ```
-
- - For VPU (Intel® Vision Accelerator Design with Intel® Movidius™ VPUs):
- ```sh
- classification_sample_async.exe -i "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\car.png" -m "C:\Users\\Documents\Intel\OpenVINO\openvino_models\ir\public\squeezenet1.1\FP16\squeezenet1.1.xml" -d HDDL
- ```
-
-For information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
+## Get Started
-Congratulations, you have finished the installation of the Intel® Distribution of OpenVINO™ toolkit for Windows*. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.
+Now you are ready to get started. To continue, see the following pages:
+* [OpenVINO™ Toolkit Overview](../index.md)
+* [Get Started Guide for Windows](../get_started/get_started_windows.md) to learn the basic OpenVINO™ toolkit workflow and run code samples and demo applications with pre-trained models on different inference devices.
## Uninstall the Intel® Distribution of OpenVINO™ Toolkit
Follow the steps below to uninstall the Intel® Distribution of OpenVINO™ Toolkit from your system:
@@ -469,14 +349,12 @@ To learn more about converting deep learning models, go to:
## Additional Resources
- [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
-- [Intel Distribution of OpenVINO Toolkit documentation](https://software.intel.com/en-us/openvino-toolkit/documentation/featured)
- [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-- [Introduction to Inference Engine](inference_engine_intro.md)
+- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- Intel Distribution of OpenVINO Toolkit Hello World Activities, see the [Inference Tutorials for Face Detection and Car Detection Exercises](https://github.com/intel-iot-devkit/inference-tutorials-generic/tree/openvino_toolkit_r3_0)
- [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started)
diff --git a/docs/install_guides/installing-openvino-yum.md b/docs/install_guides/installing-openvino-yum.md
index 2dab2bcdf938ab..5fc6143ae5133d 100644
--- a/docs/install_guides/installing-openvino-yum.md
+++ b/docs/install_guides/installing-openvino-yum.md
@@ -106,6 +106,5 @@ sudo yum autoremove intel-openvino-runtime-centos-.
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
- For more information on Sample Applications, see the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md).
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic).
- For IoT Libraries & Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
diff --git a/docs/ops/sequence/CTCLoss_4.md b/docs/ops/sequence/CTCLoss_4.md
index 67def3a2250366..c38e0a293d23dc 100644
--- a/docs/ops/sequence/CTCLoss_4.md
+++ b/docs/ops/sequence/CTCLoss_4.md
@@ -41,7 +41,7 @@ p(S) = \prod_{t=1}^{L_i} p_{i,t,ct}
3. Finally, compute negative log of summed up probabilities of all found alignments:
\f[
-CTCLoss = \minus \ln \sum_{S} p(S)
+CTCLoss = - \ln \sum_{S} p(S)
\f]
**Note 1**: This calculation scheme does not provide steps for optimal implementation and primarily serves for better explanation.
diff --git a/docs/ovsa/ovsa_diagram.png b/docs/ovsa/ovsa_diagram.png
new file mode 100644
index 00000000000000..774de121e18d0b
--- /dev/null
+++ b/docs/ovsa/ovsa_diagram.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e7ed21b111f0438b9fad367c4db293c35882de05bc8bb3252a1ef5bc289ae2a
+size 33369
diff --git a/docs/ovsa/ovsa_example.png b/docs/ovsa/ovsa_example.png
new file mode 100644
index 00000000000000..fd44a7a4ff8b7b
--- /dev/null
+++ b/docs/ovsa/ovsa_example.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:356688e3fd7dd4ad6c591cda1d35d9ebd5c2b6f9787e6caa4c116717101669e5
+size 29847
diff --git a/docs/ovsa/ovsa_get_started.md b/docs/ovsa/ovsa_get_started.md
new file mode 100644
index 00000000000000..f45d4bf299cff8
--- /dev/null
+++ b/docs/ovsa/ovsa_get_started.md
@@ -0,0 +1,798 @@
+# OpenVINO™ Security Add-on {#ovsa_get_started}
+
+This guide provides instructions for people who use the OpenVINO™ Security Add-on to create, distribute, and use models that are created with the OpenVINO™ toolkit:
+
+* **Model Developer**: The Model Developer interacts with the Independent Software Vendor to control User access to models. This document shows you how to set up hardware and virtual machines to use the OpenVINO™ Security Add-on to define access control for your OpenVINO™ models and then provide the access controlled models to users.
+* **Independent Software Vendor**: Use this guide for instructions on using the OpenVINO™ Security Add-on to validate licenses for access controlled models that are provided to your customers (users).
+* **User**: This document includes instructions for end users who need to access and run access controlled models through the OpenVINO™ Security Add-on.
+
+In this release, one person performs the roles of both the Model Developer and the Independent Software Vendor. Therefore, this document provides instructions to configure one system for these two roles and one system for the User role. This document also provides a way for the same person to play the roles of Model Developer, Independent Software Vendor, and User so that you can see how the OpenVINO™ Security Add-on functions from the User perspective.
+
+
+## Overview
+
+The OpenVINO™ Security Add-on works with the [OpenVINO™ Model Server](@ref openvino_docs_ovms) on Intel® architecture. Together, the OpenVINO™ Security Add-on and the OpenVINO™ Model Server provide a way for Model Developers and Independent Software Vendors to use secure packaging and secure model execution to enable access control to the OpenVINO™ models, and for model Users to run inference within assigned limits.
+
+The OpenVINO™ Security Add-on consists of three components that run in Kernel-based Virtual Machines (KVMs). These components provide a way to run security-sensitive operations in an isolated environment. A brief description of each component follows.
+
+
+ OpenVINO™ Security Add-on Tool: As a Model Developer or Independent Software Vendor, you use the OpenVINO™ Security Add-on Tool (`ovsatool`) to generate an access controlled model and master license.
+
+- The Model Developer generates an access controlled model from the OpenVINO™ toolkit output. Access control is applied to the model's Intermediate Representation (IR) files to create an access controlled output file archive that is distributed to Model Users. The Developer can also put the archive file in long-term storage or back it up without additional security.
+
+- The Model Developer uses the OpenVINO™ Security Add-on Tool (`ovsatool`) to generate and manage cryptographic keys and related collateral for the access controlled models. Cryptographic material is only available in a virtual machine (VM) environment. The OpenVINO™ Security Add-on key management system lets the Model Developer use external Certificate Authorities to generate certificates to add to a key-store.
+
+- The Model Developer generates user-specific licenses in a JSON format file for the access controlled model. The Model Developer can define global or user-specific licenses and attach licensing policies to the licenses. For example, the Model Developer can add a time limit for a model or limit the number of times a user can run a model.
+
+
+
+
+ OpenVINO™ Security Add-on License Service: Use the OpenVINO™ Security Add-on License Service to verify user parameters.
+
+- The Independent Software Vendor hosts the OpenVINO™ Security Add-on License Service, which responds to license validation requests when a user attempts to load an access controlled model in a model server. The licenses are registered with the OpenVINO™ Security Add-on License Service.
+
+- When a user loads the model, the OpenVINO™ Security Add-on Runtime contacts the License Service to make sure the license is valid and within the parameters that the Model Developer defined with the OpenVINO™ Security Add-on Tool (`ovsatool`). The user must be able to reach the Independent Software Vendor's License Service over the Internet.
+
+
+
+
+ OpenVINO™ Security Add-on Runtime: Users install and use the OpenVINO™ Security Add-on Runtime on a virtual machine.
+
+Users host the OpenVINO™ Security Add-on Runtime component in a virtual machine.
+
+Externally from the OpenVINO™ Security Add-on, the User adds the access controlled model to the OpenVINO™ Model Server config file. The OpenVINO™ Model Server attempts to load the model in memory. At this time, the OpenVINO™ Security Add-on Runtime component validates the user's license for the access controlled model against information stored in the License Service provided by the Independent Software Vendor.
+
+After the license is successfully validated, the OpenVINO™ Model Server loads the model and services the inference requests.
+
+
+
+
+**Where the OpenVINO™ Security Add-on Fits into Model Development and Deployment**
+
+![Security Add-on Diagram](ovsa_diagram.png)
+
+## About the Installation
+The Model Developer, Independent Software Vendor, and User each must prepare one physical hardware machine and one Kernel-based Virtual Machine (KVM). In addition, each person must prepare a Guest Virtual Machine (Guest VM) for each role that person plays.
+
+For example:
+* If one person acts as both the Model Developer and as the Independent Software Vendor, that person must prepare two Guest VMs. Both Guest VMs can be on the same physical hardware (Host Machine) and under the same KVM on that Host Machine.
+* If one person acts as all three roles, that person must prepare three Guest VMs. All three Guest VMs can be on the same Host Machine and under the same KVM on that Host Machine.
+
+**Purpose of Each Machine**
+
+| Machine | Purpose |
+| ----------- | ----------- |
+| Host Machine | Physical hardware on which the KVM and Guest VMs are set up. |
+| Kernel-based Virtual Machine (KVM) | The OpenVINO™ Security Add-on runs in this virtual machine because it provides an isolated environment for security sensitive operations. |
+| Guest VM | The Model Developer uses the Guest VM to enable access control to the completed model. The Independent Software Vendor uses the Guest VM to host the License Service. The User uses the Guest VM to contact the License Service and run the access controlled model. |
+
+
+## Prerequisites
+
+**Hardware**
+* Intel® Core™ or Xeon® processor
+
+**Operating system, firmware, and software**
+* Ubuntu* Linux* 18.04 on the Host Machine.
+* TPM version 2.0-conformant Discrete Trusted Platform Module (dTPM) or Firmware Trusted Platform Module (fTPM)
+* Secure boot is enabled.
+
+**Other**
+* The Independent Software Vendor must have access to a Certificate Authority (CA) that implements the Online Certificate Status Protocol (OCSP), supporting Elliptic Curve Cryptography (ECC) certificates for deployment.
+* The example in this document uses self-signed certificates.
+
+## How to Prepare a Host Machine
+
+This section is for the combined role of Model Developer and Independent Software Vendor, and the separate User role.
+
+### Step 1: Set up Packages on the Host Machine
+
+Begin this step on the Intel® Core™ or Xeon® processor machine that meets the prerequisites.
+
+> **NOTE**: As an alternative to manually following steps 1 to 11, you can run the script `install_host_deps.sh` in the `Scripts/reference` directory under the OpenVINO™ Security Add-on repository. The script stops with an error message if it identifies any issues. If the script halts due to an error, correct the issue that caused the error and restart the script. The script runs for several minutes and provides progress information.
+
+1. Test for Trusted Platform Module (TPM) support:
+ ```sh
+ dmesg | grep -i TPM
+ ```
+ The output shows TPM availability in the kernel boot logs. The presence of the following devices indicates that TPM support is available:
+ * `/dev/tpm0`
+ * `/dev/tpmrm0`
+
+ If you do not see this information, your system does not meet the prerequisites to use the OpenVINO™ Security Add-on.
+2. Make sure hardware virtualization support is enabled in the BIOS:
+ ```sh
+ kvm-ok
+ ```
+ The output should show:
+ `INFO: /dev/kvm exists`
+ `KVM acceleration can be used`
+
+ If your output is different, modify your BIOS settings to enable hardware virtualization.
+
+ If the `kvm-ok` command is not present, install it:
+ ```sh
+ sudo apt install -y cpu-checker
+ ```
+3. Install the Kernel-based Virtual Machine (KVM) and QEMU packages.
+ ```sh
+ sudo apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager
+ ```
+4. Check the QEMU version:
+ ```sh
+ qemu-system-x86_64 --version
+ ```
+ If the response indicates a QEMU version lower than 2.12.0, download, compile, and install the latest QEMU version from [https://www.qemu.org/download](https://www.qemu.org/download).
+5. Build and install the [`libtpm` package](https://github.com/stefanberger/libtpms/).
+6. Build and install the [`swtpm` package](https://github.com/stefanberger/swtpm/).
+7. Add the `swtpm` installation location to the `$PATH` environment variable (a minimal sketch follows this list).
+8. Install the software tool [`tpm2-tss`]( https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md
+9. Install the software tool [`tpm2-abmrd`](https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md
+10. Install the [`tpm2-tools`](https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-tools/blob/master/INSTALL.md
+11. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/).
+ > **NOTE**: Regardless of whether you used the `install_host_deps.sh` script, complete step 12 to finish setting up the packages on the Host Machine.
+12. If you are running behind a proxy, [set up a proxy for Docker](https://docs.docker.com/config/daemon/systemd/).
+
+The following are installed and ready to use:
+* Kernel-based Virtual Machine (KVM)
+* QEMU
+* SW-TPM
+* HW-TPM support
+* Docker
+
+You're ready to configure the Host Machine for networking.
+
+### Step 2: Set up Networking on the Host Machine
+
+This step is for the combined Model Developer and Independent Software Vendor roles. If the Model User VM is running on a different physical host, repeat the following steps on that host as well.
+
+In this step you prepare two network bridges:
+* A global IP address that a KVM can access across the Internet. This is the address that the OpenVINO™ Security Add-on Runtime software on a user's machine uses to verify they have a valid license.
+* A host-only local address to provide communication between the Guest VM and the QEMU host operating system.
+
+The example in this step uses the following names. Your configuration might use different names:
+* `50-cloud-init.yaml` as an example configuration file name.
+* `eno1` as an example network interface name.
+* `br0` as an example bridge name.
+* `virbr0` as an example bridge name.
+
+1. Open the network configuration file for editing. This file is in `/etc/netplan` with a name like `50-cloud-init.yaml`
+2. Look for these lines in the file:
+ ```sh
+ network:
+ ethernets:
+ eno1:
+ dhcp4: true
+ dhcp-identifier: mac
+ version: 2
+ ```
+3. Change the existing lines and add the `br0` network bridge. These changes enable external network access:
+ ```sh
+ network:
+ ethernets:
+ eno1:
+ dhcp4: false
+ bridges:
+ br0:
+ interfaces: [eno1]
+ dhcp4: yes
+ dhcp-identifier: mac
+ version: 2
+ ```
+4. Save and close the network configuration file.
+5. Run two commands to activate the updated network configuration file. If you use ssh, you might lose network connectivity when issuing these commands. If so, reconnect to the network.
+```sh
+sudo netplan generate
+```
+```sh
+sudo netplan apply
+```
+ A bridge is created and an IP address is assigned to the new bridge.
+6. Verify the new bridge:
+ ```sh
+ ip a | grep br0
+ ```
+ The output looks similar to this and shows valid IP addresses:
+ ```sh
+ 4: br0: mtu 1500 qdisc noqueue state UP group default qlen 1000
+     inet 123.123.123.123/ brd 321.321.321.321 scope global dynamic br0
+ ```
+7. Create a script named `br0-qemu-ifup` to bring up the `br0` interface. Add the following script contents:
+ ```sh
+ #!/bin/sh
+ nic=$1
+ if [ -f /etc/default/qemu-kvm ]; then
+ . /etc/default/qemu-kvm
+ fi
+ switch=br0
+ ifconfig $nic 0.0.0.0 up
+ brctl addif ${switch} $nic
+ ```
+8. Create a script named `br0-qemu-ifdown` to bring down the `br0` interface. Add the following script contents:
+ ```sh
+ #!/bin/sh
+ nic=$1
+ if [ -f /etc/default/qemu-kvm ]; then
+ . /etc/default/qemu-kvm
+ fi
+ switch=br0
+ brctl delif $switch $nic
+ ifconfig $nic 0.0.0.0 down
+ ```
+9. Create a script named `virbr0-qemu-ifup` to bring up the `virbr0` interface. Add the following script contents:
+ ```sh
+ #!/bin/sh
+ nic=$1
+ if [ -f /etc/default/qemu-kvm ]; then
+ . /etc/default/qemu-kvm
+ fi
+ switch=virbr0
+ ifconfig $nic 0.0.0.0 up
+ brctl addif ${switch} $nic
+ ```
+10. Create a script named `virbr0-qemu-ifdown` to bring down the `virbr0` interface. Add the following script contents:
+ ```sh
+ #!/bin/sh
+ nic=$1
+ if [ -f /etc/default/qemu-kvm ]; then
+ . /etc/default/qemu-kvm
+ fi
+ switch=virbr0
+ brctl delif $switch $nic
+ ifconfig $nic 0.0.0.0 down
+ ```
+
+See the QEMU documentation for more information about the QEMU network configuration.
+
+Networking is set up on the Host Machine. Continue to Step 3 to prepare a Guest VM for the combined role of Model Developer and Independent Software Vendor.
+
+
+### Step 3: Set Up one Guest VM for the combined roles of Model Developer and Independent Software Vendor
+
+For each separate role you play, you must prepare a virtual machine, called a Guest VM. Because the Model Developer and Independent Software Vendor roles are combined in this release, these instructions guide you through setting up one Guest VM, named `ovsa_isv`.
+
+Begin these steps on the Host Machine.
+
+As an option, you can use `virsh` and the virtual machine manager to create and bring up a Guest VM. See the `libvirtd` documentation for instructions if you'd like to do this.
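+
+As a rough illustration only, and not part of the steps below, such a Guest VM could be created with `virt-install`; the VM name, disk path, and ISO path here are assumptions:
+```sh
+# Illustrative virt-install invocation -- adjust names and paths to your environment
+sudo virt-install \
+    --name ovsa_isv_dev \
+    --memory 8192 --vcpus 8 \
+    --disk path=/var/lib/libvirt/images/ovsa_isv_dev_vm_disk.qcow2,size=20,format=qcow2 \
+    --cdrom /path/to/ubuntu-18.04.5-live-server-amd64.iso \
+    --os-variant ubuntu18.04 \
+    --network bridge=virbr0 \
+    --graphics vnc
+```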
+
+1. Download the [Ubuntu 18.04 server ISO image](https://releases.ubuntu.com/18.04/ubuntu-18.04.5-live-server-amd64.iso)
+
+2. Create an empty virtual disk image to serve as the Guest VM for your role as Model Developer and Independent Software Vendor:
+ ```sh
+ sudo qemu-img create -f qcow2 /ovsa_isv_dev_vm_disk.qcow2 20G
+ ```
+3. Install Ubuntu 18.04 on the Guest VM. Name the Guest VM `ovsa_isv`:
+ ```sh
+ sudo qemu-system-x86_64 -m 8192 -enable-kvm \
+ -cpu host \
+ -drive if=virtio,file=/ovsa_isv_dev_vm_disk.qcow2,cache=none \
+ -cdrom /ubuntu-18.04.5-live-server-amd64.iso \
+ -device e1000,netdev=hostnet1,mac=52:54:00:d1:66:5f \
+ -netdev tap,id=hostnet1,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -vnc :1
+ ```
+4. Connect a VNC client with `:1`
+5. Follow the prompts on the screen to finish installing the Guest VM. Name the VM `ovsa_isv_dev`.
+6. Shut down the Guest VM.
+7. Restart the Guest VM, this time without the CD-ROM image option:
+ ```sh
+ sudo qemu-system-x86_64 -m 8192 -enable-kvm \
+ -cpu host \
+ -drive if=virtio,file=/ovsa_isv_dev_vm_disk.qcow2,cache=none \
+ -device e1000,netdev=hostnet1,mac=52:54:00:d1:66:5f \
+ -netdev tap,id=hostnet1,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -vnc :1
+ ```
+8. Choose ONE of these options to install additional required software:
+ * **Option 1**: Use a script to install additional software
+ 1. Copy the script `install_guest_deps.sh` from the `Scripts/reference` directory of the OVSA repository to the Guest VM.
+ 2. Run the script.
+ 3. Shut down the Guest VM.
+ * **Option 2** : Manually install additional software
+ 1. Install the software tool [`tpm2-tss`](https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md
+ 2. Install the software tool [`tpm2-abmrd`](https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md
+ 3. Install the [`tpm2-tools`](https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz).
+ Installation information is at https://github.com/tpm2-software/tpm2-tools/blob/master/INSTALL.md
+ 4. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/)
+ 5. Shut down the Guest VM.
+9. On the host, create a directory to support the virtual TPM device. Only `root` should have read/write permission to this directory:
+ ```sh
+ sudo mkdir -p /var/OVSA/
+ sudo mkdir /var/OVSA/vtpm
+ sudo mkdir /var/OVSA/vtpm/vtpm_isv_dev
+ ```
+ **NOTE**: For steps 10 and 11, you can copy and edit the script named `start_ovsa_isv_dev_vm.sh` in the `Scripts/reference` directory in the OpenVINO™ Security Add-on repository instead of manually running the commands. If using the script, select the script with `isv` in the file name regardless of whether you are playing the role of the Model Developer or the role of the Independent Software Vendor. Edit the script to point to the correct directory locations and increment `vnc` for each Guest VM.
+10. Start the vTPM on Host:
+ ```sh
+ swtpm socket --tpmstate dir=/var/OVSA/vtpm/vtpm_isv_dev \
+ --tpm2 \
+ --ctrl type=unixio,path=/var/OVSA/vtpm/vtpm_isv_dev/swtpm-sock \
+ --log level=20
+ ```
+
+11. Start the Guest VM:
+ ```sh
+ sudo qemu-system-x86_64 \
+ -cpu host \
+ -enable-kvm \
+ -m 8192 \
+ -smp 8,sockets=1,cores=8,threads=1 \
+ -device e1000,netdev=hostnet0,mac=52:54:00:d1:66:6f \
+ -netdev tap,id=hostnet0,script=/br0-qemu-ifup,downscript=/br0-qemu-ifdown \
+ -device e1000,netdev=hostnet1,mac=52:54:00:d1:66:5f \
+ -netdev tap,id=hostnet1,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -drive if=virtio,file=/ovsa_isv_dev_vm_disk.qcow2,cache=none \
+ -chardev socket,id=chrtpm,path=/var/OVSA/vtpm/vtpm_isv_dev/swtpm-sock \
+ -tpmdev emulator,id=tpm0,chardev=chrtpm \
+ -device tpm-tis,tpmdev=tpm0 \
+ -vnc :1
+ ```
+ Use the QEMU runtime options in the command to change the memory amount or CPU assigned to this Guest VM.
+
+12. Use a VNC client to log on to the Guest VM at `:1`
+
+### Step 4: Set Up one Guest VM for the User role
+
+1. Choose ONE of these options to create a Guest VM for the User role:
+ **Option 1: Copy and Rename the `ovsa_isv_dev_vm_disk.qcow2` disk image**
+ 1. Copy the `ovsa_isv_dev_vm_disk.qcow2` disk image to a new image named `ovsa_runtime_vm_disk.qcow2`. You created the `ovsa_isv_dev_vm_disk.qcow2` disk image in Step 3.
+ 2. Boot the new image.
+ 3. Change the hostname from `ovsa_isv_dev` to `ovsa_runtime`.
+ ```sh
+ sudo hostnamectl set-hostname ovsa_runtime
+ ```
+ 4. Replace all instances of `ovsa_isv_dev` with `ovsa_runtime` in the new image.
+ ```sh
+ sudo nano /etc/hosts
+ ```
+ 5. Change the `/etc/machine-id`:
+ ```sh
+ sudo rm /etc/machine-id
+ systemd-machine-id-setup
+ ```
+ 6. Shut down the Guest VM.
+
+ **Option 2: Manually create the Guest VM**
+ 1. Create an empty virtual disk image:
+ ```sh
+ sudo qemu-img create -f qcow2 /ovsa_ovsa_runtime_vm_disk.qcow2 20G
+ ```
+ 2. Install Ubuntu 18.04 on the Guest VM. Name the Guest VM `ovsa_runtime`:
+ ```sh
+ sudo qemu-system-x86_64 -m 8192 -enable-kvm \
+ -cpu host \
+ -drive if=virtio,file=/ovsa_ovsa_runtime_vm_disk.qcow2,cache=none \
+ -cdrom /ubuntu-18.04.5-live-server-amd64.iso \
+ -device e1000,netdev=hostnet1,mac=52:54:00:d1:66:5f \
+ -netdev tap,id=hostnet1,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -vnc :2
+ ```
+ 3. Connect a VNC client with `:2`.
+ 4. Follow the prompts on the screen to finish installing the Guest VM. Name the Guest VM `ovsa_runtime`.
+ 5. Shut down the Guest VM.
+ 6. Restart the Guest VM:
+ ```sh
+ sudo qemu-system-x86_64 -m 8192 -enable-kvm \
+ -cpu host \
+ -drive if=virtio,file=/ovsa_ovsa_runtime_vm_disk.qcow2,cache=none \
+ -device e1000,netdev=hostnet1,mac=52:54:00:d1:66:5f \
+ -netdev tap,id=hostnet1,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -vnc :2
+ ```
+ 7. Choose ONE of these options to install additional required software:
+
+ **Option 1: Use a script to install additional software**
+ 1. Copy the script `install_guest_deps.sh` from the `Scripts/reference` directory of the OVSA repository to the Guest VM
+ 2. Run the script.
+ 3. Shut down the Guest VM.
+
+ **Option 2: Manually install additional software**
+ 1. Install the software tool [`tpm2-tss`](https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz)
+ Installation information is at https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md
+ 2. Install the software tool [`tpm2-abmrd`](https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz)
+ Installation information is at https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md
+ 3. Install the [`tpm2-tools`](https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz)
+ Installation information is at https://github.com/tpm2-software/tpm2-tools/blob/master/INSTALL.md
+ 4. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/)
+ 5. Shut down the Guest VM.
+
+2. Create a directory to support the virtual TPM device. Only `root` should have read/write permission to this directory:
+ ```sh
+ sudo mkdir /var/OVSA/vtpm/vtpm_runtime
+ ```
+ **NOTE**: For steps 3 and 4, you can copy and edit the script named `start_ovsa_runtime_vm.sh` in the scripts directory in the OpenVINO™ Security Add-on repository instead of manually running the commands. Edit the script to point to the correct directory locations and increment `vnc` for each Guest VM. This means that if you are creating a third Guest VM on the same Host Machine, change `-vnc :2` to `-vnc :3`
+3. Start the vTPM:
+ ```sh
+ swtpm socket --tpmstate dir=/var/OVSA/vtpm/vtpm_runtime \
+ --tpm2 \
+ --ctrl type=unixio,path=/var/OVSA/vtpm/vtpm_runtime/swtpm-sock \
+ --log level=20
+ ```
+4. Start the Guest VM in a new terminal. To do so, either copy and edit the script named `start_ovsa_runtime_vm.sh` in the scripts directory in the OpenVINO™ Security Add-on repository or manually run the command:
+ ```sh
+ sudo qemu-system-x86_64 \
+ -cpu host \
+ -enable-kvm \
+ -m 8192 \
+ -smp 8,sockets=1,cores=8,threads=1 \
+ -device e1000,netdev=hostnet2,mac=52:54:00:d1:67:6f \
+ -netdev tap,id=hostnet2,script=/br0-qemu-ifup,downscript=/br0-qemu-ifdown \
+ -device e1000,netdev=hostnet3,mac=52:54:00:d1:67:5f \
+ -netdev tap,id=hostnet3,script=/virbr0-qemu-ifup,downscript=/virbr0-qemu-ifdown \
+ -drive if=virtio,file=/ovsa_runtime_vm_disk.qcow2,cache=none \
+ -chardev socket,id=chrtpm,path=/var/OVSA/vtpm/vtpm_runtime/swtpm-sock \
+ -tpmdev emulator,id=tpm0,chardev=chrtpm \
+ -device tpm-tis,tpmdev=tpm0 \
+ -vnc :2
+ ```
+ Use the QEMU runtime options in the command to change the memory amount or CPU assigned to this Guest VM.
+5. Use a VNC client to log on to the Guest VM at `:` where `` corresponds to the VNC number you used when starting this Guest VM in step 4.
+
+## How to Build and Install the OpenVINO™ Security Add-on Software
+
+Follow the steps below to build and install the OpenVINO™ Security Add-on on the Host Machine and on the Guest VMs.
+
+### Step 1: Build the OpenVINO™ Model Server image
+Building the OpenVINO™ Security Add-on depends on the OpenVINO™ Model Server Docker containers. Download and build the OpenVINO™ Model Server on the host first.
+
+1. Download the [OpenVINO™ Model Server software](https://github.com/openvinotoolkit/model_server)
+2. Build the [OpenVINO™ Model Server Docker images](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md)
+ ```sh
+ git clone https://github.com/openvinotoolkit/model_server.git
+ cd model_server
+ make docker_build
+ ```
+### Step 2: Build the software required for all roles
+
+This step is for the combined role of Model Developer and Independent Software Vendor, and for the User.
+
+1. Download the [OpenVINO™ Security Add-on](https://github.com/openvinotoolkit/security_addon)
+
+2. Go to the top-level OpenVINO™ Security Add-on source directory.
+ ```sh
+ cd security_addon
+ ```
+3. Build the OpenVINO™ Security Add-on:
+ ```sh
+ make clean all
+ sudo make package
+ ```
+ The following packages are created under the `release_files` directory:
+ - `ovsa-kvm-host.tar.gz`: Host Machine file
+ - `ovsa-developer.tar.gz`: For the Model Developer and the Independent Software Vendor
+ - `ovsa-model-hosting.tar.gz`: For the User
+
+### Step 3: Install the host software
+This step is for the combined role of Model Developer and Independent Software Vendor, and the User.
+
+1. Go to the `release_files` directory:
+ ```sh
+ cd release_files
+ ```
+2. Set up the path:
+ ```sh
+ export OVSA_RELEASE_PATH=$PWD
+ ```
+3. Install the OpenVINO™ Security Add-on Software on the Host Machine:
+ ```sh
+ cd $OVSA_RELEASE_PATH
+ tar xvfz ovsa-kvm-host.tar.gz
+ cd ovsa-kvm-host
+ ./install.sh
+ ```
+
+If you are using more than one Host Machine, repeat Step 3 on each of them.
+
+### Step 4: Set up packages on the Guest VM
+This step is for the combined role of Model Developer and Independent Software Vendor. References to the Guest VM are to `ovsa_isv_dev`.
+
+1. Log on to the Guest VM.
+2. Create the OpenVINO™ Security Add-on directory in the home directory
+ ```sh
+ mkdir OVSA
+ ```
+3. Go to the Host Machine, outside of the Guest VM.
+4. Copy `ovsa-developer.tar.gz` from `release_files` to the Guest VM:
+ ```sh
+ cd $OVSA_RELEASE_PATH
+ scp ovsa-developer.tar.gz username@://OVSA
+ ```
+5. Go to the Guest VM.
+6. Install the software to the Guest VM:
+ ```sh
+ cd OVSA
+ tar xvfz ovsa-developer.tar.gz
+ cd ovsa-developer
+ sudo -s
+ ./install.sh
+ ```
+7. Create a directory named `artefacts`. This directory will hold artefacts required to create licenses:
+ ```sh
+ cd //OVSA
+ mkdir artefacts
+ cd artefacts
+ ```
+8. Start the license server on a separate terminal.
+ ```sh
+ sudo -s
+ source /opt/ovsa/scripts/setupvars.sh
+ cd /opt/ovsa/bin
+ ./license_server
+ ```
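+
+To confirm the License Service is up, you can look for its listening socket from another terminal; the process name filter below is an assumption:
+```sh
+# Assumption: the service appears under the process name "license_server"
+ss -tlnp | grep license_server
+```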
+
+### Step 5: Install the OpenVINO™ Security Add-on Model Hosting Component
+
+This step is for the User. References to the Guest VM are to `ovsa_runtime`.
+
+The Model Hosting components install the OpenVINO™ Security Add-on Runtime Docker container, which is based on the OpenVINO™ Model Server NGINX Docker image, to host an access controlled model. A quick verification sketch follows the steps below.
+
+1. Log on to the Guest VM as ``.
+2. Create the OpenVINO™ Security Add-on directory in the home directory
+ ```sh
+ mkdir OVSA
+ ```
+3. While on the Host Machine, copy `ovsa-model-hosting.tar.gz` from `release_files` to the Guest VM:
+ ```sh
+ cd $OVSA_RELEASE_PATH
+ scp ovsa-model-hosting.tar.gz username@://OVSA
+ ```
+4. Install the software to the Guest VM:
+ ```sh
+ cd OVSA
+ tar xvfz ovsa-model-hosting.tar.gz
+ cd ovsa-model-hosting
+ sudo -s
+ ./install.sh
+ ```
+5. Create a directory named `artefacts`:
+ ```sh
+ cd //OVSA
+ mkdir artefacts
+ cd artefacts
+ ```
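+
+As a quick verification after the install script in step 4 completes, you can check that the OpenVINO™ Security Add-on runtime container image was loaded; the image name filter is an assumption and may differ in your build:
+```sh
+# Assumption: the runtime image name contains "ovsa"; adjust the filter if needed
+docker images | grep -i ovsa
+```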
+
+## How to Use the OpenVINO™ Security Add-on
+
+This section requires interactions between the Model Developer/Independent Software Vendor and the User. All roles must complete all applicable setup steps and installation steps before beginning this section.
+
+This document uses the [face-detection-retail-0004](@ref omz_models_intel_face_detection_retail_0004_description_face_detection_retail_0004) model as an example.
+
+The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User.
+
+**Remember**: The Model Developer/Independent Software Vendor and User roles are related to virtual machine use and one person might fill the tasks required by multiple roles. In this document the tasks of Model Developer and Independent Software Vendor are combined and use the Guest VM named `ovsa_isv`. It is possible to have all roles set up on the same Host Machine.
+
+![OpenVINO™ Security Add-on Example Diagram](ovsa_example.png)
+
+### Model Developer Instructions
+
+The Model Developer creates the model, defines access control, and creates the user license. References to the Guest VM are to `ovsa_isv_dev`. After the model is created, access control is enabled, and the license is ready, the Model Developer provides the license details to the Independent Software Vendor before sharing them with the Model User.
+
+#### Step 1: Create a key store and add a certificate to it
+
+1. Set up a path to the artefacts directory:
+ ```sh
+ sudo -s
+ cd //OVSA/artefacts
+ export OVSA_DEV_ARTEFACTS=$PWD
+ source /opt/ovsa/scripts/setupvars.sh
+ ```
+2. Create files to request a certificate:
+ This example uses a self-signed certificate for demonstration purposes. In a production environment, use CSR files to request a CA-signed certificate.
+ ```sh
+ cd $OVSA_DEV_ARTEFACTS
+ /opt/ovsa/bin/ovsatool keygen -storekey -t ECDSA -n Intel -k isv_keystore -r isv_keystore.csr -e "/C=IN/CN=localhost"
+ ```
+ Two files are created:
+ - `isv_keystore.csr` - A Certificate Signing Request (CSR)
+ - `isv_keystore.csr.crt` - A self-signed certificate
+
+ In a production environment, send `isv_keystore.csr` to a CA to request a CA-signed certificate (a sketch for inspecting the CSR before sending it follows this list).
+
+3. Add the certificate to the key store
+ ```sh
+ /opt/ovsa/bin/ovsatool keygen -storecert -c isv_keystore.csr.crt -k isv_keystore
+ ```
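+
+As an optional check before sending the CSR to a CA, you can inspect its contents with OpenSSL (a minimal sketch; `openssl` is assumed to be available on the Guest VM):
+```sh
+# Print the CSR subject and requested extensions in human-readable form
+openssl req -in isv_keystore.csr -noout -text
+```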
+
+#### Step 2: Create the model
+
+This example uses `curl` to download the `face-detection-retail-0004` model from the OpenVINO Model Zoo. If you are behind a firewall, check and set your proxy settings.
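+
+For example, proxy settings for `curl` can be supplied through environment variables; the proxy URL below is a placeholder for your own:
+```sh
+# Hypothetical proxy URL -- replace with your organization's proxy
+export http_proxy=http://proxy.example.com:8080
+export https_proxy=http://proxy.example.com:8080
+```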
+
+1. Log on to the Guest VM.
+
+2. Download a model from the Model Zoo:
+ ```sh
+ cd $OVSA_DEV_ARTEFACTS
+ curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.xml -o model/face-detection-retail-0004.bin
+ ```
+ The model is downloaded to the `OVSA_DEV_ARTEFACTS/model` directory.
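+
+You can confirm that both Intermediate Representation files are in place before defining access control:
+```sh
+# Both files are referenced by the ovsatool command in the next step
+ls -l model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin
+```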
+
+#### Step 3: Define access control for the model and create a master license for it
+
+1. Go to the `artefacts` directory:
+ ```sh
+ cd $OVSA_DEV_ARTEFACTS
+ ```
+2. Run the `uuidgen` command:
+ ```sh
+ uuidgen
+ ```
+3. Define and enable the model access control and master license:
+ ```sh
+ /opt/ovsa/bin/ovsatool protect -i model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin -n "face detection" -d "face detection retail" -v 0004 -p face_detection_model.dat -m face_detection_model.masterlic -k isv_keystore -g
+ ```
+The Intermediate Representation files for the `face-detection-retail-0004` model are encrypted as `face_detection_model.dat` and a master license is generated as `face_detection_model.masterlic`.
+
+#### Step 4: Create a Runtime Reference TCB
+
+Use the runtime reference TCB to create a customer license for the access controlled model and the specific runtime.
+
+Generate the reference TCB for the runtime:
+```sh
+cd $OVSA_DEV_ARTEFACTS
+source /opt/ovsa/scripts/setupvars.sh
+/opt/ovsa/bin/ovsaruntime gen-tcb-signature -n "Face Detect @ Runtime VM" -v "1.0" -f face_detect_runtime_vm.tcb -k isv_keystore
+```
+
+#### Step 5: Publish the access controlled Model and Runtime Reference TCB
+The access controlled model is ready to be shared with the User and the reference TCB is ready to perform license checks.
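+
+For example, on a shared Host Machine the access controlled model can be copied to the User's Guest VM with `scp`, following the same pattern used elsewhere in this guide; the user name and address below are placeholders:
+```sh
+# Hypothetical destination -- replace <username> and <user-vm-ip> with the User Guest VM details
+scp face_detection_model.dat <username>@<user-vm-ip>:~/OVSA/artefacts/
+```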
+
+#### Step 6: Receive a User Request
+1. Obtain artefacts from the User who needs access to an access controlled model:
+ * Customer certificate from the customer's key store.
+ * Other information that applies to your licensing practices, such as the length of time the user needs access to the model.
+
+2. Create a customer license configuration
+ ```sh
+ cd $OVSA_DEV_ARTEFACTS
+ /opt/ovsa/bin/ovsatool licgen -t TimeLimit -l30 -n "Time Limit License Config" -v 1.0 -u ":" -k isv_keystore -o 30daylicense.config
+ ```
+3. Create the customer license
+ ```sh
+ cd $OVSA_DEV_ARTEFACTS
+ /opt/ovsa/bin/ovsatool sale -m face_detection_model.masterlic -k isv_keystore -l 30daylicense.config -t face_detect_runtime_vm.tcb -p custkeystore.csr.crt -c face_detection_model.lic
+ ```
+
+4. Update the license server database with the license.
+ ```sh
+ cd /opt/ovsa/DB
+ python3 ovsa_store_customer_lic_cert_db.py ovsa.db $OVSA_DEV_ARTEFACTS/face_detection_model.lic $OVSA_DEV_ARTEFACTS/custkeystore.csr.crt
+ ```
+
+5. Provide these files to the User:
+ * `face_detection_model.dat`
+ * `face_detection_model.lic`
+
+### User Instructions
+References to the Guest VM are to `ovsa_runtime`.
+
+#### Step 1: Add a CA-Signed Certificate to a Key Store
+
+1. Set up a path to the artefacts directory:
+ ```sh
+ sudo -s
+ cd //OVSA/artefacts
+ export OVSA_RUNTIME_ARTEFACTS=$PWD
+ source /opt/ovsa/scripts/setupvars.sh
+ ```
+2. Generate a Customer key store file:
+ ```sh
+ cd $OVSA_RUNTIME_ARTEFACTS
+ /opt/ovsa/bin/ovsatool keygen -storekey -t ECDSA -n Intel -k custkeystore -r custkeystore.csr -e "/C=IN/CN=localhost"
+ ```
+ Two files are created:
+ * `custkeystore.csr` - A Certificate Signing Request (CSR)
+ * `custkeystore.csr.crt` - A self-signed certificate
+
+3. Send `custkeystore.csr` to the CA to request a CA-signed certificate.
+
+4. Add the certificate to the key store:
+ ```sh
+ /opt/ovsa/bin/ovsatool keygen -storecert -c custkeystore.csr.crt -k custkeystore
+ ```
+
+#### Step 2: Request an access controlled Model from the Model Developer
+This example uses `scp` to share data between the `ovsa_runtime` and `ovsa_isv_dev` Guest VMs on the same Host Machine.
+
+1. Communicate your need for a model to the Model Developer. The Developer will ask you to provide the certificate from your key store and other information. This example uses the length of time the model needs to be available.
+2. Generate an artefact file to provide to the Developer:
+ ```sh
+ cd $OVSA_RUNTIME_ARTEFACTS
+ scp custkeystore.csr.crt username@://OVSA/artefacts
+ ```
+#### Step 3: Receive and load the access controlled model into the OpenVINO™ Model Server
+1. Receive the model as files named
+ * `face_detection_model.dat`
+ * `face_detection_model.lic`
+2. Prepare the environment:
+ ```sh
+ cd $OVSA_RUNTIME_ARTEFACTS/..
+ cp /opt/ovsa/example_runtime ovms -r
+ cd ovms
+ mkdir -vp model/fd/1
+ ```
+ The `$OVSA_RUNTIME_ARTEFACTS/../ovms` directory contains scripts and a sample configuration JSON file to start the model server.
+3. Copy the artefacts from the Model Developer:
+ ```sh
+ cd $OVSA_RUNTIME_ARTEFACTS/../ovms
+ cp $OVSA_RUNTIME_ARTEFACTS/face_detection_model.dat model/fd/1/.
+ cp $OVSA_RUNTIME_ARTEFACTS/face_detection_model.lic model/fd/1/.
+ cp $OVSA_RUNTIME_ARTEFACTS/custkeystore model/fd/1/.
+ ```
+4. Rename and edit `sample.json` to include the names of the access controlled model artefacts you received from the Model Developer. The file looks like this:
+ ```sh
+ {
+ "custom_loader_config_list":[
+ {
+ "config":{
+ "loader_name":"ovsa",
+ "library_path": "/ovsa-runtime/lib/libovsaruntime.so"
+ }
+ }
+ ],
+ "model_config_list":[
+ {
+ "config":{
+ "name":"protected-model",
+ "base_path":"/sampleloader/model/fd",
+ "custom_loader_options": {"loader_name": "ovsa", "keystore": "custkeystore", "protected_file": "face_detection_model"}
+ }
+ }
+ ]
+ }
+ ```
+#### Step 4: Start the NGINX Model Server
+The NGINX Model Server publishes the access controlled model.
+ ```sh
+ ./start_secure_ovsa_model_server.sh
+ ```
+For information about the NGINX interface, see https://github.com/openvinotoolkit/model_server/blob/main/extras/nginx-mtls-auth/README.md
+
+#### Step 5: Prepare to run Inference
+
+1. Log on to the Guest VM from another terminal.
+
+2. Install the Python dependencies for your setup. For example:
+ ```sh
+ sudo apt install python3-pip
+ pip3 install cmake
+ pip3 install scikit-build
+ pip3 install opencv-python
+ pip3 install futures==3.1.1
+ pip3 install tensorflow-serving-api==1.14.0
+ ```
+3. Copy `face_detection.py` and the other example client files from `/opt/ovsa/example_client`:
+ ```sh
+ cd /home/intel/OVSA/ovms
+ cp /opt/ovsa/example_client/* .
+ ```
+4. Download a sample image for inference. An `images` directory is created that contains the sample image:
+ ```sh
+ curl --create-dirs https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/images/people/people1.jpeg -o images/people1.jpeg
+ ```
+#### Step 6: Run Inference
+
+Run the `face_detection.py` script:
+```sh
+python3 face_detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert server.pem --client_cert client.pem --client_key client.key --model_name protected-model
+```
+
+## Summary
+You have completed these tasks:
+- Set up one or more computers (Host Machines) with one KVM per machine and one or more virtual machines (Guest VMs) on the Host Machines
+- Installed the OpenVINO™ Security Add-on
+- Used the OpenVINO™ Model Server to work with OpenVINO™ Security Add-on
+- As a Model Developer or Independent Software Vendor, you enabled access control for a model and prepared a license for it.
+- As a Model Developer or Independent Software Vendor, you prepared and ran a License Server and used the License Server to verify that a User had a valid license to use an access controlled model.
+- As a User, you provided information to a Model Developer or Independent Software Vendor to get an access controlled model and the license for the model.
+- As a User, you set up and launched a Host Server on which you can run licensed and access controlled models.
+- As a User, you loaded an access controlled model, validated the license for the model, and used the model to run inference.
+
+## References
+Use these links for more information:
+- [OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit)
+- [OpenVINO Model Server Quick Start Guide](https://github.com/openvinotoolkit/model_server/blob/main/docs/ovms_quickstart.md)
+- [Model repository](https://github.com/openvinotoolkit/model_server/blob/main/docs/models_repository.md)
diff --git a/inference-engine/ie_bridges/c/samples/hello_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
index 845a19e1bf52dc..6bf0ddf0b6369b 100644
--- a/inference-engine/ie_bridges/c/samples/hello_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
@@ -14,7 +14,7 @@ To properly demonstrate this API, it is required to run several networks in pipe
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
index eeadef10cdbf88..a9e1e20056b049 100644
--- a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
@@ -34,9 +34,7 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
## Running
-To run the sample, you can use public or pre-trained models. To download pre-trained models, use
-the OpenVINO™ [Model Downloader](@ref omz_tools_downloader_README)
-or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
index 2e70a23f0576a6..55916a129f9473 100644
--- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
@@ -39,7 +39,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
index d7a20f5037333d..80dc537b9a5702 100644
--- a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
+++ b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
@@ -59,7 +59,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/python/sample/hello_classification/README.md b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
index 488278c87d2ff4..34858bb437a1bb 100644
--- a/inference-engine/ie_bridges/python/sample/hello_classification/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
@@ -46,7 +46,7 @@ Options:
Running the application with the empty list of options yields the usage message given above.
-To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
index 75b05f78c5f5df..bdba6c38ab46e3 100644
--- a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
+++ b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
@@ -13,7 +13,7 @@ When the inference is done, the application outputs inference results to the sta
> **NOTE**: This sample supports models with FP32 weights only.
-The `lenet.bin` weights file was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
@@ -69,4 +69,4 @@ By default, the application outputs top-1 inference result for each inference re
## See Also
-* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
+* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
diff --git a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
index 26b8394cdb7260..90bc09ff2e75bf 100644
--- a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
@@ -55,7 +55,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use RMNet_SSD or other object-detection models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use RMNet_SSD or other object-detection models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/benchmark_app/README.md b/inference-engine/samples/benchmark_app/README.md
index 41f48e4735d886..3bba703c68bb71 100644
--- a/inference-engine/samples/benchmark_app/README.md
+++ b/inference-engine/samples/benchmark_app/README.md
@@ -4,6 +4,12 @@ This topic demonstrates how to use the Benchmark C++ Tool to estimate deep learn
> **NOTE:** This topic describes usage of C++ implementation of the Benchmark Tool. For the Python* implementation, refer to [Benchmark Python* Tool](../../tools/benchmark_tool/README.md).
+> **TIP**: You can also work with the Benchmark Tool inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
+> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
+> performance of deep learning models on various Intel® architecture
+> configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components.
+>
+> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## How It Works
@@ -43,6 +49,7 @@ The application also saves executable graph information serialized to an XML fil
## Run the Tool
+
Note that the benchmark_app usually produces optimal performance for any device out of the box.
**So in most cases you don't need to play the app options explicitly and the plain device name is enough**, for example, for CPU:
@@ -115,7 +122,7 @@ If a model has only image input(s), please provide a folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use public or Intel's pre-trained models. To download the models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/classification_sample_async/README.md b/inference-engine/samples/classification_sample_async/README.md
index 5d9abb063350f8..32df39493566ed 100644
--- a/inference-engine/samples/classification_sample_async/README.md
+++ b/inference-engine/samples/classification_sample_async/README.md
@@ -49,7 +49,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, use AlexNet and GoogLeNet or other public or pre-trained image classification models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, use AlexNet and GoogLeNet or other public or pre-trained image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/hello_classification/README.md b/inference-engine/samples/hello_classification/README.md
index f390cf9a874b8a..1244e68343a770 100644
--- a/inference-engine/samples/hello_classification/README.md
+++ b/inference-engine/samples/hello_classification/README.md
@@ -19,7 +19,7 @@ Refer to [Integrate the Inference Engine New Request API with Your Application](
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/hello_nv12_input_classification/README.md b/inference-engine/samples/hello_nv12_input_classification/README.md
index b80a5a86a59121..5d781bc66923c7 100644
--- a/inference-engine/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/samples/hello_nv12_input_classification/README.md
@@ -35,9 +35,7 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
## Running
-To run the sample, you can use public or pre-trained models. To download pre-trained models, use
-the OpenVINO™ [Model Downloader](@ref omz_tools_downloader_README)
-or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
diff --git a/inference-engine/samples/hello_reshape_ssd/README.md b/inference-engine/samples/hello_reshape_ssd/README.md
index ae14ddcc5a92c2..4392a3eafcf369 100644
--- a/inference-engine/samples/hello_reshape_ssd/README.md
+++ b/inference-engine/samples/hello_reshape_ssd/README.md
@@ -7,7 +7,7 @@ networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/object_detection_sample_ssd/README.md b/inference-engine/samples/object_detection_sample_ssd/README.md
index b0a4f4e84652f9..46849d90bfecc2 100644
--- a/inference-engine/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/samples/object_detection_sample_ssd/README.md
@@ -36,7 +36,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
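+
+For illustration only (the model IR and image paths are placeholders), a typical invocation with an SSD model converted to IR could be:
+
+```sh
+# Run SSD object detection on CPU: -m is the model IR, -i the input image, -d the target device
+./object_detection_sample_ssd -m <path_to_model>/ssd.xml -i <path_to_image> -d CPU
+```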
diff --git a/inference-engine/tools/benchmark_tool/README.md b/inference-engine/tools/benchmark_tool/README.md
index 68da2058758cd1..33f45b3a4c9a6f 100644
--- a/inference-engine/tools/benchmark_tool/README.md
+++ b/inference-engine/tools/benchmark_tool/README.md
@@ -4,6 +4,13 @@ This topic demonstrates how to run the Benchmark Python* Tool, which performs in
> **NOTE:** This topic describes usage of Python implementation of the Benchmark Tool. For the C++ implementation, refer to [Benchmark C++ Tool](../../samples/benchmark_app/README.md).
+> **TIP**: You can also work with the Benchmark Tool inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
+> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ that provides a web-based graphical environment enabling you to optimize, fine-tune, analyze, visualize, and compare
+> the performance of deep learning models on various Intel® architecture
+> configurations. In the DL Workbench, you can use most OpenVINO™ toolkit components.
+>
+> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
+
## How It Works
Upon start-up, the application reads command-line parameters and loads a network and images/binary files to the Inference Engine plugin, which is chosen depending on a specified device. The number of infer requests and execution approach depend on the mode defined with the `-api` command-line parameter.
@@ -129,7 +136,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use public or Intel's pre-trained models. To download the models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
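+
+As a sketch (the model IR and input paths are placeholders), benchmarking a converted model on CPU in asynchronous mode might look like this:
+
+```sh
+# Measure throughput and latency over 100 iterations in asynchronous mode on CPU
+python3 benchmark_app.py -m <path_to_model>/alexnet.xml -i <path_to_images_folder> -d CPU -api async -niter 100
+```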
diff --git a/tools/benchmark/README.md b/tools/benchmark/README.md
index 0d9e62bc44a889..28fd4b70933e2f 100644
--- a/tools/benchmark/README.md
+++ b/tools/benchmark/README.md
@@ -146,7 +146,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).