diff --git a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
index 0999679ae0caa2..73a0800e2b56e0 100644
--- a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
+++ b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
@@ -24,7 +24,7 @@ The `ngraph::onnx_import::Node` class represents a node in ONNX model. It provid
New operator registration must happen before the ONNX model is read, for example, if an ONNX model uses the 'CustomRelu' operator, `register_operator("CustomRelu", ...)` must be called before InferenceEngine::Core::ReadNetwork.
Re-registering ONNX operators within the same process is supported. During registration of the existing operator, a warning is printed.
-The example below demonstrates an examplary model that requires previously created 'CustomRelu' operator:
+The example below demonstrates an exemplary model that requires the previously created 'CustomRelu' operator:
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model
diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md
index d63a310de0b06b..07c2a964e296cf 100644
--- a/docs/IE_DG/Samples_Overview.md
+++ b/docs/IE_DG/Samples_Overview.md
@@ -43,7 +43,7 @@ To run the sample applications, you can use images and videos from the media fil
## Samples that Support Pre-Trained Models
-You can download the [pre-trained models](@ref omz_models_intel_index) using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
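+
+As a minimal sketch of downloader usage (run from the Model Downloader directory; the model name and output directory below are placeholders, not values from this guide):
+```sh
+# List the models available for download
+./downloader.py --print_all
+# Download a model by name into a chosen directory
+./downloader.py --name <model_name> -o <output_dir>
+```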
## Build the Sample Applications
diff --git a/docs/get_started/get_started_raspbian.md b/docs/get_started/get_started_raspbian.md
index 4dad1b790a6448..e238029b70495e 100644
--- a/docs/get_started/get_started_raspbian.md
+++ b/docs/get_started/get_started_raspbian.md
@@ -43,7 +43,7 @@ The primary tools for deploying your models and applications are installed to th
The OpenVINO™ workflow on Raspbian* OS is as follows:
1. **Get a pre-trained model** for your inference task. If you want to use your model for inference, the model must be converted to the `.bin` and `.xml` Intermediate Representation (IR) files, which are used as input by Inference Engine. On Raspberry PI, OpenVINO™ toolkit includes only the Inference Engine module. The Model Optimizer is not supported on this platform. To get the optimized models you can use one of the following options:
- * Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README#model_downloader_usage).
+ * Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader_README).
For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_intel_index)
* Convert a model using the Model Optimizer from a full installation of Intel® Distribution of OpenVINO™ toolkit on one of the supported platforms. Installation instructions are available:
diff --git a/docs/install_guides/installing-openvino-raspbian.md b/docs/install_guides/installing-openvino-raspbian.md
index d61d942d158c06..7c49294f401ff1 100644
--- a/docs/install_guides/installing-openvino-raspbian.md
+++ b/docs/install_guides/installing-openvino-raspbian.md
@@ -60,7 +60,7 @@ This guide provides step-by-step instructions on how to install the OpenVINO™
## Install the OpenVINO™ Toolkit for Raspbian* OS Package
-The guide assumes you downloaded the OpenVINO toolkit for Raspbian* OS. If you do not have a copy of the toolkit package file `l_openvino_toolkit_runtime_raspbian_p_.tgz`, download the latest version from the [Intel® Open Source Technology Center](https://download.01.org/opencv/2021/openvinotoolkit/) and then return to this guide to proceed with the installation.
+This guide assumes that you have already downloaded the OpenVINO toolkit for Raspbian* OS. If you do not have a copy of the toolkit package file `l_openvino_toolkit_runtime_raspbian_p_.tgz`, download the latest version from the [OpenVINO™ Toolkit packages storage](https://storage.openvinotoolkit.org/repositories/openvino/packages/) and then return to this guide to proceed with the installation.
> **NOTE**: The OpenVINO toolkit for Raspbian OS is distributed without installer, so you need to perform extra steps comparing to the [Intel® Distribution of OpenVINO™ toolkit for Linux* OS](installing-openvino-linux.md).
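+
+As a sketch of those extra steps (the installation path is an assumption; any writable location works, and the actual archive name includes a version suffix):
+```sh
+# Create an installation folder
+sudo mkdir -p /opt/intel/openvino
+# Unpack the runtime package into it
+sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_.tgz --strip-components=1 -C /opt/intel/openvino
+```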
@@ -173,7 +173,7 @@ Read the next topic if you want to learn more about OpenVINO workflow for Raspbe
If you want to use your model for inference, the model must be converted to the .bin and .xml Intermediate Representation (IR) files that are used as input by Inference Engine. OpenVINO™ toolkit support on Raspberry Pi only includes the Inference Engine module of the Intel® Distribution of OpenVINO™ toolkit. The Model Optimizer is not supported on this platform. To get the optimized models you can use one of the following options:
-* Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using [Model Downloader tool](@ref omz_tools_downloader_README#model_downloader_usage).
+* Download public and Intel's pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader_README).
For more information on pre-trained models, see [Pre-Trained Models Documentation](@ref omz_models_intel_index)
diff --git a/inference-engine/ie_bridges/c/samples/hello_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
index 845a19e1bf52dc..6bf0ddf0b6369b 100644
--- a/inference-engine/ie_bridges/c/samples/hello_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_classification/README.md
@@ -14,7 +14,7 @@ To properly demonstrate this API, it is required to run several networks in pipe
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
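+
+For instance, a classification model can be fetched by name (a sketch: `alexnet` is one of the public Open Model Zoo models, and the output directory is a placeholder):
+```sh
+./downloader.py --name alexnet -o <models_dir>
+```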
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
index eeadef10cdbf88..a9e1e20056b049 100644
--- a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md
@@ -34,9 +34,7 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
## Running
-To run the sample, you can use public or pre-trained models. To download pre-trained models, use
-the OpenVINO™ [Model Downloader](@ref omz_tools_downloader_README)
-or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
index 31ba028cb95a9d..9a75643f64331a 100644
--- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
@@ -37,7 +37,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/java/samples/README.md b/inference-engine/ie_bridges/java/samples/README.md
index e81d097d15b587..d3d26d234c0329 100644
--- a/inference-engine/ie_bridges/java/samples/README.md
+++ b/inference-engine/ie_bridges/java/samples/README.md
@@ -57,9 +57,14 @@ Throughput: 148.29 FPS
Upon the start-up the sample application reads command line parameters and loads a network and an image to the Inference
Engine device. When inference is done, the application creates an output image/video.
-To download model ( .bin and .xml files must be downloaded) use:
-https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/face-detection-adas-0001/FP32/
+To download the model (both the `.bin` and `.xml` files are required) to a directory such as `my/download/directory`, use the [Model Downloader](@ref omz_tools_downloader_README):
+```sh
+cd /deployment_tools/tools/model_downloader/
+./downloader.py --name face-detection-adas-0001 --precisions FP32 -o my/download/directory
+```
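+
+The downloaded `.xml` and `.bin` files should then appear under `my/download/directory/intel/face-detection-adas-0001/FP32/` (the downloader groups Intel pre-trained models in an `intel` subfolder; this layout is stated here as an assumption about the tool's defaults).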
## Build and run
Build and run steps are similar to ```benchmark_app```, but you need to add an environment variable with OpenCV installation or build path before building:
diff --git a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
index d7a20f5037333d..80dc537b9a5702 100644
--- a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
+++ b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
@@ -59,7 +59,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use AlexNet, GoogLeNet, or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
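+
+As an illustrative sketch (run from the Model Downloader directory; the output directory is a placeholder, and the downloaded public model still needs conversion to IR, as the note below explains):
+```sh
+./downloader.py --name googlenet-v1 -o <models_dir>
+```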
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/python/sample/hello_classification/README.md b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
index 488278c87d2ff4..34858bb437a1bb 100644
--- a/inference-engine/ie_bridges/python/sample/hello_classification/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
@@ -46,7 +46,7 @@ Options:
Running the application with the empty list of options yields the usage message given above.
-To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use AlexNet, GoogLeNet, or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
index 26b8394cdb7260..90bc09ff2e75bf 100644
--- a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
@@ -55,7 +55,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use RMNet_SSD or other object-detection models. You can download the pre-trained models with the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use RMNet_SSD or other object-detection models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/benchmark_app/README.md b/inference-engine/samples/benchmark_app/README.md
index 0dd997e6de7d45..50f1736be6d408 100644
--- a/inference-engine/samples/benchmark_app/README.md
+++ b/inference-engine/samples/benchmark_app/README.md
@@ -120,7 +120,7 @@ If a model has only image input(s), please provide a folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use public or Intel's pre-trained models. To download the models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
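+
+A hypothetical end-to-end sketch (model and input paths are placeholders; the model must already be converted to IR, as the note below explains):
+```sh
+# Download a public model by name
+./downloader.py --name googlenet-v1 -o <models_dir>
+# ...convert it to IR with the Model Optimizer, then benchmark on CPU:
+./benchmark_app -m <path_to_ir>/googlenet-v1.xml -i <images_dir> -d CPU
+```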
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/classification_sample_async/README.md b/inference-engine/samples/classification_sample_async/README.md
index 5d9abb063350f8..32df39493566ed 100644
--- a/inference-engine/samples/classification_sample_async/README.md
+++ b/inference-engine/samples/classification_sample_async/README.md
@@ -49,7 +49,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, use AlexNet and GoogLeNet or other public or pre-trained image classification models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use AlexNet, GoogLeNet, or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/hello_classification/README.md b/inference-engine/samples/hello_classification/README.md
index f390cf9a874b8a..1244e68343a770 100644
--- a/inference-engine/samples/hello_classification/README.md
+++ b/inference-engine/samples/hello_classification/README.md
@@ -19,7 +19,7 @@ Refer to [Integrate the Inference Engine New Request API with Your Application](
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/hello_nv12_input_classification/README.md b/inference-engine/samples/hello_nv12_input_classification/README.md
index b80a5a86a59121..5d781bc66923c7 100644
--- a/inference-engine/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/samples/hello_nv12_input_classification/README.md
@@ -35,9 +35,7 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
## Running
-To run the sample, you can use public or pre-trained models. To download pre-trained models, use
-the OpenVINO™ [Model Downloader](@ref omz_tools_downloader_README)
-or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
> Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
diff --git a/inference-engine/samples/hello_reshape_ssd/README.md b/inference-engine/samples/hello_reshape_ssd/README.md
index ae14ddcc5a92c2..4392a3eafcf369 100644
--- a/inference-engine/samples/hello_reshape_ssd/README.md
+++ b/inference-engine/samples/hello_reshape_ssd/README.md
@@ -7,7 +7,7 @@ networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../
## Running
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/samples/object_detection_sample_ssd/README.md b/inference-engine/samples/object_detection_sample_ssd/README.md
index b0a4f4e84652f9..46849d90bfecc2 100644
--- a/inference-engine/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/samples/object_detection_sample_ssd/README.md
@@ -36,7 +36,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
diff --git a/inference-engine/tools/benchmark_tool/README.md b/inference-engine/tools/benchmark_tool/README.md
index daf6c95ce1c9bf..7d960468aa2f86 100644
--- a/inference-engine/tools/benchmark_tool/README.md
+++ b/inference-engine/tools/benchmark_tool/README.md
@@ -136,7 +136,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the tool, you can use public or Intel's pre-trained models. To download the models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
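+
+For example, a minimal sketch with the Python version of the tool (paths are placeholders; the IR files must already exist):
+```sh
+python3 benchmark_app.py -m <path_to_ir>/model.xml -i <input_dir> -d CPU
+```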
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
diff --git a/tools/benchmark/README.md b/tools/benchmark/README.md
index a83467fca03bc0..705a2f31c7de31 100644
--- a/tools/benchmark/README.md
+++ b/tools/benchmark/README.md
@@ -144,7 +144,7 @@ If a model has only image input(s), please a provide folder with images or a pat
If a model has some specific input(s) (not images), please prepare a binary file(s), which is filled with data of appropriate precision and provide a path to them as input.
If a model has mixed input types, input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary inputs one by one.
-To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the tool, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).