diff --git a/docs/OV_Runtime_UG/AutoPlugin_Debugging.md b/docs/OV_Runtime_UG/AutoPlugin_Debugging.md
index 17ef1acd91b140..9f81de8d1de548 100644
--- a/docs/OV_Runtime_UG/AutoPlugin_Debugging.md
+++ b/docs/OV_Runtime_UG/AutoPlugin_Debugging.md
@@ -53,9 +53,9 @@ in which the `LOG_LEVEL` is represented by the first letter of its name (ERROR b
@sphinxdirective
.. code-block:: sh
- [17:09:36.6188]D[plugin.cpp:167] deviceName:MYRIAD, defaultDeviceID:, uniqueName:MYRIAD_
- [17:09:36.6242]I[executable_network.cpp:181] [AUTOPLUGIN]:select device:MYRIAD
- [17:09:36.6809]ERROR[executable_network.cpp:384] [AUTOPLUGIN] load failed, MYRIAD:[ GENERAL_ERROR ]
+ [17:09:36.6188]D[plugin.cpp:167] deviceName:GPU, defaultDeviceID:, uniqueName:GPU_
+ [17:09:36.6242]I[executable_network.cpp:181] [AUTOPLUGIN]:select device:GPU
+ [17:09:36.6809]ERROR[executable_network.cpp:384] [AUTOPLUGIN] load failed, GPU:[ GENERAL_ERROR ]
@endsphinxdirective
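+
+A minimal sketch of requesting this level of logging from application code (assuming the `ov::log::level` property is supported by the AUTO plugin in your OpenVINO build) could look like this:
+
+@sphinxdirective
+.. code-block:: cpp
+
+   #include <openvino/openvino.hpp>
+
+   ov::Core core;
+   std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
+   // Ask AUTO to emit DEBUG-level messages such as the device-selection log shown above
+   ov::CompiledModel compiled = core.compile_model(model, "AUTO",
+                                                   ov::log::level(ov::log::Level::DEBUG));
+@endsphinxdirective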
diff --git a/docs/OV_Runtime_UG/hetero_execution.md b/docs/OV_Runtime_UG/hetero_execution.md
index f3998b0d181375..6b6409c988ba9e 100644
--- a/docs/OV_Runtime_UG/hetero_execution.md
+++ b/docs/OV_Runtime_UG/hetero_execution.md
@@ -141,7 +141,7 @@ You can use the GraphViz utility or a file converter to view the images. On the
You can use performance data (in sample applications, it is the option `-pc`) to get the performance data on each subgraph.
-Here is an example of the output for Googlenet v1 running on HDDL with fallback to CPU:
+Here is an example of the output for Googlenet v1 running on HDDL (a device that is no longer supported) with fallback to CPU:
```
subgraph1: 1. input preprocessing (mean data/HDDL):EXECUTED layerType: realTime: 129 cpu: 129 execType:
diff --git a/docs/OV_Runtime_UG/img/yolo_tiny_v1.svg b/docs/OV_Runtime_UG/img/yolo_tiny_v1.svg
deleted file mode 100644
index 8e509eaecc955c..00000000000000
--- a/docs/OV_Runtime_UG/img/yolo_tiny_v1.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b07c2c7328ed2f667402c1e451e39f625b5ba8f30975c32379eac81ebc1dbd53
-size 170201
diff --git a/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md b/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
index 378aea9a44e67c..87ea4ddecbdb0b 100644
--- a/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
+++ b/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
@@ -72,15 +72,16 @@ A simple programmatic way to enumerate the devices and use with the multi-device
@endsphinxdirective
-Beyond the typical "CPU", "GPU", and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed with the hello_query_sample:
+Beyond the typical "CPU", "GPU", and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two GPUs can be listed (the integrated GPU is always GPU.0):
+
```
...
- Device: MYRIAD.1.2-ma2480
+ Device: GPU.0
...
- Device: MYRIAD.1.4-ma2480
+ Device: GPU.1
```
-So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480". Accordingly, the code that loops over all available devices of the "MYRIAD" type only is as follows:
+So, the explicit configuration to use both would be "MULTI:GPU.1,GPU.0". Accordingly, the code that loops over all available devices of the "GPU" type only is as follows:
@sphinxdirective
diff --git a/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md b/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md
index 8c5b8904296cf0..ab6181826de4ae 100644
--- a/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md
+++ b/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md
@@ -3,7 +3,7 @@ Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}
The OpenVINO Runtime can infer models in different formats with various input and output formats. This section provides supported and optimal configurations per device. In OpenVINO™ documentation, "device" refers to an Intel® processors used for inference, which can be a supported CPU, GPU, or GNA (Gaussian neural accelerator coprocessor), or a combination of those devices.
-> **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick support has been cancelled.
+> **NOTE**: With the OpenVINO™ 2023.0 release, support has been discontinued for all VPU accelerators based on Intel® Movidius™.
The OpenVINO Runtime provides unique capabilities to infer deep learning models on the following device types with corresponding plugins:
diff --git a/docs/OV_Runtime_UG/supported_plugins/config_properties.md b/docs/OV_Runtime_UG/supported_plugins/config_properties.md
index 121eed4a99e4f3..6f285f1d139de2 100644
--- a/docs/OV_Runtime_UG/supported_plugins/config_properties.md
+++ b/docs/OV_Runtime_UG/supported_plugins/config_properties.md
@@ -180,24 +180,6 @@ The `ov::CompiledModel::get_property` method is used to get property values the
@endsphinxtabset
-Or the current temperature of the `MYRIAD` device:
-
-@sphinxtabset
-
-@sphinxtab{C++}
-
-@snippet docs/snippets/ov_properties_api.cpp device_thermal
-
-@endsphinxtab
-
-@sphinxtab{Python}
-
-@snippet docs/snippets/ov_properties_api.py device_thermal
-
-@endsphinxtab
-
-@endsphinxtabset
-
Or the number of threads that would be used for inference on `CPU` device:
diff --git a/docs/gapi/gapi_face_analytics_pipeline.md b/docs/gapi/gapi_face_analytics_pipeline.md
index 861aec24ebc465..781161df8ba281 100644
--- a/docs/gapi/gapi_face_analytics_pipeline.md
+++ b/docs/gapi/gapi_face_analytics_pipeline.md
@@ -19,7 +19,7 @@ This sample requires:
To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.
## Introduction: Why G-API
-Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many these steps of such pipeline can run in parallel – modern platforms have different hardware blocks on the same chip like decoders and GPUs, and extra accelerators can be plugged in as extensions, like Intel® Movidius™ Neural Compute Stick for deep learning offload.
+Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many of these steps can run in parallel – modern platforms have different hardware blocks on the same chip, like decoders and GPUs, and extra accelerators can be plugged in as extensions for deep learning offload.
Given all this manifold of options and a variety in video analytics algorithms, managing such pipelines effectively quickly becomes a problem. For sure it can be done manually, but this approach doesn't scale: if a change is required in the algorithm (e.g. a new pipeline step is added), or if it is ported on a new platform with different capabilities, the whole pipeline needs to be re-optimized.
diff --git a/docs/install_guides/installing-openvino-docker-linux.md b/docs/install_guides/installing-openvino-docker-linux.md
index 8c9c776b947d21..745b5db27b8a0a 100644
--- a/docs/install_guides/installing-openvino-docker-linux.md
+++ b/docs/install_guides/installing-openvino-docker-linux.md
@@ -63,7 +63,7 @@ You can also try our [Tutorials](https://github.com/openvinotoolkit/docker_ci/tr
## Configuring the Image for Different Devices
-If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to Running the image on different devices for the next step.
+If you want to run inference on a CPU, no extra configuration is needed. Go to Running the image on different devices for the next step.
### Configuring Docker Image for GPU
@@ -175,5 +175,4 @@ docker run -itu root:root --rm --device /dev/dri:/dev/dri
- [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
-- Intel® Neural Compute Stick 2 Get Started: [https://software.intel.com/en-us/neural-compute-stick/get-started](https://software.intel.com/en-us/neural-compute-stick/get-started)
- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
\ No newline at end of file
diff --git a/docs/snippets/AUTO4.cpp b/docs/snippets/AUTO4.cpp
index b85538ed61bd8c..c1c4047a7ffe8e 100644
--- a/docs/snippets/AUTO4.cpp
+++ b/docs/snippets/AUTO4.cpp
@@ -15,8 +15,8 @@ ov::CompiledModel compiled_model1 = core.compile_model(model, "AUTO",
ov::CompiledModel compiled_model2 = core.compile_model(model, "AUTO",
ov::hint::model_priority(ov::hint::Priority::LOW));
/************
- Assume that all the devices (CPU, GPU, and MYRIAD) can support all the models.
- Result: compiled_model0 will use GPU, compiled_model1 will use MYRIAD, compiled_model2 will use CPU.
+ Assume that all the devices (CPU and GPUs) can support all the models.
+ Result: compiled_model0 will use GPU.1, compiled_model1 will use GPU.0, compiled_model2 will use CPU.
************/
// Example 2
@@ -27,8 +27,8 @@ ov::CompiledModel compiled_model4 = core.compile_model(model, "AUTO",
ov::CompiledModel compiled_model5 = core.compile_model(model, "AUTO",
ov::hint::model_priority(ov::hint::Priority::LOW));
/************
- Assume that all the devices (CPU, GPU, and MYRIAD) can support all the models.
- Result: compiled_model3 will use GPU, compiled_model4 will use GPU, compiled_model5 will use MYRIAD.
+ Assume that all the devices (CPU and GPUs) can support all the models.
+ Result: compiled_model3 will use GPU.1, compiled_model4 will use GPU.1, compiled_model5 will use GPU.0.
************/
//! [part4]
}
diff --git a/docs/snippets/AUTO5.cpp b/docs/snippets/AUTO5.cpp
index 624ed10831c8a6..14e75b7d8c2130 100644
--- a/docs/snippets/AUTO5.cpp
+++ b/docs/snippets/AUTO5.cpp
@@ -2,17 +2,17 @@
int main() {
ov::AnyMap cpu_config = {};
-ov::AnyMap myriad_config = {};
+ov::AnyMap gpu_config = {};
//! [part5]
ov::Core core;
-// Read a network in IR, PaddlePaddle, or ONNX format:
+// Read a network in IR, TensorFlow, PaddlePaddle, or ONNX format:
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
-// Configure CPU and the MYRIAD devices when compiled model
+// Configure the CPU and the GPU devices when compiling the model
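+// Illustration only: the config maps may hold any valid device properties, e.g.
+//   cpu_config = { ov::hint::num_requests(2) };
+//   gpu_config = { ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT) };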
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO",
ov::device::properties("CPU", cpu_config),
- ov::device::properties("MYRIAD", myriad_config));
+ ov::device::properties("GPU", gpu_config));
//! [part5]
return 0;
}
diff --git a/docs/snippets/MULTI0.cpp b/docs/snippets/MULTI0.cpp
index 89b70e6952b566..6789be0e8f76af 100644
--- a/docs/snippets/MULTI0.cpp
+++ b/docs/snippets/MULTI0.cpp
@@ -10,14 +10,14 @@ std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
// Option 1
// Pre-configure MULTI globally with explicitly defined devices,
// and compile the model on MULTI using the newly specified default device list.
-core.set_property("MULTI", ov::device::priorities("HDDL,GPU"));
+core.set_property("MULTI", ov::device::priorities("GPU.1,GPU.0"));
ov::CompiledModel compileModel0 = core.compile_model(model, "MULTI");
// Option 2
// Specify the devices to be used by MULTI explicitly at compilation.
// The following lines are equivalent:
-ov::CompiledModel compileModel1 = core.compile_model(model, "MULTI:HDDL,GPU");
-ov::CompiledModel compileModel2 = core.compile_model(model, "MULTI", ov::device::priorities("HDDL,GPU"));
+ov::CompiledModel compileModel1 = core.compile_model(model, "MULTI:GPU.1,GPU.0");
+ov::CompiledModel compileModel2 = core.compile_model(model, "MULTI", ov::device::priorities("GPU.1,GPU.0"));
diff --git a/docs/snippets/MULTI1.cpp b/docs/snippets/MULTI1.cpp
index 9c3f4dbb6a0033..8d8caa411fdf9c 100644
--- a/docs/snippets/MULTI1.cpp
+++ b/docs/snippets/MULTI1.cpp
@@ -4,22 +4,20 @@ int main() {
//! [part1]
ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
-ov::CompiledModel compileModel = core.compile_model(model, "MULTI:HDDL,GPU");
+ov::CompiledModel compileModel = core.compile_model(model, "MULTI:CPU,GPU");
// Once the priority list is set, you can alter it on the fly:
// reverse the order of priorities
-compileModel.set_property(ov::device::priorities("GPU,HDDL"));
+compileModel.set_property(ov::device::priorities("GPU,CPU"));
-// exclude some devices (in this case, HDDL)
+// exclude some devices (in this case, CPU)
compileModel.set_property(ov::device::priorities("GPU"));
// bring back the excluded devices
-compileModel.set_property(ov::device::priorities("GPU,HDDL"));
+compileModel.set_property(ov::device::priorities("GPU,CPU"));
// You cannot add new devices on the fly!
-// Attempting to do so, for example, adding CPU:
-compileModel.set_property(ov::device::priorities("CPU,GPU,HDDL"));
-// would trigger the following exception:
+// Attempting to do so will trigger the following exception:
// [ ERROR ] [NOT_FOUND] You can only change device
// priorities but not add new devices with the model's
// ov::device::priorities. CPU device was not in the original device list!
diff --git a/docs/snippets/MULTI3.cpp b/docs/snippets/MULTI3.cpp
index 29e0132948a76d..dabc7469c4d1a2 100644
--- a/docs/snippets/MULTI3.cpp
+++ b/docs/snippets/MULTI3.cpp
@@ -3,12 +3,12 @@
int main() {
//! [part3]
ov::Core core;
-std::vector<std::string> myriadDevices = core.get_property("MYRIAD", ov::available_devices);
+std::vector<std::string> GPUDevices = core.get_property("GPU", ov::available_devices);
std::string all_devices;
-for (size_t i = 0; i < myriadDevices.size(); ++i) {
- all_devices += std::string("MYRIAD.")
- + myriadDevices[i]
- + std::string(i < (myriadDevices.size() -1) ? "," : "");
+for (size_t i = 0; i < GPUDevices.size(); ++i) {
+ all_devices += std::string("GPU.")
+ + GPUDevices[i]
+ + std::string(i < (GPUDevices.size() -1) ? "," : "");
}
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI",
ov::device::priorities(all_devices));
diff --git a/docs/snippets/MULTI4.cpp b/docs/snippets/MULTI4.cpp
index ac4d4db6d438b8..ab12d447c61f91 100644
--- a/docs/snippets/MULTI4.cpp
+++ b/docs/snippets/MULTI4.cpp
@@ -1,19 +1,19 @@
#include <openvino/openvino.hpp>
int main() {
-ov::AnyMap myriad_config, gpu_config;
+ov::AnyMap cpu_config, gpu_config;
//! [part4]
ov::Core core;
// Read a network in IR, PaddlePaddle, or ONNX format:
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
-// When compiling the model on MULTI, configure GPU and HDDL
+// When compiling the model on MULTI, configure GPU and CPU
// (devices, priorities, and device configurations):
ov::CompiledModel compileModel = core.compile_model(model, "MULTI",
- ov::device::priorities("HDDL", "GPU"),
+ ov::device::priorities("GPU", "CPU"),
ov::device::properties("GPU", gpu_config),
- ov::device::properties("HDDL", myriad_config));
+ ov::device::properties("CPU", cpu_config));
// Optionally, query the optimal number of requests:
uint32_t nireq = compileModel.get_property(ov::optimal_number_of_infer_requests);
diff --git a/docs/snippets/MULTI5.cpp b/docs/snippets/MULTI5.cpp
index b4ea419c7a9df7..e1c749418b3987 100644
--- a/docs/snippets/MULTI5.cpp
+++ b/docs/snippets/MULTI5.cpp
@@ -5,7 +5,7 @@ int main() {
ov::Core core;
// // Read a model and compile it on MULTI
-ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI:HDDL,GPU");
+ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI:GPU,CPU");
// query the optimal number of requests
uint32_t nireq = compileModel.get_property(ov::optimal_number_of_infer_requests);
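+
+// Illustration only: a typical follow-up is to create that many requests up front,
+// so the MULTI scheduler can keep all of the devices busy.
+std::vector<ov::InferRequest> requests;
+for (uint32_t i = 0; i < nireq; ++i) {
+    requests.push_back(compileModel.create_infer_request());
+}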
diff --git a/docs/snippets/ov_auto.py b/docs/snippets/ov_auto.py
index 46afd3ab24c883..4f02a6f5c9a92b 100644
--- a/docs/snippets/ov_auto.py
+++ b/docs/snippets/ov_auto.py
@@ -75,23 +75,22 @@ def part4():
compiled_model0 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"HIGH"})
compiled_model1 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"MEDIUM"})
compiled_model2 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"LOW"})
- # Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
- # Result: compiled_model0 will use GPU, compiled_model1 will use MYRIAD, compiled_model2 will use CPU.
+ # Assume that all the devices (CPU and GPUs) can support all the networks.
+ # Result: compiled_model0 will use GPU.1, compiled_model1 will use GPU.0, compiled_model2 will use CPU.
# Example 2
compiled_model3 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"HIGH"})
compiled_model4 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"MEDIUM"})
compiled_model5 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"LOW"})
- # Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
- # Result: compiled_model3 will use GPU, compiled_model4 will use GPU, compiled_model5 will use MYRIAD.
+ # Assume that all the devices (CPU and GPUs) can support all the networks.
+ # Result: compiled_model3 will use GPU.1, compiled_model4 will use GPU.1, compiled_model5 will use GPU.0.
#! [part4]
def part5():
#! [part5]
core = Core()
model = core.read_model(model_path)
- core.set_property(device_name="CPU", properties={})
- core.set_property(device_name="MYRIAD", properties={})
+ # Device-specific configurations (e.g., a gpu_config or cpu_config) can be passed during compile_model()
compiled_model = core.compile_model(model=model)
compiled_model = core.compile_model(model=model, device_name="AUTO")
#! [part5]
diff --git a/docs/snippets/ov_multi.py b/docs/snippets/ov_multi.py
index f7bdda4a7c62c6..f4e538c1b842ba 100644
--- a/docs/snippets/ov_multi.py
+++ b/docs/snippets/ov_multi.py
@@ -27,27 +27,29 @@ def MULTI_1():
#! [MULTI_1]
core = Core()
model = core.read_model(model_path)
- core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"HDDL,GPU"})
+ core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"CPU,GPU"})
# Once the priority list is set, you can alter it on the fly:
# reverse the order of priorities
- core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,HDDL"})
+ core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,CPU"})
- # exclude some devices (in this case, HDDL)
+ # exclude some devices (in this case, CPU)
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU"})
# bring back the excluded devices
- core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"HDDL,GPU"})
+ core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,CPU"})
# You cannot add new devices on the fly!
- # Attempting to do so, for example, adding CPU:
- core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"CPU,HDDL,GPU"})
- # would trigger the following exception:
+ # Attempting to do so will trigger the following exception:
# [ ERROR ] [NOT_FOUND] You can only change device
# priorities but not add new devices with the model's
# ov::device::priorities. CPU device was not in the original device list!
#! [MULTI_1]
+
+# the following two pieces of code appear not to be used anywhere
+# they should be considered for removal
+
def available_devices_1():
#! [available_devices_1]
all_devices = "MULTI:"
@@ -61,7 +63,7 @@ def available_devices_2():
#! [available_devices_2]
match_list = []
all_devices = "MULTI:"
- dev_match_str = "MYRIAD"
+ dev_match_str = "GPU"
core = Core()
model = core.read_model(model_path)
for d in core.available_devices:
@@ -81,18 +83,16 @@ def available_devices_2():
def MULTI_4():
#! [MULTI_4]
core = Core()
- hddl_config = {}
+ cpu_config = {}
gpu_config = {}
# Read a network in IR, PaddlePaddle, or ONNX format:
model = core.read_model(model_path)
- # When compiling the model on MULTI, configure CPU and MYRIAD
- # (devices, priorities, and device configurations):
- core.set_property(device_name="GPU", properties=gpu_config)
- core.set_property(device_name="HDDL", properties=hddl_config)
- compiled_model = core.compile_model(model=model, device_name="MULTI:HDDL,GPU")
-
+ # When compiling the model on MULTI, configure CPU and GPU
+ # (devices, priorities, and per-device configurations are passed via the config argument of compile_model()):
+ compiled_model = core.compile_model(model=model, device_name="MULTI:GPU,CPU", config={"CPU":"NUM_STREAMS 4", "GPU":"NUM_STREAMS 8"})
+
# Optionally, query the optimal number of requests:
nireq = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
#! [MULTI_4]
diff --git a/docs/snippets/ov_properties_api.cpp b/docs/snippets/ov_properties_api.cpp
index d9a144fcf99aac..7815291ee7b90e 100644
--- a/docs/snippets/ov_properties_api.cpp
+++ b/docs/snippets/ov_properties_api.cpp
@@ -41,13 +41,6 @@ auto compiled_model_thrp = core.compile_model(model, "CPU",
//! [core_set_property_then_compile]
}
-{
-//! [device_thermal]
-auto compiled_model = core.compile_model(model, "MYRIAD");
-float temperature = compiled_model.get_property(ov::device::thermal);
-//! [device_thermal]
-}
-
{
//! [inference_num_threads]
auto compiled_model = core.compile_model(model, "CPU");
diff --git a/docs/snippets/ov_properties_api.py b/docs/snippets/ov_properties_api.py
index 2393593bf185e7..f9c4b8913294f9 100644
--- a/docs/snippets/ov_properties_api.py
+++ b/docs/snippets/ov_properties_api.py
@@ -40,11 +40,6 @@
compiled_model_thrp = core.compile_model(model, "CPU", config)
# [core_set_property_then_compile]
-# [device_thermal]
-compiled_model = core.compile_model(model, "MYRIAD")
-temperature = compiled_model.get_property("DEVICE_THERMAL")
-# [device_thermal]
-
# [inference_num_threads]
compiled_model = core.compile_model(model, "CPU")
diff --git a/src/inference/include/ie/ie_iexecutable_network.hpp b/src/inference/include/ie/ie_iexecutable_network.hpp
index 4f9f784fa9b0a2..73106ef89900a8 100644
--- a/src/inference/include/ie/ie_iexecutable_network.hpp
+++ b/src/inference/include/ie/ie_iexecutable_network.hpp
@@ -126,7 +126,7 @@ class INFERENCE_ENGINE_DEPRECATED("Use InferenceEngine::ExecutableNetwork instea
* The method is responsible to extract information
* which affects executable network execution. The list of supported configuration values can be extracted via
* ExecutableNetwork::GetMetric with the SUPPORTED_CONFIG_KEYS key, but some of these keys cannot be changed
- * dymanically, e.g. DEVICE_ID cannot changed if an executable network has already been compiled for particular
+ * dynamically, e.g. DEVICE_ID cannot be changed if an executable network has already been compiled for a particular
* device.
*
* @param name config key, can be found in ie_plugin_config.hpp