[DOCS] remove mentions of myriad throughout docs #15690

Merged
6 changes: 3 additions & 3 deletions docs/OV_Runtime_UG/AutoPlugin_Debugging.md
@@ -53,9 +53,9 @@ in which the `LOG_LEVEL` is represented by the first letter of its name (ERROR b
@sphinxdirective
.. code-block:: sh

[17:09:36.6188]D[plugin.cpp:167] deviceName:MYRIAD, defaultDeviceID:, uniqueName:MYRIAD_
[17:09:36.6242]I[executable_network.cpp:181] [AUTOPLUGIN]:select device:MYRIAD
[17:09:36.6809]ERROR[executable_network.cpp:384] [AUTOPLUGIN] load failed, MYRIAD:[ GENERAL_ERROR ]
[17:09:36.6188]D[plugin.cpp:167] deviceName:GPU, defaultDeviceID:, uniqueName:GPU_
[17:09:36.6242]I[executable_network.cpp:181] [AUTOPLUGIN]:select device:GPU
[17:09:36.6809]ERROR[executable_network.cpp:384] [AUTOPLUGIN] load failed, GPU:[ GENERAL_ERROR ]
@endsphinxdirective
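
For reference, here is a minimal sketch of how the log level can also be raised programmatically when compiling on AUTO (assuming the `ov::log::level` property is supported by your OpenVINO version; the model path is illustrative):

```
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

    // Ask the AUTO plugin to emit debug-level messages while it selects and loads devices.
    ov::CompiledModel compiled_model = core.compile_model(model, "AUTO",
        ov::log::level(ov::log::Level::DEBUG));
    return 0;
}
```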


2 changes: 1 addition & 1 deletion docs/OV_Runtime_UG/hetero_execution.md
@@ -141,7 +141,7 @@ You can use the GraphViz utility or a file converter to view the images. On the

You can use performance data (in the sample applications, it is the `-pc` option) to get performance data on each subgraph.

Here is an example of the output for Googlenet v1 running on HDDL with fallback to CPU:
Here is an example of the output for Googlenet v1 running on HDDL (device no longer supported) with fallback to CPU:

```
subgraph1: 1. input preprocessing (mean data/HDDL):EXECUTED layerType: realTime: 129 cpu: 129 execType:
```
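
A minimal sketch of how the same per-layer performance data can be collected programmatically rather than through the `-pc` sample option (the device list and model path are illustrative):

```
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

    // Enable profiling so per-node timings are collected during inference.
    ov::CompiledModel compiled_model = core.compile_model(model, "HETERO:GPU,CPU",
        ov::enable_profiling(true));

    ov::InferRequest request = compiled_model.create_infer_request();
    request.infer();

    // Print the name, implementation type, and wall-clock time of each executed node.
    for (const ov::ProfilingInfo& info : request.get_profiling_info()) {
        std::cout << info.node_name << " (" << info.exec_type << "): "
                  << info.real_time.count() << " us" << std::endl;
    }
    return 0;
}
```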
3 changes: 0 additions & 3 deletions docs/OV_Runtime_UG/img/yolo_tiny_v1.svg

This file was deleted.

9 changes: 5 additions & 4 deletions docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
@@ -72,15 +72,16 @@ A simple programmatic way to enumerate the devices and use with the multi-device

@endsphinxdirective

Beyond the typical "CPU", "GPU", and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed with the hello_query_sample:
Beyond the typical "CPU", "GPU", and so on, when multiple instances of a device are available, the names are more qualified. For example, this is how two GPUs can be listed (iGPU is always GPU.0):

```
...
Device: MYRIAD.1.2-ma2480
Device: GPU.0
...
Device: MYRIAD.1.4-ma2480
Device: GPU.1
```
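
A minimal sketch of how such names can be enumerated programmatically, similar to what the hello_query sample prints (the reported names depend on the hardware present):

```
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;

    // List every device visible to the OpenVINO Runtime, e.g. CPU, GPU.0, GPU.1.
    for (const std::string& device : core.get_available_devices()) {
        std::cout << "Device: " << device << std::endl;
    }
    return 0;
}
```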

So, the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480". Accordingly, the code that loops over all available devices of the "MYRIAD" type only is as follows:
So, the explicit configuration to use both would be "MULTI:GPU.1,GPU.0". Accordingly, the code that loops over all available devices of the "GPU" type only is as follows:

@sphinxdirective

@@ -3,7 +3,7 @@ Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}

The OpenVINO Runtime can infer models in different formats with various input and output formats. This section provides supported and optimal configurations per device. In OpenVINO™ documentation, "device" refers to an Intel® processor used for inference, which can be a supported CPU, GPU, or GNA (Gaussian & Neural Accelerator coprocessor), or a combination of those devices.

> **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick support has been cancelled.
> **NOTE**: With the OpenVINO™ 2023.0 release, support has been discontinued for all VPU accelerators based on Intel® Movidius™.

The OpenVINO Runtime provides unique capabilities to infer deep learning models on the following device types with corresponding plugins:

18 changes: 0 additions & 18 deletions docs/OV_Runtime_UG/supported_plugins/config_properties.md
@@ -180,24 +180,6 @@ The `ov::CompiledModel::get_property` method is used to get property values the

@endsphinxtabset

Or the current temperature of the `MYRIAD` device:

@sphinxtabset

@sphinxtab{C++}

@snippet docs/snippets/ov_properties_api.cpp device_thermal

@endsphinxtab

@sphinxtab{Python}

@snippet docs/snippets/ov_properties_api.py device_thermal

@endsphinxtab

@endsphinxtabset


Or the number of threads that would be used for inference on `CPU` device:

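
The referenced snippet is collapsed in this diff; a minimal sketch of such a query could look like this (the model path is illustrative):

```
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("sample.xml", "CPU");

    // Read back how many threads the CPU plugin will use for inference.
    int32_t num_threads = compiled_model.get_property(ov::inference_num_threads);
    std::cout << "CPU inference threads: " << num_threads << std::endl;
    return 0;
}
```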
2 changes: 1 addition & 1 deletion docs/gapi/gapi_face_analytics_pipeline.md
@@ -19,7 +19,7 @@ This sample requires:
To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.

## Introduction: Why G-API
Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many these steps of such pipeline can run in parallel – modern platforms have different hardware blocks on the same chip like decoders and GPUs, and extra accelerators can be plugged in as extensions, like Intel® Movidius™ Neural Compute Stick for deep learning offload.
Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many of these steps can run in parallel – modern platforms have different hardware blocks on the same chip, like decoders and GPUs, and extra accelerators can be plugged in as extensions for deep learning offload.

Given this manifold of options and the variety of video analytics algorithms, managing such pipelines effectively quickly becomes a problem. It can certainly be done manually, but this approach doesn't scale: if a change is required in the algorithm (e.g. a new pipeline step is added), or if it is ported to a new platform with different capabilities, the whole pipeline needs to be re-optimized.

3 changes: 1 addition & 2 deletions docs/install_guides/installing-openvino-docker-linux.md
@@ -63,7 +63,7 @@ You can also try our [Tutorials](https://github.com/openvinotoolkit/docker_ci/tr

## <a name="configure-image-docker-linux"></a>Configuring the Image for Different Devices

If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
If you want to run inference on a CPU, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.

### Configuring Docker Image for GPU

@@ -175,5 +175,4 @@ docker run -itu root:root --rm --device /dev/dri:/dev/dri <image_name>

- [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- Intel® Neural Compute Stick 2 Get Started: [https://software.intel.com/en-us/neural-compute-stick/get-started](https://software.intel.com/en-us/neural-compute-stick/get-started)
- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
8 changes: 4 additions & 4 deletions docs/snippets/AUTO4.cpp
@@ -15,8 +15,8 @@ ov::CompiledModel compiled_model1 = core.compile_model(model, "AUTO",
ov::CompiledModel compiled_model2 = core.compile_model(model, "AUTO",
ov::hint::model_priority(ov::hint::Priority::LOW));
/************
Assume that all the devices (CPU, GPU, and MYRIAD) can support all the models.
Result: compiled_model0 will use GPU, compiled_model1 will use MYRIAD, compiled_model2 will use CPU.
Assume that all the devices (CPU and GPUs) can support all the models.
Result: compiled_model0 will use GPU.1, compiled_model1 will use GPU.0, compiled_model2 will use CPU.
************/

// Example 2
@@ -27,8 +27,8 @@ ov::CompiledModel compiled_model4 = core.compile_model(model, "AUTO",
ov::CompiledModel compiled_model5 = core.compile_model(model, "AUTO",
ov::hint::model_priority(ov::hint::Priority::LOW));
/************
Assume that all the devices (CPU, GPU, and MYRIAD) can support all the models.
Result: compiled_model3 will use GPU, compiled_model4 will use GPU, compiled_model5 will use MYRIAD.
Assume that all the devices (CPU and GPUs) can support all the models.
Result: compiled_model3 will use GPU.1, compiled_model4 will use GPU.1, compiled_model5 will use GPU.0.
************/
//! [part4]
}
8 changes: 4 additions & 4 deletions docs/snippets/AUTO5.cpp
@@ -2,17 +2,17 @@

int main() {
ov::AnyMap cpu_config = {};
ov::AnyMap myriad_config = {};
ov::AnyMap gpu_config = {};
//! [part5]
ov::Core core;

// Read a network in IR, PaddlePaddle, or ONNX format:
// Read a network in IR, TensorFlow, PaddlePaddle, or ONNX format:
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

// Configure CPU and the MYRIAD devices when compiled model
// Configure the CPU and the GPU devices when compiling the model
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO",
ov::device::properties("CPU", cpu_config),
ov::device::properties("MYRIAD", myriad_config));
ov::device::properties("GPU", gpu_config));
//! [part5]
return 0;
}
6 changes: 3 additions & 3 deletions docs/snippets/MULTI0.cpp
@@ -10,14 +10,14 @@ std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
// Option 1
// Pre-configure MULTI globally with explicitly defined devices,
// and compile the model on MULTI using the newly specified default device list.
core.set_property("MULTI", ov::device::priorities("HDDL,GPU"));
core.set_property("MULTI", ov::device::priorities("GPU.1,GPU.0"));
ov::CompiledModel compileModel0 = core.compile_model(model, "MULTI");

// Option 2
// Specify the devices to be used by MULTI explicitly at compilation.
// The following lines are equivalent:
ov::CompiledModel compileModel1 = core.compile_model(model, "MULTI:HDDL,GPU");
ov::CompiledModel compileModel2 = core.compile_model(model, "MULTI", ov::device::priorities("HDDL,GPU"));
ov::CompiledModel compileModel1 = core.compile_model(model, "MULTI:GPU.1,GPU.0");
ov::CompiledModel compileModel2 = core.compile_model(model, "MULTI", ov::device::priorities("GPU.1,GPU.0"));



12 changes: 5 additions & 7 deletions docs/snippets/MULTI1.cpp
@@ -4,22 +4,20 @@ int main() {
//! [part1]
ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
ov::CompiledModel compileModel = core.compile_model(model, "MULTI:HDDL,GPU");
ov::CompiledModel compileModel = core.compile_model(model, "MULTI:CPU,GPU");

// Once the priority list is set, you can alter it on the fly:
// reverse the order of priorities
compileModel.set_property(ov::device::priorities("GPU,HDDL"));
compileModel.set_property(ov::device::priorities("GPU,CPU"));

// exclude some devices (in this case, HDDL)
// exclude some devices (in this case, CPU)
compileModel.set_property(ov::device::priorities("GPU"));

// bring back the excluded devices
compileModel.set_property(ov::device::priorities("GPU,HDDL"));
compileModel.set_property(ov::device::priorities("GPU,CPU"));

// You cannot add new devices on the fly!
// Attempting to do so, for example, adding CPU:
compileModel.set_property(ov::device::priorities("CPU,GPU,HDDL"));
// would trigger the following exception:
// Attempting to do so will trigger the following exception:
// [ ERROR ] [NOT_FOUND] You can only change device
// priorities but not add new devices with the model's
// ov::device::priorities. CPU device was not in the original device list!
10 changes: 5 additions & 5 deletions docs/snippets/MULTI3.cpp
@@ -3,12 +3,12 @@
int main() {
//! [part3]
ov::Core core;
std::vector<std::string> myriadDevices = core.get_property("MYRIAD", ov::available_devices);
std::vector<std::string> GPUDevices = core.get_property("GPU", ov::available_devices);
std::string all_devices;
for (size_t i = 0; i < myriadDevices.size(); ++i) {
all_devices += std::string("MYRIAD.")
+ myriadDevices[i]
+ std::string(i < (myriadDevices.size() -1) ? "," : "");
for (size_t i = 0; i < GPUDevices.size(); ++i) {
all_devices += std::string("GPU.")
+ GPUDevices[i]
+ std::string(i < (GPUDevices.size() -1) ? "," : "");
}
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI",
ov::device::priorities(all_devices));
8 changes: 4 additions & 4 deletions docs/snippets/MULTI4.cpp
@@ -1,19 +1,19 @@
#include <openvino/openvino.hpp>

int main() {
ov::AnyMap myriad_config, gpu_config;
ov::AnyMap cpu_config, gpu_config;
//! [part4]
ov::Core core;

// Read a network in IR, PaddlePaddle, or ONNX format:
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");

// When compiling the model on MULTI, configure GPU and HDDL
// When compiling the model on MULTI, configure GPU and CPU
// (devices, priorities, and device configurations):
ov::CompiledModel compileModel = core.compile_model(model, "MULTI",
ov::device::priorities("HDDL", "GPU"),
ov::device::priorities("GPU", "CPU"),
ov::device::properties("GPU", gpu_config),
ov::device::properties("HDDL", myriad_config));
ov::device::properties("CPU", cpu_config));

// Optionally, query the optimal number of requests:
uint32_t nireq = compileModel.get_property(ov::optimal_number_of_infer_requests);
2 changes: 1 addition & 1 deletion docs/snippets/MULTI5.cpp
@@ -5,7 +5,7 @@ int main() {
ov::Core core;

// // Read a model and compile it on MULTI
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI:HDDL,GPU");
ov::CompiledModel compileModel = core.compile_model("sample.xml", "MULTI:GPU,CPU");

// query the optimal number of requests
uint32_t nireq = compileModel.get_property(ov::optimal_number_of_infer_requests);
10 changes: 5 additions & 5 deletions docs/snippets/ov_auto.py
@@ -75,23 +75,23 @@ def part4():
compiled_model0 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"HIGH"})
compiled_model1 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"MEDIUM"})
compiled_model2 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"LOW"})
# Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
# Result: compiled_model0 will use GPU, compiled_model1 will use MYRIAD, compiled_model2 will use CPU.
# Assume that all the devices (CPU and GPUs) can support all the networks.
# Result: compiled_model0 will use GPU.1, compiled_model1 will use GPU.0, compiled_model2 will use CPU.

# Example 2
compiled_model3 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"HIGH"})
compiled_model4 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"MEDIUM"})
compiled_model5 = core.compile_model(model=model, device_name="AUTO", config={"MODEL_PRIORITY":"LOW"})
# Assume that all the devices (CPU, GPU, and MYRIAD) can support all the networks.
# Result: compiled_model3 will use GPU, compiled_model4 will use GPU, compiled_model5 will use MYRIAD.
# Assume that all the devices (CPU and GPUs) can support all the networks.
# Result: compiled_model3 will use GPU.1, compiled_model4 will use GPU.1, compiled_model5 will use GPU.0.
#! [part4]

def part5():
#! [part5]
core = Core()
model = core.read_model(model_path)
core.set_property(device_name="CPU", properties={})
core.set_property(device_name="MYRIAD", properties={})
core.set_property(device_name="GPU", properties={})
compiled_model = core.compile_model(model=model)
compiled_model = core.compile_model(model=model, device_name="AUTO")
#! [part5]
28 changes: 15 additions & 13 deletions docs/snippets/ov_multi.py
@@ -27,27 +27,29 @@ def MULTI_1():
#! [MULTI_1]
core = Core()
model = core.read_model(model_path)
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"HDDL,GPU"})
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"CPU,GPU"})
# Once the priority list is set, you can alter it on the fly:
# reverse the order of priorities
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,HDDL"})
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,CPU"})

# exclude some devices (in this case, HDDL)
# exclude some devices (in this case, CPU)
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU"})

# bring back the excluded devices
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"HDDL,GPU"})
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"GPU,CPU"})

# You cannot add new devices on the fly!
# Attempting to do so, for example, adding CPU:
core.set_property(device_name="MULTI", properties={"MULTI_DEVICE_PRIORITIES":"CPU,HDDL,GPU"})
# would trigger the following exception:
# Attempting to do so will trigger the following exception:
# [ ERROR ] [NOT_FOUND] You can only change device
# priorities but not add new devices with the model's
# ov::device::priorities. CPU device was not in the original device list!

#! [MULTI_1]


# the following two pieces of code appear not to be used anywhere
# they should be considered for removal

def available_devices_1():
#! [available_devices_1]
all_devices = "MULTI:"
@@ -61,7 +63,7 @@ def available_devices_2():
#! [available_devices_2]
match_list = []
all_devices = "MULTI:"
dev_match_str = "MYRIAD"
dev_match_str = "GPU"
core = Core()
model = core.read_model(model_path)
for d in core.available_devices:
@@ -81,18 +83,18 @@ def MULTI_4():
def MULTI_4():
#! [MULTI_4]
core = Core()
hddl_config = {}
cpu_config = {}
gpu_config = {}

# Read a network in IR, PaddlePaddle, or ONNX format:
model = core.read_model(model_path)

# When compiling the model on MULTI, configure CPU and MYRIAD
# When compiling the model on MULTI, configure CPU and GPU
# (devices, priorities, and device configurations):
core.set_property(device_name="GPU", properties=gpu_config)
core.set_property(device_name="HDDL", properties=hddl_config)
compiled_model = core.compile_model(model=model, device_name="MULTI:HDDL,GPU")
core.set_property(device_name="CPU", properties=cpu_config)
compiled_model = core.compile_model(model=model, device_name="MULTI:GPU,CPU", config={"CPU":"NUM_STREAMS 4", "GPU":"NUM_STREAMS 8"})

# Optionally, query the optimal number of requests:
nireq = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
#! [MULTI_4]
7 changes: 0 additions & 7 deletions docs/snippets/ov_properties_api.cpp
@@ -41,13 +41,6 @@ auto compiled_model_thrp = core.compile_model(model, "CPU",
//! [core_set_property_then_compile]
}

{
//! [device_thermal]
auto compiled_model = core.compile_model(model, "MYRIAD");
float temperature = compiled_model.get_property(ov::device::thermal);
//! [device_thermal]
}

{
//! [inference_num_threads]
auto compiled_model = core.compile_model(model, "CPU");
5 changes: 0 additions & 5 deletions docs/snippets/ov_properties_api.py
@@ -40,11 +40,6 @@
compiled_model_thrp = core.compile_model(model, "CPU", config)
# [core_set_property_then_compile]

# [device_thermal]
compiled_model = core.compile_model(model, "MYRIAD")
temperature = compiled_model.get_property("DEVICE_THERMAL")
# [device_thermal]


# [inference_num_threads]
compiled_model = core.compile_model(model, "CPU")
2 changes: 1 addition & 1 deletion src/inference/include/ie/ie_iexecutable_network.hpp
@@ -126,7 +126,7 @@ class INFERENCE_ENGINE_DEPRECATED("Use InferenceEngine::ExecutableNetwork instea
* The method is responsible to extract information
* which affects executable network execution. The list of supported configuration values can be extracted via
* ExecutableNetwork::GetMetric with the SUPPORTED_CONFIG_KEYS key, but some of these keys cannot be changed
* dymanically, e.g. DEVICE_ID cannot changed if an executable network has already been compiled for particular
* dynamically, e.g. DEVICE_ID cannot be changed if an executable network has already been compiled for a particular
* device.
*
* @param name config key, can be found in ie_plugin_config.hpp
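
A minimal sketch of the query described above, using the deprecated InferenceEngine API that this header belongs to (the model path and device name are illustrative):

```
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("sample.xml");
    InferenceEngine::ExecutableNetwork exec_network = ie.LoadNetwork(network, "CPU");

    // Enumerate the config keys that can still be queried or changed on the compiled network.
    auto keys = exec_network.GetMetric(METRIC_KEY(SUPPORTED_CONFIG_KEYS))
                    .as<std::vector<std::string>>();
    for (const auto& key : keys) {
        std::cout << key << std::endl;
    }
    return 0;
}
```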