
Commit: additional cleanup

kblaszczak-intel committed Feb 16, 2023
1 parent d19b098 commit 51e63f7
Showing 5 changed files with 4 additions and 8 deletions.
3 changes: 0 additions & 3 deletions docs/OV_Runtime_UG/img/yolo_tiny_v1.svg

This file was deleted.

2 changes: 1 addition & 1 deletion docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md
@@ -3,7 +3,7 @@ Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}

The OpenVINO Runtime can infer models in different formats, with various input and output formats. This section provides supported and optimal configurations per device. In OpenVINO™ documentation, "device" refers to an Intel® processor used for inference, which can be a supported CPU, GPU, or GNA (Gaussian & Neural Accelerator coprocessor), or a combination of those devices.

- > **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick support has been cancelled.
+ > **NOTE**: With OpenVINO™ 2023.0 release, support has been cancelled for all VPU accelerators based on Intel® Movidius™.
The OpenVINO Runtime provides unique capabilities to infer deep learning models on the following device types with corresponding plugins:
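To illustrate how these device plugins are selected at runtime, here is a minimal sketch (not part of this commit) using the OpenVINO C++ API; `model.xml` is a hypothetical IR file used only for illustration:

```cpp
#include <openvino/openvino.hpp>

#include <iostream>

int main() {
    ov::Core core;

    // Enumerate the devices the runtime can see (e.g. CPU, GPU, GNA).
    for (const auto& device : core.get_available_devices())
        std::cout << device << "\n";

    // Compile the model for a specific device plugin by name.
    auto model = core.read_model("model.xml");  // hypothetical IR path
    auto compiled = core.compile_model(model, "CPU");
    return 0;
}
```

Swapping `"CPU"` for `"GPU"` or `"GNA"` routes the same model through a different device plugin.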

2 changes: 1 addition & 1 deletion docs/gapi/gapi_face_analytics_pipeline.md
@@ -19,7 +19,7 @@ This sample requires:
To download the models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.

## Introduction: Why G-API
- Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many of these steps can run in parallel – modern platforms have different hardware blocks on the same chip, like decoders and GPUs, and extra accelerators can be plugged in as extensions, like Intel® Movidius™ Neural Compute Stick for deep learning offload.
+ Many computer vision algorithms run on a video stream rather than on individual images. Stream processing usually consists of multiple steps – like decode, preprocessing, detection, tracking, classification (on detected objects), and visualization – forming a *video processing pipeline*. Moreover, many of these steps can run in parallel – modern platforms have different hardware blocks on the same chip, like decoders and GPUs, and extra accelerators can be plugged in as extensions for deep learning offload.

Given this manifold of options, and the variety of video analytics algorithms, managing such pipelines effectively quickly becomes a problem. It can certainly be done manually, but that approach doesn't scale: if a change is required in the algorithm (e.g. a new pipeline step is added), or if it is ported to a new platform with different capabilities, the whole pipeline needs to be re-optimized.
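This is exactly what G-API's graph model addresses: the pipeline is declared once as a graph, and the framework schedules its steps. As a minimal streaming sketch (illustrative, not from this sample; it assumes OpenCV built with G-API, and `input.mp4` is a placeholder path):

```cpp
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/streaming/cap.hpp>

int main() {
    // Describe the pipeline once, as a graph over empty "G" placeholders.
    cv::GMat in;
    cv::GMat gray    = cv::gapi::BGR2Gray(in);
    cv::GMat blurred = cv::gapi::blur(gray, cv::Size(5, 5));
    cv::GComputation pipeline(cv::GIn(in), cv::GOut(blurred));

    // Compile it for streaming mode and attach a video source.
    auto stream = pipeline.compileStreaming();
    stream.setSource(cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>("input.mp4"));
    stream.start();

    // Pull processed frames; G-API overlaps pipeline steps internally.
    cv::Mat out;
    while (stream.pull(cv::gout(out))) {
        // ... visualize or consume 'out' here ...
    }
    return 0;
}
```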

3 changes: 1 addition & 2 deletions docs/install_guides/installing-openvino-docker-linux.md
@@ -63,7 +63,7 @@ You can also try our [Tutorials](https://github.com/openvinotoolkit/docker_ci/tr

## <a name="configure-image-docker-linux"></a>Configuring the Image for Different Devices

- If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
+ If you want to run inferences on a CPU, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
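In other words, the CPU case needs no device flags at all; a minimal sketch, with `<image_name>` standing in for whatever OpenVINO image you pulled:

```sh
# CPU inference needs no extra devices mounted into the container.
docker run -it --rm <image_name>
```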

### Configuring Docker Image for GPU

@@ -175,5 +175,4 @@ docker run -itu root:root --rm --device /dev/dri:/dev/dri <image_name>

- [DockerHub CI Framework](https://github.com/openvinotoolkit/docker_ci) for Intel® Distribution of OpenVINO™ toolkit. The Framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse available Dockerfiles, add your layer and customize the image of OpenVINO™ for your needs.
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- - Intel® Neural Compute Stick 2 Get Started: [https://software.intel.com/en-us/neural-compute-stick/get-started](https://software.intel.com/en-us/neural-compute-stick/get-started)
- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
2 changes: 1 addition & 1 deletion src/inference/include/ie/ie_iexecutable_network.hpp
@@ -126,7 +126,7 @@ class INFERENCE_ENGINE_DEPRECATED("Use InferenceEngine::ExecutableNetwork instea
* The method is responsible for extracting information
* which affects executable network execution. The list of supported configuration values can be extracted via
* ExecutableNetwork::GetMetric with the SUPPORTED_CONFIG_KEYS key, but some of these keys cannot be changed
- * dymanically, e.g. DEVICE_ID cannot be changed if an executable network has already been compiled for a particular
+ * dynamically, e.g. DEVICE_ID cannot be changed if an executable network has already been compiled for a particular
* device.
*
* @param name config key, can be found in ie_plugin_config.hpp
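As a usage sketch for the documented behavior (illustrative only, not from this header; `model.xml` is a placeholder path), querying the supported config keys through this deprecated API might look like:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core ie;
    auto network = ie.ReadNetwork("model.xml");  // hypothetical IR path
    auto exec = ie.LoadNetwork(network, "CPU");

    // List the config keys this executable network supports. Some of them,
    // such as DEVICE_ID, cannot be changed dynamically once the network has
    // been compiled for a particular device.
    auto keys = exec.GetMetric(METRIC_KEY(SUPPORTED_CONFIG_KEYS))
                    .as<std::vector<std::string>>();
    for (const auto& key : keys)
        std::cout << key << "\n";
    return 0;
}
```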
