Commit
[DOCS] bring VPU support back in 22.3.1 (#18064)
kblaszczak-intel authored Jun 14, 2023
1 parent 04bcae3 commit 12b562b
Showing 12 changed files with 1 addition and 116 deletions.
10 changes: 0 additions & 10 deletions docs/Extensibility_UG/VPU_Extensibility.md
@@ -2,16 +2,6 @@



- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective

To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you target. This page describes custom kernel support for one of the VPU devices, the Intel® Neural Compute Stick 2, which uses the MYRIAD device plugin.

> **NOTE:**
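For context on the workflow this page describes: once a custom kernel is built, the MYRIAD plugin is pointed at its XML description before the model is compiled. A minimal Python sketch, assuming the `VPU_CUSTOM_LAYERS` configuration key from the VPU custom-layer workflow; the file paths are placeholders:

```python
from openvino.runtime import Core

core = Core()

# Register the XML file describing the custom OpenCL kernels with the
# MYRIAD plugin. "VPU_CUSTOM_LAYERS" is the VPU custom-layer config key;
# the path is a placeholder.
core.set_property("MYRIAD", {"VPU_CUSTOM_LAYERS": "/path/to/custom_layers.xml"})

# Compile a model that uses the custom operation on the NCS2.
model = core.read_model("model_with_custom_op.xml")
compiled_model = core.compile_model(model, "MYRIAD")
```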
8 changes: 0 additions & 8 deletions docs/OV_Runtime_UG/deployment/deployment-manager-tool.md
@@ -14,15 +14,7 @@ To use the Deployment Manager tool, the following requirements need to be met:
* **For VPU**, see [Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/configurations-for-ivad-vpu.md).
* **For GNA**, see [Intel® Gaussian & Neural Accelerator (GNA)](../../install_guides/configurations-for-intel-gna.md).

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective


> **IMPORTANT**: The operating system on the target system must be the same as the development system on which you are creating the package. For example, if the target system is Ubuntu 18.04, the deployment package must be created from the OpenVINO™ toolkit installed on Ubuntu 18.04.
8 changes: 0 additions & 8 deletions docs/OV_Runtime_UG/supported_plugins/MYRIAD.md
@@ -1,15 +1,7 @@
# MYRIAD Device {#openvino_docs_OV_UG_supported_plugins_MYRIAD}


- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective


The OpenVINO Runtime MYRIAD plugin has been developed for inference of neural networks on Intel® Neural Compute Stick 2.
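As quick orientation for this plugin, a minimal sketch of checking that an NCS2 is visible to the runtime, using the standard OpenVINO Python API (device naming as in the 2022.x releases):

```python
from openvino.runtime import Core

core = Core()

# An attached NCS2 is reported as "MYRIAD" (or "MYRIAD.<id>" when more
# than one stick is plugged in).
print(core.available_devices)

if any(name.startswith("MYRIAD") for name in core.available_devices):
    # FULL_DEVICE_NAME is a standard informational device property.
    print(core.get_property("MYRIAD", "FULL_DEVICE_NAME"))
```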
11 changes: 1 addition & 10 deletions docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md
@@ -12,23 +12,14 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models
|------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
|[GPU plugin](GPU.md) |Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
|[CPU plugin](CPU.md) |Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
- |[VPU plugins](VPU.md) (available in the Intel® Distribution of OpenVINO™ toolkit) |Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |
+ |[VPU plugin](VPU.md) (available in the Intel® Distribution of OpenVINO™ toolkit) |Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |
|[GNA plugin](GNA.md) (available in the Intel® Distribution of OpenVINO™ toolkit) |Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, Intel® Core™ i3-1000G4 Processor|
|[Arm® CPU plugin](ARM_CPU.md) (unavailable in the Intel® Distribution of OpenVINO™ toolkit) |Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
|[Multi-Device execution](../multi_device.md) |Multi-Device execution enables simultaneous inference of the same model on several devices in parallel |
|[Auto-Device plugin](../auto_device_selection.md) |Auto-Device plugin enables selecting Intel® device for inference automatically |
|[Heterogeneous plugin](../hetero_execution.md) |Heterogeneous execution enables automatic inference splitting between several devices (for example, if a device doesn't [support a certain operation](#supported-layers)). |


- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective


> **NOTE**: ARM® CPU plugin is a community-level add-on to OpenVINO™. Intel® welcomes community participation in the OpenVINO™ ecosystem, technical questions and code contributions on community forums. However, this component has not undergone full release validation or qualification from Intel®, hence no official support is offered.
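To make the table above concrete: each plugin is addressed by its device name when compiling a model. A minimal sketch; the model path is a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# Device names map one-to-one onto the plugins listed in the table.
for device in ("CPU", "GPU", "MYRIAD", "GNA"):
    if device in core.available_devices:
        compiled = core.compile_model(model, device)
        print(f"{device}: compiled successfully")
```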
11 changes: 0 additions & 11 deletions docs/OV_Runtime_UG/supported_plugins/VPU.md
@@ -13,17 +13,6 @@




- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective

This chapter provides information on the OpenVINO™ Runtime plugins that enable inference of deep learning models on the supported VPU devices:

* Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X — Supported by the [MYRIAD Plugin](MYRIAD.md)
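For reference, VPU devices participate in the same device-selection scheme as the other plugins, including MULTI. A hedged sketch with a placeholder model:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# Use the NCS2 on its own...
compiled_vpu = core.compile_model(model, "MYRIAD")

# ...or pair it with the CPU via the MULTI plugin, so both devices
# serve inference requests in parallel.
compiled_multi = core.compile_model(model, "MULTI:MYRIAD,CPU")
```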
9 changes: 0 additions & 9 deletions docs/get_started/get_started_demos.md
@@ -378,15 +378,6 @@ The following two examples show how to run the same sample using GPU or MYRIAD a

#### Running Inference on MYRIAD

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective

> **NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires [additional hardware configuration steps](../install_guides/configurations-for-ncs2.md), as described earlier on this page.
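The samples select the target with a `-d MYRIAD` flag; the equivalent direct API calls look roughly like this (file names and the zero-filled input are placeholders for a real model and image):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# "MYRIAD" plays the role of the sample's `-d MYRIAD` argument.
compiled = core.compile_model(model, "MYRIAD")

# Dummy input shaped like the model's first input; a real run would load
# and preprocess an image instead.
dummy = np.zeros(tuple(compiled.inputs[0].shape), dtype=np.float32)
results = compiled.infer_new_request({0: dummy})
print(next(iter(results.values())).shape)
```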
13 changes: 0 additions & 13 deletions docs/install_guides/configurations-for-iei-card.md
@@ -1,19 +1,6 @@
# Configurations for IEI Mustang-V100-MX8-R10 Card {#openvino_docs_install_guides_movidius_setup_guide}

- > **warning:**
- > OpenVINO 2022.3, temporarily, does not support the VPU devices.
- > The feature will be re-implemented with the next update. Until then,
- > continue using a previous release of OpenVINO, if you work with VPUs.

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective

@sphinxdirective

8 changes: 0 additions & 8 deletions docs/install_guides/configurations-for-ivad-vpu.md
@@ -1,15 +1,7 @@
# Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs {#openvino_docs_install_guides_installing_openvino_ivad_vpu}


- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective


@sphinxdirective
11 changes: 0 additions & 11 deletions docs/install_guides/configurations-for-ncs2.md
@@ -43,17 +43,6 @@ You've completed all required configuration steps to perform inference on Intel

## macOS

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective


These steps are required only if you want to perform inference on Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU.

To perform inference on Intel® Neural Compute Stick 2, the `libusb` library is required. You can build it from the [source code](https://github.com/libusb/libusb) or install it using the macOS package manager of your choice: [Homebrew](https://brew.sh/), [MacPorts](https://www.macports.org/), or another.
10 changes: 0 additions & 10 deletions docs/install_guides/installing-openvino-overview.md
@@ -20,16 +20,6 @@ Intel® Distribution of OpenVINO™ Toolkit is a comprehensive toolkit for devel
* Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels.
* Compatible with models from a wide variety of frameworks, including TensorFlow, PyTorch, PaddlePaddle, ONNX, and more.

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective



## Install OpenVINO
10 changes: 0 additions & 10 deletions docs/install_guides/installing-openvino-yocto.md
@@ -65,16 +65,6 @@ Follow the [Yocto Project official documentation](https://docs.yoctoproject.org/
CORE_IMAGE_EXTRA_INSTALL:append = " openvino-model-optimizer"
```

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective



## Step 2: Build a Yocto Image with OpenVINO Packages
8 changes: 0 additions & 8 deletions docs/install_guides/troubleshooting-issues.md
@@ -46,15 +46,7 @@ Try one of these solutions:

<!-- this part was taken from original configurations-for-ivad-vpu.md -->

- @sphinxdirective
-
- .. warning::
-
-    OpenVINO 2022.3, temporarily, does not support the VPU devices.
-    The feature will be re-implemented with the next update. Until then,
-    continue using a previous release of OpenVINO, if you work with VPUs.
-
- @endsphinxdirective

### Unable to run inference with the MYRIAD Plugin after running with the HDDL Plugin

