Commit ba05072
Merge branch 'releases/2024/3' into docs-plugin-api-mcs-24-3
msmykx-intel authored Aug 27, 2024
2 parents d5402ba + df88c5c commit ba05072
Showing 38 changed files with 7,568 additions and 32,509 deletions.
17 changes: 8 additions & 9 deletions docs/articles_en/about-openvino/compatibility-and-support.rst
@@ -7,18 +7,17 @@ Compatibility and Support
:hidden:

compatibility-and-support/supported-devices
+compatibility-and-support/supported-operations
compatibility-and-support/supported-models
-compatibility-and-support/supported-operations-inference-devices
-compatibility-and-support/supported-operations-framework-frontend


-:doc:`Supported Devices <compatibility-and-support/supported-devices>` - compatibility information for supported hardware accelerators.
-
-:doc:`Supported Models <compatibility-and-support/supported-models>` - a list of selected models confirmed to work with given hardware.
-
-:doc:`Supported Operations <compatibility-and-support/supported-operations-inference-devices>` - a listing of framework layers supported by OpenVINO.
-
-:doc:`Supported Operations <compatibility-and-support/supported-operations-framework-frontend>` - a listing of layers supported by OpenVINO inference devices.
+| :doc:`Supported Devices <compatibility-and-support/supported-devices>`:
+| compatibility information for supported hardware accelerators.
+| :doc:`Supported Operations <compatibility-and-support/supported-operations>`:
+| a listing of operations supported by OpenVINO, based on device and frontend conformance.
+| :doc:`AI Models verified for OpenVINO™ <compatibility-and-support/supported-models>`:
+| a list of selected models confirmed to work with Intel® Core Ultra™ Processors with the
+  OpenVINO™ toolkit.
@@ -1,12 +1,12 @@
-Supported Inference Devices
-============================
+Supported Devices
+===============================================================================================

.. meta::
   :description: Check the list of devices used by OpenVINO to run inference
                 of deep learning models.


-The OpenVINO™ runtime enables you to use a selection of devices to run your
+The OpenVINO™ runtime enables you to use the following devices to run your
deep learning models:
:doc:`CPU <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`,
:doc:`GPU <../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`,
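A minimal sketch of targeting one of these devices explicitly with the OpenVINO
Python API; the model path is hypothetical and the set of available devices
depends on the machine:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on the machine

   model = core.read_model("model.xml")         # hypothetical IR file
   compiled = core.compile_model(model, "CPU")  # target a specific device by name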
@@ -18,16 +18,20 @@ deep learning models:
Beside running inference with a specific device,
OpenVINO offers the option of running automated inference with the following inference modes:

-* :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - automatically selects the best device
-  available for the given task. It offers many additional options and optimizations, including inference on
-  multiple devices at the same time.
-* :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` - enables splitting inference among several devices
-  automatically, for example, if one device doesn't support certain operations.
-* :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>` - executes inference on multiple devices.
-  Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.
-* :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` - automatically groups inference requests to improve
-  device utilization.
+| :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`:
+| automatically selects the best device available for the given task. It offers many
+  additional options and optimizations, including inference on multiple devices at the
+  same time.
+| :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`:
+| enables splitting inference among several devices automatically, for example, if one device
+  doesn't support certain operations.
+| :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`:
+| automatically groups inference requests to improve device utilization.
+| :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>`:
+| executes inference on multiple devices. Currently, this mode is considered a legacy
+  solution. Using Automatic Device Selection instead is advised.
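The inference modes added above are selected through the same compile step as a
physical device, by passing a virtual device name. A hedged sketch, assuming a
hypothetical model.xml and a machine with both a GPU and a CPU:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # hypothetical IR file

   # Automatic Device Selection: AUTO picks the best available device.
   compiled_auto = core.compile_model(model, "AUTO")

   # Heterogeneous Inference: split the graph across devices, falling back
   # from GPU to CPU for operations the GPU does not support.
   compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")

   # Automatic Batching: triggered implicitly on GPU by the THROUGHPUT hint.
   compiled_batched = core.compile_model(
       model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"}
   )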

Feature Support and API Coverage
@@ -36,16 +40,17 @@ Feature Support and API Coverage
======================================================================================================================================== ======= ========== ===========
Supported Feature CPU GPU NPU
======================================================================================================================================== ======= ========== ===========
:doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` Yes Yes Partial
:doc:`Heterogeneous execution <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` Yes Yes No
-:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>` Yes Yes Partial
:doc:`Automatic batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` No Yes No
:doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>` Yes Yes No
-:doc:`Models caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>` Yes Partial Yes
+:doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>` Yes Partial Yes
:doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>` Yes Partial No
:doc:`Import/Export <../../documentation/openvino-ecosystem>` Yes Yes Yes
:doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` Yes Yes No
:doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>` Yes Yes Yes
:doc:`Extensibility <../../documentation/openvino-extensibility>` Yes Yes No
+:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>` Yes Yes Partial
======================================================================================================================================== ======= ========== ===========
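For example, the model caching feature from the table above is enabled with a
single property; a sketch assuming a hypothetical model.xml and a writable
cache directory:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   core.set_property({"CACHE_DIR": "./ov_cache"})  # enable caching globally

   model = core.read_model("model.xml")  # hypothetical IR file
   # The first compilation populates the cache; subsequent runs load the
   # cached blob instead of recompiling (on devices that support caching).
   compiled = core.compile_model(model, "GPU")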


@@ -80,10 +85,10 @@ topic (step 3 "Configure input and output").

.. note::

-With OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
+With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
in your solutions, revert to the 2023.3 (LTS) version.

-With OpenVINO™ 2023.0 release, support has been cancelled for:
+With the OpenVINO™ 2023.0 release, support has been cancelled for:
- Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
- Intel® Vision Accelerator Design with Intel® Movidius™
