Commit 19b72b2

Merge remote-tracking branch 'upstream/releases/2023/1' into docs-update

ilya-lavrenov committed Sep 17, 2023
2 parents bd7cf10 + 2d97a5d
Showing 191 changed files with 10,350 additions and 6,595 deletions.
233 changes: 205 additions & 28 deletions docs/Documentation/model_introduction.md

Large diffs are not rendered by default.

16 changes: 9 additions & 7 deletions docs/Documentation/openvino_legacy_features.md
@@ -7,6 +7,7 @@
:hidden:

OpenVINO Development Tools package <openvino_docs_install_guides_install_dev_tools>
+Model Optimizer / Conversion API <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>
OpenVINO API 2.0 transition <openvino_2_0_transition_guide>
Open Model ZOO <model_zoo>
Apache MXNet, Caffe, and Kaldi <mxnet_caffe_kaldi>
@@ -36,16 +37,17 @@ offering.
| :doc:`See how to install Development Tools <openvino_docs_install_guides_install_dev_tools>`


-| **Model Optimizer**
+| **Model Optimizer / Conversion API**
| *New solution:* Direct model support and OpenVINO Converter (OVC)
-| *Old solution:* Model Optimizer discontinuation planned for OpenVINO 2025.0
+| *Old solution:* Legacy Conversion API discontinuation planned for OpenVINO 2025.0
|
-| Model Optimizer's role was largely reduced when all major model frameworks became
-  supported directly. For the sole purpose of converting model files explicitly,
-  it has been replaced with a more light-weight and efficient solution, the
-  OpenVINO Converter (launched with OpenVINO 2023.1).
+| The role of Model Optimizer and later the Conversion API was largely reduced
+  when all major model frameworks became supported directly. For converting model
+  files explicitly, it has been replaced with a more light-weight and efficient
+  solution, the OpenVINO Converter (launched with OpenVINO 2023.1).

-.. :doc:`See how to use OVC <?????????>`
+| :doc:`See how to use OVC <openvino_docs_model_processing_introduction>`
+| :doc:`See how to transition from the legacy solution <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`


| **Open Model ZOO**
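For orientation, a minimal sketch of the replacement workflow described above, assuming an ONNX file as a placeholder input:

```python
import openvino as ov

# OpenVINO 2023.1+: convert directly, with no Model Optimizer involved
ov_model = ov.convert_model("model.onnx")  # placeholder model file
ov.save_model(ov_model, "model.xml")       # writes a compressed FP16 IR by default
```

The same conversion is available on the command line as ``ovc model.onnx``.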
7 changes: 1 addition & 6 deletions docs/Extensibility_UG/Intro.md
@@ -22,11 +22,6 @@
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>

-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer

The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
@@ -62,7 +57,7 @@ Mapping from Framework Operation

Mapping of custom operations is implemented differently, depending on the model format used for import. You may choose one of the following:

-1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
+1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.

2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.

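As an illustration of the first option above, a hedged sketch of registering a compiled Frontend Extension library before importing a model at runtime (the library and model paths are placeholders):

```python
import openvino as ov

core = ov.Core()
# Load a shared library containing custom-operation extensions
# built against the Frontend Extension API
core.add_extension("libcustom_op_extension.so")       # placeholder path
model = core.read_model("model_with_custom_op.onnx")  # placeholder model
```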
9 changes: 6 additions & 3 deletions docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -1,4 +1,4 @@
-# Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
+# Legacy Conversion API {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}

@sphinxdirective

@@ -14,12 +14,15 @@
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
+Supported_Model_Formats_MO_DG

.. meta::
   :description: Model conversion (MO) furthers the transition between training and
                 deployment environments, it adjusts deep learning models for
                 optimal execution on target devices.

+.. note::
+   This part of the documentation describes the legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API is available: ``openvino.convert_model`` and the OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation <openvino_docs_model_processing_introduction>` for details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, this documentation still applies, but consider the :doc:`transition guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>` to learn how to migrate to the new conversion API. Depending on the model topology, the new API can be a better option.

To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:

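The command itself is collapsed in this diff. For orientation, a hedged sketch of the legacy Python call this page documents (the model path is a placeholder):

```python
# Legacy conversion API; discontinuation planned for OpenVINO 2025.0
from openvino.tools.mo import convert_model

ov_model = convert_model("model.onnx")  # placeholder model file
```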
10 changes: 5 additions & 5 deletions docs/MO_DG/prepare_model/FP16_Compression.md
@@ -3,7 +3,7 @@
@sphinxdirective

By default, when IR is saved all relevant floating-point weights are compressed to ``FP16`` data type during model conversion.
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,
but it is negligible for most models.
If the accuracy drop is significant, the user can disable compression explicitly.
@@ -29,20 +29,20 @@ To disable compression, use the ``compress_to_fp16=False`` option:
mo --input_model INPUT_MODEL --compress_to_fp16=False


For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.

.. note::

``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
-Refer to the :doc:`Post-training optimization <pot_introduction>` guide for more
+Refer to the :doc:`Post-training optimization <ptq_introduction>` guide for more
information about that.


.. note::

Some large models (larger than a few GB) when compressed to ``FP16`` may consume an overly large amount of RAM on the loading
phase of the inference. If that is the case for your model, try to convert it without compression:
``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)``


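As a brief sketch, the same switch through the legacy Python API (the model path is a placeholder):

```python
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# Disable the default FP16 weight compression to keep the original precision
ov_model = convert_model("model.onnx", compress_to_fp16=False)  # placeholder model
serialize(ov_model, "model.xml")
```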
2 changes: 1 addition & 1 deletion docs/MO_DG/prepare_model/MO_Python_API.md
@@ -4,7 +4,7 @@

Model conversion API is represented by the ``convert_model()`` method in the openvino.tools.mo namespace. ``convert_model()`` is compatible with types from openvino.runtime, like PartialShape, Layout, Type, etc.

-``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a Pytorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.
+``convert_model()`` has the ability available from the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or TensorFlow Keras model directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.

.. note::

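To make the in-memory path concrete, a hedged sketch of converting a PyTorch module directly; the torchvision model and example input are illustrative assumptions:

```python
import torch
import torchvision
from openvino.tools.mo import convert_model

# Convert an in-memory PyTorch model without saving it to a file first
pt_model = torchvision.models.resnet18(weights=None).eval()
ov_model = convert_model(pt_model, example_input=torch.randn(1, 3, 224, 224))
```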
@@ -6,19 +6,11 @@
:description: Learn how to convert a model from the
ONNX format to the OpenVINO Intermediate Representation.


Introduction to ONNX
####################

`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.

.. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
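Following the note above, a minimal sketch of reading an ONNX model directly, skipping the explicit IR conversion (the model path is a placeholder):

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.onnx")        # placeholder model file
compiled = core.compile_model(model, "CPU")  # ready for inference
```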

Converting an ONNX Model
########################

This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format.

The model conversion process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.

.. tab-set::
@@ -7,10 +7,10 @@
TensorFlow format to the OpenVINO Intermediate Representation.


-This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
+This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format.

.. note:: TensorFlow models are supported via :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
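Following the note above, a minimal sketch of the direct path for TensorFlow models; the frozen-graph path is a placeholder:

```python
import openvino as ov

core = ov.Core()
# compile_model can consume the model file directly via the TensorFlow FrontEnd
compiled = core.compile_model("model.pb", "CPU")  # placeholder frozen graph
```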

+The conversion instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.

Converting TensorFlow 1 Models
###############################

@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a BERT-NER model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


The goal of this article is to present a step-by-step guide on how to convert PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a Cascade RCNN R-101
-model from Pytorch to the OpenVINO Intermediate Representation.
+model from PyTorch to the OpenVINO Intermediate Representation.


The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a F3Net model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


`F3Net <https://github.com/weijun88/F3Net>`__ : Fusion, Feedback and Focus for Salient Object Detection
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a QuartzNet model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


`NeMo project <https://github.com/NVIDIA/NeMo>`__ provides the QuartzNet model.
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a RCAN model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


`RCAN <https://github.com/yulunzhang/RCAN>`__ : Image Super-Resolution Using Very Deep Residual Channel Attention Networks
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a RNN-T model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


This guide covers conversion of RNN-T model from `MLCommons <https://github.com/mlcommons>`__ repository. Follow
@@ -4,7 +4,7 @@

.. meta::
:description: Learn how to convert a YOLACT model
-from Pytorch to the OpenVINO Intermediate Representation.
+from PyTorch to the OpenVINO Intermediate Representation.


You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.