diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md
index d0fd4535ce59c2..26038599b83362 100644
--- a/docs/Documentation/model_introduction.md
+++ b/docs/Documentation/model_introduction.md
@@ -3,64 +3,233 @@
@sphinxdirective
.. meta::
- :description: Preparing models for OpenVINO Runtime. Learn about the methods
+ :description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
-
.. toctree::
:maxdepth: 1
:hidden:
Supported_Model_Formats
- openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
+ openvino_docs_OV_Converter_UG_Conversion_Options
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, or `Torchvision models `__.
-Import a model using ``read_model()``
-#################################################
+OpenVINO™ :doc:`supports several model formats ` and provides a conversion API to convert them into its own representation, `openvino.Model `__ (`ov.Model `__). Converted models can be used for inference with one or multiple OpenVINO hardware plugins. There are two ways to use the conversion API: calling it from a Python program or using the ``ovc`` command-line tool.
-Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite ` (check :doc:`TensorFlow Frontend Capabilities and Limitations `) do not require a separate step for model conversion, that is ``mo.convert_model``.
+.. note::
-The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model `__. If the file is in one of the supported original framework :doc:`file formats `, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format `, it is read "as-is", without any conversion involved.
+   Prior to the OpenVINO 2023.1 release, the model conversion API was exposed as the ``openvino.tools.mo.convert_model`` function and the ``mo`` command-line tool.
+   Starting with the 2023.1 release, a new, simplified API was introduced: the ``openvino.convert_model`` function and the ``ovc`` command-line tool, which replace ``openvino.tools.mo.convert_model``
+   and ``mo`` respectively; the latter are now considered legacy. All new users are advised to use the new methods instead of the old ones. Note that the new API and the old API do not
+   provide the same set of features, which means the new tools are not always backward compatible with the old ones. For details, see the :doc:`Model Conversion API Transition Guide `.
-You can also convert a model from original framework to `openvino.runtime.Model `__ using ``convert_model()`` method. More details about ``convert_model()`` are provided in :doc:`model conversion guide ` .
+Convert a Model in Python: ``convert_model``
+############################################
-``ov.Model`` can be saved to IR using the ``ov.save_model()`` method. The saved IR can be further optimized using :doc:`Neural Network Compression Framework (NNCF) ` that applies post-training quantization methods.
+You can use the model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to an object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
-.. note::
+.. tab-set::
+
+ .. tab-item:: Torchvision
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ import torch
+ from torchvision.models import resnet50
+
+ model = resnet50(pretrained=True)
+
+ # prepare input_data
+ input_data = torch.rand(1, 3, 224, 224)
+
+ ov_model = ov.convert_model(model, example_input=input_data)
+
+ ###### Option 1: Save to OpenVINO IR:
+
+ # save model to OpenVINO IR for later use
+ ov.save_model(ov_model, 'model.xml')
+
+ ###### Option 2: Compile and infer with OpenVINO:
+
+ # compile model
+ compiled_model = ov.compile_model(ov_model)
+
+ # run the inference
+ result = compiled_model(input_data)
+
+ .. tab-item:: Hugging Face Transformers
+
+ .. code-block:: py
+
+ from transformers import BertTokenizer, BertModel
+
+ tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
+ model = BertModel.from_pretrained("bert-base-uncased")
+ text = "Replace me by any text you'd like."
+ encoded_input = tokenizer(text, return_tensors='pt')
+
+ import openvino as ov
+ ov_model = ov.convert_model(model, example_input={**encoded_input})
+
+ ###### Option 1: Save to OpenVINO IR:
+
+ # save model to OpenVINO IR for later use
+ ov.save_model(ov_model, 'model.xml')
+
+ ###### Option 2: Compile and infer with OpenVINO:
+
+ # compile model
+ compiled_model = ov.compile_model(ov_model)
+
+ # prepare input_data using HF tokenizer or your own tokenizer
+ # encoded_input is reused here for simplicity
+
+ # run inference
+ result = compiled_model({**encoded_input})
+
+ .. tab-item:: Keras Applications
+
+ .. code-block:: py
+
+ import tensorflow as tf
+ import openvino as ov
+
+ tf_model = tf.keras.applications.ResNet50(weights="imagenet")
+ ov_model = ov.convert_model(tf_model)
+
+ ###### Option 1: Save to OpenVINO IR:
+
+ # save model to OpenVINO IR for later use
+ ov.save_model(ov_model, 'model.xml')
+
+ ###### Option 2: Compile and infer with OpenVINO:
+
+ # compile model
+ compiled_model = ov.compile_model(ov_model)
+
+ # prepare input_data
+ import numpy as np
+ input_data = np.random.rand(1, 224, 224, 3)
- ``convert_model()`` also allows you to perform input/output cut, add pre-processing or add custom Python conversion extensions.
+ # run inference
+ result = compiled_model(input_data)
-Convert a model with Python using ``mo.convert_model()``
-###########################################################
+ .. tab-item:: TensorFlow Hub
-Model conversion API, specifically, the ``mo.convert_model()`` method converts a model from original framework to ``ov.Model``. ``mo.convert_model()`` returns ``ov.Model`` object in memory so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (python script or Jupiter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
+ .. code-block:: py
-In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model `, :doc:`set input shapes or layout `, :doc:`add preprocessing `, etc.
+ import tensorflow as tf
+ import tensorflow_hub as hub
+ import openvino as ov
-The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:
+ model = tf.keras.Sequential([
+ hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
+ ])
-* ``.xml`` - Describes the network topology.
-* ``.bin`` - Contains the weights and biases binary data.
+ # Check model page for information about input shape: https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5
+ model.build([None, 224, 224, 3])
-.. image:: _static/images/model_conversion_diagram.svg
+ model.save('mobilenet_v1_100_224') # use a temporary directory
+ ov_model = ov.convert_model('mobilenet_v1_100_224')
+
+ ###### Option 1: Save to OpenVINO IR:
+
+ ov.save_model(ov_model, 'model.xml')
+
+ ###### Option 2: Compile and infer with OpenVINO:
+
+ compiled_model = ov.compile_model(ov_model)
+
+ # prepare input_data
+ import numpy as np
+ input_data = np.random.rand(1, 224, 224, 3)
+
+ # run inference
+ result = compiled_model(input_data)
+
+ .. tab-item:: ONNX Model Hub
+
+ .. code-block:: py
+
+ import onnx
+
+ model = onnx.hub.load("resnet50")
+ onnx.save(model, 'resnet50.onnx') # use a temporary file for model
+
+ import openvino as ov
+ ov_model = ov.convert_model('resnet50.onnx')
+
+ ###### Option 1: Save to OpenVINO IR:
+
+ # save model to OpenVINO IR for later use
+ ov.save_model(ov_model, 'model.xml')
+
+ ###### Option 2: Compile and infer with OpenVINO:
+
+ # compile model
+ compiled_model = ov.compile_model(ov_model)
+
+ # prepare input_data
+ import numpy as np
+ input_data = np.random.rand(1, 3, 224, 224)
+
+ # run inference
+ result = compiled_model(input_data)
+
+In Option 1, where the ``openvino.save_model`` function is used, an OpenVINO model is serialized in the file system as two files with ``.xml`` and ``.bin`` extensions. This pair of files is called the OpenVINO Intermediate Representation format (OpenVINO IR, or just IR) and is useful for efficient model deployment. OpenVINO IR can be loaded into another application for inference using the ``openvino.Core.read_model`` function, as shown in the sketch below. For more details, refer to the :doc:`OpenVINO™ Runtime documentation `.
+
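+For illustration, a minimal sketch of a separate inference application loading the IR saved in Option 1 (the file name ``model.xml`` matches the examples above):
+
+.. code-block:: py
+
+   import openvino as ov
+
+   core = ov.Core()
+
+   # read the previously saved OpenVINO IR (the matching .bin file is found automatically)
+   ov_model = core.read_model('model.xml')
+
+   # compile for the selected device and run inference
+   compiled_model = core.compile_model(ov_model, 'AUTO')
+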
+Option 2, where ``openvino.compile_model`` is used, provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your existing Python inference application. In this case, the converted model is not saved to IR. Instead, the model is compiled and used for inference within the same application.
+
+Option 1 separates model conversion and model inference into two different applications. This approach is useful for deployment scenarios requiring fewer extra dependencies and faster model loading in the end inference application.
+
+For example, converting a PyTorch model to OpenVINO usually requires the ``torch`` Python module and, therefore, a Python environment. This process can take extra time and memory. However, after the converted model is saved as IR with ``openvino.save_model``, it can be loaded in a separate application without the ``torch`` dependency and without the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example, in C++, and does not require a Python installation to run.
+
+Before saving the model to OpenVINO IR, consider applying :doc:`Post-training Optimization ` to enable more efficient inference and smaller model size.
+
+The figure below illustrates the typical workflow for deploying a trained deep-learning model.
+
+.. image:: ./_static/images/model_conversion_diagram.svg
:alt: model conversion diagram
+Convert a Model in CLI: ``ovc``
+###############################
+
+Another option for model conversion is the ``ovc`` command-line tool, which stands for OpenVINO Model Converter. The tool combines the ``openvino.convert_model`` and ``openvino.save_model`` functionalities. It is convenient when the original model is ready for inference and is in one of the supported file formats: ONNX, TensorFlow, TensorFlow Lite, or PaddlePaddle. As a result, ``ovc`` produces an OpenVINO IR, consisting of ``.xml`` and ``.bin`` files, which can then be read with the ``ov.Core.read_model()`` method. You can compile and infer the resulting ``ov.Model`` later with :doc:`OpenVINO™ Runtime `.
-Convert a model using ``mo`` command-line tool
-#################################################
+.. note::
+   PyTorch models cannot be converted with ``ovc``; use ``openvino.convert_model`` instead.
-Another option to convert a model is to use ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure, as the ``mo.convert_model()`` method.
+The results of both ``ovc`` and ``openvino.convert_model``/``openvino.save_model`` conversion methods are the same. You can choose either of them based on your convenience. Note that there should not be any differences in the results of model conversion if the same set of parameters is used and the model is saved into OpenVINO IR.
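+
+For illustration, a minimal sketch of a typical ``ovc`` invocation (``your_model.onnx`` is a hypothetical model file name):
+
+.. code-block:: sh
+
+   ovc your_model.onnx --output_model your_model.xml
+
+The produced ``your_model.xml`` and ``your_model.bin`` pair can then be read with ``ov.Core.read_model()`` in the inference application.
+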
-``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime `.
+Cases when Model Preparation is not Required
+############################################
-The results of both ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose one of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.
+If a model is represented as a single file from the ONNX, PaddlePaddle, TensorFlow, or TensorFlow Lite formats (check :doc:`TensorFlow Frontend Capabilities and Limitations `), it does not require a separate conversion or IR-saving step, that is, ``openvino.convert_model``/``openvino.save_model`` or ``ovc``.
-This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
+OpenVINO provides C++ and Python APIs for reading such models by just calling the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods. These methods perform conversion of the model from the original representation. While this conversion may take extra time compared to using prepared OpenVINO IR, it is convenient when you need to read a model in the original format in C++, since ``openvino.convert_model`` is only available in Python. However, for efficient model deployment with the OpenVINO Runtime, it is still recommended to prepare OpenVINO IR and then use it in your inference application.
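+
+For illustration, a minimal sketch of reading and compiling a model file directly, assuming ``model.onnx`` is a hypothetical file in one of the supported formats:
+
+.. code-block:: py
+
+   import openvino as ov
+
+   core = ov.Core()
+
+   # compile the original file directly; the conversion happens internally
+   compiled_model = core.compile_model('model.onnx', 'AUTO')
+
+   # or read it first to obtain an ov.Model without compiling
+   ov_model = core.read_model('model.onnx')
+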
-* :doc:`See the supported formats and how to use them in your project `.
-* :doc:`Convert different model formats to the ov.Model format `.
+Additional Resources
+####################
-@endsphinxdirective
+The following articles describe in detail how to obtain and prepare your model, depending on the source model type:
+
+* :doc:`Convert different model formats to the ov.Model format `.
+* :doc:`Review all available conversion parameters `.
+
+To achieve the best model inference performance and a more compact OpenVINO IR representation, follow:
+
+* :doc:`Post-training optimization `
+* :doc:`Model inference in OpenVINO Runtime `
+
+If you are using the legacy conversion API (``mo`` or ``openvino.tools.mo.convert_model``), please refer to the following materials:
+
+* :doc:`Transition from legacy mo and ov.tools.mo.convert_model `
+* :doc:`Legacy Model Conversion API `
+
+@endsphinxdirective
diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
index fb82472b9b45ad..96d83591a4dedf 100644
--- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
+++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -1,4 +1,4 @@
-# Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
+# Legacy Conversion API {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -14,12 +14,15 @@
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
+ Supported_Model_Formats_MO_DG
.. meta::
- :description: Model conversion (MO) furthers the transition between training and
- deployment environments, it adjusts deep learning models for
+ :description: Model conversion (MO) furthers the transition between training and
+ deployment environments, it adjusts deep learning models for
optimal execution on target devices.
+.. note::
+   This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API for model conversion is available: ``openvino.convert_model`` and the OpenVINO Model Converter ``ovc`` CLI tool. Refer to `Model preparation ` for more details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, you can continue using this documentation. However, consider checking the `transition guide ` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you.
To convert a model to OpenVINO model format (``ov.Model``), you can use the following command:
diff --git a/docs/MO_DG/prepare_model/FP16_Compression.md b/docs/MO_DG/prepare_model/FP16_Compression.md
index f560a6a035063d..05b2676c055dc5 100644
--- a/docs/MO_DG/prepare_model/FP16_Compression.md
+++ b/docs/MO_DG/prepare_model/FP16_Compression.md
@@ -3,7 +3,7 @@
@sphinxdirective
By default, when IR is saved all relevant floating-point weights are compressed to ``FP16`` data type during model conversion.
-It results in creating a "compressed ``FP16`` model", which occupies about half of
+It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,
but it is negligible for most models.
In case if accuracy drop is significant user can disable compression explicitly.
@@ -29,20 +29,20 @@ To disable compression, use the ``compress_to_fp16=False`` option:
mo --input_model INPUT_MODEL --compress_to_fp16=False
-For details on how plugins handle compressed ``FP16`` models, see
+For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices `.
.. note::
- ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
- Refer to the :doc:`Post-training optimization ` guide for more
+ ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
+ Refer to the :doc:`Post-training optimization ` guide for more
information about that.
.. note::
Some large models (larger than a few GB) when compressed to ``FP16`` may consume an overly large amount of RAM on the loading
- phase of the inference. If that is the case for your model, try to convert it without compression:
+ phase of the inference. If that is the case for your model, try to convert it without compression:
``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)``
diff --git a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
index b57c73eac51324..068ba7fca16297 100644
--- a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
+++ b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
@@ -1,4 +1,4 @@
-# Supported Model Formats {#Supported_Model_Formats}
+# Supported Model Formats {#Supported_Model_Formats_MO_DG}
@sphinxdirective
@@ -17,7 +17,7 @@
:description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.
-**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR ` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
+**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR ` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly,
which means you do not need to save them as OpenVINO IR before including them in your application.
@@ -62,9 +62,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")
- For more details on conversion, refer to the
- :doc:`guide `
- and an example `tutorial `__
+ For more details on conversion, refer to the
+ :doc:`guide `
+ and an example `tutorial `__
on this topic.
.. tab-item:: TensorFlow
@@ -104,10 +104,10 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
- For more details on conversion, refer to the
- :doc:`guide `
- and an example `tutorial `__
- on this topic.
+ For more details on conversion, refer to the
+ :doc:`guide `
+ and an example `tutorial `__
+ on this topic.
* The ``read_model()`` and ``compile_model()`` methods:
@@ -125,8 +125,8 @@ Here are code examples of how to use these methods with different model formats:
ov_model = read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")
- For a guide on how to run inference, see how to
- :doc:`Integrate OpenVINO™ with Your Application `.
+ For a guide on how to run inference, see how to
+ :doc:`Integrate OpenVINO™ with Your Application `.
For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations `.
.. tab-item:: C++
@@ -146,7 +146,7 @@ Here are code examples of how to use these methods with different model formats:
ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: C
@@ -167,7 +167,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: CLI
@@ -206,9 +206,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model(".tflite")
compiled_model = core.compile_model(ov_model, "AUTO")
- For more details on conversion, refer to the
- :doc:`guide `
- and an example `tutorial `__
+ For more details on conversion, refer to the
+ :doc:`guide `
+ and an example `tutorial `__
on this topic.
@@ -239,7 +239,7 @@ Here are code examples of how to use these methods with different model formats:
compiled_model = core.compile_model(".tflite", "AUTO")
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
@@ -258,7 +258,7 @@ Here are code examples of how to use these methods with different model formats:
ov::CompiledModel compiled_model = core.compile_model(".tflite", "AUTO");
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: C
@@ -277,7 +277,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, ".tflite", "AUTO", 0, &compiled_model);
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: CLI
@@ -297,7 +297,7 @@ Here are code examples of how to use these methods with different model formats:
mo --input_model .tflite
- For details on the conversion, refer to the
+ For details on the conversion, refer to the
:doc:`article `.
.. tab-item:: ONNX
@@ -324,9 +324,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model(".onnx")
compiled_model = core.compile_model(ov_model, "AUTO")
- For more details on conversion, refer to the
- :doc:`guide `
- and an example `tutorial `__
+ For more details on conversion, refer to the
+ :doc:`guide `
+ and an example `tutorial `__
on this topic.
@@ -445,9 +445,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model(".pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")
- For more details on conversion, refer to the
- :doc:`guide `
- and an example `tutorial `__
+ For more details on conversion, refer to the
+ :doc:`guide `
+ and an example `tutorial `__
on this topic.
* The ``read_model()`` method:
@@ -477,7 +477,7 @@ Here are code examples of how to use these methods with different model formats:
compiled_model = core.compile_model(".pdmodel", "AUTO")
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: C++
@@ -495,7 +495,7 @@ Here are code examples of how to use these methods with different model formats:
ov::CompiledModel compiled_model = core.compile_model(".pdmodel", "AUTO");
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: C
@@ -514,7 +514,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, ".pdmodel", "AUTO", 0, &compiled_model);
- For a guide on how to run inference, see how to
+ For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application `.
.. tab-item:: CLI
@@ -538,8 +538,8 @@ Here are code examples of how to use these methods with different model formats:
:doc:`article `.
-**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference.
-As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**,
+**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference.
+As OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**,
converting them to ONNX for use with OpenVINO should be considered the default path.
.. note::
diff --git a/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md
new file mode 100644
index 00000000000000..d0362bd904d6d3
--- /dev/null
+++ b/docs/OV_Converter_UG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -0,0 +1,98 @@
+# Conversion Parameters {#openvino_docs_OV_Converter_UG_Conversion_Options}
+
+@sphinxdirective
+
+.. _deep learning model optimizer:
+
+.. meta::
+ :description: Model Conversion API provides several parameters to adjust model conversion.
+
+This document describes all available parameters for ``openvino.convert_model``, ``ovc``, and ``openvino.save_model``, without focusing on a particular framework model format. Use it as a general reference for the conversion API capabilities. Some of the options may not be relevant to certain frameworks. Refer to the :doc:`Supported Model Formats ` page for framework-specific tutorials.
+
+In most cases, the following simple syntax is enough to convert a model:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model('path_to_your_model')
+ # or, when model is a Python model object
+ ov_model = ov.convert_model(model)
+
+ # Optionally adjust model by embedding pre-post processing here...
+
+ ov.save_model(ov_model, 'model.xml')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_your_model
+
+Providing just a path to the model, or the model object itself, as the ``openvino.convert_model`` argument is often enough for a successful conversion. However, depending on the model topology and the original deep learning framework, additional parameters may be required, as described below.
+
+- The ``example_input`` parameter, available only in the Python ``openvino.convert_model`` function, is used to trace the model and obtain its graph representation. This parameter is crucial for converting PyTorch models and may sometimes be required for TensorFlow models. For more details, refer to the :doc:`PyTorch Model Conversion ` or :doc:`TensorFlow Model Conversion ` guides.
+
+- The ``input`` parameter sets or overrides shapes for model inputs. It configures dynamic and static dimensions in model inputs depending on your inference requirements. For more information on this parameter, refer to the :doc:`Setting Input Shapes ` guide.
+
+- The ``output`` parameter selects one or multiple outputs from the original model. This is useful when the model has outputs that are not required for inference in a deployment scenario. By specifying only the necessary outputs, you can create a more compact model that infers faster.
+
+- The ``compress_to_fp16`` parameter, provided by the ``ovc`` CLI tool and the ``openvino.save_model`` Python function, controls the compression of model weights to the ``FP16`` format when saving an OpenVINO model to IR. This option is enabled by default, which means all produced IRs store weights in the ``FP16`` data type, saving up to 2x storage space for the model file and, in most cases, not sacrificing model accuracy. In case it does affect accuracy, the compression can be disabled by setting this flag to ``False``:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(original_model)
+ ov.save_model(ov_model, 'model.xml', compress_to_fp16=False)
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_your_model --compress_to_fp16=False
+
+For details on how plugins handle compressed ``FP16`` models, see
+:doc:`Working with devices `.
+
+.. note::
+
+ ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
+ Refer to the :doc:`Post-training optimization ` guide for more
+ information about that.
+
+- The ``extension`` parameter enables conversion of models containing operations that are not supported by OpenVINO out of the box. It requires implementing an OpenVINO extension first; refer to the :doc:`Frontend Extensions ` guide.
+
+- The ``share_weights`` parameter, with the default value ``True``, allows reusing memory with the original weights. For models loaded in Python and then passed to ``openvino.convert_model``, it means that the OpenVINO model will share the same areas in program memory where the original weights are located. For models loaded from files by ``openvino.convert_model``, file memory mapping is used to avoid extra memory allocation. When enabled, the original model cannot be destroyed (the Python object cannot be deallocated and the original model file cannot be deleted) for the whole lifetime of the OpenVINO model. If this is not desired, set ``share_weights=False`` when calling ``openvino.convert_model``, as shown in the sketch below.
+
+.. note:: ``ovc`` does not have the ``share_weights`` option and always uses sharing to reduce conversion time and memory consumption during conversion.
+
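+For illustration, a minimal sketch of disabling weight sharing (a toy PyTorch model is used here just as an example of a framework model object):
+
+.. code-block:: py
+   :force:
+
+   import torch
+   import openvino as ov
+
+   model = torch.nn.Linear(10, 5)
+
+   # with share_weights=False the OpenVINO model gets its own copy of the weights,
+   # so the original model object can be released right after conversion
+   ov_model = ov.convert_model(model, example_input=torch.rand(1, 10), share_weights=False)
+   del model
+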
+- The ``output_model`` parameter in ``ovc`` and ``openvino.save_model`` specifies the name of the output ``.xml`` file with the resulting OpenVINO IR. The accompanying ``.bin`` file name is generated automatically by replacing the ``.xml`` extension with ``.bin``. The value of ``output_model`` must end with the ``.xml`` extension. For the ``ovc`` command-line tool, ``output_model`` can also contain the name of a directory. In this case, the resulting OpenVINO IR files will be put into that directory, with the base name of the ``.xml`` and ``.bin`` files matching the base name of the original model passed to ``ovc``. For example, calling ``ovc your_model.onnx --output_model directory_name`` creates the files ``directory_name/your_model.xml`` and ``directory_name/your_model.bin``. If ``output_model`` is not used, the current directory is used as the destination.
+
+.. note:: ``openvino.save_model`` does not support a directory as the ``output_model`` parameter value, because ``openvino.save_model`` receives an OpenVINO model object already represented in memory, so there is no original model file name available to derive the output file name from. For the same reason, ``output_model`` is a mandatory parameter for ``openvino.save_model``.
+
+- The ``verbose`` parameter activates extra diagnostics printed to the standard output. Use it for debugging purposes in case there is an issue with the conversion, and to collect information for better bug reporting to the OpenVINO team.
+
+.. note:: Weight sharing does not work equally for all supported model formats. The value of this flag is considered a hint for the conversion API; actual sharing is used only if it is implemented and possible for a particular model representation.
+
+You can always run ``ovc -h`` or ``ovc --help`` to recall all the supported parameters for ``ovc``.
+
+Use ``ovc --version`` to check the version of the installed OpenVINO package.
+
+@endsphinxdirective
+
+
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md
new file mode 100644
index 00000000000000..37bfd58f87b01c
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md
@@ -0,0 +1,59 @@
+# Converting an ONNX Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX}
+
+@sphinxdirective
+
+.. meta::
+ :description: Learn how to convert a model from the
+ ONNX format to the OpenVINO Model.
+
+Introduction to ONNX
+####################
+
+`ONNX `__ is a representation format for deep learning models that enables AI developers to easily transfer models between different frameworks.
+
+.. note:: An ONNX model file can be loaded with the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API, without the need to prepare an OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency is important for the inference application.
+
+Converting an ONNX Model
+########################
+
+This page provides instructions on model conversion from the ONNX format to the OpenVINO IR format.
+
+For model conversion, you need an ONNX model either directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
+
+To convert an ONNX model, run model conversion with the path to the input model ``.onnx`` file:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov.convert_model('your_model_file.onnx')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc your_model_file.onnx
+
+External Data Files
+###################
+
+ONNX models may consist of multiple files when the model size exceeds the 2 GB limit imposed by Protobuf. According to this `ONNX article `__, instead of a single file, the model is represented as one file with the ``.onnx`` extension and multiple separate files with external data. These data files are located in the same directory as the main ``.onnx`` file or in another directory.
+
+OpenVINO model conversion API supports ONNX models with external data representation. In this case, you only need to pass the main file with ``.onnx`` extension as ``ovc`` or ``openvino.convert_model`` parameter. The other files will be found and loaded automatically during the model conversion. The resulting OpenVINO model, represented as an IR in the filesystem, will have the usual structure with a single ``.xml`` file and a single ``.bin`` file, where all the original model weights are copied and packed together.
+
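+For illustration, a minimal sketch of converting such a model (``big_model.onnx`` is a hypothetical file whose external data files are located next to it):
+
+.. code-block:: py
+
+   import openvino as ov
+
+   # only the main .onnx file is passed; external data files are found automatically
+   ov_model = ov.convert_model('big_model.onnx')
+
+   # the saved IR packs all weights into a single .bin file
+   ov.save_model(ov_model, 'big_model.xml')
+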
+Supported ONNX Layers
+#####################
+
+For the list of supported standard layers, refer to the :doc:`Supported Operations ` page.
+
+Additional Resources
+####################
+
+Check out more examples of model conversion in :doc:`interactive Python tutorials `.
+
+@endsphinxdirective
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md
new file mode 100644
index 00000000000000..ad2aa8798738ff
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_Paddle.md
@@ -0,0 +1,201 @@
+# Converting a PaddlePaddle Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle}
+
+@sphinxdirective
+
+.. meta::
+ :description: Learn how to convert a model from the
+ PaddlePaddle format to the OpenVINO Model.
+
+This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using OpenVINO model conversion API. The instructions are different depending on the PaddlePaddle model format.
+
+.. note:: A PaddlePaddle model serialized in a file can be loaded with the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API, without preparing an OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
+
+Converting PaddlePaddle Model Files
+###################################
+
+A PaddlePaddle inference model includes a ``.pdmodel`` file (storing the model structure) and a ``.pdiparams`` file (storing the model weights). For details on how to export a PaddlePaddle inference model, refer to the `Exporting PaddlePaddle Inference Model `__ Chinese guide.
+
+To convert a PaddlePaddle model, use the ``ovc`` or ``openvino.convert_model`` and specify the path to the input ``.pdmodel`` model file:
+
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov.convert_model('your_model_file.pdmodel')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc your_model_file.pdmodel
+
+**For example**, this command converts a yolo v3 PaddlePaddle model to an OpenVINO IR model:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov.convert_model('yolov3.pdmodel')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc yolov3.pdmodel
+
+Converting PaddlePaddle Python Model
+####################################
+
+Model conversion API supports passing PaddlePaddle models directly in Python without saving them to files in the user code.
+
+The following PaddlePaddle model object types are supported:
+
+* ``paddle.hapi.model.Model``
+* ``paddle.fluid.dygraph.layers.Layer``
+* ``paddle.fluid.executor.Executor``
+
+Some PaddlePaddle models may require setting ``example_input`` or ``output`` for conversion as shown in the examples below:
+
+* Example of converting ``paddle.hapi.model.Model`` format model:
+
+ .. code-block:: py
+ :force:
+
+ import paddle
+ import openvino as ov
+
+ # create a paddle.hapi.model.Model format model
+ resnet50 = paddle.vision.models.resnet50()
+ x = paddle.static.InputSpec([1,3,224,224], 'float32', 'x')
+ y = paddle.static.InputSpec([1,1000], 'float32', 'y')
+
+ model = paddle.Model(resnet50, x, y)
+
+ # convert to OpenVINO IR format
+ ov_model = ov.convert_model(model)
+
+ ov.save_model(ov_model, "resnet50.xml")
+
+* Example of converting ``paddle.fluid.dygraph.layers.Layer`` format model:
+
+ ``example_input`` is required, while ``output`` is optional; both accept the following formats:
+
+ ``list`` with tensor (``paddle.Tensor``) or InputSpec (``paddle.static.input.InputSpec``)
+
+ .. code-block:: py
+ :force:
+
+ import paddle
+ import openvino as ov
+
+ # create a paddle.fluid.dygraph.layers.Layer format model
+ model = paddle.vision.models.resnet50()
+ x = paddle.rand([1,3,224,224])
+
+ # convert to OpenVINO IR format
+ ov_model = ov.convert_model(model, example_input=[x])
+
+* Example of converting ``paddle.fluid.executor.Executor`` format model:
+
+ ``example_input`` and ``output`` are required; both accept the following formats:
+
+ ``list`` or ``tuple`` with variable(``paddle.static.data``)
+
+ .. code-block:: py
+ :force:
+
+ import paddle
+ import openvino as ov
+
+ paddle.enable_static()
+
+ # create a paddle.fluid.executor.Executor format model
+ x = paddle.static.data(name="x", shape=[1,3,224])
+ y = paddle.static.data(name="y", shape=[1,3,224])
+ relu = paddle.nn.ReLU()
+ sigmoid = paddle.nn.Sigmoid()
+ y = sigmoid(relu(x))
+
+ exe = paddle.static.Executor(paddle.CPUPlace())
+ exe.run(paddle.static.default_startup_program())
+
+ # convert to OpenVINO IR format
+ ov_model = ov.convert_model(exe, example_input=[x], output=[y])
+
+Supported PaddlePaddle Layers
+#############################
+
+For the list of supported standard layers, refer to the :doc:`Supported Operations ` page.
+
+Officially Supported PaddlePaddle Models
+########################################
+
+The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):
+
+.. list-table::
+ :widths: 20 25 55
+ :header-rows: 1
+
+ * - Model Name
+ - Model Type
+ - Description
+ * - ppocr-det
+ - optical character recognition
+ - Models are exported from `PaddleOCR `_. Refer to `READ.md `_.
+ * - ppocr-rec
+ - optical character recognition
+ - Models are exported from `PaddleOCR `_. Refer to `READ.md `_.
+ * - ResNet-50
+ - classification
+ - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_.
+ * - MobileNet v2
+ - classification
+ - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_.
+ * - MobileNet v3
+ - classification
+ - Models are exported from `PaddleClas `_. Refer to `getting_started_en.md `_.
+ * - BiSeNet v2
+ - semantic segmentation
+ - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_.
+ * - DeepLab v3 plus
+ - semantic segmentation
+ - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_.
+ * - Fast-SCNN
+ - semantic segmentation
+ - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_.
+ * - OCRNET
+ - semantic segmentation
+ - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_.
+ * - Yolo v3
+ - detection
+ - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_.
+ * - ppyolo
+ - detection
+ - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_.
+ * - MobileNetv3-SSD
+ - detection
+ - Models are exported from `PaddleDetection `_. Refer to `EXPORT_MODEL.md `_.
+ * - U-Net
+ - semantic segmentation
+ - Models are exported from `PaddleSeg `_. Refer to `model_export.md `_.
+ * - BERT
+ - language representation
+ - Models are exported from `PaddleNLP `_. Refer to `README.md `_.
+
+Additional Resources
+####################
+
+Check out more examples of model conversion in :doc:`interactive Python tutorials `.
+
+@endsphinxdirective
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
new file mode 100644
index 00000000000000..cc6126cffd6043
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
@@ -0,0 +1,155 @@
+# Converting a PyTorch Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch}
+
+@sphinxdirective
+
+.. meta::
+ :description: Learn how to convert a model from the
+ PyTorch format to the OpenVINO Model.
+
+This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO Model using the ``openvino.convert_model`` function.
+
+.. note::
+
+   In the examples below, the ``openvino.save_model`` function is not used because there are no PyTorch-specific details regarding its usage. In all examples, the converted OpenVINO model can be saved to IR by calling ``ov.save_model(ov_model, 'model.xml')`` as usual.
+
+Here is the simplest example of PyTorch model conversion using a model from ``torchvision``:
+
+.. code-block:: py
+ :force:
+
+ import torchvision
+ import torch
+ import openvino as ov
+
+ model = torchvision.models.resnet50(pretrained=True)
+ ov_model = ov.convert_model(model)
+
+The ``openvino.convert_model`` function supports the following PyTorch model object types:
+
+* ``torch.nn.Module`` derived classes
+* ``torch.jit.ScriptModule``
+* ``torch.jit.ScriptFunction``
+
+When passing a ``torch.nn.Module`` derived class object as an input model, converting PyTorch models often requires the ``example_input`` parameter to be specified in the ``openvino.convert_model`` function call. Internally, it triggers model tracing during the conversion process, using the capabilities of the ``torch.jit.trace`` function.
+
+The use of ``example_input`` can lead to a better quality of the resulting OpenVINO model in terms of correctness and performance compared to converting the same original model without specifying ``example_input``. While the necessity of ``example_input`` depends on the implementation details of a specific PyTorch model, it is recommended to always set the ``example_input`` parameter when it is available.
+
+The value for the ``example_input`` parameter can be easily derived from knowing the input tensor's element type and shape. While it may not be suitable for all cases, random numbers can frequently serve this purpose effectively:
+
+.. code-block:: py
+ :force:
+
+ import torchvision
+ import torch
+ import openvino as ov
+
+ model = torchvision.models.resnet50(pretrained=True)
+ ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
+
+In practice, the code to evaluate or test the PyTorch model is usually provided with the model itself and can be used to generate a proper ``example_input`` value. A modified example of using the ``resnet50`` model from ``torchvision`` is presented below. It demonstrates how to switch inference in an existing PyTorch application to OpenVINO and how to get a value for ``example_input``:
+
+.. code-block:: py
+ :force:
+
+ from torchvision.io import read_image
+ from torchvision.models import resnet50, ResNet50_Weights
+ import requests, PIL, io, torch
+
+ # Get a picture of a cat from the web:
+ img = PIL.Image.open(io.BytesIO(requests.get("https://placekitten.com/200/300").content))
+
+ # Torchvision model and input data preparation from https://pytorch.org/vision/stable/models.html
+
+ weights = ResNet50_Weights.DEFAULT
+ model = resnet50(weights=weights)
+ model.eval()
+ preprocess = weights.transforms()
+ batch = preprocess(img).unsqueeze(0)
+
+ # PyTorch model inference and post-processing
+
+ prediction = model(batch).squeeze(0).softmax(0)
+ class_id = prediction.argmax().item()
+ score = prediction[class_id].item()
+ category_name = weights.meta["categories"][class_id]
+ print(f"{category_name}: {100 * score:.1f}% (with PyTorch)")
+
+ # OpenVINO model preparation and inference with the same post-processing
+
+ import openvino as ov
+ compiled_model = ov.compile_model(ov.convert_model(model, example_input=batch))
+
+ prediction = torch.tensor(compiled_model(batch)[0]).squeeze(0).softmax(0)
+ class_id = prediction.argmax().item()
+ score = prediction[class_id].item()
+ category_name = weights.meta["categories"][class_id]
+ print(f"{category_name}: {100 * score:.1f}% (with OpenVINO)")
+
+Check out more examples in :doc:`interactive Python tutorials `.
+
+Supported Input Parameter Types
+###############################
+
+If the model has a single input, the following input types are supported in ``example_input``:
+
+* ``openvino.runtime.Tensor``
+* ``torch.Tensor``
+* ``tuple`` or any nested combination of tuples
+
+If a model has multiple inputs, the input values are combined in a ``list``, a ``tuple``, or a ``dict``:
+
+* values in a ``list`` or ``tuple`` should be passed in the same order as in the original model signature,
+* ``dict`` keys should match the argument names of the original model.
+
+Enclosing in ``list``, ``tuple`` or ``dict`` can be used for a single input as well as for multiple inputs.
+
+If a model has a single input parameter and the type of this input is a ``tuple``, it should always be passed enclosed in an extra ``list``, ``tuple``, or ``dict``, as in the case of multiple inputs. This is required to eliminate the ambiguity between ``model((a, b))`` and ``model(a, b)``.
+
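+For illustration, a minimal sketch of passing multiple inputs as a ``dict`` (the toy model and its input names are hypothetical):
+
+.. code-block:: py
+   :force:
+
+   import torch
+   import openvino as ov
+
+   class TwoInputModel(torch.nn.Module):
+       def forward(self, x, mask):
+           return x * mask
+
+   # dict keys match the argument names of the original forward() method
+   ov_model = ov.convert_model(
+       TwoInputModel(),
+       example_input={"x": torch.rand(1, 10), "mask": torch.ones(1, 10)},
+   )
+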
+Non-tensor Data Types
+#####################
+
+When a non-tensor data type, such as a ``tuple`` or ``dict``, appears in a model input or output, it is flattened. Flattening means that each element within the ``tuple`` is represented as a separate input or output. The same applies to ``dict`` values, where the keys of the ``dict`` are used to form the model input/output names. The original non-tensor input or output is replaced by one or multiple new inputs or outputs resulting from this flattening process. This procedure is applied recursively to nested ``tuples`` and ``dicts`` until only tensors remain at the innermost level.
+
+For example, if the original model is called with ``example_input=(a, (b, c, (d, e)))``, where ``a``, ``b``, ..., ``e`` are tensors, the original model has two inputs: the first is the tensor ``a``, and the second is the tuple ``(b, c, (d, e))``, containing the tensors ``b`` and ``c`` and the nested tuple ``(d, e)``. The resulting OpenVINO model will have the signature ``(a, b, c, d, e)``, which means it will have five inputs, all of type tensor, instead of two in the original model.
+
+Flattening of a ``dict`` is supported for outputs only. If your model has an input of type ``dict``, you will need to decompose the ``dict`` into one or multiple tensor inputs by modifying the original model signature or by making a wrapper model on top of the original model. This approach hides the dictionary from the model signature and allows it to be processed inside the model successfully. A sketch of such a wrapper is shown below.
+
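+For illustration, a minimal sketch of such a wrapper (the original model and its input names are hypothetical):
+
+.. code-block:: py
+   :force:
+
+   import torch
+
+   class DictInputWrapper(torch.nn.Module):
+       def __init__(self, model):
+           super().__init__()
+           self.model = model
+
+       # the wrapper exposes plain tensor arguments and rebuilds the dict
+       # expected by the original model's forward() internally
+       def forward(self, input_ids, attention_mask):
+           return self.model({"input_ids": input_ids, "attention_mask": attention_mask})
+
+   # wrapped = DictInputWrapper(original_model)
+   # ov_model = ov.convert_model(wrapped, example_input=(input_ids, attention_mask))
+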
+.. note::
+
+   An important consequence of flattening is that only ``tuple`` and ``dict`` inputs with a fixed number of elements and key values are supported. The structure of such inputs should be fully described in the ``example_input`` parameter of ``convert_model``. The flattening of outputs is reproduced with the given ``example_input`` and cannot be changed once the conversion is done.
+
+Check out more examples of model conversion with non-tensor data types in the following tutorials:
+
+* `Video Subtitle Generation using Whisper and OpenVINO™ `__
+* `Visual Question Answering and Image Captioning using BLIP and OpenVINO `__
+
+
+Exporting a PyTorch Model to ONNX Format
+########################################
+
+An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with ``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to an OpenVINO Model with ``openvino.convert_model``. It can be considered a backup solution if a model cannot be converted directly from PyTorch to OpenVINO, as described above. Converting through ONNX can be more expensive in terms of code, conversion time, and allocated memory.
+
+1. Refer to the `Exporting PyTorch models to ONNX format `__ guide to learn how to export models from PyTorch to ONNX.
+2. Follow :doc:`Convert the ONNX model ` chapter to produce OpenVINO model.
+
+Here is an illustration of using these two steps together:
+
+.. code-block:: py
+ :force:
+
+ import torchvision
+ import torch
+ import openvino as ov
+
+ model = torchvision.models.resnet50(pretrained=True)
+ # 1. Export to ONNX
+ torch.onnx.export(model, (torch.rand(1, 3, 224, 224), ), 'model.onnx')
+ # 2. Convert to OpenVINO
+ ov_model = ov.convert_model('model.onnx')
+
+.. note::
+
+   As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9, which is used by default.
+   It is recommended to export models to opset 11 or higher when exporting to the default opset 9 does not work. In that case, use the ``opset_version`` option of ``torch.onnx.export``. For more information about ONNX opsets, refer to the `Operator Schemas `__ page.
+
+@endsphinxdirective
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
new file mode 100644
index 00000000000000..d2c8f1418c0815
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
@@ -0,0 +1,331 @@
+# Converting a TensorFlow Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow}
+
+@sphinxdirective
+
+.. meta::
+ :description: Learn how to convert a model from a
+ TensorFlow format to the OpenVINO Model.
+
+This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
+
+.. note:: TensorFlow models can be loaded with the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API, without preparing an OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency matters for the inference application.
+
+.. note:: The examples below that convert TensorFlow models from a file do not require any version of TensorFlow to be installed on the system, except for the cases when the ``tensorflow`` module is imported explicitly.
+
+Converting TensorFlow 2 Models
+##############################
+
+TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5).
+Below are the instructions on how to convert each of them.
+
+SavedModel Format
++++++++++++++++++
+
+A model in the SavedModel format consists of a directory with a ``saved_model.pb`` file and two subfolders: ``variables`` and ``assets``.
+To convert a model, run conversion with the directory as the model argument:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ ov_model = ov.convert_model('path_to_saved_model_dir')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_saved_model_dir
+
+Keras H5 Format
++++++++++++++++
+
+If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the
+SavedModel format. Here is an example of how to do it:
+
+.. code-block:: py
+ :force:
+
+ import tensorflow as tf
+ model = tf.keras.models.load_model('model.h5')
+ tf.saved_model.save(model,'model')
+
+Converting a Keras H5 model with a custom layer to the SavedModel format requires special considerations.
+For example, the model with a custom layer ``CustomLayer`` from ``custom_layer.py`` is converted as follows:
+
+.. code-block:: py
+ :force:
+
+ import tensorflow as tf
+ from custom_layer import CustomLayer
+ model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})
+ tf.saved_model.save(model,'model')
+
+Then follow the above instructions for the SavedModel format.
+
+.. note::
+
+ Avoid using any workarounds or hacks to resave TensorFlow 2 models into TensorFlow 1 formats.
+
+Converting TensorFlow 1 Models
+###############################
+
+Converting Frozen Model Format
++++++++++++++++++++++++++++++++
+
+To convert a TensorFlow model, run model conversion with the path to the input model ``.pb`` file:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov_model = ov.convert_model('your_model_file.pb')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc your_model_file.pb
+
+
+Converting Non-Frozen Model Formats
++++++++++++++++++++++++++++++++++++
+
+There are three ways to store non-frozen TensorFlow models.
+
+1. **SavedModel format**. In this case, a model consists of a special directory with a ``.pb`` file
+and several subfolders: ``variables``, ``assets``, and ``assets.extra``. For more information about the SavedModel directory, refer to the `README `__ file in the TensorFlow repository.
+To convert such a TensorFlow model, run the conversion as for other model formats and pass the path to the directory as the model argument:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov_model = ov.convert_model('path_to_saved_model_dir')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_saved_model_dir
+
+2. **Checkpoint**. In this case, a model consists of two files: ``inference_graph.pb`` (or ``inference_graph.pbtxt``) and ``checkpoint_file.ckpt``.
+If you do not have an inference graph file, refer to the `Freezing Custom Models in Python <#Freezing-Custom-Models-in-Python>`__ section.
+To convert the model with the inference graph in ``.pb`` format, provide paths to both files as an argument for ``ovc`` or ``openvino.convert_model``:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov_model = ov.convert_model(['path_to_inference_graph.pb', 'path_to_checkpoint_file.ckpt'])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_inference_graph.pb path_to_checkpoint_file.ckpt
+
+To convert a model with the inference graph in the ``.pbtxt`` format, specify the path to the ``.pbtxt`` file instead of the ``.pb`` file. The conversion API automatically detects the format of the provided file, so there is no need to specify the model file format explicitly when calling ``ovc`` or ``openvino.convert_model`` in any of the examples in this document.
+
+3. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: ``model_name.meta``, ``model_name.index``,
+``model_name.data-00000-of-00001`` (the numbers may vary), and ``checkpoint`` (optional).
+To convert such a TensorFlow model, run the conversion providing the path to the ``.meta`` file as an argument:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+ ov_model = ov.convert_model('path_to_meta_graph.meta')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc path_to_meta_graph.meta
+
+
+Freezing Custom Models in Python
+++++++++++++++++++++++++++++++++
+
+When a model is defined in Python code, you must create an inference graph file. Graphs are usually built in a form
+that allows model training, which means that all trainable parameters are represented as variables in the graph.
+To use such a graph with the model conversion API, it must be frozen before being passed to the ``openvino.convert_model`` function:
+
+.. code-block:: py
+   :force:
+
+   import tensorflow as tf
+   import openvino as ov
+
+   # Replace all trainable variables in the graph with constants of the same values ("freezing")
+   frozen = tf.compat.v1.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
+
+   ov_model = ov.convert_model(frozen)
+
+Where:
+
+* ``sess`` is the instance of the TensorFlow Session object where the network topology is defined.
+* ``["name_of_the_output_node"]`` is the list of output node names in the graph; the ``frozen`` graph will include only the nodes of the original ``sess.graph_def`` that are directly or indirectly used to compute the given output nodes. ``'name_of_the_output_node'`` is only an example of a possible output node name; derive the actual names from your own graph.
+
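+For illustration, a self-contained sketch of the freezing flow on a toy TensorFlow 1 graph may look as follows (the graph, node names, and shapes are assumptions made only for this example):
+
+.. code-block:: py
+   :force:
+
+   import tensorflow as tf
+   import openvino as ov
+
+   tf.compat.v1.disable_eager_execution()
+
+   with tf.compat.v1.Session() as sess:
+       inp = tf.compat.v1.placeholder(tf.float32, [1, 3], name='input')
+       weights = tf.compat.v1.get_variable('weights', shape=[3, 2], dtype=tf.float32)
+       out = tf.matmul(inp, weights, name='output')
+       sess.run(tf.compat.v1.global_variables_initializer())
+
+       # Replace variables with constants, keeping only nodes needed to compute 'output'
+       frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
+           sess, sess.graph_def, ['output'])
+
+   ov_model = ov.convert_model(frozen)
+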
+Converting TensorFlow Models from Memory Using Python API
+############################################################
+
+Model conversion API supports passing TensorFlow and TensorFlow 2 models directly from memory. The following object types are supported:
+
+* ``tf.keras.Model``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+    import openvino as ov
+
+ model = tf.keras.applications.ResNet50(weights="imagenet")
+ ov_model = ov.convert_model(model)
+
+* ``tf.keras.layers.Layer``. Requires saving the model to the TensorFlow ``saved_model`` file format first and then loading it with ``openvino.convert_model``. Saving the model to a file and restoring it is required due to a known bug in ``openvino.convert_model`` that ignores the model signature.
+
+ .. code-block:: py
+ :force:
+
+ import tensorflow_hub as hub
+ import openvino as ov
+
+ model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
+ model.build([None, 224, 224, 3])
+ model.save('mobilenet_v1_100_224') # use a temporary directory
+
+ ov_model = ov.convert_model('mobilenet_v1_100_224')
+
+* ``tf.Module``. Requires setting shapes in the ``input`` parameter.
+
+ .. code-block:: py
+ :force:
+
+ import tensorflow as tf
+ import openvino as ov
+
+ class MyModule(tf.Module):
+ def __init__(self, name=None):
+ super().__init__(name=name)
+ self.constant1 = tf.constant(5.0, name="var1")
+ self.constant2 = tf.constant(1.0, name="var2")
+ def __call__(self, x):
+ return self.constant1 * x + self.constant2
+
+ model = MyModule(name="simple_module")
+ ov_model = ov.convert_model(model, input=[-1])
+
+.. note:: There is a known bug in ``openvino.convert_model`` related to ``tf.Variable`` nodes in the model graph. The results of converting such models are unpredictable. It is recommended to save a model with ``tf.Variable`` nodes into the TensorFlow SavedModel format and load it with ``openvino.convert_model``.
+
+* ``tf.compat.v1.Graph``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+
+    with tf.compat.v1.Session() as sess:
+ inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
+ inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
+ output = tf.nn.relu(inp1 + inp2, name='Relu')
+ tf.compat.v1.global_variables_initializer()
+ model = sess.graph
+
+ import openvino as ov
+ ov_model = ov.convert_model(model)
+
+* ``tf.compat.v1.GraphDef``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+
+    with tf.compat.v1.Session() as sess:
+ inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
+ inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
+ output = tf.nn.relu(inp1 + inp2, name='Relu')
+ tf.compat.v1.global_variables_initializer()
+ model = sess.graph_def
+
+ import openvino as ov
+ ov_model = ov.convert_model(model)
+
+* ``tf.function``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+
+    @tf.function(
+ input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32),
+ tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)])
+ def func(x, y):
+ return tf.nn.sigmoid(tf.nn.relu(x + y))
+
+ import openvino as ov
+ ov_model = ov.convert_model(func)
+
+* ``tf.compat.v1.session``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+
+    with tf.compat.v1.Session() as sess:
+ inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
+ inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
+ output = tf.nn.relu(inp1 + inp2, name='Relu')
+ tf.compat.v1.global_variables_initializer()
+
+ import openvino as ov
+ ov_model = ov.convert_model(sess)
+
+* ``tf.train.checkpoint``
+
+ .. code-block:: py
+ :force:
+
+    import tensorflow as tf
+
+    model = tf.keras.Model(...)
+ checkpoint = tf.train.Checkpoint(model)
+ save_path = checkpoint.save(save_directory)
+ # ...
+ checkpoint.restore(save_path)
+
+ import openvino as ov
+ ov_model = ov.convert_model(checkpoint)
+
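+In all of the cases above, the resulting ``ov_model`` object can be compiled and inferred in the same process, or saved as OpenVINO IR for later use. A minimal sketch (the device and file names are examples):
+
+.. code-block:: py
+   :force:
+
+   import openvino as ov
+
+   # Compile the converted model for a target device, for example CPU
+   compiled_model = ov.compile_model(ov_model, 'CPU')
+   # Run inference with input data matching the model input shapes and types:
+   # result = compiled_model(input_data)
+
+   # Or save the model as OpenVINO IR for future use
+   ov.save_model(ov_model, 'model.xml')
+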
+Supported TensorFlow and TensorFlow 2 Keras Layers
+##################################################
+
+For the list of supported standard layers, refer to the :doc:`Supported Operations ` page.
+
+Summary
+#######
+
+In this document, you learned:
+
+* Basic information about how the model conversion API works with TensorFlow models.
+* Which TensorFlow models are supported.
+* How to freeze a TensorFlow model.
+
+@endsphinxdirective
+
+
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md
new file mode 100644
index 00000000000000..e25795c95a4b1f
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md
@@ -0,0 +1,42 @@
+# Converting a TensorFlow Lite Model {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}
+
+@sphinxdirective
+
+.. meta::
+ :description: Learn how to convert a model from a
+ TensorFlow Lite format to the OpenVINO Model.
+
+
+To convert a TensorFlow Lite model, run model conversion with the path to the ``.tflite`` model file:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+
+ import openvino as ov
+      ov_model = ov.convert_model('your_model_file.tflite')
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc your_model_file.tflite
+
+.. note:: A TensorFlow Lite model file can be loaded directly by the ``openvino.Core.read_model`` or ``openvino.Core.compile_model`` methods of the OpenVINO Runtime API, without preparing OpenVINO IR first. Refer to the :doc:`inference example ` for more details. Using ``openvino.convert_model`` is still recommended if model load latency is important for the inference application.
+
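+For illustration, a minimal sketch of loading a ``.tflite`` file directly with the runtime API may look as follows (the file and device names are placeholders):
+
+.. code-block:: py
+   :force:
+
+   import openvino as ov
+
+   core = ov.Core()
+   # Compile the TensorFlow Lite model directly, without producing an IR file first
+   compiled_model = core.compile_model('your_model_file.tflite', 'CPU')
+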
+Supported TensorFlow Lite Layers
+###################################
+
+For the list of supported standard layers, refer to the :doc:`Supported Operations ` page.
+
+Supported TensorFlow Lite Models
+###################################
+
+More than eighty percent of public TensorFlow Lite models from open sources such as `TensorFlow Hub `__ and `MediaPipe `__ are supported.
+Unsupported models usually contain custom TensorFlow Lite operations.
+
+@endsphinxdirective
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md b/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md
new file mode 100644
index 00000000000000..24fa33c17f4a94
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/Converting_Model.md
@@ -0,0 +1,141 @@
+# Setting Input Shapes {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model}
+
+@sphinxdirective
+
+.. meta::
+   :description: Learn how to increase the efficiency of a model by providing an additional shape definition with the ``input`` parameter of ``openvino.convert_model`` and ``ovc``.
+
+With the model conversion API you can increase your model's efficiency by providing an additional shape definition with the ``input`` parameter.
+
+.. _when_to_specify_input_shapes:
+
+Specifying Shapes in the ``input`` Parameter
+#####################################################
+
+``openvino.convert_model`` supports conversion of models with dynamic input shapes that contain undefined dimensions.
+However, if the shape of data is not going to change from one inference request to another,
+it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs.
+Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption.
+To set up static shapes, model conversion API provides the ``input`` parameter.
+For more information on changing input shapes in runtime, refer to the :doc:`Changing input shapes ` guide.
+To learn more about dynamic shapes in runtime, refer to the :doc:`Dynamic Shapes ` guide.
+
+The OpenVINO Runtime API may have limitations when inferring models with undefined dimensions on some hardware. See the :doc:`Features support matrix ` for reference.
+In this case, the ``input`` parameter and the :doc:`reshape method ` can help resolve undefined dimensions.
+
+For example, run model conversion for the TensorFlow MobileNet model with the single input
+and specify the input shape of ``[2,300,300,3]``:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ ov_model = ov.convert_model("MobileNet.pb", input=[2, 300, 300, 3])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc MobileNet.pb --input [2,300,300,3]
+
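+The applied shape can be verified on the resulting model by inspecting its inputs. A short sketch using the Python API, assuming the conversion call shown above:
+
+.. code-block:: py
+   :force:
+
+   import openvino as ov
+
+   ov_model = ov.convert_model("MobileNet.pb", input=[2, 300, 300, 3])
+   # The model input now has the static shape [2,300,300,3]
+   print(ov_model.inputs[0].partial_shape)
+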
+If a model has multiple inputs, the input shapes should be specified in the ``input`` parameter as a list. In ``ovc``, this is a comma-separated list, and in ``openvino.convert_model`` this is a Python list or tuple with the number of elements matching the number of inputs in the model. Use the input names from the original model to define the mapping between the inputs and the specified shapes.
+The following example demonstrates the conversion of the ONNX OCR model with a pair of inputs ``data`` and ``seq_len``
+and specifies shapes ``[3,150,200,1]`` and ``[3]`` for them respectively:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ ov_model = ov.convert_model("ocr.onnx", input=[("data", [3,150,200,1]), ("seq_len", [3])])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc ocr.onnx --input data[3,150,200,1],seq_len[3]
+
+If the order of inputs is defined in the input model and known to the user, the names can be omitted. In this case, it is important to specify the shapes in the same order as the model inputs:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ ov_model = ov.convert_model("ocr.onnx", input=([3,150,200,1], [3]))
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc ocr.onnx --input [3,150,200,1],[3]
+
+Whether a model has a defined order of inputs depends on the original framework. Setting shapes without parameter names is usually convenient for PyTorch model conversion, because a PyTorch model is a callable that typically accepts positional parameters. On the other hand, input names are convenient when converting models from model files, because naming inputs is a good practice in many frameworks that serialize models to files.
+
+The ``input`` parameter allows overriding the original input shapes if it is supported by the model topology.
+Shapes with dynamic dimensions in the original model can be replaced with static shapes for the converted model, and vice versa.
+A dynamic dimension can be marked as ``-1`` in the ``input`` parameter of ``openvino.convert_model``, or as ``-1`` or ``?`` when using ``ovc``.
+For example, launch model conversion for the ONNX OCR model and specify a dynamic batch dimension for the inputs:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ ov_model = ov.convert_model("ocr.onnx", input=[("data", [-1, 150, 200, 1]), ("seq_len", [-1])])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc ocr.onnx --input "data[?,150,200,1],seq_len[?]"
+
+To optimize memory consumption for models with undefined dimensions in runtime, model conversion API provides the capability to define boundaries of dimensions.
+The boundaries of an undefined dimension can be specified with the range notation (``min..max``) in the command line or with the ``openvino.Dimension`` class in Python.
+For example, launch model conversion for the ONNX OCR model and specify the boundary ``1..3`` for the batch dimension, which means that the batch dimension of the input tensor will be at least 1 and at most 3 during inference:
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. code-block:: py
+ :force:
+
+ import openvino as ov
+ batch_dim = ov.Dimension(1, 3)
+ ov_model = ov.convert_model("ocr.onnx", input=[("data", [batch_dim, 150, 200, 1]), ("seq_len", [batch_dim])])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. code-block:: sh
+
+ ovc ocr.onnx --input data[1..3,150,200,1],seq_len[1..3]
+
+In practice, not every model is designed in a way that allows changing its input shapes. An attempt to change the shapes of such models may lead to an exception during model conversion or later during inference, or even to wrong inference results without an explicit exception being raised. Knowledge of the model topology is required to set the shapes appropriately.
+For more information about shapes, follow the :doc:`inference troubleshooting `
+and :ref:`ways to relax shape inference flow ` guides.
+
+@endsphinxdirective
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md
new file mode 100644
index 00000000000000..e550d515b753ad
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md
@@ -0,0 +1,634 @@
+# Transition from Legacy Conversion API {#openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition}
+
+@sphinxdirective
+
+.. meta::
+ :description: Transition guide from MO / mo.convert_model() to OVC / ov.convert_model().
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
+
+In the 2023.1 OpenVINO release, the new OVC (OpenVINO Model Converter) tool was introduced together with the corresponding Python API: the ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` are
+a lightweight alternative to ``mo`` and ``openvino.tools.mo.convert_model``, which are now considered legacy. This article summarizes the differences between ``mo`` and ``ovc`` and provides a transition guide from the legacy API to the new API.
+
+Parameters Comparison
+#####################
+
+The following table compares the parameters of ov.convert_model() / OVC and mo.convert_model() / MO.
+
+.. list-table::
+ :widths: 20 25 55
+ :header-rows: 1
+
+ * - mo.convert_model() / MO
+ - ov.convert_model() / OVC
+ - Differences description
+ * - input_model
+ - input_model
+     - Along with a model object or a path to the input model, ov.convert_model() accepts a list of model parts, for example, the path to TensorFlow weights plus the path to a TensorFlow checkpoint. The OVC tool accepts the input model as an unnamed (positional) argument.
+ * - output_dir
+ - output_model
+ - output_model in OVC tool sets both output model name and output directory.
+ * - model_name
+ - output_model
+ - output_model in OVC tool sets both output model name and output directory.
+ * - input
+ - input
+     - ov.convert_model() accepts tuples for setting multiple parameters. The OVC 'input' parameter does not support setting types or freezing values. ov.convert_model() does not allow input cut.
+ * - output
+ - output
+ - ov.convert_model() does not allow output cut.
+ * - input_shape
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by ``input`` parameter.
+ * - example_input
+ - example_input
+ - No differences.
+ * - batch
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by model reshape functionality. See details below.
+ * - mean_values
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - scale_values
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - scale
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - reverse_input_channels
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - source_layout
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - target_layout
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - layout
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - compress_to_fp16
+ - compress_to_fp16
+ - OVC provides 'compress_to_fp16' for command line tool only, as compression is performed during saving a model to IR (Intermediate Representation).
+ * - extensions
+ - extension
+ - No differences.
+ * - transform
+ - N/A
+ - Not available in ov.convert_model() / OVC. Can be replaced by functionality from ``PrePostProcessor``. See details below.
+ * - transformations_config
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - static_shape
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - freeze_placeholder_with_value
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - use_legacy_frontend
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - silent
+ - verbose
+     - OVC / ov.convert_model provides the 'verbose' parameter instead of 'silent'. Detailed conversion information is printed when 'verbose' is set to True.
+ * - log_level
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - version
+ - version
+     - No differences.
+ * - progress
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - stream_output
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - share_weights
+ - share_weights
+ - No differences.
+ * - framework
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - help / -h
+ - help / -h
+ - OVC provides help parameter only in command line tool.
+ * - example_output
+ - output
+ - OVC / ov.convert_model 'output' parameter includes capabilities of MO 'example_output' parameter.
+ * - input_model_is_text
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - input_checkpoint
+ - input_model
+ - All supported model formats can be passed to 'input_model'.
+ * - input_meta_graph
+ - input_model
+ - All supported model formats can be passed to 'input_model'.
+ * - saved_model_dir
+ - input_model
+ - All supported model formats can be passed to 'input_model'.
+ * - saved_model_tags
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - tensorflow_custom_operations_config_update
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - tensorflow_object_detection_api_pipeline_config
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - tensorboard_logdir
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - tensorflow_custom_layer_libraries
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - input_symbol
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - nd_prefix_name
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - pretrained_model_name
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - save_params_from_nd
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - legacy_mxnet_model
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - enable_ssd_gluoncv
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - input_proto
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - caffe_parser_path
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - k
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - disable_omitting_optional
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - enable_flattening_nested_params
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - counts
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - remove_output_softmax
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+ * - remove_memory
+ - N/A
+ - Not available in ov.convert_model() / OVC.
+
+Transition from Legacy API to New API
+############################################################################
+
+mo.convert_model() provides a wide range of preprocessing parameters. Most of these parameters have analogs in OVC or can be replaced with functionality from the ``ov.PrePostProcessor`` class.
+The following sections show how to transition from legacy model preprocessing to preprocessing with the new API.
+
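+As a general pattern, a legacy ``mo.convert_model`` call maps to ``openvino.convert_model``, and the resulting model is saved to OpenVINO IR with ``openvino.save_model`` (weights are compressed to FP16 by default when saving). A rough sketch, with the model path used as a placeholder:
+
+.. code-block:: py
+   :force:
+
+   # Legacy API, considered legacy since the 2023.1 release
+   from openvino.tools import mo
+   ov_model = mo.convert_model('model.onnx')
+
+   # New API
+   import openvino as ov
+   ov_model = ov.convert_model('model.onnx')
+   ov.save_model(ov_model, 'model.xml')
+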
+
+``input_shape``
+################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, input_shape=[[1, 3, 100, 100],[1]])
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model, input=[[1, 3, 100, 100],[1]])
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --input_shape [1,3,100,100],[1] --output_dir OUTPUT_DIR
+
+ - .. code-block:: sh
+ :force:
+
+ ovc MODEL_NAME --input [1,3,100,100],[1] --output_model OUTPUT_MODEL
+
+``batch``
+##########
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, batch=2)
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+ input_shape = ov_model.inputs[0].partial_shape
+ input_shape[0] = 2 # batch size
+ ov_model.reshape(input_shape)
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --batch 2 --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``mean_values``
+################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, mean_values=[0.5, 0.5, 0.5])
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+ prep.input(input_name).preprocess().mean([0.5, 0.5, 0.5])
+ ov_model = prep.build()
+
+ There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `.
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --mean_values [0.5,0.5,0.5] --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``scale_values``
+#################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, scale_values=[255., 255., 255.])
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+ prep.input(input_name).preprocess().scale([255., 255., 255.])
+ ov_model = prep.build()
+
+ There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `.
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --scale_values [255,255,255] --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``reverse_input_channels``
+###########################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, reverse_input_channels=True)
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+ prep.input(input_name).preprocess().reverse_channels()
+ ov_model = prep.build()
+
+ There is currently no heuristic for automatic detection of the channel to which mean, scale or reverse channels should be applied. ``Layout`` needs to be explicitly specified with "C" channel. For example "NHWC", "NCHW", "?C??". See also :doc:`Layout API overview `.
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --reverse_input_channels --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``source_layout``
+##################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ import openvino as ov
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, source_layout={input_name: ov.Layout("NHWC")})
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).model().set_layout(ov.Layout("NHWC"))
+ ov_model = prep.build()
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --source_layout input_name(NHWC) --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``target_layout``
+##################
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ import openvino as ov
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, target_layout={input_name: ov.Layout("NHWC")})
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+ ov_model = prep.build()
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --target_layout input_name(NHWC) --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``layout``
+###########
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, layout={input_name: mo.LayoutMap("NCHW", "NHWC")})
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+
+ ov_model = ov.convert_model(model)
+
+ prep = ov.preprocess.PrePostProcessor(ov_model)
+ prep.input(input_name).model().set_layout(ov.Layout("NCHW"))
+ prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+ ov_model = prep.build()
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --layout "input_name(NCHW->NHWC)" --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
+``transform``
+##############
+
+.. tab-set::
+
+ .. tab-item:: Python
+ :sync: py
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: py
+ :force:
+
+ from openvino.tools import mo
+
+ ov_model = mo.convert_model(model, transform=[('LowLatency2', {'use_const_initializer': False}), 'Pruning', ('MakeStateful', {'param_res_names': {'input_name': 'output_name'}})])
+
+ - .. code-block:: py
+ :force:
+
+ import openvino as ov
+ from openvino._offline_transformations import apply_low_latency_transformation, apply_pruning_transformation, apply_make_stateful_transformation
+
+ ov_model = ov.convert_model(model)
+ apply_low_latency_transformation(model, use_const_initializer=False)
+ apply_pruning_transformation(model)
+ apply_make_stateful_transformation(model, param_res_names={'input_name': 'output_name'})
+
+ .. tab-item:: CLI
+ :sync: cli
+
+ .. list-table::
+ :header-rows: 1
+
+ * - Legacy API
+ - New API
+ * - .. code-block:: sh
+ :force:
+
+ mo --input_model MODEL_NAME --transform LowLatency2[use_const_initializer=False],Pruning,MakeStateful[param_res_names={'input_name':'output_name'}] --output_dir OUTPUT_DIR
+
+ - Not available in OVC tool. Please check Python API.
+
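+When several legacy preprocessing parameters were used together, the replacements shown above can simply be chained on the converted model. A rough sketch combining ``reverse_input_channels``, ``mean_values``, and ``scale_values`` (the ``model`` object, ``input_name``, layout, and values are assumptions made only for illustration):
+
+.. code-block:: py
+   :force:
+
+   import openvino as ov
+
+   ov_model = ov.convert_model(model)
+
+   prep = ov.preprocess.PrePostProcessor(ov_model)
+   prep.input(input_name).tensor().set_layout(ov.Layout("NHWC"))
+   prep.input(input_name).preprocess().reverse_channels()
+   prep.input(input_name).preprocess().mean([127.5, 127.5, 127.5])
+   prep.input(input_name).preprocess().scale([127.5, 127.5, 127.5])
+   ov_model = prep.build()
+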
+Supported Frameworks in MO vs OVC
+#################################
+
+ov.convert_model() and the OVC tool support conversion from PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle.
+The following frameworks are supported only by the legacy MO and mo.convert_model(): Caffe, MXNet, and Kaldi.
+
+@endsphinxdirective
+
+
diff --git a/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md
new file mode 100644
index 00000000000000..75262747e3a9fc
--- /dev/null
+++ b/docs/OV_Converter_UG/prepare_model/convert_model/supported_model_formats.md
@@ -0,0 +1,33 @@
+# Supported Model Formats {#Supported_Model_Formats}
+
+@sphinxdirective
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
+ openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle
+
+
+**OpenVINO IR (Intermediate Representation)** is the proprietary format of OpenVINO™, benefiting from the full extent of its features. It is the result of running the ``ovc`` CLI tool or ``openvino.save_model``. All other supported formats can be converted to the IR; refer to the following articles for details on conversion:
+
+* :doc:`How to convert Pytorch `
+* :doc:`How to convert ONNX `
+* :doc:`How to convert TensorFlow `
+* :doc:`How to convert TensorFlow Lite `
+* :doc:`How to convert PaddlePaddle `
+
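+For example, a model file can be converted and saved as OpenVINO IR with the Python API as follows (``ovc model.onnx --output_model model.xml`` is the rough CLI equivalent; the file names are placeholders):
+
+.. code-block:: py
+   :force:
+
+   import openvino as ov
+
+   ov_model = ov.convert_model('model.onnx')
+   ov.save_model(ov_model, 'model.xml')
+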
+To choose the best workflow for your application, read the :doc:`Introduction to Model Preparation` guide.
+
+Refer to the list of all supported conversion options in :doc:`Conversion Parameters `.
+
+Additional Resources
+####################
+
+* :doc:`Transition guide from the legacy to new conversion API `
+
+@endsphinxdirective
diff --git a/docs/get_started/get_started_demos.md b/docs/get_started/get_started_demos.md
index 61e6e60c600c7b..0ec61b0f0c3e1f 100644
--- a/docs/get_started/get_started_demos.md
+++ b/docs/get_started/get_started_demos.md
@@ -3,11 +3,11 @@
@sphinxdirective
.. meta::
- :description: Learn the details on the workflow of Intel® Distribution of OpenVINO™
+ :description: Learn the details on the workflow of Intel® Distribution of OpenVINO™
toolkit, and how to run inference, using provided code samples.
-The guide presents a basic workflow for building and running C++ code samples in OpenVINO. Note that these steps will not work with the Python samples.
+The guide presents a basic workflow for building and running C++ code samples in OpenVINO. Note that these steps will not work with the Python samples.
To get started, you must first install OpenVINO Runtime, install OpenVINO Development tools, and build the sample applications. See the :ref:`Prerequisites ` section for instructions.
@@ -40,8 +40,8 @@ Make sure that you also `install OpenCV `. This guide uses the ``googlenet-v1`` model from the Caffe framework, therefore, when you get to Step 4 of the installation, run the following command to install OpenVINO with the Caffe requirements:
@@ -76,11 +76,11 @@ You can use one of the following options to find a model suitable for OpenVINO:
- Download public or Intel pre-trained models from :doc:`Open Model Zoo ` using :doc:`Model Downloader tool `
- Download from GitHub, Caffe Zoo, TensorFlow Zoo, etc.
- Train your own model with machine learning tools
-
+
This guide uses OpenVINO Model Downloader to get pre-trained models. You can use one of the following commands to find a model with this method:
* List the models available in the downloader.
-
+
.. code-block:: sh
omz_info_dumper --print_all
@@ -115,21 +115,21 @@ This guide used the following model to run the Image Classification Sample:
:sync: windows
.. code-block:: bat
-
+
omz_downloader --name googlenet-v1 --output_dir %USERPROFILE%\Documents\models
.. tab-item:: Linux
:sync: linux
.. code-block:: sh
-
+
omz_downloader --name googlenet-v1 --output_dir ~/models
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
omz_downloader --name googlenet-v1 --output_dir ~/models
@@ -139,54 +139,54 @@ This guide used the following model to run the Image Classification Sample:
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
################|| Downloading models ||################
-
+
========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt
... 100%, 9 KB, ? KB/s, 0 seconds passed
-
+
========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel
... 100%, 4834 KB, 571 KB/s, 8 seconds passed
-
+
################|| Post-processing ||################
-
+
========== Replacing text in C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
###############|| Downloading models ||###############
-
+
========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.prototxt
-
+
========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed
-
+
###############|| Post processing ||###############
-
+
========= Replacing text in /home/username/models/public/googlenet-v1/googlenet-v1.prototxt =========
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
###############|| Downloading models ||###############
-
+
========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt
... 100%, 9 KB, 44058 KB/s, 0 seconds passed
-
+
========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.caffemodel
... 100%, 4834 KB, 4877 KB/s, 0 seconds passed
-
+
###############|| Post processing ||###############
-
+
========= Replacing text in /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt =========
-
+
.. _convert-models-to-intermediate-representation:
Step 2: Convert the Model with ``mo``
@@ -210,26 +210,26 @@ Create an ```` directory to contain the model's Intermediate Representat
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
mkdir %USERPROFILE%\Documents\ir
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
mkdir ~/ir
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
mkdir ~/ir
-To save disk space for your IR file, you can apply :doc:`weights compression to FP16 `. To generate an IR with FP16 weights, run model conversion with the ``--compress_to_fp16`` option.
+To save disk space for your IR files, OpenVINO stores weights in FP16 format by default.
Generic model conversion script:
@@ -246,23 +246,23 @@ The command with most placeholders filled in and FP16 precision:
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
mo --input_model %USERPROFILE%\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel --compress_to_fp16 --output_dir %USERPROFILE%\Documents\ir
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --compress_to_fp16 --output_dir ~/ir
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --compress_to_fp16 --output_dir ~/ir
.. _download-media:
@@ -290,75 +290,75 @@ To run the **Image Classification** code sample with an input image using the IR
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
\setupvars.bat
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
source /setupvars.sh
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
source /setupvars.sh
-
+
2. Go to the code samples release directory created when you built the samples earlier:
.. tab-set::
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
cd %USERPROFILE%\Documents\Intel\OpenVINO\openvino_samples_build\intel64\Release
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
cd ~/openvino_cpp_samples_build/intel64/Release
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
cd ~/openvino_cpp_samples_build/intel64/Release
-
+
3. Run the code sample executable, specifying the input media file, the IR for your model, and a target device for performing inference:
.. tab-set::
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
classification_sample_async.exe -i -m -d
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
classification_sample_async -i -m -d
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
classification_sample_async -i -m -d
-
+
Examples
++++++++
@@ -371,23 +371,23 @@ The following command shows how to run the Image Classification Code Sample usin
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d CPU
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU
-
+
.. tab-item:: macOS
:sync: macos
-
+
.. code-block:: sh
-
+
./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU
When the sample application is complete, you are given the label and confidence for the top 10 categories. The input image and sample output of the inference results is shown below:
@@ -418,24 +418,24 @@ The following example shows how to run the same sample using GPU as the target d
Running Inference on GPU
------------------------
-.. note::
-
+.. note::
+
Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps `, as described earlier on this page. Running on GPU is not compatible with macOS.
.. tab-set::
.. tab-item:: Windows
:sync: windows
-
+
.. code-block:: bat
-
+
.\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d GPU
.. tab-item:: Linux
:sync: linux
-
+
.. code-block:: sh
-
+
./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d GPU
diff --git a/docs/glossary.md b/docs/glossary.md
index 48d65d57d49b24..5380db526c10f7 100644
--- a/docs/glossary.md
+++ b/docs/glossary.md
@@ -3,7 +3,7 @@
@sphinxdirective
.. meta::
- :description: Check the list of acronyms, abbreviations and terms used in
+ :description: Check the list of acronyms, abbreviations and terms used in
Intel® Distribution of OpenVINO™ toolkit.
@@ -11,54 +11,55 @@ Acronyms and Abbreviations
#################################################
================== ===========================================================================
- Abbreviation Description
+ Abbreviation Description
================== ===========================================================================
- API Application Programming Interface
- AVX Advanced Vector Extensions
- clDNN Compute Library for Deep Neural Networks
- CLI Command Line Interface
- CNN Convolutional Neural Network
- CPU Central Processing Unit
- CV Computer Vision
- DL Deep Learning
- DLL Dynamic Link Library
- DNN Deep Neural Networks
- ELU Exponential Linear rectification Unit
- FCN Fully Convolutional Network
- FP Floating Point
- GCC GNU Compiler Collection
- GPU Graphics Processing Unit
- HD High Definition
- IR Intermediate Representation
- JIT Just In Time
- JTAG Joint Test Action Group
- LPR License-Plate Recognition
- LRN Local Response Normalization
- mAP Mean Average Precision
- Intel® OneDNN Intel® OneAPI Deep Neural Network Library
- `mo` Command-line tool for model conversion, CLI for ``tools.mo.convert_model``
- MVN Mean Variance Normalization
- NCDHW Number of images, Channels, Depth, Height, Width
- NCHW Number of images, Channels, Height, Width
- NHWC Number of images, Height, Width, Channels
- NMS Non-Maximum Suppression
- NN Neural Network
- NST Neural Style Transfer
- OD Object Detection
- OS Operating System
- PCI Peripheral Component Interconnect
- PReLU Parametric Rectified Linear Unit
- PSROI Position Sensitive Region Of Interest
- RCNN, R-CNN Region-based Convolutional Neural Network
- ReLU Rectified Linear Unit
- ROI Region Of Interest
- SDK Software Development Kit
- SSD Single Shot multibox Detector
- SSE Streaming SIMD Extensions
- USB Universal Serial Bus
- VGG Visual Geometry Group
- VOC Visual Object Classes
- WINAPI Windows Application Programming Interface
+ API Application Programming Interface
+ AVX Advanced Vector Extensions
+ clDNN Compute Library for Deep Neural Networks
+ CLI Command Line Interface
+ CNN Convolutional Neural Network
+ CPU Central Processing Unit
+ CV Computer Vision
+ DL Deep Learning
+ DLL Dynamic Link Library
+ DNN Deep Neural Networks
+ ELU Exponential Linear rectification Unit
+ FCN Fully Convolutional Network
+ FP Floating Point
+ GCC GNU Compiler Collection
+ GPU Graphics Processing Unit
+ HD High Definition
+ IR Intermediate Representation
+ JIT Just In Time
+ JTAG Joint Test Action Group
+ LPR License-Plate Recognition
+ LRN Local Response Normalization
+ mAP Mean Average Precision
+ Intel® OneDNN Intel® OneAPI Deep Neural Network Library
+ `mo` Command-line tool for model conversion, CLI for ``tools.mo.convert_model`` (legacy)
+ MVN Mean Variance Normalization
+ NCDHW Number of images, Channels, Depth, Height, Width
+ NCHW Number of images, Channels, Height, Width
+ NHWC Number of images, Height, Width, Channels
+ NMS Non-Maximum Suppression
+ NN Neural Network
+ NST Neural Style Transfer
+ OD Object Detection
+ OS Operating System
+ `ovc` OpenVINO Model Converter, command line tool for model conversion
+ PCI Peripheral Component Interconnect
+ PReLU Parametric Rectified Linear Unit
+ PSROI Position Sensitive Region Of Interest
+ RCNN, R-CNN Region-based Convolutional Neural Network
+ ReLU Rectified Linear Unit
+ ROI Region Of Interest
+ SDK Software Development Kit
+ SSD Single Shot multibox Detector
+ SSE Streaming SIMD Extensions
+ USB Universal Serial Bus
+ VGG Visual Geometry Group
+ VOC Visual Object Classes
+ WINAPI Windows Application Programming Interface
================== ===========================================================================
@@ -68,46 +69,46 @@ Terms
Glossary of terms used in OpenVINO™
-| *Batch*
+| *Batch*
| Number of images to analyze during one call of infer. Maximum batch size is a property of the model set before its compilation. In NHWC, NCHW, and NCDHW image data layout representations, the 'N' refers to the number of images in the batch.
-| *Device Affinity*
+| *Device Affinity*
| A preferred hardware device to run inference (CPU, GPU, GNA, etc.).
-| *Extensibility mechanism, Custom layers*
+| *Extensibility mechanism, Custom layers*
| The mechanism that provides you with capabilities to extend the OpenVINO™ Runtime and model conversion API so that they can work with models containing operations that are not yet supported.
| *layer / operation*
-| In OpenVINO, both terms are treated synonymously. To avoid confusion, "layer" is being pushed out and "operation" is the currently accepted term.
+| In OpenVINO, both terms are treated synonymously. To avoid confusion, "layer" is being pushed out and "operation" is the currently accepted term.
-| *Model conversion API*
-| A component of OpenVINO Development Tools. The API is used to import, convert, and optimize models trained in popular frameworks to a format usable by other OpenVINO components. In ``openvino.tools.mo`` namespace, model conversion API is represented by a Python ``mo.convert_model()`` method and ``mo`` command-line tool.
+| *Model conversion API*
+| The Conversion API is used to import and convert models trained in popular frameworks to a format usable by other OpenVINO components. Model conversion API is represented by a Python ``openvino.convert_model()`` method and ``ovc`` command-line tool.
-| *OpenVINO™ Core*
-| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.
+| *OpenVINO™ Core*
+| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.
-| *OpenVINO™ API*
+| *OpenVINO™ API*
| The basic default API for all supported devices, which allows you to load a model from Intermediate Representation or convert from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite file formats, set input and output formats and execute the model on various devices.
-| *OpenVINO™ Runtime*
+| *OpenVINO™ Runtime*
| A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.
-| *ov::Model*
+| *ov::Model*
| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite formats. Consists of model structure, weights and biases.
-| *ov::CompiledModel*
+| *ov::CompiledModel*
| An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously.
-| *ov::InferRequest*
+| *ov::InferRequest*
| A class that represents the end point of inference on the model compiled by the device and represented by a compiled model. Inputs are set here, outputs should be requested from this interface as well.
-| *ov::ProfilingInfo*
+| *ov::ProfilingInfo*
| Represents basic inference profiling information per operation.
-| *ov::Layout*
+| *ov::Layout*
| Image data layout refers to the representation of images batch. Layout shows a sequence of 4D or 5D tensor data in memory. A typical NCHW format represents pixel in horizontal direction, rows by vertical dimension, planes by channel and images into batch. See also [Layout API Overview](./OV_Runtime_UG/layout_overview.md).
-| *ov::element::Type*
+| *ov::element::Type*
| Represents data element type. For example, f32 is 32-bit floating point, f16 is 16-bit floating point.
| *plugin / Inference Device / Inference Mode*
diff --git a/docs/install_guides/installing-openvino-from-archive-linux.md b/docs/install_guides/installing-openvino-from-archive-linux.md
index 3eca6f4acf21fe..d7cfdf8d224787 100644
--- a/docs/install_guides/installing-openvino-from-archive-linux.md
+++ b/docs/install_guides/installing-openvino-from-archive-linux.md
@@ -3,34 +3,33 @@
@sphinxdirective
.. meta::
- :description: Learn how to install OpenVINO™ Runtime on the Linux operating
+ :description: Learn how to install OpenVINO™ Runtime on the Linux operating
system, using an archive file.
.. note::
-
+
Note that the Archive distribution:
-
+
* offers both C/C++ and Python APIs
- * additionally includes code samples
+ * additionally includes code samples
* is dedicated to users of all major OSs: Windows, Linux, macOS
* may offer different hardware support under different operating systems
(see the drop-down below for more details).
-
- .. dropdown:: Inference Options
- =================== ===== ===== ===== =====
- Operating System CPU GPU GNA NPU
- =================== ===== ===== ===== =====
- Debian9 armhf V n/a n/a n/a
- Debian9 arm64 V n/a n/a n/a
- CentOS7 x86_64 V V n/a n/a
- Ubuntu18 x86_64 V V V n/a
- Ubuntu20 x86_64 V V V V
- Ubuntu22 x86_64 V V V V
- RHEL8 x86_64 V V V n/a
- =================== ===== ===== ===== =====
+ .. dropdown:: Inference Options
+ =================== ===== ===== ===== =====
+ Operating System CPU GPU GNA NPU
+ =================== ===== ===== ===== =====
+ Debian9 armhf V n/a n/a n/a
+ Debian9 arm64 V n/a n/a n/a
+ CentOS7 x86_64 V V n/a n/a
+ Ubuntu18 x86_64 V V V n/a
+ Ubuntu20 x86_64 V V V V
+ Ubuntu22 x86_64 V V V V
+ RHEL8 x86_64 V V V n/a
+ =================== ===== ===== ===== =====
.. tab-set::
@@ -40,58 +39,58 @@
| Full requirement listing is available in:
| `System Requirements Page `__
-
+
.. tab-item:: Processor Notes
:sync: processor-notes
-
+
| To see if your processor includes the integrated graphics technology and supports iGPU inference, refer to:
| `Product Specifications `__
-
+
.. tab-item:: Software
:sync: software
-
+
* `CMake 3.13 or higher, 64-bit `__
* `Python 3.7 - 3.11, 64-bit `__
* GCC:
-
+
.. tab-set::
.. tab-item:: Ubuntu 20.04
:sync: ubuntu-20
-
+
* GCC 9.3.0
.. tab-item:: Ubuntu 18.04
:sync: ubuntu-18
-
+
* GCC 7.5.0
-
+
.. tab-item:: RHEL 8
:sync: rhel-8
-
+
* GCC 8.4.1
-
+
.. tab-item:: CentOS 7
:sync: centos-7
-
+
* GCC 8.3.1
Use the following instructions to install it:
-
+
Install GCC 8.3.1 via devtoolset-8
-
+
.. code-block:: sh
-
+
sudo yum update -y && sudo yum install -y centos-release-scl epel-release
sudo yum install -y devtoolset-8
-
+
Enable devtoolset-8 and check current gcc version
-
+
.. code-block:: sh
-
+
source /opt/rh/devtoolset-8/enable
gcc -v
-
-
+
+
@@ -107,19 +106,19 @@ Step 1: Download and Install the OpenVINO Core Components
2. Create the ``/opt/intel`` folder for OpenVINO by using the following command. If the folder already exists, skip this step.
.. code-block:: sh
-
+
sudo mkdir /opt/intel
-
+
.. note::
-
+
The ``/opt/intel`` path is the recommended folder path for administrators or root users. If you prefer to install OpenVINO in regular userspace, the recommended path is ``/home//intel``. You may use a different path if desired.
3. Browse to the current user's ``Downloads`` folder:
-
+
.. code-block:: sh
-
+
cd /Downloads
-
+
4. Download the `OpenVINO Runtime archive file for your system `_, extract the files, rename the extracted folder and move it to the desired path:
.. tab-set::
@@ -131,70 +130,70 @@ Step 1: Download and Install the OpenVINO Core Components
.. tab-item:: Ubuntu 22.04
:sync: ubuntu-22
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu22_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_ubuntu22_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: Ubuntu 20.04
:sync: ubuntu-20
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu20_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_ubuntu20_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: Ubuntu 18.04
:sync: ubuntu-18
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_ubuntu18_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_ubuntu18_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: RHEL 8
:sync: rhel-8
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_rhel8_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_rhel8_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: CentOS 7
:sync: centos-7
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_centos7_2023.0.1.11005.fa1c41994f3_x86_64.tgz --output openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_centos7_2023.0.1.11005.fa1c41994f3_x86_64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: ARM 64-bit
:sync: arm-64
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_arm64.tgz -O openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_arm64 /opt/intel/openvino_2023.0.1
-
+
.. tab-item:: ARM 32-bit
:sync: arm-32
-
+
.. code-block:: sh
-
+
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.0.1/linux/l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_armhf.tgz -O openvino_2023.0.1.tgz
tar -xf openvino_2023.0.1.tgz
sudo mv l_openvino_toolkit_debian9_2023.0.1.11005.fa1c41994f3_armhf /opt/intel/openvino_2023.0.1
-
-
+
+
5. Install required system dependencies on Linux. To do this, OpenVINO provides a script in the extracted installation directory. Run the following command:
-
+
.. code-block:: sh
cd /opt/intel/openvino_2023.0.1
@@ -214,33 +213,33 @@ Step 1: Download and Install the OpenVINO Core Components
python3 -m pip install -r ./python/requirements.txt
7. For simplicity, it is useful to create a symbolic link as below:
-
+
.. code-block:: sh
-
+
cd /opt/intel
sudo ln -s openvino_2023.0.1 openvino_2023
-
+
.. note::
- If you have already installed a previous release of OpenVINO 2023, a symbolic link to the ``openvino_2023`` folder may already exist.
+ If you have already installed a previous release of OpenVINO 2023, a symbolic link to the ``openvino_2023`` folder may already exist.
Unlink the previous link with ``sudo unlink openvino_2023``, and then re-run the command above.
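   For example, assuming the default installation path used above, replacing an existing link could look like:

   .. code-block:: sh

      cd /opt/intel
      sudo unlink openvino_2023
      sudo ln -s openvino_2023.0.1 openvino_2023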
-Congratulations, you have finished the installation! For some use cases you may still
-need to install additional components. Check the description below, as well as the
+Congratulations, you have finished the installation! For some use cases you may still
+need to install additional components. Check the description below, as well as the
:doc:`list of additional configurations `
to see if your case needs any of them.
-The ``/opt/intel/openvino_2023`` folder now contains the core components for OpenVINO.
-If you used a different path in Step 2, for example, ``/home/<USER>/intel/``,
-OpenVINO is now in ``/home/<USER>/intel/openvino_2023``. The path to the ``openvino_2023``
+The ``/opt/intel/openvino_2023`` folder now contains the core components for OpenVINO.
+If you used a different path in Step 2, for example, ``/home/<USER>/intel/``,
+OpenVINO is now in ``/home/<USER>/intel/openvino_2023``. The path to the ``openvino_2023``
directory is also referred to as ``<INSTALL_DIR>`` throughout the OpenVINO documentation.
Step 2: Configure the Environment
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-You must update several environment variables before you can compile and run OpenVINO applications.
-Open a terminal window and run the ``setupvars.sh`` script as shown below to temporarily set your environment variables.
+You must update several environment variables before you can compile and run OpenVINO applications.
+Open a terminal window and run the ``setupvars.sh`` script as shown below to temporarily set your environment variables.
If your ``<INSTALL_DIR>`` is not ``/opt/intel/openvino_2023``, use the correct one instead.
.. code-block:: sh
@@ -250,12 +249,12 @@ If your is not ``/opt/intel/openvino_2023``, use the correct one i
If you have more than one OpenVINO version installed on your system, you can easily switch versions by sourcing the ``setupvars.sh`` of your choice.
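For example, with two releases installed side by side, switching is just a matter of sourcing the corresponding script (the 2022 path below is only an assumption for illustration):

.. code-block:: sh

   # activate the release installed in this guide
   source /opt/intel/openvino_2023/setupvars.sh
   # or activate an older release, if one is installed (path assumed)
   source /opt/intel/openvino_2022/setupvars.sh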
-.. note::
-
- The above command must be re-run every time you start a new terminal session.
- To set up Linux to automatically run the command every time a new terminal is opened,
- open ``~/.bashrc`` in your favorite editor and add ``source /opt/intel/openvino_2023/setupvars.sh`` after the last line.
- Next time when you open a terminal, you will see ``[setupvars.sh] OpenVINO™ environment initialized``.
+.. note::
+
+ The above command must be re-run every time you start a new terminal session.
+ To set up Linux to automatically run the command every time a new terminal is opened,
+ open ``~/.bashrc`` in your favorite editor and add ``source /opt/intel/openvino_2023/setupvars.sh`` after the last line.
+ Next time when you open a terminal, you will see ``[setupvars.sh] OpenVINO™ environment initialized``.
Changing ``.bashrc`` is not recommended when you have multiple OpenVINO versions on your machine and want to switch among them.
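As a shortcut for the edit described in the note (assuming the default installation path), the line can be appended from a terminal instead of an editor:

.. code-block:: sh

   echo 'source /opt/intel/openvino_2023/setupvars.sh' >> ~/.bashrc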
The environment variables are set.
@@ -266,57 +265,57 @@ The environment variables are set.
What's Next?
############################################################
-Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications!
+Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications!
Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.
.. tab-set::
.. tab-item:: Get started with Python
:sync: get-started-py
-
+
Try the `Python Quick Start Example `_
to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
-
+
.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:width: 400
-
+
Visit the :doc:`Tutorials ` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
-
+
* `OpenVINO Python API Tutorial `__
* `Basic image classification program with Hello Image Classification `__
* `Convert a PyTorch model and use it for image background removal `__
-
-
+
+
.. tab-item:: Get started with C++
:sync: get-started-cpp
-
- Try the :doc:`C++ Quick Start Example ` for step-by-step instructions
+
+ Try the :doc:`C++ Quick Start Example ` for step-by-step instructions
on building and running a basic image classification C++ application.
-
+
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400
-
+
Visit the :doc:`Samples ` page for other C++ example applications to get you started with OpenVINO, such as:
-
+
* `Basic object detection with the Hello Reshape SSD C++ sample `__
* `Automatic speech recognition C++ sample `__
-
-
-
+
+
+
Uninstalling the Intel® Distribution of OpenVINO™ Toolkit
###########################################################
If you have installed OpenVINO Runtime from archive files, you can uninstall it by deleting the archive files and the extracted folders.
-Uninstallation removes all Intel® Distribution of OpenVINO™ Toolkit component files but does not affect user files in the installation directory.
+Uninstallation removes all Intel® Distribution of OpenVINO™ Toolkit component files but does not affect user files in the installation directory.
If you have created the symbolic link, remove the link first:
-
+
.. code-block:: sh
sudo rm /opt/intel/openvino_2023
-
+
To delete the files:
-
+
.. code-block:: sh
   rm -r <extracted_folder> && rm <path_to_downloaded_archive>
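For example, assuming the default paths used earlier in this guide and that the archive was downloaded to the current directory:

.. code-block:: sh

   sudo rm -r /opt/intel/openvino_2023.0.1 && rm openvino_2023.0.1.tgz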
@@ -330,7 +329,7 @@ Additional Resources
###########################################################
* :doc:`Troubleshooting Guide for OpenVINO Installation & Configuration `
-* Converting models for use with OpenVINO™: :doc:`Convert a Model `
+* Converting models for use with OpenVINO™: :doc:`Convert a Model `
* Writing your own OpenVINO™ applications: :doc:`OpenVINO™ Runtime User Guide `
* Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview `
* Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models `
diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md
index df7568f9a179bf..25e1aeac76ea02 100644
--- a/docs/install_guides/pypi-openvino-dev.md
+++ b/docs/install_guides/pypi-openvino-dev.md
@@ -1,4 +1,4 @@
-# OpenVINO™ Development Tools
+# OpenVINO™ Development Tools
> **NOTE**: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
@@ -31,11 +31,11 @@ pip install openvino-dev
### Installation in a New Environment
If you do not have an environment with the source deep learning framework for the input model or you encounter any compatibility issues between OpenVINO and your version of deep learning framework,
-you may install OpenVINO Development Tools with validated versions of frameworks into a new environment.
+you may install OpenVINO Development Tools with validated versions of frameworks into a new environment.
#### Step 1. Set Up Python Virtual Environment
-Use a virtual environment to avoid dependency conflicts.
+Use a virtual environment to avoid dependency conflicts.
To create a virtual environment, use the following commands:
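For example, a minimal setup with Python's built-in `venv` module might look like this (the environment name `openvino_env` is only an illustration):

```sh
python3 -m venv openvino_env
source openvino_env/bin/activate   # on Windows: openvino_env\Scripts\activate
```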
@@ -75,7 +75,7 @@ Use the following command:
```sh
pip install openvino-dev[extras]
```
- where `extras` is the source deep learning framework for the input model and is one or more of the following values separated with "," :
+ where `extras` is the source deep learning framework for the input model and is one or more of the following values, separated with ",":
| Extras Value | DL Framework |
| :-------------------------------| :------------------------------------------------------------------------------- |
@@ -113,34 +113,34 @@ For example, to install and configure the components for working with TensorFlow
## What's in the Package?
-> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications.
+> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications.
**In addition, the openvino-dev package installs the following components by default:**
-| Component | Console Script | Description |
+| Component | Console Script | Description |
|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Model conversion API](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br> Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. |
+| [Legacy Model conversion API](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model conversion API** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br> Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. |
| [Benchmark Tool](https://docs.openvino.ai/nightly/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. |
| [Accuracy Checker](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker.html) and <br> [Annotation Converter](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check` <br> `convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics. <br> **Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. |
| [Post-Training Optimization Tool](https://docs.openvino.ai/nightly/pot_introduction.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. |
-| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/nightly/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models: <br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. |
+| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/nightly/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with model conversion API. A number of additional tools are also provided to automate the process of working with downloaded models: <br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using model conversion API. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. |
## Troubleshooting
-For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.1/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages.
+For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.1/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations of several error messages.
### Errors with Installing via PIP for Users in China
Users in China might encounter errors while downloading sources via PIP during OpenVINO™ installation. To resolve the issues, try the following solution:
-
-* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example:
+
+* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example:
``` sh
pip install openvino-dev -i https://mirrors.aliyun.com/pypi/simple/
```
Use the ``--trusted-host`` parameter if the URL above is ``http`` instead of ``https``.
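For example, with a hypothetical plain-HTTP mirror the command could look like:
```sh
pip install openvino-dev -i http://mirrors.example.com/pypi/simple/ --trusted-host mirrors.example.com
```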
You can also run the following command to install openvino-dev with specific frameworks. For example:
-
+
```
pip install openvino-dev[tensorflow2] -i https://mirrors.aliyun.com/pypi/simple/
```
@@ -154,7 +154,7 @@ pip install openvino-dev[tensorflow2,mxnet,caffe]
zsh: no matches found: openvino-dev[tensorflow2,mxnet,caffe]
```
-By default zsh interprets square brackets as an expression for pattern matching. To resolve this issue, you need to escape the command with quotes:
+By default zsh interprets square brackets as an expression for pattern matching. To resolve this issue, you need to escape the command with quotes:
```sh
pip install 'openvino-dev[tensorflow2,mxnet,caffe]'
diff --git a/docs/model_zoo.md b/docs/model_zoo.md
index a7f95024b08b01..560b67304a771f 100644
--- a/docs/model_zoo.md
+++ b/docs/model_zoo.md
@@ -7,7 +7,7 @@
.. toctree::
:maxdepth: 1
:hidden:
-
+
omz_models_group_intel
omz_models_group_public
@@ -29,7 +29,7 @@
Open Model Zoo for OpenVINO™ toolkit delivers a wide variety of free, pre-trained deep learning models and demo applications that provide full application templates to help you implement deep learning in Python, C++, or OpenCV Graph API (G-API). Models and demos are available in the `Open Model Zoo GitHub repo `__ and licensed under Apache License Version 2.0.
-Browse through over 200 neural network models, both :doc:`public ` and from :doc:`Intel `, and pick the right one for your solution. Types include object detection, classification, image segmentation, handwriting recognition, text to speech, pose estimation, and others. The Intel models have already been converted to work with OpenVINO™ toolkit, while public models can easily be converted using the :doc:`Model Optimizer ` utility.
+Browse through over 200 neural network models, both :doc:`public ` and from :doc:`Intel `, and pick the right one for your solution. Types include object detection, classification, image segmentation, handwriting recognition, text to speech, pose estimation, and others. The Intel models have already been converted to work with OpenVINO™ toolkit, while public models can easily be converted using the :doc:`OpenVINO Model Conversion API ` utility.
Get started with simple :doc:`step-by-step procedures ` to learn how to build and run demo applications or discover the :doc:`full set of demos ` and adapt them for implementing specific deep learning scenarios in your applications.