diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md
index 54cfc1edc4de67..ba3f785829f26e 100644
--- a/docs/Documentation/model_introduction.md
+++ b/docs/Documentation/model_introduction.md
@@ -16,14 +16,14 @@ omz_tools_downloader
 
-Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, `Torchvision models `__.
+Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub `__, `Hugging Face `__, or `Torchvision models `__.
 
 Import a model using ``read_model()``
 #################################################
 
 Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite ` (check :doc:`TensorFlow Frontend Capabilities and Limitations `) do not require a separate step for model conversion, that is ``mo.convert_model``.
 
-The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model `__. If the file is in one of the supported original framework file :doc:`formats `, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format `, it is read "as-is", without any conversion involved.
+The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model `__. If the file is in one of the supported original framework :doc:`file formats `, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format `, it is read "as-is", without any conversion involved.
 
 You can also convert a model from original framework to `openvino.runtime.Model `__ using ``convert_model()`` method. More details about ``convert_model()`` are provided in :doc:`model conversion guide ` .
diff --git a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
index 91ad25df461cff..fb80b66773add6 100644
--- a/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
+++ b/docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
@@ -17,14 +17,14 @@ openvino_docs_MO_DG_prepare_model_convert_model_tutorials
 
 .. meta::
-   :description: Learn about supported model formats and the methods used to convert, read and compile them in OpenVINO™.
+   :description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.
 
-**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other model formats presented below will ultimately be converted to :doc:`OpenVINO IR `.
+**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR ` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.
 
-**PyTorch, TensorFlow, ONNX, and PaddlePaddle** may be used without any prior conversion and can be read by OpenVINO Runtime API by the use of ``read_model()`` or ``compile_model()``. Additional adjustment for the model can be performed using the ``convert_model()`` method, which allows you to set shapes, types or the layout of model inputs, cut parts of the model, freeze inputs etc. The detailed information of capabilities of ``convert_model()`` can be found in :doc:`this ` article.
+**PyTorch, TensorFlow, ONNX, and PaddlePaddle** can be used by OpenVINO Runtime API, via the ``read_model()`` and ``compile_model()`` commands, resulting in under-the-hood conversion to OpenVINO IR. To perform additional adjustments to the model you can use the ``convert_model()`` method, which allows you to set shapes, change model input types or layouts, cut parts of the model, freeze inputs etc. A detailed description of ``convert_model()`` capabilities can be found in the :doc:`model conversion guide `.
 
-Below you will find code examples for each method, for all supported model formats.
+Here are code examples for each method, for all supported model formats.
 
 .. tab-set::
 
@@ -531,7 +531,7 @@ Below you will find code examples for each method, for all supported model forma
    :doc:`article `.
 
-**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted to OpenVINO IR before running inference. The model conversion in some cases may involve intermediate steps. For more details, refer to the :doc:`MXNet `, :doc:`Caffe `, :doc:`Kaldi ` conversion guides.
+**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR before running inference. The model conversion in some cases may involve intermediate steps. For more details, refer to the :doc:`MXNet `, :doc:`Caffe `, :doc:`Kaldi ` conversion guides.
 
 OpenVINO is currently proceeding **to deprecate these formats** and **remove their support entirely in the future**. Converting these formats to ONNX or using an LTS version might be a viable solution for inference in OpenVINO Toolkit.