[DOCS] Docs file structure update with fixes (#23343)
- Updated paths across the entire documentation,
- updated the scripts used in the docs build process,
- updated the docs CMake configuration to handle the new scripts,
- fixed links,
- fixed other errors found in the docs.

---------

Co-authored-by: Maciej Smyk <[email protected]>
Co-authored-by: Sebastian Golebiewski <[email protected]>
Co-authored-by: Karol Blaszczak <[email protected]>
Co-authored-by: Andrzej Kopytko <[email protected]>
Co-authored-by: Tatiana Savina <[email protected]>
Co-authored-by: Vishniakov Nikolai <[email protected]>
7 people authored Mar 11, 2024
1 parent a78d914 commit a3c7e15
Showing 296 changed files with 6,150 additions and 6,056 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -160,9 +160,9 @@ You can also check out [Awesome OpenVINO](https://github.com/openvinotoolkit/awe
## System requirements

The system requirements vary depending on platform and are available on dedicated pages:
- - [Linux](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-linux-header.html)
- - [Windows](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-windows-header.html)
- - [macOS](https://docs.openvino.ai/2024/get-started/install-openvino-overview/install-openvino-macos-header.html)
+ - [Linux](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-linux.html)
+ - [Windows](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-windows.html)
+ - [macOS](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-macos.html)

## How to build

6 changes: 4 additions & 2 deletions docs/CMakeLists.txt
@@ -22,11 +22,13 @@ function(build_docs)

set(DOCS_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}")
set(DOCS_SOURCE_DIR "${OpenVINO_SOURCE_DIR}/docs")
+ set(ARTICLES_EN_DIR "${OpenVINO_SOURCE_DIR}/docs/articles_en")
set(SCRIPTS_DIR "${DOCS_SOURCE_DIR}/scripts")

# Preprocessing scripts
set(REMOVE_XML_SCRIPT "${SCRIPTS_DIR}/remove_xml.py")
set(FILE_HELPER_SCRIPT "${SCRIPTS_DIR}/filehelper.py")
+ set(ARTICLES_HELPER_SCRIPT "${SCRIPTS_DIR}/articles_helper.py")
set(COPY_IMAGES_SCRIPT "${SCRIPTS_DIR}/copy_images.py")
set(DOXYGEN_MAPPING_SCRIPT "${SCRIPTS_DIR}/create_mapping.py")
set(BREATHE_APIDOC_SCRIPT "${SCRIPTS_DIR}/apidoc.py")
@@ -38,9 +40,9 @@ function(build_docs)
set(SPHINX_OUTPUT "${DOCS_BUILD_DIR}/_build")

list(APPEND commands COMMAND ${CMAKE_COMMAND} -E cmake_echo_color --green "STARTED preprocessing OpenVINO articles")
- list(APPEND commands COMMAND ${Python3_EXECUTABLE} ${FILE_HELPER_SCRIPT}
+ list(APPEND commands COMMAND ${Python3_EXECUTABLE} ${ARTICLES_HELPER_SCRIPT}
--filetype=rst
- --input_dir=${OpenVINO_SOURCE_DIR}
+ --input_dir=${ARTICLES_EN_DIR}
--output_dir=${SPHINX_SOURCE_DIR}
--exclude_dir=${SPHINX_SOURCE_DIR})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E cmake_echo_color --green "FINISHED preprocessing OpenVINO articles")
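The hunks above only rewire the build to call the new preprocessing script; articles_helper.py itself is not visible in this commit view. As a hedged sketch of a script matching the flags passed from CMake — only the flag names (--filetype, --input_dir, --output_dir, --exclude_dir) come from the diff, all logic below is an assumption:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of articles_helper.py -- not the actual script from
this commit. Only the four flag names are taken from the CMake call above;
the copy logic is an assumption about what such a preprocessing step does."""
import argparse
import shutil
from pathlib import Path


def main() -> None:
    parser = argparse.ArgumentParser(description="Copy documentation articles into the Sphinx source tree.")
    parser.add_argument("--filetype", required=True, help="Extension of the files to process, e.g. 'rst'.")
    parser.add_argument("--input_dir", type=Path, required=True, help="Directory holding the source articles.")
    parser.add_argument("--output_dir", type=Path, required=True, help="Sphinx source directory to populate.")
    parser.add_argument("--exclude_dir", type=Path, default=None, help="Subtree to skip while scanning.")
    args = parser.parse_args()

    for src in args.input_dir.rglob(f"*.{args.filetype}"):
        # Skip files under the excluded directory (typically the output itself).
        if args.exclude_dir is not None and args.exclude_dir in src.parents:
            continue
        # Mirror the article's position under input_dir into output_dir.
        dst = args.output_dir / src.relative_to(args.input_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)


if __name__ == "__main__":
    main()
```

Note the narrower input in the new wiring: articles are now gathered from docs/articles_en rather than scanned from the whole OpenVINO source tree, which keeps unrelated .rst files out of the Sphinx source directory.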
32 changes: 16 additions & 16 deletions docs/articles_en/about-openvino.rst
@@ -8,11 +8,11 @@ About OpenVINO
:maxdepth: 1
:hidden:

- openvino_docs_performance_benchmarks
- compatibility_and_support
- system_requirements
- Release Notes <openvino_release_notes>
- Additional Resources <resources>
+ about-openvino/performance-benchmarks
+ about-openvino/compatibility-and-support
+ about-openvino/system-requirements
+ Release Notes <about-openvino/release-notes-openvino>
+ Additional Resources <about-openvino/additional-resources>

OpenVINO is a toolkit for simple and efficient deployment of various deep learning models.
In this section you will find information on the product itself, as well as the software
@@ -24,39 +24,39 @@ OpenVINO (Open Visual Inference and Neural network Optimization) is an open-sour
Features
##############################################################

One of the main purposes of OpenVINO is to streamline the deployment of deep learning models in user applications. It optimizes and accelerates model inference, which is crucial for such domains as Generative AI, Large Language models, and use cases like object detection, classification, segmentation, and many others.

- * :doc:`Model Optimization <openvino_docs_model_optimization_guide>`
+ * :doc:`Model Optimization <openvino-workflow/model-optimization>`

OpenVINO provides multiple optimization methods for both the training and post-training stages, including weight compression for Large Language models and Intel Optimum integration with Hugging Face.

- * :doc:`Model Conversion and Framework Compatibility <openvino_docs_model_processing_introduction>`
+ * :doc:`Model Conversion and Framework Compatibility <openvino-workflow/model-preparation>`

Supported models can be loaded directly or converted to the OpenVINO format to achieve better performance. Supported frameworks include ONNX, PyTorch, TensorFlow, TensorFlow Lite, Keras, and PaddlePaddle.

- * :doc:`Model Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
+ * :doc:`Model Inference <openvino-workflow/running-inference>`

OpenVINO accelerates deep learning models on various hardware platforms, ensuring real-time, efficient inference.

* `Deployment on a server <https://github.com/openvinotoolkit/model_server>`__

- A model can be deployed either locally using OpenVINO Runtime or on a model server. Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions. The model server enables quick model inference using external resources.
+ A model can be deployed either locally using OpenVINO Runtime or on a model server. Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions. The model server enables quick model inference using external about-openvino/additional-resources.

Architecture
##############################################################

To learn more about how OpenVINO works, read the Developer documentation on its `architecture <https://github.com/openvinotoolkit/openvino/blob/master/src/docs/architecture.md>`__ and `core components <https://github.com/openvinotoolkit/openvino/blob/master/src/README.md>`__.

OpenVINO Ecosystem
##############################################################

Along with the primary components of model optimization and runtime, the toolkit also includes:

* `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__ - a tool for enhanced OpenVINO™ inference to get performance boost with minimal accuracy drop.
- * :doc:`Openvino Notebooks <tutorials>`- Jupyter Python notebook tutorials, which demonstrate key features of the toolkit.
+ * :doc:`Openvino Notebooks <learn-openvino/interactive-tutorials-python>`- Jupyter Python notebook, which demonstrate key features of the toolkit.
* `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ - a server that enables scalability via a serving microservice.
- * :doc:`OpenVINO Training Extensions <ote_documentation>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
- * :doc:`Dataset Management Framework (Datumaro) <datumaro_documentation>` - a tool to build, transform, and analyze datasets.
+ * :doc:`OpenVINO Training Extensions <documentation/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
+ * :doc:`Dataset Management Framework (Datumaro) <documentation/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.

Community
##############################################################
@@ -66,7 +66,7 @@ OpenVINO community plays a vital role in the growth and development of the open-
* `OpenVINO GitHub issues, discussions and pull requests <https://github.com/openvinotoolkit/openvino>`__
* `OpenVINO Blog <https://blog.openvino.ai/>`__
* `Community Forum <https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit>`__
- * `OpenVINO video tutorials <https://www.youtube.com/watch?v=_Jnjt21ZDS8&list=PLg-UKERBljNxdIQir1wrirZJ50yTp4eHv>`__
+ * `OpenVINO video <https://www.youtube.com/watch?v=_Jnjt21ZDS8&list=PLg-UKERBljNxdIQir1wrirZJ50yTp4eHv>`__
* `Support Information <https://www.intel.com/content/www/us/en/support/products/96066/software/development-software/openvino-toolkit.html>`__

Case Studies
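Every :doc: change in this file follows one pattern: an old reference-label target such as openvino_docs_model_optimization_guide becomes a path-based target such as openvino-workflow/model-optimization. A rename of this scale across 296 files is usually scripted from an explicit old-to-new mapping; the sketch below is hypothetical — the mapping entries are real pairs taken from this diff, but the helper itself is not part of the commit:

```python
"""Hypothetical migration helper -- not part of the commit. The mapping
entries are old->new pairs taken from the diff above; everything else is an
assumption about how such a bulk rename could be scripted."""
import re
from pathlib import Path

# Old reference-label targets -> new path-based targets (samples from this file).
DOC_TARGET_MAP = {
    "openvino_docs_model_optimization_guide": "openvino-workflow/model-optimization",
    "openvino_docs_model_processing_introduction": "openvino-workflow/model-preparation",
    "openvino_docs_OV_UG_OV_Runtime_User_Guide": "openvino-workflow/running-inference",
}

# Matches ":doc:`Link Text <target>`"; bare toctree entries would need a second pass.
DOC_ROLE = re.compile(r"(:doc:`[^<`]*<)([^>`]+)(>`)")


def rewrite_doc_targets(text: str) -> str:
    # Replace only targets present in the map; leave unknown targets untouched.
    def repl(match: re.Match) -> str:
        target = match.group(2)
        return match.group(1) + DOC_TARGET_MAP.get(target, target) + match.group(3)

    return DOC_ROLE.sub(repl, text)


if __name__ == "__main__":
    path = Path("docs/articles_en/about-openvino.rst")
    path.write_text(rewrite_doc_targets(path.read_text(encoding="utf-8")), encoding="utf-8")
```

A target-aware rewrite like this leaves ordinary prose alone; a blind textual search-and-replace would produce exactly the kind of accident visible above, where the word "resources" inside a sentence became "about-openvino/additional-resources".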
14 changes: 7 additions & 7 deletions docs/articles_en/about-openvino/additional-resources.rst
@@ -13,19 +13,19 @@ Additional Resources
:maxdepth: 1
:hidden:

- openvino_docs_OV_Glossary
- openvino_docs_Legal_Information
- openvino_docs_telemetry_information
+ additional-resources/glossary
+ additional-resources/legal-information
+ additional-resources/telemetry
Case Studies <https://www.intel.com/openvino-success-stories>


- :doc:`Performance Benchmarks <openvino_docs_performance_benchmarks>` contain results from benchmarking models with OpenVINO on Intel hardware.
+ :doc:`Performance Benchmarks <performance-benchmarks>` contain results from benchmarking models with OpenVINO on Intel hardware.

- :doc:`Glossary <openvino_docs_OV_Glossary>` contains terms used in OpenVINO.
+ :doc:`Glossary <additional-resources/glossary>` contains terms used in OpenVINO.

- :doc:`Legal Information <openvino_docs_Legal_Information>` has trademark information and other legal statements.
+ :doc:`Legal Information <additional-resources/legal-information>` has trademark information and other legal statements.

- :doc:`OpenVINO™ Telemetry <openvino_docs_telemetry_information>` has detailed information on the telemetry data collection.
+ :doc:`OpenVINO™ Telemetry <additional-resources/telemetry>` has detailed information on the telemetry data collection.

`Case Studies <https://www.intel.com/openvino-success-stories>`__ are articles about real-world examples of OpenVINO™ usage.

docs/articles_en/about-openvino/additional-resources/glossary.rst
@@ -119,7 +119,7 @@ Glossary of terms used in OpenVINO™
still find this term in some articles. Because of their role in the software,
they are now referred to as Devices and Modes ("virtual" devices). For a detailed
description of the concept, refer to
- :doc:`Inference Devices and Modes <openvino_docs_Runtime_Inference_Modes_Overview>`.
+ :doc:`Inference Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>`.
| *Tensor*
| A memory container used for storing inputs and outputs of the model, as well as
@@ -128,4 +128,4 @@

See Also
#################################################
- * :doc:`Available Operations Sets <openvino_docs_ops_opset>`
+ * :doc:`Available Operations Sets <../../documentation/openvino-ir-format/operation-sets/available-opsets>`
16 changes: 8 additions & 8 deletions docs/articles_en/about-openvino/compatibility-and-support.rst
@@ -8,19 +8,19 @@ Compatibility and Support
:maxdepth: 1
:hidden:

- openvino_supported_models
- openvino_supported_devices
- openvino_resources_supported_operations
- openvino_resources_supported_operations_frontend
+ compatibility-and-support/supported-models
+ compatibility-and-support/supported-devices
+ compatibility-and-support/supported-operations-inference-devices
+ compatibility-and-support/supported-operations-framework-frontend


- :doc:`Supported Devices <openvino_supported_devices>` - compatibility information for supported hardware accelerators.
+ :doc:`Supported Devices <compatibility-and-support/supported-devices>` - compatibility information for supported hardware accelerators.

- :doc:`Supported Models <openvino_supported_models>` - a table of models officially supported by OpenVINO.
+ :doc:`Supported Models <compatibility-and-support/supported-models>` - a table of models officially supported by OpenVINO.

- :doc:`Supported Operations <openvino_resources_supported_operations>` - a listing of framework layers supported by OpenVINO.
+ :doc:`Supported Operations <compatibility-and-support/supported-operations-inference-devices>` - a listing of framework layers supported by OpenVINO.

- :doc:`Supported Operations <openvino_resources_supported_operations_frontend>` - a listing of layers supported by OpenVINO inference devices.
+ :doc:`Supported Operations <compatibility-and-support/supported-operations-framework-frontend>` - a listing of layers supported by OpenVINO inference devices.



docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst
@@ -11,45 +11,45 @@ Inference Device Support

The OpenVINO™ runtime enables you to use a selection of devices to run your
deep learning models:
- :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>`,
- :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>`,
- :doc:`NPU <openvino_docs_OV_UG_supported_plugins_NPU>`.
+ :doc:`CPU <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`,
+ :doc:`GPU <../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`,
+ :doc:`NPU <../../openvino-workflow/running-inference/inference-devices-and-modes/npu-device>`.

- | For their usage guides, see :doc:`Devices and Modes <openvino_docs_Runtime_Inference_Modes_Overview>`.
- | For a detailed list of devices, see :doc:`System Requirements <system_requirements>`.
+ | For their usage guides, see :doc:`Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>`.
+ | For a detailed list of devices, see :doc:`System Requirements <../system-requirements>`.
Beside running inference with a specific device,
OpenVINO offers the option of running automated inference with the following inference modes:

- * :doc:`Automatic Device Selection <openvino_docs_OV_UG_supported_plugins_AUTO>` - automatically selects the best device
+ * :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - automatically selects the best device
available for the given task. It offers many additional options and optimizations, including inference on
multiple devices at the same time.
- * :doc:`Heterogeneous Inference <openvino_docs_OV_UG_Hetero_execution>` - enables splitting inference among several devices
+ * :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` - enables splitting inference among several devices
automatically, for example, if one device doesn't support certain operations.
- * :doc:`Multi-device Inference <openvino_docs_OV_UG_Running_on_multiple_devices>` - executes inference on multiple devices.
+ * :doc:`Multi-device Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/multi-device>` - executes inference on multiple devices.
Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.
- * :doc:`Automatic Batching <openvino_docs_OV_UG_Automatic_Batching>` - automatically groups inference requests to improve
+ * :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` - automatically groups inference requests to improve
device utilization.



Feature Support and API Coverage
#################################

- ================================================================================== ======= ========== ===========
- Supported Feature                                                                  CPU     GPU        NPU
- ================================================================================== ======= ========== ===========
- :doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>`              Yes     Yes        No
- :doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>`    Yes     Yes        Partial
- :doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>`                 No      Yes        No
- :doc:`Multi-stream execution <openvino_docs_deployment_optimization_guide_tput>`   Yes     Yes        No
- :doc:`Models caching <openvino_docs_OV_UG_Model_caching_overview>`                 Yes     Partial    Yes
- :doc:`Dynamic shapes <openvino_docs_OV_UG_DynamicShapes>`                          Yes     Partial    No
- :doc:`Import/Export <openvino_ecosystem>`                                          Yes     No         Yes
- :doc:`Preprocessing acceleration <openvino_docs_OV_UG_Preprocessing_Overview>`     Yes     Yes        No
- :doc:`Stateful models <openvino_docs_OV_UG_stateful_models_intro>`                 Yes     No         Yes
- :doc:`Extensibility <openvino_docs_Extensibility_UG_Intro>`                        Yes     Yes        No
- ================================================================================== ======= ========== ===========
+ =============================================================================================================================== ======= ========== ===========
+ Supported Feature                                                                                                               CPU     GPU        NPU
+ =============================================================================================================================== ======= ========== ===========
+ :doc:`Heterogeneous execution <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`         Yes     Yes        No
+ :doc:`Multi-device execution <../../openvino-workflow/running-inference/inference-devices-and-modes/multi-device>`              Yes     Yes        Partial
+ :doc:`Automatic batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`            No      Yes        No
+ :doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>`              Yes     Yes        No
+ :doc:`Models caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`  Yes     Partial    Yes
+ :doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>`                                                Yes     Partial    No
+ :doc:`Import/Export <../../documentation/openvino-ecosystem>`                                                                   Yes     No         Yes
+ :doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>`         Yes     Yes        No
+ :doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>`                                              Yes     No         Yes
+ :doc:`Extensibility <../../documentation/openvino-extensibility>`                                                               Yes     Yes        No
+ =============================================================================================================================== ======= ========== ===========


+-------------------------+-----------+------------------+-------------------+
@@ -82,11 +82,11 @@ Devices similar to the ones used for benchmarking can be accessed using
`Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__,
a remote development environment with access to Intel® hardware and the latest versions
of the Intel® Distribution of OpenVINO™ Toolkit.
- `Learn more <https://devcloud.intel.com/edge/get_started/devcloud/>`__ or
+ `Learn more <https://devcloud.intel.com/edge/../../get-started/devcloud/>`__ or
`Register here <https://inteliot.force.com/DevcloudForEdge/s/>`__.

For setting up a relevant configuration, refer to the
- :doc:`Integrate with Customer Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
+ :doc:`Integrate with Customer Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
topic (step 3 "Configure input and output").


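The ../../ prefixes in this file fall out of the new convention: targets are written relative to the directory of the referencing article, and supported-devices sits two directory levels below docs/articles_en. A small illustrative sketch of how such targets can be derived — the function and file names here are hypothetical, not from the commit:

```python
"""Illustrative only -- shows where the ../../ prefixes in the new targets
come from; the helper below is hypothetical, not part of the commit."""
import posixpath
from pathlib import PurePosixPath


def doc_target(referencing_file: str, referenced_doc: str) -> str:
    """Compute a :doc: target relative to the directory of the referencing article.

    Both arguments are given relative to the docs root; the referenced doc is
    written without its .rst extension, as Sphinx expects."""
    start = PurePosixPath(referencing_file).parent
    return posixpath.relpath(referenced_doc, start=str(start))


print(doc_target(
    "about-openvino/compatibility-and-support/supported-devices.rst",
    "openvino-workflow/running-inference/inference-devices-and-modes/cpu-device",
))
# -> ../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device
```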
