From a0962e12bc5ac5945d6bdf1802fd01cace777cdd Mon Sep 17 00:00:00 2001
From: Nikolay Tyukaev
Date: Tue, 15 Dec 2020 19:52:24 +0300
Subject: [PATCH] doc fixes (#3626)

Co-authored-by: Nikolay Tyukaev

# Conflicts:
#	docs/IE_DG/network_state_intro.md
---
 docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md                 | 2 +-
 .../convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md  | 2 +-
 docs/doxygen/ie_docs.xml                                       | 1 +
 docs/install_guides/installing-openvino-windows.md             | 2 +-
 .../python/sample/ngraph_function_creation_sample/README.md    | 4 ++--
 5 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
index 73a0800e2b56e0..b6728e65bc402d 100644
--- a/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
+++ b/docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
@@ -28,7 +28,7 @@ The example below demonstrates an exemplary model that requires previously creat
 
 @snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model
 
-For a reference on how to create a graph with nGraph operations, visit [nGraph tutorial](../nGraphTutorial.md).
+For a reference on how to create a graph with nGraph operations, visit [Custom nGraph Operations](AddingNGraphOps.md).
 For a complete list of predefined nGraph operators, visit [available operations sets](../../ops/opset.md).
 
 If operator is no longer needed, it can be unregistered by calling `unregister_operator`. The function takes three arguments `op_type`, `version`, and `domain`.
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
index 0073ac2f5490ca..4f905b3369ef8b 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
@@ -110,7 +110,7 @@ where:
 
 > **NOTE:** The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion specifying the command-line parameter: `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../Converting_Model_General.md).
 
-OpenVINO™ toolkit provides a demo that uses YOLOv3 model. For more information, refer to [Object Detection YOLO* V3 Demo, Async API Performance Showcase](@ref omz_demos_object_detection_demo_yolov3_async_README).
+OpenVINO™ toolkit provides a demo that uses YOLOv3 model. For more information, refer to [Object Detection C++ Demo](@ref omz_demos_object_detection_demo_ssd_async_README).
 
 ## Convert YOLOv1 and YOLOv2 Models to the IR
 
diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml
index a6b2dcb47cf43e..31ba924a731d02 100644
--- a/docs/doxygen/ie_docs.xml
+++ b/docs/doxygen/ie_docs.xml
@@ -98,6 +98,7 @@ limitations under the License.
+
diff --git a/docs/install_guides/installing-openvino-windows.md b/docs/install_guides/installing-openvino-windows.md
index 1b63f2de841e91..e2edaaf50c77b1 100644
--- a/docs/install_guides/installing-openvino-windows.md
+++ b/docs/install_guides/installing-openvino-windows.md
@@ -473,7 +473,7 @@ To learn more about converting deep learning models, go to:
 - [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
 - [Intel Distribution of OpenVINO Toolkit documentation](https://software.intel.com/en-us/openvino-toolkit/documentation/featured)
 - [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-- [Introduction to Inference Engine](inference_engine_intro.md)
+- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
 - [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
 - [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 - [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
diff --git a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
index 75b05f78c5f5df..bdba6c38ab46e3 100644
--- a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
+++ b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
@@ -13,7 +13,7 @@ When the inference is done, the application outputs inference results to the sta
 
 > **NOTE**: This sample supports models with FP32 weights only.
 
-The `lenet.bin` weights file was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
 
 The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
@@ -69,4 +69,4 @@ By default, the application outputs top-1 inference result for each inference re
 
 ## See Also
 
-* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
+* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)