
Commit

doc fixes (openvinotoolkit#3626)
Co-authored-by: Nikolay Tyukaev <[email protected]>
2 people authored and maxim-kurin committed Jan 20, 2021
1 parent 6340486 commit 365d9a6
Showing 6 changed files with 34 additions and 36 deletions.
2 changes: 1 addition & 1 deletion docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
Original file line number Diff line number Diff line change
@@ -28,7 +28,7 @@ The example below demonstrates an exemplary model that requires previously creat
@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model


-For a reference on how to create a graph with nGraph operations, visit [nGraph tutorial](../nGraphTutorial.md).
+For a reference on how to create a graph with nGraph operations, visit [Custom nGraph Operation](AddingNGraphOps.md).
For a complete list of predefined nGraph operators, visit [available operations sets](../../ops/opset.md).

If an operator is no longer needed, it can be unregistered by calling `unregister_operator`. The function takes three arguments: `op_type`, `version`, and `domain`.
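As a rough sketch of such a call (the operator name and domain below are hypothetical placeholders, not values from the original example):

```cpp
// Unregister a previously registered custom ONNX operator.
// "CustomRelu", version 1, and "com.example" are illustrative values only.
ngraph::onnx_import::unregister_operator("CustomRelu", 1, "com.example");
```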
59 changes: 28 additions & 31 deletions docs/IE_DG/network_state_intro.md
@@ -216,35 +216,32 @@ LowLatency transformation changes the structure of the network containing [Tenso
* [from nGraph Function](../nGraph_DG/build_function.md)
2. [Reshape](ShapeInference) the CNNNetwork if necessary
**Necessary case:** the `sequence_lengths` dimension of an input is greater than 1, which means the TensorIterator layer will have `number_iterations` > 1. Reshape the network inputs to set `sequence_dimension` to exactly 1.
```cpp
// Network before reshape: Parameter (name: X, shape: [2 (sequence_lengths), 1, 16]) -> TensorIterator (num_iteration = 2, axis = 0) -> ...
cnnNetwork.reshape({{"X", {1, 1, 16}}});
// Network after reshape: Parameter (name: X, shape: [1 (sequence_lengths), 1, 16]) -> TensorIterator (num_iteration = 1, axis = 0) -> ...
```
3. Apply the LowLatency transformation
```cpp
#include "ie_transformations.hpp"

...

InferenceEngine::LowLatency(cnnNetwork);
```
**State naming rule:** a state name is a concatenation of three names: the original TensorIterator operation, the Parameter of the body, and the suffix "variable_" + id (0-based indexing, restarted for each TensorIterator), for example:
```
tensor_iterator_name = "TI_name"
body_parameter_name = "param_name"

state_name = "TI_name/param_name/variable_0"
```
4. [Use state API](#openvino-state-api)
@@ -265,14 +262,14 @@ LowLatency transformation changes the structure of the network containing [Tenso
**Current solution:** trim non-reshapable layers via the [ModelOptimizer CLI](../MO_DG/prepare_model/convert_model/Converting_Model_General.md) `--input` and `--output` options, or via nGraph.
```cpp
// nGraph example:
auto func = cnnNetwork.getFunction();
auto new_const = std::make_shared<ngraph::opset5::Constant>(); // type, shape, value
for (const auto& node : func->get_ops()) {
    if (node->get_friendly_name() == "name_of_non_reshapable_const") {
        auto bad_const = std::dynamic_pointer_cast<ngraph::opset5::Constant>(node);
        ngraph::replace_node(bad_const, new_const); // replace constant
    }
}
```
@@ -110,7 +110,7 @@ where:

> **NOTE:** The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they differ, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../Converting_Model_General.md).
-OpenVINO&trade; toolkit provides a demo that uses YOLOv3 model. For more information, refer to [Object Detection YOLO* V3 Demo, Async API Performance Showcase](@ref omz_demos_object_detection_demo_yolov3_async_README).
+OpenVINO&trade; toolkit provides a demo that uses YOLOv3 model. For more information, refer to [Object Detection C++ Demo](@ref omz_demos_object_detection_demo_ssd_async_README).

## Convert YOLOv1 and YOLOv2 Models to the IR

1 change: 1 addition & 0 deletions docs/doxygen/ie_docs.xml
@@ -79,6 +79,7 @@
<tab type="user" title="Atan-1" url="@ref openvino_docs_ops_arithmetic_Atan_1"/>
<tab type="user" title="Atanh-3" url="@ref openvino_docs_ops_arithmetic_Atanh_3"/>
<tab type="user" title="AvgPool-1" url="@ref openvino_docs_ops_pooling_AvgPool_1"/>
<tab type="user" title="BatchNormInference-1" url="@ref openvino_docs_ops_normalization_BatchNormInference_1"/>
<tab type="user" title="BatchNormInference-5" url="@ref openvino_docs_ops_normalization_BatchNormInference_5"/>
<tab type="user" title="BatchToSpace-2" url="@ref openvino_docs_ops_movement_BatchToSpace_2"/>
<tab type="user" title="BinaryConvolution-1" url="@ref openvino_docs_ops_convolution_BinaryConvolution_1"/>
2 changes: 1 addition & 1 deletion docs/install_guides/installing-openvino-windows.md
@@ -486,7 +486,7 @@ To learn more about converting deep learning models, go to:
- [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
- [Intel Distribution of OpenVINO Toolkit documentation](https://software.intel.com/en-us/openvino-toolkit/documentation/featured)
- [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-- [Introduction to Inference Engine](inference_engine_intro.md)
+- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
- [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
@@ -13,7 +13,7 @@ When the inference is done, the application outputs inference results to the sta

> **NOTE**: This sample supports models with FP32 weights only.
-The `lenet.bin` weights file was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.

@@ -69,4 +69,4 @@ By default, the application outputs top-1 inference result for each inference re

## See Also

-* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
+* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
