Merge remote-tracking branch 'upstream/master' into no_replace_parameter
ilyachur committed Jul 22, 2021
2 parents c3e1e1c + 1e1e3bf commit aa87c1f
Showing 35 changed files with 550 additions and 138 deletions.
2 changes: 1 addition & 1 deletion .ci/azure/linux_onnxruntime.yml
@@ -112,7 +112,7 @@ jobs:

- script: |
source $(INSTALL_DIR)/bin/setupvars.sh
-echo "2021.2" > $(INSTALL_DIR)/deployment_tools/inference_engine/version.txt
+echo "2021.4" > $(INSTALL_DIR)/deployment_tools/inference_engine/version.txt
CXXFLAGS="-Wno-error=deprecated-declarations" ./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --skip_tests --build_dir $(ONNXRUNTIME_BUILD_DIR)
workingDirectory: $(ONNXRUNTIME_REPO_DIR)
displayName: 'Build ONNX Runtime'
3 changes: 0 additions & 3 deletions .ci/azure/mac.yml
@@ -87,9 +87,6 @@ jobs:
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
export CXX=g++
-# Disable errors with Ninja
-export CXXFLAGS="-Wno-error=unused-command-line-argument"
-export CFLAGS="-Wno-error=unused-command-line-argument"
cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -0,0 +1,15 @@
# Converting RetinaNet Model from TensorFlow* to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow}

This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).

The [public RetinaNet model](https://github.com/fizyr/keras-retinanet) does not contain pretrained TensorFlow\* weights.
To convert this model to the TensorFlow\* format, follow the [Reproduce Keras* to TensorFlow* Conversion tutorial](https://docs.openvinotoolkit.org/latest/omz_models_model_retinanet_tf.html).

After you convert the model to the TensorFlow\* format, run the Model Optimizer command below:
```sh
python mo.py --input "input_1[1 1333 1333 3]" --input_model retinanet_resnet50_coco_best_v2.1.0.pb --data_type FP32 --transformations_config ./extensions/front/tf/retinanet.json
```

The `transformations_config` command-line parameter specifies a configuration JSON file containing model conversion hints for the Model Optimizer.
The JSON file contains parameters that need to be changed if you train the model yourself, as well as information on how to match endpoints
in order to replace the subgraph nodes. After the model is converted to the IR, the output nodes are replaced with a DetectionOutput layer.
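
For illustration only, such a transformation configuration file is a JSON list of replacement descriptions. The sketch below shows the general shape of an entry; the `id` and the attribute names and values here are hypothetical placeholders, not the actual contents of `retinanet.json`:

```json
[
    {
        "id": "RetinaNetSubgraphReplacement",
        "match_kind": "general",
        "custom_attributes": {
            "confidence_threshold": 0.05,
            "nms_threshold": 0.5
        }
    }
]
```

If you retrain the model with different post-processing settings, the values under `custom_attributes` are the kind of parameters you would typically adjust.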
2 changes: 2 additions & 0 deletions docs/doxygen/ie_docs.xml
@@ -34,6 +34,7 @@ limitations under the License.
<tab type="user" title="Converting DeepSpeech Model from TensorFlow" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_DeepSpeech_From_Tensorflow"/>
<tab type="user" title="Converting Language Model on One Billion Word Benchmark from TensorFlow" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_lm_1b_From_Tensorflow"/>
<tab type="user" title="Converting TensorFlow* Object Detection API Models" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models"/>
+<tab type="user" title="Converting RetinaNet Model from TensorFlow" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_RetinaNet_From_Tensorflow"/>
<tab type="user" title="Converting TensorFlow*-Slim Image Classification Model Library Models" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Slim_Library_Models"/>
<tab type="user" title="Converting CRNN Model from TensorFlow" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_CRNN_From_Tensorflow"/>
<tab type="user" title="Converting GNMT from TensorFlow" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_GNMT_From_Tensorflow"/>
@@ -176,6 +177,7 @@ limitations under the License.
<tab type="user" title="HSigmoid-5" url="@ref openvino_docs_ops_activation_HSigmoid_5"/>
<tab type="user" title="HSwish-4" url="@ref openvino_docs_ops_activation_HSwish_4"/>
<tab type="user" title="IDFT-7" url="@ref openvino_docs_ops_signals_IDFT_7"/>
+<tab type="user" title="If-8" url="@ref openvino_docs_ops_condition_If_8"/>
<tab type="user" title="Interpolate-1" url="@ref openvino_docs_ops_image_Interpolate_1"/>
<tab type="user" title="Interpolate-4" url="@ref openvino_docs_ops_image_Interpolate_4"/>
<tab type="user" title="LRN-1" url="@ref openvino_docs_ops_normalization_LRN_1"/>
226 changes: 226 additions & 0 deletions docs/ops/condition/If_8.md
@@ -0,0 +1,226 @@
## If <a name="If"></a> {#openvino_docs_ops_condition_If_8}

**Versioned name**: *If-8*

**Category**: Infrastructure

**Short description**: The *If* operation contains two internal networks (subgraphs), `then_body` and `else_body`,
and executes one of them depending on the `cond` value. If `cond` is `True`, `then_body` is executed. If `cond` is `False`,
the operation executes the `else_body` subgraph.

**Detailed description**

*If* must not contain empty subgraphs: each subgraph must contain at least one `Result` operation.
The number of outputs from *If* must be greater than zero and equal to the number of outputs from each subgraph.
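
The control-flow semantics described above can be illustrated with a plain-Python sketch. This is an illustration of the semantics only, not OpenVINO API code; the function and variable names are invented for the example:

```python
def if_op(cond, then_body, else_body, then_inputs, else_inputs):
    """Mimic the If operation: run exactly one of the two subgraphs.

    then_body/else_body are callables standing in for the subgraphs;
    each returns a tuple of outputs, and both must return the same
    number of outputs, matching the number of If outputs.
    """
    if cond:
        outputs = then_body(*then_inputs)
    else:
        outputs = else_body(*else_inputs)
    if len(outputs) == 0:
        raise ValueError("If must have at least one output")
    return outputs

# then_body adds x and z; else_body adds x and w,
# mirroring the structure of the IR example in this document
then_body = lambda x, z: (x + z,)
else_body = lambda x, w: (x + w,)

print(if_op(True, then_body, else_body, (2, 3), (2, 10)))   # -> (5,)
print(if_op(False, then_body, else_body, (2, 3), (2, 10)))  # -> (12,)
```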

**If attributes**:

* **Subgraphs**:

`then_body`/`else_body` are subgraphs that are executed depending on the `cond` value.
The subgraph is described operation by operation as a typical IR network.
The subgraph has inputs (`Parameter` operations) and outputs (`Result` operations).

* **Subgraph's inputs** - inputs to the subgraph which are associated with *If* inputs via *port_map*.
The subgraph can have any number of inputs (even zero).

* **Subgraph's outputs** - outputs from the subgraph which are associated with *If* outputs via *port_map*.
The subgraph must contain at least one output. Each *If* output is associated with one output from the subgraph.
Therefore, the number of `then_body` outputs is equal to the number of `else_body` outputs and to the number of
outputs from *If*.
The type of each subgraph output must match the type of the associated *If* output.


* **Port maps**:

*port_map* is a set of rules that map the input or output data tensors of the *If* operation onto the subgraph data tensors.
The `port_map` entries can be `input` and `output`. Each entry describes a corresponding mapping rule.
*If* has two *port_maps*: `then_port_map` for `then_body` and `else_port_map` for `else_body`.

* **Port map attributes**:

* *external_port_id*
* **Description**: *external_port_id* is a port ID of *If* operation.
* **Range of values**: IDs of the *If* inputs and outputs
* **Type**: `unsigned int`
* **Default value**: None
* **Required**: *yes*

* *internal_layer_id*

* **Description**: *internal_layer_id* is a `Parameter` or `Result` operation ID inside
the subgraph to map to.
* **Range of values**: IDs of the `Parameter` or `Result` operations in the subgraph
* **Type**: `unsigned int`
* **Default value**: None
* **Required**: *yes*

**If Inputs**


* **cond**: A scalar or 1D tensor with 1 element of `boolean` type, specifying which subgraph to execute.
`True` means `then_body` is executed; `False` means `else_body` is executed. *Required*.

* **Multiple other inputs**: Tensors of different types and shapes. *Optional*.

**If Outputs**

* **Multiple outputs**: Results of execution of one of the subgraphs. Tensors of any type and shape.


**Body Inputs**

* **Multiple inputs**: Tensors of different types and shapes. *Optional*.


**Body Outputs**

* **Multiple outputs**: Results of the subgraph execution. Tensors of any type and shape.


**Examples**

*Example 1: a typical If structure*
```xml
<layer id="6" name="if/cond" type="If" version="opset8">
<input>
<port id="0"/>
<port id="1">
<dim>2</dim>
<dim>4</dim>
</port>
<port id="2">
<dim>2</dim>
<dim>4</dim>
</port>
<port id="3">
<dim>2</dim>
<dim>4</dim>
</port>
</input>
<output>
<port id="4" names="if/cond/Identity:0,if/cond:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
<then_port_map>
<input external_port_id="1" internal_layer_id="0"/>
<input external_port_id="2" internal_layer_id="1"/>
<output external_port_id="0" internal_layer_id="3"/>
</then_port_map>
<else_port_map>
<input external_port_id="1" internal_layer_id="0"/>
<input external_port_id="3" internal_layer_id="1"/>
<output external_port_id="0" internal_layer_id="3"/>
</else_port_map>
<then_body>
<layers>
<layer id="0" name="add_x" type="Parameter" version="opset1">
<data element_type="f32" shape="2,4"/>
<output>
<port id="0" names="add_x:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="1" name="add_z" type="Parameter" version="opset1">
<data element_type="f32" shape="2,4"/>
<output>
<port id="0" names="add_z:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="2" name="Add" type="Add" version="opset1">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>2</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim>
<dim>4</dim>
</port>
</input>
<output>
<port id="2" names="Add:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="3" name="Identity/sink_port_0" type="Result" version="opset1">
<input>
<port id="0">
<dim>2</dim>
<dim>4</dim>
</port>
</input>
</layer>
</layers>
<edges>
<edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
<edge from-layer="1" from-port="0" to-layer="2" to-port="1"/>
<edge from-layer="2" from-port="2" to-layer="3" to-port="0"/>
</edges>
</then_body>
<else_body>
<layers>
<layer id="0" name="add_x" type="Parameter" version="opset1">
<data element_type="f32" shape="2,4"/>
<output>
<port id="0" names="add_x:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="1" name="add_w" type="Parameter" version="opset1">
<data element_type="f32" shape="2,4"/>
<output>
<port id="0" names="add_w:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="2" name="Add" type="Add" version="opset1">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>2</dim>
<dim>4</dim>
</port>
<port id="1">
<dim>2</dim>
<dim>4</dim>
</port>
</input>
<output>
<port id="2" names="Add:0" precision="FP32">
<dim>2</dim>
<dim>4</dim>
</port>
</output>
</layer>
<layer id="3" name="Identity/sink_port_0" type="Result" version="opset1">
<input>
<port id="0">
<dim>2</dim>
<dim>4</dim>
</port>
</input>
</layer>
</layers>
<edges>
<edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
<edge from-layer="1" from-port="0" to-layer="2" to-port="1"/>
<edge from-layer="2" from-port="2" to-layer="3" to-port="0"/>
</edges>
</else_body>
</layer>
```
17 changes: 11 additions & 6 deletions docs/ops/condition/Select_1.md
@@ -17,26 +17,31 @@

* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
-* *none* - no auto-broadcasting is allowed, all input shapes should match
-* *numpy* - numpy broadcasting rules, aligned with ONNX Broadcasting. Description is available in <a href="https://github.com/onnx/onnx/blob/master/docs/Broadcasting.md">ONNX docs</a>.
-* **Type**: string
+* *none* - no auto-broadcasting is allowed, all input shapes must match
+* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
+* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md)
+* **Type**: `string`
* **Default value**: "numpy"
* **Required**: *no*


**Inputs**:

-* **1**: `cond` tensor with selection mask of type `boolean`. The tensor can be 0D.
+* **1**: `cond` - tensor of type *T_COND* and arbitrary shape with selection mask. **Required**.

-* **2**: `then` the tensor with elements to take where the corresponding element in `cond` is true. Arbitrary type that should match type of `else` input tensor.
+* **2**: `then` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in `cond` is `true`. **Required**.

-* **3**: `else` the tensor with elements to take where the corresponding element in `cond` is false. Arbitrary type that should match type of `then` input tensor.
+* **3**: `else` - tensor of type *T* and arbitrary shape with elements to take where the corresponding element in `cond` is `false`. **Required**.


**Outputs**:

* **1**: blended output tensor composed of values from the input tensors `then` and `else`, selected based on `cond` and the broadcasting rules. It has the same element type as `then` and `else`.

**Types**

* *T_COND*: `boolean` type.
* *T*: any supported numeric type.
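
The selection and broadcasting behavior described above matches NumPy's `where` function, so the semantics can be sketched as follows. This is illustrative only; NumPy is used here as a stand-in for the runtime:

```python
import numpy as np

cond = np.array([[True, False], [False, True]])
then_t = np.array([[1, 2], [3, 4]], dtype=np.float32)
else_t = np.array([[10, 20], [30, 40]], dtype=np.float32)

# Elementwise selection: take from then_t where cond is True, else from else_t.
out = np.where(cond, then_t, else_t)
# -> [[ 1. 20.]
#     [30.  4.]]

# With auto_broadcast="numpy", cond may broadcast against the data inputs:
# a 0D condition selects one whole input tensor.
scalar_cond = np.array(True)
out_b = np.where(scalar_cond, then_t, else_t)  # equals then_t
```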

**Example**

1 change: 1 addition & 0 deletions docs/ops/opset8.md
@@ -79,6 +79,7 @@ declared in `namespace opset8`.
* [HSigmoid](activation/HSigmoid_5.md)
* [HSwish](activation/HSwish_4.md)
* [IDFT](signals/IDFT_7.md)
+* [If](condition/If_8.md)
* [Interpolate](image/Interpolate_4.md)
* [Less](comparison/Less_1.md)
* [LessEqual](comparison/LessEqual_1.md)
9 changes: 5 additions & 4 deletions inference-engine/src/inference_engine/CMakeLists.txt
@@ -120,11 +120,12 @@ ie_faster_build(${TARGET_NAME}_obj
)

target_compile_definitions(${TARGET_NAME}_obj PRIVATE IMPLEMENT_INFERENCE_ENGINE_API
-$<TARGET_PROPERTY:ngraph::ngraph,INTERFACE_COMPILE_DEFINITIONS>)
+$<TARGET_PROPERTY:ngraph::ngraph,INTERFACE_COMPILE_DEFINITIONS>
+$<TARGET_PROPERTY:ngraph::frontend_manager::static,INTERFACE_COMPILE_DEFINITIONS>)

target_include_directories(${TARGET_NAME}_obj SYSTEM PRIVATE $<TARGET_PROPERTY:ngraph::ngraph,INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:pugixml::static,INTERFACE_INCLUDE_DIRECTORIES>
-$<TARGET_PROPERTY:ngraph::frontend_manager,INTERFACE_INCLUDE_DIRECTORIES>
+$<TARGET_PROPERTY:ngraph::frontend_manager::static,INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:xbyak,INTERFACE_INCLUDE_DIRECTORIES>)

target_include_directories(${TARGET_NAME}_obj PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}"
@@ -161,7 +162,7 @@ if (TBBBIND_2_4_FOUND)
endif()

target_link_libraries(${TARGET_NAME} PRIVATE pugixml::static openvino::itt ${CMAKE_DL_LIBS} Threads::Threads
-ngraph ngraph::frontend_manager inference_engine_transformations)
+ngraph ngraph::frontend_manager::static inference_engine_transformations)

target_include_directories(${TARGET_NAME} INTERFACE
$<BUILD_INTERFACE:${PUBLIC_HEADERS_DIR}>
@@ -201,7 +202,7 @@ if(WIN32)
set_target_properties(${TARGET_NAME}_s PROPERTIES COMPILE_PDB_NAME ${TARGET_NAME}_s)
endif()

-target_link_libraries(${TARGET_NAME}_s PRIVATE openvino::itt ${CMAKE_DL_LIBS} ngraph ngraph::frontend_manager
+target_link_libraries(${TARGET_NAME}_s PRIVATE openvino::itt ${CMAKE_DL_LIBS} ngraph ngraph::frontend_manager::static
inference_engine_transformations pugixml::static)

target_compile_definitions(${TARGET_NAME}_s PUBLIC USE_STATIC_IE)
9 changes: 9 additions & 0 deletions inference-engine/src/mkldnn_plugin/mkldnn_plugin.cpp
@@ -26,6 +26,7 @@
#include "transformations/common_optimizations/convert_quantize_dequantize.hpp"
#include <transformations/common_optimizations/depth_to_space_fusion.hpp>
#include <transformations/common_optimizations/softmax_fusion.hpp>
+#include <transformations/common_optimizations/normalize_l2_fusion.hpp>
#include <transformations/op_conversions/convert_depth_to_space.hpp>
#include <transformations/op_conversions/convert_shuffle_channels3.hpp>
#include <transformations/op_conversions/convert_space_to_depth.hpp>
@@ -87,6 +88,7 @@

#include "nodes/mkldnn_mvn_node.h"
#include "nodes/mkldnn_fake_quantize_node.h"
+#include "nodes/mkldnn_normalize_node.h"
#include "ngraph_transformations/convert_to_cpu_specific_opset.hpp"

#if !defined(__arm__) && !defined(_M_ARM) && !defined(__aarch64__) && !defined(_M_ARM64)
@@ -277,6 +279,13 @@ static void Transformation(CNNNetwork& clonedNetwork, const Config& conf) {
return node->input_value(0).get_partial_shape().rank().get_length() > 5;
});

+auto normalizeL2FusionCallback = [](const_node_ptr &node) -> bool {
+    std::string errorMsg;
+    return !MKLDNNNormalizeL2Node::isSupportedOperation(node, errorMsg);
+};
+pass_config->set_callback<ngraph::pass::NormalizeL2FusionWithAdd>(normalizeL2FusionCallback);
+pass_config->set_callback<ngraph::pass::NormalizeL2FusionWithMax>(normalizeL2FusionCallback);

// List of enabled/disabled transformations
pass_config->disable<ngraph::pass::ConvertGELU>();
pass_config->disable<ngraph::pass::ConvertShuffleChannels3>();