Merge branch 'master' into div_21464
rkazants authored Jan 7, 2024
2 parents 89eb158 + 9c373d3 commit f2780f6
Showing 86 changed files with 419 additions and 3,729 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/windows_conditional_compilation.yml
@@ -336,7 +336,7 @@ jobs:
CPU_Functional_Tests:
name: CPU functional tests
needs: [Build, Smart_CI]
-    timeout-minutes: 70
+    timeout-minutes: 85
defaults:
run:
shell: pwsh
@@ -390,7 +390,7 @@ jobs:
run: |
set path=%path%;${{ env.INSTALL_TEST_DIR }}\tbb\bin;${{ env.INSTALL_TEST_DIR }}\tbb
python3 ${{ env.PARALLEL_TEST_SCRIPT }} -e ${{ env.INSTALL_TEST_DIR }}\ov_cpu_func_tests.exe -w ${{ env.INSTALL_TEST_DIR }} -s suite -rf 0 -- --gtest_print_time=1 --gtest_filter=*smoke*
-      timeout-minutes: 45
+      timeout-minutes: 60

- name: Upload Test Results
uses: actions/upload-artifact@v3

This file was deleted.

@@ -148,5 +148,4 @@ offering.
| OpenVINO™ Integration with TensorFlow is no longer supported, as OpenVINO now features
  native TensorFlow support, significantly enhancing user experience with no need for
  explicit model conversion.
- | :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`
@@ -91,7 +91,7 @@ To understand the differences between Inference Engine API and API 2.0, see the
- **New behavior** implemented in 2022.1 assumes full model alignment with the framework:

- Model Conversion API preserves input element types and order of dimensions (layouts), and stores tensor names from the original models.
-  - OpenVINO Runtime 2022.1 reads models in any format (OpenVINO IR v10, OpenVINO IR v11, TensorFlow (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), ONNX, PaddlePaddle, etc.).
+  - OpenVINO Runtime 2022.1 reads models in any format (OpenVINO IR v10, OpenVINO IR v11, TensorFlow, ONNX, PaddlePaddle, etc.).
- API 2.0 uses tensor names for addressing, which is the standard approach among the compatible model frameworks.
- API 2.0 can also address input and output tensors by the index. Some model formats like ONNX are sensitive to the input and output order, which is preserved by OpenVINO 2022.1.
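To make the addressing rules above concrete, here is a minimal C++ sketch; `model.xml` and the tensor name `input` are illustrative placeholders rather than names from this change:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Any supported format can be read here: OpenVINO IR, ONNX, PaddlePaddle, TensorFlow, ...
    auto model = core.read_model("model.xml");

    // API 2.0 addresses tensors by the names stored from the original model ...
    auto input_by_name = model->input("input");
    // ... or by index, whose order is preserved for order-sensitive formats such as ONNX.
    auto input_by_index = model->input(0);

    auto compiled = core.compile_model(model, "CPU");
    auto request = compiled.create_infer_request();
    ov::Tensor tensor = request.get_input_tensor(0);  // same addressing at inference time

    (void)input_by_name;
    (void)input_by_index;
    (void)tensor;
    return 0;
}
```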

@@ -134,7 +134,6 @@ Here are code examples of how to use these methods with different model formats:
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
- For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.

.. tab-item:: C++
:sync: cpp
@@ -14,7 +14,7 @@
This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a TensorFlow Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` article.


- .. note:: TensorFlow models are supported via :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
+ .. note:: TensorFlow models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.
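As a sketch of the note above (assuming a build with the TensorFlow frontend available; `saved_model_dir` is a placeholder path), reading a TensorFlow model without an explicit conversion step looks like this:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // A TensorFlow SavedModel directory or a frozen-graph .pb file can be
    // passed to read_model() directly - no intermediate IR is required.
    auto model = core.read_model("saved_model_dir");
    auto compiled = core.compile_model(model, "CPU");
    auto request = compiled.create_infer_request();
    request.infer();
    return 0;
}
```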

The conversion instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.

2 changes: 0 additions & 2 deletions docs/articles_en/openvino_workflow/model_preparation.rst
@@ -50,8 +50,6 @@ and `Torchvision models <https://pytorch.org/hub/>`__. Now you have two options:

For PyTorch models, `Python API <#convert-a-model-with-python-convert-model>`__ is the only conversion option.

- TensorFlow may present additional considerations :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.

Model States
##############################################

@@ -120,7 +120,6 @@ Here are code examples of how to use these methods with different model formats:
For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
- For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.

.. tab-item:: C++
:sync: cpp
@@ -12,7 +12,6 @@ Integrate OpenVINO™ with Your Application
openvino_docs_OV_UG_Infer_request
openvino_docs_OV_UG_Python_API_inference
openvino_docs_OV_UG_Python_API_exclusives
-    openvino_docs_MO_DG_TensorFlow_Frontend


.. meta::
25 changes: 22 additions & 3 deletions src/frontends/tensorflow/src/op/tensor_array_operations.cpp
@@ -64,8 +64,10 @@ OutputVector translate_tensor_array_v3_op(const NodeContext& node) {
auto dtype = node.get_attribute<element::Type>("dtype");
auto size = node.get_input(0);
auto element_shape = node.get_attribute<PartialShape>("element_shape");
+    bool dynamic_size = node.get_attribute<bool>("dynamic_size", false);
+    int64_t element_rank = element_shape.rank().is_static() ? element_shape.rank().get_length() : -1;

-    if (element_shape.rank().is_static()) {
+    if (element_rank != -1 && !dynamic_size) {
auto node_name = node.get_name();
auto new_output1 =
create_initial_tensor_array_constant(element_shape.rank().get_length(), dtype, size, node.get_name());
@@ -76,8 +78,8 @@ OutputVector translate_tensor_array_v3_op(const NodeContext& node) {
return OutputVector{new_output1, new_output2};
}

-    // dynamic case when it is unable retrieve element rank from the attribute
-    auto tensor_array_v3 = make_shared<TensorArrayV3>(size, dtype, node.get_decoder());
+    // dynamic case when the element rank cannot be retrieved from the attribute or the container size is dynamic
+    auto tensor_array_v3 = make_shared<TensorArrayV3>(size, dtype, element_rank, dynamic_size, node.get_decoder());
set_node_name(node.get_name(), tensor_array_v3);

return tensor_array_v3->outputs();
@@ -290,6 +292,7 @@ OutputVector translate_tensor_array_write_v3_op(const NodeContext& node) {
    // if it has just been initialized, its shape is equal to [tensor_array_size, 1, ..., 1]
// otherwise, it is equal to [tensor_array_size, <element shape>]
auto tensor_array = node.get_input(3);
+    bool dynamic_size = true;

// reshape index to have it of [1] shape
auto new_index_shape = make_shared<v0::Constant>(element::i32, Shape{1}, 1);
@@ -302,6 +305,7 @@ OutputVector translate_tensor_array_write_v3_op(const NodeContext& node) {
auto tensor_array_v3 = as_type_ptr<TensorArrayV3>(enter->input_value(0).get_node_shared_ptr());
int64_t tensor_element_rank = value.get_partial_shape().rank().get_length();
tensor_array_v3->set_element_rank(tensor_element_rank);
+            dynamic_size = tensor_array_v3->get_dynamic_size();
}
}

@@ -317,6 +321,21 @@ OutputVector translate_tensor_array_write_v3_op(const NodeContext& node) {
auto new_tensor_array_shape = make_shared<v0::Concat>(OutputVector{tensor_array_size, element_shape}, 0);
tensor_array = make_shared<v3::Broadcast>(tensor_array, new_tensor_array_shape);

+    if (dynamic_size) {
+        // the container size needs to be adjusted
+        auto const_one = make_shared<v0::Constant>(element::i32, Shape{1}, 1);
+        auto index_plus_one = make_shared<v1::Add>(index, const_one);
+        auto max_size = make_shared<v1::Maximum>(tensor_array_size, index_plus_one);
+
+        auto dummy_size = make_shared<v1::Subtract>(max_size, tensor_array_size);
+        auto dummy_tensor_shape = make_shared<v0::Concat>(OutputVector{dummy_size, element_shape}, 0);
+
+        // create a dummy tensor and concatenate it
+        auto zero_element = create_same_type_const_scalar<int32_t>(value, 0);
+        auto dummy_tensor = make_shared<v3::Broadcast>(zero_element, dummy_tensor_shape);
+        tensor_array = make_shared<v0::Concat>(OutputVector{tensor_array, dummy_tensor}, 0);
+    }

// update the resulted tensor using ScatterUpdate
value = make_shared<v0::Unsqueeze>(value, zero_const);
auto scatter_update = make_shared<v3::ScatterUpdate>(tensor_array, index, value, zero_const);
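In scalar terms, the added subgraph implements grow-then-write semantics. A minimal C++ sketch of the same logic on a flat container (one float per slot; the real code expresses each step as Maximum/Subtract/Broadcast/Concat/ScatterUpdate nodes on tensors):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Write `value` into slot `index` of a dynamically sized tensor array,
// growing the container with zero-filled slots when the index is out of range.
void tensor_array_write(std::vector<float>& tensor_array, std::size_t index, float value) {
    std::size_t max_size = std::max(tensor_array.size(), index + 1);  // Maximum(size, index + 1)
    tensor_array.resize(max_size, 0.0f);  // Broadcast a zero "dummy tensor" + Concat
    tensor_array[index] = value;          // ScatterUpdate
}
```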
25 changes: 21 additions & 4 deletions src/frontends/tensorflow/src/tf_utils.cpp
@@ -433,7 +433,8 @@ bool propagate_conditional_flow(const OutputVector& ov_inputs,
shared_ptr<v5::Loop> create_loop_for_tf_while(const std::string& while_node_name,
const shared_ptr<Model>& body_model,
const shared_ptr<Model>& cond_model,
-                                              const OutputVector& ov_inputs) {
+                                              const OutputVector& ov_inputs,
+                                              const shared_ptr<Model>& prior_cond_model) {
size_t input_size = ov_inputs.size();
// inject condition body graph prior to Loop node
// to check condition before to start iterations
@@ -449,7 +450,20 @@ shared_ptr<v5::Loop> create_loop_for_tf_while(const std::string& while_node_name,
}
cond_model->validate_nodes_and_infer_types();

-    auto cond_prior = cond_model->clone();
+    if (prior_cond_model) {
+        auto prior_cond_params = prior_cond_model->get_parameters();
+        FRONT_END_GENERAL_CHECK(
+            input_size == prior_cond_params.size(),
+            "[TensorFlow Frontend] internal error: mismatched number of inputs to While and number of "
+            "inputs in the conditional graph");
+        for (size_t input_ind = 0; input_ind < input_size; ++input_ind) {
+            prior_cond_params[input_ind]->set_element_type(ov_inputs[input_ind].get_element_type());
+            prior_cond_params[input_ind]->set_partial_shape(ov_inputs[input_ind].get_partial_shape());
+        }
+        prior_cond_model->validate_nodes_and_infer_types();
+    }
+    auto cond_prior = prior_cond_model ? prior_cond_model : cond_model->clone();

ov::OutputVector ov_outputs;
inject_body_model(cond_prior, while_node_name + "/cond", ov_inputs, ov_outputs);
FRONT_END_GENERAL_CHECK(
@@ -475,14 +489,17 @@ shared_ptr<v5::Loop> create_loop_for_tf_while(const std::string& while_node_name,
for (size_t param_ind = 0; param_ind < body_results.size(); ++param_ind) {
cond_params[param_ind]->output(0).replace(body_results[param_ind]->input_value(0));
}
+    auto body_condition_output_idx = body_results.size();
+    // body_results may contain fewer nodes than body_params, which means a back edge does not exist for every body_param
+    for (size_t param_ind = body_condition_output_idx; param_ind < input_size; ++param_ind) {
+        cond_params[param_ind]->output(0).replace(body_params[param_ind]->output(0));
+    }

// update body model with the new result that corresponds to execution condition
FRONT_END_GENERAL_CHECK(
cond_results.size() == 1 && cond_results[0],
"[TensorFlow Frontend] Internal error or inconsistent model: condition body must contain one Result node.");
-    auto body_condition_output_idx = body_results.size();
body_model->add_results(cond_results);

// type setting for body graph parameters is needed for TensorList support since DT_VARIANT type is present
for (size_t input_ind = 0; input_ind < input_size; ++input_ind) {
body_params[input_ind]->set_element_type(ov_inputs[input_ind].get_element_type());
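The purpose of the new `prior_cond_model` parameter can be reduced to plain control flow (an illustrative sketch with a single integer state; the real code wires condition and body graphs rather than callables): TensorFlow's While evaluates its condition before the first iteration, so a standalone copy of the condition graph is executed once in front of the Loop, while the condition injected into the Loop drives the remaining iterations. When the condition graph contains back edges (NextIteration->Merge), the two copies differ, which is why they are now built separately.

```cpp
#include <functional>
#include <iostream>

// Illustrative lowering of TF While: `prior_cond` stands for the injected
// pre-check copy of the condition graph, `cond` for the condition evaluated
// inside the Loop.
int lower_tf_while(int state,
                   const std::function<bool(int)>& prior_cond,
                   const std::function<bool(int)>& cond,
                   const std::function<int(int)>& body) {
    bool keep_going = prior_cond(state);  // evaluated once, before the Loop starts
    while (keep_going) {
        state = body(state);
        keep_going = cond(state);  // evaluated on every iteration inside the Loop
    }
    return state;
}

int main() {
    // Toy While: increment the state until it reaches 3.
    int result = lower_tf_while(
        0,
        [](int s) { return s < 3; },
        [](int s) { return s < 3; },
        [](int s) { return s + 1; });
    std::cout << result << "\n";  // prints 3
    return 0;
}
```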
10 changes: 6 additions & 4 deletions src/frontends/tensorflow/src/tf_utils.hpp
@@ -105,10 +105,12 @@ bool propagate_conditional_flow(const ov::OutputVector& ov_inputs,
void copy_conditional_flow_marker(const CfMarkerType& copy_from, CfMarkerType& copy_to);

// create Loop operation corresponding to TensorFlow While operation
-std::shared_ptr<ov::op::v5::Loop> create_loop_for_tf_while(const std::string& while_node_name,
-                                                           const std::shared_ptr<ov::Model>& body_model,
-                                                           const std::shared_ptr<ov::Model>& cond_model,
-                                                           const ov::OutputVector& ov_inputs);
+std::shared_ptr<ov::op::v5::Loop> create_loop_for_tf_while(
+    const std::string& while_node_name,
+    const std::shared_ptr<ov::Model>& body_model,
+    const std::shared_ptr<ov::Model>& cond_model,
+    const ov::OutputVector& ov_inputs,
+    const std::shared_ptr<ov::Model>& prior_cond_model = nullptr);

// inject a graph by given inputs and return outputs of the injected graph
void inject_body_model(std::shared_ptr<ov::Model> ov_model_to_inject,
76 changes: 75 additions & 1 deletion src/frontends/tensorflow/src/translate_session.cpp
@@ -279,8 +279,82 @@ void fuse_loop_cond(std::shared_ptr<LoopCond>& loop_cond,
auto cond_model = std::make_shared<ov::Model>(ov_cond_output, cond_params);
auto body_model = std::make_shared<ov::Model>(ov_body_outputs, body_params);

-    auto loop_node = create_loop_for_tf_while(node_name, body_model, cond_model, ov_inputs);
+    // check if the condition model has a NextIteration->Merge construction
+    // if so, we need to create a separate condition for the initial check prior to While execution
+    // and a separate one for the Loop inside
+    auto prior_cond_model = cond_model->clone();
+    for (const auto& op : prior_cond_model->get_ordered_ops()) {
+        auto merge = ov::as_type_ptr<Merge>(op);
+        if (!merge) {
+            continue;
+        }
+
+        auto next_iteration = ov::as_type_ptr<NextIteration>(merge->input_value(0).get_node_shared_ptr());
+        if (!next_iteration) {
+            next_iteration = ov::as_type_ptr<NextIteration>(merge->input_value(1).get_node_shared_ptr());
+        }
+
+        auto param_node = ov::as_type_ptr<ov::opset8::Parameter>(merge->input_value(0).get_node_shared_ptr());
+        if (!param_node) {
+            param_node = ov::as_type_ptr<ov::opset8::Parameter>(merge->input_value(1).get_node_shared_ptr());
+        }
+
+        if (!next_iteration || !param_node) {
+            continue;
+        }
+        merge->output(0).replace(param_node->output(0));
+    }
+
+    // create the condition model to inject inside the Loop operation
+    auto cond_model_params = cond_model->get_parameters();
+    for (const auto& op : cond_model->get_ordered_ops()) {
+        auto merge = ov::as_type_ptr<Merge>(op);
+        if (!merge) {
+            continue;
+        }
+
+        auto next_iteration = ov::as_type_ptr<NextIteration>(merge->input_value(0).get_node_shared_ptr());
+        if (!next_iteration) {
+            next_iteration = ov::as_type_ptr<NextIteration>(merge->input_value(1).get_node_shared_ptr());
+        }
+
+        auto param_node = ov::as_type_ptr<ov::opset8::Parameter>(merge->input_value(0).get_node_shared_ptr());
+        if (!param_node) {
+            param_node = ov::as_type_ptr<ov::opset8::Parameter>(merge->input_value(1).get_node_shared_ptr());
+        }
+
+        if (!next_iteration || !param_node) {
+            continue;
+        }
+
+        std::string producer_name;
+        size_t producer_output_port_idx;
+        next_iteration->get_producer(producer_name, producer_output_port_idx);
+        FRONT_END_GENERAL_CHECK(
+            ov_tensors_map.count(producer_name) > 0,
+            "[TensorFlow Frontend] internal error: NextIteration producer is not found in the tensor map");
+        auto producer_outputs = ov_tensors_map.at(producer_name);
+        FRONT_END_GENERAL_CHECK(
+            producer_output_port_idx < producer_outputs.size(),
+            "[TensorFlow Frontend] internal error: NextIteration producer has an insufficient number of outputs");
+        auto next_iteration_output = producer_outputs[producer_output_port_idx].port;
+
+        // create an auxiliary body model having separate instances of ov::Node to avoid cycles in the graph
+        // during Loop node construction
+        auto aux_cond_model =
+            std::make_shared<ov::Model>(ov::OutputVector{next_iteration_output}, body_params)->clone();
+        auto aux_cond_params = aux_cond_model->get_parameters();
+        auto aux_cond_results = aux_cond_model->get_results();
+        auto params_size = aux_cond_params.size();
+        // insert the auxiliary body model into the condition model
+        for (size_t param_ind = 0; param_ind < params_size; ++param_ind) {
+            auto cond_param = cond_model_params[param_ind];
+            aux_cond_params[param_ind]->output(0).replace(cond_param->output(0));
+        }
+        merge->output(0).replace(aux_cond_results[0]->input_value(0));
+    }
+
+    auto loop_node = create_loop_for_tf_while(node_name, body_model, cond_model, ov_inputs, prior_cond_model);
auto loop_model = std::make_shared<ov::Model>(loop_node->outputs());

size_t loop_node_output_size = loop_node->get_output_size();
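The Merge/NextIteration rewiring above can be illustrated on a toy graph (simplified stand-ins for ov::Node; the real code calls `ov::Output::replace`). Before the first iteration, no value has flowed around the back edge yet, so in the prior condition model each Merge fed by a Parameter and a NextIteration collapses to the Parameter:

```cpp
#include <memory>
#include <string>
#include <vector>

// Minimal stand-in for a graph node with input edges.
struct Node {
    std::string kind;
    std::vector<std::shared_ptr<Node>> inputs;
};

// Re-point every consumer of `from` to `to`, the analogue of output(0).replace().
void replace_uses(std::vector<std::shared_ptr<Node>>& graph,
                  const std::shared_ptr<Node>& from,
                  const std::shared_ptr<Node>& to) {
    for (auto& node : graph)
        for (auto& input : node->inputs)
            if (input == from)
                input = to;
}

int main() {
    auto param = std::make_shared<Node>(Node{"Parameter", {}});
    auto next_iter = std::make_shared<Node>(Node{"NextIteration", {}});
    auto merge = std::make_shared<Node>(Node{"Merge", {param, next_iter}});
    auto less = std::make_shared<Node>(Node{"Less", {merge}});  // the loop condition
    std::vector<std::shared_ptr<Node>> graph{param, next_iter, merge, less};

    // Collapse the back edge for the pre-Loop condition check:
    // the condition now reads the Parameter directly instead of the Merge.
    replace_uses(graph, merge, param);
    return 0;
}
```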
20 changes: 20 additions & 0 deletions src/frontends/tensorflow_common/include/helper_ops/block_lstm.hpp
@@ -131,6 +131,26 @@ class BlockLSTM : public InternalOperation {
return m_hidden_size;
}

+    std::shared_ptr<Node> clone_with_new_inputs(const OutputVector& inputs) const override {
+        FRONT_END_OP_CONVERSION_CHECK(inputs.size() == 9,
+                                      "[TensorFlow Frontend] internal error: BlockLSTM expects 9 inputs");
+        auto block_lstm_node = std::make_shared<BlockLSTM>(inputs[0],
+                                                           inputs[1],
+                                                           inputs[2],
+                                                           inputs[3],
+                                                           inputs[4],
+                                                           inputs[5],
+                                                           inputs[6],
+                                                           inputs[7],
+                                                           inputs[8],
+                                                           m_forget_bias,
+                                                           m_cell_clip,
+                                                           m_use_peephole,
+                                                           m_decoder);
+        block_lstm_node->set_attrs(get_attrs());
+        return block_lstm_node;
+    }

private:
ov::Dimension m_hidden_size;
float m_forget_bias;
8 changes: 8 additions & 0 deletions src/frontends/tensorflow_common/include/helper_ops/enter.hpp
@@ -38,6 +38,14 @@ class Enter : public InternalOperation {
return m_frame_name;
}

+    std::shared_ptr<Node> clone_with_new_inputs(const OutputVector& inputs) const override {
+        FRONT_END_OP_CONVERSION_CHECK(inputs.size() == 1,
+                                      "[TensorFlow Frontend] internal error: Enter expects one input");
+        auto enter_node = std::make_shared<Enter>(inputs[0], m_frame_name, m_decoder);
+        enter_node->set_attrs(get_attrs());
+        return enter_node;
+    }

private:
std::string m_frame_name;
};
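Both `clone_with_new_inputs` overrides follow the standard OpenVINO cloning contract: `ov::Model::clone()` recreates every node through this method, so internal helper ops must implement it for paths such as `cond_model->clone()` above to work. A minimal sketch of the pattern for a hypothetical single-input op (the class name `PassThrough` is illustrative):

```cpp
#include <openvino/op/op.hpp>

// Hypothetical op demonstrating the clone_with_new_inputs contract.
class PassThrough : public ov::op::Op {
public:
    OPENVINO_OP("PassThrough");

    PassThrough() = default;
    explicit PassThrough(const ov::Output<ov::Node>& arg) : Op({arg}) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& inputs) const override {
        OPENVINO_ASSERT(inputs.size() == 1, "PassThrough expects one input");
        // Re-create the node on the new inputs; attributes would be carried over here.
        return std::make_shared<PassThrough>(inputs[0]);
    }
};
```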