Skip to content

Commit

Permalink
Remove opset0 support and undesired passes from Interpreter backend (#…
Browse files Browse the repository at this point in the history
…1469)

* Move evaluate() interface from some OPs to Interpreter

* commit

* Move shuffle channels reference to OP's evaluate

* Add some operations missed in evaluate_node

* Fix select references invocation from evaluate_node()

* Activation refs (#2)

* HardSigmoid

* Elu

* Selu

* Gelu

* Move to test runtime

* Rollback donwgrade passes delition

* Initial batch to space refs

* Return opset1_upgrade

* WIP: Add space to batch evaluate

* Fix space to batch

* add evaluates function in evaluates_map (#4)

* Add space to batch evaluate

* Fix crop in batch to space references

* Remove vectors reallocation in evaluates for b2s and s2b

* .

* Add SpaceToDepth evaluate

* Add depth to space evaluate

* Remove code duplication depth to space evaluate

* Fix some failed layer tests

* Ngraph test (#3)

* Remove some v0 ops & fix some tests

* Fixes BatchNorm

* Next

* dd

* s

* Add dot & replace slice refs

* d

* dkj

* Review fixes part 1

* Fixes. Part 2

* Fixes. Part 3

* Enable cells refs in evaluate map

* Fix some failed layer tests

* Some more fixes

* Fix code style (#6)

* Tests (#7)

* PriorBox

* Mod

* NormilizeL2

* Update prior_box.hpp

* Fix one hot ref call

* .

* Select (#8)

* Select

* Fix code style

* Fix select messages

* ReverseSeq (#9)

* ReverseSeq

* Select

* ExtractImagePatches, Seqence

* Fix Code Style

* remove extra

* Remove etra line@

* Add fake quantize reference

* Align convolution layer tests instantiations with updated definition

* Disabled some failed LPT tests

* Disabled some failed LPT tests

* Remove undesired changes

* Update unit-test manifests + some code cleanup

* Fix code style (#10)

* Normalize L2 refs support (from PR #2327)

* Fix code style

* Apply review comments. Part 1 (#11)

* Apply first part of review comments

* Update onnx_import.in.cpp

* Remove redundant reshape from shuffle_channels evaluate

* Decompose GroupConvolution

* [IE Ngraph] Fix some operation inheritance  (#13)

* [IE TESTS] Depth2Space

* Space2Depth

* ShuffleChannels

* Fix ode style

* Fix code style

* [IE NGraph] Remove decompose op (#14)

* .

* Fix loosing control dependency in replace_node

* Fix loosing control dependency in replace_node

* Fix code style

* Fix FQ references build on windows

* Fix code style

* Apply comments (#15)

* [Ie Ngraph] Remove using v1::Add

* [Ie Ngraph] Remove using v1::Mutliply

* [Ie Ngraph] Remove using v1::Subtract

* [Ie Ngraph] Remove using v1::Divide

* [Ie Ngraph] Remove using v1::Equal

* [Ie Ngraph] Remove using v1::Greater

* [Ie Ngraph] Remove using v1::Greater_eq

* [Ie Ngraph] Remove using v1::Less

* [Ie Ngraph] Remove using v1::LessEq

* [Ie Ngraph] Remove using operator+

* [Ie Ngraph] Remove using operator/

* [Ie Ngraph] Remove using operator*

* [Ie Ngraph] Remove using operator-

* Fix code style

* Ci (#16)

* Fix CentOS compilation

* Revert ngraph::op::vo::Multiply removing due to OpenCV

* Android fix (#17)

* fix failures

* Fix code style

* Add (#18)

* Android fix

* Add

* Add in opset1 upgrade pass

* Add in opset1 upgrade pass

* Remove v0::Add, Reverted removing v0::Multiply (#19)

* Remove overloaded math operators from PyNgraph

* Remove overloaded math operators from PyNgraph

* Fix gna tests (#20)

* Fix gna tests

* Squashed commit of the following:

commit 565b504
Author: Alexander Zhogov <[email protected]>
Date:   Tue Oct 13 13:27:34 2020 +0300

    GitHub CI: Add files_size.yml (#2570)

    * GitHub CI: Add files_size.yml

    * Update job name

commit ab0fb29
Author: Vladislav Vinogradov <[email protected]>
Date:   Tue Oct 13 11:37:30 2020 +0300

    [IE][BUILD] Fix C5208 warning under Windows (#2628)

    * C++ feature in C `typedef struct` code.
    * The warning can be promoted to error in dependent projects.

    C5208: unnamed class used in typedef name cannot declare members other than
    non-static data members, member enumerations, or member classes

commit 15a338e
Author: helmutg <[email protected]>
Date:   Mon Oct 12 22:24:24 2020 +0200

    add build option USE_SYSTEM_PUGIXML (#2502)

    It allows skipping inference-engine/thirdparty/pugixml and using the
    system copy instead.

    Thanks to @Osse for helping understand cmake scoping rules.

    Co-authored-by: Helmut Grohne <[email protected]>

commit 7ac8cd8
Author: Alexander Zhogov <[email protected]>
Date:   Mon Oct 12 19:23:00 2020 +0300

    Azure CI: Fix nGraph ONNX

commit 3a2e339
Author: Alexander Zhogov <[email protected]>
Date:   Mon Oct 12 19:20:28 2020 +0300

    Azure CI: Disable steps in nGraph ONNX

commit 5835974
Author: azhogov <[email protected]>
Date:   Mon Oct 12 18:46:14 2020 +0300

    Azure CI: Add linux_ngraph_onnx.yml

* LRN Reference (#21)

* Disable failed tests on ia32

* Remove redundant broadcast from MVN ref

* Fix missed GatherND in opset_int_tbl + code style

* Remove one extra temporary buffer from MVN ref

* Merge master (#22)

* Leaky relu transformation refactor (#2640)

* Refactored LeakyRelu transformation

* Added unit test for LeakyRelu transformation + removed duplicate test function valued_const

* nGraph implementation of NMS-5 (without `evaluate()`) (#2651)

* Written nGraph NMS-5 without evaluate().

* Used NGRAPH_RTTI_DECLARATION.

* setupvars.sh: Updated setting pyenv error to warning. (#2663)

* Fix itt build (#2662)

* Loop-5 operation specification (#2291)

The Loop-5 operation specification

* Time tests improvements (#2642)

* Remove extra functions from run_timetest.py

* Add `log.debug` of raw and aggregated statistics in run_timetest.py

* Implement storing of models locally for test_timetest.py

* Fixed CVS-35316 (#2072)

* Extend MO for operation GatherND (#2540)

* Extend MO for operation GatherND

* Update documentation

* Rename GatherNd.py to gathernd.py

Signed-off-by: Roman Kazantsev <[email protected]>

* Add hsigmoid op to ngraph (#2647)

* [IE CLDNN] Fixes for GatherTree and ReverseSequence  (#2660)

* ReorgYolo reference implementation (#2384)

* Align ReorgYolo to the spec (vector strides -> int stride)

* ReorgYolo ref impl

* ReorgYolo evaluate method

* ReorgYolo tests

* Tests update

* Style apply

* Add some coments

* Code refactor

* Comment update

* Style apply

* Build fix, mark evaluate as override

* Revert "Align ReorgYolo to the spec (vector strides -> int stride)"

* Use int_executable instead of evaluate

* Use char* instead of templates

* Code refactor

* Comment update

* Code review comment

* Add constructor aligned with spec

* Update shape validation

* Update attributes tests

* Add type_prop tests

* Update backend tests

* Add single layer tests

* Update the spec

* Remove wrong transformation test

* Add ReorgYolo to evaluates_map

* code style

Co-authored-by: Evgeny Lazarev <[email protected]>
Co-authored-by: Vladimir Gavrilov <[email protected]>
Co-authored-by: Artyom Anokhov <[email protected]>
Co-authored-by: Andrey Somsikov <[email protected]>
Co-authored-by: Vitaliy Urusovskij <[email protected]>
Co-authored-by: Anastasiya Ageeva <[email protected]>
Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: iliya mironov <[email protected]>
Co-authored-by: Vladimir Paramuzov <[email protected]>
Co-authored-by: Katarzyna Mitrus <[email protected]>

* RegionYolo

* Apply review comments

* Merge remote-tracking branch 'upstream/master' into update_evaluates

# Conflicts:
#	ngraph/core/src/op/mvn.cpp
#	ngraph/test/backend/fused_op.in.cpp
#	ngraph/test/runtime/ie/unit_test.manifest
#	ngraph/test/runtime/interpreter/int_executable.hpp
#	ngraph/test/runtime/interpreter/opset_int_tbl.hpp
#	ngraph/test/runtime/interpreter/unit_test.manifest
#	ngraph/test/runtime/opset0_tbl.hpp

* Apply code style

* Apply comments

* Apply code style

* Fix RegionYolo evaluate redefinition

* Removed defines from evaluates map

* Apply code style

* Fix MVN ref

* rename select reference argument

* Fix code style

* Fix Fake Quantize references calculation (#24)

* Fix MVN ref

* Fix MVN & adding NMS

* Fix TI

* Temporary relax comparison threshold for FQ SLT

* Fix GPU LPT Tests

* Add explicit rounding mode seetting in FQ references

* Apply code style

* Rollback op_is test deletion

* Apply code style

* Fix merge conflict resolving issues

* Apply code style

Co-authored-by: Irina Efode <[email protected]>
Co-authored-by: Anton Zaytsev <[email protected]>
Co-authored-by: Evgeny Lazarev <[email protected]>
Co-authored-by: Vladimir Gavrilov <[email protected]>
Co-authored-by: Artyom Anokhov <[email protected]>
Co-authored-by: Andrey Somsikov <[email protected]>
Co-authored-by: Vitaliy Urusovskij <[email protected]>
Co-authored-by: Anastasiya Ageeva <[email protected]>
Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: iliya mironov <[email protected]>
Co-authored-by: Vladimir Paramuzov <[email protected]>
Co-authored-by: Katarzyna Mitrus <[email protected]>
  • Loading branch information
13 people authored Dec 3, 2020
1 parent b3124a5 commit 6467c64
Show file tree
Hide file tree
Showing 170 changed files with 3,796 additions and 5,222 deletions.
Original file line number Diff line number Diff line change
Expand Up @@ -1062,7 +1062,7 @@ void convertFunctionToICNNNetwork(const std::shared_ptr<const ::ngraph::Function
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Softmax>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Split>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::VariadicSplit>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::Subtract>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Subtract>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::Tanh>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::TileIE>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::TensorIterator>>(),
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -537,7 +537,7 @@ CNNLayer::Ptr NodeConverter<ngraph::op::v1::Softmax>::createLayer(const std::sha
}

template <>
CNNLayer::Ptr NodeConverter<ngraph::op::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
CNNLayer::Ptr NodeConverter<ngraph::op::v1::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
LayerParams params = {layer->get_friendly_name(), "Eltwise",
details::convertPrecision(layer->get_output_element_type(0))};
auto res = std::make_shared<InferenceEngine::EltwiseLayer>(params);
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -36,10 +36,10 @@ TEST(algebraic_simplification, add_negative_tests) {
auto c = make_shared<op::Parameter>(type, shape);
auto abs_a = make_shared<op::Abs>(a);
auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
auto add_a_0 = a + iconst2;
auto add_a_0_0 = add_a_0 + iconst2;
auto add_b_0 = b + abs_a;
auto add_b_0_0 = add_b_0 + abs_a;
auto add_a_0 = std::make_shared<ngraph::op::v1::Add>(a, iconst2);
auto add_a_0_0 = std::make_shared<ngraph::op::v1::Add>(add_a_0, iconst2);
auto add_b_0 = std::make_shared<ngraph::op::v1::Add>(b, abs_a);
auto add_b_0_0 = std::make_shared<ngraph::op::v1::Add>(add_b_0, abs_a);

auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
ParameterVector{a, b, c});
Expand All @@ -63,10 +63,10 @@ TEST(algebraic_simplification, multiply_negative_tests) {
auto c = make_shared<op::Parameter>(type, shape);
auto abs_a = make_shared<op::Abs>(a);
auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
auto add_a_0 = a * iconst2;
auto add_a_0_0 = add_a_0 * iconst2;
auto add_b_0 = b * abs_a;
auto add_b_0_0 = add_b_0 * abs_a;
auto add_a_0 = make_shared<op::v1::Multiply>(a, iconst2);
auto add_a_0_0 = make_shared<op::v1::Multiply>(add_a_0, iconst2);
auto add_b_0 = make_shared<op::v1::Multiply>(b, abs_a);
auto add_b_0_0 = make_shared<op::v1::Multiply>(add_b_0, abs_a);

auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
ParameterVector{a, b, c});
Expand Down Expand Up @@ -228,7 +228,7 @@ TEST(algebraic_simplification, log_no_exp) {
auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto abs_a = make_shared<op::Abs>(a);
auto div = abs_a / b;
auto div = std::make_shared<op::v1::Divide>(abs_a, b);
auto log_div = make_shared<op::Log>(div);

auto neg_inner = make_shared<op::Negative>(log_div);
Expand All @@ -248,7 +248,7 @@ TEST(algebraic_simplification, log_no_divide) {
auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto exp_a = make_shared<op::Exp>(a);
auto mul = exp_a * b;
auto mul = make_shared<op::v1::Multiply>(exp_a, b);
auto log_mul = make_shared<op::Log>(mul);

auto neg_inner = make_shared<op::Negative>(log_mul);
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -48,7 +48,7 @@ class MemoryConv : public testing::WithParamInterface<LayerTestsUtils::basicPara
auto mem_i = make_shared<op::v0::Constant>(type, shape, 0);
auto mem_r = make_shared<op::v3::ReadValue>(mem_i, "id");

auto mul = make_shared<op::v0::Multiply>(mem_r, input);
auto mul = make_shared<op::v1::Multiply>(mem_r, input);
auto sig = make_shared<op::v0::Sigmoid>(mul);

auto fc1_w = make_shared<op::v0::Constant>(type, Shape{C, C}, 1);
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -21,15 +21,16 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {
};

const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
{
256ul,
{ 1ul, 3ul, 1ul, 1ul },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
},
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } }
// TODO: Issue 39810
// {
// 256ul,
// { 1ul, 3ul, 1ul, 1ul },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
// },
};

INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -26,7 +26,7 @@ const std::vector<ReshapeTransformationParam> params = {
{
ngraph::Shape{ 1, 3, 32 },
{ 1, 3, 4, 8 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 3D
{
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -24,27 +24,27 @@ namespace {

const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 3.0 },
{ 3, 3, 5}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 1.0 },
{ 3, 3, 3 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 3.0 },
{ 3, 4, 5, 6 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 3.0 },
{ 1, 32, 2}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 1.0 },
{ 46, 128, 2 }
}
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -22,14 +22,15 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {

const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
{
256ul,
{ 1ul, 3ul, 1ul, 1ul },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
},
// TODO: Issue 39810
// {
// 256ul,
// { 1ul, 3ul, 1ul, 1ul },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
// },
};

INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -26,19 +26,19 @@ const std::vector<ReshapeTransformationParam> params = {
{
ngraph::Shape{ 1, 3, 32 },
{ 1, 3, 4, 8 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 3D
{
ngraph::Shape{ 1, 3, 16, 16 },
{ 1, 3, 256 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 2D
{
ngraph::Shape{ 1, 3, 4, 8 },
{ 1, -1 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
};

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -24,27 +24,27 @@ namespace {

const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 3.0 },
{ 3, 3, 5}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 1.0 },
{ 3, 3, 3 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 3.0 },
{ 3, 4, 5, 6 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 3.0 },
{ 1, 32, 2}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 1.0 },
{ 46, 128, 2 }
}
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -29,13 +29,13 @@ TEST_P(ExecGraphKeepAssignNode, KeepAssignNode) {
using std::make_shared;
using namespace ngraph::op;

// Some simple graph with Memory(Assign) node // in read //
auto input = make_shared<Parameter>(type, shape); // | \ / //
auto mem_i = make_shared<Constant>(type, shape, 0); // | mul //
auto mem_r = make_shared<ReadValue>(mem_i, "id"); // | / \ //
auto mul = make_shared<Multiply>(mem_r, input); // sum assign //
auto mem_w = make_shared<Assign>(mul, "id"); // | //
auto sum = make_shared<Add>(mul, input); // out //
// Some simple graph with Memory(Assign) node // in read //
auto input = make_shared<Parameter>(type, shape); // | \ / //
auto mem_i = make_shared<Constant>(type, shape, 0); // | mul //
auto mem_r = make_shared<ReadValue>(mem_i, "id"); // | / \ //
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input); // sum assign //
auto mem_w = make_shared<Assign>(mul, "id"); // | //
auto sum = make_shared<ngraph::op::v1::Add>(mul, input); // out //

mem_w->add_control_dependency(mem_r);
sum->add_control_dependency(mem_w);
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -198,7 +198,7 @@ void ActivationParamLayerTest::SetUp() {
constantsValue = activationDecl.second;
auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
auto params = ngraph::builder::makeParams(ngPrc, {shapes.first});
auto activationParams = createActivationParams(ngPrc);
auto activationParams = createActivationParams(ngPrc, shapes.second);

params[0]->set_friendly_name("Input");
params.insert(params.end(), activationParams.begin(), activationParams.end());
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -43,7 +43,6 @@ std::string BatchToSpaceLayerTest::getTestCaseName(const testing::TestParamInfo<
}

void BatchToSpaceLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
std::vector<size_t> inputShape;
std::vector<int64_t> blockShape, cropsBegin, cropsEnd;
InferenceEngine::Precision netPrecision;
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -26,8 +26,8 @@
/**
* redefine this seed to reproduce issue with given seed that can be read from gtest logs
*/
#define BASE_SEED USE_CLOCK_TIME
#define NGRAPH_SEED USE_CLOCK_TIME
#define BASE_SEED 123
#define NGRAPH_SEED 123

namespace LayerTestsDefinitions {

Expand Down Expand Up @@ -85,6 +85,9 @@ void FakeQuantizeLayerTest::SetUp() {
inputDataMax = inputArg[1];
inputDataResolution = inputArg[2];
}
if (fqDirectArg.size() != 0) {
threshold = (fqDirectArg[3] - fqDirectArg[2]) / levels;
}
auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
auto params = ngraph::builder::makeParams(ngPrc, {inputShape});
auto paramOuts = ngraph::helpers::convert2OutputVector(ngraph::helpers::castOps2Nodes<ngraph::op::Parameter>(params));
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -120,7 +120,7 @@ namespace LayerTestsDefinitions {
// Body
std::shared_ptr<ngraph::Node> Zo = body_params[0];
for (int i = 1; i < body_params.size(); ++i) {
Zo = body_params[i] + Zo;
Zo = std::make_shared<ngraph::op::v1::Add>(body_params[i], Zo);
}

// body_params.insert(body_params.begin(), current_iteration);
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -37,8 +37,6 @@ namespace LayerTestsDefinitions {
}

void SelectLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::CONSTANT_FOLDING);

std::vector<std::vector<size_t>> inputShapes(numOfInputs);
InferenceEngine::Precision inputPrecision;
ngraph::op::AutoBroadcastSpec broadcast;
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -43,7 +43,6 @@ std::string SpaceToBatchLayerTest::getTestCaseName(const testing::TestParamInfo<
}

void SpaceToBatchLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
std::vector<size_t> inputShape;
std::vector<int64_t> blockShape, padsBegin, padsEnd;
InferenceEngine::Precision inputPrecision, netPrecision;
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -51,7 +51,7 @@ void CascadeConcat::SetUp() {
if (multioutput) {
auto const_mult = ngraph::builder::makeConstant(ngPrc, ngraph::Shape{1, input1[0][1]+input2[0][1]},
std::vector<float>{1.01f});
auto mult = std::make_shared<ngraph::op::v0::Multiply>(concat, const_mult);
auto mult = std::make_shared<ngraph::op::v1::Multiply>(concat, const_mult);
results = ngraph::ResultVector{std::make_shared<ngraph::opset1::Result>(concat2),
std::make_shared<ngraph::opset1::Result>(mult)};
} else {
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -52,7 +52,7 @@ void SoftsignTest::SetUp() {
auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
auto add = std::make_shared<ngraph::op::PowerIE>(abs, 1, 1, 1);
auto power = std::make_shared<ngraph::op::PowerIE>(add, -1, 1, 0);
auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);
ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
function = std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
}
Expand All @@ -75,10 +75,10 @@ std::shared_ptr<ngraph::Function> SoftsignTest::GenerateNgraphFriendlySoftSign()
auto params = ngraph::builder::makeParams(ngPrc, { inputShape });
auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
auto constant_0 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { 1 });
auto add = std::make_shared<ngraph::op::Add>(abs, constant_0);
auto add = std::make_shared<ngraph::op::v1::Add>(abs, constant_0);
auto constant_1 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { -1 });
auto power = std::make_shared<ngraph::op::Power>(add, constant_1);
auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
auto power = std::make_shared<ngraph::op::v1::Power>(add, constant_1);
auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);

ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
return std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -64,7 +64,7 @@ void SplitConcatMemory::SetUp() {
auto spl = std::make_shared<ngraph::op::v1::VariadicSplit>(cnc, axis_c, chunk_c);

auto one = std::make_shared<ngraph::op::Constant>(ngPrc, ngraph::Shape{}, 1);
auto plus = std::make_shared<ngraph::op::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
auto plus = std::make_shared<ngraph::op::v1::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
plus->set_friendly_name("plus_one");

auto mem_w = std::make_shared<ngraph::op::Assign>(spl->output(1), "id");
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -370,17 +370,6 @@ std::vector<std::vector<std::uint8_t>> LayerTestsCommon::CalculateRefs() {
// reference inference on device with other options and nGraph function has to be implemented here
break;
}
case INTERPRETER_TRANSFORMATIONS: {
auto cloned_function = ngraph::clone_function(*function);

// todo: add functionality to configure the necessary transformations for each test separately
ngraph::pass::Manager m;
m.register_pass<ngraph::pass::ConvertSpaceToBatch>();
m.register_pass<ngraph::pass::ConvertBatchToSpace>();
m.run_passes(cloned_function);
expectedOutputs = ngraph::helpers::interpreterFunction(cloned_function, referenceInputs, inType, convertType);
break;
}
}

return expectedOutputs;
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -126,7 +126,6 @@ typedef std::tuple<

enum RefMode {
INTERPRETER,
INTERPRETER_TRANSFORMATIONS,
CONSTANT_FOLDING,
IE
};
Expand Down
4 changes: 2 additions & 2 deletions inference-engine/tests/unit/cpu/bf16_transformer_test.cpp
Original file line number Diff line number Diff line change
Expand Up @@ -68,7 +68,7 @@ TEST(BF16TransformerTest, KeepMemoryPrecision) {
auto mem_r = make_shared<ReadValue>(mem_i, "id");
mem_r->set_friendly_name("mem_r");

auto mul = make_shared<Multiply>(mem_r, input);
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
auto sig = make_shared<Sigmoid>(mul);

auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);
Expand Down Expand Up @@ -131,7 +131,7 @@ TEST(BF16TransformerTest, DISABLED_KeepMemoryPrecisionWithGEMM) {
auto mem_r = make_shared<ReadValue>(mem_i, "id");
mem_r->set_friendly_name("mem_r");

auto mul = make_shared<Multiply>(mem_r, input);
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
auto sig = make_shared<Sigmoid>(mul);

auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);
Expand Down
Loading

0 comments on commit 6467c64

Please sign in to comment.