Remove opset0 support and undesired passes from Interpreter backend #1469

Merged
Changes from 127 commits
Commits
133 commits
16e66e7
Move evaluate() interface from some OPs to Interpreter
Jun 17, 2020
8cdb2a2
commit
Aug 7, 2020
51e4d6e
Merge 'master' into evaluate_update
Aug 11, 2020
7a41458
Move shuffle channels reference to OP's evaluate
Aug 11, 2020
c0a43b6
Add some operations missed in evaluate_node
Aug 12, 2020
fa43065
Fix select references invocation from evaluate_node()
Aug 12, 2020
9cb9021
Activation refs (#2)
iefode Aug 17, 2020
8d225af
Merge upstream master
Aug 17, 2020
dc58f48
Rollback downgrade passes deletion
Aug 18, 2020
9a44221
Initial batch to space refs
Aug 21, 2020
4e0a3d4
Return opset1_upgrade
Aug 25, 2020
6c486cb
Merge upstream master
Aug 25, 2020
1e083c2
WIP: Add space to batch evaluate
Aug 26, 2020
af154ac
Fix space to batch
Aug 26, 2020
1776a53
add evaluates function in evaluates_map (#4)
antonzaycev96 Aug 26, 2020
19eabad
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Aug 26, 2020
41983a8
Add space to batch evaluate
Aug 26, 2020
dca1bba
Merge branch 'master' into update_evaluates
Aug 26, 2020
5a77ee2
Fix crop in batch to space references
Aug 27, 2020
a00a6b9
Remove vectors reallocation in evaluates for b2s and s2b
Aug 31, 2020
8e1a0e4
.
Sep 2, 2020
c57beab
Add SpaceToDepth evaluate
Sep 2, 2020
9e92e3b
Add depth to space evaluate
Sep 5, 2020
5794f2f
Remove code duplication depth to space evaluate
Sep 5, 2020
759f976
Fix some failed layer tests
Sep 5, 2020
d043403
merge upstream master
Sep 7, 2020
f80c325
Ngraph test (#3)
iefode Sep 7, 2020
3dab0d6
Enable cells refs in evaluate map
iefode Sep 8, 2020
1f42b8e
Fix some failed layer tests
Sep 10, 2020
7b06f04
Merge iefode/cells
Sep 10, 2020
f647cda
Merge upstream/master
Sep 10, 2020
167073f
Some more fixes
Sep 10, 2020
d910235
Fix code style (#6)
iefode Sep 10, 2020
0ea28db
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Sep 10, 2020
fdd3c16
Tests (#7)
iefode Sep 15, 2020
4039a80
Fix one hot ref call
Sep 15, 2020
4eca370
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Sep 15, 2020
223f386
.
Sep 15, 2020
f0a5399
Select (#8)
iefode Sep 16, 2020
66425a3
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Sep 16, 2020
a5c32c1
ReverseSeq (#9)
iefode Sep 22, 2020
4660ca4
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Sep 22, 2020
7adf1c7
Add fake quantize reference
Sep 23, 2020
1bda367
Merge upstream master
Sep 23, 2020
631aa2f
Align convolution layer tests instantiations with updated definition
Sep 24, 2020
611ffdc
Disabled some failed LPT tests
Sep 28, 2020
7e5c5af
Merge branch 'master' into update_evaluates
Sep 28, 2020
c21484a
Disabled some failed LPT tests
Sep 28, 2020
86afce5
Remove undesired changes
Sep 28, 2020
1adbb2f
Update unit-test manifests + some code cleanup
Sep 28, 2020
dfa711d
Fix code style (#10)
iefode Sep 29, 2020
9938151
Normalize L2 refs support (from PR #2327)
Sep 29, 2020
d57c03e
Fix code style
Sep 29, 2020
cb96362
Apply review comments. Part 1 (#11)
iefode Oct 1, 2020
f37c347
Remove redundant reshape from shuffle_channels evaluate
Oct 1, 2020
7cd5ec6
Decompose GroupConvolution
Oct 6, 2020
0cf873b
[IE Ngraph] Fix some operation inheritance (#13)
iefode Oct 6, 2020
b6f95b8
Fix code style
Oct 6, 2020
ed8614f
[IE NGraph] Remove decompose op (#14)
iefode Oct 7, 2020
f8fc914
.
Oct 7, 2020
ecf96e5
Merge branch 'master' into update_evaluates
Oct 7, 2020
5233a09
Fix losing control dependency in replace_node
Oct 7, 2020
1340aea
Fix losing control dependency in replace_node
Oct 7, 2020
fc429b7
Merge branch 'master' into update_evaluates
Oct 8, 2020
1a47819
Fix code style
Oct 8, 2020
8480066
Fix FQ references build on windows
Oct 8, 2020
3463c9b
Merge remote-tracking branch 'origin/update_evaluates' into update_ev…
Oct 8, 2020
32dbaeb
Fix code style
Oct 8, 2020
a1a58ab
Apply comments (#15)
iefode Oct 8, 2020
d9092a8
Ci (#16)
iefode Oct 12, 2020
b83218a
Android fix (#17)
iefode Oct 12, 2020
6f301a9
fix failures
Oct 12, 2020
1fa4644
Fix code style
Oct 12, 2020
da4b5e3
Add (#18)
iefode Oct 12, 2020
7f66509
Merge branch 'master' into update_evaluates
Oct 12, 2020
c19bf49
Add in opset1 upgrade pass
Oct 12, 2020
e9729f3
Add in opset1 upgrade pass
Oct 12, 2020
083dcdb
Remove v0::Add, Reverted removing v0::Multiply (#19)
iefode Oct 13, 2020
83fc4c8
Remove overloaded math operators from PyNgraph
Oct 13, 2020
55ff773
Remove overloaded math operators from PyNgraph
Oct 13, 2020
f9760d8
Merge branch 'master' into update_evaluates
Oct 13, 2020
92c2c96
Fix gna tests (#20)
iefode Oct 13, 2020
620804f
Merge branch 'update_evaluates' of https://github.com/mikhail-treskin…
Oct 13, 2020
a9c338a
Merge upstream master
Oct 13, 2020
aec84b1
LRN Reference (#21)
iefode Oct 14, 2020
5b7afbd
Disable failed tests on ia32
Oct 14, 2020
b3cca60
Merge upstream master
Oct 14, 2020
0c892de
Remove redundant broadcast from MVN ref
Oct 14, 2020
58fb557
Merge branch 'update_evaluates' of https://github.com/mikhail-treskin…
Oct 14, 2020
fb1dc34
Fix missed GatherND in opset_int_tbl + code style
Oct 14, 2020
9389d24
Remove one extra temporary buffer from MVN ref
Oct 15, 2020
8e50854
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Oct 15, 2020
a5153c2
Merge master (#22)
iefode Oct 15, 2020
b3d514f
Merge remote-tracking branch 'upstream/master' into update_evaluates
Oct 15, 2020
d288aec
Merge remote-tracking branch 'upstream/master' into update_evaluates
iefode Oct 16, 2020
410f865
RegionYolo
iefode Oct 16, 2020
9d89220
Apply review comments
Oct 21, 2020
ed4e574
Merge remote-tracking branch 'upstream/master' into update_evaluates
Oct 21, 2020
cdbcbd2
Merge remote-tracking branch 'upstream/master' into update_evaluates
Oct 21, 2020
6f9a462
Merge remote-tracking branch 'origin/update_evaluates' into update_ev…
Oct 21, 2020
9ef2324
Apply code style
Oct 21, 2020
ba88b24
Apply comments
Oct 21, 2020
4c2e815
Merge branch 'update_evaluates' of github.com:mikhail-treskin/openvin…
Oct 21, 2020
5bf1ccb
Apply code style
Oct 21, 2020
061d97c
Fix RegionYolo evaluate redefinition
Oct 21, 2020
0435170
Removed defines from evaluates map
Oct 22, 2020
617ce58
Apply code style
Oct 22, 2020
ed8e562
Merge remote-tracking branch 'upstream/master' into update_evaluates
Oct 23, 2020
f116518
Fix MVN ref
Oct 23, 2020
c26efd3
Merge upstream master
Oct 29, 2020
13c5f14
rename select reference argument
Oct 30, 2020
1375188
Merge remote-tracking branch 'upstream/master' into update_evaluates
iefode Nov 2, 2020
941597d
Fix code style
iefode Nov 2, 2020
f6d2cc8
Merge remote-tracking branch 'upstream/master' into update_evaluates
iefode Nov 5, 2020
ad4d67e
Merge remote-tracking branch 'upstream/master' into update_evaluates
iefode Nov 24, 2020
18d52ed
Fix Fake Quantize references calculation (#24)
Nov 24, 2020
187fe80
Fix MVN ref
iefode Nov 24, 2020
015fe64
Fix MVN & adding NMS
iefode Nov 25, 2020
a798b60
Merge remote-tracking branch 'origin/update_evaluates' into update_ev…
iefode Nov 25, 2020
eb94ae7
Merge remote-tracking branch 'upstream/master' into update_evaluates
iefode Nov 25, 2020
4259a5b
Fix TI
iefode Nov 25, 2020
b32b601
Temporary relax comparison threshold for FQ SLT
Nov 27, 2020
7f8fd31
Fix GPU LPT Tests
Nov 27, 2020
fca0174
Merge remote-tracking branch 'upstream/master' into update_evaluates
Nov 27, 2020
9b3a687
Add explicit rounding mode setting in FQ references
Nov 27, 2020
ed974ab
Merge remote-tracking branch 'upstream/master' into update_evaluates
Dec 1, 2020
70f603b
Apply code style
Dec 1, 2020
a7d897b
Rollback op_is test deletion
Dec 2, 2020
d93a4c7
Apply code style
Dec 2, 2020
de3987b
Merge remote-tracking branch 'upstream/master' into update_evaluates
Dec 2, 2020
282e54d
Merge remote-tracking branch 'upstream/master' into update_evaluates
Dec 3, 2020
9c23b70
Fix merge conflict resolving issues
Dec 3, 2020
0d84107
Apply code style
Dec 3, 2020
@@ -1062,7 +1062,7 @@ void convertFunctionToICNNNetwork(const std::shared_ptr<const ::ngraph::Function
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Softmax>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Split>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::VariadicSplit>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::Subtract>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Subtract>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::Tanh>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::TileIE>>(),
std::make_shared<Builder::NodeConverter<::ngraph::op::TensorIterator>>(),
@@ -534,7 +534,7 @@ CNNLayer::Ptr NodeConverter<ngraph::op::v1::Softmax>::createLayer(const std::sha
}

template <>
CNNLayer::Ptr NodeConverter<ngraph::op::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
CNNLayer::Ptr NodeConverter<ngraph::op::v1::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
LayerParams params = {layer->get_friendly_name(), "Eltwise",
details::convertPrecision(layer->get_output_element_type(0))};
auto res = std::make_shared<InferenceEngine::EltwiseLayer>(params);
@@ -36,10 +36,10 @@ TEST(algebraic_simplification, add_negative_tests) {
auto c = make_shared<op::Parameter>(type, shape);
auto abs_a = make_shared<op::Abs>(a);
auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
auto add_a_0 = a + iconst2;
auto add_a_0_0 = add_a_0 + iconst2;
auto add_b_0 = b + abs_a;
auto add_b_0_0 = add_b_0 + abs_a;
auto add_a_0 = std::make_shared<ngraph::op::v1::Add>(a, iconst2);
auto add_a_0_0 = std::make_shared<ngraph::op::v1::Add>(add_a_0, iconst2);
auto add_b_0 = std::make_shared<ngraph::op::v1::Add>(b, abs_a);
auto add_b_0_0 = std::make_shared<ngraph::op::v1::Add>(add_b_0, abs_a);

auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
ParameterVector{a, b, c});
@@ -63,10 +63,10 @@ TEST(algebraic_simplification, multiply_negative_tests) {
auto c = make_shared<op::Parameter>(type, shape);
auto abs_a = make_shared<op::Abs>(a);
auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
auto add_a_0 = a * iconst2;
auto add_a_0_0 = add_a_0 * iconst2;
auto add_b_0 = b * abs_a;
auto add_b_0_0 = add_b_0 * abs_a;
auto add_a_0 = make_shared<op::v1::Multiply>(a, iconst2);
auto add_a_0_0 = make_shared<op::v1::Multiply>(add_a_0, iconst2);
auto add_b_0 = make_shared<op::v1::Multiply>(b, abs_a);
auto add_b_0_0 = make_shared<op::v1::Multiply>(add_b_0, abs_a);

auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
ParameterVector{a, b, c});
@@ -228,7 +228,7 @@ TEST(algebraic_simplification, log_no_exp) {
auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto abs_a = make_shared<op::Abs>(a);
auto div = abs_a / b;
auto div = std::make_shared<op::v1::Divide>(abs_a, b);
auto log_div = make_shared<op::Log>(div);

auto neg_inner = make_shared<op::Negative>(log_div);
@@ -248,7 +248,7 @@ TEST(algebraic_simplification, log_no_divide) {
auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
auto exp_a = make_shared<op::Exp>(a);
auto mul = exp_a * b;
auto mul = make_shared<op::v1::Multiply>(exp_a, b);
auto log_mul = make_shared<op::Log>(mul);

auto neg_inner = make_shared<op::Negative>(log_mul);
@@ -48,7 +48,7 @@ class MemoryConv : public testing::WithParamInterface<LayerTestsUtils::basicPara
auto mem_i = make_shared<op::v0::Constant>(type, shape, 0);
auto mem_r = make_shared<op::v3::ReadValue>(mem_i, "id");

auto mul = make_shared<op::v0::Multiply>(mem_r, input);
auto mul = make_shared<op::v1::Multiply>(mem_r, input);
auto sig = make_shared<op::v0::Sigmoid>(mul);

auto fc1_w = make_shared<op::v0::Constant>(type, Shape{C, C}, 1);
@@ -21,15 +21,16 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {
};

const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
{
256ul,
{ 1ul, 3ul, 1ul, 1ul },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
},
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } }
// TODO: Issue 39810
// {
// 256ul,
// { 1ul, 3ul, 1ul, 1ul },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
// },
};

INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
@@ -26,7 +26,7 @@ const std::vector<ReshapeTransformationParam> params = {
{
ngraph::Shape{ 1, 3, 32 },
{ 1, 3, 4, 8 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 3D
{
@@ -24,27 +24,27 @@ namespace {

const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 3.0 },
{ 3, 3, 5}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 1.0 },
{ 3, 3, 3 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 3.0 },
{ 3, 4, 5, 6 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 3.0 },
{ 1, 32, 2}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
{ 0.0, 1.0 },
{ 46, 128, 2 }
}
@@ -22,14 +22,15 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {

const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
{ 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
{
256ul,
{ 1ul, 3ul, 1ul, 1ul },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
{ 0.f, 0.f, 0.f },
{ 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
},
// TODO: Issue 39810
// {
// 256ul,
// { 1ul, 3ul, 1ul, 1ul },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
// { 0.f, 0.f, 0.f },
// { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
// },
};

INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
@@ -26,19 +26,19 @@ const std::vector<ReshapeTransformationParam> params = {
{
ngraph::Shape{ 1, 3, 32 },
{ 1, 3, 4, 8 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 3D
{
ngraph::Shape{ 1, 3, 16, 16 },
{ 1, 3, 256 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
// 4D -> 2D
{
ngraph::Shape{ 1, 3, 4, 8 },
{ 1, -1 },
{ 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
{ 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
},
};

@@ -24,27 +24,27 @@ namespace {

const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 3.0 },
{ 3, 3, 5}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 1.0 },
{ 3, 3, 3 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 3.0 },
{ 3, 4, 5, 6 }
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 3.0 },
{ 1, 32, 2}
},
{
{ 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
{ 0.0, 1.0 },
{ 46, 128, 2 }
}
@@ -29,13 +29,13 @@ TEST_P(ExecGraphKeepAssignNode, KeepAssignNode) {
using std::make_shared;
using namespace ngraph::op;

// Some simple graph with Memory(Assign) node // in read //
auto input = make_shared<Parameter>(type, shape); // | \ / //
auto mem_i = make_shared<Constant>(type, shape, 0); // | mul //
auto mem_r = make_shared<ReadValue>(mem_i, "id"); // | / \ //
auto mul = make_shared<Multiply>(mem_r, input); // sum assign //
auto mem_w = make_shared<Assign>(mul, "id"); // | //
auto sum = make_shared<Add>(mul, input); // out //
// Some simple graph with Memory(Assign) node // in read //
auto input = make_shared<Parameter>(type, shape); // | \ / //
auto mem_i = make_shared<Constant>(type, shape, 0); // | mul //
auto mem_r = make_shared<ReadValue>(mem_i, "id"); // | / \ //
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input); // sum assign //
auto mem_w = make_shared<Assign>(mul, "id"); // | //
auto sum = make_shared<ngraph::op::v1::Add>(mul, input); // out //

mem_w->add_control_dependency(mem_r);
sum->add_control_dependency(mem_w);
@@ -198,7 +198,7 @@ void ActivationParamLayerTest::SetUp() {
constantsValue = activationDecl.second;
auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
auto params = ngraph::builder::makeParams(ngPrc, {shapes.first});
auto activationParams = createActivationParams(ngPrc);
auto activationParams = createActivationParams(ngPrc, shapes.second);

params[0]->set_friendly_name("Input");
params.insert(params.end(), activationParams.begin(), activationParams.end());
@@ -43,7 +43,6 @@ std::string BatchToSpaceLayerTest::getTestCaseName(const testing::TestParamInfo<
}

void BatchToSpaceLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
std::vector<size_t> inputShape;
std::vector<int64_t> blockShape, cropsBegin, cropsEnd;
InferenceEngine::Precision netPrecision;
@@ -26,8 +26,8 @@
/**
* redefine this seed to reproduce issue with given seed that can be read from gtest logs
*/
#define BASE_SEED USE_CLOCK_TIME
#define NGRAPH_SEED USE_CLOCK_TIME
#define BASE_SEED 123
#define NGRAPH_SEED 123

namespace LayerTestsDefinitions {

@@ -85,6 +85,9 @@ void FakeQuantizeLayerTest::SetUp() {
inputDataMax = inputArg[1];
inputDataResolution = inputArg[2];
}
if (fqDirectArg.size() != 0) {
threshold = (fqDirectArg[3] - fqDirectArg[2]) / levels;
}
auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
auto params = ngraph::builder::makeParams(ngPrc, {inputShape});
auto paramOuts = ngraph::helpers::convert2OutputVector(ngraph::helpers::castOps2Nodes<ngraph::op::Parameter>(params));
@@ -120,7 +120,7 @@ namespace LayerTestsDefinitions {
// Body
std::shared_ptr<ngraph::Node> Zo = body_params[0];
for (int i = 1; i < body_params.size(); ++i) {
Zo = body_params[i] + Zo;
Zo = std::make_shared<ngraph::op::v1::Add>(body_params[i], Zo);
}

// body_params.insert(body_params.begin(), current_iteration);
@@ -37,8 +37,6 @@ namespace LayerTestsDefinitions {
}

void SelectLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::CONSTANT_FOLDING);

std::vector<std::vector<size_t>> inputShapes(numOfInputs);
InferenceEngine::Precision inputPrecision;
ngraph::op::AutoBroadcastSpec broadcast;
@@ -43,7 +43,6 @@ std::string SpaceToBatchLayerTest::getTestCaseName(const testing::TestParamInfo<
}

void SpaceToBatchLayerTest::SetUp() {
SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
std::vector<size_t> inputShape;
std::vector<int64_t> blockShape, padsBegin, padsEnd;
InferenceEngine::Precision inputPrecision, netPrecision;
@@ -51,7 +51,7 @@ void CascadeConcat::SetUp() {
if (multioutput) {
auto const_mult = ngraph::builder::makeConstant(ngPrc, ngraph::Shape{1, input1[0][1]+input2[0][1]},
std::vector<float>{1.01f});
auto mult = std::make_shared<ngraph::op::v0::Multiply>(concat, const_mult);
auto mult = std::make_shared<ngraph::op::v1::Multiply>(concat, const_mult);
results = ngraph::ResultVector{std::make_shared<ngraph::opset1::Result>(concat2),
std::make_shared<ngraph::opset1::Result>(mult)};
} else {
@@ -52,7 +52,7 @@ void SoftsignTest::SetUp() {
auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
auto add = std::make_shared<ngraph::op::PowerIE>(abs, 1, 1, 1);
auto power = std::make_shared<ngraph::op::PowerIE>(add, -1, 1, 0);
auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);
ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
function = std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
}
@@ -75,10 +75,10 @@ std::shared_ptr<ngraph::Function> SoftsignTest::GenerateNgraphFriendlySoftSign()
auto params = ngraph::builder::makeParams(ngPrc, { inputShape });
auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
auto constant_0 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { 1 });
auto add = std::make_shared<ngraph::op::Add>(abs, constant_0);
auto add = std::make_shared<ngraph::op::v1::Add>(abs, constant_0);
auto constant_1 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { -1 });
auto power = std::make_shared<ngraph::op::Power>(add, constant_1);
auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
auto power = std::make_shared<ngraph::op::v1::Power>(add, constant_1);
auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);

ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
return std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
@@ -64,7 +64,7 @@ void SplitConcatMemory::SetUp() {
auto spl = std::make_shared<ngraph::op::v1::VariadicSplit>(cnc, axis_c, chunk_c);

auto one = std::make_shared<ngraph::op::Constant>(ngPrc, ngraph::Shape{}, 1);
auto plus = std::make_shared<ngraph::op::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
auto plus = std::make_shared<ngraph::op::v1::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
plus->set_friendly_name("plus_one");

auto mem_w = std::make_shared<ngraph::op::Assign>(spl->output(1), "id");
@@ -370,17 +370,6 @@ std::vector<std::vector<std::uint8_t>> LayerTestsCommon::CalculateRefs() {
// reference inference on device with other options and nGraph function has to be implemented here
break;
}
case INTERPRETER_TRANSFORMATIONS: {
auto cloned_function = ngraph::clone_function(*function);

// todo: add functionality to configure the necessary transformations for each test separately
ngraph::pass::Manager m;
m.register_pass<ngraph::pass::ConvertSpaceToBatch>();
m.register_pass<ngraph::pass::ConvertBatchToSpace>();
m.run_passes(cloned_function);
expectedOutputs = ngraph::helpers::interpreterFunction(cloned_function, referenceInputs, inType, convertType);
break;
}
}

return expectedOutputs;
@@ -126,7 +126,6 @@ typedef std::tuple<

enum RefMode {
INTERPRETER,
INTERPRETER_TRANSFORMATIONS,
CONSTANT_FOLDING,
IE
};
inference-engine/tests/unit/cpu/bf16_transformer_test.cpp (4 changes: 2 additions & 2 deletions)
@@ -68,7 +68,7 @@ TEST(BF16TransformerTest, KeepMemoryPrecision) {
auto mem_r = make_shared<ReadValue>(mem_i, "id");
mem_r->set_friendly_name("mem_r");

auto mul = make_shared<Multiply>(mem_r, input);
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
auto sig = make_shared<Sigmoid>(mul);

auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);
@@ -131,7 +131,7 @@ TEST(BF16TransformerTest, DISABLED_KeepMemoryPrecisionWithGEMM) {
auto mem_r = make_shared<ReadValue>(mem_i, "id");
mem_r->set_friendly_name("mem_r");

auto mul = make_shared<Multiply>(mem_r, input);
auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
auto sig = make_shared<Sigmoid>(mul);

auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);