Revise logical and #6731

Merged 28 commits on Aug 3, 2021
Changes from 9 commits
Commits (28)
1bdb547
update docs
pszmel Jul 21, 2021
c476dda
add host tensors validation
pszmel Jul 21, 2021
ad7b50e
create type_prop tests
pszmel Jul 21, 2021
3644952
create serialization single layer test
pszmel Jul 21, 2021
d1527c6
create visitor test
pszmel Jul 21, 2021
dec1cec
create op_reference test
pszmel Jul 21, 2021
bdf9cf7
add logicalAnd to constants.py
pszmel Jul 21, 2021
2aac50e
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Jul 21, 2021
b424abe
create additional op_reference tests
pszmel Jul 21, 2021
884ca76
add check for number of visited attributes in visitor test
pszmel Jul 26, 2021
fe17236
update auto_broadcast description
pszmel Jul 26, 2021
c489ca9
remove backend test
pszmel Jul 26, 2021
9ded5af
update LogicalNot params name
pszmel Jul 26, 2021
67205ef
remove backend test from CMakeList
pszmel Jul 26, 2021
9af44bf
create util function for type_prop tests
pszmel Jul 26, 2021
c7f84cb
update op_reference tests
pszmel Jul 26, 2021
e1700c1
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Jul 26, 2021
565a32f
remove typo in docs
pszmel Jul 26, 2021
0b462ec
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Jul 26, 2021
256ce97
remove unsupported types from evaluate
pszmel Jul 27, 2021
16eb55a
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Jul 27, 2021
ee057ef
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Jul 27, 2021
37ff06e
fix bug in op_reference test
pszmel Jul 27, 2021
0b9d034
refactor visitor test
pszmel Aug 2, 2021
f495387
update math formula in the spec
pszmel Aug 2, 2021
79cfec5
update has_evaluate types
pszmel Aug 2, 2021
901fa15
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Aug 2, 2021
eab1ef7
Merge remote-tracking branch 'upstream/master' into revise_logicalAnd
pszmel Aug 2, 2021
22 changes: 11 additions & 11 deletions docs/ops/logical/LogicalAnd_1.md
@@ -6,6 +6,14 @@

**Short description**: *LogicalAnd* performs an element-wise logical AND operation on two given tensors, applying multi-directional broadcast rules.

**Detailed description**: Before the logical operation is applied, input tensors *a* and *b* are broadcast if their shapes differ and the `auto_broadcast` attribute is not `none`. Broadcasting is performed according to the `auto_broadcast` value.

After broadcasting, *LogicalAnd* computes the following on the input tensors *a* and *b*:

\f[
o_{i} = a_{i} \wedge b_{i}
\f]
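
For a concrete sense of the formula, here is a minimal, self-contained C++ sketch of the element-wise step (illustrative only, and assuming broadcasting has already produced equal-shaped inputs; the values mirror the {2, 2} boolean case in the op_reference tests below):

#include <array>
#include <cassert>
#include <cstddef>

int main() {
    // With equal shapes after broadcasting, the op reduces to element-wise AND.
    std::array<bool, 4> a{true, false, true, false};
    std::array<bool, 4> b{false, true, true, false};
    std::array<bool, 4> o{};
    for (std::size_t i = 0; i < a.size(); ++i) {
        o[i] = a[i] && b[i];  // o_i = a_i AND b_i
    }
    assert((o == std::array<bool, 4>{false, false, true, false}));
    return 0;
}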

**Attributes**:

* *auto_broadcast*
@@ -20,25 +28,17 @@

**Inputs**

* **1**: A tensor of type *T*. **Required.**
* **2**: A tensor of type *T*. **Required.**
* **1**: A tensor of type *T* and arbitrary shape. **Required.**
* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

* **1**: The result of element-wise logical AND operation. A tensor of type boolean.
* **1**: The result of element-wise *LogicalAnd* operation. A tensor of type boolean.

**Types**

* *T*: boolean type.

**Examples**

101 changes: 101 additions & 0 deletions docs/template_plugin/tests/functional/op_reference/logical_and.cpp
@@ -0,0 +1,101 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <gtest/gtest.h>

#include <ie_core.hpp>
#include <ie_ngraph_utils.hpp>
#include <ngraph/ngraph.hpp>
#include <shared_test_classes/base/layer_test_utils.hpp>
#include <tuple>

#include "base_reference_test.hpp"

using namespace ngraph;
using namespace InferenceEngine;

namespace {
struct LogicalAndParams {
template <class IT, class OT>
LogicalAndParams(const ngraph::PartialShape& input_shape1, const ngraph::PartialShape& input_shape2, const ngraph::element::Type& iType,
const ngraph::element::Type& oType, const std::vector<IT>& iValues1, const std::vector<IT>& iValues2, const std::vector<OT>& oValues)
: pshape1(input_shape1), pshape2(input_shape2), inType(iType), outType(oType), inputData1(CreateBlob(iType, iValues1)),
inputData2(CreateBlob(iType, iValues2)), refData(CreateBlob(oType, oValues)) {}
ngraph::PartialShape pshape1;
ngraph::PartialShape pshape2;
ngraph::element::Type inType;
ngraph::element::Type outType;
InferenceEngine::Blob::Ptr inputData1;
InferenceEngine::Blob::Ptr inputData2;
InferenceEngine::Blob::Ptr refData;
};

class ReferenceLogicalAndLayerTest : public testing::TestWithParam<LogicalAndParams>, public CommonReferenceTest {
public:
void SetUp() override {
auto params = GetParam();
function = CreateFunction(params.pshape1, params.pshape2, params.inType);
inputData = {params.inputData1, params.inputData2};
refOutData = {params.refData};
}
static std::string getTestCaseName(const testing::TestParamInfo<LogicalAndParams>& obj) {
auto param = obj.param;
std::ostringstream result;
result << "input_shape1=" << param.pshape1 << "_";
result << "input_shape2=" << param.pshape2 << "_";
result << "iType=" << param.inType << "_";
result << "oType=" << param.outType;
return result.str();
}

private:
static std::shared_ptr<Function> CreateFunction(const PartialShape& input_shape1, const PartialShape& input_shape2, const element::Type& input_type) {
const auto in = std::make_shared<op::Parameter>(input_type, input_shape1);
const auto in2 = std::make_shared<op::Parameter>(input_type, input_shape2);
const auto logical_and = std::make_shared<op::v1::LogicalAnd>(in, in2);
return std::make_shared<Function>(NodeVector {logical_and}, ParameterVector {in, in2});
}
};

TEST_P(ReferenceLogicalAndLayerTest, CompareWithHardcodedRefs) {
Exec();
}

template <element::Type_t IN_ET>
std::vector<LogicalAndParams> generateLogicalAndParams(const ngraph::element::Type& type) {
using T = typename element_type_traits<IN_ET>::value_type;
std::vector<LogicalAndParams> logicalAndParams {
// Test cases below cover 1D, 2D, and 4D input shapes.
LogicalAndParams(ngraph::PartialShape {2, 2}, ngraph::PartialShape {2, 2}, type, ngraph::element::boolean,
std::vector<T> {true, false, true, false},
std::vector<T> {false, true, true, false},
std::vector<char> {false, false, true, false}),
LogicalAndParams(ngraph::PartialShape {1}, ngraph::PartialShape {1}, type, ngraph::element::boolean,
std::vector<T> {true},
std::vector<T> {true},
std::vector<char> {true}),

LogicalAndParams(ngraph::PartialShape {2, 1, 2, 1}, ngraph::PartialShape {1, 1, 2, 1}, type, ngraph::element::boolean,
std::vector<T> {true, false, true, false},
std::vector<T> {true, false},
std::vector<char> {true, false, true, false}),
LogicalAndParams(ngraph::PartialShape {3, 4}, ngraph::PartialShape {3, 4}, type, ngraph::element::boolean,
std::vector<T> {true, true, true, true, true, false, true, false, false, true, true, true},
std::vector<T> {true, true, true, true, true, false, true, false, false, true, true, false},
std::vector<char> {true, true, true, true, true, false, true, false, false, true, true, false})};
return logicalAndParams;
}

std::vector<LogicalAndParams> generateLogicalAndCombinedParams() {
const std::vector<std::vector<LogicalAndParams>> logicalAndTypeParams {generateLogicalAndParams<element::Type_t::u8>(ngraph::element::boolean)};
std::vector<LogicalAndParams> combinedParams;
std::for_each(logicalAndTypeParams.begin(), logicalAndTypeParams.end(), [&](std::vector<LogicalAndParams> params) {
combinedParams.insert(combinedParams.end(), params.begin(), params.end());
});
return combinedParams;
}

INSTANTIATE_TEST_SUITE_P(smoke_LogicalAnd_With_Hardcoded_Refs, ReferenceLogicalAndLayerTest, ::testing::ValuesIn(generateLogicalAndCombinedParams()),
ReferenceLogicalAndLayerTest::getTestCaseName);
} // namespace
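
As an aside on the {2, 1, 2, 1} and {1, 1, 2, 1} case above, the following hypothetical helper (an illustrative sketch, not the actual ngraph utility) shows how NUMPY-style multi-directional broadcasting resolves the output shape:

#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Right-align the shapes, pad the shorter one with leading 1s, then take the
// per-axis maximum; mismatched axes are only legal when one side is 1.
std::vector<std::size_t> broadcast_shape(std::vector<std::size_t> a, std::vector<std::size_t> b) {
    while (a.size() < b.size()) a.insert(a.begin(), 1);
    while (b.size() < a.size()) b.insert(b.begin(), 1);
    std::vector<std::size_t> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] != b[i] && a[i] != 1 && b[i] != 1)
            throw std::invalid_argument("shapes are not broadcastable");
        out[i] = std::max(a[i], b[i]);
    }
    return out;
}

// broadcast_shape({2, 1, 2, 1}, {1, 1, 2, 1}) yields {2, 1, 2, 1}, matching the
// reference data for that test case.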
@@ -0,0 +1,84 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>
#include "shared_test_classes/single_layer/logical.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;
using namespace LayerTestsDefinitions::LogicalParams;

namespace {
TEST_P(LogicalLayerTest, Serialize) {
Serialize();
}

std::map<std::vector<size_t>, std::vector<std::vector<size_t >>> inputShapes = {
{{1}, {{1}, {17}, {1, 1}, {2, 18}, {1, 1, 2}, {2, 2, 3}, {1, 1, 2, 3}}},
{{5}, {{1}, {1, 1}, {2, 5}, {1, 1, 1}, {2, 2, 5}}},
{{2, 200}, {{1}, {200}, {1, 200}, {2, 200}, {2, 2, 200}}},
{{1, 3, 20}, {{20}, {2, 1, 1}}},
{{2, 17, 3, 4}, {{4}, {1, 3, 4}, {2, 1, 3, 4}}},
{{2, 1, 1, 3, 1}, {{1}, {1, 3, 4}, {2, 1, 3, 4}, {1, 1, 1, 1, 1}}},
};

std::map<std::vector<size_t>, std::vector<std::vector<size_t >>> inputShapesNot = {
{{1}, {}},
{{5}, {}},
{{2, 200}, {}},
{{1, 3, 20}, {}},
{{2, 17, 3, 4}, {}},
{{2, 1, 1, 3, 1}, {}},
};

std::vector<InferenceEngine::Precision> inputsPrecisions = {
InferenceEngine::Precision::BOOL,
};

std::vector<ngraph::helpers::LogicalTypes> logicalOpTypes = {
ngraph::helpers::LogicalTypes::LOGICAL_AND,
ngraph::helpers::LogicalTypes::LOGICAL_OR,
ngraph::helpers::LogicalTypes::LOGICAL_XOR,
};

std::vector<ngraph::helpers::InputLayerType> secondInputTypes = {
ngraph::helpers::InputLayerType::CONSTANT,
ngraph::helpers::InputLayerType::PARAMETER,
};

std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
};

std::map<std::string, std::string> additional_config = {};

const auto LogicalTestParams = ::testing::Combine(
::testing::ValuesIn(LogicalLayerTest::combineShapes(inputShapes)),
::testing::ValuesIn(logicalOpTypes),
::testing::ValuesIn(secondInputTypes),
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputsPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(CommonTestUtils::DEVICE_CPU),
::testing::Values(additional_config));

const auto LogicalTestParamsNot = ::testing::Combine(
::testing::ValuesIn(LogicalLayerTest::combineShapes(inputShapesNot)),
::testing::Values(ngraph::helpers::LogicalTypes::LOGICAL_NOT),
::testing::Values(ngraph::helpers::InputLayerType::CONSTANT),
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputsPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(CommonTestUtils::DEVICE_CPU),
::testing::Values(additional_config));

INSTANTIATE_TEST_SUITE_P(smoke_CompareWithRefs, LogicalLayerTest, LogicalTestParams, LogicalLayerTest::getTestCaseName);

INSTANTIATE_TEST_SUITE_P(smoke_CompareWithRefsNot, LogicalLayerTest, LogicalTestParamsNot, LogicalLayerTest::getTestCaseName);

} // namespace
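
The shape maps above pair one first-input shape with a list of second-input shapes. As a rough sketch of what a combineShapes-style helper could do with such a map (assumed behavior for illustration only; the real LogicalLayerTest::combineShapes may differ, for instance in how it expands the empty lists used for LOGICAL_NOT):

#include <cstddef>
#include <map>
#include <utility>
#include <vector>

using ShapeVec = std::vector<std::size_t>;

// Flatten {first_shape -> [second_shapes]} into explicit (first, second) pairs.
std::vector<std::pair<ShapeVec, ShapeVec>> combine_shapes_sketch(
        const std::map<ShapeVec, std::vector<ShapeVec>>& shapes) {
    std::vector<std::pair<ShapeVec, ShapeVec>> result;
    for (const auto& entry : shapes) {
        if (entry.second.empty()) {
            // Single-input ops (e.g. LogicalNot) carry no second shape.
            result.emplace_back(entry.first, ShapeVec{});
            continue;
        }
        for (const auto& second : entry.second) {
            result.emplace_back(entry.first, second);
        }
    }
    return result;
}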
@@ -54,6 +54,7 @@
'LRN-1',
'LSTMCell-4',
'LSTMSequence-5',
'LogicalAnd-1',
'LogSoftmax-5',
'Loop-5',
'MVN-6',
3 changes: 3 additions & 0 deletions ngraph/core/src/op/and.cpp
@@ -7,6 +7,8 @@
#include "ngraph/runtime/host_tensor.hpp"
#include "ngraph/runtime/reference/and.hpp"

#include "ngraph/validation_util.hpp"

using namespace std;
using namespace ngraph;

@@ -77,6 +79,7 @@ bool op::v1::LogicalAnd::evaluate(const HostTensorVector& outputs,
const HostTensorVector& inputs) const
{
NGRAPH_OP_SCOPE(v1_LogicalAnd_evaluate);
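// Reject malformed calls up front: exactly one output and two input host tensors are expected.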
NGRAPH_CHECK(validate_host_tensor_vector(outputs, 1) && validate_host_tensor_vector(inputs, 2));
return logand::evaluate_logand(inputs[0], inputs[1], outputs[0], get_autob());
}

2 changes: 2 additions & 0 deletions ngraph/test/CMakeLists.txt
@@ -151,6 +151,7 @@ set(SRC
type_prop/hswish.cpp
type_prop/idft.cpp
type_prop/interpolate.cpp
type_prop/logical_and.cpp
type_prop/lrn.cpp
type_prop/lstm_cell.cpp
type_prop/lstm_sequence.cpp
@@ -260,6 +261,7 @@ set(SRC
visitors/op/group_conv.cpp
visitors/op/interpolate.cpp
visitors/op/log.cpp
visitors/op/logical_and.cpp
visitors/op/logical_xor.cpp
visitors/op/lrn.cpp
visitors/op/lstm_cell.cpp
127 changes: 127 additions & 0 deletions ngraph/test/type_prop/logical_and.cpp
@@ -0,0 +1,127 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "gtest/gtest.h"
#include "ngraph/ngraph.hpp"
#include "util/type_prop.hpp"

using namespace std;
using namespace ngraph;

TEST(type_prop, logical_and_incorrect_type_f32)
{
auto input1 = make_shared<op::Parameter>(element::f32, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::f32, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
// If construction succeeds, fail explicitly so the test cannot pass vacuously.
FAIL() << "LogicalAnd node was created with invalid f32 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type f32"));
}

}

TEST(type_prop, logical_and_incorrect_type_f64)
{
auto input1 = make_shared<op::Parameter>(element::f64, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::f64, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with invalid f64 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type f64"));
}
}

TEST(type_prop, logical_and_incorrect_type_i32)
{
auto input1 = make_shared<op::Parameter>(element::i32, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::i32, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with invalid i32 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type i32"));
}
}

TEST(type_prop, logical_and_incorrect_type_i64)
{
auto input1 = make_shared<op::Parameter>(element::i64, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::i64, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with invalid i64 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type i64"));
}
}

TEST(type_prop, logical_and_incorrect_type_u32)
{
auto input1 = make_shared<op::Parameter>(element::u32, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::u32, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with invalid u32 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type u32"));
}
}

TEST(type_prop, logical_and_incorrect_type_u64)
{
auto input1 = make_shared<op::Parameter>(element::u64, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::u64, Shape{1, 3, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with invalid u64 input element type";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Operands for logical operators must have boolean element type but have element type u64"));
}
}

TEST(type_prop, logical_and_incorrect_shape)
{
auto input1 = make_shared<op::Parameter>(element::boolean, Shape{1, 3, 6});
auto input2 = make_shared<op::Parameter>(element::boolean, Shape{1, 2, 6});
try
{
auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);
FAIL() << "LogicalAnd node was created with inconsistent input shapes";
}
catch (const NodeValidationFailure& error)
{
EXPECT_HAS_SUBSTRING(error.what(),
std::string("Argument shapes are inconsistent"));
}
}

TEST(type_prop, logical_and_broadcast)
{
auto input1 = make_shared<op::Parameter>(element::boolean, Shape{1, 1, 6});
auto input2 = make_shared<op::Parameter>(element::boolean, Shape{1, 3, 1});

auto logical_and = make_shared<op::v1::LogicalAnd>(input1, input2);

ASSERT_EQ(logical_and->get_element_type(), element::boolean);
ASSERT_EQ(logical_and->get_shape(), (Shape{1, 3, 6}));
}