
Avgpool revise #3674

Merged · Jan 13, 2021 · 41 commits

Changes from 35 commits
0932b97
Update the spec
pszmel Dec 9, 2020
92cb8ce
add unit-tests
pszmel Dec 9, 2020
8fb0a23
add avgPool unit-tests to CMakelist
pszmel Dec 9, 2020
5995941
Remove second constructor and change the first one to take default va…
pszmel Dec 9, 2020
7812563
add type_prop test for default values
pszmel Dec 9, 2020
a7a83e9
add 5d input single layer test instances
pszmel Dec 9, 2020
86c38b4
add type_prop tests
pszmel Dec 15, 2020
1447d17
Require input to be 4D or 5D
pszmel Dec 16, 2020
3dce61c
add validation check for pads size
pszmel Dec 16, 2020
9946bf4
Update few tests to take 5D input instead of 6D
pszmel Dec 17, 2020
77f3744
Update validate_and_infer_types method
pszmel Dec 18, 2020
88b24c0
Update infer_batched_pooling_forward and try_apply_auto_padding methods
pszmel Dec 18, 2020
295dcd5
Update auto_padding_spatial_dims_dynamic type_prop test for binary_co…
pszmel Dec 18, 2020
1abb033
style-apply
pszmel Dec 18, 2020
43fe80f
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 18, 2020
06b8c21
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 21, 2020
4801d6b
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 21, 2020
2eef6ec
add validation check for kernel size
pszmel Dec 22, 2020
0367368
add xfail for avgpool python backend test
pszmel Dec 22, 2020
4a188cb
style-apply
pszmel Dec 22, 2020
1347707
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 22, 2020
7b5ad7a
remove avgpool backend test from xfail list
pszmel Dec 23, 2020
71be5a6
Update spec
pszmel Dec 23, 2020
3c68cee
Allow the 3D input
pszmel Dec 23, 2020
964629f
Update type_prop test with 3D input
pszmel Dec 23, 2020
19e0fc8
style-apply
pszmel Dec 23, 2020
066d28c
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 23, 2020
b771667
Remove xfail_issue_38709
pszmel Dec 23, 2020
27be9d4
fix typo
pszmel Dec 23, 2020
f887a96
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 28, 2020
3e13f15
Update spec
pszmel Dec 28, 2020
6631ce6
Update outputs section in spec
pszmel Dec 28, 2020
37b7225
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Dec 29, 2020
f78e947
Update spec
pszmel Dec 29, 2020
c01a9d3
fix typo
pszmel Dec 29, 2020
db2e260
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Jan 7, 2021
006aac0
clean file
pszmel Jan 12, 2021
86d0057
Update detailed description and fix xml examples
pszmel Jan 12, 2021
dce5f0f
Merge remote-tracking branch 'upstream/master' into avgpool_revise
pszmel Jan 12, 2021
f29db85
fix exclude-type typo
pszmel Jan 13, 2021
c3425c1
fix typo in outputs section
pszmel Jan 13, 2021
120 changes: 111 additions & 9 deletions docs/ops/pooling/AvgPool_1.md
@@ -6,7 +6,11 @@

**Short description**: [Reference](http://caffe.berkeleyvision.org/tutorial/layers/pooling.html)

**Detailed description**: [Reference](http://cs231n.github.io/convolutional-networks/#pool). The spatial output dimensions (rounded down or up according to *rounding_type*) are:
`H_out = (H + pads_begin[0] + pads_end[0] - kernel[0]) / strides[0] + 1`
`W_out = (W + pads_begin[1] + pads_end[1] - kernel[1]) / strides[1] + 1`
`D_out = (D + pads_begin[2] + pads_end[2] - kernel[2]) / strides[2] + 1`

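For example, with `H = 32`, `pads_begin[0] = pads_end[0] = 1`, `kernel[0] = 2` and `strides[0] = 2`, `H_out = (32 + 1 + 1 - 2) / 2 + 1 = 17`, which matches the *explicit* examples below. A minimal sketch of this computation (a hypothetical helper, not part of the specification or the ngraph API):

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical helper: one spatial output dimension of AvgPool, following the
// formula above. ceil_mode mirrors rounding_type="ceil"; the default is floor.
std::size_t pooled_dim(std::size_t in, std::size_t pad_begin, std::size_t pad_end,
                       std::size_t kernel, std::size_t stride, bool ceil_mode)
{
    const double out =
        static_cast<double>(in + pad_begin + pad_end - kernel) / stride + 1;
    return static_cast<std::size_t>(ceil_mode ? std::ceil(out) : std::floor(out));
}

// pooled_dim(32, 1, 1, 2, 2, false) == 17  (explicit padding examples below)
// pooled_dim(32, 0, 0, 2, 2, false) == 16  (valid padding example below)
```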

**Attributes**: *Pooling* attributes are specified in the `data` node, which is a child of the layer node.

@@ -46,7 +50,7 @@

* *exclude-pad*

* **Description**: *exclude-pad* is a type of pooling strategy for values in the padding area. For example, if *exclude-pad* is "true", zero values that come from padding are not included in the averaging calculation.
* **Range of values**: true or false
* **Type**: boolean
* **Default value**: None
@@ -60,6 +64,7 @@
* *floor*
* **Type**: string
* **Default value**: *floor*
* **Required**: *no*

* *auto_pad*

@@ -68,26 +73,123 @@
* *same_upper (same_lower)* - the input is padded so that the output size matches `ceil(input / stride)`. If the total padding along an axis is odd, the extra padding is added at the end (at the beginning).
* *valid* - do not use padding.
* **Type**: string
* **Default value**: *explicit*
* **Required**: *no*
* **Note**: *pads_begin* and *pads_end* attributes are ignored when *auto_pad* is specified.
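
As a rough illustration of how the *same_upper* paddings can be derived (a sketch only; the actual logic lives in ngraph's `try_apply_auto_padding`, shown further down in this diff):

```cpp
#include <cstddef>

// Sketch: choose pads so that out = ceil(in / stride). With an odd total,
// same_upper puts the extra element into pad_end; same_lower would put it
// into pad_begin instead.
void same_upper_pads(std::size_t in, std::size_t kernel, std::size_t stride,
                     std::size_t& pad_begin, std::size_t& pad_end)
{
    const std::size_t out = (in + stride - 1) / stride; // ceil(in / stride)
    const std::ptrdiff_t needed =
        static_cast<std::ptrdiff_t>((out - 1) * stride + kernel) -
        static_cast<std::ptrdiff_t>(in);
    const std::size_t total = needed > 0 ? static_cast<std::size_t>(needed) : 0;
    pad_begin = total / 2;
    pad_end = total - pad_begin;
}
// same_upper_pads(32, 3, 2, b, e) -> out = 16, total pad 1, b = 0, e = 1.
```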

**Inputs**:

* **1**: 3D, 4D or 5D input tensor. Required.

**Outputs**:
* **1**: Input shape can be either `[N,C,H]`, `[N,C,H,W]` or `[N,C,H,W,D]`. The corresponding output shape is then `[N,C,H_out]`, `[N,C,H_out,W_out]` or `[N,C,H_out,W_out,D_out]`.

**Mathematical Formulation**

\f[
output_{j} = \frac{\sum_{i = 1}^{n}x_{i}}{n}
\f]
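
Here `n` is the number of elements averaged in one pooling window: with *exclude-pad* set to "false" it is always the full kernel volume, while with "true" only elements that actually come from the input are counted. A hedged 1D sketch of that per-window rule (illustrative, not the reference implementation):

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: averages one 1D pooling window. Positions outside
// [0, in.size()) are padding zeros. exclude_pad == true divides by the
// number of real input elements; false divides by the full kernel size.
double window_average(const std::vector<double>& in, std::ptrdiff_t start,
                      std::size_t kernel, bool exclude_pad)
{
    double sum = 0.0;
    std::size_t count = 0;
    for (std::size_t k = 0; k < kernel; ++k)
    {
        const std::ptrdiff_t pos = start + static_cast<std::ptrdiff_t>(k);
        if (pos >= 0 && pos < static_cast<std::ptrdiff_t>(in.size()))
        {
            sum += in[static_cast<std::size_t>(pos)];
            ++count;
        }
    }
    // A window lying entirely in the padding would make count == 0 here;
    // real implementations must reject or special-case that configuration.
    return sum / static_cast<double>(exclude_pad ? count : kernel);
}
```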

**Examples**

```xml
<layer ... type="AvgPool" ... >
    <data auto_pad="same_upper" exclude-pad="true" kernel="2,2" pads_begin="0,0" pads_end="1,1" strides="2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>32</dim>
            <dim>32</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>3</dim>
            <dim>16</dim>
            <dim>16</dim>
        </port>
    </output>
</layer>

<layer ... type="AvgPool" ... >
    <data auto_pad="same_upper" exclude-pad="false" kernel="2,2" pads_begin="0,0" pads_end="1,1" strides="2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>32</dim>
            <dim>32</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>3</dim>
            <dim>16</dim>
            <dim>16</dim>
        </port>
    </output>
</layer>

<layer ... type="AvgPool" ... >
    <data auto_pad="explicit" exclude-pad="true" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>32</dim>
            <dim>32</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>3</dim>
            <dim>17</dim>
            <dim>17</dim>
        </port>
    </output>
</layer>

<layer ... type="AvgPool" ... >
    <data auto_pad="explicit" exclude-pad="false" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>32</dim>
            <dim>32</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>3</dim>
            <dim>17</dim>
            <dim>17</dim>
        </port>
    </output>
</layer>

<layer ... type="AvgPool" ... >
    <data auto_pad="valid" exclude-pad="true" kernel="2,2" pads_begin="1,1" pads_end="1,1" strides="2,2"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>32</dim>
            <dim>32</dim>
        </port>
    </input>
    <output>
        <port id="1">
            <dim>1</dim>
            <dim>3</dim>
            <dim>16</dim>
            <dim>16</dim>
        </port>
    </output>
</layer>
```
@@ -31,6 +31,7 @@ const std::vector<std::vector<size_t >> strides = {{1, 1},
{1, 2}};
const std::vector<std::vector<size_t >> strides3D = {{1, 1, 1},
{2, 2, 2}};
const std::vector<std::vector<size_t >> stridess3D = {{2, 2, 2}};
const std::vector<std::vector<size_t >> padBegins = {{0, 0},
{0, 2}};
const std::vector<std::vector<size_t >> padBegins3D = {{0, 0, 0}};
@@ -277,6 +278,78 @@ INSTANTIATE_TEST_CASE_P(smoke_AvgPool_ExplicitPad_FloorRounding, PoolingLayerTes
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);

/* ========== Explicit Pad Floor Rounding 5D input ========== */
const auto avgPool_ExplicitPad_FloorRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::FLOOR),
::testing::Values(ngraph::op::PadType::EXPLICIT),
::testing::Values(true, false)
);

INSTANTIATE_TEST_CASE_P(smoke_AvgPool_ExplicitPad_FloorRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_ExplicitPad_FloorRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 4})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);

/* ========== Same Upper Pad Floor Rounding 5D input ========== */
const auto avgPool_SameUpperPad_FloorRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::FLOOR),
::testing::Values(ngraph::op::PadType::SAME_UPPER),
::testing::Values(true)
);

INSTANTIATE_TEST_CASE_P(smoke_AvgPool_SameUpperPad_FloorRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_SameUpperPad_FloorRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 4})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);

/* ========== Same Lower Pad Ceil Rounding 5D input ========== */
const auto avgPool_SameLowerPad_CeilRounding_5Dinput_Params = ::testing::Combine(
::testing::Values(ngraph::helpers::PoolingTypes::AVG),
::testing::ValuesIn(kernel3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::Values(ngraph::op::RoundingType::CEIL),
::testing::Values(ngraph::op::PadType::SAME_LOWER),
::testing::Values(true)
);

INSTANTIATE_TEST_CASE_P(smoke_AvgPool_SameLowerPad_CeilRounding_5Dinput, PoolingLayerTest,
::testing::Combine(
avgPool_SameLowerPad_CeilRounding_5Dinput_Params,
::testing::ValuesIn(netPrecisions),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t >({32, 32, 2, 2, 2})),
::testing::Values(CommonTestUtils::DEVICE_CPU)),
PoolingLayerTest::getTestCaseName);

/* ========== Avg and Max Pooling Cases ========== */
/* ========== Valid Pad Rounding Not Applicable ========== */
const auto allPools_ValidPad_Params = ::testing::Combine(
Expand Down Expand Up @@ -305,5 +378,6 @@ INSTANTIATE_TEST_CASE_P(smoke_MAX_and_AVGPool_ValidPad, PoolingLayerTest,




} // namespace
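
For readers unfamiliar with the pattern above: `::testing::Combine` builds the Cartesian product of its generators, so each `INSTANTIATE_TEST_CASE_P` expands into one test instance per parameter combination. A toy sketch of the same mechanism (unrelated to the real `PoolingLayerTest`):

```cpp
#include <gtest/gtest.h>
#include <tuple>

// Toy fixture: each instance receives one (kernel, exclude_pad) combination.
class ToyCombineTest : public ::testing::TestWithParam<std::tuple<int, bool>>
{
};

TEST_P(ToyCombineTest, KernelIsPositive)
{
    const int kernel = std::get<0>(GetParam());
    EXPECT_GT(kernel, 0);
}

// 2 kernels x 2 flags -> 4 generated test instances.
INSTANTIATE_TEST_CASE_P(Toy,
                        ToyCombineTest,
                        ::testing::Combine(::testing::Values(2, 3),
                                           ::testing::Values(true, false)));
```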

28 changes: 2 additions & 26 deletions ngraph/core/include/ngraph/op/avg_pool.hpp
@@ -58,32 +58,8 @@ namespace ngraph
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type = op::RoundingType::FLOOR,
const PadType& auto_pad = op::PadType::EXPLICIT);

///
/// \brief Constructs a batched average pooling operation.
///
/// \param arg The output producing the input data batch tensor.<br>
/// `[d1, dn]`
/// \param strides The strides.<br> `[n]`
/// \param pads_begin The beginning of padding shape.<br> `[n]`
/// \param pads_end The end of padding shape.<br> `[n]`
/// \param kernel The kernel shape.<br> `[n]`
/// \param exclude_pad If false then averages include padding elements, each
/// treated as the number zero. If true, padding
/// elements
/// are entirely ignored when computing averages.
/// \param rounding_type Whether to use ceiling or floor rounding type while
/// computing output shape.
///
AvgPool(const Output<Node>& arg,
const Strides& strides,
const Shape& pads_begin,
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type);

size_t get_version() const override { return 1; }
void validate_and_infer_types() override;
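With the second constructor removed, callers rely on the new default arguments. A minimal usage sketch (assuming the usual ngraph headers and an existing `data` output; not code from the PR itself):

```cpp
#include <memory>
#include <ngraph/ngraph.hpp>

using namespace ngraph;

std::shared_ptr<op::v1::AvgPool> make_pool(const Output<Node>& data)
{
    // rounding_type and auto_pad may now be omitted; this is what previously
    // required the separate seven-argument constructor.
    return std::make_shared<op::v1::AvgPool>(data,
                                             Strides{2, 2},
                                             Shape{0, 0}, // pads_begin
                                             Shape{1, 1}, // pads_end
                                             Shape{2, 2}, // kernel
                                             true);       // exclude_pad
}
```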
64 changes: 39 additions & 25 deletions ngraph/core/src/op/avg_pool.cpp
@@ -46,24 +46,6 @@ op::v1::AvgPool::AvgPool(const Output<Node>& arg,
constructor_validate_and_infer_types();
}

op::v1::AvgPool::AvgPool(const Output<Node>& arg,
const Strides& strides,
const Shape& pads_begin,
const Shape& pads_end,
const Shape& kernel,
bool exclude_pad,
op::RoundingType rounding_type)
: AvgPool(arg,
strides,
pads_begin,
pads_end,
kernel,
exclude_pad,
rounding_type,
op::PadType::EXPLICIT)
{
}

bool op::v1::AvgPool::visit_attributes(AttributeVisitor& visitor)
{
NGRAPH_OP_SCOPE(v1_AvgPool_visit_attributes);
@@ -96,24 +78,53 @@ void op::v1::AvgPool::validate_and_infer_types()
}

const PartialShape& arg_shape = get_input_partial_shape(0);

NODE_VALIDATION_CHECK(this,
arg_shape.rank().compatible(3) || arg_shape.rank().compatible(4) ||
arg_shape.rank().compatible(5),
"Expected a 3D, 4D or 5D tensor for the input. Got: ",
arg_shape);

if (arg_shape.rank().is_static())
{
NODE_VALIDATION_CHECK(this,
m_pads_end.size() == arg_shape.rank().get_max_length() - 2,
"Expected pads_end size to be equal to input size - 2. Got: ",
m_pads_end.size());

NODE_VALIDATION_CHECK(this,
m_pads_begin.size() == arg_shape.rank().get_max_length() - 2,
"Expected pads_begin size to be equal to input size - 2. Got: ",
m_pads_begin.size());
NODE_VALIDATION_CHECK(this,
m_kernel.size() == arg_shape.rank().get_max_length() - 2,
"Expected kernel size to be equal to input size - 2. Got: ",
m_kernel.size());
NODE_VALIDATION_CHECK(this,
m_strides.size() == arg_shape.rank().get_max_length() - 2,
"Expected strides size to be equal to input size - 2. Got: ",
m_strides.size());
}

auto output_shape = PartialShape::dynamic();
if (arg_shape.rank().is_static())
{
output_shape =
std::vector<Dimension>(arg_shape.rank().get_max_length(), Dimension::dynamic());
if (arg_shape[0].is_static())
{
output_shape[0] = arg_shape[0]; // batch size
}
if (arg_shape[1].is_static())
{
output_shape[1] = arg_shape[1]; // channel size
}
}

bool update_auto_padding_succeed = true;
if (m_auto_pad == PadType::SAME_UPPER || m_auto_pad == PadType::SAME_LOWER)
{
CoordinateDiff pads_end;
CoordinateDiff pads_begin;
update_auto_padding_succeed =
try_apply_auto_padding(arg_shape,
m_kernel,
@@ -125,12 +136,15 @@ void op::v1::AvgPool::validate_and_infer_types()
m_pads_end = Shape(pads_end.begin(), pads_end.end());
m_pads_begin = Shape(pads_begin.begin(), pads_begin.end());
}

if (m_auto_pad == PadType::VALID)
{
m_pads_end = Shape(m_pads_end.size(), 0);
m_pads_begin = Shape(m_pads_begin.size(), 0);
}
// infer_batched_forward_pooling wants CoordinateDiffs for these, while the pooling ops for
// now still take Shape (no negative padding).
CoordinateDiff pads_begin(m_pads_begin.begin(), m_pads_begin.end());
CoordinateDiff pads_end(m_pads_end.begin(), m_pads_end.end());

set_output_type(0,
get_input_element_type(0),
update_auto_padding_succeed
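Given the checks above, an unsupported input rank now fails at construction time. A hedged sketch of the caller-side effect (assuming the usual ngraph headers; not a test from this PR):

```cpp
#include <memory>
#include <ngraph/ngraph.hpp>

using namespace ngraph;

void rank_check_example()
{
    // 2D input: rejected by the new rank check in validate_and_infer_types(),
    // which runs via constructor_validate_and_infer_types().
    auto bad = std::make_shared<op::Parameter>(element::f32, Shape{1, 3});
    try
    {
        auto pool = std::make_shared<op::v1::AvgPool>(
            bad, Strides{1}, Shape{0}, Shape{0}, Shape{2}, true);
    }
    catch (const NodeValidationFailure&)
    {
        // "Expected a 3D, 4D or 5D tensor for the input. Got: {1,3}"
    }
}
```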