Update Reduction operations specification (#1446)
Anton Chetverikov authored Jul 23, 2020
1 parent 82aa1e1 commit f90f242
Showing 8 changed files with 574 additions and 36 deletions.
87 changes: 82 additions & 5 deletions docs/ops/reduction/ReduceLogicalAnd_1.md
@@ -18,24 +18,30 @@

**Inputs**

* **1**: Input tensor x of any data type that has defined *logical and* operation. **Required.**
* **1**: Input tensor x of type *T1*. **Required.**

* **2**: Scalar or 1D tensor with axis indices for the 1st input along which reduction is performed. **Required.**
* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r-1]` where `r` is the rank of the input tensor; all values must be unique, repeats are not allowed. **Required.**

**Outputs**

* **1**: Tensor of the same type as the 1st input tensor, with `shape[i] = shapeOf(input1)[i]` for all `i` not in the list of axes from the 2nd input. For dimensions listed in the 2nd input, `shape[i] == 1` if `keep_dims == True`; otherwise the `i`-th dimension is removed from the output (see the shape sketch below).
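
The shape rule above can also be expressed procedurally. The snippet below is only a sketch of that rule, not part of the specification; the function name `reduced_output_shape` is chosen for this example:

```python
from typing import List

def reduced_output_shape(input_shape: List[int], axes: List[int], keep_dims: bool) -> List[int]:
    # Copy dimensions that are not reduced; keep reduced dimensions as 1 when
    # keep_dims is True, drop them otherwise.
    rank = len(input_shape)
    reduced = {a % rank for a in axes}  # normalize negative axis indices to [0, rank-1]
    out = []
    for i, dim in enumerate(input_shape):
        if i not in reduced:
            out.append(dim)
        elif keep_dims:
            out.append(1)
    return out

print(reduced_output_shape([6, 12, 10, 24], [2, 3], keep_dims=True))   # [6, 12, 1, 1]
print(reduced_output_shape([6, 12, 10, 24], [2, 3], keep_dims=False))  # [6, 12]
```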

**Types**

* *T1*: any supported numeric type.
* *T_IND*: `int64` or `int32`.

**Detailed Description**

Each element in the output is the result of reduction with *logical and* operation along dimensions specified by the 2nd input:

output[i0, i1, ..., iN] = and[j0,..., jN](x[j0, ..., jN]**2))
output[i0, i1, ..., iN] = and[j0, ..., jN](x[j0, ..., jN])

Where indices `i0, ..., iN` run through all valid indices for the 1st input, and the *logical and* operation `and[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.
Corner cases:

1. When the 2nd input is an empty list, the operation does nothing; it is an identity.
2. When the 2nd input contains all dimensions of the 1st input, a single reduction value is calculated for the entire input tensor (both cases are shown in the sketch below).
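
The semantics above, including both corner cases, can be illustrated with a short NumPy sketch. This is not part of the specification; the function name and the use of `np.all` are assumptions made for the illustration:

```python
import numpy as np

def reduce_logical_and(x, axes, keep_dims=False):
    # Illustrative reference for the behavior described above.
    axes = np.atleast_1d(np.asarray(axes, dtype=np.int64))
    if axes.size == 0:
        return x.copy()                              # corner case 1: empty axes list is an identity
    axes = tuple(int(a) % x.ndim for a in axes)      # map negative axes from [-r, r-1] to [0, r-1]
    # np.all performs the *logical and* reduction along the selected axes
    return np.all(x, axis=axes, keepdims=keep_dims)

x = np.random.rand(6, 12, 10, 24) > 0.5
print(reduce_logical_and(x, [2, 3]).shape)           # (6, 12)
print(reduce_logical_and(x, [0, 1, 2, 3]).shape)     # () -- corner case 2: one value for the whole tensor
```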

**Example**

@@ -62,4 +68,75 @@ Corner cases:
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>2</dim> <!-- value is [2, 3]: reduce along the spatial dimensions, independently for each batch and channel -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [1]: reduce along the channel dimension, independently for each batch and spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalAnd" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [-2]: reduce along the first spatial dimension, independently for each batch, channel and second spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
85 changes: 81 additions & 4 deletions docs/ops/reduction/ReduceLogicalOr_1.md
@@ -18,14 +18,19 @@

**Inputs**

* **1**: Input tensor x of any data type that has defined *logical or* operation. **Required.**
* **1**: Input tensor x of type *T1*. **Required.**

* **2**: Scalar or 1D tensor with axis indices for the 1st input along which reduction is performed. **Required.**
* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r-1]` where `r` is the rank of the input tensor; all values must be unique, repeats are not allowed. **Required.**

**Outputs**

* **1**: Tensor of the same type as the 1st input tensor, with `shape[i] = shapeOf(input1)[i]` for all `i` not in the list of axes from the 2nd input. For dimensions listed in the 2nd input, `shape[i] == 1` if `keep_dims == True`; otherwise the `i`-th dimension is removed from the output.

**Types**

* *T1*: any supported numeric type.
* *T_IND*: `int64` or `int32`.

**Detailed Description**

Each element in the output is the result of reduction with *logical or* operation along dimensions specified by the 2nd input:
@@ -34,8 +39,9 @@ Each element in the output is the result of reduction with *logical or* operation

Where indices `i0, ..., iN` run through all valid indices for the 1st input, and the *logical or* operation `or[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.
Corner cases:

1. When the 2nd input is an empty list, the operation does nothing; it is an identity.
2. When the 2nd input contains all dimensions of the 1st input, a single reduction value is calculated for the entire input tensor (see the snippet below).
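
As a rough illustration of the *logical or* reduction and of how `keep_dims` affects the output shape, the NumPy snippet below may help; it is a sketch only and assumes `np.any` matches the semantics described above:

```python
import numpy as np

x = np.random.rand(6, 12, 10, 24) > 0.5

# np.any performs the *logical or* reduction along the given axes
print(np.any(x, axis=(2, 3), keepdims=False).shape)  # (6, 12)       reduced dimensions are removed
print(np.any(x, axis=(2, 3), keepdims=True).shape)   # (6, 12, 1, 1) reduced dimensions are kept as 1
print(np.any(x, axis=1, keepdims=False).shape)       # (6, 10, 24)
```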

**Example**

@@ -62,4 +68,75 @@ Corner cases:
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>2</dim> <!-- value is [2, 3]: reduce along the spatial dimensions, independently for each batch and channel -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [1]: reduce along the channel dimension, independently for each batch and spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceLogicalOr" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [-2]: reduce along the first spatial dimension, independently for each batch, channel and second spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
<dim>24</dim>
</port>
</output>
</layer>
```
2 changes: 1 addition & 1 deletion docs/ops/reduction/ReduceLp_4.md
@@ -1,4 +1,4 @@
## ReduceLp <a name="ReduceLp"></a>
## ReduceLp <a name="ReduceLp"></a> {#openvino_docs_ops_reduction_ReduceLp_4}

**Versioned name**: *ReduceLp-4*

87 changes: 82 additions & 5 deletions docs/ops/reduction/ReduceMax_1.md
@@ -18,24 +18,30 @@

**Inputs**

* **1**: Input tensor x of any data type that has defined maximum operation. **Required.**
* **1**: Input tensor x of type *T1*. **Required.**

* **2**: Scalar or 1D tensor with axis indices for the 1st input along which reduction is performed. **Required.**
* **2**: Scalar or 1D tensor of type *T_IND* with axis indices for the 1st input along which reduction is performed. Accepted range is `[-r, r-1]` where `r` is the rank of the input tensor; all values must be unique, repeats are not allowed. **Required.**

**Outputs**

* **1**: Tensor of the same type as the 1st input tensor, with `shape[i] = shapeOf(input1)[i]` for all `i` not in the list of axes from the 2nd input. For dimensions listed in the 2nd input, `shape[i] == 1` if `keep_dims == True`; otherwise the `i`-th dimension is removed from the output.

**Types**

* *T1*: any supported numeric type.
* *T_IND*: `int64` or `int32`.

**Detailed Description**

Each element in the output is the result of reduction with the maximum operation along dimensions specified by the 2nd input:

output[i0, i1, ..., iN] = max[j0,..., jN](x[j0, ..., jN]**2))
output[i0, i1, ..., iN] = max[j0, ..., jN](x[j0, ..., jN])

Where indices `i0, ..., iN` run through all valid indices for the 1st input, and the maximum operation `max[j0, ..., jN]` has `jk = ik` for those dimensions `k` that are not in the set of indices specified by the 2nd input of the operation.
Corner cases:

1. When the 2nd input is an empty list, the operation does nothing; it is an identity.
2. When the 2nd input contains all dimensions of the 1st input, a single reduction value is calculated for the entire input tensor (see the sketch below).
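
A minimal sketch of the axis validation described for input **2** combined with the maximum reduction might look as follows; the helper name and error messages are assumptions made for this example:

```python
import numpy as np

def reduce_max(x, axes, keep_dims=False):
    # Illustrative only: validate axes as described above, then reduce with max.
    r = x.ndim
    axes = np.atleast_1d(np.asarray(axes, dtype=np.int64))
    if axes.size and (axes.min() < -r or axes.max() > r - 1):
        raise ValueError(f"axes must be in the range [{-r}, {r - 1}]")
    normalized = tuple(int(a) % r for a in axes)
    if len(set(normalized)) != len(normalized):
        raise ValueError("axis values must be unique, repeats are not allowed")
    if not normalized:
        return x.copy()                              # empty axes list: identity
    return np.max(x, axis=normalized, keepdims=keep_dims)

x = np.arange(6 * 12 * 10 * 24, dtype=np.float32).reshape(6, 12, 10, 24)
print(reduce_max(x, [-2]).shape)                     # (6, 12, 24)
```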

**Example**

@@ -62,4 +68,75 @@ Corner cases:
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceMax" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>2</dim> <!-- value is [2, 3]: reduce along the spatial dimensions, independently for each batch and channel -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceMax" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [1]: reduce along the channel dimension, independently for each batch and spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>10</dim>
<dim>24</dim>
</port>
</output>
</layer>
```

```xml
<layer id="1" type="ReduceMax" ...>
<data keep_dims="False" />
<input>
<port id="0">
<dim>6</dim>
<dim>12</dim>
<dim>10</dim>
<dim>24</dim>
</port>
<port id="1">
            <dim>1</dim> <!-- value is [-2]: reduce along the first spatial dimension, independently for each batch, channel and second spatial position -->
</port>
</input>
<output>
<port id="2">
<dim>6</dim>
<dim>12</dim>
<dim>24</dim>
</port>
</output>
</layer>
```