diff --git a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md index 7bc95e8782f380..66431fbc8c2e60 100644 --- a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md +++ b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md @@ -933,7 +933,7 @@ Q102. What does the message "Operation _contrib_box_nms is not supported ..." me Q103. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean? ##################################################################################################################################################### -**A:** If a ``*.caffemodel`` file exists and is correct, the error occurred possibly because of the use of Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "``utf-8`` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use a newer Python version (3.7 - 3.11) or build the ``cpp`` implementation of ``protobuf`` yourself for your version of Python. For the complete instructions about building ``protobuf`` from sources, see the appropriate section in the :doc:`Converting Models with Model Optimizer ` guide. +**A:** If a ``*.caffemodel`` file exists and is correct, the error most likely occurred because the Python implementation of ``protobuf`` was used. In some cases, error messages may appear during model parsing, for example: "``utf-8`` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use a newer Python version (3.8 - 3.11) or build the ``cpp`` implementation of ``protobuf`` yourself for your version of Python. For complete instructions on building ``protobuf`` from source, see the appropriate section in the :doc:`Converting Models with Model Optimizer ` guide. ..
_question-104: diff --git a/docs/OV_Runtime_UG/Operations_specifications.md b/docs/OV_Runtime_UG/Operations_specifications.md index 413a58accf08bc..c9d063b388108e 100644 --- a/docs/OV_Runtime_UG/Operations_specifications.md +++ b/docs/OV_Runtime_UG/Operations_specifications.md @@ -126,6 +126,7 @@ MulticlassNonMaxSuppression-9 Multiply-1 Negative-1 + NMSRotated-13 NonMaxSuppression-1 NonMaxSuppression-3 NonMaxSuppression-4 diff --git a/docs/dev/build_linux.md b/docs/dev/build_linux.md index dc617ed433ca5e..35fee45de09e94 100644 --- a/docs/dev/build_linux.md +++ b/docs/dev/build_linux.md @@ -11,7 +11,7 @@ The software was validated on: - [CMake](https://cmake.org/download/) 3.13 or higher - GCC 7.5 or higher to build OpenVINO Runtime -- Python 3.7 - 3.11 for OpenVINO Runtime Python API +- Python 3.8 - 3.11 for OpenVINO Runtime Python API - (Optional) Install Intel® Graphics Compute Runtime for OpenCL™ Driver package to enable inference on Intel integrated GPUs. Select a driver package from the table below depending on what version of Ubuntu you are installing on. | Ubuntu | Driver package | @@ -74,9 +74,9 @@ You can use the following additional build options: ``` 2. Enable the `-DENABLE_PYTHON=ON` option in the CMake step above (Step 4). To specify an exact Python version, use the following options: ``` - -DPYTHON_EXECUTABLE=`which python3.7` \ - -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \ - -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 + -DPYTHON_EXECUTABLE=`which python3.8` \ + -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.8.so \ + -DPYTHON_INCLUDE_DIR=/usr/include/python3.8 ``` 3. 
To build a wheel package (.whl), enable the `-DENABLE_WHEEL=ON` option in the CMake step above (Step 4), and install requirements: ```sh diff --git a/docs/dev/build_mac_arm.md b/docs/dev/build_mac_arm.md index 65857f9751db03..80678bb6ce4681 100644 --- a/docs/dev/build_mac_arm.md +++ b/docs/dev/build_mac_arm.md @@ -14,7 +14,7 @@ The software was validated on: - [brew](https://brew.sh) package manager to install additional dependencies. Use [install brew](https://brew.sh) guide to achieve this. - Installation step for python and python libraries varies depending on the host architecture: - - **arm64** Python 3.7 - 3.11 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): + - **arm64** Python 3.8 - 3.11 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): ```sh % # let's have a look what python versions are available in brew % brew search python diff --git a/docs/dev/build_mac_intel_cpu.md b/docs/dev/build_mac_intel_cpu.md index 83068109c14f1c..606178e3f376fe 100644 --- a/docs/dev/build_mac_intel_cpu.md +++ b/docs/dev/build_mac_intel_cpu.md @@ -12,7 +12,7 @@ The software was validated on: - [brew](https://brew.sh) package manager to install additional dependencies. Use [install brew](https://brew.sh) guide to achieve this. 
- Installation step for python and python libraries varies depending on the host architecture: - - **x86_64** Python 3.7 - 3.11 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): + - **x86_64** Python 3.8 - 3.11 for the OpenVINO Runtime Python API, Development tools (Model Optimizer, POT and others): ```sh % # let's have a look what python versions are available in brew % brew search python diff --git a/docs/dev/build_raspbian.md b/docs/dev/build_raspbian.md index 8743c3321c4621..9665c1ddb4954f 100644 --- a/docs/dev/build_raspbian.md +++ b/docs/dev/build_raspbian.md @@ -43,9 +43,9 @@ git clone --recurse-submodules --single-branch --branch=master https://github.co via `pip3`, adding the following options: ```sh -DENABLE_PYTHON=ON \ - -DPYTHON_EXECUTABLE=/usr/bin/python3.7 \ - -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.7m.so \ - -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 + -DPYTHON_EXECUTABLE=/usr/bin/python3.8 \ + -DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.8.so \ + -DPYTHON_INCLUDE_DIR=/usr/include/python3.8 ``` ## See also diff --git a/docs/dev/build_windows.md b/docs/dev/build_windows.md index e598fdd33f04e7..57b38d9deeb93c 100644 --- a/docs/dev/build_windows.md +++ b/docs/dev/build_windows.md @@ -11,7 +11,7 @@ Supported configurations: - [CMake](https://cmake.org/download/) 3.13 or higher - Microsoft Visual Studio 2019 or higher, version 16.3 or later > **NOTE**: Native Microsoft Visual Studio for WoA is available since 2022. -- Python 3.7 - 3.11 for OpenVINO Runtime Python API +- Python 3.8 - 3.11 for OpenVINO Runtime Python API > **NOTE**: Python for ARM64 is available since [3.11](https://www.python.org/downloads/windows/) version. 
- [Git for Windows*] - (Windows on ARM only) [LLVM for Windows on ARM (WoA)](https://github.com/llvm/llvm-project/releases/download/llvmorg-15.0.6/LLVM-15.0.6-woa64.exe) diff --git a/docs/install_guides/--installing-model-dev-tools.md b/docs/install_guides/--installing-model-dev-tools.md index 366034bafcac43..5454fbed103d12 100644 --- a/docs/install_guides/--installing-model-dev-tools.md +++ b/docs/install_guides/--installing-model-dev-tools.md @@ -16,7 +16,7 @@ OpenVINO Development Tools is a set of utilities that make it easy to develop an The instructions on this page show how to install OpenVINO Development Tools. If you are a Python developer, it only takes a few simple steps to install the tools with PyPI. If you are developing in C/C++, OpenVINO Runtime must be installed separately before installing OpenVINO Development Tools. -In both cases, Python 3.7 - 3.11 needs to be installed on your machine before starting. +In both cases, Python 3.8 - 3.11 needs to be installed on your machine before starting. .. 
note:: diff --git a/docs/install_guides/installing-openvino-apt.md b/docs/install_guides/installing-openvino-apt.md index 6dee8f9f19e2d8..6462e6c60b251b 100644 --- a/docs/install_guides/installing-openvino-apt.md +++ b/docs/install_guides/installing-openvino-apt.md @@ -34,7 +34,7 @@ * `CMake 3.13 or higher, 64-bit `__ * GCC 7.5.0 (for Ubuntu 18.04), GCC 9.3.0 (for Ubuntu 20.04) or GCC 11.3.0 (for Ubuntu 22.04) - * `Python 3.7 - 3.11, 64-bit `__ + * `Python 3.8 - 3.11, 64-bit `__ Installing OpenVINO Runtime diff --git a/docs/install_guides/installing-openvino-brew.md b/docs/install_guides/installing-openvino-brew.md index 93f20e2ae26063..d9c3545f57aaac 100644 --- a/docs/install_guides/installing-openvino-brew.md +++ b/docs/install_guides/installing-openvino-brew.md @@ -40,14 +40,14 @@ * `Homebrew `_ * `CMake 3.13 or higher, 64-bit `__ * GCC 7.5.0 (for Ubuntu 18.04), GCC 9.3.0 (for Ubuntu 20.04) or GCC 11.3.0 (for Ubuntu 22.04) - * `Python 3.7 - 3.10, 64-bit `__ + * `Python 3.8 - 3.10, 64-bit `__ .. tab-item:: macOS :sync: macos * `Homebrew `_ * `CMake 3.13 or higher `__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default installation). - * `Python 3.7 - 3.11 `__ . Install and add it to path. + * `Python 3.8 - 3.11 `__ . Install and add it to path. * Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory to install it. * (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development) diff --git a/docs/install_guides/installing-openvino-from-archive-linux.md b/docs/install_guides/installing-openvino-from-archive-linux.md index ace3fd5a424459..f08ef101b25ec4 100644 --- a/docs/install_guides/installing-openvino-from-archive-linux.md +++ b/docs/install_guides/installing-openvino-from-archive-linux.md @@ -50,7 +50,7 @@ :sync: software * `CMake 3.13 or higher, 64-bit `__ - * `Python 3.7 - 3.11, 64-bit `__ + * `Python 3.8 - 3.11, 64-bit `__ * GCC: .. 
tab-set:: diff --git a/docs/install_guides/installing-openvino-from-archive-macos.md b/docs/install_guides/installing-openvino-from-archive-macos.md index c2d95fa7180012..826fbe223e6374 100644 --- a/docs/install_guides/installing-openvino-from-archive-macos.md +++ b/docs/install_guides/installing-openvino-from-archive-macos.md @@ -28,7 +28,7 @@ :sync: software-requirements * `CMake 3.13 or higher `__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default install). - * `Python 3.7 - 3.11 `__ (choose 3.7 - 3.11). Install and add to path. + * `Python 3.8 - 3.11 `__ (choose 3.8 - 3.11). Install and add to path. * Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory * (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development) diff --git a/docs/install_guides/installing-openvino-from-archive-windows.md b/docs/install_guides/installing-openvino-from-archive-windows.md index 2193e78df60cb7..c10564ef6a8141 100644 --- a/docs/install_guides/installing-openvino-from-archive-windows.md +++ b/docs/install_guides/installing-openvino-from-archive-windows.md @@ -38,7 +38,7 @@ System Requirements * `Microsoft Visual Studio 2019 with MSBuild `__ or `Microsoft Visual Studio 2022 `__ * `CMake 3.14 or higher, 64-bit `__ (optional, only required for building sample applications) - * `Python 3.7 - 3.11, 64-bit `__ + * `Python 3.8 - 3.11, 64-bit `__ .. 
note:: diff --git a/docs/install_guides/installing-openvino-yum.md b/docs/install_guides/installing-openvino-yum.md index 0a71adce6b3f65..f4928cdceb3ae2 100644 --- a/docs/install_guides/installing-openvino-yum.md +++ b/docs/install_guides/installing-openvino-yum.md @@ -37,7 +37,7 @@ * `CMake 3.13 or higher, 64-bit `_ * GCC 8.2.0 - * `Python 3.7 - 3.11, 64-bit `_ + * `Python 3.8 - 3.11, 64-bit `_ Install OpenVINO Runtime diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 25e1aeac76ea02..b7c4d4d397a242 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -170,11 +170,11 @@ alias pip='noglob pip' On Windows*, some libraries are necessary to run OpenVINO. To resolve this issue, install the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe). You can also view a full download list on the [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist). -### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory +### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory To resolve missing external dependency on Ubuntu* 18.04, execute the following command: ```sh -sudo apt-get install libpython3.7 +sudo apt-get install libpython3.8 ``` ## Additional Resources diff --git a/docs/install_guides/pypi-openvino-rt.md b/docs/install_guides/pypi-openvino-rt.md index 2007e88dc1a4b3..157f6959122d45 100644 --- a/docs/install_guides/pypi-openvino-rt.md +++ b/docs/install_guides/pypi-openvino-rt.md @@ -89,11 +89,11 @@ Users in China might encounter errors while downloading sources via PIP during O On Windows*, some libraries are necessary to run OpenVINO. To resolve this issue, install the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe). 
You can also view a full download list on the [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist). -### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory +### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory To resolve missing external dependency on Ubuntu*, execute the following command: ```sh -sudo apt-get install libpython3.7 +sudo apt-get install libpython3.8 ``` ## Additional Resources diff --git a/docs/install_guides/troubleshooting.md b/docs/install_guides/troubleshooting.md index d5c969bee40108..92447e2abc3c82 100644 --- a/docs/install_guides/troubleshooting.md +++ b/docs/install_guides/troubleshooting.md @@ -113,7 +113,7 @@ .. dropdown:: Check the versions of Python and PIP - To check your Python version, run ``python -VV`` or ``python --version``. The supported Python versions should be 64-bit and between 3.7 and 3.11. If you are using Python 3.6, you are recommended to upgrade the version to 3.7 or higher. + To check your Python version, run ``python -VV`` or ``python --version``. Supported Python versions are 64-bit, 3.8 through 3.11. If you are using Python 3.7, it is recommended to upgrade to 3.8 or higher. If your Python version does not meet the requirements, update Python: diff --git a/docs/notebooks-installation.md b/docs/notebooks-installation.md index 52fd608e8f1804..d85f616e256ae5 100644 --- a/docs/notebooks-installation.md +++ b/docs/notebooks-installation.md @@ -30,18 +30,18 @@ The table below lists the supported operating systems and Python versions. | | (64-bit | | | ) `__ | +=====================================+================================+ -| Ubuntu 18.04 LTS | 3.7, 3.8, 3.9, 3.10. 3.11 | +| Ubuntu 18.04 LTS | 3.8, 3.9, 3.10, 3.11 | +-------------------------------------+--------------------------------+ -| Ubuntu 20.04 LTS | 3.7, 3.8, 3.9, 3.10, 3.11 | +| Ubuntu 20.04 LTS | 3.8, 3.9, 3.10, 3.11 | +-------------------------------------+--------------------------------+ | Red Hat Enterprise Linux 8 | 3.8, 3.9, 3.10, 3.11 | +-------------------------------------+--------------------------------+ -| macOS 10.15.x versions | 3.7, 3.8, 3.9, 3.10, 3.11 | +| macOS 10.15.x versions | 3.8, 3.9, 3.10, 3.11 | +-------------------------------------+--------------------------------+ -| Windows 10 Pro, Enterprise | 3.7, 3.8, 3.9, 3.10, 3.11 | +| Windows 10 Pro, Enterprise | 3.8, 3.9, 3.10, 3.11 | | or Education editions | | +-------------------------------------+--------------------------------+ -| Windows Server 2016 or higher | 3.7, 3.8, 3.9, 3.10, 3.11 | +| Windows Server 2016 or higher | 3.8, 3.9, 3.10, 3.11 | +-------------------------------------+--------------------------------+ OpenVINO Notebooks also require Git. Follow the guide below for your @@ -57,7 +57,7 @@ Installing prerequisites 1. **Install Python** - Download 64 bit version of Python software (3.7, 3.8, 3.9, 3.10, 3.11) from `python.org `__ + Download the 64-bit version of Python (3.8, 3.9, 3.10, 3.11) from `python.org `__ Run the installer by double clicking it. Follow the installation steps to set up the software. diff --git a/docs/ops/opset.md b/docs/ops/opset.md index d4d582dfa15b4e..af0a1639a55d2d 100644 --- a/docs/ops/opset.md +++ b/docs/ops/opset.md @@ -3,13 +3,14 @@ @sphinxdirective .. meta:: - :description: Check the list of available operation sets fully supported in + :description: Check the list of available operation sets fully supported in specific versions of OpenVINO™ toolkit. ..
toctree:: :maxdepth: 1 :hidden: + openvino_docs_ops_opset13 openvino_docs_ops_opset12 openvino_docs_ops_opset11 openvino_docs_ops_opset10 @@ -33,6 +34,8 @@ This topic provides a complete list of available sets of operations supported in * - OpenVINO™ Version - Actual Operations Set + * - 2023.2 + - :doc:`opset13 ` * - 2023.1 - :doc:`opset12 ` * - 2023.0 diff --git a/docs/ops/opset13.md b/docs/ops/opset13.md new file mode 100644 index 00000000000000..f1411293a17028 --- /dev/null +++ b/docs/ops/opset13.md @@ -0,0 +1,199 @@ +# opset13 {#openvino_docs_ops_opset13} + +@sphinxdirective + +.. meta:: + :description: Explore the examples of operation instances expressed as IR + XML snippets in the opset13 operation set, supported in OpenVINO™ + toolkit. + +This specification document describes the ``opset13`` operation set supported in OpenVINO™. +Support for each particular operation from the list below depends on the capabilities of an inference plugin +and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml +snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes +declared in ``namespace opset13``. 
+ + +Table of Contents +################## + +* :doc:`Abs ` +* :doc:`Acos ` +* :doc:`Acosh ` +* :doc:`AdaptiveAvgPool ` +* :doc:`AdaptiveMaxPool ` +* :doc:`Add ` +* :doc:`Asin ` +* :doc:`Asinh ` +* :doc:`Assign ` +* :doc:`Atan ` +* :doc:`Atanh ` +* :doc:`AvgPool ` +* :doc:`BatchNormInference ` +* :doc:`BatchToSpace ` +* :doc:`BinaryConvolution ` +* :doc:`Broadcast ` +* :doc:`Bucketize ` +* :doc:`CTCGreedyDecoder ` +* :doc:`CTCGreedyDecoderSeqLen ` +* :doc:`CTCLoss ` +* :doc:`Ceiling ` +* :doc:`Clamp ` +* :doc:`Concat ` +* :doc:`Constant ` +* :doc:`Convert ` +* :doc:`ConvertLike ` +* :doc:`Convolution ` +* :doc:`ConvolutionBackpropData ` +* :doc:`Cos ` +* :doc:`Cosh ` +* :doc:`CumSum ` +* :doc:`DeformableConvolution ` +* :doc:`DeformablePSROIPooling ` +* :doc:`DepthToSpace ` +* :doc:`DetectionOutput ` +* :doc:`DFT ` +* :doc:`Divide ` +* :doc:`Einsum ` +* :doc:`Elu ` +* :doc:`EmbeddingBagOffsetsSum ` +* :doc:`EmbeddingBagPackedSum ` +* :doc:`EmbeddingSegmentsSum ` +* :doc:`Equal ` +* :doc:`Erf ` +* :doc:`Exp ` +* :doc:`ExperimentalDetectronDetectionOutput_6 ` +* :doc:`ExperimentalDetectronGenerateProposalsSingleImage_6 ` +* :doc:`ExperimentalDetectronPriorGridGenerator_6 ` +* :doc:`ExperimentalDetectronROIFeatureExtractor_6 ` +* :doc:`ExperimentalDetectronTopKROIs_6 ` +* :doc:`ExtractImagePatches ` +* :doc:`Eye ` +* :doc:`FakeQuantize ` +* :doc:`Floor ` +* :doc:`FloorMod ` +* :doc:`Gather ` +* :doc:`GatherElements ` +* :doc:`GatherND ` +* :doc:`GatherTree ` +* :doc:`Gelu ` +* :doc:`GenerateProposals ` +* :doc:`Greater ` +* :doc:`GreaterEqual ` +* :doc:`GridSample ` +* :doc:`GRN ` +* :doc:`GroupConvolution ` +* :doc:`GroupConvolutionBackpropData ` +* :doc:`GroupNormalization ` +* :doc:`GRUCell ` +* :doc:`GRUSequence ` +* :doc:`HardSigmoid ` +* :doc:`HSigmoid ` +* :doc:`HSwish ` +* :doc:`IDFT ` +* :doc:`I420toBGR ` +* :doc:`I420toRGB ` +* :doc:`If ` +* :doc:`Interpolate ` +* :doc:`IRDFT ` +* :doc:`IsInf ` +* :doc:`IsNaN ` +* :doc:`Less ` +* :doc:`LessEqual ` +* 
:doc:`Log ` +* :doc:`LogicalAnd ` +* :doc:`LogicalNot ` +* :doc:`LogicalOr ` +* :doc:`LogicalXor ` +* :doc:`LogSoftmax ` +* :doc:`Loop ` +* :doc:`LRN ` +* :doc:`LSTMCell ` +* :doc:`LSTMSequence ` +* :doc:`MatMul ` +* :doc:`MatrixNMS ` +* :doc:`MaxPool ` +* :doc:`Maximum ` +* :doc:`Minimum ` +* :doc:`Mish ` +* :doc:`Mod ` +* :doc:`MVN ` +* :doc:`MulticlassNMS ` +* :doc:`Multiply ` +* :doc:`Negative ` +* :doc:`NMSRotated ` +* :doc:`NonMaxSuppression ` +* :doc:`NonZero ` +* :doc:`NormalizeL2 ` +* :doc:`NotEqual ` +* :doc:`NV12toBGR ` +* :doc:`NV12toRGB ` +* :doc:`OneHot ` +* :doc:`Pad ` +* :doc:`Parameter ` +* :doc:`Power ` +* :doc:`PReLU ` +* :doc:`PriorBoxClustered ` +* :doc:`PriorBox ` +* :doc:`Proposal ` +* :doc:`PSROIPooling ` +* :doc:`RandomUniform ` +* :doc:`Range ` +* :doc:`RDFT ` +* :doc:`ReLU ` +* :doc:`ReadValue ` +* :doc:`ReduceL1 ` +* :doc:`ReduceL2 ` +* :doc:`ReduceLogicalAnd ` +* :doc:`ReduceLogicalOr ` +* :doc:`ReduceMax ` +* :doc:`ReduceMean ` +* :doc:`ReduceMin ` +* :doc:`ReduceProd ` +* :doc:`ReduceSum ` +* :doc:`RegionYolo ` +* :doc:`ReorgYolo ` +* :doc:`Reshape ` +* :doc:`Result ` +* :doc:`ReverseSequence ` +* :doc:`RNNCell ` +* :doc:`RNNSequence ` +* :doc:`ROIAlign ` +* :doc:`ROIPooling ` +* :doc:`Roll ` +* :doc:`Round ` +* :doc:`ScatterElementsUpdate ` +* :doc:`ScatterNDUpdate ` +* :doc:`ScatterUpdate ` +* :doc:`Select ` +* :doc:`Selu ` +* :doc:`ShapeOf ` +* :doc:`ShuffleChannels ` +* :doc:`Sigmoid ` +* :doc:`Sign ` +* :doc:`Sin ` +* :doc:`Sinh ` +* :doc:`Slice ` +* :doc:`SoftMax ` +* :doc:`SoftPlus ` +* :doc:`SoftSign ` +* :doc:`SpaceToBatch ` +* :doc:`SpaceToDepth ` +* :doc:`Split ` +* :doc:`Sqrt ` +* :doc:`SquaredDifference ` +* :doc:`Squeeze ` +* :doc:`StridedSlice ` +* :doc:`Subtract ` +* :doc:`Swish ` +* :doc:`Tan ` +* :doc:`Tanh ` +* :doc:`TensorIterator ` +* :doc:`Tile ` +* :doc:`TopK ` +* :doc:`Transpose ` +* :doc:`Unique ` +* :doc:`Unsqueeze ` +* :doc:`VariadicSplit ` + +@endsphinxdirective diff --git a/docs/ops/sort/NMSRotated_13.md 
b/docs/ops/sort/NMSRotated_13.md new file mode 100644 index 00000000000000..5ae29954802563 --- /dev/null +++ b/docs/ops/sort/NMSRotated_13.md @@ -0,0 +1,145 @@ +# NMSRotated {#openvino_docs_ops_sort_NMSRotated_13} + +@sphinxdirective + +.. meta:: + :description: Learn about NMSRotated-13 - a sorting and maximization + operation, which requires five input tensors. + +**Versioned name**: *NMSRotated-13* + +**Category**: *Sorting and maximization* + +**Short description**: *NMSRotated* performs non-maximum suppression of the rotated boxes with predicted scores. + +**Detailed description**: *NMSRotated* performs regular non-maximum suppression, but the value of IoU is calculated for bounding boxes rotated by the corresponding angle. + +The general algorithm is described below: + +1. Let ``B = [b_0,...,b_n]`` be the list of initial detection boxes and ``S = [s_0,...,s_n]`` be the list of corresponding scores. +2. Let ``D = []`` be an initial collection of resulting boxes. +3. If ``B`` is empty, go to step 8. +4. Take the box with the highest score. Suppose that it is the box ``b`` with the score ``s``. +5. Delete ``b`` from ``B``. +6. If the score ``s`` is greater than or equal to ``score_threshold``, add ``b`` to ``D``; otherwise, go to step 8. +7. For each input box ``b_i`` from ``B`` and the corresponding score ``s_i``, set ``s_i = s_i * func(rotated_iou(b_i, b))`` and go to step 3. +8. Return ``D``, a collection of the corresponding scores ``S``, and the number of elements in ``D``. + +Here ``func(rotated_iou(b_i, b)) = 1 if rotated_iou(b_i, b) <= iou_threshold else 0``. + +Given two bounding boxes ``B1`` and ``B2``, the following steps are performed to calculate ``rotated_iou(B1, B2)``: + +1. Calculate rotated vertices, (x, y) coordinates of the 4 corners of each box transformed by the corresponding angle in radians according to the direction specified by the *clockwise* attribute. +2. Find all intersection points between edges of ``B1`` and ``B2``.
Add them to the ``intersection_points``. +3. Find all corners of ``B1`` within the area of ``B2``, and all corners of ``B2`` within the area of ``B1``. Add them to the ``intersection_points``. +4. Calculate ``intersection_area`` of the polygon described by ``intersection_points`` (see the Shoelace formula). +5. Calculate ``union_area``, the total area covered by ``B1`` and ``B2``: ``union_area = (B1_area + B2_area) - intersection_area``. +6. Return the intersection over union ``rotated_iou = intersection_area / union_area``. + + +This algorithm is applied independently to each class of each batch element. The total number of output boxes for each class must not exceed ``max_output_boxes_per_class``. + +**Attributes**: + + +* *sort_result_descending* + + * **Description**: *sort_result_descending* is a flag that specifies whether selected boxes should be sorted across batches. + * **Range of values**: true or false + + * *true* - sort selected boxes across batches. + * *false* - do not sort selected boxes across batches (boxes are sorted per class). + * **Type**: boolean + * **Default value**: true + * **Required**: *no* + +* *output_type* + + * **Description**: the output tensor type + * **Range of values**: "i64" or "i32" + * **Type**: string + * **Default value**: "i64" + * **Required**: *no* + +* *clockwise* + + * **Description**: the direction of the angle + * **Range of values**: true or false + + * *true* - a positive value of the angle is clockwise. + * *false* - a positive value of the angle is counterclockwise. + * **Type**: boolean + * **Default value**: true + * **Required**: *no* + + +**Inputs**: + +* **1**: ``boxes`` - tensor of type *T* and shape ``[num_batches, num_boxes, 5]``. The box data is supplied as ``[x_center, y_center, width, height, angle]``: the coordinates of the center, the width (x), the height (y), and the angle in radians.
**Required.** + +* **2**: ``scores`` - tensor of type *T* and shape ``[num_batches, num_classes, num_boxes]`` with box scores. **Required.** + +* **3**: ``max_output_boxes_per_class`` - scalar or 1D tensor with 1 element of type *T_MAX_BOXES* specifying the maximum number of boxes to be selected per class. **Required.** + +* **4**: ``iou_threshold`` - scalar or 1D tensor with 1 element of type *T_THRESHOLDS* specifying the intersection over union threshold. **Required.** + +* **5**: ``score_threshold`` - scalar or 1D tensor with 1 element of type *T_THRESHOLDS* specifying the minimum score required to consider a box for processing. **Required.** + + +**Outputs**: + +* **1**: ``selected_indices`` - tensor of type *output_type* and shape ``[number of selected boxes, 3]`` containing information about selected boxes as triplets ``[batch_index, class_index, box_index]``. + +* **2**: ``selected_scores`` - tensor of type *T_THRESHOLDS* and shape ``[number of selected boxes, 3]`` containing information about scores for each selected box as triplets ``[batch_index, class_index, box_score]``. + +* **3**: ``valid_outputs`` - 1D tensor with 1 element of type *output_type* representing the total number of selected boxes. + +Plugins that do not support dynamic output tensors produce ``selected_indices`` and ``selected_scores`` tensors of shape ``[min(num_boxes, max_output_boxes_per_class) * num_batches * num_classes, 3]``, which is an upper bound for the number of possible selected boxes. Output tensor elements that follow the actually selected boxes are filled with the value -1. + +**Types** + +* *T*: floating-point type. + +* *T_MAX_BOXES*: integer type. + +* *T_THRESHOLDS*: floating-point type. + + +**Example** + + ..
code-block:: xml
+   :force:
+
+    <layer ... type="NMSRotated">
+        <input>
+            <port id="0">
+                <dim>3</dim>
+                <dim>100</dim>
+                <dim>5</dim>
+            </port>
+            <port id="1">
+                <dim>3</dim>
+                <dim>5</dim>
+                <dim>100</dim>
+            </port>
+            <port id="2"/> <!-- 10 -->
+            <port id="3"/>
+            <port id="4"/>
+        </input>
+        <output>
+            <port id="5">
+                <dim>150</dim> <!-- min(100, 10) * 3 * 5 -->
+                <dim>3</dim>
+            </port>
+            <port id="6">
+                <dim>150</dim> <!-- min(100, 10) * 3 * 5 -->
+                <dim>3</dim>
+            </port>
+            <port id="7">
+                <dim>1</dim>
+            </port>
+        </output>
+    </layer>
+
+@endsphinxdirective diff --git a/src/bindings/python/src/openvino/runtime/ie_api.py b/src/bindings/python/src/openvino/runtime/ie_api.py index 8fbe985016ec03..a5df4733d7a19b 100644 --- a/src/bindings/python/src/openvino/runtime/ie_api.py +++ b/src/bindings/python/src/openvino/runtime/ie_api.py @@ -597,14 +597,25 @@ def import_model( ) -def compile_model(model_path: Union[str, Path]) -> CompiledModel: +def compile_model( + model: Union[Model, str, Path], + device_name: Optional[str] = "AUTO", + config: Optional[dict] = None, +) -> CompiledModel: """Compact method to compile model with AUTO plugin. - :param model_path: Path to file with model. - :type model_path: str, pathlib.Path - :return: A compiled model + :param model: Model acquired from read_model function or a path to a model in IR / ONNX / PDPD / + TF and TFLite format. + :type model: Union[openvino.runtime.Model, str, pathlib.Path] + :param device_name: Optional. Name of the device to load the model to. If not specified, + the default OpenVINO device will be selected by AUTO plugin. + :type device_name: str + :param config: Optional dict of pairs: + (property name, property value) relevant only for this load operation. + :type config: dict, optional + :return: A compiled model.
:rtype: openvino.runtime.CompiledModel """ core = Core() - return core.compile_model(model_path, "AUTO") + return core.compile_model(model, device_name, {} if config is None else config) diff --git a/src/bindings/python/tests/test_runtime/test_core.py b/src/bindings/python/tests/test_runtime/test_core.py index 0bbcf684179e92..f0583ca26aa03d 100644 --- a/src/bindings/python/tests/test_runtime/test_core.py +++ b/src/bindings/python/tests/test_runtime/test_core.py @@ -19,6 +19,7 @@ serialize, ) +import openvino.properties.hint as hints from openvino.runtime import Extension from tests.utils.helpers import ( generate_image, @@ -40,15 +41,6 @@ def test_compact_api_xml(): assert np.argmax(results[list(results)[0]]) == 531 -# request - https://docs.pytest.org/en/7.1.x/reference/reference.html#request -def test_compact_api_xml_posix_path(request, tmp_path): - xml_path, _ = create_filename_for_test(request.node.name, tmp_path, True) - model = get_relu_model() - serialize(model, xml_path) - compiled_model = compile_model(Path(xml_path)) - assert isinstance(compiled_model, CompiledModel) - - def test_compact_api_wrong_path(): # as inner method takes py::object as an input and turns it into string # it is necessary to assure that provided argument is either @@ -77,24 +69,58 @@ def test_core_class(device): # request - https://docs.pytest.org/en/7.1.x/reference/reference.html#request -def test_compile_model(request, tmp_path, device): +@pytest.mark.parametrize("device_name", [ + None, + "CPU", +]) +def test_compile_model(request, tmp_path, device_name): core = Core() xml_path, bin_path = create_filename_for_test(request.node.name, tmp_path) relu_model = get_relu_model() serialize(relu_model, xml_path, bin_path) model = core.read_model(model=xml_path, weights=bin_path) - compiled_model = core.compile_model(model, device) + compiled_model = None + if device_name is None: + compiled_model = core.compile_model(model) + else: + compiled_model = core.compile_model(model, device_name) 
+ assert isinstance(compiled_model, CompiledModel) -# request - https://docs.pytest.org/en/7.1.x/reference/reference.html#request -def test_compile_model_without_device(request, tmp_path): - core = Core() - xml_path, bin_path = create_filename_for_test(request.node.name, tmp_path) - relu_model = get_relu_model() - serialize(relu_model, xml_path, bin_path) - model = core.read_model(model=xml_path, weights=bin_path) - compiled_model = core.compile_model(model) +@pytest.fixture() +def get_model(): + return get_relu_model() + + +@pytest.fixture() +def get_model_path(request, tmp_path): + xml_path, _ = create_filename_for_test(request.node.name, tmp_path, True) + serialize(get_relu_model(), xml_path) + return Path(xml_path) + + +@pytest.mark.parametrize("model_type", [ + "get_model", + "get_model_path", +]) +@pytest.mark.parametrize("device_name", [ + None, + "CPU", +]) +@pytest.mark.parametrize("config", [ + None, + {hints.performance_mode(): hints.PerformanceMode.THROUGHPUT}, +]) +def test_compact_api(model_type, device_name, config, request): + compiled_model = None + + model = request.getfixturevalue(model_type) + if device_name is not None: + compiled_model = compile_model(model=model, device_name=device_name, config=config) + else: + compiled_model = compile_model(model=model, config=config) + assert isinstance(compiled_model, CompiledModel) diff --git a/src/common/transformations/src/transformations/common_optimizations/optimize_strided_slice.cpp b/src/common/transformations/src/transformations/common_optimizations/optimize_strided_slice.cpp index 7abffbd7b24d66..4df503379644f5 100644 --- a/src/common/transformations/src/transformations/common_optimizations/optimize_strided_slice.cpp +++ b/src/common/transformations/src/transformations/common_optimizations/optimize_strided_slice.cpp @@ -8,12 +8,12 @@ #include #include "itt.hpp" -#include "ngraph/slice_plan.hpp" #include "openvino/core/rt_info.hpp" #include "openvino/op/constant.hpp" #include 
"openvino/op/result.hpp" #include "openvino/op/slice.hpp" #include "openvino/op/strided_slice.hpp" +#include "openvino/op/util/slice_plan.hpp" #include "openvino/op/util/sub_graph_base.hpp" #include "openvino/op/variadic_split.hpp" #include "openvino/pass/manager.hpp" @@ -51,10 +51,9 @@ bool ov::pass::UselessSliceEraser::run_on_model(const std::shared_ptr return rewritten; } -OPENVINO_SUPPRESS_DEPRECATED_START namespace { -ngraph::SlicePlan get_slice_plan(std::shared_ptr slice) { +op::util::SlicePlan get_slice_plan(std::shared_ptr slice) { auto convert_mask_to_axis_set = [](const std::vector& mask) { ov::AxisSet axis_set{}; for (size_t i = 0; i < static_cast(mask.size()); ++i) { @@ -69,7 +68,7 @@ ngraph::SlicePlan get_slice_plan(std::shared_ptr slice auto end = std::dynamic_pointer_cast(slice->input_value(2).get_node_shared_ptr()); auto strides = std::dynamic_pointer_cast(slice->input_value(3).get_node_shared_ptr()); if (!begin || !end || !strides || slice->input(0).get_partial_shape().is_dynamic()) - return ngraph::SlicePlan(); + return op::util::SlicePlan(); auto begin_vec = begin->cast_vector(); auto end_vec = end->cast_vector(); @@ -77,15 +76,15 @@ ngraph::SlicePlan get_slice_plan(std::shared_ptr slice const auto begin_mask = convert_mask_to_axis_set(slice->get_begin_mask()); const auto end_mask = convert_mask_to_axis_set(slice->get_end_mask()); - ngraph::SlicePlan plan = ngraph::make_slice_plan(slice->input(0).get_shape(), - begin_vec, - end_vec, - strides_vec, - begin_mask, - end_mask, - convert_mask_to_axis_set(slice->get_new_axis_mask()), - convert_mask_to_axis_set(slice->get_shrink_axis_mask()), - convert_mask_to_axis_set(slice->get_ellipsis_mask())); + const auto plan = op::util::make_slice_plan(slice->input(0).get_shape(), + begin_vec, + end_vec, + strides_vec, + begin_mask, + end_mask, + convert_mask_to_axis_set(slice->get_new_axis_mask()), + convert_mask_to_axis_set(slice->get_shrink_axis_mask()), + 
convert_mask_to_axis_set(slice->get_ellipsis_mask())); return plan; } @@ -94,7 +93,7 @@ bool strided_slices_perform_the_same(std::shared_ptr l auto lhs_plan = get_slice_plan(lhs); auto rhs_plan = get_slice_plan(rhs); - auto empty_plan = ngraph::SlicePlan(); + const auto empty_plan = op::util::SlicePlan(); if (lhs_plan == empty_plan || rhs_plan == empty_plan) return false; return lhs_plan == rhs_plan; @@ -138,7 +137,7 @@ bool ov::pass::GroupedStridedSliceOptimizer::run_on_model(const std::shared_ptr< bool graph_rewritten = false; struct planned_slice { std::shared_ptr ptr; - ngraph::SlicePlan plan; + op::util::SlicePlan plan; }; std::map, std::vector> source_to_ss_with_plan; @@ -151,7 +150,7 @@ bool ov::pass::GroupedStridedSliceOptimizer::run_on_model(const std::shared_ptr< } if (auto ss = std::dynamic_pointer_cast(node)) { auto slice_plan = get_slice_plan(ss); - if (slice_plan == ngraph::SlicePlan()) + if (slice_plan == op::util::SlicePlan()) continue; source_to_ss_with_plan[ss->input_value(0)].push_back({ss, slice_plan}); } diff --git a/src/core/dev_api/openvino/op/util/slice_plan.hpp b/src/core/dev_api/openvino/op/util/slice_plan.hpp new file mode 100644 index 00000000000000..a20ba63d0e31d8 --- /dev/null +++ b/src/core/dev_api/openvino/op/util/slice_plan.hpp @@ -0,0 +1,62 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "openvino/core/axis_set.hpp" +#include "openvino/core/shape.hpp" + +namespace ov { +namespace op { +namespace util { + +/** + * @brief A collection of parameters for advanced slicing + * @details In various places, like ConstantFolding, it is useful to transform DynSlice by converting it to a sequence + * of ops: + * + * Slice (to do the basic slicing) + * | + * v + * Reshape (non-transposing, to handle shrinks) + * | + * v + * Reverse (to emulate backwards stride) + * + * (The Reshape, Reverse, or both may be omitted if they would just be identities.) 
+ * + * A SlicePlan is used to collect parameters for these ops. + **/ +struct OPENVINO_API SlicePlan { + // Parameters for the Slice + std::vector begins; + std::vector ends; + std::vector strides; + + // Shapes coming into, and going out of, the Reshape. + Shape reshape_in_shape; + Shape reshape_out_shape; + + // Parameters for the Reverse + AxisSet reverse_axes; + + bool operator==(const SlicePlan& other) const; + bool operator!=(const SlicePlan& other) const; +}; + +/** + * @brief Prepares slice plan for strided slicing + **/ +SlicePlan OPENVINO_API make_slice_plan(const Shape& input_shape, + const std::vector& begins, + const std::vector& ends, + const std::vector& strides, + const AxisSet& lower_bounds_mask, + const AxisSet& upper_bounds_mask, + const AxisSet& new_axis_mask, + const AxisSet& shrink_axis_mask, + const AxisSet& ellipsis_mask); +} // namespace util +} // namespace op +} // namespace ov diff --git a/src/core/include/openvino/op/logical_xor.hpp b/src/core/include/openvino/op/logical_xor.hpp index 41ad89abca2638..773f0eba593ef8 100644 --- a/src/core/include/openvino/op/logical_xor.hpp +++ b/src/core/include/openvino/op/logical_xor.hpp @@ -34,9 +34,7 @@ class OPENVINO_API LogicalXor : public util::BinaryElementwiseLogical { std::shared_ptr clone_with_new_inputs(const OutputVector& new_args) const override; - OPENVINO_SUPPRESS_DEPRECATED_START - bool evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const override; - OPENVINO_SUPPRESS_DEPRECATED_END + bool evaluate(TensorVector& outputs, const TensorVector& inputs) const override; bool has_evaluate() const override; }; } // namespace v1 diff --git a/src/core/include/openvino/op/unsqueeze.hpp b/src/core/include/openvino/op/unsqueeze.hpp index 243c3abcc59f19..855507cf9146e5 100644 --- a/src/core/include/openvino/op/unsqueeze.hpp +++ b/src/core/include/openvino/op/unsqueeze.hpp @@ -23,9 +23,7 @@ class OPENVINO_API Unsqueeze : public Op { void validate_and_infer_types() override; 
bool visit_attributes(AttributeVisitor& visitor) override; - OPENVINO_SUPPRESS_DEPRECATED_START - bool evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const override; - OPENVINO_SUPPRESS_DEPRECATED_END + bool evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const override; bool has_evaluate() const override; bool evaluate_lower(TensorVector& output_values) const override; bool evaluate_upper(TensorVector& output_values) const override; diff --git a/src/core/include/openvino/op/xor.hpp b/src/core/include/openvino/op/xor.hpp index fad595c5d61dce..700aa34891e0b2 100644 --- a/src/core/include/openvino/op/xor.hpp +++ b/src/core/include/openvino/op/xor.hpp @@ -34,9 +34,7 @@ class OPENVINO_API Xor : public util::BinaryElementwiseLogical { std::shared_ptr clone_with_new_inputs(const OutputVector& new_args) const override; - OPENVINO_SUPPRESS_DEPRECATED_START - bool evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const override; - OPENVINO_SUPPRESS_DEPRECATED_END + bool evaluate(TensorVector& outputs, const TensorVector& inputs) const override; bool has_evaluate() const override; }; } // namespace v0 diff --git a/src/core/reference/CMakeLists.txt b/src/core/reference/CMakeLists.txt index 0a6c16313d7ccb..4154a1455ffef0 100644 --- a/src/core/reference/CMakeLists.txt +++ b/src/core/reference/CMakeLists.txt @@ -42,7 +42,7 @@ target_include_directories(${TARGET_NAME} SYSTEM PRIVATE $:$>>) find_package(Threads REQUIRED) -target_link_libraries(${TARGET_NAME} PRIVATE Threads::Threads) +target_link_libraries(${TARGET_NAME} PRIVATE Threads::Threads openvino::core::dev) ov_add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}) diff --git a/src/core/reference/include/openvino/reference/strided_slice.hpp b/src/core/reference/include/openvino/reference/strided_slice.hpp index 9c47c8c1e8f584..256b58ca12d200 100644 --- a/src/core/reference/include/openvino/reference/strided_slice.hpp +++ 
b/src/core/reference/include/openvino/reference/strided_slice.hpp @@ -6,18 +6,13 @@ #include -#include "ngraph/check.hpp" -#include "ngraph/runtime/host_tensor.hpp" -#include "ngraph/runtime/opt_kernel/reshape.hpp" -#include "ngraph/slice_plan.hpp" +#include "openvino/op/util/slice_plan.hpp" #include "openvino/reference/reverse.hpp" #include "openvino/reference/slice.hpp" #include "openvino/reference/utils/coordinate_transform.hpp" namespace ov { namespace reference { -NGRAPH_SUPPRESS_DEPRECATED_START -void strided_slice(const char* arg, char* out, const Shape& arg_shape, const ngraph::SlicePlan& sp, size_t elem_type); -NGRAPH_SUPPRESS_DEPRECATED_END +void strided_slice(const char* arg, char* out, const Shape& arg_shape, const op::util::SlicePlan& sp, size_t elem_type); } // namespace reference } // namespace ov diff --git a/src/core/reference/include/openvino/reference/xor.hpp b/src/core/reference/include/openvino/reference/xor.hpp index b3c8bae203e826..637ffa868498c3 100644 --- a/src/core/reference/include/openvino/reference/xor.hpp +++ b/src/core/reference/include/openvino/reference/xor.hpp @@ -4,21 +4,36 @@ #pragma once +#include #include -#include "ngraph/op/util/attr_types.hpp" -#include "ngraph/shape.hpp" #include "openvino/reference/autobroadcast_binop.hpp" namespace ov { namespace reference { + +namespace func { +template +T logical_xor(const T a, const T b) { + return static_cast((a || b) && !(a && b)); +} +} // namespace func + template -void logical_xor(const T* arg0, const T* arg1, T* out, size_t count) { - for (size_t i = 0; i < count; i++) { - out[i] = static_cast((arg0[i] || arg1[i]) && !(arg0[i] && arg1[i])); - } +void logical_xor(const T* arg0, const T* arg1, T* out, const size_t count) { + std::transform(arg0, std::next(arg0, count), arg1, out, &func::logical_xor); } +/** + * @brief Reference implementation of binary elementwise LogicalXor operator. + * + * @param arg0 Pointer to input 0 data. + * @param arg1 Pointer to input 1 data. 
+ * @param out Pointer to output data. + * @param arg0_shape Input 0 shape. + * @param arg1_shape Input 1 shape. + * @param broadcast_spec Broadcast specification mode. + */ template void logical_xor(const T* arg0, @@ -26,9 +41,7 @@ void logical_xor(const T* arg0, const Shape& arg0_shape, const Shape& arg1_shape, const op::AutoBroadcastSpec& broadcast_spec) { - autobroadcast_binop(arg0, arg1, out, arg0_shape, arg1_shape, broadcast_spec, [](T x, T y) -> T { - return static_cast((x || y) && !(x && y)); - }); + autobroadcast_binop(arg0, arg1, out, arg0_shape, arg1_shape, broadcast_spec, &func::logical_xor); } } // namespace reference } // namespace ov diff --git a/src/core/reference/src/op/strided_slice.cpp b/src/core/reference/src/op/strided_slice.cpp index 06e95dd4d1727e..457a65dec5d0c1 100644 --- a/src/core/reference/src/op/strided_slice.cpp +++ b/src/core/reference/src/op/strided_slice.cpp @@ -10,6 +10,7 @@ #include "ngraph/check.hpp" #include "ngraph/runtime/aligned_buffer.hpp" +#include "ngraph/runtime/opt_kernel/reshape.hpp" using namespace ov; NGRAPH_SUPPRESS_DEPRECATED_START @@ -17,7 +18,7 @@ NGRAPH_SUPPRESS_DEPRECATED_START void reference::strided_slice(const char* arg, char* out, const Shape& arg_shape, - const ngraph::SlicePlan& sp, + const op::util::SlicePlan& sp, size_t elem_type) { auto hasZeroDims = [](const ov::Shape& shape) -> bool { return std::any_of(shape.begin(), shape.end(), [](const size_t& dim) { diff --git a/src/core/src/op/batch_to_space.cpp b/src/core/src/op/batch_to_space.cpp index 4e3a391675d74c..c7ae3d7580a02c 100644 --- a/src/core/src/op/batch_to_space.cpp +++ b/src/core/src/op/batch_to_space.cpp @@ -18,8 +18,8 @@ #include "ngraph/opsets/opset3.hpp" #include "ngraph/runtime/opt_kernel/reshape.hpp" #include "ngraph/shape.hpp" -#include "ngraph/slice_plan.hpp" #include "openvino/op/util/precision_sensitive_attribute.hpp" +#include "openvino/op/util/slice_plan.hpp" #include "openvino/reference/strided_slice.hpp" using
namespace std; @@ -160,18 +160,16 @@ bool batch_to_space_evaluate(const HostTensorVector& outputs, const HostTensorVe begins.assign(crops_begin_values, crops_begin_values + shape_size(inputs[2]->get_shape())); std::vector default_strides(begins.size(), 1); - OPENVINO_SUPPRESS_DEPRECATED_START - SlicePlan slice_plan = make_slice_plan(data_shape, - begins, - upperbounds_values, - default_strides, - begin_mask, - end_mask, - AxisSet(), - AxisSet(), - AxisSet()); + const auto slice_plan = ov::op::util::make_slice_plan(data_shape, + begins, + upperbounds_values, + default_strides, + begin_mask, + end_mask, + AxisSet(), + AxisSet(), + AxisSet()); ov::reference::strided_slice(flat_data, outputs[0]->get_data_ptr(), data_shape, slice_plan, elem_size); - OPENVINO_SUPPRESS_DEPRECATED_END return true; } } // namespace diff --git a/src/core/src/op/strided_slice.cpp b/src/core/src/op/strided_slice.cpp index 0e5a459ef00406..79647177e654d3 100644 --- a/src/core/src/op/strided_slice.cpp +++ b/src/core/src/op/strided_slice.cpp @@ -14,13 +14,13 @@ #include "ngraph/op/constant.hpp" #include "ngraph/op/shape_of.hpp" #include "ngraph/runtime/host_tensor.hpp" -#include "ngraph/slice_plan.hpp" #include "ngraph/type/element_type_traits.hpp" #include "ngraph/util.hpp" #include "ngraph/validation_util.hpp" #include "openvino/core/rt_info.hpp" #include "openvino/core/validation_util.hpp" #include "openvino/op/util/precision_sensitive_attribute.hpp" +#include "openvino/op/util/slice_plan.hpp" #include "openvino/pass/constant_folding.hpp" #include "openvino/reference/strided_slice.hpp" #include "strided_slice_shape_inference.hpp" @@ -189,11 +189,10 @@ shared_ptr op::v1::StridedSlice::clone_with_new_inputs(const OutputVector& m_ellipsis_mask); } -OPENVINO_SUPPRESS_DEPRECATED_START namespace strided_slice { namespace { OPENVINO_SUPPRESS_DEPRECATED_START -inline bool evaluate(const HostTensorPtr& in, const SlicePlan& sp, const HostTensorPtr& out) +inline bool evaluate(const HostTensorPtr& in, 
const ov::op::util::SlicePlan& sp, const HostTensorPtr& out) { auto in_shape = in->get_shape(); @@ -219,15 +218,15 @@ bool evaluate_strided_slice(const HostTensorPtr& in, std::vector begin_const = host_tensor_2_vector(begin); std::vector end_const = host_tensor_2_vector(end); std::vector stride_const = host_tensor_2_vector(stride); - SlicePlan slice_plan = make_slice_plan(in->get_shape(), - begin_const, - end_const, - stride_const, - begin_mask, - end_mask, - new_axis_mask, - shrink_axis_mask, - ellipsis_mask); + const auto slice_plan = ov::op::util::make_slice_plan(in->get_shape(), + begin_const, + end_const, + stride_const, + begin_mask, + end_mask, + new_axis_mask, + shrink_axis_mask, + ellipsis_mask); return evaluate(in, slice_plan, out); } OPENVINO_SUPPRESS_DEPRECATED_END diff --git a/src/core/src/op/unsqueeze.cpp b/src/core/src/op/unsqueeze.cpp index 78587dd9423ebf..c0c7a65891b741 100644 --- a/src/core/src/op/unsqueeze.cpp +++ b/src/core/src/op/unsqueeze.cpp @@ -2,27 +2,23 @@ // SPDX-License-Identifier: Apache-2.0 // -#include "ngraph/op/unsqueeze.hpp" +#include "openvino/op/unsqueeze.hpp" #include #include #include #include "bound_evaluate.hpp" -#include "element_visitor.hpp" #include "itt.hpp" -#include "ngraph/validation_util.hpp" -#include "openvino/reference/copy.hpp" +#include "openvino/core/validation_util.hpp" #include "unsqueeze_shape_inference.hpp" -using namespace std; -using namespace ngraph; - -op::v0::Unsqueeze::Unsqueeze(const Output& data, const Output& axes) : Op({data, axes}) { +ov::op::v0::Unsqueeze::Unsqueeze(const ov::Output& data, const ov::Output& axes) + : Op({data, axes}) { constructor_validate_and_infer_types(); } -void op::v0::Unsqueeze::validate_and_infer_types() { +void ov::op::v0::Unsqueeze::validate_and_infer_types() { OV_OP_SCOPE(v0_Unsqueeze_validate_and_infer_types); OPENVINO_SUPPRESS_DEPRECATED_START @@ -33,112 +29,51 @@ void op::v0::Unsqueeze::validate_and_infer_types() { set_output_type(0, get_input_element_type(0), 
output_shapes[0]); } -bool op::v0::Unsqueeze::visit_attributes(AttributeVisitor& visitor) { +bool ov::op::v0::Unsqueeze::visit_attributes(AttributeVisitor& visitor) { OV_OP_SCOPE(v0_Unsqueeze_visit_attributes); return true; } -shared_ptr op::v0::Unsqueeze::clone_with_new_inputs(const OutputVector& new_args) const { +std::shared_ptr ov::op::v0::Unsqueeze::clone_with_new_inputs(const OutputVector& new_args) const { OV_OP_SCOPE(v0_Unsqueeze_clone_with_new_inputs); if (new_args.size() != 2) { OPENVINO_THROW("Incorrect number of new arguments"); } - return make_shared(new_args.at(0), new_args.at(1)); -} - -OPENVINO_SUPPRESS_DEPRECATED_START -namespace ov { -namespace op { -namespace unsqueeze { -struct Evaluate : element::NoAction { - using element::NoAction::visit; - - template - static result_type visit(const HostTensorPtr& arg0, const HostTensorPtr& out, const size_t count) { - ov::reference::copy(arg0->get_data_ptr(), out->get_data_ptr(), count); - return true; - } -}; - -// The evaluate cannot use shape_infer for output shape calculation as shape inference accepts -// repeated axis and evaluate not. When shape inference will changed to be compatible with `numpy` then -// evaluate and inference can use same function to calculate output shape. TODO for next version for this operator. 
-namespace { -bool evaluate_unsqueeze(const Node* node, - const HostTensorPtr& arg0, - const HostTensorPtr& arg1, - const HostTensorPtr& out) { - auto element_type = arg0->get_element_type(); - out->set_element_type(element_type); - - const auto& axes_shape = arg1->get_shape(); - ov::op::v0::check_unsqueeze_axes_rank(node, Rank(axes_shape.size())); - - const auto& data_shape = arg0->get_shape(); - const auto out_rank = static_cast(data_shape.size() + shape_size(axes_shape)); - - // Get axes and normalize - OPENVINO_SUPPRESS_DEPRECATED_START - auto axes = read_index_vector(arg1); - normalize_axes(node, out_rank, axes); - OPENVINO_SUPPRESS_DEPRECATED_END - - // Sort in increasing order - std::set axes_set(axes.begin(), axes.end()); - NGRAPH_CHECK(axes.size() == axes_set.size(), "Axes has duplicate axis."); - - auto out_shape = data_shape; - for (int64_t axis : axes_set) { - out_shape.insert(out_shape.begin() + axis, 1); - } - out->set_shape(out_shape); - - using namespace ov::element; - return IfTypeOf::apply(element_type, - arg0, - out, - shape_size(out_shape)); + return std::make_shared(new_args.at(0), new_args.at(1)); } -} // namespace -} // namespace unsqueeze -} // namespace op -} // namespace ov -bool op::v0::Unsqueeze::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const { +bool ov::op::v0::Unsqueeze::evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const { OV_OP_SCOPE(v0_Unsqueeze_evaluate); - OPENVINO_SUPPRESS_DEPRECATED_START - NGRAPH_CHECK(validate_host_tensor_vector(inputs, 2)); - NGRAPH_CHECK(validate_host_tensor_vector(outputs, 1)); - OPENVINO_SUPPRESS_DEPRECATED_END - return unsqueeze::evaluate_unsqueeze(this, inputs[0], inputs[1], outputs[0]); + OPENVINO_ASSERT(inputs.size() == 2); + if (outputs.empty()) { + outputs.emplace_back(ov::Tensor(inputs[0].get_element_type(), {0})); + } else { + OPENVINO_ASSERT(outputs.size() == 1); + } + const auto& output_shape = shape_infer(this, + 
std::vector{inputs[0].get_shape(), inputs[1].get_shape()}, + make_tensor_accessor(inputs)) + .front() + .to_shape(); + outputs[0].set_shape(output_shape); + std::memcpy(outputs[0].data(), inputs[0].data(), outputs[0].get_byte_size()); + return true; } -bool op::v0::Unsqueeze::has_evaluate() const { +bool ov::op::v0::Unsqueeze::has_evaluate() const { OV_OP_SCOPE(v0_Unsqueeze_has_evaluate); - switch (get_input_element_type(0)) { - case ngraph::element::i32: - case ngraph::element::i64: - case ngraph::element::u32: - case ngraph::element::u64: - case ngraph::element::f16: - case ngraph::element::f32: - case ngraph::element::f64: - case ngraph::element::bf16: - return true; - default: - return false; - } + return true; } -bool op::v0::Unsqueeze::evaluate_lower(ov::TensorVector& output_values) const { +bool ov::op::v0::Unsqueeze::evaluate_lower(ov::TensorVector& output_values) const { return get_input_tensor(1).has_and_set_bound() && default_lower_bound_evaluator(this, output_values); } -bool op::v0::Unsqueeze::evaluate_upper(ov::TensorVector& output_values) const { +bool ov::op::v0::Unsqueeze::evaluate_upper(ov::TensorVector& output_values) const { return get_input_tensor(1).has_and_set_bound() && default_upper_bound_evaluator(this, output_values); } -bool op::v0::Unsqueeze::evaluate_label(TensorLabelVector& output_labels) const { +bool ov::op::v0::Unsqueeze::evaluate_label(TensorLabelVector& output_labels) const { if (!get_input_tensor(1).has_and_set_bound()) return false; OPENVINO_SUPPRESS_DEPRECATED_START @@ -146,7 +81,7 @@ bool op::v0::Unsqueeze::evaluate_label(TensorLabelVector& output_labels) const { OPENVINO_SUPPRESS_DEPRECATED_END } -bool op::v0::Unsqueeze::constant_fold(OutputVector& output_values, const OutputVector& inputs_values) { +bool ov::op::v0::Unsqueeze::constant_fold(OutputVector& output_values, const OutputVector& inputs_values) { if (get_output_partial_shape(0).is_dynamic() || is_const_fold_disabled()) { return false; } diff --git 
a/src/core/src/op/util/slice_plan.cpp b/src/core/src/op/util/slice_plan.cpp index b93370c9220fe6..2025900745ec95 100644 --- a/src/core/src/op/util/slice_plan.cpp +++ b/src/core/src/op/util/slice_plan.cpp @@ -2,26 +2,28 @@ // SPDX-License-Identifier: Apache-2.0 // -#include "ngraph/slice_plan.hpp" +#include "openvino/op/util/slice_plan.hpp" #include -#include "ngraph/check.hpp" +#include "ngraph/op/util/slice_plan.hpp" +#include "openvino/core/except.hpp" -using namespace ngraph; -NGRAPH_SUPPRESS_DEPRECATED_START +namespace ov { +namespace op { +namespace util { -SlicePlan ngraph::make_slice_plan(const Shape& input_shape, - const std::vector& begins, - const std::vector& ends, - const std::vector& strides, - const AxisSet& lower_bounds_mask, - const AxisSet& upper_bounds_mask, - const AxisSet& new_axis_mask, - const AxisSet& shrink_axis_mask, - const AxisSet& ellipsis_mask) { - NGRAPH_CHECK(begins.size() == ends.size()); - NGRAPH_CHECK(ends.size() == strides.size()); +SlicePlan make_slice_plan(const Shape& input_shape, + const std::vector& begins, + const std::vector& ends, + const std::vector& strides, + const AxisSet& lower_bounds_mask, + const AxisSet& upper_bounds_mask, + const AxisSet& new_axis_mask, + const AxisSet& shrink_axis_mask, + const AxisSet& ellipsis_mask) { + OPENVINO_ASSERT(begins.size() == ends.size()); + OPENVINO_ASSERT(ends.size() == strides.size()); size_t num_slice_indices = begins.size(); size_t num_real_axes = 0; @@ -35,7 +37,7 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, // and are not the ellipsis). 
for (size_t i = 0; i < num_slice_indices; i++) { if (ellipsis_mask.count(i)) { - NGRAPH_CHECK(!ellipsis_found); + OPENVINO_ASSERT(!ellipsis_found); ellipsis_found = true; } else if (new_axis_mask.count(i)) { num_new_axes++; @@ -47,7 +49,11 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, } } - NGRAPH_CHECK(num_real_axes <= input_shape.size(), "num_real_axes=", num_real_axes, ", input_shape=", input_shape); + OPENVINO_ASSERT(num_real_axes <= input_shape.size(), + "num_real_axes=", + num_real_axes, + ", input_shape=", + input_shape); // Figure out how many axes need to be inserted when the ellipsis (which // may be an implicit ellipsis at the end) is expanded. @@ -100,7 +106,7 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, // Note that clipping is not used for "shrunken" axes: an // out-of-bounds index is an error. - NGRAPH_CHECK(begin >= -(int64_t(input_shape[i_in])) && begin < int64_t(input_shape[i_in])); + OPENVINO_ASSERT(begin >= -(int64_t(input_shape[i_in])) && begin < int64_t(input_shape[i_in])); if (begin < 0) { begin += int64_t(input_shape[i_in]); @@ -142,7 +148,7 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, real_end = std::max(min_real_end, std::min(int64_t(input_shape[i_in]), real_end)); // Ensure stride is not zero, and adjust it for backwards slicing. - NGRAPH_CHECK(strides[i] != 0); + OPENVINO_ASSERT(strides[i] != 0); int64_t real_stride = std::abs(strides[i]); // Adjust for reversal if needed. This isn't quite as simple as swapping begin and @@ -157,8 +163,7 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, p.reverse_axes.insert(i_out); } - // nGraph's slice op does not like it when end < begin, so we truncate for that case - // here. + // ov slice op does not like it when end < begin, so we truncate for that case here. 
if (real_end < real_begin) { real_end = real_begin; } @@ -194,7 +199,50 @@ SlicePlan ngraph::make_slice_plan(const Shape& input_shape, return p; } -bool SlicePlan::operator==(const ngraph::SlicePlan& other) const { +bool SlicePlan::operator==(const SlicePlan& other) const { + bool equal = true; + equal &= begins == other.begins; + equal &= ends == other.ends; + equal &= strides == other.strides; + equal &= reshape_in_shape == other.reshape_in_shape; + equal &= reshape_out_shape == other.reshape_out_shape; + equal &= reverse_axes == other.reverse_axes; + + return equal; +} + +bool SlicePlan::operator!=(const SlicePlan& other) const { + return !(*this == other); +} +} // namespace util +} // namespace op +} // namespace ov + +NGRAPH_SUPPRESS_DEPRECATED_START +namespace ngraph { + +SlicePlan make_slice_plan(const Shape& input_shape, + const std::vector& begins, + const std::vector& ends, + const std::vector& strides, + const AxisSet& lower_bounds_mask, + const AxisSet& upper_bounds_mask, + const AxisSet& new_axis_mask, + const AxisSet& shrink_axis_mask, + const AxisSet& ellipsis_mask) { + const auto sp = ov::op::util::make_slice_plan(input_shape, + begins, + ends, + strides, + lower_bounds_mask, + upper_bounds_mask, + new_axis_mask, + shrink_axis_mask, + ellipsis_mask); + return SlicePlan{sp.begins, sp.ends, sp.strides, sp.reshape_in_shape, sp.reshape_out_shape, sp.reverse_axes}; +} + +bool SlicePlan::operator==(const SlicePlan& other) const { bool equal = true; equal &= begins == other.begins; equal &= ends == other.ends; @@ -206,6 +254,8 @@ bool SlicePlan::operator==(const ngraph::SlicePlan& other) const { return equal; } -bool SlicePlan::operator!=(const ngraph::SlicePlan& other) const { +bool SlicePlan::operator!=(const SlicePlan& other) const { return !(*this == other); } +} // namespace ngraph +NGRAPH_SUPPRESS_DEPRECATED_END diff --git a/src/core/src/op/xor.cpp b/src/core/src/op/xor.cpp index 7fd85e8600f865..8a3b9c9e2c4303 100644 --- a/src/core/src/op/xor.cpp 
+++ b/src/core/src/op/xor.cpp @@ -2,105 +2,103 @@ // SPDX-License-Identifier: Apache-2.0 // -#include "ngraph/op/xor.hpp" +#include "openvino/op/xor.hpp" +#include "element_visitor.hpp" #include "itt.hpp" -#include "ngraph/runtime/host_tensor.hpp" -#include "ngraph/validation_util.hpp" +#include "openvino/op/logical_xor.hpp" #include "openvino/reference/xor.hpp" +#include "shape_util.hpp" -using namespace std; -using namespace ngraph; - -op::v1::LogicalXor::LogicalXor(const Output& arg0, - const Output& arg1, - const AutoBroadcastSpec& auto_broadcast) - : BinaryElementwiseLogical(arg0, arg1, auto_broadcast) { - constructor_validate_and_infer_types(); -} +namespace ov { +namespace op { +namespace logxor { +struct Evaluate : element::NoAction { + using element::NoAction::visit; -shared_ptr op::v1::LogicalXor::clone_with_new_inputs(const OutputVector& new_args) const { - OV_OP_SCOPE(v1_LogicalXor_clone_with_new_inputs); - check_new_args_count(this, new_args); - return make_shared(new_args.at(0), new_args.at(1), this->get_autob()); -} + template + static result_type visit(const Tensor& arg0, + const Tensor& arg1, + Tensor& out, + const AutoBroadcastSpec& broadcast_spec) { + using T = typename element_type_traits::value_type; + reference::logical_xor(arg0.data(), + arg1.data(), + out.data(), + arg0.get_shape(), + arg1.get_shape(), + broadcast_spec); + return true; + } +}; -OPENVINO_SUPPRESS_DEPRECATED_START -namespace logxor { namespace { -template -bool evaluate(const HostTensorPtr& arg0, - const HostTensorPtr& arg1, - const HostTensorPtr& out, - const op::AutoBroadcastSpec& broadcast_spec) { - ov::reference::logical_xor(arg0->get_data_ptr(), - arg1->get_data_ptr(), - out->get_data_ptr(), - arg0->get_shape(), - arg1->get_shape(), - broadcast_spec); - return true; +bool input_supported_type(const element::Type& et) { + return et == element::boolean; } -bool evaluate_logxor(const HostTensorPtr& arg0, - const HostTensorPtr& arg1, - const HostTensorPtr& out, - const 
op::AutoBroadcastSpec& broadcast_spec) { - bool rc = true; - out->set_broadcast(broadcast_spec, arg0, arg1); - switch (arg0->get_element_type()) { - NGRAPH_TYPE_CASE(evaluate_logxor, boolean, arg0, arg1, out, broadcast_spec); - default: - rc = false; - break; - } - return rc; +bool evaluate(TensorVector& outputs, const TensorVector& inputs, const AutoBroadcastSpec& broadcast_spec) { + OPENVINO_ASSERT(outputs.size() == 1); + OPENVINO_ASSERT(inputs.size() == 2); + + outputs[0].set_shape(ov::util::get_broadcast_shape(inputs[0].get_shape(), inputs[1].get_shape(), broadcast_spec)); + + using namespace ov::element; + return IfTypeOf::apply(inputs[0].get_element_type(), + inputs[0], + inputs[1], + outputs[0], + broadcast_spec); } } // namespace } // namespace logxor -bool op::v1::LogicalXor::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const { - OV_OP_SCOPE(v1_LogicalXor_evaluate); - OPENVINO_SUPPRESS_DEPRECATED_START - NGRAPH_CHECK(validate_host_tensor_vector(outputs, 1) && validate_host_tensor_vector(inputs, 2)); - OPENVINO_SUPPRESS_DEPRECATED_END - return logxor::evaluate_logxor(inputs[0], inputs[1], outputs[0], get_autob()); +namespace v0 { +Xor::Xor(const Output& arg0, const Output& arg1, const AutoBroadcastSpec& auto_broadcast) + : BinaryElementwiseLogical(arg0, arg1, auto_broadcast) { + constructor_validate_and_infer_types(); } -bool op::v1::LogicalXor::has_evaluate() const { - OV_OP_SCOPE(v1_LogicalXor_has_evaluate); - switch (get_input_element_type(0)) { - case ngraph::element::boolean: - return true; - default: - break; - } - return false; +std::shared_ptr Xor::clone_with_new_inputs(const OutputVector& new_args) const { + OV_OP_SCOPE(v0_Xor_clone_with_new_inputs); + check_new_args_count(this, new_args); + return std::make_shared(new_args.at(0), new_args.at(1), this->get_autob()); +} + +bool Xor::evaluate(TensorVector& outputs, const TensorVector& inputs) const { + OV_OP_SCOPE(v0_Xor_evaluate); + + return logxor::evaluate(outputs, 
inputs, get_autob()); +} + +bool Xor::has_evaluate() const { + OV_OP_SCOPE(v0_Xor_has_evaluate); + return logxor::input_supported_type(get_input_element_type(0)); } +} // namespace v0 -op::v0::Xor::Xor(const Output& arg0, const Output& arg1, const AutoBroadcastSpec& auto_broadcast) +namespace v1 { +LogicalXor::LogicalXor(const Output& arg0, const Output& arg1, const AutoBroadcastSpec& auto_broadcast) : BinaryElementwiseLogical(arg0, arg1, auto_broadcast) { constructor_validate_and_infer_types(); } -shared_ptr op::v0::Xor::clone_with_new_inputs(const OutputVector& new_args) const { - OV_OP_SCOPE(v0_Xor_clone_with_new_inputs); +std::shared_ptr LogicalXor::clone_with_new_inputs(const OutputVector& new_args) const { + OV_OP_SCOPE(v1_LogicalXor_clone_with_new_inputs); check_new_args_count(this, new_args); - return make_shared(new_args.at(0), new_args.at(1), this->get_autob()); + return std::make_shared(new_args.at(0), new_args.at(1), this->get_autob()); } -bool op::v0::Xor::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const { - OV_OP_SCOPE(v0_Xor_evaluate); - return logxor::evaluate_logxor(inputs[0], inputs[1], outputs[0], get_autob()); +bool LogicalXor::evaluate(TensorVector& outputs, const TensorVector& inputs) const { + OV_OP_SCOPE(v1_LogicalXor_evaluate); + + return logxor::evaluate(outputs, inputs, get_autob()); } -bool op::v0::Xor::has_evaluate() const { - OV_OP_SCOPE(v0_Xor_has_evaluate); - switch (get_input_element_type(0)) { - case ngraph::element::boolean: - return true; - default: - break; - } - return false; +bool LogicalXor::has_evaluate() const { + OV_OP_SCOPE(v1_LogicalXor_has_evaluate); + return logxor::input_supported_type(get_input_element_type(0)); } +} // namespace v1 +} // namespace op +} // namespace ov diff --git a/src/frontends/pytorch/src/op/tuple_index.cpp b/src/frontends/pytorch/src/op/tuple_index.cpp new file mode 100644 index 00000000000000..320733d701284d --- /dev/null +++ 
b/src/frontends/pytorch/src/op/tuple_index.cpp @@ -0,0 +1,39 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "openvino/frontend/pytorch/node_context.hpp" +#include "openvino/op/constant.hpp" +#include "openvino/op/convert_like.hpp" +#include "openvino/op/gather.hpp" +#include "utils.hpp" + +namespace ov { +namespace frontend { +namespace pytorch { +namespace op { + +using namespace ov::op; + +OutputVector translate_tuple_index(const NodeContext& context) { + // prim::TupleIndex(Any tup, int i) -> Any + num_inputs_check(context, 2, 2); + auto tuple = context.get_input(0).get_node_shared_ptr(); + if (cast_fw_node(tuple, "prim::TupleConstruct")) { + // this case requires the index to be constant + auto index = context.const_input<int64_t>(1); + FRONT_END_OP_CONVERSION_CHECK(static_cast<size_t>(index) < tuple->get_input_size(), + "Index of TupleIndex operation is higher than the number of tuple elements."); + return {tuple->get_input_source_output(index)}; + } else { + // Assume the tuple is represented as a tensor in this case + auto index = context.get_input(1); + auto zero = v0::Constant::create(element::i32, Shape{}, {0}); + return {std::make_shared<v8::Gather>(context.get_input(0), index, zero)}; + } +}; + +} // namespace op +} // namespace pytorch +} // namespace frontend +} // namespace ov \ No newline at end of file diff --git a/src/frontends/pytorch/src/op_table.cpp b/src/frontends/pytorch/src/op_table.cpp index 4dd60e01b71d7f..f475c2cd186275 100644 --- a/src/frontends/pytorch/src/op_table.cpp +++ b/src/frontends/pytorch/src/op_table.cpp @@ -165,6 +165,7 @@ OP_CONVERTER(translate_topk); OP_CONVERTER(translate_transpose); OP_CONVERTER(translate_tril); OP_CONVERTER(translate_triu); +OP_CONVERTER(translate_tuple_index); OP_CONVERTER(translate_unflatten); OP_CONVERTER(translate_unfold); OP_CONVERTER(translate_upsample_bicubic2d); @@ -479,6 +480,7 @@ const std::map<std::string, CreatorFunction> get_supported_ops_ts() { {"prim::requires_grad", op::return_false_scalar},
{"prim::PythonOp", op::translate_pythonop}, {"prim::type", op::skip_node}, // Used with prim::device, pass PtFrameworkNode. + {"prim::TupleIndex", op::translate_tuple_index}, {"quantized::add", op::translate_quantized_add}, {"quantized::add_relu", op::translate_quantized_add_relu}, {"quantized::cat", op::translate_quantized_cat}, diff --git a/src/frontends/pytorch/src/transforms/softmax_reshape_elimination.cpp b/src/frontends/pytorch/src/transforms/softmax_reshape_elimination.cpp index a14fe910160574..821007819848ed 100644 --- a/src/frontends/pytorch/src/transforms/softmax_reshape_elimination.cpp +++ b/src/frontends/pytorch/src/transforms/softmax_reshape_elimination.cpp @@ -24,7 +24,7 @@ SoftmaxReshapeElimination::SoftmaxReshapeElimination() { register_matcher( std::make_shared<ov::pass::pattern::Matcher>(m_reshape1, - "ov::frontend::pytorch::pass::PrimTupleUnpackReplacer"), + "ov::frontend::pytorch::pass::SoftmaxReshapeElimination"), [=](ov::pass::pattern::Matcher& m) { auto& pattern_to_output = m.get_pattern_value_map(); auto reshape0 = pattern_to_output[m_reshape0].get_node_shared_ptr(); diff --git a/src/inference/dev_api/performance_heuristics.hpp b/src/inference/dev_api/performance_heuristics.hpp index 3fba34188080d7..6fde6443f246eb 100644 --- a/src/inference/dev_api/performance_heuristics.hpp +++ b/src/inference/dev_api/performance_heuristics.hpp @@ -1,12 +1,13 @@ // Copyright (C) 2018-2023 Intel Corporation // SPDX-License-Identifier: Apache-2.0 // - -/////////////////////////////////////////////////////////////////////////////////////////////////// #pragma once + #include +#include -#include "ngraph/ngraph.hpp" +#include "openvino/core/model.hpp" +#include "transformations/utils/utils.hpp" namespace ov { struct MemBandwidthPressure { @@ -24,7 +25,7 @@ struct MemBandwidthPressure { }; static MemBandwidthPressure MemBandwidthPressureTolerance( - const std::shared_ptr<ngraph::Function> nGraphFunc, + const std::shared_ptr<ov::Model> model, const float cache_size, const float memThresholdAssumeLimited =
MemBandwidthPressure::LIMITED) { int total_convs = 0, mem_limited_convs = 0, compute_convs = 0, total_gemms = 0, mem_limited_gemms = 0, @@ -32,16 +33,16 @@ static MemBandwidthPressure MemBandwidthPressureTolerance( auto memLimitedFactor = [&](size_t size_data_moved, int datatype_size = 4) -> float { return (cache_size / (size_data_moved * datatype_size)); }; - auto isLowPrecision = [&](ngraph::element::Type type) -> bool { - return (type == ngraph::element::i8) || (type == ngraph::element::u8); + auto isLowPrecision = [&](ov::element::Type type) -> bool { + return (type == ov::element::i8) || (type == ov::element::u8); }; - auto isHalfPrecision = [&](ngraph::element::Type type) -> bool { - return (type == ngraph::element::bf16) || (type == ngraph::element::f16); + auto isHalfPrecision = [&](ov::element::Type type) -> bool { + return (type == ov::element::bf16) || (type == ov::element::f16); }; float worst_case = MemBandwidthPressure::UNKNOWN; - // Traverse nGraph Function in topological order - for (auto& node : nGraphFunc->get_ordered_ops()) { + // Traverse OpenVINO Model in topological order + for (auto& node : model->get_ordered_ops()) { const auto node_name = node->get_type_info().name; if (std::strcmp("MatMul", node_name) && std::strcmp("Convolution", node_name) && std::strcmp("ConvolutionBackpropData", node_name)) { diff --git a/src/inference/src/check_network_batchable.cpp b/src/inference/src/check_network_batchable.cpp index 869ee44948d5c3..b456fec87912b2 100644 --- a/src/inference/src/check_network_batchable.cpp +++ b/src/inference/src/check_network_batchable.cpp @@ -1,7 +1,5 @@ #include "check_network_batchable.hpp" -#include "ie_ngraph_utils.hpp" -#include "ngraph/opsets/opset.hpp" #include "openvino/core/dimension_tracker.hpp" #include "openvino/op/detection_output.hpp" #include "openvino/op/ops.hpp" diff --git a/src/inference/src/compilation_context.cpp b/src/inference/src/compilation_context.cpp index 7c0b9cfba869c6..c71b83c6df9fee 100644 --- 
a/src/inference/src/compilation_context.cpp +++ b/src/inference/src/compilation_context.cpp @@ -16,7 +16,6 @@ #include "details/ie_exception.hpp" #include "file_utils.h" #include "itt.hpp" -#include "ngraph/opsets/opset6.hpp" #include "openvino/pass/manager.hpp" #include "transformations/hash.hpp" #include "transformations/rt_info/fused_names_attribute.hpp" diff --git a/src/inference/src/cpp/ie_remote_context.cpp b/src/inference/src/cpp/ie_remote_context.cpp index b23d6735f26a60..10dde33bb6158b 100644 --- a/src/inference/src/cpp/ie_remote_context.cpp +++ b/src/inference/src/cpp/ie_remote_context.cpp @@ -7,7 +7,6 @@ #include #include "any_copy.hpp" -#include "ie_ngraph_utils.hpp" #include "ie_remote_blob.hpp" #include "openvino/core/except.hpp" #include "openvino/runtime/iremote_context.hpp" diff --git a/src/inference/src/dev/make_tensor.cpp b/src/inference/src/dev/make_tensor.cpp index 94ddf723eabc4c..e250f640e9a3e1 100644 --- a/src/inference/src/dev/make_tensor.cpp +++ b/src/inference/src/dev/make_tensor.cpp @@ -7,7 +7,6 @@ #include #include "ie_blob.h" -#include "ie_ngraph_utils.hpp" #include "ie_remote_blob.hpp" #include "openvino/runtime/iremote_tensor.hpp" #include "openvino/runtime/properties.hpp" diff --git a/src/inference/src/model_reader.cpp b/src/inference/src/model_reader.cpp index 203ffac5c85678..1837d75a2d44aa 100644 --- a/src/inference/src/model_reader.cpp +++ b/src/inference/src/model_reader.cpp @@ -39,8 +39,8 @@ void update_v10_model(std::shared_ptr& model, bool frontendMode = fal const auto outputs = model->outputs(); for (size_t i = 0; i < outputs.size(); ++i) { if (!frontendMode) { - const auto ngraph_type = outputs[i].get_element_type(); - const auto legacy_type = InferenceEngine::details::toLegacyType(ngraph_type, false); + const auto ov_type = outputs[i].get_element_type(); + const auto legacy_type = InferenceEngine::details::toLegacyType(ov_type, false); prepost.output(i).tensor().set_element_type(legacy_type); } for (const auto& name : 
outputs[i].get_names()) { diff --git a/src/inference/src/os/win/win_system_conf.cpp b/src/inference/src/os/win/win_system_conf.cpp index 01460db7cc8c2c..280f5ca24eb474 100644 --- a/src/inference/src/os/win/win_system_conf.cpp +++ b/src/inference/src/os/win/win_system_conf.cpp @@ -185,10 +185,13 @@ void parse_processor_info_win(const char* base_ptr, } num_blocked++; } else if (1 == list_len) { - _cpu_mapping_table[list[0] + base_proc][CPU_MAP_CORE_TYPE] = MAIN_CORE_PROC; - _cpu_mapping_table[list[0] + base_proc][CPU_MAP_GROUP_ID] = group; - _proc_type_table[0][MAIN_CORE_PROC]++; - group++; + if ((_cpu_mapping_table.size() > list[0]) && + (_cpu_mapping_table[list[0] + base_proc][CPU_MAP_CORE_TYPE] == -1)) { + _cpu_mapping_table[list[0] + base_proc][CPU_MAP_CORE_TYPE] = MAIN_CORE_PROC; + _cpu_mapping_table[list[0] + base_proc][CPU_MAP_GROUP_ID] = group; + _proc_type_table[0][MAIN_CORE_PROC]++; + group++; + } } } } diff --git a/src/inference/tests/unit/cpu_map_parser/parser_windows.cpp b/src/inference/tests/unit/cpu_map_parser/parser_windows.cpp index 1e0da72b68199e..560134c0890054 100644 --- a/src/inference/tests/unit/cpu_map_parser/parser_windows.cpp +++ b/src/inference/tests/unit/cpu_map_parser/parser_windows.cpp @@ -2050,6 +2050,84 @@ WinCpuMapTestCase _1sockets_10cores_hyperthreading = { "0000000000000000000000000000000000000000000000000000000000000000000000000ff0f000000000000"}, }; +WinCpuMapTestCase _1sockets_6cores_hyperthreading_FMT7 = { + 12, + 1, + 1, + 6, + {{12, 6, 0, 6, 0, 0}}, + { + {0, 0, 0, 0, HYPER_THREADING_PROC, 0, -1}, + {1, 0, 0, 0, MAIN_CORE_PROC, 0, -1}, + {2, 0, 0, 1, HYPER_THREADING_PROC, 1, -1}, + {3, 0, 0, 1, MAIN_CORE_PROC, 1, -1}, + {4, 0, 0, 2, HYPER_THREADING_PROC, 2, -1}, + {5, 0, 0, 2, MAIN_CORE_PROC, 2, -1}, + {6, 0, 0, 3, HYPER_THREADING_PROC, 3, -1}, + {7, 0, 0, 3, MAIN_CORE_PROC, 3, -1}, + {8, 0, 0, 4, HYPER_THREADING_PROC, 4, -1}, + {9, 0, 0, 4, MAIN_CORE_PROC, 4, -1}, + {10, 0, 0, 5, HYPER_THREADING_PROC, 5, -1}, + {11, 0, 0, 5, 
MAIN_CORE_PROC, 5, -1}, + }, + {"02000000380000000108400000800000020000000000000000000000000000000000000000000000010000000000000000000000000000000" + "20000003800000001084000008000000100000000000000000000000000000000000000000000000100000000000000000000000000000002" + "00000038000000020840000000080000000000000000000000000000000000000000000000000001000000000000000000000000000000020" + "00000380000000301400000000002000000000000000000000000000000000000000000000000010000000000000000000000000000000000" + "00003000000001000000000000000000000000000000000000000000010003000000000000000000000000000000020000003800000001084" + "00000800000020000000000000000000000000000000000000000000000020000000000000000000000000000000200000038000000010840" + "00008000000100000000000000000000000000000000000000000000000200000000000000000000000000000002000000380000000208400" + "00000080000000000000000000000000000000000000000000000000002000000000000000000000000000000020000003800000003014000" + "00000002000000000000000000000000000000000000000000000000020000000000000000000000000000000200000038000000010840000" + "08000000200000000000000000000000000000000000000000000000400000000000000000000000000000002000000380000000108400000" + "80000001000000000000000000000000000000000000000000000004000000000000000000000000000000020000003800000002084000000" + "00800000000000000000000000000000000000000000000000000040000000000000000000000000000000200000038000000030140000000" + "00020000000000000000000000000000000000000000000000000400000000000000000000000000000000000000300000000100000000000" + "000000000000000000000000000000001000c0000000000000000000000000000000200000038000000010840000080000002000000000000" + "00000000000000000000000000000000000800000000000000000000000000000002000000380000000108400000800000010000000000000" + "00000000000000000000000000000000008000000000000000000000000000000020000003800000002084000000008000000000000000000" + 
"00000000000000000000000000000000080000000000000000000000000000000200000038000000030140000000000200000000000000000" + "00000000000000000000000000000000800000000000000000000000000000002000000380000000108400000800000020000000000000000" + "00000000000000000000000000000010000000000000000000000000000000020000003800000001084000008000000100000000000000000" + "00000000000000000000000000000100000000000000000000000000000000200000038000000020840000000080000000000000000000000" + "00000000000000000000000000001000000000000000000000000000000002000000380000000301400000000002000000000000000000000" + "00000000000000000000000000010000000000000000000000000000000000000003000000001000000000000000000000000000000000000" + "00000001003000000000000000000000000000000002000000380000000108400000800000020000000000000000000000000000000000000" + "00000000020000000000000000000000000000000020000003800000001084000008000000100000000000000000000000000000000000000" + "00000000200000000000000000000000000000000200000038000000020840000000080000000000000000000000000000000000000000000" + "00000002000000000000000000000000000000002000000380000000301400000000002000000000000000000000000000000000000000000" + "00000020000000000000000000000000000000020000003800000001084000008000000200000000000000000000000000000000000000000" + "00000400000000000000000000000000000000200000038000000010840000080000001000000000000000000000000000000000000000000" + "00004000000000000000000000000000000002000000380000000208400000000800000000000000000000000000000000000000000000000" + "00040000000000000000000000000000000020000003800000003014000000000020000000000000000000000000000000000000000000000" + "00400000000000000000000000000000000000000030000000010000000000000000000000000000000000000000000100c00000000000000" + "00000000000000000020000003800000001084000008000000200000000000000000000000000000000000000000000008000000000000000" + 
"00000000000000000200000038000000010840000080000001000000000000000000000000000000000000000000000080000000000000000" + "00000000000000002000000380000000208400000000800000000000000000000000000000000000000000000000000800000000000000000" + "00000000000000020000003800000003014000000000020000000000000000000000000000000000000000000000008000000000000000000" + "00000000000000200000038000000010840000080000002000000000000000000000000000000000000000000000000010000000000000000" + "00000000000002000000380000000108400000800000010000000000000000000000000000000000000000000000000100000000000000000" + "00000000000020000003800000002084000000008000000000000000000000000000000000000000000000000000001000000000000000000" + "00000000000200000038000000030140000000000200000000000000000000000000000000000000000000000000010000000000000000000" + "00000000000000000300000000100000000000000000000000000000000000000000001000003000000000000000000000000000002000000" + "38000000010840000080000002000000000000000000000000000000000000000000000000020000000000000000000000000000020000003" + "80000000108400000800000010000000000000000000000000000000000000000000000000200000000000000000000000000000200000038" + "00000002084000000008000000000000000000000000000000000000000000000000000002000000000000000000000000000002000000380" + "00000030140000000000200000000000000000000000000000000000000000000000000020000000000000000000000000000020000003800" + "00000108400000800000020000000000000000000000000000000000000000000000000400000000000000000000000000000200000038000" + "00001084000008000000100000000000000000000000000000000000000000000000004000000000000000000000000000002000000380000" + "00020840000000080000000000000000000000000000000000000000000000000000040000000000000000000000000000020000003800000" + "00301400000000002000000000000000000000000000000000000000000000000000400000000000000000000000000000300000030000000" + 
"000000000000000000000000000000000000000000000100ff0f0000000000000000000000000000000000003000000001000000000000000" + "0000000000000000000000000000100000c000000000000000000000000000002000000380000000108400000800000020000000000000000" + "00000000000000000000000000000000080000000000000000000000000000020000003800000001084000008000000100000000000000000" + "00000000000000000000000000000000800000000000000000000000000000200000038000000020840000000080000000000000000000000" + "00000000000000000000000000000008000000000000000000000000000002000000380000000301400000000002000000000000000000000" + "00000000000000000000000000000080000000000000000000000000000010000003000000000000000000000000000000000000000000000" + "0000000000ff0f000000000000000000000000000004000000500000000100010000000000000000000000000000000000000000000c0c000" + "0000000000000000000000000000000000000000000000000000000000000000000000000ff0f000000000000"}, +}; + WinCpuMapTestCase _1sockets_4cores_hyperthreading = { 8, 1, @@ -2205,6 +2283,7 @@ INSTANTIATE_TEST_SUITE_P(CPUMap, _1sockets_14cores_hyperthreading_set2, _1sockets_14cores_hyperthreading_set3, _1sockets_10cores_hyperthreading, + _1sockets_6cores_hyperthreading_FMT7, _1sockets_4cores_hyperthreading, _1sockets_4cores_hyperthreading_1_FMT7, _1sockets_4cores_hyperthreading_2_FMT7, diff --git a/src/plugins/intel_cpu/src/nodes/fullyconnected.cpp b/src/plugins/intel_cpu/src/nodes/fullyconnected.cpp index 099db584f12456..deac818e4fbb98 100644 --- a/src/plugins/intel_cpu/src/nodes/fullyconnected.cpp +++ b/src/plugins/intel_cpu/src/nodes/fullyconnected.cpp @@ -473,6 +473,19 @@ void FullyConnected::prepareParams() { } if (!prevExecPtr || !execPtr->getWeightDesc()->isCompatible(*(prevExecPtr->getWeightDesc()))) { +#ifdef CPU_DEBUG_CAPS + // execPtr expects different weight layout. 
+ if (prevExecPtr) { + const Shape weiShape{getParentEdgesAtPort(1)[0]->getMemoryPtr()->getStaticDims()}; + DEBUG_LOG("##", getName(), " weight desc is not compatible with previous inner product execPtr!"); + DEBUG_LOG("#", static_cast<float>(execPtr->getWeightDesc()->getMaxMemSize()) / static_cast<float>(1<<20), + "#", weiShape.toString(), + "#", prevExecPtr->getImplementationType() == brgconv_avx512_1x1 ? "Conv1x1," : "FullyConnect,", + "#", execPtr->getImplementationType() == brgconv_avx512_1x1 ? "Conv1x1," : "FullyConnect,", + "#", *prevExecPtr->getWeightDesc(), + "#", *execPtr->getWeightDesc()); + } +#endif if (weightsNonTransposed) { primArgs[DNNL_ARG_WEIGHTS] = prepareWeightMemory(execPtr->getWeightDesc(), makeTransposedWeightDescriptor())->getPrimitive(); } else { @@ -1004,7 +1017,10 @@ bool FullyConnected::canBeExecutedInConv1x1() const { widthInConv = srcDims[inRank - 2]; K = srcDims[inRank - 1]; N = weightDims[0]; - + // Disable Conv1x1 when the weight size is >= 16 MB, to avoid switching to a different weight layout when input activation shapes differ. + // As a consequence, peak memory consumption in LLMs can be decreased.
+ if (weightMemPtr->getSize() >= (16 * 1 << 20)) + retVal = false; if (!(widthInConv >= 2 && widthInConv <= 3136 && K >= 96 && K <= 4096 && N >= 96 && N <= K * 4)) diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp index c9543c22b55226..3bfe2fb06b814d 100644 --- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp +++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp @@ -5,15 +5,15 @@ #include #include "common_test_utils/test_constants.hpp" -#include "single_layer_tests/convolution.hpp" +#include "single_op_tests/convolution.hpp" -using namespace LayerTestsDefinitions; namespace { +using ov::test::ConvolutionLayerTest; -const std::vector<InferenceEngine::Precision> netPrecisions = { - InferenceEngine::Precision::FP32, InferenceEngine::Precision::FP16, - InferenceEngine::Precision::I32}; +const std::vector<ov::element::Type> model_type = { + ov::element::f32, ov::element::f16, + ov::element::i32}; /* ============= 1D Convolution ============= */ const std::vector<std::vector<size_t>> kernels1D = {{3}, {5}}; @@ -27,35 +27,28 @@ const auto conv1DParams_ExplicitPadding = ::testing::Combine( ::testing::ValuesIn(kernels1D), ::testing::ValuesIn(strides1D), ::testing::ValuesIn(padBegins1D), ::testing::ValuesIn(padEnds1D), ::testing::ValuesIn(dilations1D), ::testing::ValuesIn(numOutChannels1D), - ::testing::Values(ngraph::op::PadType::EXPLICIT)); + ::testing::Values(ov::op::PadType::EXPLICIT)); const auto conv1DParams_AutoPadValid = ::testing::Combine( ::testing::ValuesIn(kernels1D), ::testing::ValuesIn(strides1D), ::testing::Values(std::vector<ptrdiff_t>({0})), ::testing::Values(std::vector<ptrdiff_t>({0})), ::testing::ValuesIn(dilations1D), ::testing::ValuesIn(numOutChannels1D), - ::testing::Values(ngraph::op::PadType::VALID)); + ::testing::Values(ov::op::PadType::VALID)); INSTANTIATE_TEST_SUITE_P(
smoke_Convolution1D_ExplicitPadding, ConvolutionLayerTest, ::testing::Combine( - conv1DParams_ExplicitPadding, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 30})), + conv1DParams_ExplicitPadding, + ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 30}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); INSTANTIATE_TEST_SUITE_P( smoke_Convolution1D_AutoPadValid, ConvolutionLayerTest, ::testing::Combine( - conv1DParams_AutoPadValid, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 30})), + conv1DParams_AutoPadValid, ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 30}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); @@ -82,24 +75,16 @@ const auto conv2DParams_AutoPadValid = ::testing::Combine( INSTANTIATE_TEST_SUITE_P( smoke_Convolution2D_ExplicitPadding, ConvolutionLayerTest, ::testing::Combine( - conv2DParams_ExplicitPadding, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 30, 30})), + conv2DParams_ExplicitPadding, ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 
30, 30}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); INSTANTIATE_TEST_SUITE_P( smoke_Convolution2D_AutoPadValid, ConvolutionLayerTest, ::testing::Combine( - conv2DParams_AutoPadValid, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 30, 30})), + conv2DParams_AutoPadValid, ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 30, 30}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); @@ -122,12 +107,8 @@ const auto conv2DParams_WeightLayout = ::testing::Combine(::testing::Values(kern INSTANTIATE_TEST_SUITE_P(smoke_Convolution2D_SpecificWeightLayout, ConvolutionLayerTest, ::testing::Combine(conv2DParams_WeightLayout, - ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 1, 50, 75})), + ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 1, 50, 75}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); } // namespace specificWeightLayout @@ -154,24 +135,16 @@ const auto conv3DParams_AutoPadValid = ::testing::Combine( INSTANTIATE_TEST_SUITE_P( smoke_Convolution3D_ExplicitPadding, ConvolutionLayerTest, ::testing::Combine( - conv3DParams_ExplicitPadding, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - 
::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 10, 10, 10})), + conv3DParams_ExplicitPadding, ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 10, 10, 10}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); INSTANTIATE_TEST_SUITE_P( smoke_Convolution3D_AutoPadValid, ConvolutionLayerTest, ::testing::Combine( - conv3DParams_AutoPadValid, ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(std::vector({1, 3, 10, 10, 10})), + conv3DParams_AutoPadValid, ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 3, 10, 10, 10}})), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionLayerTest::getTestCaseName); diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution_backprop_data.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution_backprop_data.cpp index 1326377941f6ce..f2a3adfd3436fa 100644 --- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution_backprop_data.cpp +++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/convolution_backprop_data.cpp @@ -4,31 +4,30 @@ #include -#include "single_layer_tests/convolution_backprop_data.hpp" +#include "single_op_tests/convolution_backprop_data.hpp" #include "common_test_utils/test_constants.hpp" -using namespace LayerTestsDefinitions; - namespace { +using ov::test::ConvolutionBackpropDataLayerTest; -const std::vector netPrecisions = { - InferenceEngine::Precision::FP32, - 
InferenceEngine::Precision::FP16 +const std::vector model_type = { + ov::element::f32, + ov::element::f16 }; const std::vector numOutChannels = {1, 5, 16}; -const std::vector> emptyOutputShape = {{}}; -const std::vector> emptyOutputPadding = {{}}; +const std::vector emptyOutputShape = {{}}; +const std::vector> emptyOutputPadding = {{}}; /* ============= 2D ConvolutionBackpropData ============= */ -const std::vector> inputShapes2D = {{1, 3, 30, 30}, - {1, 16, 10, 10}, - {1, 32, 10, 10}}; -const std::vector> kernels2D = {{1, 1}, {3, 3}, {3, 5}}; -const std::vector> strides2D = {{1, 1}, {1, 3}}; +const std::vector> inputShapes2D_static = {{{1, 3, 30, 30}}, + {{1, 16, 10, 10}}, + {{1, 32, 10, 10}}}; +const std::vector> kernels2D = {/*{1, 1},*/ {3, 3}, {3, 5}}; +const std::vector> strides2D = {{1, 1}, {1, 3}}; const std::vector> padBegins2D = {{0, 0}}; const std::vector> padEnds2D = {{0, 0}, {1, 1}}; -const std::vector> dilations2D = {{1, 1}, {2, 2}}; +const std::vector> dilations2D = {{1, 1}, {2, 2}}; const auto conv2DParams_ExplicitPadding = ::testing::Combine( ::testing::ValuesIn(kernels2D), @@ -37,7 +36,7 @@ const auto conv2DParams_ExplicitPadding = ::testing::Combine( ::testing::ValuesIn(padEnds2D), ::testing::ValuesIn(dilations2D), ::testing::ValuesIn(numOutChannels), - ::testing::Values(ngraph::op::PadType::EXPLICIT), + ::testing::Values(ov::op::PadType::EXPLICIT), ::testing::ValuesIn(emptyOutputPadding) ); const auto conv2DParams_AutoPadValid = ::testing::Combine( @@ -47,19 +46,15 @@ const auto conv2DParams_AutoPadValid = ::testing::Combine( ::testing::Values(std::vector({0, 0})), ::testing::ValuesIn(dilations2D), ::testing::ValuesIn(numOutChannels), - ::testing::Values(ngraph::op::PadType::VALID), + ::testing::Values(ov::op::PadType::VALID), ::testing::ValuesIn(emptyOutputPadding) ); INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_ExplicitPadding, ConvolutionBackpropDataLayerTest, ::testing::Combine( conv2DParams_ExplicitPadding, - 
::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::ValuesIn(inputShapes2D), + ::testing::ValuesIn(model_type), + ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes2D_static)), ::testing::ValuesIn(emptyOutputShape), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionBackpropDataLayerTest::getTestCaseName); @@ -67,28 +62,20 @@ INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_ExplicitPadding, Convol INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_AutoPadValid, ConvolutionBackpropDataLayerTest, ::testing::Combine( conv2DParams_AutoPadValid, - ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::ValuesIn(inputShapes2D), + ::testing::ValuesIn(model_type), + ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes2D_static)), ::testing::ValuesIn(emptyOutputShape), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionBackpropDataLayerTest::getTestCaseName); -const std::vector> inputShape2D = {{1, 3, 9, 12}}; -const std::vector> outputShapes2D = {{6, 6}, {4, 9}}; +const std::vector inputShape2D_static = {{1, 3, 9, 12}}; +const std::vector outputShapes2D = {{6, 6}, {4, 9}}; INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_OutputShapeDefined, ConvolutionBackpropDataLayerTest, ::testing::Combine( conv2DParams_AutoPadValid, - ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - 
::testing::Values(InferenceEngine::Layout::ANY), - ::testing::ValuesIn(inputShape2D), + ::testing::ValuesIn(model_type), + ::testing::Values(ov::test::static_shapes_to_test_representation(inputShape2D_static)), ::testing::ValuesIn(outputShapes2D), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionBackpropDataLayerTest::getTestCaseName); @@ -103,7 +90,7 @@ const auto conv2DParams_ExplicitPadding_output_padding = ::testing::Combine( ::testing::ValuesIn(padEnds2D), ::testing::ValuesIn(dilations2D), ::testing::ValuesIn(numOutChannels), - ::testing::Values(ngraph::op::PadType::EXPLICIT), + ::testing::Values(ov::op::PadType::EXPLICIT), ::testing::ValuesIn(outputPadding2D) ); const auto conv2DParams_AutoPadValid_output_padding = ::testing::Combine( @@ -113,19 +100,15 @@ const auto conv2DParams_AutoPadValid_output_padding = ::testing::Combine( ::testing::Values(std::vector({0, 0})), ::testing::ValuesIn(dilations2D), ::testing::ValuesIn(numOutChannels), - ::testing::Values(ngraph::op::PadType::VALID), + ::testing::Values(ov::op::PadType::VALID), ::testing::ValuesIn(outputPadding2D) ); INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_ExplicitPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest, ::testing::Combine( conv2DParams_AutoPadValid_output_padding, - ::testing::ValuesIn(netPrecisions), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Precision::UNSPECIFIED), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::Values(InferenceEngine::Layout::ANY), - ::testing::ValuesIn(inputShapes2D), + ::testing::ValuesIn(model_type), + ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes2D_static)), ::testing::ValuesIn(emptyOutputShape), ::testing::Values(ov::test::utils::DEVICE_CPU)), ConvolutionBackpropDataLayerTest::getTestCaseName); @@ -133,12 +116,8 @@ INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_ExplicitPadding_OutputP 
 INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData2D_AutoPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest,
     ::testing::Combine(
         conv2DParams_ExplicitPadding_output_padding,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShapes2D),
+        ::testing::ValuesIn(model_type),
+        ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes2D_static)),
         ::testing::ValuesIn(emptyOutputShape),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     ConvolutionBackpropDataLayerTest::getTestCaseName);
@@ -152,23 +131,19 @@ INSTANTIATE_TEST_CASE_P(smoke_ConvolutionBackpropData2D_RoundingOfPadding, Convo
             ::testing::Values(std::vector<ptrdiff_t>({15, 0})),
             ::testing::Values(std::vector<size_t>({1, 1})),
             ::testing::Values(size_t(4)),
-            ::testing::Values(ngraph::op::PadType::SAME_LOWER),
+            ::testing::Values(ov::op::PadType::SAME_LOWER),
             ::testing::Values(std::vector<size_t>({0, 0}))),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({ 1, 512, 2, 1 })),
-        ::testing::Values(std::vector<size_t>({ 16, 1 })),
+        ::testing::Values(ov::element::f32),
+        ::testing::Values(ov::test::static_shapes_to_test_representation({{1, 512, 2, 1}})),
+        ::testing::Values(ov::Shape({ 16, 1 })),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     ConvolutionBackpropDataLayerTest::getTestCaseName);
 
 /* ============= 3D ConvolutionBackpropData ============= */
-const std::vector<std::vector<size_t>> inputShapes3D = {{1, 3, 10, 10, 10},
-                                                        {1, 16, 5, 5, 5},
-                                                        {1, 32, 5, 5, 5}};
-const std::vector<std::vector<size_t>> kernels3D = {{1, 1, 1}, {3, 3, 3}};
+const std::vector<std::vector<ov::Shape>> inputShapes3D_static = {{{1, 3, 10, 10, 10}},
+                                                                  {{1, 16, 5, 5, 5}},
+                                                                  {{1, 32, 5, 5, 5}}};
+const std::vector<std::vector<size_t>> kernels3D = {/*{1, 1, 1},*/ {3, 3, 3}};
 const std::vector<std::vector<size_t>> strides3D = {{1, 1, 1}};
 const std::vector<std::vector<ptrdiff_t>> padBegins3D = {{0, 0, 0}};
 const std::vector<std::vector<ptrdiff_t>> padEnds3D = {{0, 0, 0}, {1, 1, 1}};
@@ -181,7 +156,7 @@ const auto conv3DParams_ExplicitPadding = ::testing::Combine(
         ::testing::ValuesIn(padEnds3D),
         ::testing::ValuesIn(dilations3D),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::EXPLICIT),
+        ::testing::Values(ov::op::PadType::EXPLICIT),
         ::testing::ValuesIn(emptyOutputPadding)
 );
 const auto conv3DParams_AutoPadValid = ::testing::Combine(
@@ -191,19 +166,15 @@ const auto conv3DParams_AutoPadValid = ::testing::Combine(
         ::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
         ::testing::ValuesIn(dilations3D),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::VALID),
+        ::testing::Values(ov::op::PadType::VALID),
         ::testing::ValuesIn(emptyOutputPadding)
 );
 
 INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_ExplicitPadding, ConvolutionBackpropDataLayerTest,
     ::testing::Combine(
         conv3DParams_ExplicitPadding,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShapes3D),
+        ::testing::ValuesIn(model_type),
+        ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes3D_static)),
         ::testing::ValuesIn(emptyOutputShape),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     ConvolutionBackpropDataLayerTest::getTestCaseName);
@@ -211,28 +182,20 @@ INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_ExplicitPadding, Convol
 INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_AutoPadValid, ConvolutionBackpropDataLayerTest,
     ::testing::Combine(
         conv3DParams_AutoPadValid,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShapes3D),
+        ::testing::ValuesIn(model_type),
+        ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes3D_static)),
         ::testing::ValuesIn(emptyOutputShape),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     ConvolutionBackpropDataLayerTest::getTestCaseName);
 
-const std::vector<std::vector<size_t>> inputShape3D = {{1, 3, 10, 10, 10}};
-const std::vector<std::vector<size_t>> outputShapes3D = {{8, 8, 8}, {10, 10, 10}};
+const std::vector<ov::Shape> inputShape3D_static = {{1, 3, 10, 10, 10}};
+const std::vector<ov::Shape> outputShapes3D = {{8, 8, 8}, {10, 10, 10}};
 
 INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_OutputShapeDefined, ConvolutionBackpropDataLayerTest,
     ::testing::Combine(
         conv3DParams_AutoPadValid,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShape3D),
+        ::testing::ValuesIn(model_type),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(inputShape3D_static)),
         ::testing::ValuesIn(outputShapes3D),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     ConvolutionBackpropDataLayerTest::getTestCaseName);
@@ -247,7 +210,7 @@ const auto conv3DParams_ExplicitPadding_output_padding = ::testing::Combine(
         ::testing::ValuesIn(padEnds3D),
         ::testing::ValuesIn(dilations3D),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::EXPLICIT),
+        ::testing::Values(ov::op::PadType::EXPLICIT),
         ::testing::ValuesIn(outputPadding3D)
 );
 const auto conv3DParams_AutoPadValid_output_padding = ::testing::Combine(
@@ -257,34 +220,26 @@ const auto conv3DParams_AutoPadValid_output_padding = ::testing::Combine(
         ::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
         ::testing::ValuesIn(dilations3D),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::VALID),
+        ::testing::Values(ov::op::PadType::VALID),
         ::testing::ValuesIn(outputPadding3D)
 );
 
-INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_ExplicitPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest,
-    ::testing::Combine(
-        conv3DParams_AutoPadValid_output_padding,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShapes3D),
-        ::testing::ValuesIn(emptyOutputShape),
-        ::testing::Values(ov::test::utils::DEVICE_CPU)),
-    ConvolutionBackpropDataLayerTest::getTestCaseName);
-
-INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_AutoPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest,
-    ::testing::Combine(
-        conv3DParams_ExplicitPadding_output_padding,
-        ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::ValuesIn(inputShapes3D),
-        ::testing::ValuesIn(emptyOutputShape),
-        ::testing::Values(ov::test::utils::DEVICE_CPU)),
-    ConvolutionBackpropDataLayerTest::getTestCaseName);
+// INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_ExplicitPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest,
+//     ::testing::Combine(
+//         conv3DParams_AutoPadValid_output_padding,
+//         ::testing::ValuesIn(model_type),
+//         ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes3D_static)),
+//         ::testing::ValuesIn(emptyOutputShape),
+//         ::testing::Values(ov::test::utils::DEVICE_CPU)),
+//     ConvolutionBackpropDataLayerTest::getTestCaseName);
+
+// INSTANTIATE_TEST_SUITE_P(smoke_ConvolutionBackpropData3D_AutoPadding_OutputPaddingDefined, ConvolutionBackpropDataLayerTest,
+//     ::testing::Combine(
+//         conv3DParams_ExplicitPadding_output_padding,
+//         ::testing::ValuesIn(model_type),
+//         ::testing::ValuesIn(ov::test::static_shapes_to_test_representation(inputShapes3D_static)),
+//         ::testing::ValuesIn(emptyOutputShape),
+//         ::testing::Values(ov::test::utils::DEVICE_CPU)),
+//     ConvolutionBackpropDataLayerTest::getTestCaseName);
 } // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/deformable_convolution.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/deformable_convolution.cpp
index 4854502bb140b9..8192ab595e5fc7 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/deformable_convolution.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/deformable_convolution.cpp
@@ -1,15 +1,15 @@
 // Copyright (C) 2018-2023 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //
-#include <vector>
 #include "common_test_utils/test_constants.hpp"
-#include "single_layer_tests/deformable_convolution.hpp"
-using namespace LayerTestsDefinitions;
+#include "single_op_tests/deformable_convolution.hpp"
+
 namespace {
+using ov::test::DeformableConvolutionLayerTest;
 
-const std::vector<InferenceEngine::Precision> netPrecisions = {
-    InferenceEngine::Precision::FP32, InferenceEngine::Precision::FP16,
-    InferenceEngine::Precision::I32, InferenceEngine::Precision::I16};
+const std::vector<ov::element::Type> netPrecisions = {
+    ov::element::f32, ov::element::f16,
+    ov::element::i32, ov::element::i16};
 
 /* ============= 2D DeformableConvolution ============= */
 const std::vector<std::vector<size_t>> deformable_vals = {{1, 16, 2, 2}};
@@ -25,74 +25,117 @@ const std::vector<size_t> multiple_defor_groups = {4};
 const std::vector<std::vector<size_t>> deform_vals = {{1, 72, 64, 64}};
 const std::vector<std::vector<size_t>> kernel = {{16, 16, 3, 3}};
 
+const std::vector<ov::Shape> shapes_no_modulation {
+    {1, 2, 3, 3},
+    {1, 16, 2, 2},  //deformable_vals
+    {2, 2, 2, 2},   //kernels
+};
+
+const std::vector<ov::Shape> shapes_with_modulation {
+    {1, 2, 3, 3},
+    {1, 16, 2, 2},  //deformable_vals
+    {2, 2, 2, 2},   //kernels
+    {1, 8, 2, 2},   //modulation_shape
+};
+
 const std::vector<bool> with_bilinear_interpolation_pad = { false, true };
 const std::vector<bool> with_modulated_scalar = { false, true };
 
 const auto deformableConv2DParams_ExplicitPadding = ::testing::Combine(
-        ::testing::ValuesIn(deformable_vals),
-        ::testing::ValuesIn(kernels), ::testing::ValuesIn(strides),
+        ::testing::ValuesIn(strides),
         ::testing::ValuesIn(padBegins),
         ::testing::ValuesIn(padEnds),
         ::testing::ValuesIn(dilations),
         ::testing::ValuesIn(groups),
         ::testing::ValuesIn(defor_groups),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::EXPLICIT), ::testing::ValuesIn(with_bilinear_interpolation_pad),
-        ::testing::ValuesIn(with_modulated_scalar));
+        ::testing::Values(ov::op::PadType::EXPLICIT), ::testing::ValuesIn(with_bilinear_interpolation_pad));
 
 const auto deformableConv2DParams_AutoPadValid = ::testing::Combine(
-        ::testing::ValuesIn(deformable_vals),
-        ::testing::ValuesIn(kernels), ::testing::ValuesIn(strides),
+        ::testing::ValuesIn(strides),
         ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
         ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
         ::testing::ValuesIn(dilations),
         ::testing::ValuesIn(groups),
         ::testing::ValuesIn(defor_groups),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::VALID),
-        ::testing::ValuesIn(with_bilinear_interpolation_pad),
-        ::testing::ValuesIn(with_modulated_scalar));
+        ::testing::Values(ov::op::PadType::VALID),
+        ::testing::ValuesIn(with_bilinear_interpolation_pad));
 
 const auto deformableConv2DParams_DeformableGroups_AutoPadExplicit = ::testing::Combine(
-        ::testing::ValuesIn(deform_vals),
-        ::testing::ValuesIn(kernel), ::testing::ValuesIn(strides),
+        ::testing::ValuesIn(strides),
         ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
         ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
         ::testing::ValuesIn(dilations),
         ::testing::ValuesIn(groups),
         ::testing::ValuesIn(multiple_defor_groups),
         ::testing::ValuesIn(numOutChannels),
-        ::testing::Values(ngraph::op::PadType::EXPLICIT),
-        ::testing::ValuesIn(with_bilinear_interpolation_pad),
-        ::testing::ValuesIn(with_modulated_scalar));
+        ::testing::Values(ov::op::PadType::EXPLICIT),
+        ::testing::ValuesIn(with_bilinear_interpolation_pad));
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_ExplicitPadding_NoModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        deformableConv2DParams_ExplicitPadding,
+        ::testing::Values(false),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_no_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_ExplicitPadding, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_ExplicitPadding_WithModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
-        deformableConv2DParams_ExplicitPadding, ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 2, 3, 3})),
+        deformableConv2DParams_ExplicitPadding,
+        ::testing::Values(true),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_with_modulation)),
        ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_AutoPadValid, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_AutoPadValid_NoModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
-        deformableConv2DParams_AutoPadValid, ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 2, 3, 3})),
+        deformableConv2DParams_AutoPadValid,
+        ::testing::Values(false),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_no_modulation)),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_DeformableGroups_ExplicitPadding, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_AutoPadValid_WithModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        deformableConv2DParams_AutoPadValid,
+        ::testing::Values(true),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_with_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
+
+const std::vector<ov::Shape> shapes_2d_deformable_groups_no_modulation {
+    {1, 16, 66, 66},
+    {1, 72, 64, 64},  //deformable_vals
+    {16, 16, 3, 3},   //kernels
+};
+
+const std::vector<ov::Shape> shapes_2d_deformable_groups_with_modulation {
+    {1, 16, 66, 66},
+    {1, 72, 64, 64},  //deformable_vals
+    {16, 16, 3, 3},   //kernels
+    {1, 36, 64, 64},  //modulation_shape
+};
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_DeformableGroups_ExplicitPadding_NoModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        deformableConv2DParams_DeformableGroups_AutoPadExplicit,
+        ::testing::Values(false),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_2d_deformable_groups_no_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_DeformableGroups_ExplicitPadding_WithModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
         deformableConv2DParams_DeformableGroups_AutoPadExplicit,
+        ::testing::Values(true),
         ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 16, 66, 66})),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_2d_deformable_groups_with_modulation)),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
 
@@ -101,9 +144,21 @@ const std::vector<std::vector<size_t>> single_deform_vals = {{1, 54, 28, 28}};
 const std::vector<std::vector<size_t>> single_kernel = {{1, 3, 3, 3}};
 const std::vector<size_t> single_deform_groups = {3};
 
+const std::vector<ov::Shape> shapes_single_no_modulation {
+    {1, 3, 30, 30},
+    {1, 54, 28, 28},  //deformable_vals
+    {1, 3, 3, 3},     //kernels
+};
+
+const std::vector<ov::Shape> shapes_single_with_modulation {
+    {1, 3, 30, 30},
+    {1, 54, 28, 28},  //deformable_vals
+    {1, 3, 3, 3},     //kernels
+    {1, 27, 28, 28},  //modulation_shape
+};
+
+
 const auto deformableConv2DParams_SingleTestCase = ::testing::Combine(
-        ::testing::ValuesIn(single_deform_vals),
-        ::testing::ValuesIn(single_kernel),
         ::testing::ValuesIn(strides),
         ::testing::ValuesIn(padBegins),
         ::testing::ValuesIn(padEnds),
@@ -112,29 +167,66 @@ const auto deformableConv2DParams_SingleTestCase = ::testing::Combine(
         ::testing::ValuesIn(single_deform_groups),
         ::testing::ValuesIn(numOutChannels),
         ::testing::Values(ngraph::op::PadType::EXPLICIT),
-        ::testing::ValuesIn(with_bilinear_interpolation_pad),
-        ::testing::ValuesIn(with_modulated_scalar)
-);
+        ::testing::ValuesIn(with_bilinear_interpolation_pad));
 
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_SingleTestCase, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_SingleTestCase_NoModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
         deformableConv2DParams_SingleTestCase,
+        ::testing::Values(false),
         ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 3, 30, 30})),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_single_no_modulation)),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_SingleTestCase_WithModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        deformableConv2DParams_SingleTestCase,
+        ::testing::Values(true),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_single_with_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
+
 /* ============= Multiple groups case ============= */
+
+const std::vector<ov::Shape> shapes_multiple_groups_no_modulation {
+    {1, 4, 3, 3},
+    {1, 16, 2, 2},  //deformable_vals
+    {2, 2, 2, 2},   //kernels
+};
+
+const std::vector<ov::Shape> shapes_multiple_groups_with_modulation {
+    {1, 4, 3, 3},
+    {1, 16, 2, 2},  //deformable_vals
+    {2, 2, 2, 2},   //kernels
+    {1, 8, 2, 2},   //modulation_shape
+};
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_MultipleGroups_NoModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        ::testing::Combine(
+            ::testing::ValuesIn(strides),
+            ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
+            ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
+            ::testing::ValuesIn(dilations),
+            ::testing::ValuesIn(std::vector<size_t> {2}),  // gr.
+            ::testing::ValuesIn(std::vector<size_t> {2}),  // def. gr.
+            ::testing::ValuesIn(numOutChannels),
+            ::testing::Values(ngraph::op::PadType::EXPLICIT),
+            ::testing::ValuesIn(with_bilinear_interpolation_pad)),
+        ::testing::Values(false),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_multiple_groups_no_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
+
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_MultipleGroups, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_MultipleGroups_WithModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
         ::testing::Combine(
-            ::testing::ValuesIn(std::vector<std::vector<size_t>> {{1, 16, 2, 2}}),  // offsets
-            ::testing::ValuesIn(std::vector<std::vector<size_t>> {{2, 2, 2, 2}}),  // ker.
             ::testing::ValuesIn(strides),
             ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
            ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
@@ -143,22 +235,49 @@ INSTANTIATE_TEST_SUITE_P(
             ::testing::ValuesIn(std::vector<size_t> {2}),  // def. gr.
             ::testing::ValuesIn(numOutChannels),
             ::testing::Values(ngraph::op::PadType::EXPLICIT),
-            ::testing::ValuesIn(with_bilinear_interpolation_pad),
-            ::testing::ValuesIn(with_modulated_scalar)),
+            ::testing::ValuesIn(with_bilinear_interpolation_pad)),
+        ::testing::Values(true),
         ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 4, 3, 3})),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_multiple_groups_with_modulation)),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
+
+const std::vector<ov::Shape> shapes_multiple_groups_2_no_modulation {
+    {1, 8, 68, 68},
+    {1, 18, 66, 66},  //deformable_vals
+    {4, 2, 3, 3},     //kernels
+};
+
+const std::vector<ov::Shape> shapes_multiple_groups_2_with_modulation {
+    {1, 8, 68, 68},
+    {1, 18, 66, 66},  //deformable_vals
+    {4, 2, 3, 3},     //kernels
+    {1, 9, 66, 66},   //modulation_shape
+};
+
+INSTANTIATE_TEST_SUITE_P(
+    smoke_DeformableConvolution2D_MultipleGroups_2_NoModulation, DeformableConvolutionLayerTest,
+    ::testing::Combine(
+        ::testing::Combine(
+            ::testing::ValuesIn(strides),
+            ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
+            ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
+            ::testing::ValuesIn(dilations),
+            ::testing::ValuesIn(std::vector<size_t> {4}),  // gr.
+            ::testing::ValuesIn(std::vector<size_t> {1}),  // def. gr.
+            ::testing::ValuesIn(numOutChannels),
+            ::testing::Values(ngraph::op::PadType::EXPLICIT),
+            ::testing::ValuesIn(with_bilinear_interpolation_pad)),
+        ::testing::Values(false),
+        ::testing::ValuesIn(netPrecisions),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_multiple_groups_2_no_modulation)),
+        ::testing::Values(ov::test::utils::DEVICE_CPU)),
+    DeformableConvolutionLayerTest::getTestCaseName);
+
 INSTANTIATE_TEST_SUITE_P(
-    smoke_DeformableConvolution2D_MultipleGroups_2, DeformableConvolutionLayerTest,
+    smoke_DeformableConvolution2D_MultipleGroups_2_WithModulation, DeformableConvolutionLayerTest,
     ::testing::Combine(
         ::testing::Combine(
-            ::testing::ValuesIn(std::vector<std::vector<size_t>> {{1, 18, 66, 66}}),  // offsets
-            ::testing::ValuesIn(std::vector<std::vector<size_t>> {{4, 2, 3, 3}}),  // ker.
             ::testing::ValuesIn(strides),
             ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
             ::testing::Values(std::vector<ptrdiff_t>({0, 0})),
@@ -167,14 +286,10 @@ INSTANTIATE_TEST_SUITE_P(
             ::testing::ValuesIn(std::vector<size_t> {1}),  // def. gr.
             ::testing::ValuesIn(numOutChannels),
             ::testing::Values(ngraph::op::PadType::EXPLICIT),
-            ::testing::ValuesIn(with_bilinear_interpolation_pad),
-            ::testing::ValuesIn(with_modulated_scalar)),
+            ::testing::ValuesIn(with_bilinear_interpolation_pad)),
+        ::testing::Values(true),
         ::testing::ValuesIn(netPrecisions),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(std::vector<size_t>({1, 8, 68, 68})),
+        ::testing::Values(ov::test::static_shapes_to_test_representation(shapes_multiple_groups_2_with_modulation)),
         ::testing::Values(ov::test::utils::DEVICE_CPU)),
     DeformableConvolutionLayerTest::getTestCaseName);
 } // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shape_of.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shape_of.cpp
index 7f4653dd954cf0..0e9a8e218be244 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shape_of.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shape_of.cpp
@@ -4,22 +4,22 @@
 
 #include <vector>
 
-#include "single_layer_tests/shape_of.hpp"
+#include "single_op_tests/shape_of.hpp"
 #include "common_test_utils/test_constants.hpp"
 
-using namespace LayerTestsDefinitions;
+using ov::test::ShapeOfLayerTest;
 
 namespace {
-    const std::vector<InferenceEngine::Precision> netPrecisions = {
-        InferenceEngine::Precision::FP32,
-        InferenceEngine::Precision::I32
+    const std::vector<ov::element::Type> model_types = {
+        ov::element::f32,
+        ov::element::i32
     };
 
     INSTANTIATE_TEST_SUITE_P(smoke_Check, ShapeOfLayerTest,
             ::testing::Combine(
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::I64),
-                ::testing::Values(std::vector<size_t>({10, 10, 10})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::element::i64),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{10, 10, 10}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
             ShapeOfLayerTest::getTestCaseName);
 } // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shuffle_channels.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shuffle_channels.cpp
index 5aa51ab1487522..7d5b606e3aba46 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shuffle_channels.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/shuffle_channels.cpp
@@ -4,35 +4,31 @@
 
 #include <vector>
 
-#include "single_layer_tests/shuffle_channels.hpp"
+#include "single_op_tests/shuffle_channels.hpp"
 
-using namespace LayerTestsDefinitions;
+using ov::test::ShuffleChannelsLayerTest;
 
 namespace {
 
-const std::vector<InferenceEngine::Precision> netPrecisions = {
-    InferenceEngine::Precision::U8,
-    InferenceEngine::Precision::U16,
-    InferenceEngine::Precision::FP32
+const std::vector<ov::element::Type> model_types = {
+    ov::element::u8,
+    ov::element::u16,
+    ov::element::f32
 };
 
 const std::vector<int> axes = {-4, -3, -2, -1, 0, 1, 2, 3};
 const std::vector<int> groups = {1, 2, 3, 6};
 
-const auto shuffleChannelsParams4D = ::testing::Combine(
+const auto shuffle_channels_params_4D = ::testing::Combine(
         ::testing::ValuesIn(axes),
         ::testing::ValuesIn(groups)
 );
 
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels4D, ShuffleChannelsLayerTest,
         ::testing::Combine(
-                shuffleChannelsParams4D,
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({12, 18, 30, 36})),
+                shuffle_channels_params_4D,
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{12, 18, 30, 36}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
 
@@ -40,60 +36,40 @@ INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels4D, ShuffleChannelsLayerTest,
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels6D, ShuffleChannelsLayerTest,
         ::testing::Combine(
                 ::testing::Values(std::tuple<int, int>(2, 3)),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({24, 6, 12, 18, 30, 36})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{24, 6, 12, 18, 30, 36}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels5D, ShuffleChannelsLayerTest,
         ::testing::Combine(
                 ::testing::Values(std::tuple<int, int>(2, 3)),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({6, 12, 18, 30, 36})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{6, 12, 18, 30, 36}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels3D, ShuffleChannelsLayerTest,
         ::testing::Combine(
                 ::testing::Values(std::tuple<int, int>(1, 3)),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({18, 30, 36})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{18, 30, 36}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels2D, ShuffleChannelsLayerTest,
         ::testing::Combine(
                 ::testing::Values(std::tuple<int, int>(1, 3)),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({18, 30})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{18, 30}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(smoke_ShuffleChannels1D, ShuffleChannelsLayerTest,
         ::testing::Combine(
                 ::testing::Values(std::tuple<int, int>(0, 3)),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({30})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({ov::Shape{30}})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         ShuffleChannelsLayerTest::getTestCaseName);
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/slice.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/slice.cpp
index 4b7b01f9da4864..440e57c16d0845 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/slice.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/slice.cpp
@@ -4,28 +4,28 @@
 
 #include <vector>
 
-#include "single_layer_tests/slice.hpp"
+#include "single_op_tests/slice.hpp"
 #include "common_test_utils/test_constants.hpp"
 
-using namespace LayerTestsDefinitions;
-using namespace ov::test;
+using ov::test::Slice8LayerTest;
+using ov::test::Slice8SpecificParams;
 
 namespace {
 
-const std::vector<ElementType> inputPrecisions = {
-        ElementType::f32,
-        ElementType::bf16,
-        ElementType::i8
+const std::vector<ov::element::Type> model_types = {
+        ov::element::f32,
+        ov::element::bf16,
+        ov::element::i8
 };
 
-const std::vector<ElementType> inputPrecisionsOther = {
-        ElementType::i64,
-        ElementType::i32,
-        ElementType::i16,
-        ElementType::u8
+const std::vector<ov::element::Type> model_types_extra = {
+        ov::element::i64,
+        ov::element::i32,
+        ov::element::i16,
+        ov::element::u8
 };
 
-std::vector<Slice8SpecificParams> staticParams = {
+std::vector<Slice8SpecificParams> static_params = {
         Slice8SpecificParams{ {{{}, {{ 16 }}}}, { 4 }, { 12 }, { 1 }, { 0 } },
         Slice8SpecificParams{ {{{}, {{ 16 }}}}, { 0 }, { 8 }, { 2 }, { 0 } },
         Slice8SpecificParams{ {{{}, {{ 20, 10, 5 }}}}, { 0, 0}, { 10, 20}, { 1, 1 }, { 1, 0 } },
@@ -68,30 +68,20 @@ std::vector<Slice8SpecificParams> staticParams = {
 
 INSTANTIATE_TEST_SUITE_P(smoke_Static, Slice8LayerTest,
         ::testing::Combine(
-            ::testing::ValuesIn(staticParams),
-            ::testing::ValuesIn(inputPrecisions),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(ov::test::utils::DEVICE_CPU),
-            ::testing::Values(std::map<std::string, std::string>())),
+            ::testing::ValuesIn(static_params),
+            ::testing::ValuesIn(model_types),
+            ::testing::Values(ov::test::utils::DEVICE_CPU)),
         Slice8LayerTest::getTestCaseName);
 
 INSTANTIATE_TEST_SUITE_P(smoke_PrecisionTransformation, Slice8LayerTest,
         ::testing::Combine(
-            ::testing::Values(staticParams[0]),
-            ::testing::ValuesIn(inputPrecisionsOther),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(ov::test::utils::DEVICE_CPU),
-            ::testing::Values(std::map<std::string, std::string>())),
+            ::testing::Values(static_params[0]),
+            ::testing::ValuesIn(model_types_extra),
+            ::testing::Values(ov::test::utils::DEVICE_CPU)),
         Slice8LayerTest::getTestCaseName);
 
-std::vector<Slice8SpecificParams> dynamicParams = {
+std::vector<Slice8SpecificParams> dynamic_params = {
         Slice8SpecificParams{ {{{ -1 }, {{ 8 }, { 16 }}}}, { 4 }, { 12 }, { 1 }, { 0 } },
         Slice8SpecificParams{ {{{ ov::Dimension(2, 20) }, {{ 5 }, { 15 }}}}, { 0 }, { 8 }, { 2 }, { 0 } },
         Slice8SpecificParams{ {{{ -1, -1, -1 }, {{ 20, 10, 5 }, {5, 10, 20}}}}, { 0, 0}, { 10, 20}, { 1, 1 }, { 1, 0 } },
@@ -115,13 +105,8 @@ std::vector<Slice8SpecificParams> dynamicParams = {
 
 INSTANTIATE_TEST_SUITE_P(smoke_Dynamic, Slice8LayerTest,
         ::testing::Combine(
-            ::testing::ValuesIn(dynamicParams),
-            ::testing::ValuesIn(inputPrecisions),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(ElementType::undefined),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(ov::test::utils::DEVICE_CPU),
-            ::testing::Values(std::map<std::string, std::string>())),
+            ::testing::ValuesIn(dynamic_params),
+            ::testing::ValuesIn(model_types),
+            ::testing::Values(ov::test::utils::DEVICE_CPU)),
         Slice8LayerTest::getTestCaseName);
 } // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_batch.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_batch.cpp
index e5b37defab71e3..a661149078dca7 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_batch.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_batch.cpp
@@ -4,71 +4,65 @@
 
 #include <vector>
 
-#include "single_layer_tests/space_to_batch.hpp"
+#include "single_op_tests/space_to_batch.hpp"
 #include "common_test_utils/test_constants.hpp"
 
-using namespace LayerTestsDefinitions;
+using ov::test::SpaceToBatchLayerTest;

 namespace {

-const std::vector<std::vector<int64_t>> blockShapes4D {
+const std::vector<std::vector<int64_t>> block_shapes_4D {
         {1, 1, 2, 2}
 };
-const std::vector<std::vector<int64_t>> padsBegins4D {
+const std::vector<std::vector<int64_t>> pads_begins_4D {
         {0, 0, 0, 0}, {0, 0, 0, 2}
 };
-const std::vector<std::vector<int64_t>> padsEnds4D {
+const std::vector<std::vector<int64_t>> pads_ends_4D {
         {0, 0, 0, 0}, {0, 0, 0, 2}
 };
-const std::vector<std::vector<size_t>> dataShapes4D {
-        {1, 1, 2, 2}, {1, 3, 2, 2}, {1, 1, 4, 4}, {2, 1, 2, 4}
-};
+const auto data_shapes_4D = ov::test::static_shapes_to_test_representation(
+    std::vector<std::vector<ov::Shape>>{
+        {{1, 1, 2, 2}}, {{1, 3, 2, 2}}, {{1, 1, 4, 4}}, {{2, 1, 2, 4}}
+});

-const auto SpaceToBatch4D = ::testing::Combine(
-        ::testing::ValuesIn(blockShapes4D),
-        ::testing::ValuesIn(padsBegins4D),
-        ::testing::ValuesIn(padsEnds4D),
-        ::testing::ValuesIn(dataShapes4D),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
+const auto space_to_batch_4D = ::testing::Combine(
+        ::testing::ValuesIn(block_shapes_4D),
+        ::testing::ValuesIn(pads_begins_4D),
+        ::testing::ValuesIn(pads_ends_4D),
+        ::testing::ValuesIn(data_shapes_4D),
+        ::testing::Values(ov::element::f32),
         ::testing::Values(ov::test::utils::DEVICE_CPU)
 );

 INSTANTIATE_TEST_SUITE_P(
-        smoke_spacetobatch4D, SpaceToBatchLayerTest, SpaceToBatch4D,
+        smoke_spacetobatch4D, SpaceToBatchLayerTest, space_to_batch_4D,
         SpaceToBatchLayerTest::getTestCaseName);

-const std::vector<std::vector<int64_t>> blockShapes5D {
+const std::vector<std::vector<int64_t>> block_shapes_5D {
         {1, 1, 3, 2, 2}
 };
-const std::vector<std::vector<int64_t>> padsBegins5D {
+const std::vector<std::vector<int64_t>> pads_begins_5D {
         {0, 0, 1, 0, 3}
 };
-const std::vector<std::vector<int64_t>> padsEnds5D {
+const std::vector<std::vector<int64_t>> pads_ends_5D {
         {0, 0, 2, 0, 0}
 };
-const std::vector<std::vector<size_t>> dataShapes5D {
-        {1, 1, 3, 2, 1}
-};
+const auto data_shapes_5D = ov::test::static_shapes_to_test_representation(
+    std::vector<std::vector<ov::Shape>>{
+        {{1, 1, 3, 2, 1}}
+});

-const auto SpaceToBatch5D = ::testing::Combine(
-        ::testing::ValuesIn(blockShapes5D),
-        ::testing::ValuesIn(padsBegins5D),
-        ::testing::ValuesIn(padsEnds5D),
-        ::testing::ValuesIn(dataShapes5D),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Precision::FP32),
-        ::testing::Values(InferenceEngine::Layout::ANY),
-        ::testing::Values(InferenceEngine::Layout::ANY),
+const auto space_to_batch_5D = ::testing::Combine(
+        ::testing::ValuesIn(block_shapes_5D),
+        ::testing::ValuesIn(pads_begins_5D),
+        ::testing::ValuesIn(pads_ends_5D),
+        ::testing::ValuesIn(data_shapes_5D),
+        ::testing::Values(ov::element::f32),
         ::testing::Values(ov::test::utils::DEVICE_CPU)
 );

 INSTANTIATE_TEST_SUITE_P(
-        smoke_spacetobatch5D, SpaceToBatchLayerTest, SpaceToBatch5D,
+        smoke_spacetobatch5D, SpaceToBatchLayerTest, space_to_batch_5D,
         SpaceToBatchLayerTest::getTestCaseName);
 }  // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_depth.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_depth.cpp
index e7f280baeceaa3..655d1b0a1647ae 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_depth.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/space_to_depth.cpp
@@ -3,54 +3,52 @@
 //

 #include <vector>
-#include <ngraph/opsets/opset3.hpp>

-#include "single_layer_tests/space_to_depth.hpp"
+#include "single_op_tests/space_to_depth.hpp"
 #include "common_test_utils/test_constants.hpp"

-using namespace LayerTestsDefinitions;
-using namespace ngraph::opset3;
+using ov::test::SpaceToDepthLayerTest;

 namespace {

-const std::vector<InferenceEngine::Precision> inputPrecisions = {
-        InferenceEngine::Precision::FP32,
-        InferenceEngine::Precision::U8,
-        InferenceEngine::Precision::I16,
+const std::vector<ov::element::Type> model_types = {
+        ov::element::f32,
+        ov::element::u8,
+        ov::element::i16,
 };
-const std::vector<SpaceToDepth::SpaceToDepthMode> modes = {
-        SpaceToDepth::SpaceToDepthMode::BLOCKS_FIRST,
-        SpaceToDepth::SpaceToDepthMode::DEPTH_FIRST
+const std::vector<ov::op::v0::SpaceToDepth::SpaceToDepthMode> modes = {
+        ov::op::v0::SpaceToDepth::SpaceToDepthMode::BLOCKS_FIRST,
+        ov::op::v0::SpaceToDepth::SpaceToDepthMode::DEPTH_FIRST
 };

-const std::vector<std::vector<size_t>> inputShapesBS2 = {
-        {1, 1, 2, 2}, {1, 1, 4, 4}, {1, 1, 6, 6}, {2, 8, 6, 6}, {2, 4, 10, 8},
-        {1, 1, 2, 2, 2}, {1, 1, 4, 4, 4}, {1, 1, 6, 6, 6}, {2, 8, 6, 6, 6}, {2, 4, 10, 8, 12}
-};
+const auto input_shapes_BS2 = ov::test::static_shapes_to_test_representation(std::vector<std::vector<ov::Shape>>{
+        {{1, 1, 2, 2}}, {{1, 1, 4, 4}}, {{1, 1, 6, 6}}, {{2, 8, 6, 6}}, {{2, 4, 10, 8}},
+        {{1, 1, 2, 2, 2}}, {{1, 1, 4, 4, 4}}, {{1, 1, 6, 6, 6}}, {{2, 8, 6, 6, 6}}, {{2, 4, 10, 8, 12}}
+});

-const auto SpaceToDepthBS2 = ::testing::Combine(
-        ::testing::ValuesIn(inputShapesBS2),
-        ::testing::ValuesIn(inputPrecisions),
+const auto space_to_depth_BS2 = ::testing::Combine(
+        ::testing::ValuesIn(input_shapes_BS2),
+        ::testing::ValuesIn(model_types),
         ::testing::ValuesIn(modes),
         ::testing::Values(1, 2),
         ::testing::Values(ov::test::utils::DEVICE_CPU)
 );

-INSTANTIATE_TEST_SUITE_P(smoke_SpaceToDepthBS2, SpaceToDepthLayerTest, SpaceToDepthBS2, SpaceToDepthLayerTest::getTestCaseName);
+INSTANTIATE_TEST_SUITE_P(smoke_SpaceToDepthBS2, SpaceToDepthLayerTest, space_to_depth_BS2, SpaceToDepthLayerTest::getTestCaseName);

-const std::vector<std::vector<size_t>> inputShapesBS3 = {
-        {1, 1, 3, 3}, {1, 1, 6, 6}, {1, 1, 9, 9}, {2, 4, 9, 9}, {2, 3, 15, 12},
-        {1, 1, 3, 3, 3}, {1, 1, 6, 6, 6}, {1, 1, 9, 9, 9}, {2, 4, 9, 9, 9}, {2, 3, 15, 12, 18}
-};
+const auto input_shapes_BS3 = ov::test::static_shapes_to_test_representation(std::vector<std::vector<ov::Shape>>{
+        {{1, 1, 3, 3}}, {{1, 1, 6, 6}}, {{1, 1, 9, 9}}, {{2, 4, 9, 9}}, {{2, 3, 15, 12}},
+        {{1, 1, 3, 3, 3}}, {{1, 1, 6, 6, 6}}, {{1, 1, 9, 9, 9}}, {{2, 4, 9, 9, 9}}, {{2, 3, 15, 12, 18}}
+});

-const auto SpaceToDepthBS3 = ::testing::Combine(
-        ::testing::ValuesIn(inputShapesBS3),
-        ::testing::ValuesIn(inputPrecisions),
+const auto space_to_depth_BS3 = ::testing::Combine(
+        ::testing::ValuesIn(input_shapes_BS3),
+        ::testing::ValuesIn(model_types),
         ::testing::ValuesIn(modes),
         ::testing::Values(1, 3),
         ::testing::Values(ov::test::utils::DEVICE_CPU)
 );

-INSTANTIATE_TEST_SUITE_P(smoke_SpaceToDepthBS3, SpaceToDepthLayerTest, SpaceToDepthBS3, SpaceToDepthLayerTest::getTestCaseName);
+INSTANTIATE_TEST_SUITE_P(smoke_SpaceToDepthBS3, SpaceToDepthLayerTest, space_to_depth_BS3, SpaceToDepthLayerTest::getTestCaseName);
 }  // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/split.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/split.cpp
index 2c949242067421..b855eb73f8c6f4 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/split.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/split.cpp
@@ -4,30 +4,26 @@
 #include <vector>

-#include "single_layer_tests/split.hpp"
+#include "single_op_tests/split.hpp"
 #include "common_test_utils/test_constants.hpp"

-using namespace LayerTestsDefinitions;
+using ov::test::SplitLayerTest;

 namespace {

-const std::vector<InferenceEngine::Precision> netPrecisions = {
-        InferenceEngine::Precision::FP32,
-        InferenceEngine::Precision::FP16,
-        InferenceEngine::Precision::I32,
-        InferenceEngine::Precision::U8
+const std::vector<ov::element::Type> model_types = {
+        ov::element::f32,
+        ov::element::f16,
+        ov::element::i32,
+        ov::element::u8
 };

 INSTANTIATE_TEST_SUITE_P(smoke_NumSplitsCheck, SplitLayerTest,
         ::testing::Combine(
                 ::testing::Values(1, 2, 3, 5),
                 ::testing::Values(0, 1, 2, 3),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({30, 30, 30, 30})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{30, 30, 30, 30}})),
                 ::testing::Values(std::vector<size_t>({})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         SplitLayerTest::getTestCaseName);

@@ -36,12 +32,8 @@ INSTANTIATE_TEST_SUITE_P(smoke_splitWithUnusedOutputsTest, SplitLayerTest,
         ::testing::Combine(
                 ::testing::Values(5),
                 ::testing::Values(0),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(std::vector<size_t>({30, 30, 30, 30})),
+                ::testing::ValuesIn(model_types),
+                ::testing::Values(ov::test::static_shapes_to_test_representation({{30, 30, 30, 30}})),
                 ::testing::Values(std::vector<size_t>({0, 3})),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         SplitLayerTest::getTestCaseName);
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/squeeze_unsqueeze.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/squeeze_unsqueeze.cpp
index a643318eaaeac9..5f7da02a3d8f14 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/squeeze_unsqueeze.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/squeeze_unsqueeze.cpp
@@ -4,61 +4,67 @@
 #include <vector>

-#include "single_layer_tests/squeeze_unsqueeze.hpp"
+#include "single_op_tests/squeeze_unsqueeze.hpp"
 #include "common_test_utils/test_constants.hpp"

-using namespace LayerTestsDefinitions;
+using ov::test::SqueezeUnsqueezeLayerTest;

 namespace {
-std::map<std::vector<size_t>, std::vector<std::vector<int>>> axesVectors = {
-        {{1, 1, 1, 1}, {{-1}, {0}, {1}, {2}, {3}, {0, 1}, {0, 2}, {0, 3}, {1, 2}, {2, 3}, {0, 1, 2}, {0, 2, 3}, {1, 2, 3}, {0, 1, 2, 3}}},
-        {{1, 2, 3, 4}, {{0}}},
-        {{2, 1, 3, 4}, {{1}}},
-        {{1}, {{-1}, {0}}},
-        {{1, 2}, {{0}}},
-        {{2, 1}, {{1}, {-1}}},
+std::map<std::vector<ov::Shape>, std::vector<std::vector<int>>> raw_axes = {
+        {{{1, 1, 1, 1}}, {{-1}, {0}, {1}, {2}, {3}, {0, 1}, {0, 2}, {0, 3}, {1, 2}, {2, 3}, {0, 1, 2}, {0, 2, 3}, {1, 2, 3}, {0, 1, 2, 3}}},
+        {{{1, 2, 3, 4}}, {{0}}},
+        {{{2, 1, 3, 4}}, {{1}}},
+        {{{1}}, {{-1}, {0}}},
+        {{{1, 2}}, {{0}}},
+        {{{2, 1}}, {{1}, {-1}}},
 };

-std::map<std::vector<size_t>, std::vector<std::vector<int>>> emptyAxesVectors = {
-        {{1, 1, 1, 1}, {{}}},
-        {{1, 2, 3, 4}, {{}}},
-        {{2, 1, 3, 4}, {{}}},
-        {{1}, {{}}},
-        {{1, 2}, {{}}},
-        {{2, 1}, {{}}},
+std::map<std::vector<ov::Shape>, std::vector<std::vector<int>>> raw_empty_axes = {
+        {{{1, 1, 1, 1}}, {{}}},
+        {{{1, 2, 3, 4}}, {{}}},
+        {{{2, 1, 3, 4}}, {{}}},
+        {{{1}}, {{}}},
+        {{{1, 2}}, {{}}},
+        {{{2, 1}}, {{}}},
 };

-const std::vector<InferenceEngine::Precision> netPrecisions = {
-        InferenceEngine::Precision::FP32,
-        InferenceEngine::Precision::FP16
+auto combined_axes = ov::test::utils::combineParams(raw_axes);
+auto combined_empty_axes = ov::test::utils::combineParams(raw_empty_axes);
+
+auto prepare_cases = [](const std::vector<std::pair<std::vector<ov::Shape>, std::vector<int>>>& raw_axes) {
+    std::vector<std::pair<std::vector<ov::test::InputShape>, std::vector<int>>> cases;
+    for (const auto& raw_case : raw_axes)
+        cases.emplace_back(ov::test::static_shapes_to_test_representation(raw_case.first),
+                           raw_case.second);
+    return cases;
+};
+
+auto axes = prepare_cases(combined_axes);
+auto empty_axes = prepare_cases(combined_empty_axes);
+
+const std::vector<ov::element::Type> model_types = {
+    ov::element::f32,
+    ov::element::f16
 };

-const std::vector<ngraph::helpers::SqueezeOpType> opTypes = {
-        ngraph::helpers::SqueezeOpType::SQUEEZE,
-        ngraph::helpers::SqueezeOpType::UNSQUEEZE
+const std::vector<ov::test::utils::SqueezeOpType> opTypes = {
+        ov::test::utils::SqueezeOpType::SQUEEZE,
+        ov::test::utils::SqueezeOpType::UNSQUEEZE
 };

 INSTANTIATE_TEST_SUITE_P(smoke_Basic, SqueezeUnsqueezeLayerTest,
         ::testing::Combine(
-                ::testing::ValuesIn(ov::test::utils::combineParams(axesVectors)),
+                ::testing::ValuesIn(axes),
                 ::testing::ValuesIn(opTypes),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
+                ::testing::ValuesIn(model_types),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         SqueezeUnsqueezeLayerTest::getTestCaseName);

 INSTANTIATE_TEST_SUITE_P(smoke_Basic_emptyAxes, SqueezeUnsqueezeLayerTest,
         ::testing::Combine(
-                ::testing::ValuesIn(ov::test::utils::combineParams(emptyAxesVectors)),
-                ::testing::Values(ngraph::helpers::SqueezeOpType::SQUEEZE),
-                ::testing::ValuesIn(netPrecisions),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-                ::testing::Values(InferenceEngine::Layout::ANY),
-                ::testing::Values(InferenceEngine::Layout::ANY),
+                ::testing::ValuesIn(empty_axes),
+                ::testing::Values(ov::test::utils::SqueezeOpType::SQUEEZE),
+                ::testing::ValuesIn(model_types),
                 ::testing::Values(ov::test::utils::DEVICE_CPU)),
         SqueezeUnsqueezeLayerTest::getTestCaseName);
 }  // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/strided_slice.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/strided_slice.cpp
index 1c3f691c998b2e..91b899090b6792 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/strided_slice.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/single_layer_tests/strided_slice.cpp
@@ -4,119 +4,130 @@
 #include <vector>

-#include "single_layer_tests/strided_slice.hpp"
+#include "single_op_tests/strided_slice.hpp"
 #include "common_test_utils/test_constants.hpp"

-using namespace LayerTestsDefinitions;
+using ov::test::StridedSliceLayerTest;

 namespace {
+struct RawParams {
+    std::vector<ov::Shape> input_shape;
+    std::vector<int64_t> begin;
+    std::vector<int64_t> end;
+    std::vector<int64_t> strides;
+    std::vector<int64_t> begin_mask;
+    std::vector<int64_t> end_mask;
+    std::vector<int64_t> new_axis_mask;
+    std::vector<int64_t> shrink_axis_mask;
+    std::vector<int64_t> ellipsis_axis_mask;
+};

-std::vector<StridedSliceSpecificParams> ss_only_test_cases = {
-        StridedSliceSpecificParams{ { 16 }, { 4 }, { 12 }, { 1 },
+std::vector<RawParams> raw_test_cases = {
+        RawParams{ {{ 16 }}, { 4 }, { 12 }, { 1 },
                         { 0 }, { 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 16 }, { 0 }, { 8 }, { 2 },
+        RawParams{ {{ 16 }}, { 0 }, { 8 }, { 2 },
                         { 1 }, { 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 128, 1 }, { 0, 0, 0 }, { 0, 0, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 128, 1 }}, { 0, 0, 0 }, { 0, 0, 0 }, { 1, 1, 1 },
                         { 0, 1, 1 }, { 0, 1, 1 },  { 1, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 128, 1 }, { 0, 0, 0 }, { 0, 0, 0 }, { 1, 1, 1},
+        RawParams{ {{ 128, 1 }}, { 0, 0, 0 }, { 0, 0, 0 }, { 1, 1, 1},
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 1, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 2, 3 }, { 1, 0 }, { 2, 3 }, { 1, 1 },
+        RawParams{ {{ 2, 3 }}, { 1, 0 }, { 2, 3 }, { 1, 1 },
                         { 0, 0 }, { 0, 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 10, 3 }, { 0, 0 }, { 20, 20 }, { 1, 1 },
+        RawParams{ {{ 10, 3 }}, { 0, 0 }, { 20, 20 }, { 1, 1 },
                         { 0, 1 }, { 0, 1 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, -1, 0 }, { 0, 0, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, -1, 0 }, { 0, 0, 0 }, { 1, 1, 1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 1, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 9, 0 }, { 0, 11, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 9, 0 }, { 0, 11, 0 }, { 1, 1, 1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 1, 0 }, { 0, -1, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 1, 0 }, { 0, -1, 0 }, { 1, 1, 1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 2, 12, 100 }, { 0, 9, 0 }, { 0, 7, 0 }, { -1, -1, -1 },
+        RawParams{ {{ 2, 12, 100 }}, { 0, 9, 0 }, { 0, 7, 0 }, { -1, -1, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 2, 12, 100 }, { 0, 7, 0 }, { 0, 9, 0 }, { -1, 1, -1 },
+        RawParams{ {{ 2, 12, 100 }}, { 0, 7, 0 }, { 0, 9, 0 }, { -1, 1, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 4, 0 }, { 0, 9, 0 }, { -1, 2, -1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 4, 0 }, { 0, 9, 0 }, { -1, 2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 4, 0 }, { 0, 10, 0 }, { -1, 2, -1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 4, 0 }, { 0, 10, 0 }, { -1, 2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 9, 0 }, { 0, 4, 0 }, { -1, -2, -1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 9, 0 }, { 0, 4, 0 }, { -1, -2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 2, 12, 100 }, { 0, 10, 0 }, { 0, 4, 0 }, { -1, -2, -1 },
+        RawParams{ {{ 2, 12, 100 }}, { 0, 10, 0 }, { 0, 4, 0 }, { -1, -2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, 11, 0 }, { 0, 0, 0 }, { -1, -2, -1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, 11, 0 }, { 0, 0, 0 }, { -1, -2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100 }, { 0, -6, 0 }, { 0, -8, 0 }, { -1, -2, -1 },
+        RawParams{ {{ 1, 12, 100 }}, { 0, -6, 0 }, { 0, -8, 0 }, { -1, -2, -1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 20, 10, 5 }, { 0, 0, 0 }, { 3, 10, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 20, 10, 5 }}, { 0, 0, 0 }, { 3, 10, 0 }, { 1, 1, 1 },
                         { 0, 0, 1 }, { 0, 0, 1 },  { 0, 0, 0 },  { 0, 0, 0 },  { 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 10, 20 }, { 0, 0, 2 }, { 0, 0, 1000 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 20 }}, { 0, 0, 2 }, { 0, 0, 1000 }, { 1, 1, 1 },
                         { 1, 1, 0 }, { 1, 1, 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 10 }, { 0, 1, 0 }, { 0, 1000, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 10 }}, { 0, 1, 0 }, { 0, 1000, 0 }, { 1, 1, 1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 4 }, { 0, 0, 0 }, { 0, 0, 2 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 4 }}, { 0, 0, 0 }, { 0, 0, 2 }, { 1, 1, 1 },
                         { 1, 1, 0 }, { 1, 1, 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 4 }, { 0, 0, 2 }, { 0, 0, 1000 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 4 }}, { 0, 0, 2 }, { 0, 0, 1000 }, { 1, 1, 1 },
                         { 1, 1, 0 }, { 1, 1, 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 2 }, { 0, 0, 0 }, { 0, 0, 1 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 2 }}, { 0, 0, 0 }, { 0, 0, 1 }, { 1, 1, 1 },
                         { 1, 1, 0 }, { 1, 1, 0 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 2 }, { 0, 0, 0 }, { 1000, 0, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 2 }}, { 0, 0, 0 }, { 1000, 0, 0 }, { 1, 1, 1 },
                         { 0, 1, 1 }, { 0, 1, 1 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 1, 10, 2 }, { 0, 0, 0 }, { 0, 1000, 0 }, { 1, 1, 1 },
+        RawParams{ {{ 1, 10, 2 }}, { 0, 0, 0 }, { 0, 1000, 0 }, { 1, 1, 1 },
                         { 1, 0, 1 }, { 1, 0, 1 },  { },  { },  { } },
-        StridedSliceSpecificParams{ { 20, 10, 5 }, { 0, 3 }, { 0, 4 }, { 1, 1 },
+        RawParams{ {{ 20, 10, 5 }}, { 0, 3 }, { 0, 4 }, { 1, 1 },
                         { 1, 0 }, { 1, 0 },  { },  { },  { 1, 0 } },
-        StridedSliceSpecificParams{ { 20, 10, 5 }, { 0, 0 }, { 0, -1 }, { 1, 1 },
+        RawParams{ {{ 20, 10, 5 }}, { 0, 0 }, { 0, -1 }, { 1, 1 },
                         { 1, 0 }, { 1, 0 },  { },  { },  { 1, 0 } },
-        StridedSliceSpecificParams{ { 20, 10, 5 }, { 0, 0 }, { 0, -1 }, { 1, 1 },
+        RawParams{ {{ 20, 10, 5 }}, { 0, 0 }, { 0, -1 }, { 1, 1 },
                         { 1, 0 }, { 1, 0 },  { 0, 0 },  { 0, 0 },  { 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 12, 100, 1, 1 }, { 0, -1, 0, 0 }, { 0, 0, 0, 0 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 1, 12, 100, 1, 1 }}, { 0, -1, 0, 0 }, { 0, 0, 0, 0 }, { 1, 1, 1, 1 },
                         { 1, 0, 1, 0 }, { 1, 0, 1, 0 },  { },  { 0, 1, 0, 1 },  {} },
-        StridedSliceSpecificParams{ { 2, 2, 2, 2 }, { 0, 0, 0, 0 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 2, 2, 2, 2 }}, { 0, 0, 0, 0 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
                         { 1, 1, 1, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 2, 2 }, { 0, 0 }, { 2, 2 }, { 1, 1 },
+        RawParams{ {{ 2, 2, 2, 2 }}, { 0, 0 }, { 2, 2 }, { 1, 1 },
                         { 1, 1 }, { 1, 1 },  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 3, 3 }, { 0, -2, -2 }, { 2, -1, -1 }, { 1, 1, 1 },
+        RawParams{ {{ 2, 2, 3, 3 }}, { 0, -2, -2 }, { 2, -1, -1 }, { 1, 1, 1 },
                         { 1, 0 }, { 1, 0 },  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 2, 2 }, { 1, 1, 1, 1 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 2, 2, 2, 2 }}, { 1, 1, 1, 1 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
                         { 0, 0, 0, 0}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 2, 2 }, { 1, 1, 1, 1 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 2, 2, 2, 2 }}, { 1, 1, 1, 1 }, { 2, 2, 2, 2 }, { 1, 1, 1, 1 },
                         { 0, 0, 0, 0}, { 0, 0, 0, 0},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 1, 2, 6, 4 }, { 0, 0, 4, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 1, 2, 6, 4 }}, { 0, 0, 4, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
                         { 0, 0, 0, 0 }, { 0, 0, 0, 0 },  {},  {},  {} },
-        StridedSliceSpecificParams{ { 1, 2, 6, 4 }, { 0, 0, -3, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 1, 2, 6, 4 }}, { 0, 0, -3, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
                         { 0, 0, 0, 0 }, { 0, 0, 0, 0 },  {},  {},  {} },
-        StridedSliceSpecificParams{ { 1, 2, 6, 4 }, { 0, 0, 4, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 1, 2, 6, 4 }}, { 0, 0, 4, 0 }, { 1, 2, 6, 4 }, { 1, 1, 1, 1 },
                         { 1, 1, 0, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 10, 2, 2, 2 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 2, 1, 1, 1 },
+        RawParams{ {{ 10, 2, 2, 2 }}, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 2, 1, 1, 1 },
                         { 1, 1, 1, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 4, 3 }, { 0, 0, 0, 0 }, { 2, 2, 4, 3 }, { 1, 1, 2, 1 },
+        RawParams{ {{ 2, 2, 4, 3 }}, { 0, 0, 0, 0 }, { 2, 2, 4, 3 }, { 1, 1, 2, 1 },
                         { 1, 1, 1, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 4, 2 }, { 1, 0, 0, 1 }, { 2, 2, 4, 2 }, { 1, 1, 2, 1 },
+        RawParams{ {{ 2, 2, 4, 2 }}, { 1, 0, 0, 1 }, { 2, 2, 4, 2 }, { 1, 1, 2, 1 },
                         { 0, 1, 1, 0}, { 1, 1, 0, 0},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 1, 2, 4, 2 }, { 1, 0, 0, 0 }, { 1, 2, 4, 2 }, { 1, 1, -2, -1 },
+        RawParams{ {{ 1, 2, 4, 2 }}, { 1, 0, 0, 0 }, { 1, 2, 4, 2 }, { 1, 1, -2, -1 },
                         { 1, 1, 1, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 2, 4, 2 }, { 1, 0, 0, 0 }, { 1, 2, 4, 2 }, { 1, 1, -2, -1 },
+        RawParams{ {{ 2, 2, 4, 2 }}, { 1, 0, 0, 0 }, { 1, 2, 4, 2 }, { 1, 1, -2, -1 },
                         { 0, 1, 1, 1}, { 1, 1, 1, 1},  {},  {},  {} },
-        StridedSliceSpecificParams{ { 2, 3, 4, 5, 6 }, { 0, 1, 0, 0, 0 }, { 2, 3, 4, 5, 6 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 3, 4, 5, 6 }}, { 0, 1, 0, 0, 0 }, { 2, 3, 4, 5, 6 }, { 1, 1, 1, 1, 1 },
                         { 1, 0, 1, 1, 1}, { 1, 0, 1, 1, 1 },  {},  { 0, 1, 0, 0, 0 },  {} },
-        StridedSliceSpecificParams{ { 2, 3, 4, 5, 6 }, { 0, 0, 3, 0, 0 }, { 2, 3, 4, 3, 6 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 3, 4, 5, 6 }}, { 0, 0, 3, 0, 0 }, { 2, 3, 4, 3, 6 }, { 1, 1, 1, 1, 1 },
                         { 1, 1, 0, 1, 1}, { 1, 1, 0, 0, 1 },  {},  { 0, 0, 1, 0, 0 },  {} },
-        StridedSliceSpecificParams{ { 2, 3, 4, 5, 6 }, { 0, 0, 0, 0, 3 }, { 1, 3, 4, 5, 6 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 3, 4, 5, 6 }}, { 0, 0, 0, 0, 3 }, { 1, 3, 4, 5, 6 }, { 1, 1, 1, 1, 1 },
                         { 0, 1, 1, 1, 0}, { 0, 1, 1, 1, 0 },  {},  { 1, 0, 0, 0, 1 },  {} },
-        StridedSliceSpecificParams{ { 2, 3, 4, 5 }, { 0, 0, 0, 0, 0 }, { 0, 2, 3, 4, 5 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 3, 4, 5 }}, { 0, 0, 0, 0, 0 }, { 0, 2, 3, 4, 5 }, { 1, 1, 1, 1, 1 },
                         { 1, 1, 1, 1, 1 }, { 1, 1, 1, 1, 1 },  { 1, 0, 0, 0, 0 },  {},  {} },
-        StridedSliceSpecificParams{ { 2, 3, 4, 5 }, { 0, 0, 0, 0, 0 }, { 0, 2, 3, 4, 5 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 3, 4, 5 }}, { 0, 0, 0, 0, 0 }, { 0, 2, 3, 4, 5 }, { 1, 1, 1, 1, 1 },
                         { 1, 1, 1, 1, 1 }, { 1, 1, 1, 1, 1 },  { 0, 0, 1, 0, 0 },  {},  {} },
-        StridedSliceSpecificParams{ { 10, 12 }, { -1, 1 }, { -9999, 0 }, { -1, 1 },
+        RawParams{ {{ 10, 12 }}, { -1, 1 }, { -9999, 0 }, { -1, 1 },
                         { 0, 1 }, { 0, 1 },  { 0, 0 },  { 0, 0 },  { 0, 0 } },
-        StridedSliceSpecificParams{ { 5, 5, 5, 5 }, { -1, 0, -1, 0 }, { -50, 0, -60, 0 }, { -1, 1, -1, 1 },
+        RawParams{ {{ 5, 5, 5, 5 }}, { -1, 0, -1, 0 }, { -50, 0, -60, 0 }, { -1, 1, -1, 1 },
                         { 0, 0, 0, 0 }, { 0, 1, 0, 1 },  { 0, 0, 0, 0 },  { 0, 0, 0, 0 },  { 0, 0, 0, 0 } },
-        StridedSliceSpecificParams{ { 1, 2, 4 }, { 0, 2000, 3, 5 }, { 0, 0, 0, 2 }, { 1, 1, 1, 1 },
+        RawParams{ {{ 1, 2, 4 }}, { 0, 2000, 3, 5 }, { 0, 0, 0, 2 }, { 1, 1, 1, 1 },
                         { 1, 0, 1, 1 }, { 1, 0, 1, 0 },  { 0, 1, 0, 0 },  { },  { } },
-        StridedSliceSpecificParams{ { 2, 2, 4, 4 }, { 0, 0, 0, 1, 0 }, { 0, 0, 0, 2, 0 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 2, 4, 4 }}, { 0, 0, 0, 1, 0 }, { 0, 0, 0, 2, 0 }, { 1, 1, 1, 1, 1 },
                         { 1, 1, 1, 0, 1 }, { 1, 1, 1, 0, 1 },  { 0, 1, 0, 0, 0 },  { },  { } },
-        StridedSliceSpecificParams{ { 2, 2, 2, 4, 4 }, { 0, 0, 0, 1, 0 }, { 0, 0, 0, 2, 0 }, { 1, 1, 1, 1, 1 },
+        RawParams{ {{ 2, 2, 2, 4, 4 }}, { 0, 0, 0, 1, 0 }, { 0, 0, 0, 2, 0 }, { 1, 1, 1, 1, 1 },
                         { 1, 1, 1, 0, 1 }, { 1, 1, 1, 0, 1 },  { },  { 0, 1, 0, 0, 0 },  { } },
-        StridedSliceSpecificParams{{1, 6400, 3, 85},
+        RawParams{ {{1, 6400, 3, 85}},
                 {0, 0},
                 {0, 2},
                 {1, 1},
@@ -127,17 +138,28 @@ std::vector<StridedSliceSpecificParams> ss_only_test_cases = {
                 {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},
 };

+auto ss_test_cases = [](const std::vector<RawParams>& raw_test_cases) {
+    std::vector<ov::test::StridedSliceSpecificParams> cases;
+    for (const auto& raw_case : raw_test_cases)
+        cases.push_back(ov::test::StridedSliceSpecificParams{
+            ov::test::static_shapes_to_test_representation(raw_case.input_shape),
+            raw_case.begin,
+            raw_case.end,
+            raw_case.strides,
+            raw_case.begin_mask,
+            raw_case.end_mask,
+            raw_case.new_axis_mask,
+            raw_case.shrink_axis_mask,
+            raw_case.ellipsis_axis_mask});
+    return cases;
+}(raw_test_cases);
+
 INSTANTIATE_TEST_SUITE_P(
         smoke, StridedSliceLayerTest,
         ::testing::Combine(
-            ::testing::ValuesIn(ss_only_test_cases),
-            ::testing::Values(InferenceEngine::Precision::FP32),
-            ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-            ::testing::Values(InferenceEngine::Precision::UNSPECIFIED),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(InferenceEngine::Layout::ANY),
-            ::testing::Values(ov::test::utils::DEVICE_CPU),
-            ::testing::Values(std::map<std::string, std::string>())),
+            ::testing::ValuesIn(ss_test_cases),
+            ::testing::Values(ov::element::f32),
+            ::testing::Values(ov::test::utils::DEVICE_CPU)),
         StridedSliceLayerTest::getTestCaseName);
 }  // namespace
diff --git a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/skip_tests_config.cpp b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/skip_tests_config.cpp
index f9670d2912134f..0b79292469176d 100644
--- a/src/plugins/intel_cpu/tests/functional/shared_tests_instances/skip_tests_config.cpp
+++ b/src/plugins/intel_cpu/tests/functional/shared_tests_instances/skip_tests_config.cpp
@@ -196,6 +196,12 @@ std::vector<std::string> disabledTestPatterns() {
         R"(.*smoke_TopK/TopKLayerTest.Inference.*_k=18_.*_modelType=f16_trgDev=CPU.*)",
         R"(.*smoke_TopK/TopKLayerTest.Inference.*_k=21_.*_sort=value_modelType=f16_trgDev=CPU.*)",
     };
+#if defined(__APPLE__) && defined(OPENVINO_ARCH_ARM64)
+    // Issue: 120950
+    retVector.emplace_back(R"(.*smoke_TensorIteratorCommon/TensorIteratorTest.Inference.*_modelType=f16_targetDevice=CPU.*)");
+    retVector.emplace_back(R"(.*smoke_CtcGreedyDecoderBasic/CTCGreedyDecoderLayerTest.Inference.*netPRC=f16.*trgDev=CPU.*)");
+    retVector.emplace_back(R"(.*CTCGreedyDecoderSeqLenLayerTest.Inference.*dataPRC=f16.*trgDev=CPU.*)");
+#endif

 #if defined(OPENVINO_ARCH_X86)
     retVector.emplace_back(R"(.*DetectionOutputLayerTest.*)");
diff --git a/src/plugins/intel_cpu/thirdparty/ACLConfig.cmake b/src/plugins/intel_cpu/thirdparty/ACLConfig.cmake
index b2614a73014b50..ef0427aa9c2168 100644
--- a/src/plugins/intel_cpu/thirdparty/ACLConfig.cmake
+++ b/src/plugins/intel_cpu/thirdparty/ACLConfig.cmake
@@ -335,7 +335,7 @@ elseif(NOT TARGET arm_compute::arm_compute)
         set(arm_compute_full_path "${arm_compute}")
     endif()

-    list(APPEND ARM_COMPUTE_OPTIONS experimental_fixed_format_kernels=True)
+    list(APPEND ARM_COMPUTE_OPTIONS fixed_format_kernels=True)

    add_custom_command(
         OUTPUT
diff --git a/src/plugins/intel_cpu/thirdparty/ComputeLibrary b/src/plugins/intel_cpu/thirdparty/ComputeLibrary
index d8bf9b53752a4f..874e0c7b3fe93a 160000
--- a/src/plugins/intel_cpu/thirdparty/ComputeLibrary
+++ b/src/plugins/intel_cpu/thirdparty/ComputeLibrary
@@ -1 +1 @@
-Subproject commit d8bf9b53752a4f573120cf51b31055de8b3c7d29
+Subproject commit 874e0c7b3fe93a6764ecb2d8cfad924af19a9d25
diff --git a/src/plugins/intel_cpu/thirdparty/onednn b/src/plugins/intel_cpu/thirdparty/onednn
index 31c8555b923e16..3767662f257270 160000
--- a/src/plugins/intel_cpu/thirdparty/onednn
+++ b/src/plugins/intel_cpu/thirdparty/onednn
@@ -1 +1 @@
-Subproject commit 31c8555b923e16b4ddfdcd1d1f126c115b5e0da7
+Subproject commit 3767662f257270921b64ec9a40e45a46b4ef048c
diff --git a/src/plugins/intel_gpu/README.md b/src/plugins/intel_gpu/README.md
index d8b81154f9368e..0cecca2eb7edbe 100644
--- a/src/plugins/intel_gpu/README.md
+++ b/src/plugins/intel_gpu/README.md
@@ -73,7 +73,7 @@ The software dependencies are:
   * clang 3.5 or later
   * [Intel® C++ Compiler](https://software.intel.com/en-us/intel-parallel-studio-xe) 17.0 or later
   * Visual C++ 2015 (MSVC++ 19.0) or later
-- [python™](https://www.python.org/downloads/) 3.7 or later.
+- [python™](https://www.python.org/downloads/) 3.8 or later.
## Trademark Information diff --git a/src/plugins/intel_gpu/src/graph/graph_optimizer/post_optimize_weights.cpp b/src/plugins/intel_gpu/src/graph/graph_optimizer/post_optimize_weights.cpp index 5aade76ae00035..175a6eef54b48f 100644 --- a/src/plugins/intel_gpu/src/graph/graph_optimizer/post_optimize_weights.cpp +++ b/src/plugins/intel_gpu/src/graph/graph_optimizer/post_optimize_weights.cpp @@ -84,20 +84,6 @@ void post_optimize_weights::optimize_weights(T& node, program& p) { !prev_node.has_fused_primitives() && !prev_node.as().has_mean() && prev_node.as().get_primitive()->subtract_per_feature.empty(); - if (impl->is_dynamic()) { - if (weights_reorder_params->get_output_layout().compatible(prev_node.get_output_layout())) { - // if compatible, it can be reinterpreted, thus no need to reorder at build time - continue; - } - // Need to restore the original shape - auto updated_output_layout = weights_reorder_params->get_output_layout(); - auto orig_rank = prev_node.get_output_layout().get_partial_shape().size(); - auto weight_format_dims = format::dimension(weights_reorder_params->get_output_layout().format); - updated_output_layout.set_partial_shape( - updated_output_layout.get_tensor().get_partial_shape(orig_rank, weight_format_dims)); - if (updated_output_layout != weights_reorder_params->get_output_layout()) - weights_reorder_params->set_output_layout(updated_output_layout); - } if (can_be_fused) { // Need to update input data_type for correct merging format reorder with precision reorder data_types input_dtype = prev_node.get_input_layouts()[0].data_type; diff --git a/src/plugins/intel_gpu/src/graph/impls/ocl/fully_connected.cpp b/src/plugins/intel_gpu/src/graph/impls/ocl/fully_connected.cpp index ab61746eec04c1..43ce081d2f69ea 100644 --- a/src/plugins/intel_gpu/src/graph/impls/ocl/fully_connected.cpp +++ b/src/plugins/intel_gpu/src/graph/impls/ocl/fully_connected.cpp @@ -19,6 +19,31 @@ struct fully_connected_impl : typed_primitive_impl_ocl { 
DECLARE_OBJECT_TYPE_SERIALIZATION(cldnn::ocl::fully_connected_impl) + fully_connected_impl() = default; + + fully_connected_impl(const kernel_selector::kernel_data& kd) { + const auto& params = kd.weightsReorderParams; + + if (params.is_initialized) { + // Assume the kernel data already contains weights reshaped to 2D + auto crop_to_2d = [](const ov::PartialShape& shape) { + return ov::PartialShape({shape[0], shape[1]}); + }; + + auto weights_reorder_params = std::make_shared(from_weights_tensor(params.src), + from_weights_tensor(params.dest), + params.rotate); + auto output_layout = weights_reorder_params->get_output_layout(); + output_layout.set_partial_shape(crop_to_2d(output_layout.get_partial_shape())); + weights_reorder_params->set_output_layout(output_layout); + + _weights_reorder_params = weights_reorder_params; + } + _kernel_data = kd; + _kernel_name = kd.kernelName; + can_reuse_memory = _kernel_data.can_reuse_memory; + } + std::unique_ptr clone() const override { return make_unique(*this); } diff --git a/src/plugins/intel_gpu/src/graph/impls/onednn/primitive_onednn_base.h b/src/plugins/intel_gpu/src/graph/impls/onednn/primitive_onednn_base.h index 92f68bf3105413..37d4fc5e67da3b 100644 --- a/src/plugins/intel_gpu/src/graph/impls/onednn/primitive_onednn_base.h +++ b/src/plugins/intel_gpu/src/graph/impls/onednn/primitive_onednn_base.h @@ -89,6 +89,7 @@ struct typed_primitive_onednn_impl : public typed_primitive_impl { typed_primitive_onednn_impl() : typed_primitive_impl({}, "undef"), + _engine(nullptr), _pd(), _prim() { _attrs = std::make_shared(); } diff --git a/src/plugins/intel_gpu/src/graph/include/scatter_elements_update_inst.h b/src/plugins/intel_gpu/src/graph/include/scatter_elements_update_inst.h index ec6676418d7dbc..c8d332124af1fd 100644 --- a/src/plugins/intel_gpu/src/graph/include/scatter_elements_update_inst.h +++ b/src/plugins/intel_gpu/src/graph/include/scatter_elements_update_inst.h @@ -22,6 +22,11 @@ class typed_primitive_inst : public
typed_primitive_ins public: typed_primitive_inst(network& network, scatter_elements_update_node const& desc); + void update_output_memory() override; + +private: + void on_execute() override; + void reuse_input(); }; using scatter_elements_update_inst = typed_primitive_inst; diff --git a/src/plugins/intel_gpu/src/graph/include/scatter_nd_update_inst.h b/src/plugins/intel_gpu/src/graph/include/scatter_nd_update_inst.h index d494a9501742d5..4d4a12f9df2ed9 100644 --- a/src/plugins/intel_gpu/src/graph/include/scatter_nd_update_inst.h +++ b/src/plugins/intel_gpu/src/graph/include/scatter_nd_update_inst.h @@ -24,6 +24,11 @@ class typed_primitive_inst : public typed_primitive_inst_base public: typed_primitive_inst(network& network, scatter_nd_update_node const& desc); + void update_output_memory() override; + +private: + void on_execute() override; + void reuse_input(); }; using scatter_nd_update_inst = typed_primitive_inst; diff --git a/src/plugins/intel_gpu/src/graph/include/scatter_update_inst.h b/src/plugins/intel_gpu/src/graph/include/scatter_update_inst.h index bd067ca8267246..61f872724438f8 100644 --- a/src/plugins/intel_gpu/src/graph/include/scatter_update_inst.h +++ b/src/plugins/intel_gpu/src/graph/include/scatter_update_inst.h @@ -37,6 +37,11 @@ class typed_primitive_inst : public typed_primitive_inst_base; diff --git a/src/plugins/intel_gpu/src/graph/scatter_elements_update.cpp b/src/plugins/intel_gpu/src/graph/scatter_elements_update.cpp index 5c12acafd5982a..88224259ce5a58 100644 --- a/src/plugins/intel_gpu/src/graph/scatter_elements_update.cpp +++ b/src/plugins/intel_gpu/src/graph/scatter_elements_update.cpp @@ -52,5 +52,27 @@ std::string scatter_elements_update_inst::to_string(scatter_elements_update_node } scatter_elements_update_inst::typed_primitive_inst(network& network, scatter_elements_update_node const& node) : parent(network, node) {} +void scatter_elements_update_inst::on_execute() { + auto input1_shape = 
_impl_params->input_layouts[1].get_partial_shape(); + auto input2_shape = _impl_params->input_layouts[2].get_partial_shape(); + if ((ov::shape_size(input1_shape.to_shape()) == 0) || (ov::shape_size(input2_shape.to_shape()) == 0)) + reuse_input(); +} + +void scatter_elements_update_inst::reuse_input() { + update_output_memory(); +} + +void scatter_elements_update_inst::update_output_memory() { + if (_outputs.size() > 0 && static_cast(_outputs[0]) + && _network.get_engine().is_the_same_buffer(output_memory(), input_memory())) + return; + + if (_node != nullptr) + build_deps(); + + _outputs = {_network.get_engine().reinterpret_buffer(input_memory(), _impl_params->get_output_layout())}; + _mem_allocated = false; +} } // namespace cldnn diff --git a/src/plugins/intel_gpu/src/graph/scatter_nd_update.cpp b/src/plugins/intel_gpu/src/graph/scatter_nd_update.cpp index 6ce0a50970d2cf..40d2b48d8edfaf 100644 --- a/src/plugins/intel_gpu/src/graph/scatter_nd_update.cpp +++ b/src/plugins/intel_gpu/src/graph/scatter_nd_update.cpp @@ -66,4 +66,27 @@ std::string scatter_nd_update_inst::to_string(scatter_nd_update_node const& node scatter_nd_update_inst::typed_primitive_inst(network& network, scatter_nd_update_node const& node) : parent(network, node) {} +void scatter_nd_update_inst::on_execute() { + auto input1_shape = _impl_params->input_layouts[1].get_partial_shape(); + auto input2_shape = _impl_params->input_layouts[2].get_partial_shape(); + + if ((ov::shape_size(input1_shape.to_shape()) == 0) || (ov::shape_size(input2_shape.to_shape()) == 0)) + reuse_input(); +} + +void scatter_nd_update_inst::reuse_input() { + update_output_memory(); +} + +void scatter_nd_update_inst::update_output_memory() { + if (_outputs.size() > 0 && static_cast(_outputs[0]) + && _network.get_engine().is_the_same_buffer(output_memory(), input_memory())) + return; + + if (_node != nullptr) + build_deps(); + + _outputs = {_network.get_engine().reinterpret_buffer(input_memory(), 
_impl_params->get_output_layout())}; + _mem_allocated = false; +} } // namespace cldnn diff --git a/src/plugins/intel_gpu/src/graph/scatter_update.cpp b/src/plugins/intel_gpu/src/graph/scatter_update.cpp index 07c8e0ab8f98b2..157a0dd54f3b6a 100644 --- a/src/plugins/intel_gpu/src/graph/scatter_update.cpp +++ b/src/plugins/intel_gpu/src/graph/scatter_update.cpp @@ -46,4 +46,27 @@ std::string scatter_update_inst::to_string(scatter_update_node const& node) { scatter_update_inst::typed_primitive_inst(network& network, scatter_update_node const& node) : parent(network, node) {} +void scatter_update_inst::on_execute() { + auto input1_shape = _impl_params->input_layouts[1].get_partial_shape(); + auto input2_shape = _impl_params->input_layouts[2].get_partial_shape(); + + if ((ov::shape_size(input1_shape.to_shape()) == 0) || (ov::shape_size(input2_shape.to_shape()) == 0)) + reuse_input(); +} + +void scatter_update_inst::reuse_input() { + update_output_memory(); +} + +void scatter_update_inst::update_output_memory() { + if (_outputs.size() > 0 && static_cast(_outputs[0]) + && _network.get_engine().is_the_same_buffer(output_memory(), input_memory())) + return; + + if (_node != nullptr) + build_deps(); + + _outputs = {_network.get_engine().reinterpret_buffer(input_memory(), _impl_params->get_output_layout())}; + _mem_allocated = false; +} } // namespace cldnn diff --git a/src/plugins/intel_gpu/tests/functional/single_layer_tests/dynamic/scatter_nd_update.cpp b/src/plugins/intel_gpu/tests/functional/single_layer_tests/dynamic/scatter_nd_update.cpp index 051084a042cdf0..49f73670e0cbe7 100644 --- a/src/plugins/intel_gpu/tests/functional/single_layer_tests/dynamic/scatter_nd_update.cpp +++ b/src/plugins/intel_gpu/tests/functional/single_layer_tests/dynamic/scatter_nd_update.cpp @@ -14,30 +14,38 @@ using namespace InferenceEngine; using namespace ov::test; namespace GPULayerTestsDefinitions { -using ScatterNDUpdateShapes = std::vector; +using ScatterUpdateShapes = std::vector; using 
IndicesValues = std::vector; -struct ScatterNDUpdateLayerParams { - ScatterNDUpdateShapes inputShapes; +enum class Scatterupdate_type { + Basic, + ND, + Elements +}; + +struct ScatterUpdateLayerParams { + ScatterUpdateShapes inputShapes; IndicesValues indicesValues; + Scatterupdate_type scType; // scatter update type }; typedef std::tuple< - ScatterNDUpdateLayerParams, + ScatterUpdateLayerParams, ElementType, // input precision ElementType // indices precision > ScatterUpdateParams; -class ScatterNDUpdateLayerGPUTest : public testing::WithParamInterface, +class ScatterUpdateLayerGPUTest : public testing::WithParamInterface, virtual public SubgraphBaseTest { public: static std::string getTestCaseName(testing::TestParamInfo obj) { - ScatterNDUpdateLayerParams scatterParams; + ScatterUpdateLayerParams scatterParams; ElementType inputPrecision; ElementType idxPrecision; std::tie(scatterParams, inputPrecision, idxPrecision) = obj.param; const auto inputShapes = scatterParams.inputShapes; const auto indicesValues = scatterParams.indicesValues; + const auto scType = scatterParams.scType; std::ostringstream result; result << inputPrecision << "_IS="; @@ -54,6 +62,18 @@ class ScatterNDUpdateLayerGPUTest : public testing::WithParamInterfaceGetParam(); const auto inputShapes = scatterParams.inputShapes; + const auto scType = scatterParams.scType; - init_input_shapes(inputShapes); + init_input_shapes({inputShapes[0], inputShapes[1], inputShapes[2]}); ov::ParameterVector dataParams{std::make_shared(inputPrecision, inputDynamicShapes[0]), @@ -113,7 +134,23 @@ class ScatterNDUpdateLayerGPUTest : public testing::WithParamInterfaceset_friendly_name("Param_2"); dataParams[1]->set_friendly_name("Param_3"); - auto scatter = std::make_shared(dataParams[0], indicesParam, dataParams[1]); + std::shared_ptr scatter; + switch (scType) { + case Scatterupdate_type::ND: { + scatter = std::make_shared(dataParams[0], indicesParam, dataParams[1]); + break; + } + case Scatterupdate_type::Elements: 
{ + auto axis = ov::op::v0::Constant::create(ov::element::i32, inputShapes[3].first.get_shape(), inputShapes[3].second[0]); + scatter = std::make_shared(dataParams[0], indicesParam, dataParams[1], axis); + break; + } + case Scatterupdate_type::Basic: + default: { + auto axis = ov::op::v0::Constant::create(ov::element::i32, inputShapes[3].first.get_shape(), inputShapes[3].second[0]); + scatter = std::make_shared(dataParams[0], indicesParam, dataParams[1], axis); + } + } ngraph::ParameterVector allParams{ dataParams[0], indicesParam, dataParams[1] }; @@ -123,51 +160,55 @@ class ScatterNDUpdateLayerGPUTest : public testing::WithParamInterfaceget_output_size(); i++) results.push_back(std::make_shared(lastNode->output(i))); - return std::make_shared(results, params, "ScatterNDUpdateLayerGPUTest"); + return std::make_shared(results, params, "ScatterUpdateLayerGPUTest"); }; function = makeFunction(allParams, scatter); } }; -TEST_P(ScatterNDUpdateLayerGPUTest, CompareWithRefs) { +TEST_P(ScatterUpdateLayerGPUTest, CompareWithRefs) { SKIP_IF_CURRENT_TEST_IS_DISABLED() run(); } namespace ScatterNDUpdate { -const std::vector scatterParams = { - ScatterNDUpdateLayerParams{ - ScatterNDUpdateShapes{ +const std::vector scatterParams = { + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ {{-1, -1, -1, -1, -1}, {{10, 9, 10, 9, 10}, {10, 1, 11, 2, 5}, {10, 15, 8, 1, 7}}}, {{2, 2, 1}, {{2, 2, 1}, {2, 2, 1}, {2, 2, 1}}}, {{-1, -1, -1, -1, -1, -1}, {{2, 2, 9, 10, 9, 10}, {2, 2, 1, 11, 2, 5}, {2, 2, 15, 8, 1, 7}}}, }, - IndicesValues{ 5, 6, 2, 8 } + IndicesValues{ 5, 6, 2, 8 }, + Scatterupdate_type::ND }, - ScatterNDUpdateLayerParams{ - ScatterNDUpdateShapes{ + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ {{-1, -1, -1, -1}, {{ 10, 9, 9, 11 }, { 7, 5, 3, 12 }, { 3, 4, 9, 8 }}}, {{2, 3}, {{2, 3}, {2, 3}, {2, 3}}}, {{-1, -1}, {{2, 11}, {2, 12}, {2, 8}}} }, - IndicesValues{ 0, 1, 1, 2, 2, 2 } + IndicesValues{ 0, 1, 1, 2, 2, 2 }, + Scatterupdate_type::ND }, - ScatterNDUpdateLayerParams{ - 
ScatterNDUpdateShapes{ + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ {{{3, 10}, -1, {3, 9}, -1}, {{ 10, 9, 9, 11 }, { 7, 5, 3, 12 }, { 3, 4, 9, 8 }}}, {{2, 3}, {{2, 3}, {2, 3}, {2, 3}}}, {{{2, 4}, -1}, {{2, 11}, {2, 12}, {2, 8}}} }, - IndicesValues{ 0, 1, 1, 2, 2, 2 } + IndicesValues{ 0, 1, 1, 2, 2, 2 }, + Scatterupdate_type::ND }, - ScatterNDUpdateLayerParams{ - ScatterNDUpdateShapes{ + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ {{{3, 10}, {4, 11}, {3, 9}, {8, 15}}, {{ 10, 9, 9, 11 }, { 7, 5, 3, 12 }, { 3, 4, 9, 8 }}}, {{2, 3}, {{2, 3}, {2, 3}, {2, 3}}}, {{{2, 4}, -1}, {{2, 11}, {2, 12}, {2, 8}}} }, - IndicesValues{ 0, 1, 1, 2, 2, 2 } + IndicesValues{ 0, 1, 1, 2, 2, 2 }, + Scatterupdate_type::ND }, }; @@ -179,12 +220,71 @@ const std::vector constantPrecisions = { ElementType::i32, }; -INSTANTIATE_TEST_SUITE_P(smoke_scatterndupdate_CompareWithRefs_dynamic, ScatterNDUpdateLayerGPUTest, +const std::vector scatterUpdate_EmptyInput1_2Params = { + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ + {{-1, -1, -1, -1}, {{ 100, 256, 14, 14 }}}, + {{-1}, {{ 0 }}}, + {{-1, 256, 14, 14}, {{ 0, 256, 14, 14 }}}, + {{1}, {{0}}} + }, + IndicesValues{ 0 }, + Scatterupdate_type::Basic + }, +}; + +const std::vector scatterNDUpdate_EmptyInput1_2Params = { + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ + {{-1, -1, -1, -1}, {{ 100, 256, 14, 14 }}}, + {{-1, 1}, {{ 0, 1 }}}, + {{-1, 256, 14, 14}, {{ 0, 256, 14, 14 }}} + }, + IndicesValues{ 0 }, + Scatterupdate_type::ND + }, +}; + +const std::vector scatterElementsUpdate_EmptyInput1_2Params = { + ScatterUpdateLayerParams{ + ScatterUpdateShapes{ + {{-1, -1, -1, -1}, {{ 100, 256, 14, 14 }}}, + {{-1, -1, 14, 14}, {{ 0, 256, 14, 14 }}}, + {{-1, 256, 14, 14}, {{ 0, 256, 14, 14 }}}, + {{1}, {{0}}} + }, + IndicesValues{ 0 }, + Scatterupdate_type::Elements + }, +}; + +INSTANTIATE_TEST_SUITE_P(smoke_ScatterNDUpdate_CompareWithRefs_dynamic, ScatterUpdateLayerGPUTest, ::testing::Combine( ::testing::ValuesIn(scatterParams), 
::testing::ValuesIn(inputPrecisions), ::testing::ValuesIn(constantPrecisions)), - ScatterNDUpdateLayerGPUTest::getTestCaseName); + ScatterUpdateLayerGPUTest::getTestCaseName); + +INSTANTIATE_TEST_SUITE_P(smoke_ScatterUpdate_EmptyInput1_2_CompareWithRefs_dynamic, ScatterUpdateLayerGPUTest, + ::testing::Combine( + ::testing::ValuesIn(scatterUpdate_EmptyInput1_2Params), + ::testing::ValuesIn(inputPrecisions), + ::testing::ValuesIn(constantPrecisions)), + ScatterUpdateLayerGPUTest::getTestCaseName); +INSTANTIATE_TEST_SUITE_P(smoke_ScatterNDUpdate_EmptyInput1_2_CompareWithRefs_dynamic, ScatterUpdateLayerGPUTest, + ::testing::Combine( + ::testing::ValuesIn(scatterNDUpdate_EmptyInput1_2Params), + ::testing::ValuesIn(inputPrecisions), + ::testing::ValuesIn(constantPrecisions)), + ScatterUpdateLayerGPUTest::getTestCaseName); + +// ScatterElementsUpdate doesn't support dynamic shapes yet. Enable this suite once it does. +INSTANTIATE_TEST_SUITE_P(DISABLED_smoke_ScatterElementsUpdate_EmptyInput1_2_CompareWithRefs_dynamic, ScatterUpdateLayerGPUTest, + ::testing::Combine( + ::testing::ValuesIn(scatterElementsUpdate_EmptyInput1_2Params), + ::testing::ValuesIn(inputPrecisions), + ::testing::ValuesIn(constantPrecisions)), + ScatterUpdateLayerGPUTest::getTestCaseName); } // namespace ScatterNDUpdate } // namespace GPULayerTestsDefinitions diff --git a/src/plugins/intel_gpu/tests/unit/test_cases/fully_connected_gpu_test.cpp b/src/plugins/intel_gpu/tests/unit/test_cases/fully_connected_gpu_test.cpp index 89f3598f2d421f..664e91d3a63017 100644 --- a/src/plugins/intel_gpu/tests/unit/test_cases/fully_connected_gpu_test.cpp +++ b/src/plugins/intel_gpu/tests/unit/test_cases/fully_connected_gpu_test.cpp @@ -3012,3 +3012,51 @@ INSTANTIATE_TEST_SUITE_P( ), fully_connected_types_u8_f32_test::PrintToStringParamName ); + +TEST(fully_connected_gpu, weights_reorder_shapes_update_test) { + auto& engine = get_test_engine(); + + const int32_t input_f = 3, input_b = 1, weight_b = 4; + + auto
input_dyn_layout = layout{ ov::PartialShape{ ov::Dimension(1, 10), input_f }, data_types::f32, format::bfyx }; + auto input_data = engine.allocate_memory(layout{ ov::PartialShape{ input_b, input_f }, data_types::f32, format::bfyx }); + auto weights_data = engine.allocate_memory({ ov::PartialShape{ weight_b, input_f }, data_types::f32, format::bfyx }); + + set_values(input_data, { -0.5f, 2.0f, 0.5f }); + set_values(weights_data, { 1.5f, 1.0f, 0.5f, -1.0f, 0.0f, 0.5f, 0.5f, -0.5f, -2.0f, -0.5f, 1.0f, 1.5f }); + + cldnn::topology topology{ + input_layout("input", input_dyn_layout), + data("weights", weights_data), + fully_connected("fc", input_info("input"), "weights") + }; + + ExecutionConfig config = get_test_default_config(engine); + config.set_property(ov::intel_gpu::optimize_data(true)); + config.set_property(ov::intel_gpu::allow_new_shape_infer(true)); + network network(engine, topology, config); + network.set_input_data("input", input_data); + + auto outputs = network.execute(); + ASSERT_EQ(outputs.size(), size_t(1)); + ASSERT_EQ(outputs.begin()->first, "fc"); + + auto inst = network.get_primitive("fc"); + auto impl = inst->get_impl(); + ASSERT_TRUE(impl != nullptr); + ASSERT_TRUE(impl->is_dynamic()); + + ASSERT_TRUE(impl->need_weights_reorder()); + auto weights_reorder_params = impl->get_weights_reorder_params(); + auto out_weights_reorder_layout = weights_reorder_params->get_output_layout(); + auto out_weights_reorder_pshape = out_weights_reorder_layout.get_partial_shape(); + ASSERT_EQ(weights_data->get_layout().get_partial_shape(), out_weights_reorder_pshape); + + auto output_prim_mem = outputs.begin()->second.get_memory(); + cldnn::mem_lock output_ptr (output_prim_mem, get_test_stream()); + + ASSERT_EQ(1.5f, output_ptr[0]); + ASSERT_EQ(0.75f, output_ptr[1]); + ASSERT_EQ(-2.25f, output_ptr[2]); + ASSERT_EQ(3.0f, output_ptr[3]); +} diff --git a/src/plugins/template/tests/functional/shared_tests_instances/behavior/plugin/synthetic.cpp 
b/src/plugins/template/tests/functional/shared_tests_instances/behavior/plugin/synthetic.cpp index 6a152b6bdcd615..96444bd1dda303 100644 --- a/src/plugins/template/tests/functional/shared_tests_instances/behavior/plugin/synthetic.cpp +++ b/src/plugins/template/tests/functional/shared_tests_instances/behavior/plugin/synthetic.cpp @@ -43,7 +43,7 @@ INSTANTIATE_TEST_SUITE_P( ::testing::ValuesIn(HeteroTests::HeteroSyntheticTest::_randomMajorNodeFunctions)), HeteroSyntheticTest::getTestCaseName); -static std::vector()>> dynamicBuilders = { +static std::vector()>> dynamicBuilders = { [] { return ngraph::builder::subgraph::makeConvPoolReluNonZero(); }, diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/convolution.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/convolution.hpp new file mode 100644 index 00000000000000..c06b0109855df1 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/convolution.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/convolution.hpp" + +namespace ov { +namespace test { +TEST_P(ConvolutionLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/convolution_backprop_data.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/convolution_backprop_data.hpp new file mode 100644 index 00000000000000..856158791caa91 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/convolution_backprop_data.hpp @@ -0,0 +1,16 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +// DEPRECATED, can't be removed currently due to arm and kmb-plugin dependency (#55568) +#pragma once + +#include "shared_test_classes/single_op/convolution_backprop_data.hpp" + +namespace ov { +namespace test { 
+TEST_P(ConvolutionBackpropDataLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/deformable_convolution.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/deformable_convolution.hpp new file mode 100644 index 00000000000000..eb21366c53f910 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/deformable_convolution.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/deformable_convolution.hpp" + +namespace ov { +namespace test { +TEST_P(DeformableConvolutionLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/shape_of.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/shape_of.hpp new file mode 100644 index 00000000000000..50b9a388e9b6dd --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/shape_of.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/shape_of.hpp" + +namespace ov { +namespace test { +TEST_P(ShapeOfLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/shuffle_channels.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/shuffle_channels.hpp new file mode 100644 index 00000000000000..10d0af958f3470 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/shuffle_channels.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/shuffle_channels.hpp" + +namespace ov { +namespace test { +TEST_P(ShuffleChannelsLayerTest, 
Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/slice.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/slice.hpp new file mode 100644 index 00000000000000..b452ce192fa651 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/slice.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/slice.hpp" + +namespace ov { +namespace test { +TEST_P(Slice8LayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/space_to_batch.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/space_to_batch.hpp new file mode 100644 index 00000000000000..074ce3be677172 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/space_to_batch.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/space_to_batch.hpp" + +namespace ov { +namespace test { +TEST_P(SpaceToBatchLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/space_to_depth.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/space_to_depth.hpp new file mode 100644 index 00000000000000..2ceabfcb076dbe --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/space_to_depth.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/space_to_depth.hpp" + +namespace ov { +namespace test { +TEST_P(SpaceToDepthLayerTest, Inference) { + run(); +}; +} // namespace test +} // namespace ov diff --git 
a/src/tests/functional/plugin/shared/include/single_op_tests/split.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/split.hpp new file mode 100644 index 00000000000000..cc8f862fc14410 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/split.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/split.hpp" + +namespace ov { +namespace test { +TEST_P(SplitLayerTest, Inference) { + run(); +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/squeeze_unsqueeze.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/squeeze_unsqueeze.hpp new file mode 100644 index 00000000000000..be43c56b7ac86e --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/squeeze_unsqueeze.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/squeeze_unsqueeze.hpp" + +namespace ov { +namespace test { +TEST_P(SqueezeUnsqueezeLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/plugin/shared/include/single_op_tests/strided_slice.hpp b/src/tests/functional/plugin/shared/include/single_op_tests/strided_slice.hpp new file mode 100644 index 00000000000000..785d8cc6f76d55 --- /dev/null +++ b/src/tests/functional/plugin/shared/include/single_op_tests/strided_slice.hpp @@ -0,0 +1,15 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include "shared_test_classes/single_op/strided_slice.hpp" + +namespace ov { +namespace test { +TEST_P(StridedSliceLayerTest, Inference) { + run(); +} +} // namespace test +} // namespace ov diff --git 
a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution.hpp new file mode 100644 index 00000000000000..f690f977f0bfb5 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution.hpp @@ -0,0 +1,43 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include +#include +#include +#include + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +// ! [test_convolution:definition] +typedef std::tuple< + std::vector, // Kernel size + std::vector, // Strides + std::vector, // Pad begin + std::vector, // Pad end + std::vector, // Dilation + size_t, // Num out channels + ov::op::PadType // Padding type +> convSpecificParams; +typedef std::tuple< + convSpecificParams, + ov::element::Type, // Model type + std::vector, // Input shapes + std::string // Device name +> convLayerTestParamsSet; + +class ConvolutionLayerTest : public testing::WithParamInterface, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo& obj); + +protected: + void SetUp() override; +}; +// ! 
[test_convolution:definition] +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution_backprop_data.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution_backprop_data.hpp new file mode 100644 index 00000000000000..110cc7a301ef8b --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/convolution_backprop_data.hpp @@ -0,0 +1,44 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +// DEPRECATED, can't be removed currently due to arm and kmb-plugin dependency (#55568) + +#pragma once + +#include +#include +#include + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +typedef std::tuple< + std::vector, // Kernel size + std::vector, // Strides + std::vector, // Pad begin + std::vector, // Pad end + std::vector, // Dilation + size_t, // Num out channels + ov::op::PadType, // Padding type + std::vector // Output padding +> convBackpropDataSpecificParams; +typedef std::tuple< + convBackpropDataSpecificParams, + ov::element::Type, // Net precision + std::vector, // Input shapes + ov::Shape, // Output shapes + std::string // Device name +> convBackpropDataLayerTestParamsSet; + +class ConvolutionBackpropDataLayerTest : public testing::WithParamInterface, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo& obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/deformable_convolution.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/deformable_convolution.hpp new file mode 100644 index 00000000000000..e88d9871b48d49 --- /dev/null +++ 
b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/deformable_convolution.hpp @@ -0,0 +1,43 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include +#include +#include + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +typedef std::tuple< + std::vector, // Strides + std::vector, // Pad begin + std::vector, // Pad end + std::vector, // Dilation + size_t, // Groups + size_t, // Deformable groups + size_t, // Num out channels + ov::op::PadType, // Padding type + bool // Bilinear interpolation pad +> deformableConvSpecificParams; +typedef std::tuple< + deformableConvSpecificParams, + bool, // Modulation + ov::element::Type, // Model type + std::vector, // Input shapes + std::string // Device name +> deformableConvLayerTestParamsSet; + +class DeformableConvolutionLayerTest : public testing::WithParamInterface, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo& obj); +protected: + void SetUp() override; +}; + +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shape_of.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shape_of.hpp new file mode 100644 index 00000000000000..f48238967c8c41 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shape_of.hpp @@ -0,0 +1,32 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include +#include +#include +#include + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +typedef std::tuple< + ov::element::Type, // Model type + ov::element::Type, // Output type + std::vector, // Input shapes + ov::test::TargetDevice // Device name +> shapeOfParams; + +class ShapeOfLayerTest : 
public testing::WithParamInterface, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(testing::TestParamInfo obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shuffle_channels.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shuffle_channels.hpp new file mode 100644 index 00000000000000..8f315bde169c6c --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/shuffle_channels.hpp @@ -0,0 +1,38 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include +#include +#include +#include + +#include "shared_test_classes/base/layer_test_utils.hpp" +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +typedef std::tuple< + int, // axis + int // group +> shuffleChannelsSpecificParams; + +typedef std::tuple< + shuffleChannelsSpecificParams, + ov::element::Type, // Model type + std::vector, // Input shapes + ov::test::TargetDevice // Device name +> shuffleChannelsLayerTestParamsSet; + +class ShuffleChannelsLayerTest : public testing::WithParamInterface, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo& obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/slice.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/slice.hpp new file mode 100644 index 00000000000000..f98fd74cb745c0 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/slice.hpp @@ -0,0 +1,40 @@ +// SPDX-License-Identifier: Apache-2.0 +// Copyright (C) 2018-2023 Intel Corporation +// + +#pragma once + 
+#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/layer_test_utils.hpp" +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +struct Slice8SpecificParams { + std::vector<InputShape> shapes; + std::vector<int64_t> start; + std::vector<int64_t> stop; + std::vector<int64_t> step; + std::vector<int64_t> axes; +}; + +using Slice8Params = std::tuple< + Slice8SpecificParams, // Slice-8 specific parameters + ov::element::Type, // Model type + ov::test::TargetDevice // Device name +>; + +class Slice8LayerTest : public testing::WithParamInterface<Slice8Params>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<Slice8Params> &obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_batch.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_batch.hpp new file mode 100644 index 00000000000000..0e2af06c3f9f39 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_batch.hpp @@ -0,0 +1,33 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +using spaceToBatchParamsTuple = typename std::tuple< + std::vector<int64_t>, // block_shape + std::vector<int64_t>, // pads_begin + std::vector<int64_t>, // pads_end + std::vector<InputShape>, // Input shapes + ov::element::Type, // Model type + std::string>; // Device name + +class SpaceToBatchLayerTest : public testing::WithParamInterface<spaceToBatchParamsTuple>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<spaceToBatchParamsTuple> &obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git 
a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_depth.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_depth.hpp new file mode 100644 index 00000000000000..170c0c5d5f6bcd --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/space_to_depth.hpp @@ -0,0 +1,32 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +using spaceToDepthParamsTuple = typename std::tuple< + std::vector<InputShape>, // Input shape + ov::element::Type, // Model type + ov::op::v0::SpaceToDepth::SpaceToDepthMode, // Mode + std::size_t, // Block size + std::string>; // Device name + +class SpaceToDepthLayerTest : public testing::WithParamInterface<spaceToDepthParamsTuple>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<spaceToDepthParamsTuple> &obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/split.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/split.hpp new file mode 100644 index 00000000000000..a7cb1495a34385 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/split.hpp @@ -0,0 +1,34 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +typedef std::tuple< + size_t, // Num splits + int64_t, // Axis + ov::element::Type, // Model type + std::vector<InputShape>, // Input shapes + std::vector<size_t>, // Used outputs indices + std::string // Target device name +> splitParams; + +class 
SplitLayerTest : public testing::WithParamInterface<splitParams>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<splitParams>& obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/squeeze_unsqueeze.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/squeeze_unsqueeze.hpp new file mode 100644 index 00000000000000..7a12a5ba68dd6b --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/squeeze_unsqueeze.hpp @@ -0,0 +1,34 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/ov_subgraph.hpp" +#include "common_test_utils/test_enums.hpp" + +namespace ov { +namespace test { +using ShapeAxesTuple = std::pair<std::vector<InputShape>, std::vector<int>>; + +typedef std::tuple< + ShapeAxesTuple, // InputShape (required), Squeeze indexes (if empty treated as non-existent) + ov::test::utils::SqueezeOpType, // Op type + ov::element::Type, // Model type + ov::test::TargetDevice // Target device name +> squeezeParams; + +class SqueezeUnsqueezeLayerTest : public testing::WithParamInterface<squeezeParams>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<squeezeParams>& obj); +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/strided_slice.hpp b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/strided_slice.hpp new file mode 100644 index 00000000000000..fca314279af011 --- /dev/null +++ b/src/tests/functional/shared_test_classes/include/shared_test_classes/single_op/strided_slice.hpp @@ -0,0 +1,43 @@ +// Copyright (C) 2018-2023 Intel 
Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#pragma once + +#include <tuple> +#include <string> +#include <vector> +#include <memory> + +#include "shared_test_classes/base/ov_subgraph.hpp" + +namespace ov { +namespace test { +struct StridedSliceSpecificParams { + std::vector<InputShape> input_shape; + std::vector<int64_t> begin; + std::vector<int64_t> end; + std::vector<int64_t> strides; + std::vector<int64_t> begin_mask; + std::vector<int64_t> end_mask; + std::vector<int64_t> new_axis_mask; + std::vector<int64_t> shrink_axis_mask; + std::vector<int64_t> ellipsis_axis_mask; +}; + +using StridedSliceParams = std::tuple< + StridedSliceSpecificParams, + ov::element::Type, // Model type + ov::test::TargetDevice // Device name +>; + +class StridedSliceLayerTest : public testing::WithParamInterface<StridedSliceParams>, + virtual public ov::test::SubgraphBaseTest { +public: + static std::string getTestCaseName(const testing::TestParamInfo<StridedSliceParams> &obj); + +protected: + void SetUp() override; +}; +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/base/utils/generate_inputs.cpp b/src/tests/functional/shared_test_classes/src/base/utils/generate_inputs.cpp index 7a7e2bc4a89879..be877939273ae7 100644 --- a/src/tests/functional/shared_test_classes/src/base/utils/generate_inputs.cpp +++ b/src/tests/functional/shared_test_classes/src/base/utils/generate_inputs.cpp @@ -7,7 +7,7 @@ #include "ov_ops/augru_cell.hpp" #include "ov_ops/augru_sequence.hpp" -#include +#include "common_test_utils/ov_tensor_utils.hpp" #include "shared_test_classes/single_layer/roi_align.hpp" #include "shared_test_classes/single_layer/psroi_pooling.hpp" @@ -963,6 +963,29 @@ ov::runtime::Tensor generate(const return tensor; } +ov::runtime::Tensor generate(const + std::shared_ptr& node, + size_t port, + const ov::element::Type& elemType, + const ov::Shape& targetShape) { + InputGenerateData in_gen_data; + if (elemType.is_real()) { + set_real_number_generation_data(in_gen_data); + } + + if (1 == port) { + in_gen_data.range = 2; + in_gen_data.start_from = 0; + in_gen_data.resolution = 10; + } 
else if (2 == port) { + in_gen_data.range = 1; + in_gen_data.start_from = 0; + in_gen_data.resolution = 20; + } + return ov::test::utils::create_and_fill_tensor(elemType, targetShape, in_gen_data.range, + in_gen_data.start_from, in_gen_data.resolution, in_gen_data.seed); +} + namespace comparison { void fill_tensor(ov::Tensor& tensor) { auto data_ptr = static_cast(tensor.data()); diff --git a/src/tests/functional/shared_test_classes/src/single_op/convolution.cpp b/src/tests/functional/shared_test_classes/src/single_op/convolution.cpp new file mode 100644 index 00000000000000..0e44df42406b60 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/convolution.cpp @@ -0,0 +1,78 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/convolution.hpp" + +#include "openvino/op/parameter.hpp" +#include "openvino/op/constant.hpp" +#include "openvino/op/result.hpp" +#include "openvino/op/convolution.hpp" +#include "common_test_utils/data_utils.hpp" +#include "common_test_utils/ov_tensor_utils.hpp" + +namespace ov { +namespace test { +std::string ConvolutionLayerTest::getTestCaseName(const testing::TestParamInfo& obj) { + convSpecificParams conv_params; + ov::element::Type model_type; + std::vector shapes; + std::string targetDevice; + std::tie(conv_params, model_type, shapes, targetDevice) = obj.param; + ngraph::op::PadType pad_type; + InferenceEngine::SizeVector kernel, stride, dilation; + std::vector pad_begin, pad_end; + size_t conv_out_channels; + std::tie(kernel, stride, pad_begin, pad_end, dilation, conv_out_channels, pad_type) = conv_params; + + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < shapes.size(); i++) { + result << ov::test::utils::partialShape2str({shapes[i].first}) << (i < shapes.size() - 1lu ? 
"_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < shapes.size(); j++) { + result << ov::test::utils::vec2str(shapes[j].second[i]) << (j < shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "K" << ov::test::utils::vec2str(kernel) << "_"; + result << "S" << ov::test::utils::vec2str(stride) << "_"; + result << "PB" << ov::test::utils::vec2str(pad_begin) << "_"; + result << "PE" << ov::test::utils::vec2str(pad_end) << "_"; + result << "D=" << ov::test::utils::vec2str(dilation) << "_"; + result << "O=" << conv_out_channels << "_"; + result << "AP=" << pad_type << "_"; + result << "netPRC=" << model_type.get_type_name() << "_"; + result << "trgDev=" << targetDevice; + return result.str(); +} + +void ConvolutionLayerTest::SetUp() { + convSpecificParams conv_params; + std::vector shapes; + ov::element::Type model_type; + std::tie(conv_params, model_type, shapes, targetDevice) = this->GetParam(); + init_input_shapes(shapes); + + ov::op::PadType pad_type; + InferenceEngine::SizeVector kernel, stride, dilation; + std::vector pad_begin, pad_end; + size_t conv_out_channels; + std::tie(kernel, stride, pad_begin, pad_end, dilation, conv_out_channels, pad_type) = conv_params; + + ov::ParameterVector params{std::make_shared(model_type, inputDynamicShapes.front())}; + + ov::Shape filterWeightsShape = {conv_out_channels, static_cast(inputDynamicShapes.front()[1].get_length())}; + filterWeightsShape.insert(filterWeightsShape.end(), kernel.begin(), kernel.end()); + + auto tensor = ov::test::utils::create_and_fill_tensor(model_type, filterWeightsShape); + auto filter_weights_node = std::make_shared(tensor); + auto conv = std::make_shared(params[0], filter_weights_node, stride, pad_begin, pad_end, dilation, pad_type); + + function = std::make_shared(std::make_shared(conv), params, "convolution"); +} +} // namespace test +} // namespace ov diff --git 
a/src/tests/functional/shared_test_classes/src/single_op/convolution_backprop_data.cpp b/src/tests/functional/shared_test_classes/src/single_op/convolution_backprop_data.cpp new file mode 100644 index 00000000000000..c858fac6a3e97f --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/convolution_backprop_data.cpp @@ -0,0 +1,87 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +// DEPRECATED, can't be removed currently due to arm and kmb-plugin dependency (#55568) + +#include "shared_test_classes/single_op/convolution_backprop_data.hpp" + +#include "openvino/op/parameter.hpp" +#include "openvino/op/constant.hpp" +#include "openvino/op/result.hpp" +#include "openvino/op/convolution.hpp" +#include "ngraph_functions/builders.hpp" + +namespace ov { +namespace test { +std::string ConvolutionBackpropDataLayerTest::getTestCaseName(const testing::TestParamInfo& obj) { + convBackpropDataSpecificParams convBackpropDataParams; + ov::element::Type model_type; + std::vector shapes; + ov::Shape output_shapes; + std::string target_device; + std::tie(convBackpropDataParams, model_type, shapes, output_shapes, target_device) = obj.param; + ov::op::PadType pad_type; + InferenceEngine::SizeVector kernel, stride, dilation; + std::vector pad_begin, pad_end, out_padding; + size_t convOutChannels; + std::tie(kernel, stride, pad_begin, pad_end, dilation, convOutChannels, pad_type, out_padding) = convBackpropDataParams; + + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < shapes.size(); i++) { + result << ov::test::utils::partialShape2str({shapes[i].first}) << (i < shapes.size() - 1lu ? "_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < shapes.size(); j++) { + result << ov::test::utils::vec2str(shapes[j].second[i]) << (j < shapes.size() - 1lu ? 
"_" : ""); + } + result << "}_"; + } + result << "OS=" << ov::test::utils::vec2str(output_shapes) << "_"; + result << "K" << ov::test::utils::vec2str(kernel) << "_"; + result << "S" << ov::test::utils::vec2str(stride) << "_"; + result << "PB" << ov::test::utils::vec2str(pad_begin) << "_"; + result << "PE" << ov::test::utils::vec2str(pad_end) << "_"; + result << "D=" << ov::test::utils::vec2str(dilation) << "_"; + result << "OP=" << ov::test::utils::vec2str(out_padding) << "_"; + result << "O=" << convOutChannels << "_"; + result << "AP=" << pad_type << "_"; + result << "netPRC=" << model_type.get_type_name() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void ConvolutionBackpropDataLayerTest::SetUp() { + convBackpropDataSpecificParams convBackpropDataParams; + std::vector shapes; + ov::Shape output_shape; + ov::element::Type model_type; + std::tie(convBackpropDataParams, model_type, shapes, output_shape, targetDevice) = this->GetParam(); + init_input_shapes(shapes); + + ov::op::PadType pad_type; + InferenceEngine::SizeVector kernel, stride, dilation; + std::vector pad_begin, pad_end, out_padding; + size_t convOutChannels; + std::tie(kernel, stride, pad_begin, pad_end, dilation, convOutChannels, pad_type, out_padding) = convBackpropDataParams; + + ov::ParameterVector params{std::make_shared(model_type, inputDynamicShapes.front())}; + + std::shared_ptr convBackpropData; + if (!output_shape.empty()) { + auto outShape = ov::op::v0::Constant::create(ov::element::i64, {output_shape.size()}, output_shape); + convBackpropData = std::dynamic_pointer_cast( + ngraph::builder::makeConvolutionBackpropData(params[0]->output(0), outShape, model_type, kernel, stride, pad_begin, + pad_end, dilation, pad_type, convOutChannels)); + } else { + convBackpropData = std::dynamic_pointer_cast( + ngraph::builder::makeConvolutionBackpropData(params[0]->output(0), model_type, kernel, stride, pad_begin, + pad_end, dilation, pad_type, convOutChannels, false, 
out_padding)); + } + function = std::make_shared(std::make_shared(convBackpropData), params, "convolutionBackpropData"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/deformable_convolution.cpp b/src/tests/functional/shared_test_classes/src/single_op/deformable_convolution.cpp new file mode 100644 index 00000000000000..82367cfc3b6c96 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/deformable_convolution.cpp @@ -0,0 +1,98 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// +#include "shared_test_classes/single_op/deformable_convolution.hpp" + +#include "common_test_utils/ov_tensor_utils.hpp" +#include "openvino/op/parameter.hpp" +#include "openvino/op/constant.hpp" +#include "openvino/op/result.hpp" +#include "openvino/op/deformable_convolution.hpp" + + +namespace ov { +namespace test { +std::string DeformableConvolutionLayerTest::getTestCaseName(const testing::TestParamInfo& obj) { + deformableConvSpecificParams convParams; + ov::element::Type model_type; + std::vector shapes; + std::string target_device; + bool with_modulation; + std::tie(convParams, with_modulation, model_type, shapes, target_device) = obj.param; + ov::op::PadType padType; + std::vector stride, dilation; + std::vector pad_begin, pad_end; + size_t groups, deformable_groups, conv_out_channels; + bool with_bilinear_interpolation_pad; + std::tie(stride, pad_begin, pad_end, dilation, groups, deformable_groups, conv_out_channels, padType, with_bilinear_interpolation_pad) = convParams; + + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < shapes.size(); i++) { + result << ov::test::utils::partialShape2str({shapes[i].first}) << (i < shapes.size() - 1lu ? 
"_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < shapes.size(); j++) { + result << ov::test::utils::vec2str(shapes[j].second[i]) << (j < shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "S" << ov::test::utils::vec2str(stride) << "_"; + result << "PB" << ov::test::utils::vec2str(pad_begin) << "_"; + result << "PE" << ov::test::utils::vec2str(pad_end) << "_"; + result << "D=" << ov::test::utils::vec2str(dilation) << "_"; + result << "G=" << groups << "_"; + result << "DG=" << deformable_groups << "_"; + result << "O=" << conv_out_channels << "_"; + result << "AP=" << padType << "_"; + result << "BI_PAD=" << with_bilinear_interpolation_pad << "_"; + result << "MODULATION=" << with_modulation << "_"; + result << "netPRC=" << model_type.get_type_name() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void DeformableConvolutionLayerTest::SetUp() { + deformableConvSpecificParams convParams; + ov::element::Type model_type; + std::vector shapes; + bool with_modulation; + std::tie(convParams, with_modulation, model_type, shapes, targetDevice) = this->GetParam(); + init_input_shapes(shapes); + + ov::op::PadType padType; + std::vector stride, dilation; + std::vector pad_begin, pad_end; + size_t groups, deformable_groups, conv_out_channels; + bool with_bilinear_interpolation_pad; + std::tie(stride, pad_begin, pad_end, dilation, groups, deformable_groups, conv_out_channels, padType, with_bilinear_interpolation_pad) = convParams; + + auto data = std::make_shared(model_type, inputDynamicShapes[0]); + data->set_friendly_name("a_data"); + auto offset_vals = std::make_shared(model_type, inputDynamicShapes[1]); + offset_vals->set_friendly_name("b_offset_vals"); + auto filter_vals = std::make_shared(model_type, inputDynamicShapes[2]); + filter_vals->set_friendly_name("c_filter_vals"); + + ov::ParameterVector parameters{data, offset_vals, 
filter_vals}; + std::shared_ptr deformable_conv; + if (with_modulation) { + auto modulation_scalars = std::make_shared(model_type, inputDynamicShapes[3]); + modulation_scalars->set_friendly_name("c_modulation_scalars"); + + deformable_conv = std::make_shared(data, offset_vals, filter_vals, modulation_scalars, stride, pad_begin, + pad_end, dilation, padType, groups, deformable_groups, + with_bilinear_interpolation_pad); + parameters.push_back(modulation_scalars); + } else { + deformable_conv = std::make_shared(data, offset_vals, filter_vals, stride, pad_begin, pad_end, dilation, + padType, groups, deformable_groups, with_bilinear_interpolation_pad); + } + + auto result = std::make_shared(deformable_conv); + function = std::make_shared(result, parameters, "deformable_convolution"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/shape_of.cpp b/src/tests/functional/shared_test_classes/src/single_op/shape_of.cpp new file mode 100644 index 00000000000000..2d0cd3896aedc9 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/shape_of.cpp @@ -0,0 +1,46 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/shape_of.hpp" + +namespace ov { +namespace test { +std::string ShapeOfLayerTest::getTestCaseName(testing::TestParamInfo obj) { + std::vector input_shapes; + ov::element::Type model_type, out_type; + std::string target_device; + std::tie(model_type, out_type, input_shapes, target_device) = obj.param; + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < input_shapes.size(); i++) { + result << ov::test::utils::partialShape2str({input_shapes[i].first}) + << (i < input_shapes.size() - 1lu ? 
"_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < input_shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < input_shapes.size(); j++) { + result << ov::test::utils::vec2str(input_shapes[j].second[i]) << (j < input_shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "modelType=" << model_type.to_string() << "_"; + result << "outType=" << out_type.to_string() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void ShapeOfLayerTest::SetUp() { + std::vector input_shapes; + ov::element::Type model_type; + std::tie(model_type, outType, input_shapes, targetDevice) = this->GetParam(); + + init_input_shapes(input_shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + auto shape_of = std::make_shared(param, outType); + function = std::make_shared(shape_of->outputs(), ov::ParameterVector{param}, "ShapeOf"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/shuffle_channels.cpp b/src/tests/functional/shared_test_classes/src/single_op/shuffle_channels.cpp new file mode 100644 index 00000000000000..f67cab2107092f --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/shuffle_channels.cpp @@ -0,0 +1,55 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/shuffle_channels.hpp" + +namespace ov { +namespace test { +std::string ShuffleChannelsLayerTest::getTestCaseName(const testing::TestParamInfo& obj) { + shuffleChannelsSpecificParams test_params; + ov::element::Type model_type; + std::vector input_shapes; + std::string target_device; + std::tie(test_params, model_type, input_shapes, target_device) = obj.param; + int axis, group; + std::tie(axis, group) = test_params; + + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < input_shapes.size(); i++) { + result << 
ov::test::utils::partialShape2str({input_shapes[i].first}) + << (i < input_shapes.size() - 1lu ? "_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < input_shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < input_shapes.size(); j++) { + result << ov::test::utils::vec2str(input_shapes[j].second[i]) << (j < input_shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "Axis=" << std::to_string(axis) << "_"; + result << "Group=" << std::to_string(group) << "_"; + result << "modelType=" << model_type.to_string() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void ShuffleChannelsLayerTest::SetUp() { + shuffleChannelsSpecificParams test_params; + ov::element::Type model_type; + std::vector input_shapes; + std::string target_device; + std::tie(test_params, model_type, input_shapes, targetDevice) = this->GetParam(); + int axis, group; + std::tie(axis, group) = test_params; + + init_input_shapes(input_shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + auto shuffle_channels = std::make_shared(param, axis, group); + function = std::make_shared(shuffle_channels->outputs(), ov::ParameterVector{param}, "ShuffleChannels"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/slice.cpp b/src/tests/functional/shared_test_classes/src/single_op/slice.cpp new file mode 100644 index 00000000000000..9257ffe812399a --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/slice.cpp @@ -0,0 +1,64 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/slice.hpp" + +namespace ov { +namespace test { +std::string Slice8LayerTest::getTestCaseName(const testing::TestParamInfo &obj) { + Slice8SpecificParams params; + ov::element::Type model_type; + std::string target_device; + std::tie(params, model_type, target_device) = 
obj.param; + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < params.shapes.size(); i++) { + result << ov::test::utils::partialShape2str({params.shapes[i].first}) + << (i < params.shapes.size() - 1lu ? "_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < params.shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < params.shapes.size(); j++) { + result << ov::test::utils::vec2str(params.shapes[j].second[i]) << (j < params.shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "start=" << ov::test::utils::vec2str(params.start) << "_"; + result << "stop=" << ov::test::utils::vec2str(params.stop) << "_"; + result << "step=" << ov::test::utils::vec2str(params.step) << "_"; + result << "axes=" << ov::test::utils::vec2str(params.axes) << "_"; + result << "modelType=" << model_type.to_string() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void Slice8LayerTest::SetUp() { + Slice8SpecificParams test_params; + ov::element::Type model_type; + std::tie(test_params, model_type, targetDevice) = this->GetParam(); + + init_input_shapes(test_params.shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + ov::Shape const_shape = {test_params.start.size()}; + + ASSERT_EQ(shape_size(const_shape), test_params.stop.size()); + ASSERT_EQ(shape_size(const_shape), test_params.step.size()); + + auto begin_node = std::make_shared(ov::element::i64, const_shape, test_params.start.data()); + auto end_node = std::make_shared(ov::element::i64, const_shape, test_params.stop.data()); + auto stride_node = std::make_shared(ov::element::i64, const_shape, test_params.step.data()); + std::shared_ptr slice; + if (!test_params.axes.empty()) { + ASSERT_EQ(shape_size(const_shape), test_params.axes.size()); + auto axesNode = std::make_shared(ov::element::i64, const_shape, test_params.axes.data()); + slice = std::make_shared(param, begin_node, end_node, stride_node, 
axesNode); + } else { + slice = std::make_shared(param, begin_node, end_node, stride_node); + } + function = std::make_shared(slice->outputs(), ov::ParameterVector{param}, "Slice-8"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/space_to_batch.cpp b/src/tests/functional/shared_test_classes/src/single_op/space_to_batch.cpp new file mode 100644 index 00000000000000..16c0c5871d6c43 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/space_to_batch.cpp @@ -0,0 +1,59 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/space_to_batch.hpp" + +namespace ov { +namespace test { +std::string SpaceToBatchLayerTest::getTestCaseName(const testing::TestParamInfo &obj) { + std::vector input_shapes; + std::vector block_shapes, pads_begin, pads_end; + ov::element::Type model_type; + std::string target_device; + std::tie(block_shapes, pads_begin, pads_end, input_shapes, model_type, target_device) = obj.param; + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < input_shapes.size(); i++) { + result << ov::test::utils::partialShape2str({input_shapes[i].first}) + << (i < input_shapes.size() - 1lu ? "_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < input_shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < input_shapes.size(); j++) { + result << ov::test::utils::vec2str(input_shapes[j].second[i]) << (j < input_shapes.size() - 1lu ? 
"_" : ""); + } + result << "}_"; + } + result << "modelType=" << model_type.to_string() << "_"; + result << "BS=" << ov::test::utils::vec2str(block_shapes) << "_"; + result << "PB=" << ov::test::utils::vec2str(pads_begin) << "_"; + result << "PE=" << ov::test::utils::vec2str(pads_end) << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void SpaceToBatchLayerTest::SetUp() { + std::vector input_shapes; + std::vector block_shapes, pads_begin, pads_end; + ov::element::Type model_type; + std::tie(block_shapes, pads_begin, pads_end, input_shapes, model_type, targetDevice) = this->GetParam(); + + init_input_shapes(input_shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + ov::Shape const_shape = {param->get_partial_shape().size()}; + + ASSERT_EQ(shape_size(const_shape), block_shapes.size()); + ASSERT_EQ(shape_size(const_shape), pads_begin.size()); + ASSERT_EQ(shape_size(const_shape), pads_end.size()); + + auto block_shapes_node = std::make_shared(ov::element::i64, const_shape, block_shapes.data()); + auto pads_begin_node = std::make_shared(ov::element::i64, const_shape, pads_begin.data()); + auto pads_end_node = std::make_shared(ov::element::i64, const_shape, pads_end.data()); + auto s2b = std::make_shared(param, block_shapes_node, pads_begin_node, pads_end_node); + function = std::make_shared(s2b->outputs(), ov::ParameterVector{param}, "SpaceToBatch"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/space_to_depth.cpp b/src/tests/functional/shared_test_classes/src/single_op/space_to_depth.cpp new file mode 100644 index 00000000000000..328ae13f99b361 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/space_to_depth.cpp @@ -0,0 +1,67 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/space_to_depth.hpp" + +namespace ov { +namespace test { + +using 
ov::op::v0::SpaceToDepth; + +static inline std::string SpaceToDepthModeToString(const SpaceToDepth::SpaceToDepthMode& mode) { + static std::map names = { + {SpaceToDepth::SpaceToDepthMode::BLOCKS_FIRST, "BLOCKS_FIRST"}, + {SpaceToDepth::SpaceToDepthMode::DEPTH_FIRST, "DEPTH_FIRST"}, + }; + + auto i = names.find(mode); + if (i != names.end()) + return i->second; + else + throw std::runtime_error("Unsupported SpaceToDepthMode"); +} + +std::string SpaceToDepthLayerTest::getTestCaseName(const testing::TestParamInfo &obj) { + std::vector input_shapes; + SpaceToDepth::SpaceToDepthMode mode; + std::size_t block_size; + ov::element::Type model_type; + std::string target_device; + std::tie(input_shapes, model_type, mode, block_size, target_device) = obj.param; + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < input_shapes.size(); i++) { + result << ov::test::utils::partialShape2str({input_shapes[i].first}) + << (i < input_shapes.size() - 1lu ? "_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < input_shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < input_shapes.size(); j++) { + result << ov::test::utils::vec2str(input_shapes[j].second[i]) << (j < input_shapes.size() - 1lu ? 
"_" : ""); + } + result << "}_"; + } + result << "modelType=" << model_type.to_string() << "_"; + result << "M=" << SpaceToDepthModeToString(mode) << "_"; + result << "BS=" << block_size << "_"; + result << "targetDevice=" << target_device << "_"; + return result.str(); +} + +void SpaceToDepthLayerTest::SetUp() { + std::vector input_shapes; + SpaceToDepth::SpaceToDepthMode mode; + std::size_t block_size; + ov::element::Type model_type; + std::tie(input_shapes, model_type, mode, block_size, targetDevice) = this->GetParam(); + + init_input_shapes(input_shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + auto s2d = std::make_shared(param, mode, block_size); + function = std::make_shared(s2d->outputs(), ov::ParameterVector{param}, "SpaceToDepth"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/split.cpp b/src/tests/functional/shared_test_classes/src/single_op/split.cpp new file mode 100644 index 00000000000000..fa4a3b83ab0f40 --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/split.cpp @@ -0,0 +1,62 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/split.hpp" + +namespace ov { +namespace test { +std::string SplitLayerTest::getTestCaseName(const testing::TestParamInfo& obj) { + size_t num_splits; + int64_t axis; + ov::element::Type model_type; + std::vector out_indices; + std::vector input_shapes; + std::string target_device; + std::tie(num_splits, axis, model_type, input_shapes, out_indices, target_device) = obj.param; + std::ostringstream result; + result << "IS=("; + for (size_t i = 0lu; i < input_shapes.size(); i++) { + result << ov::test::utils::partialShape2str({input_shapes[i].first}) + << (i < input_shapes.size() - 1lu ? 
"_" : ""); + } + result << ")_TS="; + for (size_t i = 0lu; i < input_shapes.front().second.size(); i++) { + result << "{"; + for (size_t j = 0lu; j < input_shapes.size(); j++) { + result << ov::test::utils::vec2str(input_shapes[j].second[i]) << (j < input_shapes.size() - 1lu ? "_" : ""); + } + result << "}_"; + } + result << "numSplits=" << num_splits << "_"; + result << "axis=" << axis << "_"; + if (!out_indices.empty()) { + result << "outIndices" << ov::test::utils::vec2str(out_indices) << "_"; + } + result << "IS"; + result << "modelType=" << model_type.to_string() << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void SplitLayerTest::SetUp() { + size_t num_splits; + int64_t axis; + ov::element::Type model_type; + std::vector out_indices; + std::vector input_shapes; + std::tie(num_splits, axis, model_type, input_shapes, out_indices, targetDevice) = this->GetParam(); + if (out_indices.empty()) { + for (int i = 0; i < num_splits; ++i) + out_indices.push_back(i); + } + init_input_shapes(input_shapes); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + auto split_axis_op = + std::make_shared(ov::element::Type_t::i64, ov::Shape{}, std::vector{axis}); + auto splitNode = std::make_shared(param, split_axis_op, num_splits); + function = std::make_shared(splitNode->outputs(), ov::ParameterVector{param}, "Split"); +} +} // namespace test +} // namespace ov diff --git a/src/tests/functional/shared_test_classes/src/single_op/squeeze_unsqueeze.cpp b/src/tests/functional/shared_test_classes/src/single_op/squeeze_unsqueeze.cpp new file mode 100644 index 00000000000000..ec87fbe6e5eb5c --- /dev/null +++ b/src/tests/functional/shared_test_classes/src/single_op/squeeze_unsqueeze.cpp @@ -0,0 +1,67 @@ +// Copyright (C) 2018-2023 Intel Corporation +// SPDX-License-Identifier: Apache-2.0 +// + +#include "shared_test_classes/single_op/squeeze_unsqueeze.hpp" + +namespace ov { +namespace test { +std::string 
SqueezeUnsqueezeLayerTest::getTestCaseName(const testing::TestParamInfo<squeezeParams>& obj) {
+    ov::element::Type model_type;
+    ShapeAxesTuple shape_item;
+    std::string targetDevice;
+    ov::test::utils::SqueezeOpType op_type;
+    std::tie(shape_item, op_type, model_type, targetDevice) = obj.param;
+
+    std::ostringstream result;
+    const char separator = '_';
+    result << "IS=(";
+    for (size_t i = 0lu; i < shape_item.first.size(); i++) {
+        result << ov::test::utils::partialShape2str({shape_item.first[i].first})
+               << (i < shape_item.first.size() - 1lu ? "_" : "");
+    }
+    result << ")_TS=";
+    for (size_t i = 0lu; i < shape_item.first.front().second.size(); i++) {
+        result << "{";
+        for (size_t j = 0lu; j < shape_item.first.size(); j++) {
+            result << ov::test::utils::vec2str(shape_item.first[j].second[i]) << (j < shape_item.first.size() - 1lu ? "_" : "");
+        }
+        result << "}_";
+    }
+    result << "OpType=" << op_type << separator;
+    result << "Axes=" << (shape_item.second.empty() ? "default" : ov::test::utils::vec2str(shape_item.second)) << separator;
+    result << "modelType=" << model_type.to_string() << separator;
+    result << "trgDev=" << targetDevice;
+    return result.str();
+}
+
+void SqueezeUnsqueezeLayerTest::SetUp() {
+    ov::element::Type model_type;
+    std::vector<InputShape> input_shapes;
+    std::vector<int> axes;
+    ShapeAxesTuple shape_item;
+    ov::test::utils::SqueezeOpType op_type;
+    std::tie(shape_item, op_type, model_type, targetDevice) = GetParam();
+    std::tie(input_shapes, axes) = shape_item;
+
+    init_input_shapes(input_shapes);
+
+    auto param = std::make_shared<ov::op::v0::Parameter>(model_type, inputDynamicShapes.front());
+    std::shared_ptr<ov::Node> op;
+
+    if (axes.empty() && op_type == ov::test::utils::SqueezeOpType::SQUEEZE) {
+        op = std::make_shared<ov::op::v0::Squeeze>(param);
+    } else {
+        auto constant = std::make_shared<ov::op::v0::Constant>(ov::element::i64, ov::Shape{axes.size()}, axes);
+        if (op_type == ov::test::utils::SqueezeOpType::SQUEEZE)
+            op = std::make_shared<ov::op::v0::Squeeze>(param, constant);
+        else
+            op = std::make_shared<ov::op::v0::Unsqueeze>(param, constant);
+    }
+
+    auto name = op_type == ov::test::utils::SqueezeOpType::SQUEEZE ? "Squeeze" : "Unsqueeze";
+
+    function = std::make_shared<ov::Model>(op->outputs(), ov::ParameterVector{param}, name);
+}
+}  // namespace test
+}  // namespace ov
diff --git a/src/tests/functional/shared_test_classes/src/single_op/strided_slice.cpp b/src/tests/functional/shared_test_classes/src/single_op/strided_slice.cpp
new file mode 100644
index 00000000000000..227bc14e779acc
--- /dev/null
+++ b/src/tests/functional/shared_test_classes/src/single_op/strided_slice.cpp
@@ -0,0 +1,70 @@
+// Copyright (C) 2018-2023 Intel Corporation
+// SPDX-License-Identifier: Apache-2.0
+//
+
+#include "shared_test_classes/single_op/strided_slice.hpp"
+
+namespace ov {
+namespace test {
+std::string StridedSliceLayerTest::getTestCaseName(const testing::TestParamInfo<StridedSliceParams> &obj) {
+    StridedSliceSpecificParams params;
+    ov::element::Type model_type;
+    std::string target_device;
+    std::tie(params, model_type, target_device) = obj.param;
+    std::ostringstream result;
+    result << "IS=(";
+    for (size_t i = 0lu; i < params.input_shape.size(); i++) {
+        result << ov::test::utils::partialShape2str({params.input_shape[i].first})
+               << (i < params.input_shape.size() - 1lu ? "_" : "");
+    }
+    result << ")_TS=";
+    for (size_t i = 0lu; i < params.input_shape.front().second.size(); i++) {
+        result << "{";
+        for (size_t j = 0lu; j < params.input_shape.size(); j++) {
+            result << ov::test::utils::vec2str(params.input_shape[j].second[i]) << (j < params.input_shape.size() - 1lu ?
"_" : ""); + } + result << "}_"; + } + result << "modelType=" << model_type.to_string() << "_"; + result << "begin=" << ov::test::utils::vec2str(params.begin) << "_"; + result << "end=" << ov::test::utils::vec2str(params.end) << "_"; + result << "stride=" << ov::test::utils::vec2str(params.strides) << "_"; + result << "begin_m=" << ov::test::utils::vec2str(params.begin_mask) << "_"; + result << "end_m=" << ov::test::utils::vec2str(params.end_mask) << "_"; + result << "new_axis_m=" << (params.new_axis_mask.empty() ? "def" : ov::test::utils::vec2str(params.new_axis_mask)) << "_"; + result << "shrink_m=" << (params.shrink_axis_mask.empty() ? "def" : ov::test::utils::vec2str(params.shrink_axis_mask)) << "_"; + result << "ellipsis_m=" << (params.ellipsis_axis_mask.empty() ? "def" : ov::test::utils::vec2str(params.ellipsis_axis_mask)) << "_"; + result << "trgDev=" << target_device; + return result.str(); +} + +void StridedSliceLayerTest::SetUp() { + StridedSliceSpecificParams ssParams; + ov::element::Type model_type; + std::tie(ssParams, model_type, targetDevice) = this->GetParam(); + + init_input_shapes(ssParams.input_shape); + + ASSERT_EQ(ssParams.begin.size(), ssParams.end.size()); + ASSERT_EQ(ssParams.begin.size(), ssParams.strides.size()); + + auto param = std::make_shared(model_type, inputDynamicShapes.front()); + ov::Shape const_shape = {ssParams.begin.size()}; + auto begin_node = std::make_shared(ov::element::i64, const_shape, ssParams.begin.data()); + auto end_node = std::make_shared(ov::element::i64, const_shape, ssParams.end.data()); + auto stride_node = std::make_shared(ov::element::i64, const_shape, ssParams.strides.data()); + auto stridedSlice = std::make_shared(param, + begin_node, + end_node, + stride_node, + ssParams.begin_mask, + ssParams.end_mask, + ssParams.new_axis_mask, + ssParams.shrink_axis_mask, + ssParams.ellipsis_axis_mask); + + auto result = std::make_shared(stridedSlice); + function = std::make_shared(ov::ResultVector{result}, 
ov::ParameterVector{param}, "StridedSlice"); +} +} // namespace test +} // namespace ov diff --git a/tests/layer_tests/pytorch_tests/test_tuple_construct.py b/tests/layer_tests/pytorch_tests/test_tuple_construct.py index a8bd03731c644c..b4f48354dcfdb6 100644 --- a/tests/layer_tests/pytorch_tests/test_tuple_construct.py +++ b/tests/layer_tests/pytorch_tests/test_tuple_construct.py @@ -79,21 +79,21 @@ def forward(self, x): def prepare_input(self, x): return x, x + 2, None, x.reshape(-1), (x * 10).to(torch.int32) - ref_net = None return prim_tuple_construct_tuple_unpack(), ref_net, ["prim::TupleConstruct", "prim::TupleUnpack"] @pytest.mark.nightly def test_tuple_construct_unpack(self, ie_device, precision, ir_version): - self._test(*self.create_model(), ie_device, precision, ir_version, freeze_model=False) + self._test(*self.create_model(), ie_device, + precision, ir_version, freeze_model=False) class TestTupleUnpackParameterSingle(PytorchLayerTest): def _prepare_input(self): def tensor_gen(): return np.random.uniform(0, 50, (1, 2, 10)).astype(np.float32) - return ( (tensor_gen(), tensor_gen()), ) + return ((tensor_gen(), tensor_gen()), ) def create_model(self): import torch @@ -105,7 +105,6 @@ def forward(self, x: Tuple[torch.Tensor, torch.Tensor]): x1, x2 = x return x1, x2 - return model(), None, ["prim::TupleUnpack"] @pytest.mark.nightly @@ -118,6 +117,7 @@ def _prepare_input(self): def tensor_gen(): return np.random.uniform(0, 50, (1, 2, 10)).astype(np.float32) # generate tensor with a different shape for easier mismatch detection in case of mixed input order + def tensor_gen_2(): return np.random.uniform(0, 50, (2, 3)).astype(np.float32) return (tensor_gen_2(), (tensor_gen(), tensor_gen()), tensor_gen_2()) @@ -132,7 +132,6 @@ def forward(self, y1, x: Tuple[torch.Tensor, torch.Tensor], y2): x1, x2 = x return x1, x2, y1, y2 - return model(), None, ["prim::TupleUnpack"] @pytest.mark.nightly @@ -144,7 +143,7 @@ class TestTupleUnpackParameterNested(PytorchLayerTest): 
def _prepare_input(self): def tensor_gen(): return np.random.uniform(0, 50, (1, 2, 10)).astype(np.float32) - return ( ((tensor_gen(), tensor_gen()), (tensor_gen(), tensor_gen())), ) + return (((tensor_gen(), tensor_gen()), (tensor_gen(), tensor_gen())), ) def create_model(self): import torch @@ -158,7 +157,6 @@ def forward(self, x: Tuple[Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor y3, y4 = x2 return y1, y2, y3, y4 - return model(), None, ["prim::TupleUnpack"] @pytest.mark.nightly @@ -170,7 +168,7 @@ class TestTupleUnpackParameterMultiple(PytorchLayerTest): def _prepare_input(self): def tensor_gen(): return np.random.uniform(0, 50, (1, 2, 10)).astype(np.float32) - return ( (tensor_gen(), tensor_gen()), (tensor_gen(), tensor_gen()) ) + return ((tensor_gen(), tensor_gen()), (tensor_gen(), tensor_gen())) def create_model(self): import torch @@ -183,9 +181,31 @@ def forward(self, x: Tuple[torch.Tensor, torch.Tensor], y: Tuple[torch.Tensor, t z3, z4 = y return z1, z2, z3, z4 - return model(), None, ["prim::TupleUnpack"] @pytest.mark.nightly def test(self, ie_device, precision, ir_version): self._test(*self.create_model(), ie_device, precision, ir_version) + + +class TestTupleIndex(PytorchLayerTest): + def _prepare_input(self): + return np.random.uniform(0, 50, (1, 2, 10)).astype(np.float32) + + def create_model(self): + import torch + from typing import Tuple + + class model(torch.nn.Module): + def forward(self, x): + return self.some_func((x,x)) + + def some_func(self, x: Tuple[torch.Tensor, torch.Tensor]): + return x[1] * 2, x[0] * 3 + + return model(), None, "prim::TupleIndex" + + @pytest.mark.nightly + def test(self, ie_device, precision, ir_version): + self._test(*self.create_model(), ie_device, precision, + ir_version, trace_model=False, freeze_model=False) diff --git a/tests/stress_tests/common/ie_pipelines/pipelines.cpp b/tests/stress_tests/common/ie_pipelines/pipelines.cpp index cfefb75aebaa84..6960799a475218 100644 --- 
a/tests/stress_tests/common/ie_pipelines/pipelines.cpp
+++ b/tests/stress_tests/common/ie_pipelines/pipelines.cpp
@@ -55,11 +55,9 @@ create_compiled_model(const std::string &model, const std::string &target_device
     };
 }

-std::function<void()> recreate_compiled_model(std::shared_ptr<InferApiBase> &ie_wrapper, const std::string &model,
+std::function<void()> recreate_compiled_model(std::shared_ptr<InferApiBase> &ie_wrapper,
                                               const std::string &target_device, const int &api_version) {
-    return [&] {
-        ie_wrapper->load_plugin(target_device);
-        ie_wrapper->read_network(model);
+    return [=] {
         ie_wrapper->load_network(target_device);
     };
 }
@@ -77,7 +75,7 @@ create_infer_request(const std::string &model, const std::string &target_device,

 std::function<void()> recreate_infer_request(std::shared_ptr<InferApiBase> &ie_wrapper) {
-    return [&] {
+    return [=] {
         ie_wrapper->create_infer_request();
     };
 }
@@ -97,14 +95,14 @@ infer_request_inference(const std::string &model, const std::string &target_devi

 std::function<void()> reinfer_request_inference(std::shared_ptr<InferApiBase> &ie_wrapper) {
-    return [&] {
+    return [=] {
         ie_wrapper->infer();
     };
 }

 std::function<void()> recreate_and_infer_in_thread(std::shared_ptr<InferApiBase> &ie_wrapper) {
-    return [&] {
-        auto func = [&] {
+    return [=] {
+        auto func = [=] {
             ie_wrapper->create_infer_request();
             ie_wrapper->prepare_input();
             ie_wrapper->infer();
@@ -133,7 +131,6 @@ inference_with_streams(const std::string &model, const std::string &target_devic
         for (int counter = 0; counter < nireq; counter++) {
             ie_api_wrapper->create_infer_request();
             ie_api_wrapper->prepare_input();
-            ie_api_wrapper->infer();
         }
     };
diff --git a/tests/stress_tests/common/ie_pipelines/pipelines.h b/tests/stress_tests/common/ie_pipelines/pipelines.h
index 1ecd40c07b7804..8b5bf905e12c03 100644
--- a/tests/stress_tests/common/ie_pipelines/pipelines.h
+++ b/tests/stress_tests/common/ie_pipelines/pipelines.h
@@ -29,7 +29,7 @@ inference_with_streams(const std::string &model, const std::string &target_devic
                        const int &api_version);

 std::function<void()>
-recreate_compiled_model(std::shared_ptr<InferApiBase> &ie, const std::string &model, const std::string &target_device,
+recreate_compiled_model(std::shared_ptr<InferApiBase> &ie_wrapper, const std::string &target_device,
                         const int &api_version);

 std::function<void()> recreate_infer_request(std::shared_ptr<InferApiBase> &ie_wrapper);
diff --git a/tests/stress_tests/memleaks_tests/tests.cpp b/tests/stress_tests/memleaks_tests/tests.cpp
index e24c2e7a7a251b..71f06bb08fd706 100644
--- a/tests/stress_tests/memleaks_tests/tests.cpp
+++ b/tests/stress_tests/memleaks_tests/tests.cpp
@@ -95,14 +95,15 @@ TEST_P(MemLeaksTestSuiteNoDevice, set_input_params) {
     test_runner(test_params.numthreads, test);
 }

-TEST_P(MemLeaksTestSuite, recreate_exenetwork) {
+TEST_P(MemLeaksTestSuite, recreate_compiled_model) {
     auto test_params = GetParam();
     std::vector<std::function<void()>> pipeline;
-    auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);

     pipeline.reserve(test_params.models.size());
     for (int i = 0; i < test_params.models.size(); i++) {
-        pipeline.push_back(recreate_compiled_model(ie_wrapper, test_params.models[i]["full_path"], test_params.device,
+        auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
+        ie_wrapper->read_network(test_params.models[i]["full_path"]);
+        pipeline.push_back(recreate_compiled_model(ie_wrapper, test_params.device,
                                                    test_params.api_version));
     }
     auto test = [&] {
@@ -117,11 +118,10 @@ TEST_P(MemLeaksTestSuite, recreate_exenetwork) {
 TEST_P(MemLeaksTestSuite, recreate_infer_request) {
     auto test_params = GetParam();
     std::vector<std::function<void()>> pipeline;
-    auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
-
     size_t n_models = test_params.models.size();

     for (int i = 0; i < n_models; i++) {
+        auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
         ie_wrapper->read_network(test_params.models[i]["full_path"]);
         ie_wrapper->load_network(test_params.device);
         pipeline.push_back(recreate_infer_request(ie_wrapper));
@@ -138,10 +138,10 @@ TEST_P(MemLeaksTestSuite, recreate_infer_request) {
 TEST_P(MemLeaksTestSuite, reinfer_request_inference) {
     auto test_params = GetParam();
     std::vector<std::function<void()>> pipeline;
-    auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
     size_t n_models = test_params.models.size();

     for (int i = 0; i < n_models; i++) {
+        auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
         ie_wrapper->read_network(test_params.models[i]["full_path"]);
         ie_wrapper->load_network(test_params.device);
         ie_wrapper->create_infer_request();
@@ -196,10 +196,10 @@ TEST_P(MemLeaksTestSuite, inference_with_streams) {
 TEST_P(MemLeaksTestSuite, recreate_and_infer_in_thread) {
     auto test_params = GetParam();
     std::vector<std::function<void()>> pipeline;
-    auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
     size_t n_models = test_params.models.size();

     for (int i = 0; i < n_models; i++) {
+        auto ie_wrapper = create_infer_api_wrapper(test_params.api_version);
         ie_wrapper->read_network(test_params.models[i]["full_path"]);
         ie_wrapper->load_network(test_params.device);
         pipeline.push_back(recreate_and_infer_in_thread(ie_wrapper));
diff --git a/tests/stress_tests/scripts/get_testdata.py b/tests/stress_tests/scripts/get_testdata.py
index 66ac008660a462..df3c84e017bd80 100755
--- a/tests/stress_tests/scripts/get_testdata.py
+++ b/tests/stress_tests/scripts/get_testdata.py
@@ -138,7 +138,7 @@ def main():
     # clone Open Model Zoo into temporary path
     if os.path.exists(str(omz_path)):
         shutil.rmtree(str(omz_path))
-    cmd = 'git clone --single-branch --branch develop' \
+    cmd = 'git clone --single-branch --branch master' \
           ' https://github.com/openvinotoolkit/open_model_zoo {omz_path}'.format(omz_path=omz_path)
     run_in_subprocess(cmd)
diff --git a/tools/pot/README_dev.md b/tools/pot/README_dev.md
index d524ec5bb59d89..f16baf859d3a41 100644
--- a/tools/pot/README_dev.md
+++ b/tools/pot/README_dev.md
@@ -20,7 +20,7 @@ Post-Training Optimization Tool includes standalone command-line tool and Python

 ### System requirements
 - Ubuntu 18.04 or later (64-bit)
-- Python 3.7 or
later
+- Python 3.8 or later
 - OpenVINO

 ### Installation (Temporary)