Merge branch 'master' into e2e_oss
rnugmanx authored Jan 24, 2024
2 parents 2655fa1 + 8321a41 commit ffe997d
Showing 67 changed files with 392 additions and 5,757 deletions.
@@ -40,16 +40,16 @@ The table below demonstrates support of key features by OpenVINO device plugins.
========================================================================================= ============================ ========== ===========
Capability CPU GPU NPU
========================================================================================= ============================ ========== ===========
:doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>` Yes Yes No
:doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>` Yes Yes Partial
:doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>` No Yes No
:doc:`Multi-stream execution <openvino_docs_deployment_optimization_guide_tput>` Yes (Intel® x86-64 only) Yes No
:doc:`Models caching <openvino_docs_OV_UG_Model_caching_overview>` Yes Partial Yes
:doc:`Dynamic shapes <openvino_docs_OV_UG_DynamicShapes>` Yes Partial No
:doc:`Import/Export <openvino_ecosystem>` Yes No Yes
:doc:`Preprocessing acceleration <openvino_docs_OV_UG_Preprocessing_Overview>` Yes Yes No
:doc:`Stateful models <openvino_docs_OV_UG_model_state_intro>` Yes No Yes
:doc:`Extensibility <openvino_docs_Extensibility_UG_Intro>` Yes Yes No
:doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>` Yes Yes Partial
:doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>` Yes Yes Yes
:doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>` No Yes No
:doc:`Multi-stream execution <openvino_docs_deployment_optimization_guide_tput>` Yes (Intel® x86-64 only) Yes Yes
:doc:`Models caching <openvino_docs_OV_UG_Model_caching_overview>` Yes Partial Yes
:doc:`Dynamic shapes <openvino_docs_OV_UG_DynamicShapes>` Yes Partial No
:doc:`Import/Export <openvino_ecosystem>` Yes No Yes*
:doc:`Preprocessing acceleration <openvino_docs_OV_UG_Preprocessing_Overview>` Yes Yes Partial
:doc:`Stateful models <openvino_docs_OV_UG_model_state_intro>` Yes No No
:doc:`Extensibility <openvino_docs_Extensibility_UG_Intro>` Yes Yes Partial
========================================================================================= ============================ ========== ===========

For more details on plugin-specific feature limitations, see the corresponding plugin pages.
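As a purely illustrative aside (this is not how OpenVINO exposes the information — the runtime reports device capabilities through `ov::Core` properties), the updated support matrix above can be mirrored as a small lookup table for programmatic checks:

```python
# Sketch: the device-feature matrix above as a lookup table.
# Values mirror the updated (new-side) rows of the table; "Partial" and
# starred entries carry caveats described on the plugin pages.
SUPPORT_MATRIX = {
    "Heterogeneous execution":    {"CPU": "Yes", "GPU": "Yes",     "NPU": "Partial"},
    "Multi-device execution":     {"CPU": "Yes", "GPU": "Yes",     "NPU": "Yes"},
    "Automatic batching":         {"CPU": "No",  "GPU": "Yes",     "NPU": "No"},
    "Multi-stream execution":     {"CPU": "Yes", "GPU": "Yes",     "NPU": "Yes"},
    "Models caching":             {"CPU": "Yes", "GPU": "Partial", "NPU": "Yes"},
    "Dynamic shapes":             {"CPU": "Yes", "GPU": "Partial", "NPU": "No"},
    "Import/Export":              {"CPU": "Yes", "GPU": "No",      "NPU": "Yes*"},
    "Preprocessing acceleration": {"CPU": "Yes", "GPU": "Yes",     "NPU": "Partial"},
    "Stateful models":            {"CPU": "Yes", "GPU": "No",      "NPU": "No"},
    "Extensibility":              {"CPU": "Yes", "GPU": "Yes",     "NPU": "Partial"},
}

def supports(feature: str, device: str) -> bool:
    """Treat 'Yes', 'Yes*', and 'Partial' as at-least-partial support."""
    return SUPPORT_MATRIX[feature][device].startswith(("Yes", "Partial"))
```

A real application would query `ov::Core` at runtime instead of hard-coding this table, since support varies by release.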
@@ -15,11 +15,7 @@ for more streamlined resource management.

Note that the NPU plugin is currently available only with the Archive distribution of OpenVINO™
and you need to :doc:`install a proper NPU driver <openvino_docs_install_guides_configurations_for_intel_npu>`
to use it successfully. For an in-depth description of the NPU plugin, see:

* `NPU plugin developer documentation <https://github.com/openvinotoolkit/npu_plugin/blob/develop/docs/VPUX_DG/index.md>`__
* `OpenVINO Runtime NPU plugin source files <https://github.com/openvinotoolkit/npu_plugin>`__

to use it successfully.

| **Supported Platforms:**
| Host: Intel® Core™ Ultra (former Meteor Lake)
@@ -29,12 +25,12 @@ to use it successfully. For an in-depth description of the NPU plugin, see:

| **Supported Inference Data Types**
| The NPU plugin supports the following data types as inference precision of internal primitives:
| Floating-point data types: f32, f16O
| Quantized data types: u8 (quantized models may be int8 or mixed fp16-int8)
| Computation precision for the HW is fp16.
| Floating-point data types: F32, F16
| Quantized data types: U8 (quantized models may be INT8 or mixed FP16-INT8)
| Computation precision for the HW is FP16.
|
| For more details on how to get a quantized model, refer to the
:doc:`Model Optimization guide <openvino_docs_model_optimization_guide>`, and
:doc:`Model Optimization guide <openvino_docs_model_optimization_guide>` and
:doc:`NNCF tool quantization guide <basic_quantization_flow>`.
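The U8 quantization mentioned above can be illustrated with a minimal, self-contained sketch of asymmetric affine quantization. This is generic quantization arithmetic for illustration only, not NNCF's actual implementation:

```python
def quantize_u8(values, lo, hi):
    """Affine-quantize floats in [lo, hi] to u8 via a scale and zero point."""
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale)
    # Round to the nearest step, shift by the zero point, clamp to u8 range.
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_u8(q, scale, zero_point):
    """Recover approximate float values from quantized codes."""
    return [(x - zero_point) * scale for x in q]

q, scale, zp = quantize_u8([-1.0, 0.0, 0.5, 1.0], lo=-1.0, hi=1.0)
restored = dequantize_u8(q, scale, zp)
```

The round trip is lossy (quantization error is bounded by half a step, `scale / 2`), which is why calibration of `lo`/`hi` matters for accuracy.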

@@ -86,7 +82,8 @@ For more details about OpenVINO model caching, see the
Supported Features and Properties
#######################################

The NPU device is currently supported by AUTO and MULTI inference modes.
The NPU device is currently supported by AUTO and MULTI inference modes
(HETERO execution is partially supported for certain models).

The NPU support in OpenVINO is still under active development and may
offer a limited set of supported OpenVINO features.
@@ -150,10 +147,18 @@ Limitations
* Running the Alexnet model with NPU may result in a drop in accuracy.
At this moment, the googlenet-v4 model is recommended for classification tasks.

**Import/Export:**

Offline compilation and blob import are supported, but only for development purposes.
Using pre-compiled models (blobs) in production is not recommended.
Blob compatibility across different OpenVINO versions and NPU driver versions is not
guaranteed.
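Because blob compatibility is not guaranteed, any workflow that does import pre-compiled blobs should verify their provenance first. A hedged, generic sketch follows — the header layout and field names here are hypothetical illustrations, not the NPU blob format or an OpenVINO API:

```python
import json
import struct

MAGIC = b"BLOB"  # hypothetical magic tag, for illustration only

def write_blob(path, payload: bytes, ov_version: str, driver_version: str):
    """Prefix a compiled payload with a small provenance header."""
    header = json.dumps({"ov": ov_version, "driver": driver_version}).encode()
    with open(path, "wb") as f:
        f.write(MAGIC + struct.pack("<I", len(header)) + header + payload)

def read_blob(path, expected_ov: str, expected_driver: str) -> bytes:
    """Refuse to import a blob compiled against a different version pair."""
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise RuntimeError("not a recognized blob file")
        (hlen,) = struct.unpack("<I", f.read(4))
        meta = json.loads(f.read(hlen))
        if meta["ov"] != expected_ov or meta["driver"] != expected_driver:
            raise RuntimeError(
                "blob was compiled for a different OpenVINO/driver version")
        return f.read()
```

The point of the sketch is the check, not the format: rejecting a mismatched blob up front is cheaper than debugging undefined behavior after import.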

Additional Resources
#############################

* `NPU plugin developer documentation <https://github.com/openvinotoolkit/npu_plugin/blob/develop/docs/VPUX_DG/index.md>`__
* `OpenVINO Runtime NPU plugin source files <https://github.com/openvinotoolkit/npu_plugin>`__
* `Vision colorization Notebook <notebooks/222-vision-image-colorization-with-output.html>`__
* `Classification Benchmark C++ Demo <https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/classification_benchmark_demo/cpp>`__
* `3D Segmentation Python Demo <https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/3d_segmentation_demo/python>`__
@@ -89,7 +89,6 @@
#include "transformations/op_conversions/convert_roi_align_v3_to_v9.hpp"
#include "transformations/op_conversions/convert_roi_align_v9_to_v3.hpp"
#include "transformations/op_conversions/convert_scatter_elements_update12_downgrade.hpp"
#include "transformations/op_conversions/convert_slice_to_strided_slice.hpp"
#include "transformations/op_conversions/convert_softmax_downgrade.hpp"
#include "transformations/op_conversions/convert_softmax_upgrade.hpp"
#include "transformations/op_conversions/convert_space_to_depth.hpp"
@@ -123,9 +122,7 @@ bool ov::pass::CommonOptimizations::run_on_model(const std::shared_ptr<ov::Model

using namespace ov::pass;
REGISTER_PASS(manager, DisableDecompressionConvertConstantFolding)
// MOCTransformations contain StridedSliceOptimization transformation,
// so we must call SliceToStridedSlice before MOCTransformations call
REGISTER_PASS(manager, SliceToStridedSlice, true)

// Disable low_precision_enabled as all plugins handle low-precision sub-graph manually
// before CommonOptimization pipeline execution
REGISTER_PASS(manager, MOCTransformations, true, false)
@@ -17,6 +17,7 @@
#include "openvino/op/util/sub_graph_base.hpp"
#include "openvino/op/variadic_split.hpp"
#include "openvino/pass/manager.hpp"
#include "transformations/op_conversions/convert_slice_to_strided_slice.hpp"

using namespace ov;

@@ -420,6 +421,12 @@ ov::pass::StridedSliceOptimization::StridedSliceOptimization(bool use_shapes) {

bool ov::pass::StridedSliceOptimization::run_on_model(const std::shared_ptr<ov::Model>& f) {
RUN_ON_FUNCTION_SCOPE(StridedSliceOptimization);

ov::pass::Manager manager(get_pass_config());
using namespace ov::pass;
REGISTER_PASS(manager, SliceToStridedSlice, m_use_shapes)
manager.run_passes(f);

bool rewritten = false;
if (m_use_shapes) {
rewritten = UselessSliceEraser().run_on_model(f);
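The relocated registration above — StridedSliceOptimization building a local `ov::pass::Manager` and running `SliceToStridedSlice` itself, instead of relying on callers to order the passes — follows a general nested-pass-manager pattern. A toy sketch in Python (the IR and pass names are illustrative, not OpenVINO's API):

```python
class PassManager:
    """Minimal pass manager: run registered passes in registration order."""
    def __init__(self):
        self.passes = []
    def register(self, p):
        self.passes.append(p)
    def run(self, model):
        for p in self.passes:
            p(model)

def slice_to_strided_slice(model):
    # Prerequisite lowering: rewrite every "slice" op to "strided_slice".
    model["ops"] = ["strided_slice" if op == "slice" else op
                    for op in model["ops"]]

def strided_slice_optimization(model):
    # Run the prerequisite through a local manager, so callers no longer
    # need to register slice_to_strided_slice themselves.
    local = PassManager()
    local.register(slice_to_strided_slice)
    local.run(model)
    # Then do the pass's own work (toy version: merge adjacent strided slices).
    merged = []
    for op in model["ops"]:
        if not (merged and op == merged[-1] == "strided_slice"):
            merged.append(op)
    model["ops"] = merged

model = {"ops": ["relu", "slice", "strided_slice", "slice"]}
strided_slice_optimization(model)
```

The design benefit mirrors the C++ change: the dependency between the two passes is encapsulated in one place rather than duplicated at every call site.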
@@ -21,7 +21,6 @@
#include "openvino/opsets/opset3.hpp"
#include "openvino/opsets/opset8.hpp"
#include "openvino/pass/constant_folding.hpp"
#include "transformations/op_conversions/convert_slice_to_strided_slice.hpp"
#include "transformations/utils/utils.hpp"

using namespace ov;
@@ -408,7 +407,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_default_axes) {
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -440,7 +438,7 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_axes_const_sorted_full) {
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
auto data = std::make_shared<opset8::Parameter>(element::f32, Shape{2, 4, 3, 5});
@@ -471,7 +469,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_all_const) {
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -525,7 +522,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_sss_params_axes_const_sorted_le
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -567,7 +563,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_sss_params_axes_const_unsorted)
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -610,7 +605,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_sss_params_axes_const_negative_
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -643,7 +637,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_sss_params_dyn_shape_axes_const
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -687,7 +680,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_sss_params_static_shape_axes_co
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -730,7 +722,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_dyn_rank_axes_const_positive) {
auto slice = std::make_shared<opset8::Slice>(data, begin, end, step, axes);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin, end, step});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>();
}
{
@@ -805,7 +796,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_begin_param_shape_of_use_shapes
auto slice = std::make_shared<opset8::Slice>(shape_of_data, begin, end, step, axes);
model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin});

manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>(true);
manager.register_pass<pass::ConstantFolding>();
}
@@ -848,7 +838,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_begin_param_shape_of_use_shapes
model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data, begin});

manager.register_pass<pass::ConstantFolding>();
manager.register_pass<ov::pass::SliceToStridedSlice>(false);
manager.register_pass<ov::pass::StridedSliceOptimization>(false);
manager.register_pass<pass::ConstantFolding>();
}
@@ -951,7 +940,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_slice_all_use_shapes_true) {
auto slice = std::make_shared<opset8::Slice>(relu, begin, end, step);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data});
manager.register_pass<ov::pass::SliceToStridedSlice>(true);
manager.register_pass<ov::pass::StridedSliceOptimization>(true);
manager.register_pass<pass::ConstantFolding>();
}
@@ -991,7 +979,6 @@ TEST_F(TransformationTestsF, SliceToStridedSlice_slice_all_use_shapes_false) {
auto slice = std::make_shared<opset8::Slice>(relu, begin, end, step);

model = std::make_shared<ov::Model>(NodeVector{slice}, ParameterVector{data});
manager.register_pass<ov::pass::SliceToStridedSlice>(false);
manager.register_pass<ov::pass::StridedSliceOptimization>(false);
manager.register_pass<pass::ConstantFolding>();
}
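These tests drop the explicit `SliceToStridedSlice` registration because `StridedSliceOptimization` now performs the lowering itself. The underlying equivalence — a Slice with default axes behaves like a strided slice with all masks zero — can be sketched on plain Python sequences (1-D case, illustrative only, not OpenVINO op semantics in full):

```python
def slice_op(data, begin, end, step):
    """Slice-v8-style semantics on a 1-D sequence with default axes."""
    return data[begin:end:step]

def strided_slice_op(data, begin, end, stride):
    """StridedSlice-style iteration: walk from begin toward end by stride."""
    out, i = [], begin
    while (stride > 0 and i < min(end, len(data))) or (stride < 0 and i > end):
        out.append(data[i])
        i += stride
    return out

data = list(range(10))
forward = strided_slice_op(data, 1, 8, 2)    # matches data[1:8:2]
backward = strided_slice_op(data, 8, 2, -2)  # matches data[8:2:-2]
```

The real conversion also has to normalize negative indices, non-default axes, and the begin/end/shrink masks, which is exactly why it lives in a dedicated pass.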
46 changes: 0 additions & 46 deletions src/core/dev_api/validation_util.hpp
@@ -63,29 +63,6 @@ Tensor make_tensor_of_max_value(const element::Type_t et);
/// \return Tensor with minimum value.
Tensor make_tensor_of_min_value(const element::Type_t et);

/// \brief Apply auto padding to padding_above and padding_below inputs
/// if all needed informations are known.
///
/// \param image_shape The shape of input image.
/// \param filter_shape The shape of filter input.
/// \param filter_strides The strides of applied padding.
/// \param filter_dilations The dilations of applied padding.
/// \param pad_type The type of padding. Auto padding is applied only
/// for SAME_UPPER and SAME_LOWER mode.
/// \param padding_above The beginning of padding shape.
/// \param end The beginning of padding shape.
///
/// \return true if auto padding was applied successfully (all needed informations such as
/// spatial dims are known), false otherwise.
OPENVINO_DEPRECATED("This function is deprecated and will be removed.")
OPENVINO_API bool try_apply_auto_padding(const PartialShape& image_shape,
const Shape& filter_shape,
const Strides& filter_strides,
const Strides& filter_dilations,
const op::PadType pad_type,
CoordinateDiff& padding_above,
CoordinateDiff& padding_below);
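For reference, the SAME_UPPER/SAME_LOWER auto-padding that the removed declaration documents can be computed per spatial dimension as below. This is a generic re-derivation of the standard formula, not the removed OpenVINO implementation:

```python
import math

def auto_pad_1d(in_size, filter_size, stride, dilation, mode="SAME_UPPER"):
    """Return (pad_below, pad_above) so the output size is ceil(in_size / stride)."""
    effective_filter = (filter_size - 1) * dilation + 1
    out_size = math.ceil(in_size / stride)
    total_pad = max(0, (out_size - 1) * stride + effective_filter - in_size)
    smaller = total_pad // 2
    larger = total_pad - smaller
    # SAME_UPPER puts the extra unit of odd padding at the end,
    # SAME_LOWER at the start.
    return (smaller, larger) if mode == "SAME_UPPER" else (larger, smaller)
```

For a 5-wide input with a 3-wide filter at stride 1, this yields symmetric (1, 1) padding; odd totals split asymmetrically according to the mode.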

/// @brief Get the tensors shapes as ov::PartialShape.
///
/// @param tensors Input tensors vector to get their shapes.
@@ -107,29 +84,6 @@ OPENVINO_API std::vector<PartialShape> get_node_input_partial_shapes(const ov::N
/// \return True if rank compatible to any of others, otherwise false.
OPENVINO_API bool is_rank_compatible_any_of(const Rank& r, std::initializer_list<Rank> others);

/// \brief Infers the output batch shape for convolution forward propagation.
///
/// \return Infered output shape.
OPENVINO_DEPRECATED("This function is deprecated and will be removed.")
OPENVINO_API PartialShape infer_convolution_forward(const Node* node,
const PartialShape& data_batch_shape,
const Strides& data_dilation,
const CoordinateDiff& data_padding_below,
const CoordinateDiff& data_padding_above,
const PartialShape& filters_shape,
const Strides& filter_strides,
const Strides& filter_dilation);
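Similarly, the forward convolution shape inference that this removed declaration performed reduces, per spatial dimension, to the standard output-size formula. A generic sketch (not the removed function):

```python
def conv_out_dim(in_size, filter_size, stride, dilation, pad_below, pad_above):
    """out = floor((in + pads - dilated_filter) / stride) + 1"""
    effective_filter = (filter_size - 1) * dilation + 1
    return (in_size + pad_below + pad_above - effective_filter) // stride + 1
```

For example, a 224-wide input through a 7-wide filter at stride 2 with padding 3 on each side gives a 112-wide output, the familiar ResNet stem dimension.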

/// \brief Infers image paddings.
OPENVINO_DEPRECATED("This function is deprecated and will be removed.")
OPENVINO_API void infer_auto_padding(const Shape& image_shape,
const Shape& filter_shape,
const Strides& filter_strides,
const Strides& filter_dilations,
const op::PadType pad_type,
CoordinateDiff& padding_above,
CoordinateDiff& padding_below);

/// \brief Evaluates lower and upper value estimations for the output tensor. Estimation would be represented as partial
/// shape object using Dimension(min, max) for each element.
///