Merge remote-tracking branch 'upstream/master' into auto
ilya-lavrenov committed May 18, 2021
2 parents c494012 + 21370c7 commit cb84e15
Showing 90 changed files with 3,073 additions and 345 deletions.
6 changes: 3 additions & 3 deletions cmake/developer_package/download/dependency_solver.cmake
@@ -176,9 +176,9 @@ function(reset_deps_cache)
foreach(var_name IN LISTS ARGN)
unset(${var_name} CACHE)
endforeach()
# foreach(var_name IN LISTS ARGN)
# unset(ENV{${var_name}})
# endforeach()
foreach(var_name IN LISTS ARGN)
unset(ENV{${var_name}})
endforeach()
endif()
endfunction()

31 changes: 20 additions & 11 deletions docs/install_guides/pypi-openvino-dev.md
@@ -1,7 +1,7 @@
# Intel® Distribution of OpenVINO™ Toolkit Developer Package

Copyright © 2018-2021 Intel Corporation
> **LEGAL NOTICE**: Your use of this software and any required dependent software (the
“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/en-us/license/eula-for-intel-software-development-products) for the Software Package, which may also include notices, disclaimers, or
“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or
license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.

## Introduction
@@ -40,11 +40,7 @@ The table below lists the supported operating systems and Python* versions requi
## Install the Developer Package

### Step 1. Install External Software Dependencies

On Windows* OS you are required to install [Microsoft* Visual C++ Redistributable Package (x64)](https://visualstudio.microsoft.com/downloads/#microsoft-visual-c-redistributable-for-visual-studio-2019) to be able to run OpenVINO™ applications.

### Step 2. Set Up Python Virtual Environment
### Step 1. Set Up Python Virtual Environment

To avoid dependency conflicts, use a virtual environment. Skip this
step only if you do want to install all dependencies globally.
@@ -62,7 +58,7 @@ On Windows:
python -m venv openvino_env
```

### Step 3. Activate Virtual Environment
### Step 2. Activate Virtual Environment

On Linux and macOS:
```sh
@@ -73,22 +69,22 @@ On Windows:
openvino_env\Scripts\activate
```

### Step 4. Set Up and Update pip to the Highest Version
### Step 3. Set Up and Update PIP to the Highest Version

Run the command below:
```sh
python -m pip install --upgrade pip
```

### Step 5. Install the Package
### Step 4. Install the Package

Run the command below: <br>

```sh
pip install openvino-dev
```

### Step 6. Verify that the Package is Installed
### Step 5. Verify that the Package is Installed

Run the command below (this may take a few seconds):
```sh
@@ -97,6 +93,19 @@ pot -h

You will see the help message for Post-Training Optimization Tool if installation finished successfully.
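As an additional sanity check, you can confirm the package is visible to the interpreter in the active virtual environment; a minimal sketch using only the tooling already present in a fresh venv (the distribution name is the one installed above):

```python
# Optional sanity check: confirm the openvino-dev distribution is installed
# and print its version.
import pkg_resources  # ships with setuptools, available in a fresh venv

print(pkg_resources.get_distribution("openvino-dev").version)
```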

## Troubleshooting

#### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio"

On Windows* some dependencies may require compilation from source when installing. To resolve this issue, you need to install [Build Tools for Visual Studio* 2019](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) and repeat package installation.

#### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory

To resolve missing external dependency on Ubuntu*, execute the following command:
```sh
sudo apt-get install libpython3.7
```

## Additional Resources

- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
31 changes: 20 additions & 11 deletions docs/install_guides/pypi-openvino-rt.md
@@ -1,7 +1,7 @@
# Intel® Distribution of OpenVINO™ Toolkit Runtime Package

Copyright © 2018-2021 Intel Corporation
> **LEGAL NOTICE**: Your use of this software and any required dependent software (the
“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/en-us/license/eula-for-intel-software-development-products) for the Software Package, which may also include notices, disclaimers, or
“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or
license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.

## Introduction
@@ -37,11 +37,7 @@ The table below lists supported operating systems and Python* versions required
## Install the Runtime Package

### Step 1. Install External Software Dependencies

On Windows* OS you are required to install [Microsoft* Visual C++ Redistributable Package (x64)](https://visualstudio.microsoft.com/downloads/#microsoft-visual-c-redistributable-for-visual-studio-2019) to be able to run OpenVINO™ applications.

### Step 2. Set Up Python Virtual Environment
### Step 1. Set Up Python Virtual Environment

To avoid dependency conflicts, use a virtual environment. Skip this
step only if you do want to install all dependencies globally.
@@ -55,7 +51,7 @@ python -m venv openvino_env
> **NOTE**: On Linux and macOS, you may need to type `python3` instead of
`python`. You may also need to [install pip](https://pip.pypa.io/en/stable/installing/).

### Step 3. Activate Virtual Environment
### Step 2. Activate Virtual Environment

On Linux and macOS:
```sh
@@ -66,22 +62,22 @@ On Windows:
openvino_env\Scripts\activate
```

### Step 4. Set Up and Update pip to the Highest Version
### Step 3. Set Up and Update PIP to the Highest Version

Run the command below:
```sh
python -m pip install --upgrade pip
```

### Step 5. Install the Package
### Step 4. Install the Package

Run the command below: <br>

```sh
pip install openvino
```

### Step 6. Verify that the Package is Installed
### Step 5. Verify that the Package is Installed

Run the command below:
```sh
@@ -90,6 +86,19 @@ python -c "from openvino.inference_engine import IECore"

You will not see any error messages if installation finished successfully.
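If the import succeeds, a minimal follow-up sketch shows how to go one step further and list the devices the runtime detects (exact device names depend on your hardware and installed drivers):

```python
# Create an Inference Engine core object and list the devices it can use.
from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # typically ['CPU']; other devices require drivers
```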

## Troubleshooting

#### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio"

On Windows* some dependencies may require compilation from source when installing. To resolve this issue, you need to install [Build Tools for Visual Studio* 2019](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) and repeat package installation.

#### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory

To resolve missing external dependency on Ubuntu*, execute the following command:
```sh
sudo apt-get install libpython3.7
```

## Additional Resources

- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit).
2 changes: 1 addition & 1 deletion docs/ops/normalization/BatchNormInference_1.md
@@ -58,7 +58,7 @@ For a particular activation, consider a mini-batch \f$\mathcal{B}\f$ of m values

* *epsilon*
* **Description**: *epsilon* is a constant added to the variance for numerical stability.
* **Range of values**: a positive floating-point number
* **Range of values**: a floating-point number greater than or equal to zero
* **Type**: `float`
* **Default value**: none
* **Required**: *yes*
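For reference, in the standard batch-normalization form (using the mini-batch notation above), *epsilon* is added to the mini-batch variance under the square root, which is why a value of zero remains well defined whenever the variance itself is nonzero:

\f[ \hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma^2_{\mathcal{B}} + \epsilon}} \f]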
2 changes: 1 addition & 1 deletion docs/ops/normalization/BatchNormInference_5.md
@@ -58,7 +58,7 @@ For a particular activation, consider a mini-batch \f$\mathcal{B}\f$ of m values

* *epsilon*
* **Description**: *epsilon* is a constant added to the variance for numerical stability.
* **Range of values**: a positive floating-point number
* **Range of values**: a floating-point number greater than or equal to zero
* **Type**: `float`
* **Default value**: none
* **Required**: *yes*
4 changes: 2 additions & 2 deletions inference-engine/cmake/dependencies.cmake
@@ -261,8 +261,8 @@ if (ENABLE_GNA)
set(GNA_HASH "cc954e67525006bf8bd353a6682e38bf208f6d74e973e0fc292850e721f17452")
endif()
if(GNA_LIBRARY_VERSION STREQUAL "GNA2")
set(GNA_VERSION "02.00.00.1191.0")
set(GNA_HASH "a61b4a9133549b0a9f0b46d069f72906ced28bcbbe7d5c361e687645f53a1c8b")
set(GNA_VERSION "02.00.00.1226")
set(GNA_HASH "d5450af15c993e264c25ac4591a7dab44722e10d15fca4f222a1b84429d4e5b6")
endif()

set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
6 changes: 3 additions & 3 deletions inference-engine/cmake/ie_parallel.cmake
@@ -25,9 +25,9 @@ function(set_ie_threading_interface_for TARGET_NAME)
else()
find_dependency(TBB COMPONENTS tbb tbbmalloc)
endif()
set("TBB_FOUND" ${TBB_FOUND} PARENT_SCOPE)
set("TBB_IMPORTED_TARGETS" ${TBB_IMPORTED_TARGETS} PARENT_SCOPE)
set("TBB_VERSION" ${TBB_VERSION} PARENT_SCOPE)
set(TBB_FOUND ${TBB_FOUND} PARENT_SCOPE)
set(TBB_IMPORTED_TARGETS ${TBB_IMPORTED_TARGETS} PARENT_SCOPE)
set(TBB_VERSION ${TBB_VERSION} PARENT_SCOPE)
if (NOT TBB_FOUND)
ext_message(WARNING "TBB was not found by the configured TBB_DIR/TBBROOT path.\
SEQ method will be used.")
2 changes: 2 additions & 0 deletions inference-engine/include/ie_blob.h
@@ -799,6 +799,7 @@ class TBlob : public MemoryBlob {
}
};

#ifdef __clang__
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<float>);
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<double>);
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<int8_t>);
@@ -813,6 +814,7 @@ extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<unsigned
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<unsigned long long>);
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<bool>);
extern template class INFERENCE_ENGINE_API_CLASS(InferenceEngine::TBlob<char>);
#endif // __clang__

/**
* @brief Creates a blob with the given tensor descriptor.
4 changes: 4 additions & 0 deletions inference-engine/src/cldnn_engine/cldnn_engine.cpp
@@ -70,6 +70,7 @@
#include <low_precision/pull_reshape_through_dequantization.hpp>
#include <low_precision/pull_transpose_through_dequantization.hpp>
#include <low_precision/transformer.hpp>
#include <low_precision/convolution_backprop_data.hpp>
#include <low_precision/mat_mul.hpp>
#include <low_precision/strided_slice.hpp>
#include <low_precision/network_helper.hpp>
@@ -381,6 +382,9 @@ InferenceEngine::CNNNetwork clDNNEngine::CloneAndTransformNetwork(const Inferenc
.add<MatMulTransformation, ngraph::opset1::MatMul>(LayerTransformation::Params(params)
.setSupportAsymmetricQuantization(false)
.setSupport3DTensorOnActivations(false))
.add<ConvolutionBackpropDataTransformation, ngraph::opset1::ConvolutionBackpropData>(LayerTransformation::Params(params)
.setSupportAsymmetricQuantization(false)
.setDeconvolutionSpecificChannelsRatio(true))
// INT8 StridedSlice not supported
.remove<StridedSliceTransformation, ngraph::opset1::StridedSlice>());

2 changes: 1 addition & 1 deletion inference-engine/src/gna_plugin/backend/am_intel_dnn.cpp
@@ -1784,7 +1784,7 @@ void GNAPluginNS::backend::AMIntelDNN::InitGNAStruct(intel_nnet_type_t *ptr_nnet
|| (component[i - 1].operation == kDnnConvolutional1dOp)
|| (component[i - 1].operation == kDnnConvolutional2dOp)
|| ((component[i - 1].operation == kDnnMaxPoolOp) &&
(component[i - 2].operation == kDnnConvolutional1dOp))) {
(component[i - 2].operation == kDnnConvolutional1dOp || component[i - 2].operation == kDnnConvolutional2dOp))) {
if (gnaOperation->Operands[PwlOpIdx] == nullptr) {
HelperGna2OperationSetOperand(gnaOperation, gnaUserAllocator, gnaUserFree, PwlOpIdx, createGna2TensorPwl(1, nullptr));
}
@@ -31,7 +31,7 @@ bool RangeLimit2D::isValid(const uint32_t h, const uint32_t w) const {
}

std::string RangeLimit2D::GetErrorOrEmpty(const uint32_t h, const uint32_t w) const {
return hLimit.GetErrorOrEmpty(h) + hLimit.GetErrorOrEmpty(w);
return hLimit.GetErrorOrEmpty(h) + wLimit.GetErrorOrEmpty(w);
}

RangeMultipleLimit::RangeMultipleLimit(RangeLimit rlIn, uint32_t multiplierIn) : RangeLimit(rlIn), multiplier(multiplierIn) {
46 changes: 32 additions & 14 deletions inference-engine/src/gna_plugin/gna_device.cpp
@@ -156,24 +156,42 @@ void GNADeviceHelper::releaseModel(const uint32_t model_id) {
}

bool GNADeviceHelper::enforceLegacyCnnNeeded() const {
auto devVersion = getExecutionTargetDevice();
return isGnaLibVersion2_1 && isUpTo20HwGnaDevice(devVersion);
const auto compileTargetDevice = getTargetDevice(false);
return isGnaLibVersion2_1 && isUpTo20HwGnaDevice(compileTargetDevice);
}

Gna2DeviceVersion GNADeviceHelper::getExecutionTargetDevice() const {
namespace {
const volatile auto Gna2DeviceVersion3_0 = static_cast<Gna2DeviceVersion>(0x30);
if (executionTarget.empty()) {
if (detectedGnaDevVersion == Gna2DeviceVersionSoftwareEmulation)
return isGnaLibVersion2_1 ? Gna2DeviceVersion3_0 : Gna2DeviceVersion2_0;
return detectedGnaDevVersion;
} else if (executionTarget == InferenceEngine::GNAConfigParams::GNA_TARGET_3_0) {
} // namespace

Gna2DeviceVersion GNADeviceHelper::parseDeclaredTarget(std::string target, const bool execTarget) const {
auto parsed = Gna2DeviceVersion2_0;
auto throwUnsupportedGnaTarget = [&](std::string extraSuffix) {
auto key = execTarget ? InferenceEngine::GNAConfigParams::KEY_GNA_EXEC_TARGET : InferenceEngine::GNAConfigParams::KEY_GNA_COMPILE_TARGET;
THROW_GNA_EXCEPTION << "Unsupported " << key << " = \"" << target << "\"" << extraSuffix;
};
if (target == InferenceEngine::GNAConfigParams::GNA_TARGET_3_0) {
if (!isGnaLibVersion2_1)
THROW_GNA_EXCEPTION << "Unsupported GNA execution target " << executionTarget << " when GNA Library version is 2.0.X.Y";
return Gna2DeviceVersion3_0;
} else if (executionTarget == InferenceEngine::GNAConfigParams::GNA_TARGET_2_0) {
return Gna2DeviceVersion2_0;
throwUnsupportedGnaTarget(", when GNA Library version is 2.0.X.Y");
parsed = Gna2DeviceVersion3_0;
} else if (target != InferenceEngine::GNAConfigParams::GNA_TARGET_2_0) {
throwUnsupportedGnaTarget("");
}
THROW_GNA_EXCEPTION << "Unknown execution target: \"" << executionTarget << "\"";
return parsed;
}

Gna2DeviceVersion GNADeviceHelper::getDefaultTarget() const {
if (detectedGnaDevVersion == Gna2DeviceVersionSoftwareEmulation)
return isGnaLibVersion2_1 ? Gna2DeviceVersion3_0 : Gna2DeviceVersion2_0;
return detectedGnaDevVersion;
}

Gna2DeviceVersion GNADeviceHelper::getTargetDevice(const bool execTarget) const {
const auto declared = execTarget ? executionTarget : compileTarget;
if (declared.empty()) {
return execTarget ? getDefaultTarget() : getTargetDevice(true);
}
return parseDeclaredTarget(declared, execTarget);
}

uint32_t GNADeviceHelper::createRequestConfig(const uint32_t model_id) {
@@ -186,7 +204,7 @@ uint32_t GNADeviceHelper::createRequestConfig(const uint32_t model_id) {
// (bit exactly) as on the selected GNA execution target generation.
// See the GNA Plugin's GNA_EXEC_TARGET config option description.
if (swExactMode) {
const auto consistentDevice = getExecutionTargetDevice();
const auto consistentDevice = getTargetDevice(true);
status = Gna2RequestConfigEnableHardwareConsistency(reqConfId, consistentDevice);
checkGna2Status(status, "Gna2RequestConfigEnableHardwareConsistency(" + std::to_string(static_cast<long>(consistentDevice)) + ")");
}
4 changes: 3 additions & 1 deletion inference-engine/src/gna_plugin/gna_device.hpp
@@ -145,7 +145,6 @@ class GNADeviceHelper {
return dev <= Gna2DeviceVersion2_0 && isGnaHw(dev);
}
bool enforceLegacyCnnNeeded() const;
Gna2DeviceVersion getExecutionTargetDevice() const;
static void checkGna2Status(Gna2Status status, const std::string& from);
static void checkGna2Status(Gna2Status status, const Gna2Model& gnaModel);
#endif
Expand Down Expand Up @@ -197,6 +196,9 @@ class GNADeviceHelper {
static const std::map <const std::pair<Gna2OperationType, int32_t>, const std::string > operandTypes;

static void enforceLegacyCnns(Gna2Model& gnaModel);
Gna2DeviceVersion parseDeclaredTarget(std::string target, const bool execTarget) const;
Gna2DeviceVersion getDefaultTarget() const;
Gna2DeviceVersion getTargetDevice(bool execTarget) const;
#endif
void setOMPThreads(uint8_t const n_threads);

7 changes: 1 addition & 6 deletions inference-engine/src/gna_plugin/gna_graph_compiler.cpp
@@ -1027,13 +1027,8 @@ void GNAGraphCompiler::ConcatPrimitive(InferenceEngine::CNNLayerPtr layer) {
auto layerInfo = LayerInfo(concatParent);
// auto layerInfo = LayerInfo(getCreatorLayer(concatLayerInput->insData[it].lock()).lock());
if (layerInfo.isInput()) {
auto & bytesAllocated = inputDesc->bytes_allocated_for_input[((InferenceEngine::CNNLayerPtr)layerInfo)->name];

connectInput(layer, &concatLayerInfo.gna_ptr,
concatLayerInfo.reserved_size, inputLayer.offset, idx, false);

// TODO: currently connectInput api accept only total size, for concat we need extension for allocated, and actual sizes
bytesAllocated = inputLayer.tensorSize;
inputLayer.tensorSize, inputLayer.offset, idx, false);

concatLayerInfo.input_allocated = true;
} else if (layerInfo.isMemory()) {
2 changes: 2 additions & 0 deletions inference-engine/src/gna_plugin/gna_plugin.cpp
@@ -54,6 +54,7 @@
#include <transformations/common_optimizations/pull_transpose_through_fq.hpp>
#include <transformations/common_optimizations/relu_fake_quantize_fusion.hpp>
#include <transformations/common_optimizations/add_fake_quantize_fusion.hpp>
#include <transformations/op_conversions/convert_padded2valid_conv.hpp>

#include "transformations/remove_extra_reshapes.hpp"

@@ -662,6 +663,7 @@ void GNAPlugin::LoadNetwork(CNNNetwork & _network) {
manager.register_pass<ngraph::pass::InitNodeInfo>();
// WA: ConvertPriorBox must be executed before the 1st ConstantFolding pass
manager.register_pass<ngraph::pass::ConvertPriorBox>();
manager.register_pass<ngraph::pass::ConvertPadded2ValidConv>();
manager.register_pass<ngraph::pass::CommonOptimizations>();
manager.register_pass<ngraph::pass::ConvertOpSet3ToOpSet2>();
manager.register_pass<ngraph::pass::ConvertOpSet2ToOpSet1>();
@@ -1189,7 +1189,7 @@ void InsertConcatAligningFilterPass::run() {
getCreatorLayer(outData) = filterWithQuant;
filterWithQuant->outData.push_back(outData);

CNNNetworkInsertLayer(prevLayer, l, filterWithQuant);
CNNNetworkInsertLayer(prevLayer, l, filterWithQuant, invalid_data_idx, input_idx);
}
offset += outputSize;
}
1 change: 0 additions & 1 deletion inference-engine/src/inference_engine/CMakeLists.txt
@@ -201,7 +201,6 @@ if(WIN32)
endif()

target_link_libraries(${TARGET_NAME}_s PRIVATE openvino::itt ${CMAKE_DL_LIBS} ${NGRAPH_LIBRARIES}
inference_engine_snippets
inference_engine_transformations pugixml)

target_compile_definitions(${TARGET_NAME}_s PUBLIC USE_STATIC_IE)