
R5 cannot be built with Visual C++ 2017 #76

Closed
emptyVoid opened this issue Jan 22, 2019 · 12 comments

@emptyVoid

I get a few errors while trying to build R5 with Visual C++ 2017:

dldt\inference-engine\src\gna_plugin\floatmath.cpp(65): error C4146: unary minus operator applied to unsigned type, result still unsigned [dldt\inference-engine\build\src\gna_plugin\GNAPlugin.vcxproj]
dldt\inference-engine\src\gna_plugin\floatmath.cpp(65): error C4146: unary minus operator applied to unsigned type, result still unsigned [dldt\inference-engine\build\src\gna_plugin\GNAPlugin_test_static.vcxproj]

dldt\inference-engine\thirdparty\mkl-dnn\src\common\query.cpp(54): error C4703: potentially uninitialized local pointer variable 'res_pd' used [dldt\inference-engine\build\thirdparty\mkldnn.vcxproj]
dldt\inference-engine\thirdparty\mkl-dnn\src\common\query.cpp(44): error C4703: potentially uninitialized local pointer variable 'res_md' used [dldt\inference-engine\build\thirdparty\mkldnn.vcxproj]

dldt\inference-engine\src\cldnn_engine\cldnn_graph.cpp(1406): error C4146: unary minus operator applied to unsigned type, result still unsigned [dldt\inference-engine\build\src\cldnn_engine\clDNNPlugin.vcxproj]
dldt\inference-engine\src\cldnn_engine\cldnn_graph.cpp(1912): error C4146: unary minus operator applied to unsigned type, result still unsigned [dldt\inference-engine\build\src\cldnn_engine\clDNNPlugin.vcxproj]
dldt\inference-engine\src\cldnn_engine\cldnn_graph.cpp(2997): error C4146: unary minus operator applied to unsigned type, result still unsigned [dldt\inference-engine\build\src\cldnn_engine\clDNNPlugin.vcxproj]

These are warnings elevated to errors by the /sdl compiler switch.

I tried setting compiler flags to disable the SDL checks:

$Env:CFLAGS="/sdl-"
$Env:CXXFLAGS="/sdl-"

to no avail, since the build system ignores the CMAKE_CXX_FLAGS value.
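
For reference, here are minimal sketches of the two warning patterns and the usual source-level fixes that keep /sdl enabled. The code is illustrative only, not the actual dldt sources:

#include <cstdint>

// The C4146 pattern: unary minus applied to an unsigned operand.
std::uint32_t wrap_negate(std::uint32_t x) {
    // return -x;           // C4146; elevated to an error under /sdl
    return 0u - x;          // the same two's-complement result, warning-free
}

// The C4703 pattern: a local pointer left unset on some control path.
int deref_or_default(bool ready, int *source) {
    int *res = nullptr;     // without "= nullptr", the dereference below
    if (ready)              // triggers C4703 under /sdl
        res = source;
    return res ? *res : -1;
}

int main() {
    int v = 7;
    (void)wrap_negate(1u);
    (void)deref_or_default(true, &v);
    return 0;
}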

@asuhov
Contributor

asuhov commented Jan 22, 2019

Hello,
Currently we support only the Intel C++ Compiler from Intel Parallel Studio 2018 on Windows.

@emptyVoid
Author

That's sad: gating open-source software behind a compiler paywall.

@emptyVoid
Author

In case someone else needs to build R5 with Visual C++, here's a workaround.

  1. Modify the source code to work around the compiler misparsing the constructor call as a function declaration (see the sketch after this list):
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
index 73975b7..aaa291e 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
@@ -561,7 +561,7 @@ void MKLDNNNode::prepareMemory(const PrimitiveDescInfo *selected_pd, mkldnn::pri
                 Engine::GetWeightsSharing().findOrCreate(string_hash, [&] () {
                     MKLDNNMemoryPtr _ptr = MKLDNNMemoryPtr(new MKLDNNMemory(engine));
                     _ptr->Create(intDescs[i]);
-                    MKLDNNMemory memory(engine);
+                    MKLDNNMemory memory{engine};

                     auto newDesc = MKLDNNMemoryDesc(internalBlob->getTensorDesc());
                     auto newFormat = newDesc.getFormat();
  2. Modify the CMake script to fix a Debug-configuration build failure (the /bigobj switch raises the limit on the number of sections an object file may contain, which large debug objects can exceed):
diff --git a/inference-engine/thirdparty/mkldnn.cmake b/inference-engine/thirdparty/mkldnn.cmake
index 0cf5045..ca0209e 100644
--- a/inference-engine/thirdparty/mkldnn.cmake
+++ b/inference-engine/thirdparty/mkldnn.cmake
@@ -79,6 +79,10 @@ include_directories(
         ${MKLDNN_ROOT}/src/cpu/xbyak
 )

+if(MSVC)
+    set_source_files_properties(${MKLDNN_ROOT}/src/cpu/cpu_memory.cpp PROPERTIES COMPILE_FLAGS "$<$<CONFIG:Debug>:/bigobj>")
+    set_source_files_properties(${MKLDNN_ROOT}/src/cpu/cpu_reorder.cpp PROPERTIES COMPILE_FLAGS "$<$<CONFIG:Debug>:/bigobj>")
+endif()
 if(WIN32)
     add_definitions(-D_WIN)
     add_definitions(-DNOMINMAX)
  3. Build the engine from PowerShell. Setting the _CL_ environment variable works where CFLAGS/CXXFLAGS did not, because cl.exe appends the options in _CL_ to every command line regardless of what the build scripts pass:
$Env:_CL_="/sdl-"
cmake -G "Visual Studio 15 2017 Win64" ..
cmake --build . --config Release
cmake --build . --config Debug

Note: this might not build all of R5; I was only interested in the engine, the plugins, and the CPU extension.
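
Step 1 above works around a declaration ambiguity; the best-known instance is C++'s "most vexing parse". A standalone sketch with simplified stand-in types (not the real MKLDNN classes):

struct Engine {};

struct Memory {
    explicit Memory(Engine) {}
};

int main() {
    // Most vexing parse: this line declares a function named `a` taking a
    // (pointer to a) function returning Engine; no Memory object is created.
    Memory a(Engine());

    // Brace initialization can never be read as a function declaration,
    // so `b` is unambiguously a constructed Memory object.
    Memory b{Engine()};

    return 0;
}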

@nafest

nafest commented Jan 23, 2019

Great! Why not create a PR with these changes?

@emptyVoid
Author

It's just a workaround. The proper fix should either disable the SDL checks natively in the CMake scripts or fix the source code that triggers the elevated warnings.

There's something promising in CMake, though I couldn't find a way to make use of it.

@rominf

rominf commented May 30, 2019

I've just tried to compile OpenVINO R1.1 with Microsoft Visual C++ v14.16, with no luck. It's very sad, because the Intel C++ compiler is a paid product.

@emptyVoid
Author

In case anyone wants to build 2019 R1.1 with Visual Studio 2019:

Here's a patch fixing the build issues (please note that it disables tests and samples, since I'm again only interested in the engine itself):

diff --git a/inference-engine/CMakeLists.txt b/inference-engine/CMakeLists.txt
index 1c3d6ea..2c175e8 100644
--- a/inference-engine/CMakeLists.txt
+++ b/inference-engine/CMakeLists.txt
@@ -128,7 +128,7 @@ message (STATUS "IE_MAIN_SOURCE_DIR .................... " ${IE_MAIN_SOURCE_DIR}
 message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
 message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
 
-include(sdl)
+#include(sdl)
 
 set (CMAKE_POSITION_INDEPENDENT_CODE ON)
 
@@ -139,7 +139,7 @@ include(CheckCXXCompilerFlag)
 include(cpplint)
 
 add_subdirectory(src)
-add_subdirectory(tests)
+#add_subdirectory(tests)
 add_subdirectory(thirdparty)
 set(InferenceEngine_DIR "${CMAKE_BINARY_DIR}")
 
@@ -148,7 +148,7 @@ set (LIB_FOLDER ${IE_MAIN_SOURCE_DIR}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION}/li
 
 # gflags and format_reader targets are kept inside of samples directory and
 # they must be built even if samples build is disabled (required for tests and tools).
-add_subdirectory(samples)
+#add_subdirectory(samples)
 
 file(GLOB_RECURSE SAMPLES_SOURCES samples/*.cpp samples/*.hpp samples/*.h)
 add_cpplint_target(sample_cpplint
diff --git a/inference-engine/src/CMakeLists.txt b/inference-engine/src/CMakeLists.txt
index 1d68d56..55cd679 100644
--- a/inference-engine/src/CMakeLists.txt
+++ b/inference-engine/src/CMakeLists.txt
@@ -37,7 +37,7 @@ set(InferenceEngine_SRC_DIRS ${CMAKE_SOURCE_DIR}/src)
 function(set_target_cpu_flags TARGET_NAME)
 endfunction()
 
-add_subdirectory(extension EXCLUDE_FROM_ALL)
+add_subdirectory(extension)
 add_library(IE::ie_cpu_extension ALIAS ie_cpu_extension)
 
 file(GLOB_RECURSE EXTENSION_SOURCES extension/*.cpp extension/*.hpp extension/*.h)
diff --git a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
index 5740080..4184388 100644
--- a/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
+++ b/inference-engine/src/mkldnn_plugin/mkldnn_node.cpp
@@ -609,7 +609,7 @@ void MKLDNNNode::prepareMemory(const PrimitiveDescInfo *selected_pd, mkldnn::pri
                 Engine::GetWeightsSharing().findOrCreate(string_hash, [&] () {
                     MKLDNNMemoryPtr _ptr = MKLDNNMemoryPtr(new MKLDNNMemory(engine));
                     _ptr->Create(intDescs[i]);
-                    MKLDNNMemory memory(engine);
+                    MKLDNNMemory memory{engine};
 
                     auto newDesc = MKLDNNMemoryDesc(internalBlob->getTensorDesc());
                     auto newFormat = newDesc.getFormat();
diff --git a/inference-engine/thirdparty/clDNN/kernel_selector/core/common/primitive_db.cpp b/inference-engine/thirdparty/clDNN/kernel_selector/core/common/primitive_db.cpp
index 26c8b4f..bc3ab28 100644
--- a/inference-engine/thirdparty/clDNN/kernel_selector/core/common/primitive_db.cpp
+++ b/inference-engine/thirdparty/clDNN/kernel_selector/core/common/primitive_db.cpp
@@ -16,6 +16,7 @@
 #include "primitive_db.h"
 #include <assert.h>
 #include <algorithm>
+#include <stdexcept>
 
 #ifndef NDEBUG
 #include <fstream>
diff --git a/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_bin_conv_kernel.cpp b/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_bin_conv_kernel.cpp
index 716770a..f60b8aa 100644
--- a/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_bin_conv_kernel.cpp
+++ b/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_bin_conv_kernel.cpp
@@ -339,12 +339,14 @@ void jit_uni_bin_conv_fwd_kernel<isa>::width_blk_step(int ur_w, int pad_l, int p
         kmovw(ktail_mask, reg_tmp_32);
     }
 
+    std::vector<int> kw_padding_vector(ur_w);
+
     const auto &p = attr_.post_ops_;
     for (int r = 0; r < repeats; r++) {
         int tail_size = isa == sse42 ? nstl::min(jcp.oc_block / 2, oc_step - r * jcp.oc_block / 2) : oc_step;
         bool is_scalar_store = isa == sse42 ? tail_size < jcp.oc_block / 2 : tail_size < jcp.oc_block;
 
-        int kw_padding[ur_w];
+        auto kw_padding = kw_padding_vector.data();
         if (jcp.exclude_pad) {
             mov(reg_tmp_32, jcp.ic);
             imul(reg_tmp_32,  ptr[param1 + GET_OFF(kh_padding)]);
diff --git a/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_planar_convolution.cpp b/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_planar_convolution.cpp
index 5a8f302..e95df41 100644
--- a/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_planar_convolution.cpp
+++ b/inference-engine/thirdparty/mkl-dnn/src/cpu/jit_uni_planar_convolution.cpp
@@ -60,7 +60,8 @@ void _jit_uni_planar_convolution_fwd_t<isa>::execute_forward() const {
     const auto &jcp = kernel_->jcp;
     const int MB = pd()->MB();
 
-    int od_indexes[jcp.od];
+    std::vector<int> od_indexes_vector(jcp.od);
+    auto od_indexes = od_indexes_vector.data();
 
     int idx = 0;
     for (int i = 0; i < (jcp.dilate_d + 1); i++) {
diff --git a/inference-engine/thirdparty/mkldnn.cmake b/inference-engine/thirdparty/mkldnn.cmake
index 2b27602..bed7717 100644
--- a/inference-engine/thirdparty/mkldnn.cmake
+++ b/inference-engine/thirdparty/mkldnn.cmake
@@ -106,6 +106,10 @@ include_directories(
         ${CMAKE_BINARY_DIR}/include/
 )
 
+if(MSVC)
+    set_source_files_properties(${MKLDNN_ROOT}/src/cpu/cpu_memory.cpp PROPERTIES COMPILE_FLAGS "$<$<CONFIG:Debug>:/bigobj>")
+    set_source_files_properties(${MKLDNN_ROOT}/src/cpu/cpu_reorder.cpp PROPERTIES COMPILE_FLAGS "$<$<CONFIG:Debug>:/bigobj>")
+endif()
 if(WIN32)
     add_definitions(-D_WIN)
     add_definitions(-DNOMINMAX)

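The two mkl-dnn hunks above replace runtime-sized C arrays, which are a GCC/Clang extension that MSVC does not implement, and the primitive_db.cpp hunk adds the <stdexcept> include that other standard libraries evidently pull in transitively. A minimal sketch of the array replacement (simplified, not the real kernel code):

#include <vector>

void width_blk_step_sketch(int ur_w) {
    // int kw_padding[ur_w];  // variable-length array: rejected by MSVC

    std::vector<int> kw_padding_vector(ur_w, 0);  // portable replacement
    int *kw_padding = kw_padding_vector.data();   // raw pointer keeps the
                                                  // surrounding code unchanged
    for (int i = 0; i < ur_w; ++i)
        kw_padding[i] = i;
}

int main() {
    width_blk_step_sketch(8);
    return 0;
}
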
And here's the CMakeSettings.json file I used to build the engine from within VS 2019:

{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Visual Studio 16 2019 Win64",
      "configurationType": "Debug",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "buildRoot": "${projectDir}\\build\\${name}",
      "installRoot": "${projectDir}\\install\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "",
      "ctestCommandArgs": "",
      "variables": [
        {
          "name": "BUILD_TESTING",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_OBJECT_DETECTION_TESTS",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_OPENCV",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SAMPLES",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SAMPLES_CORE",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SEGMENTATION_TESTS",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "PYTHON_EXECUTABLE",
          "value": "C:/Program Files (x86)/Microsoft Visual Studio/Shared/Python37_64/python.exe",
          "type": "FILEPATH"
        }
      ]
    },
    {
      "name": "x64-Release",
      "generator": "Visual Studio 16 2019 Win64",
      "configurationType": "Release",
      "buildRoot": "${projectDir}\\build\\${name}",
      "installRoot": "${projectDir}\\install\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "",
      "ctestCommandArgs": "",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "variables": [
        {
          "name": "BUILD_TESTING",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_OBJECT_DETECTION_TESTS",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_OPENCV",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SAMPLES",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SAMPLES_CORE",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "ENABLE_SEGMENTATION_TESTS",
          "value": "False",
          "type": "BOOL"
        },
        {
          "name": "PYTHON_EXECUTABLE",
          "value": "C:/Program Files (x86)/Microsoft Visual Studio/Shared/Python37_64/python.exe",
          "type": "FILEPATH"
        }
      ]
    }
  ]
}

@TechnikEmpire

TechnikEmpire commented Feb 24, 2020

I cannot compile with OpenMP support under MSVC, not even when linking against the provided Intel OMP libs; MSVC complains that the libs are invalid or corrupt. Deleting the libs and trying to rely on MSVC's own OpenMP implementation fails because of a ton of references to Intel-specific OMP function calls.

@ilya-lavrenov
Contributor

ilya-lavrenov commented Feb 25, 2020

@TechnikEmpire The 2020 branch is compilable with the 2019 and 2017 MSVC compilers. Please provide a compilation log if you get any errors.

@TechnikEmpire

@ilya-lavrenov I've edited my previous comment to be more constructive. I had to switch threading to TBB (and I don't want that).

@yury-gorbachev
Contributor

TBB has been the default threading backend in OpenVINO for a while. In 20.1 we have seen parity between OMP and TBB even in benchmarking scenarios, so we are not really testing whether OMP works that much.

@ilya-lavrenov
Contributor

OpenVINO has not supported MSVC 2017 for a while now.
