This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Unify all names used to refer to oneDNN library in logs and docs to oneDNN
bartekkuncer committed Nov 2, 2021
1 parent 75e4d1d commit b95a736
Showing 77 changed files with 162 additions and 162 deletions.
4 changes: 2 additions & 2 deletions CMakeLists.txt
@@ -62,9 +62,9 @@ option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects supp
option(USE_LAPACK "Build with lapack support" ON)
option(USE_MKL_LAYERNORM "Use layer normalization from MKL, which is currently slower than internal. No effect unless USE_BLAS=MKL (or mkl)." OFF)
if((NOT APPLE) AND (NOT MSVC) AND (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64") AND (NOT CMAKE_CROSSCOMPILING))
-option(USE_ONEDNN "Build with ONEDNN support" ON)
+option(USE_ONEDNN "Build with oneDNN support" ON)
else()
-option(USE_ONEDNN "Build with ONEDNN support" OFF)
+option(USE_ONEDNN "Build with oneDNN support" OFF)
endif()
cmake_dependent_option(USE_INTGEMM "Build with x86_64 intgemm library for low-precision multiplication" ON "CMAKE_SYSTEM_PROCESSOR STREQUAL x86_64" OFF)
if(NOT MSVC)
4 changes: 2 additions & 2 deletions NEWS.md
@@ -26,7 +26,7 @@ MXNet Change Log
* [CUDA Graphs](#cuda-graphs)
* [CUDA 11 Support](#cuda-11-support)
* [TensorRT](#tensorrt)
-* [OneDNN](#onednn)
+* [oneDNN](#onednn)
* [IntGemm](#intgemm)
* [Subgraph API](#subgraph-api)
* [Extensions](#extensions)
@@ -308,7 +308,7 @@ MXNet Change Log
- Add TRT verbose mode (#19100)
- Backporting TensorRT-Gluon Partition API (and TensorRT 7 support) (#18916)
- Backport TRT test update #19296 (#19298)
-#### OneDNN
+#### oneDNN
- Upgrade to oneDNN v1.6.3 (#19153) (#19161)
- Update oneDNN to official v1.6 release (#18867) (#18867)
- Upgrade to oneDNN v1.6 (#18822)
2 changes: 1 addition & 1 deletion benchmark/opperf/README.md
@@ -40,7 +40,7 @@ Benchmarks are usually done end-to-end for a given Network Architecture. For exa
2. A standard Network Architecture like ResNet-50 is made up of many operators Ex: Convolution2D, Softmax, Dense and more. Consider the following scenarios:
1. We improved the performance of Convolution2D operator, but due to a bug, Softmax performance went down. Overall, we may observe end to end benchmarks are running fine, we may miss out the performance degradation of a single operator which can accumulate and become untraceable.
2. You need to see in a given network, which operator is taking maximum time and plan optimization work. With end to end benchmarks, it is hard to get more fine grained numbers at operator level.
-3. We need to know on different hardware infrastructure (Ex: CPU with ONEDNN, GPU with NVIDIA CUDA and cuDNN) how different operators performs. With these details, we can plan the optimization work at operator level, which could exponentially boost up end to end performance.
+3. We need to know on different hardware infrastructure (Ex: CPU with oneDNN, GPU with NVIDIA CUDA and cuDNN) how different operators performs. With these details, we can plan the optimization work at operator level, which could exponentially boost up end to end performance.
4. You want to have nightly performance tests across all operators in a deep learning framework to catch regressions early.
5. We can integrate this framework with a CI/CD system to run per operator performance tests for PRs. Example: When a PR modifies the kernel of TransposeConv2D, we can run benchmarks of TransposeConv2D operator to verify performance.

8 changes: 4 additions & 4 deletions cd/README.md
@@ -22,18 +22,18 @@

## Introduction

-MXNet aims to support a variety of frontends, e.g. Python, Java, Perl, R, etc. as well as environments (Windows, Linux, Mac, with or without GPU, with or without ONEDNN support, etc.). This package contains a small continuous delivery (CD) framework used to automate the delivery nightly and release builds across our delivery channels.
+MXNet aims to support a variety of frontends, e.g. Python, Java, Perl, R, etc. as well as environments (Windows, Linux, Mac, with or without GPU, with or without oneDNN support, etc.). This package contains a small continuous delivery (CD) framework used to automate the delivery nightly and release builds across our delivery channels.

<!-- TODO: Add links to the actual jobs, once this is live on PROD -->

The CD process is driven by the [CD pipeline job](Jenkinsfile_cd_pipeline), which orchestrates the order in which the artifacts are delivered. For instance, first publish the libmxnet library before publishing the pip package. It does this by triggering the [release job](Jenkinsfile_release_job) with a specific set of parameters for each delivery channel. The release job executes the specific release pipeline for a delivery channel across all MXNet *variants*.

-A variant is a specific environment or features for which MXNet is compiled. For instance CPU, GPU with CUDA v10.1, CUDA v10.2 with ONEDNN support, etc.
+A variant is a specific environment or features for which MXNet is compiled. For instance CPU, GPU with CUDA v10.1, CUDA v10.2 with oneDNN support, etc.

-Currently, below variants are supported. All of these variants except native have ONEDNN backend enabled.
+Currently, below variants are supported. All of these variants except native have oneDNN backend enabled.

* *cpu*: CPU
-* *native*: CPU without ONEDNN
+* *native*: CPU without oneDNN
* *cu101*: CUDA 10.1
* *cu102*: CUDA 10.2
* *cu110*: CUDA 11.0
4 changes: 2 additions & 2 deletions cd/utils/artifact_repository.md
@@ -58,11 +58,11 @@ If not set, derived through the value of sys.platform (https://docs.python.org/3

Manually configured through the --variant argument. The current variants are: cpu, native, cu101, cu102, cu110, cu112.

-As long as the tool is being run from the MXNet code base, the runtime feature detection tool (https://github.com/larroy/mxnet/blob/dd432b7f241c9da2c96bcb877c2dc84e6a1f74d4/docs/api/python/libinfo/libinfo.md) can be used to detect whether the library has been compiled with MKL (library has ONEDNN feature enabled) and/or CUDA support (compiled with CUDA feature enabled).
+As long as the tool is being run from the MXNet code base, the runtime feature detection tool (https://github.com/larroy/mxnet/blob/dd432b7f241c9da2c96bcb877c2dc84e6a1f74d4/docs/api/python/libinfo/libinfo.md) can be used to detect whether the library has been compiled with oneDNN (library has oneDNN feature enabled) and/or CUDA support (compiled with CUDA feature enabled).

If it has been compiled with CUDA support, the output of /usr/local/cuda/bin/nvcc --version can be mined for the exact CUDA version (eg. 8.0, 9.0, etc.).

-By knowing which features are enabled on the binary, and if necessary, which CUDA version is installed on the machine, the value for the variant argument can be calculated. Eg. if CUDA features are enabled, and nvcc reports cuda version 10.2, then the variant would be cu102. If neither ONEDNN nor CUDA features are enabled, the variant would be native.
+By knowing which features are enabled on the binary, and if necessary, which CUDA version is installed on the machine, the value for the variant argument can be calculated. Eg. if CUDA features are enabled, and nvcc reports cuda version 10.2, then the variant would be cu102. If neither oneDNN nor CUDA features are enabled, the variant would be native.

**Dependency Linking**

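The variant-resolution rules described in artifact_repository.md above are simple to express in code. As a minimal, hypothetical sketch (the helper name `resolve_variant` is invented for illustration; the real logic lives in `cd/utils/artifact_repository.py`):

```python
# Illustrative sketch of the documented variant rules -- not the shipped code.
def resolve_variant(features: dict, cuda_version: str = None) -> str:
    """Map detected libmxnet features to an artifact variant string."""
    if features.get('CUDA'):
        # nvcc reporting e.g. version 10.2 yields the 'cu102' variant
        return 'cu{}'.format(cuda_version.replace('.', ''))
    if features.get('ONEDNN'):
        return 'cpu'
    # neither oneDNN nor CUDA enabled -> plain CPU build
    return 'native'

assert resolve_variant({'ONEDNN': False, 'CUDA': False}) == 'native'
assert resolve_variant({'ONEDNN': True, 'CUDA': False}) == 'cpu'
assert resolve_variant({'ONEDNN': True, 'CUDA': True}, '10.2') == 'cu102'
```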
2 changes: 1 addition & 1 deletion cd/utils/artifact_repository.py
@@ -313,7 +313,7 @@ def probe_gpu_variant(mxnet_features: Dict[str, bool]) -> Optional[str]:
if cuda_version:
variant = 'cu{}'.format(cuda_version)
if not mxnet_features['ONEDNN']:
-RuntimeError('Error determining mxnet variant: ONEDNN should be enabled for cuda variants')
+RuntimeError('Error determining mxnet variant: oneDNN should be enabled for cuda variants')
logger.debug('variant is: {}'.format(variant))
return variant

6 changes: 3 additions & 3 deletions cd/utils/test_artifact_repository.py
@@ -161,15 +161,15 @@ def test_get_cuda_version_not_found(self, mock):
@patch('artifact_repository.get_libmxnet_features')
def test_probe_variant_native(self, mock_features):
"""
-Tests 'native' is returned if ONEDNN and CUDA features are OFF
+Tests 'native' is returned if oneDNN and CUDA features are OFF
"""
mock_features.return_value = {'ONEDNN': False, 'CUDA': False}
self.assertEqual(probe_mxnet_variant('libmxnet.so'), 'native')

@patch('artifact_repository.get_libmxnet_features')
def test_probe_variant_cpu(self, mock_features):
"""
-Tests 'cpu' is returned if ONEDNN is ON and CUDA is OFF
+Tests 'cpu' is returned if oneDNN is ON and CUDA is OFF
"""
mock_features.return_value = {'ONEDNN': True, 'CUDA': False}
self.assertEqual(probe_mxnet_variant('libmxnet.so'), 'cpu')
@@ -178,7 +178,7 @@ def test_probe_variant_cpu(self, mock_features):
@patch('artifact_repository.get_cuda_version')
def test_probe_variant_cuda(self, mock_cuda_version, mock_features):
"""
-Tests 'cu102' is returned if ONEDNN is OFF and CUDA is ON and CUDA version is 10.2
+Tests 'cu102' is returned if oneDNN is OFF and CUDA is ON and CUDA version is 10.2
"""
mock_features.return_value = {'ONEDNN': True, 'CUDA': True}
mock_cuda_version.return_value = '102'
4 changes: 2 additions & 2 deletions ci/dev_menu.py
@@ -141,12 +141,12 @@ def provision_virtualenv(venv_path=DEFAULT_PYENV):
"ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh build_ubuntu_gpu",
"ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python3_gpu",
]),
-('[Docker] Python3 GPU+ONEDNN unittests',
+('[Docker] Python3 GPU+oneDNN unittests',
[
"ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh build_ubuntu_gpu_onednn",
"ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python3_gpu",
]),
-('[Docker] Python3 CPU Intel ONEDNN unittests',
+('[Docker] Python3 CPU oneDNN unittests',
[
"ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh build_ubuntu_cpu_onednn",
"ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu",
2 changes: 1 addition & 1 deletion ci/docker/runtime_functions.sh
@@ -1394,7 +1394,7 @@ build_static_libmxnet() {
# Tests CD PyPI packaging in CI
ci_package_pypi() {
set -ex
-# copies onednn header files to 3rdparty/onednn/include/oneapi/dnnl/ as in CD
+# copies oneDNN header files to 3rdparty/onednn/include/oneapi/dnnl/ as in CD
mkdir -p 3rdparty/onednn/include/oneapi/dnnl
cp include/onednn/oneapi/dnnl/dnnl_version.h 3rdparty/onednn/include/oneapi/dnnl/.
cp include/onednn/oneapi/dnnl/dnnl_config.h 3rdparty/onednn/include/oneapi/dnnl/.
30 changes: 15 additions & 15 deletions ci/jenkins/Jenkins_steps.groovy
@@ -174,7 +174,7 @@ def compile_unix_mkl_cpu(lib_name) {
}

def compile_unix_onednn_cpu(lib_name) {
-return ['CPU: ONEDNN': {
+return ['CPU: oneDNN': {
node(NODE_LINUX_CPU) {
ws('workspace/build-onednn-cpu') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -188,7 +188,7 @@ def compile_unix_onednn_mkl_cpu(lib_name) {
}

def compile_unix_onednn_mkl_cpu(lib_name) {
-return ['CPU: ONEDNN_MKL': {
+return ['CPU: oneDNN-MKL': {
node(NODE_LINUX_CPU) {
ws('workspace/build-onednn-cpu') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -202,7 +202,7 @@ def compile_unix_onednn_gpu(lib_name) {
}

def compile_unix_onednn_gpu(lib_name) {
-return ['GPU: ONEDNN': {
+return ['GPU: oneDNN': {
node(NODE_LINUX_CPU) {
ws('workspace/build-onednn-gpu') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -216,7 +216,7 @@ def compile_unix_onednn_nocudnn_gpu(lib_name) {
}

def compile_unix_onednn_nocudnn_gpu(lib_name) {
-return ['GPU: ONEDNN_CUDNNOFF': {
+return ['GPU: oneDNN-CUDNNOFF': {
node(NODE_LINUX_CPU) {
ws('workspace/build-onednn-gpu-nocudnn') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -286,7 +286,7 @@ def compile_centos7_cpu(lib_name) {
}

def compile_centos7_cpu_onednn() {
-return ['CPU: CentOS 7 ONEDNN': {
+return ['CPU: CentOS 7 oneDNN': {
node(NODE_LINUX_CPU) {
ws('workspace/build-centos7-onednn') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -353,7 +353,7 @@ def compile_unix_clang_tidy_cpu() {
}

def compile_unix_clang_6_onednn_cpu() {
-return ['CPU: Clang 6 ONEDNN': {
+return ['CPU: Clang 6 oneDNN': {
node(NODE_LINUX_CPU) {
ws('workspace/build-cpu-onednn-clang6') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -367,7 +367,7 @@ def compile_unix_clang_6_onednn_cpu() {

// TODO(leezu) delete once DUSE_DIST_KVSTORE=ON builds in -WError build
def compile_unix_clang_10_onednn_cpu() {
-return ['CPU: Clang 10 ONEDNN': {
+return ['CPU: Clang 10 oneDNN': {
node(NODE_LINUX_CPU) {
ws('workspace/build-cpu-onednn-clang100') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -531,7 +531,7 @@ def compile_windows_cpu(lib_name) {
}

def compile_windows_cpu_onednn(lib_name) {
-return ['Build CPU ONEDNN windows':{
+return ['Build CPU oneDNN windows':{
node(NODE_WINDOWS_CPU) {
ws('workspace/build-cpu-onednn') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -545,7 +545,7 @@ def compile_windows_cpu_onednn(lib_name) {
}

def compile_windows_cpu_onednn_mkl(lib_name) {
-return ['Build CPU ONEDNN MKL windows':{
+return ['Build CPU oneDNN MKL windows':{
node(NODE_WINDOWS_CPU) {
ws('workspace/build-cpu-onednn-mkl') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -587,7 +587,7 @@ def compile_windows_gpu(lib_name) {
}

def compile_windows_gpu_onednn(lib_name) {
-return ['Build GPU ONEDNN windows':{
+return ['Build GPU oneDNN windows':{
node(NODE_WINDOWS_CPU) {
ws('workspace/build-gpu') {
timeout(time: max_time, unit: 'MINUTES') {
@@ -765,7 +765,7 @@ def test_unix_python3_onnx_cpu(lib_name) {
}

def test_unix_python3_onednn_cpu(lib_name) {
-return ['Python3: ONEDNN-CPU': {
+return ['Python3: oneDNN-CPU': {
node(NODE_LINUX_CPU) {
ws('workspace/ut-python3-onednn-cpu') {
try {
@@ -782,7 +782,7 @@ def test_unix_python3_onednn_cpu(lib_name) {
}

def test_unix_python3_onednn_mkl_cpu(lib_name) {
-return ['Python3: ONEDNN-MKL-CPU': {
+return ['Python3: oneDNN-MKL-CPU': {
node(NODE_LINUX_CPU) {
ws('workspace/ut-python3-onednn-mkl-cpu') {
try {
@@ -799,7 +799,7 @@ def test_unix_python3_onednn_mkl_cpu(lib_name) {
}

def test_unix_python3_onednn_gpu(lib_name) {
-return ['Python3: ONEDNN-GPU': {
+return ['Python3: oneDNN-GPU': {
node(NODE_LINUX_GPU_G4) {
ws('workspace/ut-python3-onednn-gpu') {
try {
@@ -815,7 +815,7 @@ def test_unix_python3_onednn_gpu(lib_name) {
}

def test_unix_python3_onednn_nocudnn_gpu(lib_name) {
-return ['Python3: ONEDNN-GPU-NOCUDNN': {
+return ['Python3: oneDNN-GPU-NOCUDNN': {
node(NODE_LINUX_GPU_G4) {
ws('workspace/ut-python3-onednn-gpu-nocudnn') {
try {
@@ -1009,7 +1009,7 @@ def test_windows_python3_gpu(lib_name) {
}

def test_windows_python3_gpu_onednn(lib_name) {
-return ['Python 3: ONEDNN-GPU Win':{
+return ['Python 3: oneDNN-GPU Win':{
node(NODE_WINDOWS_GPU) {
timeout(time: max_time, unit: 'MINUTES') {
ws('workspace/ut-python-gpu') {
2 changes: 1 addition & 1 deletion config/darwin.cmake
@@ -45,7 +45,7 @@ set(OPENCV_ROOT "" CACHE BOOL "OpenCV install path. Supports autodetection.")

set(USE_OPENMP OFF CACHE BOOL "Build with Openmp support")

-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")

set(USE_LAPACK ON CACHE BOOL "Build with lapack support")

2 changes: 1 addition & 1 deletion config/distribution/darwin_cpu.cmake
@@ -24,7 +24,7 @@ set(USE_BLAS "apple" CACHE STRING "BLAS Vendor")
set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP OFF CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/darwin_cpu_mkl.cmake
@@ -25,7 +25,7 @@ set(BLA_STATIC ON CACHE BOOL "Use static libraries")
set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP OFF CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/darwin_native.cmake
@@ -24,7 +24,7 @@ set(USE_BLAS "apple" CACHE STRING "BLAS Vendor")
set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP OFF CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN OFF CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN OFF CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/linux_cpu.cmake
@@ -23,7 +23,7 @@ set(USE_BLAS "open" CACHE STRING "BLAS Vendor")
set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/linux_cpu_mkl.cmake
@@ -25,7 +25,7 @@ set(BLA_STATIC ON CACHE BOOL "Use static libraries")
set(USE_CUDA OFF CACHE BOOL "Build with CUDA support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/linux_cu100.cmake
@@ -25,7 +25,7 @@ set(USE_CUDNN ON CACHE BOOL "Build with CUDNN support")
set(USE_NCCL ON CACHE BOOL "Build with NCCL support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/linux_cu101.cmake
@@ -27,7 +27,7 @@ set(USE_CUDNN ON CACHE BOOL "Build with CUDNN support")
set(USE_NCCL ON CACHE BOOL "Build with NCCL support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")
2 changes: 1 addition & 1 deletion config/distribution/linux_cu102.cmake
@@ -25,7 +25,7 @@ set(USE_CUDNN ON CACHE BOOL "Build with CUDNN support")
set(USE_NCCL ON CACHE BOOL "Build with NCCL support")
set(USE_OPENCV ON CACHE BOOL "Build with OpenCV support")
set(USE_OPENMP ON CACHE BOOL "Build with Openmp support")
-set(USE_ONEDNN ON CACHE BOOL "Build with ONEDNN support")
+set(USE_ONEDNN ON CACHE BOOL "Build with oneDNN support")
set(USE_LAPACK ON CACHE BOOL "Build with lapack support")
set(USE_TVM_OP OFF CACHE BOOL "Enable use of TVM operator build system.")
set(USE_SSE ON CACHE BOOL "Build with x86 SSE instruction support")