switch to C++17
* Require C++17 in CMake
* Remove nvcc version below 11 from GitHub CI
* Remove CUDA SDK installer below 9.2 from install scripts
* Update supported CUDA versions in README.md
* Eliminate obsolete C++ version checks
* Remove support for nvcc + clang-5
* Update documentation
* Fix warnings
* Merge GitLab CI CUDA job description files
* Add a GitLab CI test with clang-12 as CUDA compiler and cuda11.0
* Add GitHub CI test with clang-12 + nvcc 11.4
* Support passing the CMAKE_CUDA_FLAGS variable through CI
bernhardmgruber committed Nov 19, 2021
1 parent aa35987 commit 4467954
Showing 19 changed files with 98 additions and 181 deletions.
93 changes: 6 additions & 87 deletions .github/workflows/ci.yml

Large diffs are not rendered by default.

3 changes: 1 addition & 2 deletions .gitlab-ci.yml
@@ -28,6 +28,5 @@ variables:

include:
- local: '/script/gitlabci/job_base.yml'
- local: '/script/gitlabci/job_cuda9.2.yml'
- local: '/script/gitlabci/job_cuda11.4.yml'
- local: '/script/gitlabci/job_cuda.yml'
- local: '/script/gitlabci/job_hip4.2.yml'
2 changes: 1 addition & 1 deletion .zenodo.json
@@ -1,6 +1,6 @@
{
"title": "alpaka: Abstraction Library for Parallel Kernel Acceleration",
"description": "The alpaka library is a header-only C++14 abstraction library for accelerator development. Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism.",
"description": "The alpaka library is a header-only C++17 abstraction library for accelerator development. Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism.",
"creators": [
{
"affiliation": "Helmholtz-Zentrum Dresden-Rossendorf, TU Dresden, LogMeIn Inc.",
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -30,7 +30,7 @@ string(REGEX MATCH "([0-9]+)" ALPAKA_VERSION_PATCH ${ALPAKA_VERSION_PATCH_HPP})
set(PACKAGE_VERSION "${ALPAKA_VERSION_MAJOR}.${ALPAKA_VERSION_MINOR}.${ALPAKA_VERSION_PATCH}")

project(alpaka VERSION ${ALPAKA_VERSION_MAJOR}.${ALPAKA_VERSION_MINOR}.${ALPAKA_VERSION_PATCH}
DESCRIPTION "The alpaka library is a header-only C++14 abstraction library for accelerator development."
DESCRIPTION "The alpaka library is a header-only C++17 abstraction library for accelerator development."
HOMEPAGE_URL "https://github.com/alpaka-group/alpaka"
LANGUAGES CXX)

40 changes: 20 additions & 20 deletions README.md
@@ -5,13 +5,13 @@
[![Continuous Integration](https://github.com/alpaka-group/alpaka/workflows/Continuous%20Integration/badge.svg)](https://github.com/alpaka-group/alpaka/actions?query=workflow%3A%22Continuous+Integration%22)
[![Documentation Status](https://readthedocs.org/projects/alpaka/badge/?version=latest)](https://alpaka.readthedocs.io)
[![Doxygen](https://img.shields.io/badge/API-Doxygen-blue.svg)](https://alpaka-group.github.io/alpaka)
[![Language](https://img.shields.io/badge/language-C%2B%2B14-orange.svg)](https://isocpp.org/)
[![Language](https://img.shields.io/badge/language-C%2B%2B17-orange.svg)](https://isocpp.org/)
[![Platforms](https://img.shields.io/badge/platform-linux%20%7C%20windows%20%7C%20mac-lightgrey.svg)](https://github.com/alpaka-group/alpaka)
[![License](https://img.shields.io/badge/license-MPL--2.0-blue.svg)](https://www.mozilla.org/en-US/MPL/2.0/)

![alpaka](docs/logo/alpaka_401x135.png)

The **alpaka** library is a header-only C++14 abstraction library for accelerator development.
The **alpaka** library is a header-only C++17 abstraction library for accelerator development.

Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism.

@@ -68,20 +68,20 @@ Accelerator Back-ends
Supported Compilers
-------------------

This library uses C++14 (or newer when available).

|Accelerator Back-end|gcc 7.5 <br/> (Linux)|gcc 8.5 <br/> (Linux)|gcc 9.4 <br/> (Linux)|gcc 10.3 <br/> (Linux)|gcc 11.1 <br/> (Linux)|clang 5/6/7/8 <br/> (Linux)|clang 9 <br/> (Linux)|clang 10 <br/> (Linux)|clang 11 <br/> (Linux)|clang 12 <br/> (Linux)|Apple LLVM 11.3.1/12.4.0/12.5.1/13.0.0 <br /> (macOS)|MSVC 2019 <br/> (Windows)|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Serial|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|
|OpenMP 2.0+ blocks|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|OpenMP 2.0+ threads|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|OpenMP 5.0 (CPU)|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:white_check_mark:|:white_check_mark:|:x:|:x:|
| std::thread |:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|
| Boost.Fiber |:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|TBB|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|
|CUDA (nvcc)|:white_check_mark: <br/> (CUDA 9.2-11.4) |:white_check_mark: <br/> (CUDA 10.1-11.4) |:white_check_mark: <br/> (CUDA 11.0-11.4)|:x:|:x:|:white_check_mark: <br/> (CUDA 10.1-11.4)|:white_check_mark: <br/> (CUDA 11.0-11.4)|:white_check_mark: <br/> (CUDA 11.1-11.4)|:white_check_mark: <br/> (CUDA 11.4)| - |:x:|:white_check_mark: <br/> (CUDA 10.1,10.2,11.2,11.3, 11.4)|
|CUDA (clang) | - | - | - | - | - | - | :white_check_mark: <br/> (CUDA 9.2-10.1) | :white_check_mark: <br/> (CUDA 9.2-10.1) | :white_check_mark: <br/> (CUDA 10.0-10.2) | - | - | - |
|[HIP-4.0.1](https://alpaka.readthedocs.io/en/latest/install/HIP.html) (clang)|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:white_check_mark:| - | - |
This library uses C++17 (or newer when available).

|Accelerator Back-end|gcc 7.5 <br/> (Linux)|gcc 8.5 <br/> (Linux)|gcc 9.4 <br/> (Linux)|gcc 10.3 <br/> (Linux)|gcc 11.1 <br/> (Linux)|clang 5-9 <br/> (Linux)|clang 10 <br/> (Linux)|clang 11 <br/> (Linux)|clang 12 <br/> (Linux)|Apple LLVM 11.3.1/12.4.0/12.5.1/13.0.0 <br /> (macOS)|MSVC 2019 <br/> (Windows)|
|---|---|---|---|---|---|---|---|---|---|---|---|
|Serial|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|
|OpenMP 2.0+ blocks|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|OpenMP 2.0+ threads|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|OpenMP 5.0 (CPU)|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:white_check_mark:|:white_check_mark:|:x:|:x:|
| std::thread |:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|
| Boost.Fiber |:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|:white_check_mark:|
|TBB|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:white_check_mark:|:x:|
|CUDA (nvcc)|:white_check_mark: <br/> (CUDA 11.0-11.4) |:white_check_mark: <br/> (CUDA 11.0-11.4) |:white_check_mark: <br/> (CUDA 11.0-11.4)|:x:|:x:|:white_check_mark: <br/> (CUDA 11.0-11.4)|:white_check_mark: <br/> (CUDA 11.1-11.4)|:white_check_mark: <br/> (CUDA 11.4)|:white_check_mark: <br/> (CUDA 11.4)|:x:|:white_check_mark: <br/> (CUDA 11.2-11.4)|
|CUDA (clang) | - | - | - | - | - | - | :x: | :white_check_mark: <br/> (CUDA 9.2-10.1) | :white_check_mark: <br/> (CUDA 10.0-10.2) | - | - |
|[HIP-4.0.1](https://alpaka.readthedocs.io/en/latest/install/HIP.html) (clang)|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:x:|:white_check_mark:| - | - |

Other compilers or combinations marked with :x: in the table above may work but are not tested in CI and are therefore not explicitly supported.

Expand All @@ -92,10 +92,10 @@ Dependencies
The **alpaka** library itself just requires header-only libraries.
However some of the accelerator back-end implementations require different boost libraries to be built.

When an accelerator back-end using *Boost.Fiber* is enabled, `boost-fiber` and all of its dependencies are required to be built in C++14 mode `./b2 cxxflags="-std=c++14"`.
When *Boost.Fiber* is enabled and alpaka is built in C++17 mode with clang and libstdc++, Boost >= 1.67.0 is required.
When an accelerator back-end using *Boost.Fiber* is enabled, `boost-fiber` and all of its dependencies are required to be built in C++17 mode `./b2 cxxflags="-std=c++17"`.
When *Boost.Fiber* is enabled and alpaka is built with clang and libstdc++, Boost >= 1.67.0 is required.

When an accelerator back-end using *CUDA* is enabled, version *9.0* of the *CUDA SDK* is the minimum requirement.
When an accelerator back-end using *CUDA* is enabled, version *11.0* (with nvcc as CUDA compiler) or version *9.2* (with clang as CUDA compiler) of the *CUDA SDK* is the minimum requirement.
*NOTE*: When using nvcc as *CUDA* compiler, the *CUDA accelerator back-end* can not be enabled together with the *Boost.Fiber accelerator back-end* due to bugs in the nvcc compiler.
*NOTE*: When using clang as a native *CUDA* compiler, the *CUDA accelerator back-end* can not be enabled together with the *Boost.Fiber accelerator back-end* or any *OpenMP accelerator back-end* because this combination is currently unsupported.
*NOTE*: Separable compilation is disabled by default and can be enabled via the CMake flag `CMAKE_CUDA_SEPARABLE_COMPILATION`.
Expand All @@ -109,7 +109,7 @@ Usage
-----

The library is header only so nothing has to be built.
CMake 3.15+ is required to provide the correct defines and include paths.
CMake 3.18+ is required to provide the correct defines and include paths.
Just call `ALPAKA_ADD_EXECUTABLE` instead of `CUDA_ADD_EXECUTABLE` or `ADD_EXECUTABLE` and the difficulties of the CUDA nvcc compiler in handling `.cu` and `.cpp` files are automatically taken care of.
Source files do not need any special file ending.
Examples of how to utilize alpaka within CMake can be found in the `example` folder.
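The usage notes above can be sketched as a minimal consumer `CMakeLists.txt`. This is an illustrative sketch, not taken from the repository: the project name, source file name, and the `alpaka::alpaka` target name are assumptions, and alpaka must be discoverable via `CMAKE_PREFIX_PATH` or `alpaka_DIR`.

```cmake
# Minimal consumer project (hypothetical names; CMake 3.18+ as required above).
cmake_minimum_required(VERSION 3.18)
project(vectorAdd LANGUAGES CXX)

find_package(alpaka REQUIRED)

# ALPAKA_ADD_EXECUTABLE instead of ADD_EXECUTABLE/CUDA_ADD_EXECUTABLE:
# it takes care of nvcc's handling of .cu and .cpp files.
alpaka_add_executable(vectorAdd vectorAdd.cpp)
target_link_libraries(vectorAdd PRIVATE alpaka::alpaka)
```

With this, the same source builds unchanged for whichever accelerator back-ends were enabled when alpaka was configured.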
18 changes: 4 additions & 14 deletions cmake/alpakaCommon.cmake
@@ -125,14 +125,14 @@ endif()
set(ALPAKA_DEBUG "0" CACHE STRING "Debug level")
set_property(CACHE ALPAKA_DEBUG PROPERTY STRINGS "0;1;2")

set(ALPAKA_CXX_STANDARD_DEFAULT "14")
set(ALPAKA_CXX_STANDARD_DEFAULT "17")
# Check whether ALPAKA_CXX_STANDARD has already been defined as a non-cached variable.
if(DEFINED ALPAKA_CXX_STANDARD)
set(ALPAKA_CXX_STANDARD_DEFAULT ${ALPAKA_CXX_STANDARD})
endif()

set(ALPAKA_CXX_STANDARD ${ALPAKA_CXX_STANDARD_DEFAULT} CACHE STRING "C++ standard version")
set_property(CACHE ALPAKA_CXX_STANDARD PROPERTY STRINGS "14;17;20")
set_property(CACHE ALPAKA_CXX_STANDARD PROPERTY STRINGS "17;20")

if(NOT TARGET alpaka)
add_library(alpaka INTERFACE)
@@ -415,11 +415,10 @@ if(ALPAKA_ACC_GPU_CUDA_ENABLE)
endif()
endif()

# NOTE: Since CUDA 10.2 this option is also alternatively called '--extended-lambda'
if(ALPAKA_CUDA_EXPT_EXTENDED_LAMBDA STREQUAL ON)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:--expt-extended-lambda>)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:--extended-lambda>)
endif()
# This is mandatory because with c++14 many standard library functions we rely on are constexpr (std::min, std::multiplies, ...)
# This is mandatory because with c++17 many standard library functions we rely on are constexpr (std::min, std::multiplies, ...)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:--expt-relaxed-constexpr>)

if((CMAKE_BUILD_TYPE STREQUAL "Debug") OR (CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo"))
@@ -455,15 +454,6 @@ if(ALPAKA_ACC_GPU_CUDA_ENABLE)
# avoids warnings on host-device signatured, default constructors/destructors
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:-Xcudafe=--diag_suppress=esa_on_defaulted_function_ignored>)

# avoids warnings on host-device signature of 'std::__shared_count<>'
if(CUDAToolkit_VERSION VERSION_EQUAL 10.0)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:-Xcudafe=--diag_suppress=2905>)
elseif(CUDAToolkit_VERSION VERSION_EQUAL 10.1)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:-Xcudafe=--diag_suppress=2912>)
elseif(CUDAToolkit_VERSION VERSION_EQUAL 10.2)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:-Xcudafe=--diag_suppress=2976>)
endif()

if(ALPAKA_CUDA_KEEP_FILES STREQUAL ON)
alpaka_set_compiler_options(DEVICE target alpaka $<$<COMPILE_LANGUAGE:CUDA>:--keep>)
endif()
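Since `ALPAKA_CXX_STANDARD` is a cache variable (now defaulting to 17, with 17 and 20 as the allowed values), a consuming project can still opt in to a newer standard before alpaka's CMake code runs. A hedged sketch — the vendored path is hypothetical:

```cmake
# Opt in to C++20 (the new minimum and default is 17).
# Must be set before alpaka's CMake code is processed, because the
# cache default is established on first configure.
set(ALPAKA_CXX_STANDARD "20" CACHE STRING "C++ standard version")

# hypothetical vendored checkout of alpaka
add_subdirectory(thirdParty/alpaka)
```

Equivalently, `-DALPAKA_CXX_STANDARD=20` can be passed on the `cmake` command line.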
2 changes: 1 addition & 1 deletion docs/source/basic/library.rst
@@ -4,7 +4,7 @@ Library Interface
As described in the chapter about the :doc:`Abstraction </basic/abstraction>`, the general design of the library is very similar to *CUDA* and *OpenCL* but extends both by some points, while not requiring any language extensions.
General interface design as well as interface implementation decisions differentiating *alpaka* from those libraries are described in the Rationale section.
It uses C++ because it is one of the most performant languages available on nearly all systems.
Furthermore, C++14 allows to describe the concepts in a very abstract way that is not possible with many other languages.
Furthermore, C++17 allows to describe the concepts in a very abstract way that is not possible with many other languages.
The *alpaka* library extensively makes use of advanced functional C++ template meta-programming techniques.
The Implementation Details section discusses the C++ library and the way it provides extensibility and optimizability.

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -126,7 +126,7 @@
(master_doc, 'alpaka', u'alpaka Documentation',
author, 'alpaka', 'Abstraction Library for Parallel Kernel Acceleration',
"""
The alpaka library is a header-only C++14 abstraction library for
The alpaka library is a header-only C++17 abstraction library for
accelerator development. Its aim is to provide performance portability
across accelerators through the abstraction (not hiding!) of the underlying
levels of parallelism.
6 changes: 3 additions & 3 deletions include/alpaka/core/UniformCudaHip.hpp
@@ -12,7 +12,7 @@
#if defined(ALPAKA_ACC_GPU_CUDA_ENABLED) || defined(ALPAKA_ACC_GPU_HIP_ENABLED)

# include <alpaka/core/BoostPredef.hpp>

# include <alpaka/core/Unused.hpp>

# if defined(ALPAKA_ACC_GPU_CUDA_ENABLED) && !BOOST_LANG_CUDA
# error If ALPAKA_ACC_GPU_CUDA_ENABLED is set, the compiler has to support CUDA!
@@ -63,7 +63,7 @@ namespace alpaka
# endif
ALPAKA_DEBUG_BREAK;
// reset the last error to allow user side error handling
ALPAKA_API_PREFIX(GetLastError)();
ignore_unused(ALPAKA_API_PREFIX(GetLastError)());
throw std::runtime_error(sError);
}
}
Expand All @@ -90,7 +90,7 @@ namespace alpaka
else
{
// reset the last error to avoid propagation to the next CUDA/HIP API call
ALPAKA_API_PREFIX(GetLastError)();
ignore_unused(ALPAKA_API_PREFIX(GetLastError)());
}
}
}
3 changes: 2 additions & 1 deletion include/alpaka/kernel/TaskKernelGpuUniformCudaHipRt.hpp
@@ -39,6 +39,7 @@
// Implementation details.
# include <alpaka/acc/AccGpuUniformCudaHipRt.hpp>
# include <alpaka/core/Decay.hpp>
# include <alpaka/core/Unused.hpp>
# include <alpaka/dev/DevUniformCudaHipRt.hpp>
# include <alpaka/kernel/Traits.hpp>
# include <alpaka/queue/QueueUniformCudaHipRtBlocking.hpp>
@@ -423,7 +424,7 @@ namespace alpaka
// Wait for the kernel execution to finish but do not check error return of this call.
// Do not use the alpaka::wait method because it checks the error itself but we want to give a custom
// error message.
ALPAKA_API_PREFIX(StreamSynchronize)(queue.m_spQueueImpl->m_UniformCudaHipQueue);
ignore_unused(ALPAKA_API_PREFIX(StreamSynchronize)(queue.m_spQueueImpl->m_UniformCudaHipQueue));
# if ALPAKA_DEBUG >= ALPAKA_DEBUG_MINIMAL
std::string const msg(
"'execution of kernel: '" + std::string(typeid(TKernelFnObj).name()) + "' failed with");
3 changes: 2 additions & 1 deletion include/alpaka/pltf/PltfUniformCudaHipRt.hpp
@@ -12,6 +12,7 @@
#if defined(ALPAKA_ACC_GPU_CUDA_ENABLED) || defined(ALPAKA_ACC_GPU_HIP_ENABLED)

# include <alpaka/core/BoostPredef.hpp>
# include <alpaka/core/Unused.hpp>

# if defined(ALPAKA_ACC_GPU_CUDA_ENABLED) && !BOOST_LANG_CUDA
# error If ALPAKA_ACC_GPU_CUDA_ENABLED is set, the compiler has to support CUDA!
@@ -148,7 +149,7 @@ namespace alpaka
// Return the previous error from cudaStreamCreate.
ALPAKA_UNIFORM_CUDA_HIP_RT_CHECK(rc);
// Reset the Error state.
ALPAKA_API_PREFIX(GetLastError)();
ignore_unused(ALPAKA_API_PREFIX(GetLastError)());
return false;
}
}
13 changes: 5 additions & 8 deletions script/before_install.sh
@@ -119,16 +119,13 @@ then
then
if [ ! -z "${ALPAKA_CXX_STANDARD+x}" ]
then
if (( "${ALPAKA_CXX_STANDARD}" >= 17 ))
if [ "${ALPAKA_CI_INSTALL_FIBERS}" == "ON" ]
then
if [ "${ALPAKA_CI_INSTALL_FIBERS}" == "ON" ]
if (( ( ( "${ALPAKA_CI_BOOST_BRANCH_MAJOR}" == 1 ) && ( "${ALPAKA_CI_BOOST_BRANCH_MINOR}" < 67 ) ) || ( "${ALPAKA_CI_BOOST_BRANCH_MAJOR}" < 1 ) ))
then
if (( ( ( "${ALPAKA_CI_BOOST_BRANCH_MAJOR}" == 1 ) && ( "${ALPAKA_CI_BOOST_BRANCH_MINOR}" < 67 ) ) || ( "${ALPAKA_CI_BOOST_BRANCH_MAJOR}" < 1 ) ))
then
# https://github.com/boostorg/coroutine2/issues/26
echo "libstdc++ in c++17 mode is not compatible with boost.fibers in boost-1.66 and below."
exit 1
fi
# https://github.com/boostorg/coroutine2/issues/26
echo "libstdc++ in c++17 mode is not compatible with boost.fibers in boost-1.66 and below."
exit 1
fi
fi
fi
4 changes: 4 additions & 0 deletions script/docker_ci.sh
@@ -119,6 +119,10 @@ then
then
ALPAKA_DOCKER_ENV_LIST+=("--env" "CMAKE_CUDA_ARCHITECTURES=${CMAKE_CUDA_ARCHITECTURES}")
fi
if [ ! -z "${CMAKE_CUDA_FLAGS+x}" ]
then
ALPAKA_DOCKER_ENV_LIST+=("--env" "CMAKE_CUDA_FLAGS=${CMAKE_CUDA_FLAGS}")
fi
fi
ALPAKA_DOCKER_ENV_LIST+=("--env" "ALPAKA_CI_INSTALL_HIP=${ALPAKA_CI_INSTALL_HIP}")
if [ "${ALPAKA_CI_INSTALL_HIP}" == "ON" ]