I am not sure if this is still a holdover from VTK-m needing static libs with CUDA, or if there is an issue that requires VTK-h to build static libs when using CUDA. There is no conflict for this case in the spack recipe, but when I went to build, I hit a CMake error.
On the VTK-m side, I believe shared libraries with CUDA are relatively well tested at this point; it is how the CUDA CI is configured now, and I have been running builds and simple tests with it without issue (v1.7.1).
Integration with VTK-h, on the other hand, I am not sure about.
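For reference, here is a minimal initial-cache sketch of the shared-plus-CUDA combination being discussed (something you could pass via `cmake -C`). `BUILD_SHARED_LIBS` and `VTKm_ENABLE_CUDA` are real options, but this is only an illustration, not a copy of the actual CI configuration, and the architecture value is just an example.

```cmake
# Illustrative initial cache: build VTK-m as shared libraries with CUDA enabled.
# Not the actual CI configuration; adjust the architecture for your GPU.
set(BUILD_SHARED_LIBS ON CACHE BOOL "Build VTK-m as shared libraries")
set(VTKm_ENABLE_CUDA ON CACHE BOOL "Enable the CUDA device adapter")
set(CMAKE_CUDA_ARCHITECTURES 70 CACHE STRING "Example target GPU architecture")
```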
I'll just mention a few things that might be helpful for whoever looks into this. The main difference between what VTK-m is likely testing and what VTK-h and Ascent are doing is that VTK-h compiles new VTK-m code (i.e., kernels and other code that executes on the device). The only reason we initially enforced that VTK-m was static is that in the old days it had to be, so there is CMake code that checks for that, and there are spack variants that enforce it as well. In Ascent, we also had to perform a final device linking step, which is painful and hard to understand for various CMake reasons, and it took a very long time to find a solution and implement it.
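To make that concrete, here is a rough sketch of both pieces described above. This is illustrative only, not the actual VTK-h or Ascent CMake code; the target name `ascent` and the exact error message are assumptions, while `CUDA_SEPARABLE_COMPILATION` and `CUDA_RESOLVE_DEVICE_SYMBOLS` are standard CMake target properties.

```cmake
# Illustrative sketch only; not the actual VTK-h/Ascent CMake code.

# The kind of guard that enforces a static build when VTK-m has CUDA enabled.
if(VTKm_ENABLE_CUDA AND BUILD_SHARED_LIBS)
  message(FATAL_ERROR
    "A static build is required when VTK-m is built with CUDA enabled")
endif()

# The final device-link step is normally expressed through standard CMake
# CUDA properties on the target that compiles and consumes the device code.
set_target_properties(ascent PROPERTIES
  CUDA_SEPARABLE_COMPILATION ON
  CUDA_RESOLVE_DEVICE_SYMBOLS ON)
```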
If it can be compiled as shared, that would be fantastic since it would reduce the complexity.
@cyrush