CUDA.function() == false; #64

Open
usmcamp0811 opened this issue Feb 8, 2024 · 3 comments

Comments

@usmcamp0811

I'm not exactly sure how to use this to create a Nix package that has an FHS environment. When I try to use the example template, I can't get CUDA.jl to see my GPU. My GPU works on the system with Julia and CUDA, but if I use the Julia packaged with Julia2Nix, the GPU is not functional.

@samrose
Collaborator

samrose commented Feb 8, 2024

You probably need to include cudatoolkit

Here is what I had to do to create a dev env shell for PyTorch that could access CUDA, for instance: https://gist.github.com/samrose/01fdcc045a262168540ba56ae95d1d26?permalink_comment_id=4816797#gistcomment-4816797
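
Roughly, something along these lines (a minimal, untested sketch in the spirit of that gist; the cudaPackages attribute names and the nixpkgs channel are assumptions and may need adjusting for your setup):

```nix
# Sketch only: a devShell that exposes the CUDA toolkit and cuDNN.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true; # the CUDA packages are unfree
      };
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = with pkgs; [
          cudaPackages.cudatoolkit
          cudaPackages.cudnn
        ];
        # Expose the toolkit, cuDNN, and the NixOS driver libraries to the shell.
        shellHook = ''
          export CUDA_PATH=${pkgs.cudaPackages.cudatoolkit}
          export LD_LIBRARY_PATH=${pkgs.cudaPackages.cudatoolkit}/lib:${pkgs.cudaPackages.cudnn}/lib:/run/opengl-driver/lib:$LD_LIBRARY_PATH
        '';
      };
    };
}
```

Note that Julia itself would still have to come from somewhere (the Julia2Nix wrapper or nixpkgs) for CUDA.jl to pick any of this up.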

@usmcamp0811
Author

Ah ok, after some struggling I got it to work. Thanks.

@usmcamp0811
Author

After a lot more mucking around, I came to realize that I still can't get CUDA to work with Julia2Nix; however, this is probably because I am not building the project correctly.

Initially I thought I had gotten things to work because I used @samrose's shell config, but I didn't realize it didn't even have Julia in it, so the shell defaulted to my system Julia, which I have configured using scientific-fhs.

I am unable to generate the julia2nix.toml due to other errors; I hope that is the root cause here.

If I try to use the julia-wrapped package the flake provides, I get one of three outcomes depending on how I configure my flake:

1. Julia fails to start because of some pathing issues (sorry, I don't have the error log handy).
2. Julia starts, but CUDA is not functional.
3. Julia starts and CUDA reports functional, but I am unable to do operations on the GPU, resulting in a ptxas error, likely because I don't have cuDNN in my path or something along those lines.

ERROR: Failed to compile PTX code (ptxas exited with code 127)
Invocation arguments: --generate-line-info --verbose --gpu-name sm_86 --output-file /tmp/jl_21utE96Vkh.cubin /tmp/jl_StC1SjU3g9.ptx
Could not start dynamically linked executable: /home/mcamp/.config/julia/artifacts/913584335ab836f9781a0325178d0949c193f50b/bin/ptxas
NixOS cannot run dynamically linked executables intended for generic
linux environments out of the box. For more information, see:
https://nix.dev/permalink/stub-ld
If you think this is a bug, please file an issue and attach /tmp/jl_StC1SjU3g9.ptx
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] cufunction_compile(job::GPUCompiler.CompilerJob, ctx::LLVM.ThreadSafeContext)
    @ CUDA ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:429
  [3] #228
    @ ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:348 [inlined]
  [4] LLVM.ThreadSafeContext(f::CUDA.var"#228#229"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{GPUArrays.var"#broadcast_kernel#26", Tuple{CUDA.CuKernelContext, CuDeviceMatrix{Float32, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{2}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}, typeof(*), Tuple{Int64, Base.Broadcast.Extruded{CuDeviceMatrix{Float32, 1}, Tuple{Bool, Bool}, Tuple{Int64, Int64}}}}, Int64}}}})
    @ LLVM ~/.config/julia/packages/LLVM/HykgZ/src/executionengine/ts_module.jl:14
  [5] JuliaContext(f::CUDA.var"#228#229"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{GPUArrays.var"#broadcast_kernel#26", Tuple{CUDA.CuKernelContext, CuDeviceMatrix{Float32, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{2}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}, typeof(*), Tuple{Int64, Base.Broadcast.Extruded{CuDeviceMatrix{Float32, 1}, Tuple{Bool, Bool}, Tuple{Int64, Int64}}}}, Int64}}}})
    @ GPUCompiler ~/.config/julia/packages/GPUCompiler/S3TWf/src/driver.jl:74
  [6] cufunction_compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:347
  [7] cached_compilation(cache::Dict{UInt64, Any}, job::GPUCompiler.CompilerJob, compiler::typeof(CUDA.cufunction_compile), linker::typeof(CUDA.cufunction_link))
    @ GPUCompiler ~/.config/julia/packages/GPUCompiler/S3TWf/src/cache.jl:90
  [8] cufunction(f::GPUArrays.var"#broadcast_kernel#26", tt::Type{Tuple{CUDA.CuKernelContext, CuDeviceMatrix{Float32, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{2}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}, typeof(*), Tuple{Int64, Base.Broadcast.Extruded{CuDeviceMatrix{Float32, 1}, Tuple{Bool, Bool}, Tuple{Int64, Int64}}}}, Int64}}; name::Nothing, always_inline::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ CUDA ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:300
  [9] cufunction
    @ ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:293 [inlined]
 [10] macro expansion
    @ ~/.config/julia/packages/CUDA/BbliS/src/compiler/execution.jl:102 [inlined]
 [11] #launch_heuristic#252
    @ ~/.config/julia/packages/CUDA/BbliS/src/gpuarrays.jl:17 [inlined]
 [12] launch_heuristic
    @ ~/.config/julia/packages/CUDA/BbliS/src/gpuarrays.jl:15 [inlined]
 [13] _copyto!
    @ ~/.config/julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:65 [inlined]
 [14] copyto!
    @ ~/.config/julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:46 [inlined]
 [15] copy
    @ ~/.config/julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:37 [inlined]
 [16] materialize
    @ ./broadcast.jl:873 [inlined]
 [17] broadcast_preserving_zero_d
    @ ./broadcast.jl:862 [inlined]
 [18] *(A::Int64, B::CuArray{Float32, 2, CUDA.Mem.DeviceBuffer})
    @ Base ./arraymath.jl:21
 [19] top-level scope
    @ REPL[23]:1
 [20] top-level scope
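
In case it helps anyone hitting the same stub-ld error: one possible workaround (a sketch I have not verified against Julia2Nix) is to wrap Julia in an FHS environment so the artifact-shipped ptxas can find a conventional dynamic loader. The derivation name and exact package list below are illustrative assumptions.

```nix
# Sketch only: run Julia inside an FHS environment (buildFHSEnv; called
# buildFHSUserEnv in older nixpkgs) so generic dynamically linked binaries
# such as the artifact-provided ptxas can execute on NixOS.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:

pkgs.buildFHSEnv {
  name = "julia-cuda-fhs";
  targetPkgs = p: with p; [
    julia
    cudaPackages.cudatoolkit
    cudaPackages.cudnn
    stdenv.cc.cc.lib   # libstdc++ expected by the prebuilt binaries
    zlib
  ];
  # Make the NixOS GPU driver libraries visible inside the FHS sandbox.
  profile = ''
    export LD_LIBRARY_PATH=/run/opengl-driver/lib:$LD_LIBRARY_PATH
  '';
  runScript = "julia";
}
```

Another route for the loader problem would be nix-ld (`programs.nix-ld.enable = true;` in the NixOS configuration), which exists specifically to run such generic dynamically linked executables; either way, CUDA.jl still has to find a matching toolkit, so treat this as a starting point rather than a fix.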
