Support for LinearAlgebra.pinv #2070
Comments
CUDA.jl doesn't promise compatibility with, say, all of LinearAlgebra.jl, and there's no functionality, or tests, for `pinv` here. Specifically, the failure comes from the closure inside `LinearAlgebra.pinv`, which captures `tol` in a `Core.Box`; a boxed capture is not `isbits`, so the broadcast kernel cannot be compiled for the GPU (see the error at the end of this issue).
A possible fix is discussed here: https://discourse.julialang.org/t/pinv-not-type-stable/103885/13
After JuliaLang/julia#51351, `pinv` is type-stable and works here.
Great, thanks for the update! If anybody cares about this functionality on current versions of Julia, please open a PR on e.g. GPUArrays back-porting this definition (but using …).
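For anyone looking for a stopgap on older Julia versions: a minimal sketch of what such a GPU-friendly definition could look like, assuming the culprit is the boxed `tol` capture. The name `gpu_pinv`, the tolerance default, and the SVD-based formulation are illustrative assumptions here, not the actual definition referenced above:

```julia
using CUDA, LinearAlgebra

# Sketch only: pinv via a CUSOLVER-backed SVD, applying the singular-value
# cutoff without any reassigned (and therefore boxed) captured variable.
function gpu_pinv(A::CuMatrix{T}; rtol::Real = eps(real(T)) * min(size(A)...)) where {T}
    F = svd(A)                        # runs on the GPU via CUSOLVER
    tol = rtol * maximum(F.S)         # plain isbits scalar, assigned once
    Sinv = map(s -> s > tol ? inv(s) : zero(s), F.S)  # isbits closure
    return F.V * (Sinv .* F.U')       # == V * Diagonal(Sinv) * U'
end
```

Because `tol` is assigned exactly once before the closure is created, it is captured as a plain scalar rather than a `Core.Box`, so the `map` kernel compiles.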
For me, `pinv(cu(rand(10,10)))` results in an error.
Works here
Thanks for checking - now the obvious question: why? Running on an A6000.
Edit: I'm asking because I have no idea how to even approach debugging this - but it is not highest priority; if it works for others, fine with me.
I'm using …
I tried `dev`ing both, but no help here. Thanks for the comment.
Yeah, I'm not sure what to say here...

```
❯ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.11.1 (2024-10-16)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

(@v1.11) pkg> activate --temp
  Activating new project at `/tmp/jl_lUSpdY`

(jl_lUSpdY) pkg> add CUDA#master
     Updating git-repo `https://github.com/JuliaGPU/CUDA.jl.git`
     Updating registry at `~/.julia/registries/General.toml`
    Resolving package versions...
   Installed GPUCompiler ─ v1.0.1
   Installed LLVM ──────── v9.1.3
     Updating `/tmp/jl_lUSpdY/Project.toml`
  [052768ef] + CUDA v5.5.2 `https://github.com/JuliaGPU/CUDA.jl.git#master`

julia> using CUDA, LinearAlgebra

julia> CUDA.allowscalar(false)

julia> pinv(cu(rand(10,10)))
10×10 CuArray{Float32, 2, CUDA.DeviceMemory}:
   0.79093    -3.27       1.61866    0.332101    1.51121   -0.141722  -0.789732    2.41107   -0.787362  -0.748518
   3.02281    -4.83408    2.19427    0.673688    2.17965   -1.5945    -2.05704     3.44536    0.593075  -1.61866
  -4.06717     5.31124   -3.32017   -0.383606   -0.741184   1.35816    0.646221   -3.79391   -0.792143   3.22927
   4.30408    -5.67278    3.60602   -0.373965    2.28992   -2.24799   -2.40252     5.19473    0.52331   -1.9391
   3.73021    -8.38128    4.81611    1.22624     3.23734   -2.7525    -4.09566     7.25994   -0.109592  -1.62358
   0.32791    -0.108507  -0.248915   0.156819   -0.352967   0.598659  -0.900688    0.613525  -0.323573   0.516044
  13.3623    -19.4055    13.5906     1.24436     6.29929   -6.5096    -9.11556    16.5429     0.802335  -6.99507
 -11.327     19.4026    -12.4005    -1.12684    -7.04506    5.5385    10.12      -16.9265    -0.316425   5.52855
   0.606266    2.28777   -0.466847  -1.12594    -2.0018     0.646723   1.4594     -2.69254    1.71311    0.109465
  -7.48365     9.57157   -6.09877    0.0717196  -2.85697    3.2981     4.4907     -7.35427   -1.07708    2.55932
```
Most likely some dependency in my project led to issues; not exactly sure why it didn't show up when adding the dev version with the compat check. Super sorry for not testing in a clean environment. Thanks for your help - it works now for me too.
Using `pinv()` to find the partial transpose of a matrix on the GPU fails. A minimal example is as follows.
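The original snippet is reconstructed here as a sketch, assuming a 5×5 diagonal matrix with a single nonzero entry of 2.0, whose pseudo-inverse matches the output described below:

```julia
using LinearAlgebra

A = Matrix(Diagonal([2.0, 0, 0, 0, 0]))  # assumed input: rank-1, singular values (2, 0, 0, 0, 0)
pinv(A)
```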
correctly gives the equivalent of `Matrix(Diagonal([0.5, 0, 0, 0, 0]))`. If, however, we take the same call on the GPU (again a sketch, assuming the array is moved over with `cu`),
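```julia
using CUDA

pinv(cu(A))  # assumed GPU reproduction of the call above
```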
the following error is raised:
```
GPU compilation of MethodInstance for (::GPUArrays.var"#broadcast_kernel#26")(::CUDA.CuKernelContext, ::SubArray{Float64, 1, CuDeviceVector{Float64, 1}, Tuple{StepRange{Int64, Int64}}, true}, ::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, LinearAlgebra.var"#34#35", Tuple{Base.Broadcast.Extruded{SubArray{Int64, 1, CuDeviceVector{Int64, 1}, Tuple{StepRange{Int64, Int64}}, true}, Tuple{Bool}, Tuple{Int64}}}}, ::Int64) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, LinearAlgebra.var"#34#35", Tuple{Base.Broadcast.Extruded{SubArray{Int64, 1, CuDeviceVector{Int64, 1}, Tuple{StepRange{Int64, Int64}}, true}, Tuple{Bool}, Tuple{Int64}}}}, which is not isbits:
  .f is of type LinearAlgebra.var"#34#35" which is not isbits.
    .tol is of type Core.Box which is not isbits.
      .contents is of type Any which is not isbits.
```
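For background: the `Core.Box` shows up because Julia boxes any captured local variable that is (or may be) reassigned, and a boxed capture is not `isbits`, so the closure can no longer be passed to a GPU kernel. A minimal illustration of the pattern, with hypothetical function names:

```julia
using CUDA

# Reassigning a captured local forces it into a Core.Box; the closure is
# no longer isbits and CUDA.jl rejects it with a KernelError like the above.
function boxed_threshold(xs::CuVector{Float32})
    tol = 0.1f0
    tol = 2 * tol                       # second assignment -> tol is boxed
    map(x -> x > tol ? x : 0.0f0, xs)   # fails: .tol is a Core.Box
end

# Assigning the captured local exactly once keeps it a plain Float32,
# so the closure is isbits and compiles to a GPU kernel.
function unboxed_threshold(xs::CuVector{Float32})
    tol = 2 * 0.1f0                     # single assignment
    map(x -> x > tol ? x : 0.0f0, xs)   # works
end
```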