Switch to KernelAbstractions.jl #559
Conversation
force-pushed from 2a541d0 to 184b36f
One of the consequences here is that NeuralAttentionlib.jl also relies on … I also see that Bennu.jl uses …
# WrappedArray from Adapt for Base wrappers.
backend(::Type{WA}) where WA<:WrappedArray = backend(unwrap_type(WA))
@vchuravy Does KA.jl already support recursing into wrapped arrays for get_backend queries?
Uhm I don't think so, but we can add that.
I'm not entirely sure we want to; it would pull the whole Union mess of Adapt's WrappedArray into KA.jl. On the other hand, there's not much of an alternative right now if we want the ability to launch kernels on wrapped arrays...
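For illustration, a minimal sketch of what such recursion could look like, assuming KA.jl adopted the same Adapt-based approach as the backend definition above (this is one possible shape, not the merged API):

using Adapt: WrappedArray
import KernelAbstractions

# Peel one wrapper level at a time via Base.parent until an array type
# with a native get_backend method is reached.
KernelAbstractions.get_backend(A::WrappedArray) = KernelAbstractions.get_backend(parent(A))

With a method like this, get_backend(view(A, :, 1)) would resolve to the backend of A itself.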
KernelAbstractions.isgpu(b::JLBackend) = false

function convert_to_cpu(obj::Kernel{JLBackend, W, N, F}) where {W, N, F}
    return Kernel{typeof(KernelAbstractions.CPU(; static = obj.backend.static)), W, N, F}(KernelAbstractions.CPU(; static = obj.backend.static), obj.f)
end
This is clever xD
Can you explain? I didn't get what this was for, and it seems unused?
Unless I did something wrong, it's used for kernel configuration:
function (obj::Kernel{JLBackend})(args...; ndrange=nothing, workgroupsize=nothing)
    device_args = jlconvert.(args)
    new_obj = convert_to_cpu(obj)
    new_obj(device_args...; ndrange, workgroupsize)
end
It will essentially transform anything that has a JLBackend into a CPU(static) (default false) for KA execution, so we can use the GPU kernels on CPU Arrays. It's needed because JLBackend subtypes KernelAbstractions.GPU, but needs to actually call CPU kernels:
struct JLBackend <: KernelAbstractions.GPU
    static::Bool
    JLBackend(;static::Bool=false) = new(static)
end
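To make the flow concrete, a hypothetical end-to-end sketch (the mul2! kernel is illustrative and not from this PR's diff): instantiating a kernel on a JLBackend and calling it dispatches through the overload above, so convert_to_cpu rewrites it to run as CPU(static = false) on the plain Arrays produced by jlconvert.

using KernelAbstractions, JLArrays

# An illustrative kernel, not part of the PR.
@kernel function mul2!(A)
    I = @index(Global)
    A[I] *= 2
end

A = JLArray(ones(Float32, 16))
k = mul2!(JLArrays.JLBackend(), 16)  # Kernel{JLBackend, ...} with a static workgroup of 16
k(A; ndrange = length(A))            # dispatches through convert_to_cpu, runs as a CPU kernel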
force-pushed from 6eaa774 to 2f8f813
force-pushed from d8fb6d3 to 7348bba
force-pushed from 7348bba to f418d7a
Let's do this!
Woo! Great work!
Continuation of #525. Separate PR because I reverted the launch_configuration removal that's on @leios' branch.