Sanity checks (read this first, then remove this section)

[x] Make sure you're reporting a bug; for general questions, please use Discourse or Slack.
[x] If you're dealing with a performance issue, make sure you disable scalar iteration (CUDA.allowscalar(false)). Only file an issue if that shows scalar iteration happening in CUDA.jl or Base Julia, as opposed to your own code.
[x] If you're seeing an error message, follow the error message instructions, if any (e.g. inspect code with @device_code_warntype). If you can't solve the problem using that information, make sure to post it as part of the issue.
[x] Always ensure you're using the latest version of CUDA.jl, and if possible, please check the master branch to see if your issue hasn't been resolved yet.

If your bug is still valid, please go ahead and fill out the template below.
Describe the bug
The function cudnnFindConvolutionAlgorithmWorkspaceSize calls CUDA.cached_memory, which was removed in 6ab0d42. As a result, any convolution op that goes through this cuDNN code path fails with an UndefVarError.
To reproduce
julia> using CUDA

julia> CUDA.CUDNN.cudnnFindConvolutionAlgorithmWorkspaceSize([1])
ERROR: UndefVarError: cached_memory not defined
Stacktrace:
[1] cudnnFindConvolutionAlgorithmWorkspaceSize(x::Vector{Int64})
@ CUDA.CUDNN E:\Programs\julia\.julia\packages\CUDA\zx5iI\lib\cudnn\convolution.jl:235
[2] top-level scope
@ REPL[4]:1
Manifest.toml
Tested in a project with many other deps, but bug should be obvious.
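A possible stopgap until a proper fix lands is to restore the missing binding in the current session. This is a hypothetical workaround, not the actual fix: cached_memory used to report bytes held by the memory pool, and stubbing it to return 0 merely papers over the removed API so the cuDNN code path can run again.

```julia
using CUDA

# Hypothetical workaround: define a stub for the removed binding so that
# cudnnFindConvolutionAlgorithmWorkspaceSize no longer throws UndefVarError.
# Returning 0 means "no cached memory", which only affects the workspace
# size heuristic, not correctness.
if !isdefined(CUDA, :cached_memory)
    @eval CUDA cached_memory() = 0
end
```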
Expected behavior
cudnnFindConvolutionAlgorithmWorkspaceSize should not crash, so that convolution ops keep working :)
Version info
Details on Julia:
# please post the output of:
julia> versioninfo()
Julia Version 1.7.0-beta3.0
Commit e76c9dad42 (2021-07-07 08:12 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.0 (ORCJIT, haswell)
Environment:
JULIA_DEPOT_PATH = E:/Programs/julia/.julia
JULIA_EDITOR = code
JULIA_NUM_THREADS = 6
I suspect the underlying issue is that the NNlib code was removed, which also took the tests covering it. @maleadt, could we bring back CI for NNlib/Flux-specific cases? As it stands, this would break most of Flux.
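To keep this from regressing, a CI smoke test along these lines could exercise the NNlib convolution path (a sketch, assuming NNlib is available alongside CUDA.jl in the test environment):

```julia
using CUDA, NNlib

# Smoke test: a single 3×3 convolution over an 8×8 input, which exercises
# the cuDNN algorithm-selection code path on the GPU.
x = CUDA.rand(Float32, 8, 8, 1, 1)  # W×H×C×N input
w = CUDA.rand(Float32, 3, 3, 1, 1)  # 3×3 kernel, 1 input/output channel
y = NNlib.conv(x, w)
@assert size(y) == (6, 6, 1, 1)     # valid (no-padding) convolution
```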