
Comparing changes

base repository: JuliaBinaryWrappers/CUDA_Driver_jll.jl
base: CUDA_Driver-v0.9.1+0
head repository: JuliaBinaryWrappers/CUDA_Driver_jll.jl
compare: main
  • 8 commits
  • 6 files changed
  • 1 contributor

Commits
  • Jul 4, 2024: 18c89e9
  • Aug 6, 2024: 0e2a5a6
  • Aug 7, 2024: 2401ceb
  • Sep 17, 2024: a707cbd
  • Sep 18, 2024: eaef112
  • Oct 1, 2024: 32211b0
  • Nov 22, 2024: d10e16a
  • Dec 9, 2024: 076bd58

Showing with 123 additions and 292 deletions.
  1. +6 −8 Artifacts.toml
  2. +4 −6 Project.toml
  3. +3 −3 README.md
  4. +0 −1 src/CUDA_Driver_jll.jl
  5. +55 −137 src/wrappers/aarch64-linux-gnu.jl
  6. +55 −137 src/wrappers/x86_64-linux-gnu.jl
14 changes: 6 additions & 8 deletions Artifacts.toml
@@ -1,20 +1,18 @@
[[CUDA_Driver]]
arch = "x86_64"
git-tree-sha1 = "a86b67fd924e2a8c72d376d301a34b2364281978"
lazy = true
git-tree-sha1 = "e420f89c10caa2cf4a62ef130b12a1ed7f0f7827"
libc = "glibc"
os = "linux"

[[CUDA_Driver.download]]
sha256 = "350b076a65dc548226a91cb53a029647c1264ef6099379eb8f0e5f95dcaa0a15"
url = "https://github.com/JuliaBinaryWrappers/CUDA_Driver_jll.jl/releases/download/CUDA_Driver-v0.9.1+0/CUDA_Driver.v0.9.1.x86_64-linux-gnu.tar.gz"
sha256 = "e708ae2bd87a58872774006c887bae942026492436a7f34a66644af83accd7ae"
url = "https://github.com/JuliaBinaryWrappers/CUDA_Driver_jll.jl/releases/download/CUDA_Driver-v0.11.0+0/CUDA_Driver.v0.11.0.x86_64-linux-gnu.tar.gz"
[[CUDA_Driver]]
arch = "aarch64"
git-tree-sha1 = "056359c8cd352cf6990ab2a77cba5667e3ce752e"
lazy = true
git-tree-sha1 = "8c3671b08a141a62079439a3c0334a8c31d0be58"
libc = "glibc"
os = "linux"

[[CUDA_Driver.download]]
sha256 = "49e52dc966d0fdbb0d52b51f4ec10079e74e40983b180a38f76e79ecd51701f9"
url = "https://github.com/JuliaBinaryWrappers/CUDA_Driver_jll.jl/releases/download/CUDA_Driver-v0.9.1+0/CUDA_Driver.v0.9.1.aarch64-linux-gnu.tar.gz"
sha256 = "414f9428c1e700480e81b326451970b6b460d9f9b904c69df57ad8804cc66a26"
url = "https://github.com/JuliaBinaryWrappers/CUDA_Driver_jll.jl/releases/download/CUDA_Driver-v0.11.0+0/CUDA_Driver.v0.11.0.aarch64-linux-gnu.tar.gz"
10 changes: 4 additions & 6 deletions Project.toml
@@ -1,18 +1,16 @@
name = "CUDA_Driver_jll"
uuid = "4ee394cb-3365-5eb0-8335-949819d2adfc"
version = "0.9.1+0"
version = "0.11.0+0"

[deps]
JLLWrappers = "692b3bcd-3c85-4b1f-b108-f13ce0eb3210"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
LazyArtifacts = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
Libdl = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
Artifacts = "56f22d72-fd6d-98f1-02f0-08ddc0907c33"

[compat]
JLLWrappers = "1.2.0"
julia = "1.0"
Pkg = "1"
LazyArtifacts = "1"
Libdl = "1"
Artifacts = "1"
Pkg = "< 0.0.1, 1"
Libdl = "< 0.0.1, 1"
Artifacts = "< 0.0.1, 1"
6 changes: 3 additions & 3 deletions README.md
@@ -1,10 +1,10 @@
-# `CUDA_Driver_jll.jl` (v0.9.1+0)
+# `CUDA_Driver_jll.jl` (v0.11.0+0)

[![deps](https://juliahub.com/docs/CUDA_Driver_jll/deps.svg)](https://juliahub.com/ui/Packages/General/CUDA_Driver_jll/)

This is an autogenerated package constructed using [`BinaryBuilder.jl`](https://github.com/JuliaPackaging/BinaryBuilder.jl).

-The originating [`build_tarballs.jl`](https://github.com/JuliaPackaging/Yggdrasil/blob/096e4c2516fea296f70a7cfc5f95d940242c87d9/C/CUDA/CUDA_Driver/build_tarballs.jl) script can be found on [`Yggdrasil`](https://github.com/JuliaPackaging/Yggdrasil/), the community build tree.
+The originating [`build_tarballs.jl`](https://github.com/JuliaPackaging/Yggdrasil/blob/88bbdf7fc658047bbdcd49e0a357683eb9a3f78b/C/CUDA/CUDA_Driver/build_tarballs.jl) script can be found on [`Yggdrasil`](https://github.com/JuliaPackaging/Yggdrasil/), the community build tree.

## Bug Reports

@@ -18,7 +18,7 @@ For more details about JLL packages and how to use them, see `BinaryBuilder.jl`

The tarballs for `CUDA_Driver_jll.jl` have been built from these sources:

-* file: https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-compat-12-5-555.42.06-1.aarch64.rpm (SHA256 checksum: `600d86e143b9fa97ec7862ff634210cacf072ce709c738ee322c932b763ab138`)
+* file: https://developer.download.nvidia.com/compute/cuda/repos/rhel8/sbsa/cuda-compat-12-7-565.57.01-1.el8.aarch64.rpm (SHA256 checksum: `68430934bc3de03cc00240b78da39099981fcedb81b48e69cbe4505dc36fffd7`)

## Platforms

1 change: 0 additions & 1 deletion src/CUDA_Driver_jll.jl
@@ -2,7 +2,6 @@
baremodule CUDA_Driver_jll
using Base
using Base: UUID
-using LazyArtifacts
import JLLWrappers

JLLWrappers.@generate_main_file_header("CUDA_Driver")
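
With the artifact no longer lazy, the wrapper module drops `using LazyArtifacts`; consumers are unaffected. A hedged sketch of typical downstream usage, relying on the standard JLLWrappers-generated `is_available()` and the `libcuda` global set in the wrappers below:

```julia
# Sketch only: observe which driver the JLL selected at load time.
using CUDA_Driver_jll

if CUDA_Driver_jll.is_available() && CUDA_Driver_jll.libcuda !== nothing
    @info "CUDA driver selected" CUDA_Driver_jll.libcuda
else
    @info "No usable CUDA driver found"
end
```
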
192 changes: 55 additions & 137 deletions src/wrappers/aarch64-linux-gnu.jl
@@ -22,7 +22,7 @@ function __init__()

JLLWrappers.@init_library_product(
libnvidia_nvvm,
"lib/libnvidia-nvvm.so",
"lib/libnvidia-nvvm.so.4",
nothing,
)

@@ -34,13 +34,6 @@ function __init__()

JLLWrappers.@generate_init_footer()

global compat_version = v"12.5.0"
# global variables we will set
global libcuda = nothing
global libcuda_version = nothing
global libcuda_original_version = nothing
# compat_version is set in build_tarballs.jl

# manual use of preferences, as we can't depend on additional packages in JLLs.
CUDA_Driver_jll_uuid = Base.UUID("4ee394cb-3365-5eb0-8335-949819d2adfc")
preferences = Base.get_preferences(CUDA_Driver_jll_uuid)
@@ -68,153 +61,78 @@ function __init__()
missing
end

# minimal API call wrappers we need
function driver_version(library_handle)
function_handle = Libdl.dlsym(library_handle, "cuDriverGetVersion"; throw_error=false)
if function_handle === nothing
@debug "Driver library seems invalid (does not contain 'cuDriverGetVersion')"
return nothing
end
version_ref = Ref{Cint}()
status = ccall(function_handle, Cint, (Ptr{Cint},), version_ref)
if status != 0
@debug "Call to 'cuDriverGetVersion' failed with status $status"
return nothing
end
major, ver = divrem(version_ref[], 1000)
minor, patch = divrem(ver, 10)
return VersionNumber(major, minor, patch)
end
function init_driver(library_handle)
function_handle = Libdl.dlsym(library_handle, "cuInit")
status = ccall(function_handle, Cint, (UInt32,), 0)
# libcuda.cuInit dlopens NULL, aka. the main program, which increments the refcount
# of libcuda. this breaks future dlclose calls, so eagerly lower the refcount already.
Libdl.dlclose(library_handle)
return status
end
libcuda_deps = [libcuda_debugger, libnvidia_nvvm, libnvidia_ptxjitcompiler]
libcuda_system = Sys.iswindows() ? "nvcuda" : "libcuda.so.1"
can_use_compat = true

# find the system driver
system_driver = if Sys.iswindows()
Libdl.find_library("nvcuda")
# check if we even have an artifact
if @isdefined(libcuda_compat)
@debug "Forward-compatible driver found at $libcuda_compat"
else
Libdl.find_library(["libcuda.so.1", "libcuda.so"])
end
if system_driver == ""
@debug "No system CUDA driver found"
return
end
libcuda = system_driver

# check if the system driver is already loaded. in that case, we have to use it because
# the code that loaded it in the first place might have made assumptions based on it.
system_driver_loaded = Libdl.dlopen(system_driver, Libdl.RTLD_NOLOAD;
throw_error=false) !== nothing
driver_handle = Libdl.dlopen(system_driver; throw_error=false)
if driver_handle === nothing
@debug "Failed to load system CUDA driver"
return
end

# query the system driver version
# XXX: apparently cuDriverGetVersion can be used before cuInit,
# despite the docs stating "any function [...] will return
# CUDA_ERROR_NOT_INITIALIZED"; is this a recent change?
system_version = driver_version(driver_handle)
if system_version === nothing
@debug "Failed to query system CUDA driver version"
# note that libcuda is already set here, so we'll continue using the system driver
# and CUDA.jl will likely report the reason cuDriverGetVersion didn't work.
return
end
@debug "System CUDA driver found at $system_driver, detected as version $system_version"
libcuda = system_driver
libcuda_version = system_version

# check if the system driver is already loaded (see above)
if system_driver_loaded
@debug "System CUDA driver already loaded, continuing using it"
return
@debug "No forward-compatible driver available for your platform."
can_use_compat = false
end

# check the user preference
if compat_preference !== missing
@debug "CUDA compat preference: $(compat_preference)"
if !compat_preference
@debug "User disallows using forward-compatible driver."
return
can_use_compat = false
end
end

# check the version
if system_version >= compat_version
@debug "System CUDA driver is recent enough; not using forward-compatible driver"
return
# check if the system driver is already loaded. in that case, we have to use it because
# the code that loaded it in the first place might have made assumptions based on it.
if Libdl.dlopen(libcuda_system, Libdl.RTLD_NOLOAD; throw_error=false) !== nothing
@debug "System CUDA driver already loaded, continuing using it."
can_use_compat = false
end

# check if we can unload the system driver.
# if we didn't, we can't consider a forward compatible library because that would
# risk having multiple copies of libcuda.so loaded (also see NVIDIA bug #3418723)
Libdl.dlclose(driver_handle)
system_driver_loaded = Libdl.dlopen(system_driver, Libdl.RTLD_NOLOAD;
throw_error=false) !== nothing
if system_driver_loaded
@debug "Could not unload the system CUDA library;" *
" this prevents use of the forward-compatible driver"
return
end
# check if we can load the forward-compatible driver in a separate process
function try_driver(driver, deps)
script = raw"""
using Libdl
driver, deps... = ARGS
# check if this process is hooked by CUDA's injection libraries, which prevents
# unloading libcuda after dlopening. this is problematic, because we might want to
# after loading a forwards-compatible libcuda and realizing we can't use it. without
# being able to unload the library, we'd run into issues (see NVIDIA bug #3418723)
hooked = haskey(ENV, "CUDA_INJECTION64_PATH")
if hooked
@debug "Running under CUDA injection tools;" *
" this prevents use of the forward-compatible driver"
return
end
for dep in deps
Libdl.dlopen(dep; throw_error=false) === nothing && exit(-1)
end
# check if we even have an artifact
if !@isdefined(libcuda_compat)
@debug "No forward-compatible CUDA library available for your platform."
return
library_handle = Libdl.dlopen(driver; throw_error=false)
library_handle === nothing && exit(-1)
function_handle = Libdl.dlsym(library_handle, "cuInit")
status = ccall(function_handle, Cint, (UInt32,), 0)
status == 0 || exit(-2)
exit(0)
"""
# make sure we don't include any system image flags here since this will cause an infinite loop of __init__()
success(`$(Cmd(filter(!startswith(r"-J|--sysimage"), Base.julia_cmd().exec))) --compile=min -t1 --startup-file=no -e $script $driver $deps`)
end
compat_driver = libcuda_compat
@debug "Forward-compatible CUDA driver found at $compat_driver;" *
" known to be version $(compat_version)"

# finally, load the compatibility driver to see if it supports this platform
driver_handle = Libdl.dlopen(compat_driver; throw_error=true)

init_status = init_driver(driver_handle)
if init_status != 0
@debug "Could not use forward compatibility package (error $init_status)"

# see comment above about unloading the system driver
Libdl.dlclose(driver_handle)
compat_driver_loaded = Libdl.dlopen(compat_driver, Libdl.RTLD_NOLOAD;
throw_error=false) !== nothing
if compat_driver_loaded
error("Could not unload forwards compatible CUDA driver." *
"This is probably caused by running Julia under a tool that hooks CUDA API calls." *
"In that case, prevent Julia from loading multiple drivers" *
" by setting JULIA_CUDA_USE_COMPAT=false in your environment.")
end

return
if can_use_compat && !try_driver(libcuda_compat, libcuda_deps)
@debug "Failed to load forwards-compatible driver."
can_use_compat = false
end

# load dependent libraries
# XXX: we can do this after loading libcuda, because these are runtime dependencies.
# if loading libcuda or calling cuInit would already require these, do so earlier.
Libdl.dlopen(libcuda_debugger; throw_error=true)
Libdl.dlopen(libnvidia_nvvm; throw_error=true)
Libdl.dlopen(libnvidia_ptxjitcompiler; throw_error=true)

@debug "Successfully loaded forwards-compatible CUDA driver"
libcuda = compat_driver
libcuda_version = compat_version
libcuda_original_version = system_version
# finally, load the appropriate driver
if can_use_compat
@debug "Using forwards-compatible CUDA driver."
global libcuda = libcuda_compat

# load the driver and its dependencies; this should now always succeed
# as we've already verified that we can load it in a separate process.
for dep in libcuda_deps
Libdl.dlopen(dep; throw_error=true)
end
Libdl.dlopen(libcuda_compat; throw_error=true)
elseif Libdl.dlopen(libcuda_system; throw_error=false) !== nothing
@debug "Using system CUDA driver."
global libcuda = libcuda_system
else
@debug "Could not load system CUDA driver."
global libcuda = nothing
end

end # __init__()
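
The rewritten `__init__` no longer probes the forward-compatible driver in-process (dlopen, `cuInit`, then dlclose and hope the reference count actually drops); instead, `try_driver` runs the probe in a throwaway child process, so the parent never ends up with a half-initialized libcuda it cannot unload. A stripped-down sketch of that pattern, with a placeholder library path:

```julia
# Sketch only: test whether a library can be dlopen'ed in a child process,
# leaving the parent process untouched. The library path is a placeholder.
using Libdl

function loads_in_subprocess(lib::AbstractString)
    script = raw"""
        using Libdl
        handle = Libdl.dlopen(ARGS[1]; throw_error=false)
        handle === nothing && exit(1)
        exit(0)
    """
    # Strip sysimage flags so the child does not re-run __init__ from a custom
    # system image (the same filter the wrapper above applies).
    cmd = Cmd(filter(!startswith(r"-J|--sysimage"), Base.julia_cmd().exec))
    return success(`$cmd --startup-file=no --compile=min -e $script $lib`)
end

@show loads_in_subprocess("libcuda.so.1")
```
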