Releases: NVIDIA/nvidia-container-toolkit
v1.16.0-rc.1
What's Changed
- Support vulkan ICD files directly in a driver root. This allows for the discovery of vulkan files in GKE driver installations.
- Increase priority of ld.so.conf.d config file injected into container. This ensures that injected libraries are preferred over libraries present in the container.
- Set default CDI spec permissions to 644. This fixes permission issues when using the `nvidia-ctk cdi transform` functions.
- Add `dev-root` option to the `nvidia-ctk system create-device-nodes` command.
- Fix the location of `libnvidia-ml.so.1` when a non-standard driver root is used. This enables CDI spec generation when using the driver container on a host.
- Recalculate the minimum required CDI spec version on save.
- Move the `nvidia-ctk hook` commands to a separate `nvidia-cdi-hook` binary. The same subcommands are supported.
- Use `:` as the `nvidia-ctk config --set` list separator. This fixes a bug when trying to set config options that are lists; see the example below.
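For illustration, a minimal sketch of setting a list-valued config option with the `:` separator. The option name and values here (`nvidia-container-runtime.runtimes`, `crun`, `runc`) are examples, and `--in-place` is assumed to write the result back to the default config file:

```sh
# Set a list-valued option using ':' as the list separator
# (option name and values are illustrative)
sudo nvidia-ctk config --in-place --set nvidia-container-runtime.runtimes=crun:runc
```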
Changes in the Toolkit Container
- Bump CUDA base image version to 12.5.0
- Allow the path to `toolkit.pid` to be specified directly.
- Remove provenance information from image manifests.
- Add `dev-root` option when configuring the toolkit. This adds support for GKE driver installations.
Full Changelog: v1.15.0...v1.16.0-rc.1
v1.15.0
This is a promotion of the `v1.15.0-rc.4` release to GA.
NOTE: This release does NOT include the `nvidia-container-runtime` and `nvidia-docker2` packages. It is recommended that the `nvidia-container-toolkit` packages be installed directly.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the NVIDIA Container Toolkit and `libnvidia-container` packages. The packages for this release are published to the `libnvidia-container` package repositories.
Full Changelog: v1.14.0...v1.15.0
What's Changed
- Remove the `nvidia-container-runtime` and `nvidia-docker2` packages.
- Use the `XDG_DATA_DIRS` environment variable when locating config files such as graphics config files.
- Add support for the v0.7.0 Container Device Interface (CDI) specification.
- Add `--config-search-path` option to the `nvidia-ctk cdi generate` command. These paths are used when locating driver files such as graphics config files.
- Add support for the v1.2.0 OCI Runtime specification.
- Explicitly set `NVIDIA_VISIBLE_DEVICES=void` in generated CDI specifications. This prevents the NVIDIA Container Runtime from making additional modifications; see the example below.
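As a sketch, generating a CDI specification and confirming that it pins `NVIDIA_VISIBLE_DEVICES=void` (the output path is the conventional one, not mandated):

```sh
# Generate a CDI spec and check for the explicit void setting
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
grep NVIDIA_VISIBLE_DEVICES /etc/cdi/nvidia.yaml
# expected output (approximately): - NVIDIA_VISIBLE_DEVICES=void
```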
Changes in the toolkit-container
- Bump CUDA base image version to 12.4.1
v1.15.0-rc.4
- Fix build and tests targets on darwin by @elezar in #333
- Add spec-dir flag to nvidia-ctk cdi list command by @elezar in #342
- Specify DRIVER_ROOT consistently by @elezar in #346
- Support nvidia and nvidia-frontend names when getting device major by @tariq1890 in #330
- Allow multiple naming strategies when generating CDI specification by @elezar in #314 (see the example below)
- Add --create-device-nodes option to toolkit config by @elezar in #345
- Remove additional libnvidia-container0 dependency by @elezar in #370
- Add imex support by @klueska in #375
- [R550 driver support] add fallback logic to device.Exists(name) by @tariq1890 in #379
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in #397
- Add NVIDIA_VISIBLE_DEVICES=void to CDI specs by @elezar in #395
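For illustration, a minimal sketch of combining naming strategies, assuming the existing `--device-name-strategy` flag of `nvidia-ctk cdi generate` can now be repeated (the flag repetition and the output path are assumptions):

```sh
# Generate a CDI spec that names each device both by index and by UUID
sudo nvidia-ctk cdi generate \
    --output=/etc/cdi/nvidia.yaml \
    --device-name-strategy=index \
    --device-name-strategy=uuid
```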
Changes in libnvidia-container
- Add imex support by @klueska in NVIDIA/libnvidia-container#242
- Add libnvidia-container-libseccomp2 package by @elezar in NVIDIA/libnvidia-container#238
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in NVIDIA/libnvidia-container#247
v1.15.0-rc.3
- Fix bug in `nvidia-ctk hook update-ldcache` where the default `--ldconfig-path` value was not applied.
v1.15.0-rc.2
- Extend the `runtime.nvidia.com/gpu` CDI kind to support full GPUs and MIG devices specified by index or UUID.
- Fix bug when specifying `--dev-root` for Tegra-based systems.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp.
- Added detection of libnvdxgdmal.so.1 on WSL2
- Use devRoot to resolve MIG device nodes.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems.
- Add `crun` to the list of configured low-level runtimes.
- Added support for `--ldconfig-path` to the `nvidia-ctk cdi generate` command; see the example below.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker.
- Add discovery of the GDRCopy device (`gdrdrv`) if the `NVIDIA_GDRCOPY` environment variable of the container is set to `enabled`.
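A minimal sketch of the new `--ldconfig-path` flag, assuming a host where ldconfig lives at `/sbin/ldconfig.real` (the paths are illustrative):

```sh
# Point CDI spec generation at an explicit ldconfig binary
sudo nvidia-ctk cdi generate \
    --output=/etc/cdi/nvidia.yaml \
    --ldconfig-path=/sbin/ldconfig.real
```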
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2
Changes in the toolkit-container
- Bump CUDA base image version to 12.3.1.
v1.15.0-rc.1
- Skip update of ldcache in containers without ldconfig. The .so.SONAME symlinks are still created.
- Normalize ldconfig path on use. This automatically adjusts the ldconfig setting applied to ldconfig.real on systems where this exists.
- Include `nvidia/nvoptix.bin` in the list of graphics mounts.
- Include `vulkan/icd.d/nvidia_layers.json` in the list of graphics mounts.
- Add support for `--library-search-paths` to the `nvidia-ctk cdi generate` command.
- Add support for injecting /dev/nvidia-nvswitch* devices if the `NVIDIA_NVSWITCH=enabled` envvar is specified; see the example below.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25.
- Fixed bug in the `nvidia-ctk config` command when using `--set`. The types of applied config options are now applied correctly.
- Add `--relative-to` option to the `nvidia-ctk transform root` command. This controls whether the root transformation is applied to host or container paths.
- Added automatic CDI spec generation when the `runtime.nvidia.com/gpu=all` device is requested by a container.
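For example, a hedged sketch of requesting NVSwitch device injection from Docker; the image and command are placeholders, and the exact device-node names depend on the system:

```sh
# Request injection of /dev/nvidia-nvswitch* devices via the container environment
docker run --rm --runtime=nvidia --gpus all \
    -e NVIDIA_NVSWITCH=enabled \
    ubuntu ls /dev/nvidia-nvswitch*
```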
Changes in libnvidia-container
- Fix device permission check when using cgroupv2 (fixes NVIDIA/libnvidia-container#227)
v1.15.0-rc.4
What's Changed
- Fix build and tests targets on darwin by @elezar in #333
- Add spec-dir flag to nvidia-ctk cdi list command by @elezar in #342 (see the example below)
- Specify DRIVER_ROOT consistently by @elezar in #346
- Support nvidia and nvidia-frontend names when getting device major by @tariq1890 in #330
- Allow multiple naming strategies when generating CDI specification by @elezar in #314
- Add --create-device-nodes option to toolkit config by @elezar in #345
- Remove additional libnvidia-container0 dependency by @elezar in #370
- Add imex support by @klueska in #375
- [R550 driver support] add fallback logic to device.Exists(name) by @tariq1890 in #379
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in #397
- Add NVIDIA_VISIBLE_DEVICES=void to CDI specs by @elezar in #395
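A brief sketch of the new spec-dir flag, assuming specs were written to the static `/etc/cdi` directory (the path is illustrative):

```sh
# List CDI devices discovered in a specific spec directory
nvidia-ctk cdi list --spec-dir=/etc/cdi
```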
Changes in libnvidia-container
- Add imex support by @klueska in NVIDIA/libnvidia-container#242
- Add libnvidia-container-libseccomp2 package by @elezar in NVIDIA/libnvidia-container#238
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in NVIDIA/libnvidia-container#247
Full Changelog: v1.15.0-rc.3...v1.15.0-rc.4
v1.14.6
What's Changed
- Add support for extracting the device major number from `/proc/devices` if `nvidia` is used as a device name over `nvidia-frontend`. This is required to support the creation of `/dev/char` symlinks on NVIDIA CUDA drivers with version `550.x`.
- Add support for selecting IMEX channels using the `NVIDIA_IMEX_CHANNELS` environment variable; see the example below.
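For illustration, a sketch of selecting IMEX channels via the environment variable; the channel list format, channel IDs, image, and device path are assumptions:

```sh
# Request IMEX channels 0 and 1 for the container
# (comma-separated channel list and device path are illustrative)
docker run --rm --runtime=nvidia --gpus all \
    -e NVIDIA_IMEX_CHANNELS=0,1 \
    ubuntu ls /dev/nvidia-caps-imex-channels
```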
Changes in libnvidia-container
- Added creation and injection of IMEX channels.
Dependency updates
- Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 by @dependabot in #355
- Bump golang.org/x/sys from 0.7.0 to 0.17.0 by @dependabot in #357
- Bump github.com/pelletier/go-toml from 1.9.4 to 1.9.5 by @dependabot in #359
- Bump github.com/fsnotify/fsnotify from 1.5.4 to 1.7.0 by @dependabot in #358
- Bump github.com/urfave/cli/v2 from 2.3.0 to 2.27.1 by @dependabot in #356
- Bump golang.org/x/mod from 0.5.0 to 0.15.0 by @dependabot in #367
- Bump github.com/stretchr/testify from 1.8.1 to 1.8.4 by @dependabot in #366
- Bump github.com/NVIDIA/go-nvml from 0.12.0-1 to 0.12.0-2 by @dependabot in #365
- Bump github.com/opencontainers/runtime-spec from 1.1.0 to 1.2.0 by @dependabot in #368
Full Changelog: v1.14.5...v1.14.6
v1.14.5
What's Changed
- Update dependencies to address CVE in runc.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker. This was incorrectly setting `experimental = true` instead of setting `features.cdi = true`; see the example below.
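As a sketch, the corrected behavior can be exercised as follows; the restart step may vary by distribution:

```sh
# Enable CDI for Docker; with this fix the command sets features.cdi = true
# in /etc/docker/daemon.json rather than experimental = true
sudo nvidia-ctk runtime configure --runtime=docker --cdi.enabled
sudo systemctl restart docker
```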
Full Changelog: v1.14.4...v1.14.5
v1.15.0-rc.3
What's Changed
- Fix bug in `nvidia-ctk hook update-ldcache` where the default `--ldconfig-path` value was not applied.
Full Changelog: v1.15.0-rc.2...v1.15.0-rc.3
v1.15.0-rc.2
What's Changed
- Extend the `runtime.nvidia.com/gpu` CDI kind to support full GPUs and MIG devices specified by index or UUID.
- Fix bug when specifying `--dev-root` for Tegra-based systems.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp.
- Added detection of libnvdxgdmal.so.1 on WSL2
- Use devRoot to resolve MIG device nodes.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems.
- Add `crun` to the list of configured low-level runtimes.
- Added support for `--ldconfig-path` to the `nvidia-ctk cdi generate` command.
- Fix `nvidia-ctk runtime configure --cdi.enabled` for Docker.
- Add discovery of the GDRCopy device (`gdrdrv`) if the `NVIDIA_GDRCOPY` environment variable of the container is set to `enabled`; see the example below.
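For illustration, a sketch of opting in to GDRCopy device discovery; the image and command are placeholders:

```sh
# The gdrdrv device is injected when NVIDIA_GDRCOPY=enabled
docker run --rm --runtime=nvidia --gpus all \
    -e NVIDIA_GDRCOPY=enabled \
    ubuntu ls /dev/gdrdrv
```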
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2
Changes in the toolkit-container
- Bump CUDA base image version to 12.3.1.
Full Changelog: v1.15.0-rc.1...v1.15.0-rc.2
v1.14.4
What's Changed
- Include `nvidia/nvoptix.bin` in the list of graphics mounts. (#127)
- Include `vulkan/icd.d/nvidia_layers.json` in the list of graphics mounts. (#127)
- Fixed bug in the `nvidia-ctk config` command when using `--set`. The types of applied config options are now applied correctly.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp. (#110)
- Added detection of libnvdxgdmal.so.1 on WSL2.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems. (#110)
- Add `crun` to the list of configured low-level runtimes.
- Add `--cdi.enabled` option to the `nvidia-ctk runtime configure` command to enable CDI in containerd; see the example below.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25.
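A minimal sketch of enabling CDI in containerd via the new flag; the restart step may vary by distribution:

```sh
# Update containerd's config to enable CDI support
sudo nvidia-ctk runtime configure --runtime=containerd --cdi.enabled
sudo systemctl restart containerd
```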
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.3.1.
Full Changelog: v1.14.3...v1.14.4
v1.15.0-rc.1
What's Changed
- Skip update of ldcache in containers without ldconfig. The .so.SONAME symlinks are still created.
- Normalize ldconfig path on use. This automatically adjusts the ldconfig setting applied to ldconfig.real on systems where this exists.
- Include `nvidia/nvoptix.bin` in the list of graphics mounts.
- Include `vulkan/icd.d/nvidia_layers.json` in the list of graphics mounts.
- Add support for `--library-search-paths` to the `nvidia-ctk cdi generate` command; see the example below.
- Add support for injecting /dev/nvidia-nvswitch* devices if the `NVIDIA_NVSWITCH=enabled` envvar is specified.
- Added support for `nvidia-ctk runtime configure --enable-cdi` for the `docker` runtime. Note that this requires Docker >= 25.
- Fixed bug in the `nvidia-ctk config` command when using `--set`. The types of applied config options are now applied correctly.
- Add `--relative-to` option to the `nvidia-ctk transform root` command. This controls whether the root transformation is applied to host or container paths.
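For illustration, a sketch of the new library-search-paths flag; the directory is an example, and whether multiple directories are passed by repeating the flag or as a delimited list is an assumption:

```sh
# Add an extra directory to search when locating driver libraries
sudo nvidia-ctk cdi generate \
    --output=/etc/cdi/nvidia.yaml \
    --library-search-paths=/usr/lib64
```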
Changes in libnvidia-container
- Fix device permission check when using cgroupv2 (fixes NVIDIA/libnvidia-container#227)
Full Changelog: v1.14.3...v1.15.0-rc.1
v1.14.3
What's Changed
Changes in libnvidia-container
- Bumped version to `v1.14.3` for the NVIDIA Container Toolkit release.
Changes in the toolkit-container
- Bumped CUDA base image version to 12.2.2.
Full Changelog: v1.14.2...v1.14.3