--gpus is silently ignored. #19330

Open
raldone01 opened this issue Jul 24, 2023 · 14 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. stale-issue

Comments

@raldone01

raldone01 commented Jul 24, 2023

Issue Description

Note: the docker command here connects to Podman's Docker-emulation daemon.
See also: NVIDIA/nvidia-container-toolkit#126

Steps to reproduce the issue

Works:

  • sudo docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L
  • sudo podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L

Does not work:

  • sudo docker run --rm --gpus all ubuntu nvidia-smi -L
  • sudo podman run --rm --gpus all ubuntu nvidia-smi -L

Describe the results you received

The --gpus option is silently ignored.

Describe the results you expected

The --gpus option should either work or emit a warning that it has been ignored.

podman info output

host:
  arch: amd64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.7-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: f633919178f6c8ee4fb41b848a056ec33f8d707d'
  cpuUtilization:
    idlePercent: 98.61
    systemPercent: 0.37
    userPercent: 1.02
  cpus: 72
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: argon
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.4.4-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 65719042048
  memTotal: 135022321664
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.8.5-1
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 42947567616
  swapTotal: 42947567616
  uptime: 17h 49m 6.00s (Approximately 0.71 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/main/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 0
    stopped: 3
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/main/.local/share/containers/storage
  graphRootAllocated: 857601998848
  graphRootUsed: 271071322112
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/main/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.1
  Built: 1685139594
  BuiltTime: Sat May 27 00:19:54 2023
  GitCommit: 9eef30051c83f62816a1772a743e5f1271b196d7-dirty
  GoVersion: go1.20.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.5.1

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

@elezar
Contributor

elezar commented Jul 24, 2023

As a maintainer of the NVIDIA Container Toolkit, which provides the functionality that Docker leverages to support the --gpus flag in its CLI, I would prefer that the existing CDI device support in the --device flag be used instead of --gpus.

I would therefore recommend one of the following options:

  • Using the --gpus flag in Podman issues a clear error or warning.
  • The --gpus flag is mapped to an equivalent --device flag; for example, --gpus all is mapped to --device=nvidia.com/gpu=all (sketched below).
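
For illustration, under the second option an invocation using --gpus would be rewritten to the CDI form that already works today; a minimal sketch of the intended equivalence, not current Podman behavior:

  sudo podman run --rm --gpus all ubuntu nvidia-smi -L
  # ...would behave like:
  sudo podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L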

@rhatdan
Member

rhatdan commented Jul 24, 2023

Interested in opening a PR?

@raldone01

This comment was marked as off-topic.

@rhatdan
Member

rhatdan commented Jul 24, 2023

I am not sure what Podman is supposed to do with this information.

@rhatdan
Member

rhatdan commented Jul 24, 2023

@elezar I would love to meet with you and discuss how we could better integrate NVIDIA into Podman. We have lots of HPC customers and partners who are using NVIDIA devices with Podman (I believe without requiring the hook).

@elezar
Contributor

elezar commented Jul 25, 2023

I may have some cycles to look into this starting next week.

@rhatdan we can try to set something up if you like. As a summary, we're pushing CDI as the mechanism for interacting with NVIDIA GPUs going forward. This allows us to focus on generating CDI specifications for supported platforms with the generated specs consumable by all CDI-enabled clients.
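
For reference, the CDI flow described here looks roughly as follows; the generate command mirrors the NVIDIA Container Toolkit documentation, and the output path and device names depend on the local setup:

  # Generate a CDI specification describing the NVIDIA devices on this host
  sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
  # Any CDI-enabled client, Podman included, can then request devices by name
  sudo podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L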

@rhatdan
Member

rhatdan commented Jul 27, 2023

We have been working with the HPC community on some of the features that they would like to see to make running containers with a GPU easier. Have a look at #19309

Does this help your situation out?

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@jboero

jboero commented Feb 26, 2024

I see this is still a bug today, but since this uses NVIDIA's container toolkit package and applies to NVIDIA GPUs anyway, I managed to get what I need with the CUDA_VISIBLE_DEVICES env var. If anybody needs a workaround for NVIDIA:

podman run -e CUDA_VISIBLE_DEVICES=1 ghcr.io/ggerganov/llama.cpp:server-cuda etc...

https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars

@rhatdan apologies if this conflicts with another fix, but it seems to work for me and I'm not sure if it's an acceptable workaround.

@elezar
Contributor

elezar commented Feb 26, 2024

@jboero your workaround already assumes that the NVIDIA devices are made available in the container, since setting CUDA_VISIBLE_DEVICES only affects the selection of devices that are already present. This is most likely because the nvidia-container-runtime-hook was installed and configured at some point.
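
In other words, CUDA_VISIBLE_DEVICES only narrows what CUDA applications see among devices that are already in the container. A sketch of the combination, assuming a CDI spec has been generated on the host:

  # Inject all GPUs via CDI, then restrict CUDA applications to the second one
  podman run --rm --device nvidia.com/gpu=all -e CUDA_VISIBLE_DEVICES=1 \
    ghcr.io/ggerganov/llama.cpp:server-cuda   # server arguments omitted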

Note that as mentioned above, we currently recommend using CDI in Podman since this is supported natively.

Please see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html and https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#configuring-podman for links to Podman-specific instructions.

@jboero

jboero commented Feb 26, 2024

@elezar Thanks for the tip! You're right, this box has been upgraded in place all the way back from F36. I never set up the hooks manually myself; they were added by NVIDIA's older official nvidia-container-toolkit package (still recommended and supported by NVIDIA on Fedora). It looks like they've updated their empty F39 repo with a few more packages, but still no nvidia-container-toolkit, which is unfortunate. The only way I can get any of this close to working on F39 is to hardcode the NVIDIA repo for F37. I would love to fix NVIDIA's repos, but I think they're still catching up to GCC 13. Is there an official guide for this on Fedora 39? For all practical purposes CUDA_VISIBLE_DEVICES worked fine for me because the old hook automatically included all GPUs. Personally, I would love to package (or see packaged) a Fedora RPM that includes a standard post-script for /etc/cdi/nvidia.yaml. I've been knocking on NVIDIA's door every few years trying to fix packaging from the inside. They're missing Fedora 38 entirely and have only partially completed F39, with F40 just around the corner in April.
https://developer.download.nvidia.com/compute/cuda/repos/

@elezar
Contributor

elezar commented Feb 27, 2024

@jboero for the NVIDIA Container Toolkit, it is not required to use the CUDA Download repositories. We've recently revamped our packaging to produce a set of deb and rpm packages that are compatible with any platform where the driver can be installed (or should be). This includes all modern Fedora distributions.

You can follow the updated instructions here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-yum-or-dnf to install the latest version of the NVIDIA Container Toolkit (v1.14.5). Note that this does not install the OCI runtime hook.
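
At the time of writing, the dnf route in that guide boils down to roughly the following; the repo URL below is the one used by the 1.14.x install guide, so verify it against the current instructions:

  curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
    sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
  sudo dnf install -y nvidia-container-toolkit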

If you do come across any inconsistencies, please feel free to open an issue against https://github.com/NVIDIA/nvidia-container-toolkit or https://github.com/NVIDIA/cloud-native-docs.

@jboero

jboero commented Feb 27, 2024

Oh thanks, when did that repo emerge? Is that the favoured repo going forward, or will the standard CUDA repos be updated as well? In my case I also need the cuBLAS packages from the main CUDA repos. Is there any conflict in enabling both at the same time?

@elezar
Contributor

elezar commented Feb 28, 2024

The switch to this repo coincided with the v1.14.0 release of the NVIDIA Container Toolkit. We will continue to publish packages there as well as to the CUDA Download repos, although as you point out there may be some delay in getting repos for specific distributions. There should be no problem with having both repos enabled, although the priority of the CUDA repos may mean that the latest versions of the packages are only available if explicitly requested.
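
For example, one can see which versions each enabled repo offers and request a newer one explicitly; this is a generic dnf pattern, and the version shown is only illustrative:

  dnf --showduplicates list available nvidia-container-toolkit
  sudo dnf install nvidia-container-toolkit-1.14.5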
