nvidia-driver-installer failing to install cuda libraries on some pods #139
Comments
From my experience, this is due to the GPU not being "ready to use". It usually happens when

```yaml
resources:
  limits:
    nvidia.com/gpu: 1
```

is not specified. The same problem occurs when trying to access the GPU on […]. From what I remember, the driver daemonset sets […]. Setting

```yaml
nodeSelector:
  cloud.google.com/gke-accelerator: XXX
resources:
  limits:
    nvidia.com/gpu: 1
```

should help.
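For reference, a minimal sketch of a pod spec combining both parts of the suggestion above. The pod name, container image, and accelerator type are placeholder values, not taken from this thread:

```yaml
# Minimal sketch: pod name, image, and accelerator type are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  nodeSelector:
    # Only schedule onto nodes that actually have the accelerator attached.
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
  containers:
    - name: cuda
      image: nvidia/cuda:11.0.3-base-ubuntu20.04
      command: ["nvidia-smi"]
      resources:
        limits:
          # Requesting the GPU resource is what gets the driver libraries
          # mounted into the pod (e.g. under /usr/local/nvidia on GKE).
          nvidia.com/gpu: 1
```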
Thanks for the suggestion. The specification I use is correct, and most pods are set up correctly and do run successfully; it's only an occasional problem. The issue turns out to be with preemptible nodes, which can restart so quickly that the system does not correctly set up the GPU (via the daemonset, I think). Here is the note from Google support: "Since this period was very short, it means that api-server and k8s-scheduler were not aware that the node was preempted in the first place (this is a known issue in GKE with preemptible VMs). Since after the preemption, the node started with the same name, the workloads that were scheduled on the node were simply restarted by the kubelet." The workaround provided by Google is to add a node termination handler, which shuts down the pods gracefully on node termination. It seems to be working for my case.
This is still an intermittent issue for me. I upgraded to GKE 1.21 in the hope that graceful node shutdown, which is enabled in GKE 1.21, would mean the end of the Node Termination Handler, as indicated in its README. But I still get crashes because of the missing libcuda.so.1.
I have many pods running on the same cluster/node pool, which has the nvidia-driver-installer daemonset installed. A small fraction of them (a few percent) have workloads that fail due to a missing libcuda.so.1. When I check manually, I find that the /usr/local/nvidia directory is not present. See below, showing two pods: one incorrectly installed, the other correctly installed.
Worst case, if this is not easily resolvable, is there some way to automatically detect and remove the pods/nodes that get set up incorrectly?
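One possible way to make the failure detectable (a sketch only, not a fix confirmed in this thread): wrap the container entrypoint in a small check that waits for libcuda.so.1 to appear under /usr/local/nvidia and exits non-zero if it never does, so the pod fails fast and can be rescheduled or alerted on instead of crashing later inside the workload. The pod name, image, timeout, nvidia-smi entrypoint, and the lib64 subpath are assumptions.

```yaml
# Sketch under assumptions: pod/image names, the 5-minute timeout, the
# nvidia-smi entrypoint, and the lib64 subpath are placeholders, not taken
# from this issue.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4   # placeholder accelerator type
  containers:
    - name: main
      image: nvidia/cuda:11.0.3-base-ubuntu20.04        # placeholder image
      command:
        - sh
        - -c
        - |
          # Wait up to ~5 minutes for the driver libraries that the installer
          # normally mounts at /usr/local/nvidia. Exit non-zero if they never
          # show up, so the pod fails fast instead of crashing later with a
          # missing libcuda.so.1. Adjust the path to where the library appears
          # in your pods.
          for i in $(seq 1 60); do
            if [ -e /usr/local/nvidia/lib64/libcuda.so.1 ]; then
              echo "NVIDIA libraries present, starting workload"
              exec nvidia-smi   # replace with the real entrypoint
            fi
            echo "waiting for /usr/local/nvidia/lib64/libcuda.so.1 ..."
            sleep 5
          done
          echo "libcuda.so.1 still missing after 5 minutes" >&2
          exit 1
      resources:
        limits:
          nvidia.com/gpu: 1
```

A pod that keeps failing this check is then easy to spot (CrashLoopBackOff / Error state) and can be deleted or have its node cordoned, rather than failing deep inside the workload.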