GPU inference in Docker container fails due to missing libdevice directory #2201
Labels
stale
stat:awaiting response
type:bug
Bug Report
System information
Describe the problem
With the latest GPU Docker image, tensorflow/serving:2.14.1-gpu, I cannot run inference of my model on the GPU. The following error is shown in the logs:
It appears that the CUDA libraries are not installed completely: the `libdevice` directory doesn't exist in the Docker image. I expected CUDA to be installed fully enough to support serving models with GPU. I encounter no problems with tensorflow/serving:2.11.0-gpu.
I considered the following solutions before raising this issue:
Workaround
Install the `cuda-toolkit` package in the Docker image. This increases the size of the Docker image by ~4 GB (uncompressed).
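A minimal sketch of that workaround as a derived image. The exact package name (`cuda-toolkit-12-2` below) is an assumption and must match the CUDA version shipped in the base image:

```dockerfile
# Derived image that adds the full CUDA toolkit (including nvvm/libdevice)
# on top of the serving image. Package name is an assumption; adjust it
# to the CUDA version of the base image.
FROM tensorflow/serving:2.14.1-gpu
RUN apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-12-2 && \
    rm -rf /var/lib/apt/lists/*
```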
Alternatively, it also works with the `tensorflow/serving:2.14.1-devel-gpu` Docker image, but this is even larger.

Exact Steps to Reproduce
ptxas is not available:

```
$ ptxas --version
bash: ptxas: command not found
```
Searching for a directory `nvvm` or `libdevice` returns nothing:

```
$ find / -type d -name nvvm 2>/dev/null
```
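The manual `find` check above can be wrapped in a small shell helper that reports whether the libdevice bitcode (which XLA needs to compile GPU kernels) is present under a given root. This is a sketch; run it inside the container with root `/`:

```shell
# check_libdevice: search a filesystem root for the libdevice bitcode
# file (libdevice.10.bc in recent CUDA toolkits) and report the result.
check_libdevice() {
  root="${1:-/}"
  # The bitcode normally lives under <cuda>/nvvm/libdevice/.
  found=$(find "$root" -name 'libdevice*.bc' -not -path '*/proc/*' 2>/dev/null | head -n 1)
  if [ -n "$found" ]; then
    echo "libdevice found: $found"
  else
    echo "libdevice missing under $root"
  fi
}
```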
When using 2.11.0, it does work:
ptxas is available:
Searching for a directory `nvvm` returns the directory in the CUDA installation directory:

```
$ find / -type d -name nvvm 2>/dev/null
/usr/local/cuda-11.2/nvvm
```
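If libdevice ends up installed somewhere XLA does not search by default, XLA can be pointed at the CUDA data directory explicitly via the `XLA_FLAGS` environment variable. The path below is an assumption; use wherever the toolkit actually landed in your image:

```shell
# Tell XLA where to find the CUDA data directory (the one containing
# nvvm/libdevice). Set this in the container's environment before
# starting the model server. The path is an assumption for this sketch.
export XLA_FLAGS="--xla_gpu_cuda_data_dir=/usr/local/cuda"
```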