
Paddle cannot detect CUDA #1726

Closed
byteszard opened this issue Mar 18, 2022 · 3 comments

Comments


byteszard commented Mar 18, 2022

This is my first time deploying PaddleOCR on GPU resources, and I ran into some problems that I could not solve after several attempts, so I'm asking for help.

  • The host runs Ubuntu 20.04 with only the NVIDIA driver installed; CUDA and cuDNN are not installed on the host.
  • I am using the official image: registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime

The following problems occur:

  1. Problem 1 (when the docker container starts):
    (screenshot)

  2. Problem 2 (after the container starts, when an image is sent for prediction):
    (screenshot)

Running nvidia-smi inside the container:
(screenshot)
Packages installed in the image:
(screenshot)

Part of the Dockerfile is shown below:

# Base stage: the official Paddle Serving GPU runtime image.
FROM registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime as serving

ENV TZ=Asia/Shanghai

# Symlink the versioned CUDA/cuDNN libraries to unversioned names under /usr/lib.
RUN ln -s /usr/local/cuda/lib64/libcublas.so.11 /usr/lib/libcublas.so && \
    ln -s /usr/local/cuda/lib64/libcusolver.so.11 /usr/lib/libcusolver.so && \
    ln -s /usr/lib/x86_64-linux-gnu/libcudnn.so.8 /usr/lib/libcudnn.so

FROM serving as prepare

WORKDIR ppocr

# Download the PP-OCRv2 detection and recognition inference models and convert
# them into Paddle Serving server/client models.
RUN wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar -O ch_PP-OCRv2_det_infer.tar && \
    tar -xf ch_PP-OCRv2_det_infer.tar && \
    wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar -O ch_PP-OCRv2_rec_infer.tar && \
    tar -xf ch_PP-OCRv2_rec_infer.tar && \
    ln -s /usr/local/bin/python3.7 /usr/local/bin/python && \
    python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_det_infer/ \
                                            --model_filename inference.pdmodel \
                                            --params_filename inference.pdiparams \
                                            --serving_server ./ppocrv2_det_serving/ \
                                            --serving_client ./ppocrv2_det_client/ && \
    python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_rec_infer/ \
                                            --model_filename inference.pdmodel \
                                            --params_filename inference.pdiparams \
                                            --serving_server ./ppocrv2_rec_serving/ \
                                            --serving_client ./ppocrv2_rec_client/ && \
    rm -rf *_infer.tar *_infer
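
Not part of the original report, but a minimal sketch of how the resulting image could be sanity-checked, assuming it is built and tagged as ppocr-serving:test (a placeholder name) and that the paddlepaddle wheel inside the image provides paddle.utils.run_check():

# Build the image from the Dockerfile above (tag is a placeholder).
docker build -t ppocr-serving:test .

# Check, from inside the container, that the driver, the library symlinks
# created above, and Paddle itself can all see the GPU.
docker run --rm --gpus all ppocr-serving:test bash -c \
  'nvidia-smi && \
   ls -l /usr/lib/libcublas.so /usr/lib/libcusolver.so /usr/lib/libcudnn.so && \
   python -c "import paddle; paddle.utils.run_check()"'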

I have tried starting the container in the following ways, none of which solved the problem (a host-side check is sketched after this list):

  1. docker run -itd --gpus all
  2. docker run -itd --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all
  3. nvidia-docker run --runtime=nvidia
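
Not part of the original issue, but one way to separate a host-side problem from a Paddle problem is to check whether the NVIDIA Container Toolkit exposes the GPU to any container at all; the nvidia/cuda tag below is only an example and may need to match an image available locally:

# Does --gpus all work with a plain CUDA base image? If this fails, the problem
# is with the host driver / NVIDIA Container Toolkit setup, not with Paddle.
docker run --rm --gpus all nvidia/cuda:11.2.2-base-ubuntu20.04 nvidia-smi

# If the above works, the same flags should expose the GPU to the serving image too.
docker run --rm --gpus all \
  registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime nvidia-smi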

ShiningZhang (Collaborator) commented

From the log you posted, the prediction request was actually executed on GPU 0 and returned a normal result.

byteszard (Author) commented

Then how should the warnings in the log be handled? If the GPU can be detected, what about the warning at docker startup saying Paddle cannot detect CUDA? Also, prediction is still not as fast as running docker on my local Mac and predicting on CPU; the log shows that a large amount of time is spent in the device_context environment check.

@paddle-bot closed this as completed Apr 16, 2024