This is my first time deploying PaddleOCR on GPU resources. I have run into a few problems that I could not resolve after several attempts, so I am asking for help here.
Environment: the host runs Ubuntu 20.04 with only the NVIDIA driver installed; CUDA and cuDNN are not installed on the host. The container is built from the image registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime.
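(Not part of the original report, but worth checking in this kind of setup: with a -runtime image, CUDA 11.2 and cuDNN 8 come from the image itself, while the host driver still has to be new enough to back CUDA 11.2, roughly the 460 series or newer. A quick way to read the driver version on the host:

nvidia-smi --query-gpu=driver_version --format=csv,noheader
)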
The following problems occur:
Problem 1 (when the docker container is started):
Problem 2 (after the container is started, when an image is sent in for prediction):
Output of nvidia-smi inside the container, and the packages installed in the image:
Part of the Dockerfile is shown below:
FROM registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime as serving
ENV TZ=Asia/Shanghai
RUN ln -s /usr/local/cuda/lib64/libcublas.so.11 /usr/lib/libcublas.so && \
    ln -s /usr/local/cuda/lib64/libcusolver.so.11 /usr/lib/libcusolver.so && \
    ln -s /usr/lib/x86_64-linux-gnu/libcudnn.so.8 /usr/lib/libcudnn.so

FROM serving as prepare
WORKDIR ppocr
RUN wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar -O ch_PP-OCRv2_det_infer.tar && \
    tar -xf ch_PP-OCRv2_det_infer.tar && \
    wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar -O ch_PP-OCRv2_rec_infer.tar && \
    tar -xf ch_PP-OCRv2_rec_infer.tar && \
    ln -s /usr/local/bin/python3.7 /usr/local/bin/python && \
    python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_det_infer/ \
        --model_filename inference.pdmodel \
        --params_filename inference.pdiparams \
        --serving_server ./ppocrv2_det_serving/ \
        --serving_client ./ppocrv2_det_client/ && \
    python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_rec_infer/ \
        --model_filename inference.pdmodel \
        --params_filename inference.pdiparams \
        --serving_server ./ppocrv2_rec_serving/ \
        --serving_client ./ppocrv2_rec_client/ && \
    rm -rf *_infer.tar *_infer
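(For reference, not in the original report: after the models are converted as in the Dockerfile above, the GPU still has to be selected explicitly when the serving process is started. A minimal sketch using paddle_serving_server's standard CLI, with the converted detection model as an example and the port number chosen arbitrarily:

# start the detection model server on GPU 0; port 9293 is an example value
python -m paddle_serving_server.serve \
    --model ppocrv2_det_serving \
    --port 9293 \
    --gpu_ids 0

If the deployment instead uses the pipeline web_service.py from PaddleOCR's deploy/pdserving directory, the equivalent switch lives in its config.yml, under local_service_conf: device_type and devices.)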
I have tried starting the container in the following ways; none of them solved the problem (a couple of sanity checks are sketched right after these commands):
docker run -itd --gpus all
docker run -itd --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all
nvidia-docker run --runtime=nvidia
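(Not part of the original report, but a way to separate "Docker cannot see the GPU" from "Paddle cannot use the GPU" is to run nvidia-smi and a Paddle self-check in a throwaway container built from the same base image. paddle.is_compiled_with_cuda() and paddle.utils.run_check() are standard PaddlePaddle 2.x calls; whether the GPU build of paddle is actually installed in this particular serving image is an assumption to verify:

# 1) can the container see the GPU at all?
docker run --rm --gpus all registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime nvidia-smi

# 2) can the paddle wheel inside the image find CUDA/cuDNN?
docker run --rm --gpus all registry.baidubce.com/paddlepaddle/serving:0.8.0-cuda11.2-cudnn8-runtime \
    python3.7 -c "import paddle; print(paddle.is_compiled_with_cuda()); paddle.utils.run_check()"
)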
From the log here I can see that when the image was sent in for prediction, the prediction was executed on GPU 0 and a normal result was returned.
How should the Warning in the log be handled? If the GPU can be recognized, what about the Warning at docker startup saying Paddle cannot detect CUDA? Also, this is not even as fast as running docker on my local Mac and predicting on CPU; the log shows that a large amount of time is spent in the device_context environment check.
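(One thing worth ruling out, my assumption rather than something confirmed in this thread: the time attributed to device_context is typically the one-off cuDNN/cuBLAS handle initialisation that happens on the first GPU request, so latency should be measured from the second request onward. A rough sketch, assuming the pipeline_http_client.py test client from PaddleOCR's deploy/pdserving directory is the one being used:

# first request pays the one-off CUDA handle initialisation; ignore its latency
python3.7 pipeline_http_client.py
# time a second, warmed-up request instead
time python3.7 pipeline_http_client.py
)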