
Using multiprocessing causes an error #71

Open
segalinc opened this issue Mar 6, 2024 · 1 comment
segalinc commented Mar 6, 2024

I am trying to use the library on a large dataset, so I am setting up a multiprocessing Pool to speed up the processing. However, when I call a function such as detect_censors from the worker processes, I get this error:

RuntimeError: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 100: no CUDA-capable device is detected ; GPU=-1905859077 ; hostname=2edfd084-8003-4bed-a5e6-d03d1198eede ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=236 ; expr=cudaSetDevice(info_.device_id);

Any idea?
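For context, the setup described above might look something like the following sketch. The names `process_image` and `run_pool` are hypothetical, and the actual `detect_censors` call (from `imgutils.detect`) is stubbed out so the sketch is self-contained. One common pitfall with CUDA and multiprocessing on Linux is the default `fork` start method: forked workers inherit a CUDA context from the parent that they cannot reuse, which can produce "no CUDA-capable device is detected" errors, so the sketch uses `spawn` instead.

```python
# Hypothetical sketch of a Pool-based batch run over imgutils detection
# functions. The real worker would call detect_censors; it is stubbed
# here so the sketch runs standalone.
import multiprocessing as mp


def process_image(path):
    # In real use, this would be something like:
    #   from imgutils.detect import detect_censors
    #   return path, detect_censors(path)
    return path, []  # stub standing in for the detection result


def run_pool(paths, workers=4):
    # 'spawn' starts each worker in a fresh interpreter, so every
    # process initializes onnxruntime (and any CUDA context) on its
    # own instead of inheriting one from the parent via fork.
    ctx = mp.get_context('spawn')
    with ctx.Pool(processes=workers) as pool:
        return pool.map(process_image, paths)


if __name__ == '__main__':
    results = run_pool([f'image_{i}.png' for i in range(8)])
    print(len(results))
```

Whether `spawn` is needed here is an assumption; it depends on whether onnxruntime touches CUDA in the parent process before the Pool is created.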


narugo1992 commented Mar 15, 2024

@segalinc I'm not able to reproduce this error:

from concurrent.futures import ProcessPoolExecutor

from imgutils.tagging import get_wd14_tags
from test.testings import get_testfile


def f(i):
    print(f'start {i}')
    rating, tags, chars = get_wd14_tags(get_testfile('nude_girl.png'), drop_overlap=True)
    print(f'end {i}')


if __name__ == '__main__':
    ex = ProcessPoolExecutor(max_workers=4)
    for i in range(8):
        ex.submit(f, i)

    ex.shutdown()

This works fine on onnxruntime-gpu 1.17.1. Can you provide your parallel code?
