Another question: does the onnxruntime Python package support the upcoming sub-interpreters / per-interpreter GIL (python/cpython#104210)? I guess this is mostly a question of managing global state, op registrations, and so on.
One more question: it would be really great to have an example of producer-consumer patterns combining ONNX Runtime and TPL.

One pattern could be a single-threaded ONNX Runtime GPU worker consuming inputs from a TPL queue and putting outputs to another TPL queue. Another could be multi-threaded ONNX Runtime CPU workers consuming inputs from a TPL queue and putting outputs to another TPL queue.
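As a minimal language-agnostic sketch of the second pattern (the TPL analogue would use `BufferBlock` and `ActionBlock`), here is a multi-worker consumer pool over two queues in Python. `run_model` is a hypothetical stand-in for calling `session.run(...)` on a loaded onnxruntime session; the sentinel-based shutdown is one common convention, not something prescribed by ONNX Runtime.

```python
import queue
import threading

# Hypothetical stand-in for onnxruntime inference (session.run on a real model).
def run_model(x):
    return x * 2

SENTINEL = object()  # signals workers to shut down

def worker(inputs: queue.Queue, outputs: queue.Queue) -> None:
    # Consume inputs until the sentinel appears, mirroring a
    # TPL input-queue -> worker -> output-queue pipeline stage.
    while True:
        item = inputs.get()
        if item is SENTINEL:
            inputs.put(SENTINEL)  # re-post so sibling workers also stop
            break
        outputs.put(run_model(item))

inputs: queue.Queue = queue.Queue()
outputs: queue.Queue = queue.Queue()

# Multi-threaded CPU variant: several workers share one input queue.
# The single-threaded GPU variant is the same code with one worker.
threads = [threading.Thread(target=worker, args=(inputs, outputs)) for _ in range(4)]
for t in threads:
    t.start()

for i in range(8):
    inputs.put(i)
inputs.put(SENTINEL)

for t in threads:
    t.join()

results = sorted(outputs.get() for _ in range(8))
print(results)  # doubled inputs, sorted: [0, 2, 4, 6, 8, 10, 12, 14]
```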
Describe the feature request
I have an existing TPL Dataflow pipeline (https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.dataflow.dataflowblockoptions.taskscheduler?view=net-7.0, https://github.com/dotnet/runtime/tree/main/src/libraries/System.Threading.Tasks.Dataflow). I'd like to use CPU-backed ONNX models in its threads.
How can I do it properly?
I guess there should be some custom init function for the ThreadPool threads that loads the ONNX model?
If the per-call functions are lightweight, it's critical that every thread has its own loaded ONNX model and loads it only once
Describe scenario use case
Migrating an existing TPL pipeline to use Python/ONNX-exported graphs in its components