Nano: trace to OpenVINO inference engine needs a thread (core) num control option #5608
Comments
@qiyuangong Is this behavior expected?
After a large amount of experimentation and feedback from team members and customers, I found that ONNXRuntime's core-num control is fragile and unreliable; it may randomly ignore the user's setting.

For fp32:

For int8:
We are contacting the onnxruntime support teams (MS and Intel) to find a solution.
For onnxruntime, users can control how many cores their accelerated model uses, while we don't have such an option for OpenVINO right now.