@tpoisonooo
I apologize for bothering you out of the blue.
I would like to ask two questions.
Does mmdeploy currently only support fp16-level quantization for the onnxruntime backend?
I would like to quantize rtmpose to int8. I tried quantizing it with onnxruntime's static quantization, but the accuracy of the quantized model drops to zero. Could I write a pose-detection_onnxruntime-int8_static.py config myself, based on pose-detection_onnxruntime-fp16_static.py, to do int8 quantization through mmdeploy?
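For reference, this is roughly what I tried; a minimal sketch, where the paths, the input tensor name, the input shape, and the calibration data are placeholders for my actual setup (I use real preprocessed dataset images rather than the random arrays shown here, since unrepresentative calibration data alone can ruin accuracy):

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantType,
                                      quantize_static)


class PoseCalibrationReader(CalibrationDataReader):
    """Feeds preprocessed samples to the quantization calibrator."""

    def __init__(self, input_name, num_samples=100):
        # Placeholder: random data stands in for real preprocessed
        # images; in practice, calibrate on representative inputs.
        self.samples = (
            {input_name: np.random.rand(1, 3, 256, 192).astype(np.float32)}
            for _ in range(num_samples))

    def get_next(self):
        # Return None once all calibration samples are consumed.
        return next(self.samples, None)


quantize_static(
    model_input='rtmpose.onnx',        # exported fp32 model (placeholder path)
    model_output='rtmpose_int8.onnx',  # quantized output (placeholder path)
    calibration_data_reader=PoseCalibrationReader('input'),
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```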
Hi. I need to deploy my model (any object detection model) in ONNX format in fp16 mode. Is this possible with mmdeploy?
Thanks in advance.
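In case a post-export conversion is acceptable, one alternative is converting the exported ONNX model with onnxconverter-common. This is only a sketch with placeholder file names, not mmdeploy's own fp16 path:

```python
import onnx
from onnxconverter_common import float16

# Load the exported fp32 detector (placeholder path).
model = onnx.load('detector.onnx')

# Convert internal tensors to fp16; keep_io_types leaves the model's
# inputs/outputs as fp32 so the calling code does not need to change.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

onnx.save(model_fp16, 'detector_fp16.onnx')
```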