Comparison Between Quantized and Unquantized YOLOv8n Models Using the MSCOCO Validation Dataset #14865
I quantized the YOLOv8n model using the post-training static quantization method with ONNX Runtime on Kaggle. I can run inference with both models, as you can see in the image. Now I want to compare their performance without visualizing, still in the Kaggle environment. I want to validate both of them and see their mAP values, speed, etc. I tried to use YOLO's val method, but I don't have enough memory space, so I decided to use a simplified MSCOCO dataset. Since I will use a pretrained model, I didn't download MSCOCO train2017.zip; I only used the val2017 and annotations_trainval2017 folders.
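For context, the quantization step looked roughly like this (the calibration reader, the number of calibration images, and the random placeholder tensors are simplifications, not my exact notebook code):

```python
import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)


class YOLOCalibrationReader(CalibrationDataReader):
    """Feeds preprocessed calibration images to the quantizer one at a time."""

    def __init__(self, tensors, input_name):
        self._iter = iter([{input_name: t} for t in tensors])

    def get_next(self):
        return next(self._iter, None)


# Input name of the exported model (usually "images" for YOLOv8 exports).
session = ort.InferenceSession('/kaggle/working/yolov8n.onnx',
                               providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

# Placeholder calibration batch: in the real notebook these are val2017 images
# resized to 640x640, normalized to [0, 1], laid out as NCHW float32.
calib_tensors = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(16)]

quantize_static(
    model_input='/kaggle/working/yolov8n.onnx',
    model_output='/kaggle/working/yolov8n_quant.onnx',
    calibration_data_reader=YOLOCalibrationReader(calib_tensors, input_name),
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```

The outline of my validation code so far: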
```python
yolov8n = '/kaggle/working/yolov8n.onnx'

def load_model(model_path):
    ...

# preprocess and postprocess operations
def postprocess(results, img_shape, confidence=0.35, iou=0.35):
    ...

# simplify the dataset (val2017) the same way as the calibration dataset
# performance estimation function

original_model = load_model(yolov8n)
```
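To score the results without the full Ultralytics val pipeline, I was planning to dump the postprocessed detections in COCO results format and evaluate them with pycocotools, roughly like this (the annotation and detection paths are placeholders for my Kaggle paths):

```python
import json

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ANN_FILE = '/kaggle/working/annotations/instances_val2017.json'
DET_FILE = '/kaggle/working/detections.json'
# DET_FILE holds a list of dicts in COCO results format:
# {"image_id": int, "category_id": int, "bbox": [x, y, w, h], "score": float}

coco_gt = COCO(ANN_FILE)
coco_dt = coco_gt.loadRes(DET_FILE)

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
# Restrict scoring to the images actually run through the model.
coco_eval.params.imgIds = sorted({d['image_id'] for d in json.load(open(DET_FILE))})
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[.50:.95], AP@.50, AR, etc.
```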
@BeyzaSimsekk Regarding your code, it seems you are on the right track but may need to refine your approach to calculating mAP and handling the dataset. Ensure that your preprocessing and postprocessing steps align with the model's requirements and that you correctly handle the dataset paths and annotations. For memory issues, consider using a smaller batch size or reducing the image resolution during validation; this can help manage memory usage more effectively. If you need further assistance with specific parts of your code or additional guidance on dataset handling, feel free to ask. For more detailed information on TensorRT export and performance optimization, you can refer to the Ultralytics TensorRT documentation.
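As a rough sketch of what that can look like (the batch and imgsz values are only illustrative, and a fixed-shape ONNX export may require keeping the export image size, in which case lower only the batch):

```python
from ultralytics import YOLO

# Validate the exported ONNX model with reduced memory pressure.
# data can be 'coco.yaml' or a custom YAML that points only at val2017.
model = YOLO('/kaggle/working/yolov8n.onnx')
metrics = model.val(data='coco.yaml', batch=4, imgsz=480, device='cpu')

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.speed)      # preprocess/inference/postprocess time per image (ms)
```

Running the same call on the original model and on your quantized ONNX file gives directly comparable numbers.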
Thank you for sharing your detailed code and explanation. It seems like you've made significant progress. To address the mAP discrepancy, ensure that your preprocessing and postprocessing steps align with the model's requirements. Also, verify that the `detections.json` and `gt_boxes.json` files are correctly formatted and that the category IDs match between your annotations and detections. If the issue persists, please check whether it reproduces with the latest version of the Ultralytics package; this can help identify whether any underlying issues have been resolved in recent updates.
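As a quick sanity check for the category IDs (assuming your `detections.json` follows the COCO results format; the paths below are placeholders), you can compare the IDs in the file against the annotation file:

```python
import json

from pycocotools.coco import COCO

coco_gt = COCO('/kaggle/working/annotations/instances_val2017.json')
valid_ids = set(coco_gt.getCatIds())  # 80 IDs within 1..90, non-contiguous

dets = json.load(open('/kaggle/working/detections.json'))
det_ids = {d['category_id'] for d in dets}
print('category_ids in detections but not in annotations:', det_ids - valid_ids)

# If the detections carry YOLO class indices (0-79) instead of COCO IDs,
# remap them first; the sorted COCO category IDs give the usual index -> ID map:
# yolo_to_coco = sorted(coco_gt.getCatIds())
# for d in dets:
#     d['category_id'] = yolo_to_coco[d['category_id']]
```

A mismatch here (YOLO's contiguous 0-79 class indices versus COCO's non-contiguous 1-90 category IDs) is one of the most common causes of an unexpectedly low mAP.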