Lower accuracy of pretrained model in model zoo.... #1051
Also, when using the pretrained Mask R-CNN R-101 model in /configs/caffe2, the result is about the same, and lower than reported:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.386
Thanks for your attention.
It may be related to #672. Maybe this comment can help you get the reported performance.
Thank you very much for your reply. I had already noticed that comment, but it applies to training when not using 8 GPUs. In my case I only use the pretrained model from the model zoo for inference (evaluation). Moreover, the number of images per GPU is still 2 during inference, so maybe I don't need to apply that change.
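For context, the linear scaling rule discussed in that comment only touches training hyperparameters. A minimal sketch, assuming the usual formulation (the function name and the default values below are illustrative, not part of the maskrcnn-benchmark API):

```python
# Sketch of the linear scaling rule: when the effective training batch size
# changes, scale the learning rate up and the iteration schedule down by the
# same ratio. Illustrative only -- not maskrcnn-benchmark's actual API.
def scale_schedule(base_lr, max_iter, steps, base_batch, new_batch):
    """Return training hyperparameters adjusted for a new batch size."""
    ratio = new_batch / base_batch
    return {
        "lr": base_lr * ratio,
        "max_iter": int(round(max_iter / ratio)),
        "steps": tuple(int(round(s / ratio)) for s in steps),
    }

# Example: a hypothetical 8-GPU schedule (batch 16) adapted to 4 GPUs (batch 8).
print(scale_schedule(0.02, 90000, (60000, 80000), 16, 8))
```

Since inference runs on frozen weights, none of these values should affect evaluation, which is consistent with the point above.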
And here are the yaml files in /configs/caffe2 and /configs:
MODEL:
MODEL:
I am quite confused and I don't know where the problem is.
-----Updated-----
The results (AP, AP50, AP75, APs, APm, APl) of another pretrained model, configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2, are still lower than reported in the model zoo.
Sorry, I read the issue too quickly. Indeed, this is weird. I'm going to check it in a few environments later; I'll keep you informed.
Much appreciated for looking into it. Thanks!
Same problem here: much lower than the 37.8 box AP reported in the model zoo.
Same problem. The model is Mask R-CNN R-50-FPN. I got 35.1% mAP for detection and 32.3% mAP for segmentation, which should be 37.8% and 34.2%.
❓ Questions and Help
When I use /configs/e2e_mask_rcnn_R_101_FPN_1x.yaml and /configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x.yaml to evaluate the model (I didn't change any code in maskrcnn-benchmark), the resulting accuracy is 2-3% lower than reported in the model zoo.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.385
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.597
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.418
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.208
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.418
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.514
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.316
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.493
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.516
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.313
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.556
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.659
Loading and preparing results...
DONE (t=1.69s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type segm
DONE (t=27.56s).
Accumulating evaluation results...
DONE (t=3.49s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.346
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.560
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.367
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.147
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.371
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.520
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.294
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.447
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.466
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.255
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.505
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.632
2019-08-19 08:43:41,982 maskrcnn_benchmark.inference INFO:
Task: bbox
AP, AP50, AP75, APs, APm, APl
0.3846, 0.5967, 0.4184, 0.2077, 0.4183, 0.5139
Task: segm
AP, AP50, AP75, APs, APm, APl
0.3464, 0.5595, 0.3668, 0.1475, 0.3708, 0.5204
The accuracy you report in the model zoo is about 40.1 (box AP) and 36.1 (mask AP).
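For concreteness, the gap between the model-zoo numbers and the log above works out to roughly 1.6 points on both tasks:

```python
# Reported model-zoo AP vs. the values obtained in the log above.
reported = {"bbox_AP": 40.1, "segm_AP": 36.1}
obtained = {"bbox_AP": 38.46, "segm_AP": 34.64}

# Difference in AP points for each task.
gaps = {k: round(reported[k] - obtained[k], 2) for k in reported}
print(gaps)  # {'bbox_AP': 1.64, 'segm_AP': 1.46}
```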
I'm very confused, and I just ran:
python -m torch.distributed.launch --nproc_per_node=4 tools/test_net.py --config-file "/data1/48data/maskrcnn-benchmark/configs/e2e_mask_rcnn_R_101_FPN_1x.yaml" TEST.IMS_PER_BATCH 8
(I only have 4 GPUs, so I set TEST.IMS_PER_BATCH to 8.)
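For what it's worth, if TEST.IMS_PER_BATCH is the total test batch across all processes (as the parenthetical above suggests), then both the 8-GPU default (16) and the 4-GPU command (8) keep 2 images per GPU. A quick sanity check, with an illustrative helper name:

```python
# Per-GPU test images = TEST.IMS_PER_BATCH / number of GPUs (processes),
# assuming the batch is split evenly across processes.
def images_per_gpu(ims_per_batch, num_gpus):
    assert ims_per_batch % num_gpus == 0, "batch size must divide evenly"
    return ims_per_batch // num_gpus

print(images_per_gpu(16, 8))  # 8-GPU default -> 2
print(images_per_gpu(8, 4))   # 4 GPUs with TEST.IMS_PER_BATCH=8 -> 2
```

So the per-GPU load at inference time matches the default setup, and the batch-size change alone should not explain the AP gap.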