Can you release the performance on 13 OdinW datasets like GLIP? #13

Open
Kegard opened this issue May 17, 2023 · 7 comments

Kegard commented May 17, 2023

I used the released "end-to-end-stage" checkpoint on another dataset, and my result is this:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.012
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.025
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.011
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.015
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.513
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.513
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.513
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.513
OrderedDict([('bbox_mAP', 0.012), ('bbox_mAP_50', 0.025), ('bbox_mAP_75', 0.011), ('bbox_mAP_s', -1.0), ('bbox_mAP_m', 0.0), ('bbox_mAP_l', 0.015), ('bbox_mAP_copypaste', '0.012 0.025 0.011 -1.000 0.000 0.015')])

I want to know whether my code is wrong or whether the result is really this bad. Or could you release the inference code for the OdinW datasets?

@suekarry

Excuse me, have you solved your problem?

Kegard commented Sep 20, 2023

I changed my dataset and tested again, and then I got a normal result.

@suekarry

Thanks for your reply!
1. What do you mean by changing the dataset, modifying the validation set part?
2. When you get -1, is your loss still decreasing and converging normally?

Kegard commented Sep 20, 2023

1. I changed the whole dataset, including both the train set and the val set.
2. I only ran zero-shot evaluation on the other dataset, so I haven't trained the model.

If you can't solve the problem, I think you can change your dataset and run a test. I remember the mmdet documentation explains why -1 happens; you can read it.

@suekarry
I see, thank you! But I don't know where mmdet explains this. I have checked the comments section on mmdet's GitHub and searched the mmdet manual.

Kegard commented Sep 20, 2023

I can't find the link, but I remember that there is some problem with your dataset if mAP = -1.
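
For what it's worth, here is a rough sketch of where that -1 comes from, assuming the evaluation goes through pycocotools' COCOeval (which mmdet's COCO-style evaluation builds on). The summarize step averages only entries that were actually evaluated; any IoU/area/maxDets slot with no matching ground truth keeps a -1 sentinel, so an area range like "small" with no ground-truth boxes reports AP = -1:

```python
# Rough sketch, not mmdet's actual code: how pycocotools-style summarization
# yields -1 when an area range (e.g. "small") has no ground-truth boxes.
import numpy as np

def summarize(precision: np.ndarray) -> float:
    # Slots never evaluated (no matching ground truth) keep the -1 sentinel
    # and are excluded from the mean; if nothing is left, the metric is -1.
    valid = precision[precision > -1]
    return float(np.mean(valid)) if valid.size else -1.0

print(summarize(np.full(10, -1.0)))           # -1.0: no small objects in the GT
print(summarize(np.array([0.2, 0.4, -1.0])))  # about 0.3: valid entries averaged
```

So an mAP of -1 for one area range can simply mean the annotations contain no objects of that size, which is why changing the dataset made it go away.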

@suekarry

Thanks for your reply! I ended up solving this problem by setting iou_threshold=None in coco.py. (Thank you again~)
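
For later readers, a hedged sketch of what that change corresponds to, assuming an mmdet 2.x-style coco.py (where the argument is named iou_thrs; the exact name may differ in this repo): passing None makes evaluate() fall back to the ten standard COCO IoU thresholds instead of a single fixed IoU:

```python
# Hedged sketch in the style of mmdet 2.x's mmdet/datasets/coco.py (the
# parameter name iou_thrs is an assumption, not verified against this repo):
# evaluate() falls back to the standard COCO thresholds when given None.
import numpy as np

iou_thrs = None  # setting the threshold argument to None restores the default
if iou_thrs is None:
    # 0.50, 0.55, ..., 0.95 -- the range behind "IoU=0.50:0.95" in the log
    iou_thrs = np.linspace(0.5, 0.95,
                           int(np.round((0.95 - 0.5) / 0.05)) + 1,
                           endpoint=True)
print(iou_thrs)  # [0.5 0.55 0.6 ... 0.95]
```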
