Try to reproduce VOC 10-1 results #37
Comments
I was using 2 V100 GPUs. Do you also use the same batch size as me? I know that mixed precision can sometimes give different results depending on the GPU/CUDA version. Have you tried without it?
Unfortunately I have left the lab since I finished my work there, and I no longer have those intermediate results.
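On the mixed-precision point above: a quick way to rule it out is to force the whole training step to run in FP32. Below is a minimal, self-contained sketch of what such a toggle could look like in a typical torch.cuda.amp loop; the `use_amp` flag, the toy model, and the dummy data are purely illustrative and not this repo's actual training code.

```python
import torch
import torch.nn as nn

use_amp = False  # flip between True/False to compare mixed precision vs. full FP32

# toy stand-ins so the snippet runs on its own; replace with the real model/loader
model = nn.Conv2d(3, 21, kernel_size=1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

images = torch.randn(2, 3, 64, 64, device="cuda")
targets = torch.randint(0, 21, (2, 64, 64), device="cuda")

optimizer.zero_grad()
# autocast(enabled=False) forces this block to run entirely in FP32
with torch.cuda.amp.autocast(enabled=use_amp):
    loss = criterion(model(images), targets)
# GradScaler becomes a no-op when enabled=False, so the same code path works either way
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```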
I have tried it on 2x RTX 2080 Ti with CUDA 10.2 (see environment.txt for the full environment). However, I got nearly the same results as with 2x RTX 3090, so I don't think the CUDA or PyTorch version affects the results that much. There is still a gap (about 4.5 percentage points) in the old-class performance. Looking forward to your advice. Thanks.
Yes, I run with the default batch size of 24, i.e., 12 per GPU.
@arthurdouillard May I ask whether you use different hyperparameter settings for different tasks, e.g., 10-1, 15-5, 15-1, etc.? I ask because I can reproduce the results for 15-1.
I am trying to reproduce the 10-1 results shown in the table below. I notice a large gap in old-class mIoU between my reproduced result (38.82) and your reported one (44.03), roughly 5 percentage points, and I am wondering what could cause it. I run the experiments with 2x RTX 3090 GPUs and follow your original implementation, except that I use CUDA 11.3 because CUDA 10.2 does not support the RTX 3090. Does that matter?
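To make sure we are computing the same numbers (this does not reproduce the results table referenced above), here is a rough sketch of how I split per-class IoU into old-class and new-class mIoU for the 10-1 setting. Treating "old" as background plus the 10 classes learned in the initial step is my assumption about the standard overlapped VOC 10-1 protocol, not necessarily this repo's exact evaluation code.

```python
import numpy as np

def split_miou(per_class_iou, num_old=11):
    """Split per-class IoU (index 0 = background, 1..20 = VOC classes)
    into old-class and new-class mIoU for the 10-1 setting.

    Assumes "old" = background + the 10 classes from the initial step,
    and "new" = the 10 classes added one at a time afterwards.
    """
    per_class_iou = np.asarray(per_class_iou, dtype=float)
    old_miou = per_class_iou[:num_old].mean()
    new_miou = per_class_iou[num_old:].mean()
    all_miou = per_class_iou.mean()
    return old_miou, new_miou, all_miou

# example with dummy IoU values for 21 classes (background + 20 VOC classes)
dummy_iou = np.random.uniform(0.2, 0.9, size=21)
old, new, overall = split_miou(dummy_iou)
print(f"old mIoU: {old:.2%}, new mIoU: {new:.2%}, all: {overall:.2%}")
```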
By the way, may I know which GPU model you used? I think it needs at least 16 GB to hold a batch of 12 on each device and has to support CUDA 10.2 as well. A V100, I guess?
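As a side note, an easy way to sanity-check that 16 GB estimate (which is just my guess, not a verified number) is to print the device's total memory and the peak allocation after a few training iterations:

```python
import torch

# total memory of the current GPU, in GiB
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB total")

# after running a few training iterations, the peak allocation gives a lower
# bound on how much memory a batch of 12 actually needs on this device
print(f"peak allocated: {torch.cuda.max_memory_allocated(0) / 1024**3:.1f} GiB")
print(f"peak reserved:  {torch.cuda.max_memory_reserved(0) / 1024**3:.1f} GiB")
```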
Meanwhile, I notice a strange phenomenon: the background performance drops drastically starting from the 8th step and becomes 0 at the 9th step. I think this hurts the old-class performance a lot. Do you have a similar issue?
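In case it helps to pin down when the background collapses, logging the background IoU from the confusion matrix after each incremental step makes the drop easy to spot. The confusion-matrix-based IoU below is the standard formula, but the step loop, variable names, and dummy data are illustrative rather than this repo's evaluation code.

```python
import numpy as np

def per_class_iou(confusion):
    """Per-class IoU from a (C, C) confusion matrix with rows = ground truth."""
    tp = np.diag(confusion).astype(float)
    fp = confusion.sum(axis=0) - tp
    fn = confusion.sum(axis=1) - tp
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)

# dummy confusion matrices standing in for the evaluation after each step;
# in practice these come from accumulating predictions on the validation set
rng = np.random.default_rng(0)
confusions_per_step = [rng.integers(0, 100, size=(21, 21)) for _ in range(10)]

for step, confusion in enumerate(confusions_per_step):
    iou = per_class_iou(confusion)
    # a background IoU that suddenly drops to ~0 at a given step shows up here
    print(f"step {step}: background IoU = {iou[0]:.3f}, mIoU = {iou.mean():.3f}")
```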
Thanks.