Experiment on DenseNet with dilation #139
Comments
@Xiangyu-CAS Would you mind sharing the results of your experiment?
@Ai-is-light, would you mind sharing yours?
@Xiangyu-CAS I haven't tried ResNet or DenseNet yet, but I will. I have tried the original network modified with dilated convolutions; in my work I need to pay more attention to speed. However, I failed when I tried to use the BN layers in stages 2-6. In your result, "ResNet50 (conv4,5 dilation, bn_istrain) + 2 stages: 55% (2644 images)", does that mean you kept the BN layers in training mode, including every layer of stages 2-6? I trained the original network and got about 0.584 mAP on the coco-2644 set, and about 0.543 mAP with dilated convolutions. Now I am trying MobileNet to train a faster model.
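For reference, a minimal sketch of the two BN modes being discussed, i.e. "bn_istrain" (BN layers use and keep updating batch statistics during fine-tuning) versus frozen BN. This assumes a PyTorch-style model; the framework used in this thread may differ:

```python
# Hypothetical illustration (PyTorch) of the BN choice discussed above:
# "bn_istrain" = BN uses batch statistics and keeps learning its affine
# parameters; frozen BN = stored running statistics, parameters fixed.
import torch.nn as nn

def set_bn_mode(model, bn_istrain: bool):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train(bn_istrain)  # True: batch stats; False: running stats
            for p in m.parameters():
                p.requires_grad = bn_istrain

# e.g. set_bn_mode(backbone, bn_istrain=True) before fine-tuning;
# note that a later model.train()/model.eval() call resets these modes.
```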
Is the performance you reported obtained on COCO2017 or COCO2014?
Hi, did you use MobileNet to train the model? I have been trying it for a long time. Did you change the crop size to 128 when training this model? How is the performance? Thanks.
Hi, recently I have carried out some experiments based on a DenseNet backbone. To produce feature maps at stride 8, the dilation (à trous) trick was used (a rough sketch of the idea follows the loss curves below).
However, the results are really confusing: most of the refinement stages are not working. In the original implementation the loss decreases from stage 1 to stage 6, but in my experiment with dilation the loss stays constant from stage 3 to stage 6.
Did you carry out any experiments using dilation and run into the same issue?
Original: [loss curves image]
DenseNet + dilation: [loss curves image]
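For concreteness, here is a minimal sketch of the stride-8 dilation trick described above. It is not the code used in this thread (whose framework is not shown); it assumes torchvision's DenseNet-121, removes the stride of the last two transition pools, and dilates the 3x3 convolutions that follow, so the final feature map stays at stride 8:

```python
# Minimal sketch of the "dilation / à trous" trick on a DenseNet backbone
# (hypothetical illustration using torchvision's DenseNet-121, not the code
# from this thread). The stock network downsamples to stride 32; here the
# pooling in transition2/transition3 is made stride-1 and the 3x3 convs in
# the following dense blocks are dilated, keeping the output at stride 8.
import torch
import torch.nn as nn
from torchvision.models import densenet121

def densenet_stride8(backbone):
    dilation = 1
    for trans_name, block_name in [("transition2", "denseblock3"),
                                   ("transition3", "denseblock4")]:
        trans = getattr(backbone.features, trans_name)
        # stride-1 pooling with padding preserves spatial resolution
        trans.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        dilation *= 2
        for m in getattr(backbone.features, block_name).modules():
            if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3):
                # dilate to keep the receptive field the stride provided
                m.dilation = (dilation, dilation)
                m.padding = (dilation, dilation)
    return backbone

model = densenet_stride8(densenet121())
feats = model.features(torch.randn(1, 3, 368, 368))
print(feats.shape)  # torch.Size([1, 1024, 46, 46]) -> stride 8, not 32
```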