Yolo V3 performance much lower than Yolo Tiny V3 #901
@ssanzr
1. What params did you use in the Makefile?
2. Did you check your dataset with Yolo_mark?
3. Can you show the contents of bad.list and bad_label.list, if they were created after training?

bad_label.list was not found. How can I enable its creation?
The bad_label.list file is created automatically only if your dataset is wrong, so your dataset is correct. Also, Yolo v3 requires more iterations to reach high accuracy (mAP), so in general you should train it for more iterations than Yolo v3 tiny.
@AlexeyAB thanks a lot for your support here. With Yolo Tiny v3 I am able to get very good results with 416x416 images, so I will try this resolution for Yolo V3; getting to 50000 iterations with large images will take a couple of weeks on my system. I will change the anchors and the steps as you suggested. Let me know if this does not sound OK. Anyway, I am very curious to understand why Yolo Tiny works quite well "out of the box" and Yolo V3 does not, for the same input. Let's see if it makes sense after this round of testing.
num_of_clusters = 9, width = 416, height = 416 calculating k-means++ ... Saving anchors to the file: anchors.txt
For your dataset, Yolo v3 with high resolution should give much higher accuracy. Try to calculate anchors for 832x832, and train Yolo v3 with those anchors.
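For reference, in the AlexeyAB fork the anchors are usually recalculated and applied roughly like this; a hedged sketch, where data/obj.data and the cfg path are placeholders for your own files:

```
# recalculate 9 anchors for the higher network resolution
./darknet detector calc_anchors data/obj.data -num_of_clusters 9 -width 832 -height 832

# then paste the printed anchors into the anchors= line of every [yolo] layer
# in yolov3.cfg, and set the matching resolution in the [net] section:
#   width=832
#   height=832
```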
I trained with 608x608 and detected with 608x608.
Yes
This trial was done with the default anchors. I will try now with the calculated anchors: num_of_clusters = 9, width = 608, height = 608 calculating k-means++ ... Saving anchors to the file: anchors.txt
It seems that something went wrong.
You should train Yolo v3 for many more iterations.
For video, Yolo averages detections over the last 3 frames (see "#define FRAMES 3").
@AlexeyAB Your support is really great, and really appreciated!!
I am using MSVS 2017. Where can I change this?
NVIDIA GeForce GTX 1060 with Max-Q Design
June 12
Sorry, I am quite a beginner. Do you mean replacing "#define FRAMES 3" with "#define FRAMES 1"? I believe the video is not averaging the position from frame n-1; I have just checked, and it really seems that frame n is showing the bounding box label for frame n+1. I think I explained it wrongly before.
Yes.
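For anyone following along, the change being confirmed here is a one-line edit in demo.c (the exact file and macro name can vary by revision; this sketch matches the macro quoted above), followed by rebuilding darknet:

```
// demo.c: number of frames over which detections are averaged
// #define FRAMES 3   // default: averages over 3 frames, which can make boxes lag by a frame
#define FRAMES 1      // disable averaging so boxes follow the current frame
```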
This is normal. For your GPU you shouldn't use CUDNN_HALF.
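For context, CUDNN_HALF enables mixed-precision (FP16) code paths that only pay off on GPUs with Tensor Cores (Volta/Turing and newer), which the GTX 1060 does not have. On Linux this is a Makefile option, sketched below with illustrative values; in the MSVS build it is the CUDNN_HALF preprocessor definition in the project settings.

```
# Makefile options in the AlexeyAB fork (illustrative values)
GPU=1
CUDNN=1
CUDNN_HALF=0   # keep disabled on pre-Volta GPUs such as the GTX 1060
OPENCV=1
```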
Hi again @AlexeyAB. It seems that setting the new anchors calculated for -width 608 -height 608 and setting steps=40000,45000 makes the performance worse.
I will keep training and I will let you know the progress.
I continued training, but the results do not seem to be improving. The avg loss is slightly decreasing, but the avg IOU and mAP are not improving, or are even getting worse. Any other ideas that might help here?
@ssanzr This is very strange.
If the batch was the same, but subdivisions was smaller for 416x416, then ...
If the anchors in the test dataset are very different from the anchors in the training dataset, that might be the reason that 608x608 has lower mAP than 416x416.
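For context on the batch/subdivisions point above: in the darknet cfg the mini-batch that actually has to fit in GPU memory is batch divided by subdivisions, so a smaller subdivisions value means more images per forward/backward pass. A hedged example:

```
# [net] section (illustrative values)
batch=64
subdivisions=16   # mini-batch = 64/16 = 4 images per forward/backward pass
width=608
height=608
```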
416x416: These values were chosen based on the maximum I can achieve without a CUDA error.
It makes sense to me. In Tiny Yolo there is no big difference between different resolutions.
Yes
num_of_clusters = 9, width = 608, height = 608 calculating k-means++ ... Saving anchors to the file: anchors.txt
@AlexeyAB @ssanzr Actually, I found that yolov3 is very sensitive to the anchors from dimension clustering. When using 9 anchors (yolov3) instead of 6 anchors (tiny yolov3), some problems are caused. Maybe this is the reason for your low accuracy. Also, I think dimension clustering has some problems on small datasets. The generated anchors are usually very close to each other, and this leads to low accuracy.
That is a good point, @ZHANGKEON. I will try Yolo V3 with the 6 anchors from Yolo Tiny V3 and see what happens.
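For reference, these are the stock anchors shipped in the two standard cfg files; if the 6 tiny anchors are reused in full Yolo V3, the num= and mask= fields of each [yolo] layer also have to be adjusted to match:

```
# yolov3.cfg: 9 anchors, num=9, masks 6,7,8 / 3,4,5 / 0,1,2 across the three [yolo] layers
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326

# yolov3-tiny.cfg: 6 anchors, num=6, masks 3,4,5 / 1,2,3 across the two [yolo] layers
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
```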
@ssanzr
Hi @ssanzr
I'm trying to run Yolov3 on grayscale by changing channels=1 in the yolov3.cfg file. Would you please guide me on what I should do now to use grayscale images with Yolov3? Thank you.
I have never seen any segmentation error with my grayscale images. Actually, my issue was that Yolo Tiny V3 worked and Yolo V3 did not. Sorry for not being of much help...
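For anyone landing on the grayscale question above: the change being described is the channels setting in the [net] section of the cfg. A minimal sketch, assuming a version of the AlexeyAB fork that supports 1-channel input:

```
[net]
channels=1   # 1 = grayscale, 3 = RGB (default)
width=416
height=416
```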
A likely reason is overfitting. The dataset is too small, and yolov3 is deep while yolov3_tiny is small. This can cause the problem. When you use the model in a real environment, a well-trained yolov3 has better performance.
@AlexeyAB Here is my Question
@Mahibro Hi,
@AlexeyAB If I set batch and subdivisions to 1, training will not progress; it will ask to set batch=64. Should I make any changes to use the GPU? (Please elaborate, I don't know much.)
@Mahibro You must set batch=64. Read this: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
@AlexeyAB Hi, thanks for your help. I have a small dataset, about 1000 images, and I am training my own dataset on Yolov3. In training, the avg loss is decreasing, but when I check it with a new dataset, the weights from the 1000th iteration give better results than those from the 4000th. Am I facing overfitting, or should I run it for more iterations? How many iterations are good for Yolov3 for a small dataset with 8 classes? Should I use tiny yolov3 instead? Thanks again!
@aaobscure What mAP do you get for weights of 1000 and 4000 iterations? |
Another question: it is not decreasing much after 3000 iterations; what should I do?
@aaobscure Do you get only 0.72%? That is very low.
You should have ~16 000 images and you should train ~16 000 iterations with batch=64.
You can try ...
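For a rough idea of the numbers involved, the README of the AlexeyAB fork suggests max_batches of about classes x 2000 and learning-rate steps at 80% and 90% of max_batches, so for 8 classes the [net] section would look roughly like this (illustrative values):

```
batch=64
subdivisions=16
max_batches=16000    # 8 classes x 2000
steps=12800,14400    # 80% and 90% of max_batches
scales=.1,.1         # learning rate is multiplied by 0.1 at each step
```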
Another question: how often should I change the learning rate, and how do I do it?
@ssanzr @AlexeyAB
I'm sorry for my bad English
@fashlaa Hi, just set ...
@AlexeyAB
Hey @AlexeyAB, please help. I used Tiny Yolov3, 6 anchors, batch 64, subdivisions 8, and 200 images, on Windows without a GPU. I have no idea why the avg loss becomes -nan after the 30th or 40th iteration. I relabeled the images and re-downloaded the repo, and it still has the same issue. Thank you in advance, Sir. Here are some details: num_of_clusters = 6, width = 416, height = 416
@AlexeyAB Which repository should we use for tiny yolov3?
How do you calculate the anchors for Tiny YoloV3? Your help, please.
@intelltech
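The usual way to compute anchors for the tiny model in the AlexeyAB fork is the same calc_anchors command with 6 clusters, using the width/height you train with; a sketch, where data/obj.data is a placeholder for your own data file:

```
./darknet detector calc_anchors data/obj.data -num_of_clusters 6 -width 416 -height 416
```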
Okay, thank you. But my YoloV3-Tiny.cfg is configured as 416x416 (which I also use to train); why 640x640?
@joelmatt
Hello, @ssanzr. What is the number of classes that you are using in this test? I'm asking because I'm facing a similar question in my experiments, but in my case I'm just using one class. I have similar results with full YOLO and Tiny YOLO and, in some cases, Tiny YOLO has better results.
Weights are only saved every 100 iterations until 900, and then every 10,000.
Did you manage to solve your issue? I am facing the same issue and I don't know how to go about solving it.
Hello @AlexeyAB, I am facing the following issue: for my custom dataset the avg loss is going down, but the mAP is still 0. I am training on EC2 with 4 GPUs. I first trained for 1000 iterations, as you suggested, on one GPU, and now I am training for 8000 iterations on all 4 GPUs. The avg loss is going down but the mAP is still 0, and I don't know what I should check. All my images are 1600 x 256 and I have kept them that way. In the cfg file I have ... I have checked with -show_imgs that the bounding boxes were showing properly. Do you have any suggestions on why the mAP is showing 0? Regards
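One thing worth double-checking in a setup like that: darknet resizes every input to the width/height in the [net] section, and both values must be multiples of 32. For a 1600 x 256 aspect ratio the cfg could plausibly use matching dimensions; this is a sketch, not advice given in the thread:

```
[net]
width=1600    # both dimensions must be divisible by 32; 1600 and 256 qualify
height=256
```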
Hi Everyone,
I have been training different YOLO networks for my custom dataset following the repository from @AlexeyAB, and I am quite puzzled about the performance obtained for each network.
I am using exactly the same testing and training .png dataset for every network. I have 700 training images and 300 for testing.
Performance summary for the different networks:
Why is the performance much worse when using Yolo V3 than when using Tiny Yolo V3?
Why does the input resolution not play a role in the Tiny Yolo performance, while it has a high impact on Yolo V3?
Anyone any idea?
Thanks