
Low loss but low mAP #3026

Closed
DarylWM opened this issue Apr 24, 2019 · 8 comments

DarylWM commented Apr 24, 2019

Hi @AlexeyAB .

When training yolov3_5l, I'm seeing low loss but also low mAP.

[screenshot omitted]

The machine has CUDA 9, CuDNN 7.5, and OpenCV 2.4.9. I'm using your latest repo.

[training chart omitted]

The chart above was for the default anchors. I've just started a training run with custom anchors. Would I get low loss if anchors were the problem though?


xiaohai12 commented Apr 24, 2019

Maybe you can use the map command to see the AP for each class, to check whether there is a problem with the class indices or something else.
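For example (the paths here are placeholders for your own .data, .cfg, and .weights files):

 ./darknet detector map data/obj.data cfg/yolov3_5l.cfg backup/yolov3_5l_final.weights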

AlexeyAB (Owner) commented:

@DarylWM Hi,

> The chart above was for the default anchors. I've just started a training run with custom anchors. Would I get low loss if anchors were the problem though?
> When training yolov3_5l, I'm seeing low loss but also low mAP.

Maybe most of the yolo layers (e.g. the 4 last layers) don't correspond to your object sizes, so most of the layers generate very low loss.

Do you get the same low mAP on training dataset as on validation dataset?


DarylWM commented Apr 24, 2019

Thanks @xiaohai12. I've reduced my problem to only one class. The map command shows a large number of false negatives (also many false positives, but for my problem I'm more concerned about FN).

 calculation mAP (mean average precision)...
3568
 detections_count = 141900, unique_truth_count = 23803  
class_id = 0, name = feature, ap = 8.38%   	 (TP = 2221, FP = 4720) 

 for thresh = 0.25, precision = 0.32, recall = 0.09, F1-score = 0.14 
 for thresh = 0.25, TP = 2221, FP = 4720, FN = 21582, average IoU = 21.12 % 

 IoU threshold = 50 %, used Area-Under-Curve for each unique Recall 
 mean average precision ([email protected]) = 0.083833, or 8.38 % 
Total Detection Time: 37.000000 Seconds
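(For reference: recall = TP / (TP + FN) = 2221 / 23803 ≈ 0.09 and precision = TP / (TP + FP) = 2221 / 6941 ≈ 0.32, so the low recall, and with it the low mAP, comes almost entirely from the false negatives.)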


DarylWM commented Apr 25, 2019

Hi @AlexeyAB .

> Do you get the same low mAP on training dataset as on validation dataset?

Do you mean the mAP I get when I point valid= to different sets? It's 8.95% when I point to the validation set, and 8.38% when I point to the test set.

> Maybe most of the yolo layers (e.g. the 4 last layers) don't correspond to your object sizes, so most of the layers generate very low loss.

I found your explanation here helpful, but so far I'm not understanding how to apply that guidance to yolov3_5l. For example, I didn't count 5 subsampling layers with stride=2 before the first yolo layer of yolov3.

AlexeyAB (Owner) commented:

@DarylWM

> Do you mean the mAP I get when I point valid= to different sets? It's 8.95% when I point to the validation set, and 8.38% when I point to the test set.

I mean, what mAP do you get if you set valid=train.txt?
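In other words, temporarily point valid= at the training list in your .data file (the file names here are just placeholders):

 classes = 1
 train = data/train.txt
 valid = data/train.txt
 names = data/obj.names
 backup = backup/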


There are 5 subsampling layers in yolov3_5l.cfg

  1. stride=2
  2. stride=2
  3. stride=2
  4. stride=2
  5. stride=2

Also there are 4 upsampling layers (each with stride=2).
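In the cfg each of these is roughly the following (a representative excerpt; the filters value differs at each stage):

 # one of the 5 subsampling steps: a stride-2 convolution
 [convolutional]
 batch_normalize=1
 filters=64
 size=3
 stride=2
 pad=1
 activation=leaky

 # one of the 4 upsampling steps
 [upsample]
 stride=2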


DarylWM commented Apr 26, 2019

Hi @AlexeyAB .

> What mAP do you get if you set valid=train.txt?

When I set valid=train.txt, the mAP is 71.45%.

My custom anchors are:
anchors = 15, 5, 14, 10, 23, 7, 23, 11, 14, 20, 21, 14, 23, 17, 34, 13, 23, 27, 31, 20, 43, 21, 35, 29, 28, 44, 47, 38, 49, 68
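(For reference, anchors like these are typically recalculated with darknet's calc_anchors command; 15 clusters for the 5 yolo layers, and the 416x416 size below is an assumption that should match the width/height in the cfg:)

 ./darknet detector calc_anchors data/obj.data -num_of_clusters 15 -width 416 -height 416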

How should I apply to yolov3_5l the guidance that the object size for each layer should be about 2^n * 2, where n = the number of subsampling layers, or does that only apply to yolov3?

AlexeyAB (Owner) commented:

@DarylWM Hi,

Set masks for anchors in the [yolo] layers in yolov3_5l.cfg:

  1. > 64x64 , 5 subsampling - 0 upsampling = 5 subsampling
  2. > 32x32 , 5 subsampling - 1 upsampling = 4 subsampling
  3. > 16x16 , 5 subsampling - 2 upsampling = 3 subsampling
  4. > 8x8 , 5 subsampling - 3 upsampling = 2 subsampling
  5. > 4x4 , 5 subsampling - 4 upsampling = 1 subsampling

For
anchors = 15, 5, 14, 10, 23, 7, 23, 11, 14, 20, 21, 14, 23, 17, 34, 13, 23, 27, 31, 20, 43, 21, 35, 29, 28, 44, 47, 38, 49, 68

Something like this:

  1. masks=14
  2. masks=11, 12, 13
  3. masks=4,5,6,7,8,9,10
  4. masks=1,2,3
  5. masks=0
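A sketch of how the first (coarsest) of these might look in yolov3_5l.cfg. Note that the actual cfg key is mask= (not masks=), num= must equal the total number of anchor pairs (15 here), and the [convolutional] layer directly before each [yolo] needs filters = (classes + 5) * (number of anchors in that mask); classes=1 below assumes your single-class setup:

 [convolutional]
 size=1
 stride=1
 pad=1
 filters=6
 activation=linear

 [yolo]
 mask = 14
 anchors = 15, 5, 14, 10, 23, 7, 23, 11, 14, 20, 21, 14, 23, 17, 34, 13, 23, 27, 31, 20, 43, 21, 35, 29, 28, 44, 47, 38, 49, 68
 classes=1
 num=15
 jitter=.3
 ignore_thresh = .7
 truth_thresh = 1
 random=1

The other [yolo] layers follow the same pattern, e.g. filters=18 before the 3-anchor masks and filters=42 before the 7-anchor mask.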


DarylWM commented Apr 26, 2019

That's very helpful - thanks @AlexeyAB .

DarylWM closed this as completed Apr 26, 2019