Is there a way to generate the number of TP/TN/FP/FN for each test image using the detect.py script? #5713
Comments
@ib124 no of course not. Where do you expect TP values to be produced in detect.py exactly? Where are the labels used in your imaginary detect.py coming from?
@glenn-jocher In that case, is there a way to print out these values using the val.py script?
@ib124 yes, that's a possibility! Several users have been asking for this, but we don't have it enabled by default. You can access these values directly in the code here; there is one FP/TP vector per IoU threshold 0.5:0.05:0.95: Lines 54 to 56 in eb51ffd
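The per-threshold structure mentioned above can be illustrated with a small standalone sketch (the variable names and shapes here are assumptions for illustration, not the exact YOLOv5 code):

```python
import numpy as np

# Each row is one detection; each column is one IoU threshold.
# A detection counts as a TP at every threshold its match IoU clears.
iouv = np.linspace(0.5, 0.95, 10)      # IoU thresholds 0.5:0.05:0.95
n_detections = 4
correct = np.zeros((n_detections, len(iouv)), dtype=bool)

# Suppose detections 0 and 2 matched a ground-truth label at the given IoUs:
for i, iou in ((0, 0.62), (2, 0.78)):
    correct[i] = iou >= iouv           # True where this IoU clears the threshold

tp_per_threshold = correct.sum(0)      # TP count at each IoU threshold
fp_per_threshold = n_detections - tp_per_threshold
print(tp_per_threshold)                # TPs at IoU 0.5, 0.55, ..., 0.95
```

Summing the boolean matrix down the rows is what yields a separate TP/FP count for each IoU threshold.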
That is what I needed. Thank you!
@ib124 another reason we don't display TP and FP is that the displayed information is a sufficient statistic to reconstruct them, so displaying them would be redundant. Anyone can reconstruct these using the provided metrics, same with F1. See https://en.wikipedia.org/wiki/Precision_and_recall

               Class     Images     Labels          P          R     mAP@.5  mAP@.5:.95
                 all        128        929      0.577      0.414       0.46       0.279
              person        128        254      0.723      0.531      0.601        0.35

For person class:
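The per-class reconstruction for the table above can be sketched as plain arithmetic (a reader's reconstruction from the precision/recall definitions R = TP/Labels and P = TP/(TP+FP); the printed metrics are rounded, so the counts are approximate):

```python
# person-class values from the printed table above
labels, p, r = 254, 0.723, 0.531

tp = r * labels      # ~134.9 true positives
fp = tp / p - tp     # ~51.7 false positives
print(round(tp), round(fp))
```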
@ib124 good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP

These TP and FP per-class vectors are left in val.py for users to access if they want: Line 240 in 36d12a5
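A minimal vectorized sketch of those two formulas, using the per-class values from the table earlier in the thread (illustrative variable names, not the exact val.py code; note the "all" row uses averaged P/R, so its reconstruction is only indicative):

```python
import numpy as np

labels = np.array([929, 254])   # instances per class (all, person)
p = np.array([0.577, 0.723])    # precision per class
r = np.array([0.414, 0.531])    # recall per class

tp = r * labels                 # TP = Recall * Labels
fp = tp / p - tp                # FP = TP / Precision - TP
print(tp.round(1), fp.round(1))
```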
To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
@glenn-jocher This is awesome, thank you! I greatly appreciate this.
@ib124 you're welcome! One thing to note is that these TP and FP values are computed at max-F1 confidence (same as the P and R results): Lines 82 to 87 in 7a39803
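Picking the max-F1 confidence point can be sketched as follows (toy precision/recall curves and grid spacing are assumptions; the idea is simply to take the argmax of F1 over candidate confidence thresholds):

```python
import numpy as np

conf_grid = np.linspace(0.0, 1.0, 1000)          # candidate confidence thresholds
p_curve = np.clip(0.3 + 0.7 * conf_grid, 0, 1)   # toy precision curve (rises with conf)
r_curve = np.clip(1.0 - 0.8 * conf_grid, 0, 1)   # toy recall curve (falls with conf)

f1 = 2 * p_curve * r_curve / (p_curve + r_curve + 1e-16)
i = f1.argmax()                                  # index of the max-F1 point
print(f"max-F1 confidence: {conf_grid[i]:.3f}, P={p_curve[i]:.3f}, R={r_curve[i]:.3f}")
```

P, R, TP, and FP reported at this single confidence describe the operating point where F1 peaks, not the whole curve.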
Search before asking

Question

I am doing some detection accuracy analysis, and I am looking to model how different confidence settings (i.e. --conf 0.6) affect the number of true positive/false positive detections for my data. Is there any way the detect.py script can be modified to list the TP/FP/TN/FN values for each class of each image? I have a custom model trained with multiple classes, but I only want these values for one class.

Note: I know that the val.py script graphs metrics such as the F1 and Precision-Recall curves, but I'm just trying to get some of the raw values for my individual calculations.

Additional

No response
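One way to get per-image TP/FP/FN counts for a single class at a chosen confidence threshold is to match detections to labels by IoU yourself. The sketch below is a reader's illustration under stated assumptions (xyxy box format, greedy one-to-one matching), not detect.py's or val.py's actual logic:

```python
import numpy as np

def box_iou(a, b):
    """IoU between one box `a` and an (M,4) array of boxes `b`, xyxy format."""
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 0] * 0 + (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1]))
    return inter / (area_a + area_b - inter + 1e-16)

def count_tp_fp_fn(dets, confs, labels, conf_thres=0.6, iou_thres=0.5):
    """dets: (N,4) predicted boxes, confs: (N,), labels: (M,4) GT boxes,
    all for one class of one image. Returns (TP, FP, FN)."""
    dets = dets[confs >= conf_thres]          # apply the confidence setting
    matched = np.zeros(len(labels), dtype=bool)
    tp = 0
    for d in dets:                            # greedily match each detection
        ious = box_iou(d, labels) if len(labels) else np.array([])
        ious[matched] = 0                     # each label matches at most once
        if len(ious) and ious.max() >= iou_thres:
            matched[ious.argmax()] = True
            tp += 1
    fp = len(dets) - tp
    fn = len(labels) - matched.sum()
    return tp, fp, int(fn)

labels = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], dtype=float)
dets = np.array([[1, 1, 10, 10], [20, 20, 30, 30], [50, 50, 60, 60]], dtype=float)
confs = np.array([0.9, 0.7, 0.8])
print(count_tp_fp_fn(dets, confs, labels))    # → (2, 1, 0)
```

Sweeping `conf_thres` over a grid then gives the TP/FP-vs-confidence behaviour asked about. Note that TN is not well defined for object detection, since "background" regions are not enumerable as negatives.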