TP/FP metrics per image #5725
👋 Hello @dlg4, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

```
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

- Google Colab and Kaggle notebooks with free GPU
- Google Cloud Deep Learning VM (see the GCP Quickstart Guide)
- Amazon Deep Learning AMI (see the AWS Quickstart Guide)
- Docker Image (see the Docker Quickstart Guide)

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@dlg4 if we take a step back, all metrics are computed per image, per class, and per IoU threshold, but are aggregated over the image space to produce a per-class, per-IoU metric set, which is then averaged over classes and over IoUs to display a single metric of each type, i.e. mAP@0.5:0.05:0.95. You would have to make significant breaking changes to val.py to output this information at the per-image granularity you propose.
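A rough sketch of that final averaging step, in Python (the array name and shape here are assumptions modeled on the per-class AP matrix that ap_per_class() in utils/metrics.py produces, filled with placeholder values):

```python
import numpy as np

# ap: per-class AP at each of the 10 IoU thresholds 0.5:0.05:0.95,
# shape (num_classes, 10); random placeholder values for illustration
ap = np.random.rand(80, 10)

ap50 = ap[:, 0]       # per-class AP at IoU threshold 0.5
map50 = ap50.mean()   # mAP@0.5, averaged over classes
map50_95 = ap.mean()  # mAP@0.5:0.95, averaged over classes and IoU thresholds
```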
@dlg4 another reason we don't display TP and FP is that the displayed information is a sufficient statistic to reconstruct them, so displaying them would be redundant. Anyone can reconstruct TP and FP from the provided metrics, same with F1. See https://en.wikipedia.org/wiki/Precision_and_recall

```
Class     Images  Labels      P      R  mAP@.5  mAP@.5:.95
all          128     929  0.577  0.414    0.46       0.279
person       128     254  0.723  0.531   0.601        0.35
```

For the person class:
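Concretely, using the person row above and the TP/FP formulas from the follow-up comment below:

TP = Recall * Labels = 0.531 * 254 ≈ 135
FP = TP / Precision - TP = 135 / 0.723 - 135 ≈ 52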
@dlg4 good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

```
TP = Recall * Labels
FP = TP / Precision - TP
```

These per-class TP and FP vectors are left in val.py for users to access if they want (val.py, line 240 at commit 36d12a5).
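A minimal sketch of that reconstruction, assuming per-class arrays `nt` (label counts), `p` (precision), and `r` (recall) like those val.py prints, seeded here with the person row as an example:

```python
import numpy as np

# Per-class values as printed by val.py (person row used for illustration)
nt = np.array([254])   # labels per class
p = np.array([0.723])  # precision per class
r = np.array([0.531])  # recall per class

tp = (r * nt).round()                 # TP = Recall * Labels
fp = (tp / (p + 1e-16) - tp).round()  # FP = TP / Precision - TP

print(tp, fp)  # -> [135.] [52.]
```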
To receive this update:

- Git – run `git pull` from within your yolov5/ directory, or `git clone https://github.com/ultralytics/yolov5` again
- PyTorch Hub – force-reload with `model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)`
- Notebooks – view the updated notebooks
- Docker – run `sudo docker pull ultralytics/yolov5:latest`
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
@glenn-jocher Thanks so much! This is wonderful.
Search before asking
Question
I am new to YOLOv5.
I am trying to modify the scripts to output true positives, true negatives, false positives, and false negatives for each test image. It would be nice for these metrics to be output in a .csv table, as the `save_stats=True` option does in val.py.

Where do I even begin in this process? I am not sure which script to modify. For example, in the utils\metrics.py script, we see the ap_per_class function. I see that the true positives and false positives are defined in this portion:
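A sketch of that portion, reproduced approximately from utils/metrics.py (exact names may differ slightly between versions):

```python
# Inside ap_per_class() in utils/metrics.py: accumulate FPs and TPs per class,
# with detections sorted by descending confidence
fpc = (1 - tp[i]).cumsum(0)  # cumulative false positives
tpc = tp[i].cumsum(0)        # cumulative true positives

# Recall and precision curves built from the cumulative counts
recall = tpc / (n_l + 1e-16)   # n_l = number of ground-truth labels for this class
precision = tpc / (tpc + fpc)
```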
...but how can I get these metrics (including TN and FN) to print out for each test image?
Additional
No response