
About TP, TN, FP, FN #1201

Closed
ferro07 opened this issue Oct 24, 2020 · 8 comments · Fixed by #5727
Labels
question (Further information is requested) · Stale (Stale and scheduled for closing soon)

Comments

@ferro07

ferro07 commented Oct 24, 2020

❔ Question

Hi @glenn-jocher
Thanks for this great work.
I am trying to add TP, FP, FN to the test results. First, I am using this function to get only tp: def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, fname='precision-recall_curve.png'):

In test.py I added tp to the # Compute statistics section, but I am getting the following:

Traceback (most recent call last):
File "test.py", line 272, in
test(opt.data,
File "test.py", line 88, in test
s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95', 'tp')
TypeError: not all arguments converted during string formatting

Thanks in advance for your help

Additional context
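For readers hitting the same error: Python raises "not all arguments converted during string formatting" when the % tuple contains more values than the format string has placeholders. Here '%20s' + '%12s' * 6 provides seven slots while eight column names are passed. A minimal sketch of a fix, assuming the only change made to test.py was appending 'tp' to this header:

```python
# Eight column names now, so widen the header to one %20s plus seven %12s slots.
s = ('%20s' + '%12s' * 7) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95', 'tp')
```

The numeric format string used later to print the per-class result rows would need a matching extra slot as well.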

@ferro07 ferro07 added the question (Further information is requested) label Oct 24, 2020
@glenn-jocher
Member

@ferro07 you need to put breakpoints near these variables to understand what you're getting into, as these are large multidimensional matrices of 1's and 0's. I don't know what you want to do with them exactly but I doubt printing them to screen would be very useful.
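As an illustration of that suggestion (a sketch only, not the project's official workflow), a breakpoint placed just before ap_per_class() is called in test.py exposes a tp matrix shaped like the toy example below; summing a column gives the TP count at that IoU threshold:

```python
import numpy as np

# Toy stand-in for the tp matrix test.py accumulates: one row per prediction,
# one column per IoU threshold (0.5:0.05:0.95), entries are 1 for a true positive.
tp = np.array([[1, 1, 0],
               [1, 0, 0],
               [0, 0, 0]])

print(tp.shape)                          # (3, 3) here; (num_predictions, 10) in test.py
print('TP @ IoU 0.5:', tp[:, 0].sum())   # column 0 corresponds to IoU 0.5 -> 2
```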

@jaqub-manuel

jaqub-manuel commented Oct 25, 2020

@glenn-jocher thanks for the quick reply.
I also want to find the number of FNs, so that I can try different techniques to reduce FP and FN or increase TP, and so on.
(Precision = TP / (TP + FP) and Recall = TP / (TP + FN)); if I print these in test.py, I can analyze my custom dataset in more detail.
You already have these values in general.py (def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, fname='precision-recall_curve.png'):),
but I got errors.
Thanks
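For reference, those definitions written out in code (the counts below are made-up per-class numbers, purely to show the arithmetic):

```python
import numpy as np

tp = np.array([80, 45])   # true positives per class (hypothetical)
fp = np.array([10, 15])   # false positives per class (hypothetical)
fn = np.array([20, 30])   # false negatives per class (hypothetical)

precision = tp / (tp + fp)   # P = TP / (TP + FP)
recall = tp / (tp + fn)      # R = TP / (TP + FN)
print(precision)             # approximately [0.889, 0.75]
print(recall)                # [0.8, 0.6]
```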

@glenn-jocher
Member

@jaqub-manuel sure. Any code modifications are up to you.

@ZwNSW

ZwNSW commented Oct 31, 2020

@glenn-jocher I also want to print tp, fp, fn to the screen, but I don't know how to modify the code. This is a difficult problem for me.
Thanks!

@ZwNSW

ZwNSW commented Oct 31, 2020

@ferro07 Hi, have you solved this problem? I want to get TP, FP, FN to analyze my problem, but I cannot print them out to my terminal.

@glenn-jocher
Member

@ZwNSW TP, FP are computed here:

yolov5/utils/general.py

Lines 250 to 263 in c8c5ef3

def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, fname='precision-recall_curve.png'):
""" Compute the average precision, given the recall and precision curves.
Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
# Arguments
tp: True positives (nparray, nx1 or nx10).
conf: Objectness value from 0-1 (nparray).
pred_cls: Predicted object classes (nparray).
target_cls: True object classes (nparray).
plot: Plot precision-recall curve at mAP@0.5
fname: Plot filename
# Returns
The average precision as computed in py-faster-rcnn.
"""

@github-actions
Contributor

github-actions bot commented Dec 1, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale (Stale and scheduled for closing soon) label Dec 1, 2020
@github-actions github-actions bot closed this as completed Dec 6, 2020
@glenn-jocher
Member

glenn-jocher commented Nov 20, 2021

@ferro07 @jaqub-manuel good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py

Line 240 in 36d12a5

tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
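Applied to the per-class vectors on that line, the two formulas above look roughly like this (a sketch with hypothetical numbers; in val.py, p and r come from ap_per_class() and nt is the per-class ground-truth label count, while the small epsilon only guards against division by zero):

```python
import numpy as np

p = np.array([0.90, 0.75])    # precision per class (hypothetical)
r = np.array([0.80, 0.60])    # recall per class (hypothetical)
nt = np.array([100, 75])      # ground-truth labels per class (hypothetical)

tp = r * nt                   # TP = Recall * Labels            -> [80. 45.]
fp = tp / (p + 1e-16) - tp    # FP = TP / Precision - TP        -> approx. [8.9 15.]
fn = nt - tp                  # FN = Labels - TP (missed boxes) -> [20. 30.]
```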

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View the updated notebooks (Open In Colab, Open In Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher glenn-jocher linked a pull request Nov 20, 2021 that will close this issue