
generating prediction-stat: "P", "R", "mAP" per image in test.py #2437

Closed · tjbe2021 opened this issue Mar 12, 2021 · 9 comments · Fixed by #5727
Labels: question (Further information is requested) · Stale (Stale and scheduled for closing soon)

@tjbe2021

❔Question

Hi there, I need some help generating prediction stats per image in the "val" folder when running the test.py script; I couldn't find a way to do it by going through the code. I'd appreciate the help.

@tjbe2021 added the question (Further information is requested) label Mar 12, 2021
@github-actions (bot) commented Mar 12, 2021

👋 Hello @tjbe2021, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Google Colab and Kaggle notebooks with free GPU
  • Google Cloud Deep Learning VM (see the GCP Quickstart Guide)
  • Amazon Deep Learning AMI (see the AWS Quickstart Guide)
  • Docker Image (see the Docker Quickstart Guide)

Status

[CI CPU testing badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@maheshmechengg commented Mar 12, 2021

When I give test.py an image size other than the model's (416), my anchors are misplaced on the test labels and test predictions:
[screenshot]

@glenn-jocher (Member)

@tjbe2021 test.py runs by default on your data.yaml test: directory. You can point it at the data.yaml val: directory with:

```
python test.py --task val
```

@maheshmechengg please raise a new issue for new topics. Your labels are incorrect.

@tjbe2021 (Author)

@glenn-jocher Thanks for the prompt reply. I've changed the default to val, and I get the values for each class and for all classes.

However, I'm trying to generate the P, R, and mAP for each image inside the val directory so that I can perform some sanity checks. Is that possible with this? I tried tweaking the code, but it doesn't seem to work.

@glenn-jocher (Member)

@tjbe2021 AP is not computed per image; it is computed per class over all images and then averaged as mAP. If you want to obtain mAP on one image, your dataset would have to be a single image.
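If you still want a per-image breakdown, one workaround is to run test.py once per image, each time against a temporary one-image dataset. A minimal sketch (not an official feature; the file names one_image.txt/one_image.yaml, the val path, and the nc/names values are illustrative assumptions to adapt to your data):

```python
import subprocess
from pathlib import Path

# Evaluate each val image as its own 1-image dataset, so the P/R/mAP
# that test.py reports applies to that image alone.
val_dir = Path('data/images/val')  # assumed location of your val images
for img in sorted(val_dir.glob('*.jpg')):
    Path('one_image.txt').write_text(f'{img}\n')  # YOLOv5 data yamls accept *.txt lists of image paths
    Path('one_image.yaml').write_text(
        'train: one_image.txt\n'
        'val: one_image.txt\n'
        'nc: 1\n'                 # adjust to your class count
        "names: ['object']\n")    # adjust to your class names
    subprocess.run(['python', 'test.py', '--data', 'one_image.yaml',
                    '--weights', 'best.pt', '--task', 'val'], check=True)
```

This is slow (one full test.py run per image) but requires no changes to the repo code.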

@tjbe2021 (Author)

@glenn-jocher Great, thanks for the answer, that makes sense! By any chance, does metrics.py produce any results for TP and FP values?

@glenn-jocher (Member)

@tjbe2021 YOLOv5 TP and FP vectors are computed here:

yolov5/utils/general.py, Lines 250 to 319 in c8c5ef3:

```python
def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, fname='precision-recall_curve.png'):
    """ Compute the average precision, given the recall and precision curves.
    Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
    # Arguments
        tp:  True positives (nparray, nx1 or nx10).
        conf:  Objectness value from 0-1 (nparray).
        pred_cls:  Predicted object classes (nparray).
        target_cls:  True object classes (nparray).
        plot:  Plot precision-recall curve at mAP@0.5
        fname:  Plot filename
    # Returns
        The average precision as computed in py-faster-rcnn.
    """

    # Sort by objectness
    i = np.argsort(-conf)
    tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]

    # Find unique classes
    unique_classes = np.unique(target_cls)

    # Create Precision-Recall curve and compute AP for each class
    px, py = np.linspace(0, 1, 1000), []  # for plotting
    pr_score = 0.1  # score to evaluate P and R https://github.com/ultralytics/yolov3/issues/898
    s = [unique_classes.shape[0], tp.shape[1]]  # number class, number iou thresholds (i.e. 10 for mAP0.5...0.95)
    ap, p, r = np.zeros(s), np.zeros(s), np.zeros(s)
    for ci, c in enumerate(unique_classes):
        i = pred_cls == c
        n_gt = (target_cls == c).sum()  # Number of ground truth objects
        n_p = i.sum()  # Number of predicted objects

        if n_p == 0 or n_gt == 0:
            continue
        else:
            # Accumulate FPs and TPs
            fpc = (1 - tp[i]).cumsum(0)
            tpc = tp[i].cumsum(0)

            # Recall
            recall = tpc / (n_gt + 1e-16)  # recall curve
            r[ci] = np.interp(-pr_score, -conf[i], recall[:, 0])  # r at pr_score, negative x, xp because xp decreases

            # Precision
            precision = tpc / (tpc + fpc)  # precision curve
            p[ci] = np.interp(-pr_score, -conf[i], precision[:, 0])  # p at pr_score

            # AP from recall-precision curve
            for j in range(tp.shape[1]):
                ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
                if j == 0:
                    py.append(np.interp(px, mrec, mpre))  # precision at mAP@0.5

    # Compute F1 score (harmonic mean of precision and recall)
    f1 = 2 * p * r / (p + r + 1e-16)

    if plot:
        py = np.stack(py, axis=1)
        fig, ax = plt.subplots(1, 1, figsize=(5, 5))
        ax.plot(px, py, linewidth=0.5, color='grey')  # plot(recall, precision)
        ax.plot(px, py.mean(1), linewidth=2, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
        ax.set_xlabel('Recall')
        ax.set_ylabel('Precision')
        ax.set_xlim(0, 1)
        ax.set_ylim(0, 1)
        plt.legend()
        fig.tight_layout()
        fig.savefig(fname, dpi=200)

    return p, r, ap, f1, unique_classes.astype('int32')
```

They aren't printed by default; you'd have to introduce some custom code to see them.

fpc and tpc are the FP and TP arrays of shape (n, 10), for the 10 IoU thresholds of 0.5:0.95. The last row is the total FP and TP count per IoU threshold:

```
tpc.shape
Out[3]: (3444, 10)
fpc.shape
Out[4]: (3444, 10)
tpc[-1]
Out[5]: array([138, 124, 105,  91,  80,  66,  54,  38,  22,   9])
fpc[-1]
Out[6]: array([3306, 3320, 3339, 3353, 3364, 3378, 3390, 3406, 3422, 3435])
```

So at 0.5 IoU and a 0.001 confidence threshold, for class 0, inference over the dataset results in 138 TPs and 3306 FPs.
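To see where those last-row counts come from, here is a tiny self-contained sketch of the same cumsum accumulation (the 5x3 tp matrix below is made up; the real one is n x 10):

```python
import numpy as np

# Synthetic tp matrix: 5 predictions sorted by descending confidence,
# evaluated at 3 IoU thresholds (the real code uses 10, for 0.5:0.95).
tp = np.array([[1, 1, 0],
               [1, 0, 0],
               [0, 0, 0],
               [1, 1, 1],
               [0, 0, 0]])

tpc = tp.cumsum(0)        # cumulative true positives per threshold
fpc = (1 - tp).cumsum(0)  # cumulative false positives per threshold

print(tpc[-1])  # [3 2 1] -> total TPs at each IoU threshold
print(fpc[-1])  # [2 3 4] -> total FPs at each IoU threshold
```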

@github-actions (bot)

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions bot added the Stale (Stale and scheduled for closing soon) label Apr 12, 2021
@glenn-jocher (Member) commented Nov 20, 2021

@tjbe2021 good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

```
TP = Recall * Labels
FP = TP / Precision - TP
```

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py, Line 240 in 36d12a5:

```python
tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
```
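As a quick numeric check of the two formulas above (the class counts and P/R values here are made up):

```python
import numpy as np

labels = np.array([100, 40])        # ground-truth instances per class
recall = np.array([0.80, 0.50])     # per-class R
precision = np.array([0.90, 0.60])  # per-class P

tp = recall * labels      # TP = Recall * Labels      -> [80., 20.]
fp = tp / precision - tp  # FP = TP / Precision - TP  -> [~8.89, ~13.33]
print(tp, fp)
```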

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks (Open In Colab, Open In Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher linked a pull request Nov 20, 2021 that will close this issue