TP/FP metrics per image #5725

Closed
1 task done
dlg4 opened this issue Nov 19, 2021 · 5 comments · Fixed by #5727
Labels
question (Further information is requested)

Comments

dlg4 commented Nov 19, 2021

Search before asking

Question

I am new to YOLOv5.

I am trying to modify the scripts to output true positives, true negatives, false positives, and false negatives for each test image. Ideally these metrics would be written out as a .csv table, similar to what the save_stats=True option in val.py produces.

Where should I begin? I am not sure which script to modify. For example, utils/metrics.py contains the ap_per_class function:

def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
    """ Compute the average precision, given the recall and precision curves.
    Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
    # Arguments
        tp:  True positives (nparray, nx1 or nx10).
        conf:  Objectness value from 0-1 (nparray).
        pred_cls:  Predicted object classes (nparray).
        target_cls:  True object classes (nparray).
        plot:  Plot precision-recall curve at [email protected]
        save_dir:  Plot save directory
    # Returns
        The average precision as computed in py-faster-rcnn.
    """

    # Sort by objectness
    i = np.argsort(-conf)
    tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]

    # Find unique classes
    unique_classes = np.unique(target_cls)
    nc = unique_classes.shape[0]  # number of classes

    # Create Precision-Recall curve and compute AP for each class
    px, py = np.linspace(0, 1, 1000), []  # for plotting
    ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
    for ci, c in enumerate(unique_classes):
        i = pred_cls == c
        n_l = (target_cls == c).sum()  # number of labels
        n_p = i.sum()  # number of predictions

        if n_p == 0 or n_l == 0:
            continue
        else:
            # Accumulate FPs and TPs
            fpc = (1 - tp[i]).cumsum(0)
            tpc = tp[i].cumsum(0)

            # Recall
            recall = tpc / (n_l + 1e-16)  # recall curve
            r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0)  # negative x, xp because xp decreases

            # Precision
            precision = tpc / (tpc + fpc)  # precision curve
            p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1)  # p at pr_score

            # AP from recall-precision curve
            for j in range(tp.shape[1]):
                ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
                if plot and j == 0:
                    py.append(np.interp(px, mrec, mpre))  # precision at mAP@0.5

    # Compute F1 (harmonic mean of precision and recall)
    f1 = 2 * p * r / (p + r + 1e-16)
    names = [v for k, v in names.items() if k in unique_classes]  # list: only classes that have data
    names = {i: v for i, v in enumerate(names)}  # to dict
    if plot:
        plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
        plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
        plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
        plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')

    i = f1.mean(0).argmax()  # max F1 index
    return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
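
For reference, it looks like val.py calls this function roughly as follows (my sketch of the call site; stats, plots, save_dir, and names are objects val.py already builds, so treat the exact names as assumptions):

import numpy as np

# stats is accumulated per image inside val.py's loop as tuples of
# (correct, conf, pred_cls, target_cls); concatenating merges all images
stats = [np.concatenate(x, 0) for x in zip(*stats)]  # lists of arrays -> single arrays
if len(stats) and stats[0].any():
    p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)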

I see that the true positives and false positives are defined in this portion:

# Accumulate FPs and TPs
fpc = (1 - tp[i]).cumsum(0)
tpc = tp[i].cumsum(0)

...but how can I get these metrics (including TN and FN) to print out for each test image?

Additional

No response

dlg4 added the question (Further information is requested) label Nov 19, 2021
github-actions bot (Contributor) commented Nov 19, 2021

👋 Hello @dlg4, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any up-to-date verified environment with all dependencies (including CUDA/CUDNN, Python and PyTorch) preinstalled, e.g. the official Colab/Kaggle notebooks or the ultralytics/yolov5 Docker image.

Status

(CI CPU testing badge)

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher (Member) commented Nov 19, 2021

@dlg4 If we take a step back: all metrics are computed per image, per class, and per IoU threshold, but they are aggregated over images to produce a per-class, per-IoU metric set, which is then averaged over classes and over IoUs to display a single metric of each type, e.g. mAP@0.5:0.05:0.95.

You would have to make significant breaking changes to val.py to output this information at the per-image granularity you propose.
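
To illustrate what per-image output would involve, here is a minimal sketch (not existing YOLOv5 code). It assumes access to the per-image correct matrix that val.py builds with process_batch(), shape (n_predictions, n_iou_thresholds), plus each image's path and label count nl; those two variables are hypothetical. Note that TN has no standard definition for detectors, since every background region with no prediction is a "negative", so only TP/FP/FN are counted.

import csv

def per_image_counts(correct, n_labels, iou_index=0):
    # correct: (n_predictions, n_iou_thresholds) boolean matrix for one image;
    # iou_index 0 corresponds to the lowest IoU threshold (0.5 in val.py)
    tp = int(correct[:, iou_index].sum())  # predictions matched to a label
    fp = int(correct.shape[0]) - tp        # unmatched predictions
    fn = int(n_labels) - tp                # labels with no matching prediction
    return tp, fp, fn

rows = []  # inside the val loop: rows.append((str(path), *per_image_counts(correct, nl)))
with open('per_image_stats.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['image', 'TP', 'FP', 'FN'])
    writer.writerows(rows)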

glenn-jocher added the TODO (High priority items) label Nov 19, 2021
glenn-jocher changed the title from "Trying to modify scripts to print out true positives, false positives, true positives, false positives" to "TP/FP metrics per image" Nov 19, 2021
glenn-jocher (Member) commented Nov 19, 2021

@dlg4 Another reason we don't display TP and FP is that the displayed information is a sufficient statistic for reconstructing them, so showing them directly would be redundant. Anyone can recover TP and FP from the provided metrics, and F1 likewise. See https://en.wikipedia.org/wiki/Precision_and_recall

               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95
                 all        128        929      0.577      0.414       0.46      0.279
              person        128        254      0.723      0.531      0.601       0.35

For the person class:

TP = Recall * Labels = 0.531 * 254 ≈ 135
FP = TP / Precision - TP = 135 / 0.723 - 135 ≈ 52
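
As a quick check, the same arithmetic in Python, using the person row from the table above:

labels, precision, recall = 254, 0.723, 0.531
tp = round(recall * labels)      # 0.531 * 254 = 134.9 -> 135
fp = round(tp / precision - tp)  # 135 / 0.723 - 135 = 51.7 -> 52
print(tp, fp)  # 135 52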

glenn-jocher removed the TODO (High priority items) label Nov 19, 2021
glenn-jocher linked a pull request (#5727) Nov 20, 2021 that will close this issue
glenn-jocher (Member) commented Nov 20, 2021

@dlg4 good news 😃! Your original issue may now be fixed ✅ in PR #5727. This PR explicitly computes TP and FP from the existing Labels, P, and R metrics:

TP = Recall * Labels
FP = TP / Precision - TP

These TP and FP per-class vectors are left in val.py for users to access if they want:

yolov5/val.py, line 240 at commit 36d12a5:

tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
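
If you want the .csv output from the original question, a hypothetical follow-on (not part of the PR) could dump these vectors directly; tp, fp, ap_class, and names here are the objects from the val.py line above:

import csv

# one row per class; names maps class index -> class name in val.py
with open('per_class_tp_fp.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['class', 'TP', 'FP'])
    for c, tp_c, fp_c in zip(ap_class, tp, fp):
        writer.writerow([names[int(c)], int(tp_c), int(fp_c)])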

To receive this update:

  • Git – run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated Colab and Kaggle notebooks
  • Docker – run sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

dlg4 (Author) commented Nov 22, 2021

@glenn-jocher Thanks so much! This is wonderful.
