Evaluating a Prediction.slp #1663
-
Hi @jramborger78, I think what you are looking for is a way to evaluate metrics on a per video basis? That way you can see how the model performs on the high resolution videos as compared to the lower resolution videos. If that is what you want, then I copy/pasted from a script I had already written (and used, although the frankensteined script by itself is untested, so let me know if you need any fixes made).

Liezl

P.S. In case the gist link stops working at any time in the future:

```python
"""This module defines functions to evaluate metrics on a per video basis."""
from pathlib import Path
from typing import Optional
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sleap
from sleap import Labels
from sleap.nn.evals import evaluate
def plot_distances(metrics, labels_gt, fn_models: str, save: bool = False):
    """Plots node distances (using matplotlib widget)."""
    fig = plt.figure()
    ax = fig.add_subplot(111)
    node_names = labels_gt.skeleton.node_names if labels_gt.skeleton else None
    dists = pd.DataFrame(metrics["dist.dists"], columns=node_names).melt(
        var_name="Part", value_name="Error"
    )
    sns.boxplot(data=dists, x="Error", y="Part", fliersize=0, ax=ax)
    sns.stripplot(
        data=dists, x="Error", y="Part", alpha=0.25, linewidth=1, jitter=0.2, ax=ax
    )

    # Add the video name to the title when evaluating a single video.
    title = "Node distances (ground truth vs prediction)"
    if len(labels_gt.videos) == 1:
        fn_video = Path(labels_gt.video.filename)
        title += f": {fn_video.name}"
        fig_stem = fn_video.stem
    else:
        fig_stem = "all_videos"
    ax.set_title(title)

    # Limit the x-axis to (roughly) the 95th percentile of the errors.
    dist_1d = metrics["dist.dists"].flatten()
    xmax = np.ceil(np.ceil(np.nanpercentile(dist_1d, 95) / 5) + 1) * 5
    ax.set_xlim([0, xmax])
    ax.set_xlabel("Error (px)")

    if save:
        # Save the figure in the model folder, named after the video.
        fn_fig = Path(fn_models, fig_stem).with_suffix(".png")
        plt.savefig(fn_fig)
    return fig
def main(fn_models: str, fn_labels_gt: str):
    # Create a predictor from the trained model.
    predictor = sleap.load_model(fn_models)

    # Load the ground truth labels and evaluate each video separately.
    labels_gt = sleap.load_file(fn_labels_gt)
    for video in labels_gt.videos:
        # Extract labeled frames from a single video.
        lfs_video = labels_gt.find(video)
        labels_gt_video = Labels(lfs_video)
        labels_pr_video = predictor.predict(labels_gt_video)

        # Calculate metrics. Note: this needs the updated evaluate function
        # (newer SLEAP versions) that returns the per-frame distance keys used below.
        video_metrics = evaluate(labels_gt_video, labels_pr_video)

        # Save per-frame node distances to CSV.
        fn_video = Path(video.filename)
        fn_csv = Path(fn_models, fn_video.stem).with_suffix(".csv")
        df = pd.DataFrame(
            video_metrics["dist.dists"], columns=labels_gt_video.skeleton.node_names
        )
        df = df.assign(frame_idx=video_metrics["dist.frame_idxs"])
        df = df.assign(video_fn=video_metrics["dist.video_paths"])
        df.to_csv(fn_csv, index=False)

        # Plot and save figure.
        fig = plot_distances(video_metrics, labels_gt_video, fn_models, save=True)
        fig.show()


if __name__ == "__main__":
    fn_models = "path/to/models/folder/date_time_model_type.n=N"  # TODO: Replace with real path
    fn_labels_gt = "path/to/the/labels.slp"  # TODO: Replace with real path
    main(fn_models, fn_labels_gt)
```
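Once this runs, each video gets its own CSV of per-frame node distances in the model folder. If it helps, here is a minimal sketch (not part of SLEAP or the script above, just an illustration with pandas) of how you could compare the per-video CSVs afterward:
```python
from pathlib import Path
import pandas as pd

fn_models = "path/to/models/folder/date_time_model_type.n=N"  # same folder as above

summaries = []
for fn_csv in Path(fn_models).glob("*.csv"):
    df = pd.read_csv(fn_csv)
    # Keep only the node-distance columns written by the script above.
    node_cols = [c for c in df.columns if c not in ("frame_idx", "video_fn")]
    # Median pixel error per node for this video.
    summaries.append(df[node_cols].median().rename(fn_csv.stem))

# One row per video, one column per node: easy to spot which videos
# (e.g., the higher resolution ones) have larger errors.
print(pd.DataFrame(summaries))
```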
-
This is awesome, thank you so much!
I think there may be a bug of some kind. I believe I have everything correct, only needing to add the path to my centered model folder and to the .slp track result, but got this:
```python
Traceback (most recent call last):
File "sleap_metrics.py", line 93, in <module>
main(fn_models, fn_labels_gt)
File "sleap_metrics.py", line 81, in main
df = pd.DataFrame(video_metrics["dist.dists"],
columns=labels_gt_video.skeleton.node_names)
KeyError: 'dist.dists'
```
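For anyone hitting the same `KeyError`, a quick way to check which metric keys your installed `evaluate` actually returns (a minimal sketch reusing the variable names from the script above):
```python
# List the metric keys returned by evaluate() to confirm whether
# "dist.dists" exists in the installed SLEAP version.
from sleap.nn.evals import evaluate

video_metrics = evaluate(labels_gt_video, labels_pr_video)
print(sorted(video_metrics.keys()))
```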
Also, I had to add this at the top for the "Optional" mention:
```python
from typing import Optional
```
-
Another small note:
```python
import numpy as np
```
should be added to the top.
I figured out the issue I had: I was using a tracked results .slp as the input, but I noticed it needs the labels .slp from the model itself, and it then provided metrics for the labels/predictions within that labels .slp. Am I following that correctly?
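In code terms, I think the distinction looks something like this (just a sketch; the paths are placeholders):
```python
import sleap

# Ground-truth labels: the user-labeled .slp used to train the model.
# evaluate() compares these user instances against fresh predictions.
labels_gt = sleap.load_file("path/to/labels.slp")  # contains user instances

# A tracked results .slp only contains predicted instances, so there is
# no ground truth to score against -- hence no metrics from it alone.
labels_tracked = sleap.load_file("path/to/tracked.slp")  # predictions only
```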
I am going to work on it tomorrow, but the aim is to get metrics for a tracked .slp.
Sincerely,
Jarryd Ramborger
-
Hey guys, following up on this thread about evaluating a tracked .slp. Thanks!
Beta Was this translation helpful? Give feedback.
-
Hey guys,
Was curious if there was a way to evaluate an .slp generated by a tracking inference, similar to the metrics generated for a model. As mentioned prior in #1657, I am attempting to add higher resolution videos (720x1280) to my present model (800x600) and it isn't going too well. If I attempt to use the prior model weights, everything looks the same across the board metric-wise and in the localization boxes, but the 99th percentile is upwards of 400 px error. If I attempt to start fresh, I can't seem to get the catheter centroid to be nearly as precise as in the prior model somehow; it isn't terrible, but it was within 1 px in the present model.

Long story short: when I run a tracking inference with a higher resolution video, given that SLEAP will resize it, I am trying to see whether it is metrically similar to one of my lower resolution videos, since gliding through the track in the GUI they appear similar in accuracy and error. For visualization I attached the metric outputs of the present model, the best prior weights attempt, and the best fresh weights attempt.
(Attachments: Present Model, Prior Weights Attempt, New Weights Attempt)