
Fix VisualPredictor (and Visualize Model Outputs...) #1104

Closed
wants to merge 7 commits

Conversation

roomrys (Collaborator) commented Jan 4, 2023

Description

A previous refactoring seems to have broken the VisualPredictor subclass. This PR gets the Visualize Model Outputs... menu action working again.

Moreover, this PR converts VisualPredictor from a subclass of Predictor to a wrapper class of any Predictor subclass.


This PR has grown fairly large and touches the codebase broadly. It will likely need to be broken up into smaller PRs, where PRs 1-3 build the base for a smooth VisualPredictor integration:

  1. Support custom callbacks for add_menu_check_item
  2. Add from_trained_models method as an abstractmethod to the Predictor class and add a categorize_model_paths method to Predictor (unless it fits better elsewhere: Model?)
  3. Change from_trained_models on all Predictor subclasses to accept either a generic argument for the model paths, model_paths, OR specific paths for each model, e.g. centroid_model_path, confmap_model_path (the current behavior)
  4. Add the VisualPredictor wrapper class with all its features
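The wrapper approach described above can be illustrated with a minimal, hypothetical sketch (class bodies and names here are placeholders, not the actual implementation in sleap/nn/inference.py): VisualPredictor holds a Predictor instance and delegates any attribute it does not define itself, instead of subclassing.

```python
class Predictor:
    """Stand-in for the real Predictor base class (hypothetical body)."""

    def predict(self, frames):
        return [f"labels for {f}" for f in frames]


class VisualPredictor:
    """Wraps any Predictor subclass, adding visualization hooks without subclassing."""

    def __init__(self, predictor):
        self.predictor = predictor

    def __getattr__(self, attr):
        # Only called for attributes NOT found on VisualPredictor itself,
        # so wrapper-specific methods take precedence over the wrapped ones.
        return getattr(self.predictor, attr)
```

Because `__getattr__` is consulted only after normal attribute lookup fails, the wrapper stays transparent for everything it does not override.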

Types of changes

  • Bugfix
  • New feature
  • Refactor / Code style update (no logical changes)
  • Build / CI changes
  • Documentation Update
  • Other (explain)

Does this address any currently open issues?

Outside contributors checklist

  • Review the guidelines for contributing to this repository
  • Read and sign the CLA and add yourself to the authors list
  • Make sure you are making a pull request against the develop branch (not main), and start your branch from develop
  • Add tests that prove your fix is effective or that your feature works
  • Add necessary documentation (if appropriate)

Thank you for contributing to SLEAP!

❤️

Comment on lines +2007 to +2014


if __name__ == "__main__":
    import os

    ds = os.environ["ds-dmc"]

    main([ds])
roomrys (Collaborator Author) commented:

Remove

Comment on lines -1667 to +1734
- text="In order to use this function you must first quit and "
-     "re-open SLEAP to release resources used by visualizing "
-     "model outputs."
+ text="In order to use this function you must uncheck "
+     "'Visualize Model Outputs' to release resources used."
roomrys (Collaborator Author) commented:

Flagging this @talmo to double-check that this is true. I just self.overlays.pop("inference", None) to free resources. I've tested this (not extensively) and haven't run into any problems.
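The pop-to-release approach does drop the Python-side reference: once the overlays dict no longer holds the overlay, the object becomes collectible (in CPython it is typically freed immediately by reference counting). A minimal sketch, using a hypothetical InferenceOverlay stand-in; note this says nothing about GPU memory held by a loaded model, which is a separate concern:

```python
import gc
import weakref


class InferenceOverlay:
    """Hypothetical stand-in for the inference overlay object."""


overlays = {}
overlays["inference"] = InferenceOverlay()
ref = weakref.ref(overlays["inference"])  # observe the overlay without keeping it alive

# Popping removes the dict's (only) reference, so the overlay is released.
overlays.pop("inference", None)
gc.collect()

print(ref())  # None once the overlay has been collected
```

If other objects still hold references to the overlay (e.g. a Qt player widget), popping alone would not free it; that is worth checking when verifying resources are actually released.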

Collaborator commented:

Does GPU memory get released? This might be from loading the model.

Comment on lines +513 to +515
def addInferenceToOverlays(self):
    self.execute(AddInferenceToOverlays)

roomrys (Collaborator Author) commented:
Either get it working or remove
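For context, here is a hedged, simplified sketch of the ask/do_action dispatch that self.execute(AddInferenceToOverlays) relies on (class bodies are assumptions for illustration, not the actual sleap/gui/commands.py implementation): `ask` gathers parameters and can veto the command, and `do_action` then applies it.

```python
class EditCommand:
    @staticmethod
    def ask(context, params):
        # Subclasses may open dialogs here and return False to cancel.
        return True

    @staticmethod
    def do_action(context, params):
        raise NotImplementedError


class CommandContext:
    def __init__(self):
        self.state = {}

    def execute(self, command, **kwargs):
        params = dict(kwargs)
        if command.ask(self, params):
            command.do_action(self, params)
            return True
        return False


class SetFlag(EditCommand):
    """Toy command: stores a value in the context state."""

    @staticmethod
    def do_action(context, params):
        context.state["flag"] = params["value"]


context = CommandContext()
context.execute(SetFlag, value=7)
print(context.state["flag"])  # 7
```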

Comment on lines +2875 to +2959
# XXX(LM): Using this command instead of _handle_model_overlay_command results in
# inability to propagate pop "inference" overlay to MainWindow
class AddInferenceToOverlays(EditCommand):
    topics = [UpdateTopic.frame]  # Need the self.plotFrame() callback

    @staticmethod
    def should_visualize(context: CommandContext) -> bool:
        """Returns whether to visualize models and handles removal of visuals."""
        if not context.state["visualize models"]:
            # Remove inference from overlays
            context.app.overlays.pop("inference", None)
            return False
        return True

    @staticmethod
    def ask(context: CommandContext, params: dict) -> bool:
        """Open `FileDialog` to select which model to use for inference overlay."""

        # Handle case when inference overlay should not be visualized
        if not AddInferenceToOverlays.should_visualize(context):
            return False

        # Otherwise, open FileDialog for user to select which model to use.
        filters = ["Model (*.json)"]

        # Default to opening from models directory from project
        models_dir = None
        if context.state["filename"] is not None:
            models_dir = os.path.join(
                os.path.dirname(context.state["filename"]), "models/"
            )

        # TODO(LM): Filter through trained models (folders containing a best.h5)
        # Show dialog
        filename, selected_filter = FileDialog.open(
            context.app,
            dir=models_dir,
            caption="Import model outputs...",
            filter=";;".join(filters),
        )

        # If no file was selected, set "visualize models" to False
        if len(filename) == 0:
            context.state["visualize models"] = False  # XXX(LM): Does not work
            return False

        params["filename"] = filename

        return True

    @staticmethod
    def do_action(context: CommandContext, params: dict):
        """Add live inference results to overlays."""

        if not AddInferenceToOverlays.should_visualize(context):
            return

        from sleap.gui.overlays.base import DataOverlay

        filename = params["filename"]
        predictor: VisualPredictor = DataOverlay.make_viz_predictor(filename)

        # If multi-head model with both confmaps and pafs, ask user which to show.
        show_pafs = False
        if (
            predictor.confidence_maps_key_name
            and predictor.part_affinity_fields_key_name
        ):
            results = FormBuilderModalDialog(form_name="head_type_form").get_results()
            show_pafs = "Part Affinity" in results["head_type"]

        try:
            overlay = DataOverlay.from_predictor(
                predictor=predictor,
                video=context.state["video"],
                player=context.app.player,
                show_pafs=show_pafs,
            )
        except Exception as e:
            context.app.state["visualize models"] = False  # XXX(LM): Does not work
            raise Exception("Error visualizing model") from e

        context.app.overlays["inference"] = overlay
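Both `# XXX(LM): Does not work` notes point at the same gap: assigning to the state dict does not notify the GUI, so the menu checkbox never unchecks. A hypothetical sketch of a callback-aware state container, in the spirit of item 1 in the PR description (this is an assumption about the fix, not SLEAP's actual GuiState API):

```python
class ObservableState:
    """Mapping that invokes registered callbacks whenever a key is set."""

    def __init__(self):
        self._data = {}
        self._callbacks = {}

    def connect(self, key, callback):
        self._callbacks.setdefault(key, []).append(callback)

    def __setitem__(self, key, value):
        self._data[key] = value
        # Notify listeners, e.g. menu_item.setChecked(value) in the GUI.
        for callback in self._callbacks.get(key, []):
            callback(value)

    def __getitem__(self, key):
        return self._data[key]


checkbox_events = []
state = ObservableState()
state.connect("visualize models", checkbox_events.append)
state["visualize models"] = False
print(checkbox_events)  # [False]
```

With something like this wired to the check item, setting `context.state["visualize models"] = False` from inside `ask` or `do_action` would propagate back to the menu.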


roomrys (Collaborator Author) commented:
Either get it working or remove

        return getattr(self.predictor, attr)

    @classmethod
    def get_supported_predictors(
roomrys (Collaborator Author) commented:

Until we get the centered instance "confmap_model" supported, this would be more accurately named

Suggested change
-    def get_supported_predictors(
+    def get_expected_predictors(

Comment on lines +4241 to +4242
# XXX(LM): Can we replace this with model's Pipeline.make_viz_predictor method?
# Maybe, but best just to leave as is.
roomrys (Collaborator Author) commented:
Either do it or remove it.


codecov bot commented Jan 4, 2023

Codecov Report

Merging #1104 (db11202) into develop (9ae2941) will decrease coverage by 0.21%.
The diff coverage is 18.08%.

❗ Current head db11202 differs from pull request most recent head d0ad90a. Consider uploading reports for the commit d0ad90a to get more accurate results

@@             Coverage Diff             @@
##           develop    #1104      +/-   ##
===========================================
- Coverage    69.30%   69.08%   -0.22%     
===========================================
  Files          130      130              
  Lines        21978    22071      +93     
===========================================
+ Hits         15231    15247      +16     
- Misses        6747     6824      +77     
Impacted Files               Coverage Δ
sleap/gui/overlays/base.py   53.96% <0.00%> (ø)
sleap/nn/inference.py        79.19% <17.44%> (-0.64%) ⬇️
sleap/gui/app.py             73.96% <18.33%> (-3.18%) ⬇️
sleap/gui/commands.py        54.44% <20.00%> (-1.04%) ⬇️


@roomrys roomrys added the stale but not fixed Issues that have been backlogged for a long time, but may be addressed in the future label Jan 21, 2023
@roomrys roomrys closed this Apr 20, 2023
@roomrys roomrys mentioned this pull request Dec 16, 2024
Labels
stale but not fixed Issues that have been backlogged for a long time, but may be addressed in the future
2 participants