Fix VisualPredictor (and Visualize Model Outputs...) #1104

Conversation
```python
if __name__ == "__main__":
    import os

    ds = os.environ["ds-dmc"]

    main([ds])
```
Remove
text="In order to use this function you must first quit and " | ||
"re-open SLEAP to release resources used by visualizing " | ||
"model outputs." | ||
text="In order to use this function you must uncheck " | ||
"'Visualize Model Outputs' to release resources used." |
Flagging this @talmo to double check that this is true. I just use `self.overlays.pop("inference", None)` to free resources. I've tested this (not extensively) and haven't run into any problems.
Does GPU memory get released? This might be from loading the model.
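Not part of this PR — a minimal sketch of what releasing resources beyond the overlay pop could look like, assuming a TensorFlow/Keras model is what holds the GPU memory; `release_inference_resources` is a hypothetical helper, not SLEAP API:

```python
import tensorflow as tf

def release_inference_resources(overlays: dict) -> None:
    """Hypothetical helper: drop the overlay and clear Keras-held state."""
    # Drop the Python-side reference, as the PR does.
    overlays.pop("inference", None)
    # Free the Keras-held model/graph state. Note that by default TensorFlow
    # keeps GPU memory it has already allocated for the process, so this may
    # not return memory to the OS until the process exits -- which would be
    # consistent with the original "quit and re-open SLEAP" wording.
    tf.keras.backend.clear_session()
```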
```python
def addInferenceToOverlays(self):
    self.execute(AddInferenceToOverlays)
```
Either get it working or remove
```python
# XXX(LM): Using this command instead of _handle_model_overlay_command results in
# inability to propagate pop "inference" overlay to MainWindow
class AddInferenceToOverlays(EditCommand):
    topics = [UpdateTopic.frame]  # Need the self.plotFrame() callback

    @staticmethod
    def should_visualize(context: CommandContext) -> bool:
        """Returns whether to visualize models and handles removal of visuals."""
        if not context.state["visualize models"]:
            # Remove inference from overlays
            context.app.overlays.pop("inference", None)
            return False
        return True

    @staticmethod
    def ask(context: CommandContext, params: dict) -> bool:
        """Open `FileDialog` to select which model to use for inference overlay."""

        # Handle case when inference overlay should not be visualized
        if not AddInferenceToOverlays.should_visualize(context):
            return False

        # Otherwise, open FileDialog for user to select which model to use.
        filters = ["Model (*.json)"]

        # Default to opening from models directory from project
        models_dir = None
        if context.state["filename"] is not None:
            models_dir = os.path.join(
                os.path.dirname(context.state["filename"]), "models/"
            )

        # TODO(LM): Filter through trained models (folders containing a best.h5)
        # Show dialog
        filename, selected_filter = FileDialog.open(
            context.app,
            dir=models_dir,
            caption="Import model outputs...",
            filter=";;".join(filters),
        )

        # If no file was selected, set "visualize models" to False
        if len(filename) == 0:
            context.state["visualize models"] = False  # XXX(LM): Does not work
            return False

        params["filename"] = filename

        return True

    @staticmethod
    def do_action(context: CommandContext, params: dict):
        """Add live inference results to overlays."""

        if not AddInferenceToOverlays.should_visualize(context):
            return

        from sleap.gui.overlays.base import DataOverlay

        filename = params["filename"]
        predictor: VisualPredictor = DataOverlay.make_viz_predictor(filename)

        # If multi-head model with both confmaps and pafs, ask user which to show.
        show_pafs = False
        if (
            predictor.confidence_maps_key_name
            and predictor.part_affinity_fields_key_name
        ):
            results = FormBuilderModalDialog(form_name="head_type_form").get_results()
            show_pafs = "Part Affinity" in results["head_type"]

        try:
            overlay = DataOverlay.from_predictor(
                predictor=predictor,
                video=context.state["video"],
                player=context.app.player,
                show_pafs=show_pafs,
            )
        except Exception as e:
            context.app.state["visualize models"] = False  # XXX(LM): Does not work
            raise Exception("Error visualizing model") from e

        context.app.overlays["inference"] = overlay
```
Either get it working or remove
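For context on the `# XXX(LM): Does not work` lines above, here is a minimal sketch (illustrative names, not SLEAP's actual implementation) of the observer-style state this relies on: assigning to a key only updates the UI if a callback — e.g. one registered when `add_menu_check_item` created the checkable action — is connected for that key.

```python
class MiniState:
    """Toy key-value state that notifies connected callbacks on writes."""

    def __init__(self):
        self._data = {}
        self._callbacks = {}  # key -> list of callables

    def connect(self, key, callback):
        self._callbacks.setdefault(key, []).append(callback)

    def __setitem__(self, key, value):
        self._data[key] = value
        for cb in self._callbacks.get(key, []):
            cb(value)  # e.g. menu_action.setChecked(value)

state = MiniState()
# Without a connect() for "visualize models", the assignment below changes
# the stored value silently and the checked menu item never updates.
state.connect("visualize models", lambda checked: print(f"setChecked({checked})"))
state["visualize models"] = False  # prints: setChecked(False)
```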
```python
        return getattr(self.predictor, attr)

    @classmethod
    def get_supported_predictors(
```
Until we get the centered instance "confmap_model" supported, this would be more accurately named:

```diff
-    def get_supported_predictors(
+    def get_expected_predictors(
```
```python
# XXX(LM): Can we replace this with model's Pipeline.make_viz_predictor method?
# Maybe, but best just to leave as is.
```
Either do it or remove it.
Codecov Report

```diff
@@            Coverage Diff            @@
##           develop    #1104    +/-   ##
==========================================
- Coverage    69.30%   69.08%   -0.22%
==========================================
  Files          130      130
  Lines        21978    22071      +93
==========================================
+ Hits         15231    15247      +16
- Misses        6747     6824      +77
```
Description

A previous refactoring seems to have broken the `VisualPredictor` subclass. This PR gets the Visualize Model Outputs... menu action working again. Moreover, this PR converts `VisualPredictor` from a subclass of `Predictor` to a wrapper class around any `Predictor` subclass.

This PR is getting to be a bit large and broadly affecting the codebase. It will likely need to be broken up into smaller PRs, where 1-3 build the base for a smooth `VisualPredictor` integration:

1. `add_menu_check_item`
2. Add a `from_trained_models` method as an `abstractmethod` to the `Predictor` class, and add a `categorize_model_paths` method to `Predictor` (unless it fits better elsewhere: `Model`?)
3. Modify `from_trained_models` on all `Predictor` subclasses to take in either generic arguments for the model paths (`model_paths`) OR specific paths for each model, e.g. `centroid_model_path`, `confmap_model_path` (how it is right now)
4. Add the `VisualPredictor` wrapper class with all its features (see the sketch after this list)
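A minimal sketch of the wrapper approach in step 4, assuming illustrative names (the real class does delegate via `__getattr__`, as the `return getattr(self.predictor, attr)` line in the diff shows):

```python
class Predictor:
    """Stand-in for any trained SLEAP predictor."""

    def predict(self, frames):
        return [f"labels for {f}" for f in frames]

class VisualPredictor:
    """Wraps a Predictor rather than subclassing it, adding viz outputs."""

    def __init__(self, predictor):
        self.predictor = predictor

    def __getattr__(self, attr):
        # Forward anything VisualPredictor doesn't define to the wrapped
        # predictor, so it can stand in for any Predictor subclass.
        return getattr(self.predictor, attr)

viz = VisualPredictor(Predictor())
print(viz.predict(["frame0"]))  # delegated to the wrapped predictor
```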
Types of changes
Does this address any currently open issues?
Outside contributors checklist
Thank you for contributing to SLEAP!
❤️