
Add object keypoint similarity method #1003

Merged: 4 commits merged into talmolab:develop on Jul 23, 2024

Conversation

@getzze (Contributor) commented Oct 19, 2022

Using the flow tracker with two mice, I was sometimes getting unexpected identity switches when one of the mice was missing half or more of its keypoints.
I figured it was a problem with the instance_similarity function, so I wrote a new object_keypoint_similarity function (in fact a function factory, because it has parameters). The instance_similarity function computes the distance between each keypoint of a reference instance and the corresponding keypoint of a query instance, takes exp(-d**2), sums over all keypoints, and divides by the number of visible keypoints in the reference instance.

Here is a description of the three changes I made and why:

  1. Adding a scale to the distance between the reference and query keypoints. Otherwise, if the ref and query keypoints are 3 pixels apart, they contribute 0.0001 to the similarity score, versus 0.36 if they are 1 pixel apart; this is very sensitive to single-pixel fluctuations.
    Instead, the distance is divided by a user-defined pixel scale before applying the Gaussian function. The scale can be chosen to be the per-keypoint error found when evaluating the model on the validation set. Ideally this could be retrieved automatically; it is currently hidden in the metrics.val.npz file of the model.
    This is what they use in this paper.

  2. The prediction score for each keypoint can be used to weight the influence of that keypoint's similarity in the total similarity. This way, uncertain keypoints do not bias the total similarity.

  3. Dividing the sum of individual keypoint similarities by the number of visible keypoints in the reference instance results in higher similarity scores when the reference has few keypoints (i.e., a bad reference instance). Imagine a query instance with 4 keypoints:

    • a first ref instance with 1 keypoint that matches one query keypoint exactly: similarity = exp(-0)/1 = 1
    • a second ref instance with 4 keypoints, of which only 3 match the query instance exactly: similarity = (1+1+1)/4 = 0.75
      Dividing by the total number of keypoints instead gives 0.25 and 0.75 respectively, which is preferable (see the formula sketch just below).
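
Putting the three changes together, the modified score amounts to roughly the following (shorthand, not the exact code: d_i is the distance between the i-th reference and query keypoints, σ_i the per-keypoint pixel scale, s_i the keypoint prediction score, and N the total keypoint count used for normalization):

$$
\mathrm{sim}(\text{query},\ \text{ref}) \;=\; \frac{1}{N}\,\sum_{i \in \text{visible}} s_i \,\exp\!\left(-\frac{d_i^2}{\sigma_i^2}\right)
$$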

I didn't create a CLI option to change point 3, but it can easily be added. Implementing points 1 and 3 dramatically improved the tracking.
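
For illustration only, a minimal sketch of such a similarity factory following the three changes above (this is not the code in this PR: the function and argument names, such as keypoint_errors and score_weighting, merely mirror the options discussed below, the interface is array-based rather than instance-based, and missing-keypoint handling is simplified):

import numpy as np


def make_object_keypoint_similarity(keypoint_errors=1.0, score_weighting=True):
    """Build a similarity function comparing the keypoints of two instances (sketch)."""
    scales = np.atleast_1d(np.asarray(keypoint_errors, dtype=float))

    def similarity(ref_points, query_points, query_scores=None):
        # ref_points, query_points: (n_keypoints, 2) arrays; NaN marks a missing keypoint.
        d2 = np.sum((query_points - ref_points) ** 2, axis=-1)
        # Change 1: scale the distance by the expected per-keypoint error before the Gaussian.
        sims = np.exp(-d2 / scales**2)
        # Change 2: weight each keypoint by its prediction score so uncertain points count less.
        if score_weighting and query_scores is not None:
            sims = sims * query_scores
        # Keypoints missing in either instance produce NaN and contribute zero.
        sims = np.nan_to_num(sims, nan=0.0)
        # Change 3: normalize by the total number of keypoints, not only the visible ones.
        return sims.sum() / len(query_points)

    return similarity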

Summary by CodeRabbit

  • New Features

    • Enhanced tracking capabilities with new parameters for object keypoint similarity.
    • Added a new pytest fixture for sorted frame predictions to improve test usability.
  • Bug Fixes

    • Improved handling of inference parameters for better prediction behavior.
  • Tests

    • Updated tracking test functions to incorporate new similarity method parameters and improved data handling.

codecov bot commented Oct 19, 2022

Codecov Report

Attention: Patch coverage is 76.25000% with 19 lines in your changes missing coverage. Please review.

Project coverage is 74.33%. Comparing base (7ed1229) to head (207d749).
Report is 22 commits behind head on develop.

Files Patch % Lines
sleap/nn/tracker/components.py 74.35% 10 Missing ⚠️
sleap/nn/tracking.py 80.55% 7 Missing ⚠️
sleap/gui/learning/runners.py 60.00% 2 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1003      +/-   ##
===========================================
+ Coverage    73.30%   74.33%   +1.02%     
===========================================
  Files          134      135       +1     
  Lines        24087    24705     +618     
===========================================
+ Hits         17658    18364     +706     
+ Misses        6429     6341      -88     


@roomrys (Collaborator) left a comment

Sorry for the delay! This is a great new feature! OKS implementation looks good. A few suggestions to the pipeline form for display purposes. (Ideally we make this as a stacked option under the similarity method, but Qt is battling me - I'll mess with this a bit more).

sleap/config/pipeline_form.yaml (review thread, outdated, resolved)
sleap/config/pipeline_form.yaml (review thread, resolved)
sleap/config/pipeline_form.yaml (review thread, resolved)
@@ -323,7 +339,7 @@ inference:
   label: Similarity Method
   type: list
   default: iou
-  options: instance,centroid,iou
+  options: instance,centroid,iou,object_keypoint
Collaborator:

Suggested change:
- options: instance,centroid,iou,object_keypoint
+ options: "instance,centroid,iou,object keypoint"

@getzze (Contributor, Author):

I left "object_keypoint" without a space, otherwise you have to use quotes when using this option from the CLI.

Collaborator:

CLI arguments for similarity policies are directly pulled from the similarity_policies dictionary, so I added the space. No quotes needed when calling from CLI, but you will need the underscore.

sleap/nn/tracking.py, lines 752 to 755 at eac2e2b:

option = dict(name="similarity", default="instance")
option["type"] = str
option["options"] = list(similarity_policies.keys())
options.append(option)

sleap/nn/tracking.py, lines 342 to 346 at eac2e2b:

similarity_policies = dict(
    instance=instance_similarity,
    centroid=centroid_distance,
    iou=instance_iou,
)
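
For context, the new policy only needs an entry in this dictionary (or equivalent wiring) to appear as a CLI option; a hypothetical sketch, assuming the factory has usable default parameters (the actual PR builds the OKS function from the oks_* options when the tracker is constructed):

similarity_policies = dict(
    instance=instance_similarity,
    centroid=centroid_distance,
    iou=instance_iou,
    # Hypothetical registration; the merged code may wire the factory in elsewhere.
    object_keypoint=factory_object_keypoint_similarity(),
)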

@getzze (Contributor, Author):

When running from the GUI, what is passed to the CLI is:

tracking.similarity = 'object keypoint'

(with space), which is not recognized as a valid similarity function.

The entry from the GUI should be formatted to replace the space with an underscore in gui/learning/runners.py. I will push a commit.
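
A minimal sketch of that normalization (hypothetical; the real change lives in sleap/gui/learning/runners.py, and the later review confirms it replaces spaces with underscores in tracking.similarity):

# Hypothetical: cli_args is the dict of inference parameters built from the GUI form.
if "tracking.similarity" in cli_args:
    cli_args["tracking.similarity"] = cli_args["tracking.similarity"].replace(" ", "_")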

sleap/nn/tracker/components.py (review thread, outdated, resolved)
sleap/nn/tracker/components.py (review thread, outdated, resolved)
sleap/nn/tracking.py (review thread, outdated, resolved)
sleap/nn/tracker/components.py (review thread, outdated, resolved)
@getzze getzze force-pushed the object_keypoint_similarity branch from 82472a7 to 84473d8 Compare November 15, 2022 23:14
@getzze (Contributor, Author) commented Nov 15, 2022

I added the option to change normalization_keypoints from the CLI at least.

@talmo talmo requested a review from roomrys November 24, 2022 01:28
@roomrys (Collaborator) left a comment

I added tests to get line coverage, but haven't done any tests on performance.

tests/nn/test_tracker_components.py (review thread, resolved)
@@ -770,6 +770,7 @@ def __init__(
     self, mode: Text, skeleton: Optional["Skeleton"] = None, *args, **kwargs
 ):
     super(TrainingPipelineWidget, self).__init__(*args, **kwargs)
+    self.setMinimumHeight(720)  # Hard-code minimum size due to layout problems
Collaborator:

Flagging this because it's a shortcut to dealing with a jumpy GUI.

sleap/nn/tracker/components.py (review thread, outdated, resolved)
@getzze getzze force-pushed the object_keypoint_similarity branch 2 times, most recently from 6d606fa to 7181c65 Compare December 12, 2022 11:04
@getzze (Contributor, Author) commented Dec 12, 2022

Just a small comment, but the stack widget is very annoying.
When I select "object keypoint", I don't see the options; the only way to see them is to increase the window size vertically. But because the window is already tall, I first have to unselect the model, increase the window size, fill in the options, and then change the model back.

Maybe adding a scroll widget to the window would solve the problem?
Otherwise, I think it's more usable to add the options below, as in my original proposal.
What do you think?

@roomrys (Collaborator) commented Dec 12, 2022

I think you are right - I also dislike how large the inference GUI has become. I like the organization of the stack widget, but adding a scroll widget is definitely a better idea than hardcoding a minimum size.
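
For reference, wrapping the form in a scroll area is a small change in Qt; a minimal sketch with placeholder names (form_widget and dialog_layout are assumptions, not the actual SLEAP code, and the real change landed later in the training-GUI resize/scroll work referenced in the commit list below):

from qtpy.QtWidgets import QScrollArea  # or the PySide2/PyQt equivalent

# form_widget: the existing pipeline form widget; dialog_layout: the dialog's layout.
scroll = QScrollArea()
scroll.setWidgetResizable(True)  # let the form grow/shrink with the window
scroll.setWidget(form_widget)
dialog_layout.addWidget(scroll)  # put the scroll area where the form used to be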

@talmo (Collaborator) commented Jan 6, 2023

Hi folks, are we ready to merge this? Do we need to change the UI a bit more still?

@roomrys added the "stale but not fixed" label (issues that have been backlogged for a long time, but may be addressed in the future) on Jan 19, 2023
@getzze (Contributor, Author) commented Feb 1, 2023

Hi, I reverted the GUI to the original proposal (without the stacked widget), which is more convenient (or less annoying).
I think this is good to go now!

But maybe there should be a revamp of the tracking window (in another PR), because it has grown quite a bit with my PRs :D
Maybe the Kalman filter part could go, as it is not working great. Or the tracking options could move to another tab, like the inference options...

@roomrys (Collaborator) commented Feb 1, 2023

Hi @getzze,

Yes, you are adding too many features for the GUI to handle! Kudos 😎 The hold-up in merging this has indeed been displaying all the new features. I like your proposals for reorganizing the Training/Inference Pipeline dialog. Also agreed that those should be handled in a different PR.

I am going to hold off on merging this until after our next release (I'd like both the Training dialog revamp and this PR to be included in the same release, but the revamping won't happen prior to the long-overdue 1.3.0). Aiming to get 1.3.0 out by the end of this week; then I will work on the much-needed GUI revamp to accompany this PR.

Thanks!
Liezl

@getzze (Contributor, Author) commented Sep 28, 2023

Hey @roomrys, I just wanted to bump this PR, as it is very useful (at least to me), so I would like to see it in the main branch. Thanks!

@roomrys roomrys mentioned this pull request Oct 4, 2023
@roomrys roomrys self-assigned this Jan 9, 2024
@getzze getzze force-pushed the object_keypoint_similarity branch from 41d9be7 to 1bcca03 Compare July 23, 2024 14:39
coderabbitai bot commented Jul 23, 2024

Warning: rate limit exceeded. @getzze has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 8 minutes and 35 seconds before requesting another review.


Commits

Files that changed from the base of the PR and between a2954a7 and 207d749.

Walkthrough

The recent changes boost the functionality and flexibility of the Sleap tracking framework. Key updates include the addition of parameters for object keypoint similarity, enhancing accuracy in keypoint assessments. The codebase also saw structural improvements for better parameter handling and testing, leading to a more robust and maintainable tracking process.

Changes

  • sleap/config/pipeline_form.yaml: Added parameters for object keypoint similarity in the inference section; updated similarity options.
  • sleap/gui/learning/runners.py: Enhanced make_predict_cli_call to include new tracking parameters; improved space handling.
  • sleap/nn/tracker/components.py: Introduced factory_object_keypoint_similarity function for keypoint similarity calculations; added logging for error handling.
  • sleap/nn/tracking.py: Modified get_candidates and related methods to support the max_tracking parameter; added OKS options.
  • tests/fixtures/datasets.py: Added centered_pair_predictions_sorted fixture for sorting labeled frames.
  • tests/nn/test_inference.py: Updated tests to utilize the new similarity method and sorted predictions.
  • tests/nn/test_tracker_components.py: Added a new tracking test function and modified existing tests for enhanced parameter handling.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant Tracker
    participant SimilarityFunction

    User->>CLI: Invoke tracking command with parameters
    CLI->>Tracker: Process parameters (including OKS options)
    Tracker->>SimilarityFunction: Calculate similarity based on keypoints
    SimilarityFunction-->>Tracker: Return calculated similarity
    Tracker-->>CLI: Provide tracking results
    CLI-->>User: Display tracking results


@coderabbitai bot left a comment

Actionable comments posted: 2

Outside diff range, codebase verification and nitpick comments (1)
tests/nn/test_tracker_components.py (1)

62-83: Insufficient Test Coverage for test_oks_tracker_by_name

The function test_oks_tracker_by_name in tests/nn/test_tracker_components.py does not appear to have sufficient test coverage, as no additional usages or references were found in the codebase. Ensure that this function is adequately tested to cover the intended scenarios.

  • File: tests/nn/test_tracker_components.py
  • Function: test_oks_tracker_by_name
Analysis chain

LGTM! Verify the test coverage.

The new function test_oks_tracker_by_name correctly focuses on different parameters related to object keypoint tracking.

Ensure that the test coverage is sufficient for the intended scenarios.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the test coverage of `test_oks_tracker_by_name`.

# Test: Search for the test function usage. Expect: Sufficient test coverage for the intended scenarios.
rg --type python -A 5 $'test_oks_tracker_by_name'

Length of output: 79


Script:

#!/bin/bash
# Description: Verify the test coverage of `test_oks_tracker_by_name`.

# Search for the function definition of `test_oks_tracker_by_name` in the codebase.
rg 'def test_oks_tracker_by_name' --type py -A 10

# Search for the usage of `test_oks_tracker_by_name` in the codebase.
rg 'test_oks_tracker_by_name' --type py -A 5

Length of output: 1113

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 3e2bd25 and 1bcca03.

Files selected for processing (7)
  • sleap/config/pipeline_form.yaml (7 hunks)
  • sleap/gui/learning/runners.py (1 hunks)
  • sleap/nn/tracker/components.py (3 hunks)
  • sleap/nn/tracking.py (15 hunks)
  • tests/fixtures/datasets.py (1 hunks)
  • tests/nn/test_inference.py (4 hunks)
  • tests/nn/test_tracker_components.py (1 hunks)
Additional context used
Ruff
tests/nn/test_tracker_components.py

12-12: sleap.io.dataset.Labels imported but unused

Remove unused import: sleap.io.dataset.Labels

(F401)

sleap/nn/tracking.py

625-625: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

Additional comments not posted (17)
tests/fixtures/datasets.py (1)

44-48: LGTM! Verify the fixture usage in the test files.

The new fixture centered_pair_predictions_sorted correctly sorts labeled frames based on their frame_idx attribute.

Ensure that the fixture is used appropriately in the test files.

Verification successful

Fixture usage verified.

The centered_pair_predictions_sorted fixture is used appropriately in the following test files:

  • tests/nn/test_tracker_components.py
  • tests/nn/test_inference.py
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `centered_pair_predictions_sorted` in the test files.

# Test: Search for the fixture usage. Expect: Only relevant test files use the fixture.
rg --type python -A 5 $'centered_pair_predictions_sorted'

Length of output: 87


Script:

#!/bin/bash
# Description: Verify the usage of `centered_pair_predictions_sorted` in the test files.

# Test: Search for the fixture usage. Expect: Only relevant test files use the fixture.
rg --glob '*.py' -A 5 'centered_pair_predictions_sorted'

Length of output: 2402
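
For orientation, the fixture being reviewed here could look like the following sketch (not necessarily the exact code in tests/fixtures/datasets.py; it assumes the existing centered_pair_predictions fixture yields a Labels object, with labeled_frames and frame_idx as described above):

import pytest


@pytest.fixture
def centered_pair_predictions_sorted(centered_pair_predictions):
    """Predictions with labeled frames sorted by frame index (sketch)."""
    labels = centered_pair_predictions
    labels.labeled_frames.sort(key=lambda lf: lf.frame_idx)
    return labels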

tests/nn/test_tracker_components.py (2)

43-60: LGTM! Verify the test coverage.

The modified function test_tracker_by_name correctly includes additional parameters and enhances its functionality.

Ensure that the test coverage is sufficient for the intended scenarios.


18-35: LGTM! Verify the function usage in the test files.

The new function tracker_by_name correctly encapsulates the logic for tracking instances based on specified parameters.

Ensure that the function is used appropriately in the test files.

Verification successful

LGTM! Verify the function usage in the test files.

The new function tracker_by_name correctly encapsulates the logic for tracking instances based on specified parameters and is appropriately used in the test files.

  • tests/nn/test_tracker_components.py
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `tracker_by_name` in the test files.

# Test: Search for the function usage. Expect: Only relevant test files use the function.
rg --type python -A 5 $'tracker_by_name'

Length of output: 70


Script:

#!/bin/bash
# Description: Verify the usage of `tracker_by_name` in the codebase.

# Test: Search for the function usage. Expect: Only relevant files use the function.
rg --type py -A 5 'tracker_by_name'

Length of output: 6383

sleap/nn/tracker/components.py (1)

46-132: LGTM! Verify the function usage in the codebase.

The new function factory_object_keypoint_similarity correctly creates a similarity function based on object keypoints and includes detailed parameters for flexible configuration.

Ensure that the function is used appropriately in the codebase.

Verification successful

Appropriate usage of factory_object_keypoint_similarity confirmed.

The function factory_object_keypoint_similarity is correctly used in sleap/nn/tracking.py to create a similarity_function with the intended parameters.

  • sleap/nn/tracking.py: Lines where similarity_function is assigned.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `factory_object_keypoint_similarity` in the codebase.

# Test: Search for the function usage. Expect: Only relevant parts of the codebase use the function.
rg --type python -A 5 $'factory_object_keypoint_similarity'

Length of output: 89


Script:

#!/bin/bash
# Description: Verify the usage of `factory_object_keypoint_similarity` in the codebase.

# Test: Search for the function usage. Expect: Only relevant parts of the codebase use the function.
rg 'factory_object_keypoint_similarity' -t py -A 5

Length of output: 1184


Script:

#!/bin/bash
# Description: Extract the relevant lines in `sleap/nn/tracking.py` where `factory_object_keypoint_similarity` is used.

# Test: Extract lines around the usage of `factory_object_keypoint_similarity`.
rg 'factory_object_keypoint_similarity' -A 10 sleap/nn/tracking.py

Length of output: 818

sleap/config/pipeline_form.yaml (3)

442-442: Approved: Addition of "object keypoint" to similarity method options.

The new option "object keypoint" is correctly added and consistent with the rest of the configuration.

Also applies to: 540-541


484-491: Approved: Addition of new parameter tracking.oks_errors.

The new parameter tracking.oks_errors is correctly defined with appropriate label, help text, type, and default value.

Also applies to: 579-586


492-496: Approved: Addition of new parameter tracking.oks_score_weighting.

The new parameter tracking.oks_score_weighting is correctly defined with appropriate label, help text, type, and default value.

Also applies to: 587-591

sleap/gui/learning/runners.py (2)

263-263: Approved: Addition of tracking.oks_score_weighting to boolean items.

The parameter tracking.oks_score_weighting is correctly added to the list of boolean items processed.


270-276: Approved: Handling of tracking.similarity parameter.

The new handling of the tracking.similarity parameter by replacing spaces with underscores is correctly implemented and consistent with the rest of the configuration.

sleap/nn/tracking.py (5)

605-610: Use key in dict instead of key in dict.keys().

The condition key in dict.keys() can be simplified to key in dict for better readability and performance.

-        if self.has_max_tracking:
-            for track in self.track_matching_queue_dict.keys():
+        if self.has_max_tracking:
+            for track in self.track_matching_queue_dict:

Likely invalid or redundant comment.


883-888: LGTM! But verify the function usage in the codebase.

The code changes are approved.

However, ensure that all function calls to factory_object_keypoint_similarity match the new parameters.

Verification successful

Function usage verified successfully.

All function calls to factory_object_keypoint_similarity match the new parameters.

  • sleap/nn/tracking.py
  • sleap/nn/tracker/components.py
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify all function calls to `factory_object_keypoint_similarity` match the new parameters.

# Test: Search for the function usage. Expect: Only occurrences of the new parameters.
rg --type python -A 5 $'factory_object_keypoint_similarity'

Length of output: 89


Script:

#!/bin/bash
# Description: Verify all function calls to `factory_object_keypoint_similarity` match the new parameters.

# Test: Search for the function usage. Expect: Only occurrences of the new parameters.
rg --type py -A 5 $'factory_object_keypoint_similarity'

Length of output: 1188


471-471: Ensure max_tracking parameter is correctly utilized.

The addition of the max_tracking parameter should be verified to ensure it is correctly used throughout the function.

Verification successful

The max_tracking parameter is correctly utilized.

The max_tracking parameter is consistently used throughout the codebase, including in the main implementation and test files. The integration appears thorough and logically sound.

  • sleap/nn/tracking.py: Multiple instances where max_tracking is used in tracking logic.
  • tests/nn/test_inference.py, tests/nn/test_tracker_components.py, tests/nn/test_tracking_integration.py: Test cases validating the max_tracking functionality.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `max_tracking` parameter in the codebase.

# Test: Search for the `max_tracking` parameter usage. Expect: Consistent usage in relevant functions.
rg --type python -A 5 $'max_tracking'

Length of output: 67


Script:

#!/bin/bash
# Description: Verify the usage of `max_tracking` parameter in the codebase.

# Test: Search for the `max_tracking` parameter usage. Expect: Consistent usage in relevant functions.
rg --type py -A 5 $'max_tracking'

Length of output: 16035


854-864: Ensure new parameters are correctly utilized.

The addition of the oks_errors, oks_score_weighting, and oks_normalization parameters should be verified to ensure they are correctly used throughout the function.

Verification successful

Ensure new parameters are correctly utilized.

The new parameters oks_errors, oks_score_weighting, and oks_normalization are correctly defined and utilized within the sleap/nn/tracking.py file. They are also referenced in test cases in tests/nn/test_tracker_components.py, indicating proper integration and testing.

  • sleap/nn/tracking.py: Parameters are defined and used within the function.
  • tests/nn/test_tracker_components.py: Parameters are included in test cases to verify functionality.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of new parameters in the codebase.

# Test: Search for the `oks_errors`, `oks_score_weighting`, and `oks_normalization` parameters usage. Expect: Consistent usage in relevant functions.
rg --type python -A 5 $'oks_errors|oks_score_weighting|oks_normalization'

Length of output: 105


Script:

#!/bin/bash
# Description: Verify the usage of new parameters in the codebase.

# Test: Search for the `oks_errors`, `oks_score_weighting`, and `oks_normalization` parameters usage. Expect: Consistent usage in relevant functions.
rg --type-add 'py:*.py' --type py -A 5 'oks_errors|oks_score_weighting|oks_normalization'

Length of output: 4284


395-395: Ensure max_tracking parameter is correctly utilized.

The addition of the max_tracking parameter should be verified to ensure it is correctly used throughout the function.

Verification successful

The max_tracking parameter is correctly utilized.

The addition of the max_tracking parameter is consistently used throughout the codebase without any apparent issues.

  • sleap/nn/tracking.py: Various functions and methods correctly integrate the max_tracking parameter into their logic.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `max_tracking` parameter in the codebase.

# Test: Search for the `max_tracking` parameter usage. Expect: Consistent usage in relevant functions.
rg --type python -A 5 $'max_tracking'

Length of output: 67


Script:

#!/bin/bash
# Description: Verify the usage of `max_tracking` parameter in the codebase.

# Test: Search for the `max_tracking` parameter usage. Expect: Consistent usage in relevant functions.
rg --type py -A 5 $'max_tracking'

Length of output: 16035

tests/nn/test_inference.py (3)

1376-1376: Update the command to include the new similarity method parameter.

The command now includes --tracking.similarity object_keypoint, which is necessary for the new similarity method.


1396-1397: Verify the similarity function.

The assertion ensures that the similarity_function of the tracker is set to "object_keypoint_similarity", which is crucial for verifying the correct configuration.


1752-1752: Update parameter name to centered_pair_predictions_sorted.

The parameter name has been changed to centered_pair_predictions_sorted, indicating a possible change in the expected input format or data ordering.

sleap/nn/tracking.py (review thread, outdated, resolved)
@getzze getzze force-pushed the object_keypoint_similarity branch from 1bcca03 to a2954a7 Compare July 23, 2024 14:48
@getzze getzze force-pushed the object_keypoint_similarity branch from a2954a7 to 6ef29e1 Compare July 23, 2024 14:54
@coderabbitai bot left a comment

Caution

Inline review comments failed to post

Actionable comments posted: 5

Outside diff range, codebase verification and nitpick comments (2)
sleap/nn/tracker/components.py (1)

73-75: Improve readability by simplifying conditional assignment.

Simplify the conditional assignment for keypoint_errors.

-    keypoint_errors = 1 if keypoint_errors is None else keypoint_errors
+    keypoint_errors = keypoint_errors if keypoint_errors is not None else 1
sleap/nn/tracking.py (1)

860-864: Improve clarity of max_tracking assignment.

Simplify the conditional assignment for max_tracking.

-        max_tracking = max_tracking if max_tracks else False
+        max_tracking = bool(max_tracks)
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 1bcca03 and a2954a7.

Files selected for processing (7)
  • sleap/config/pipeline_form.yaml (7 hunks)
  • sleap/gui/learning/runners.py (1 hunks)
  • sleap/nn/tracker/components.py (3 hunks)
  • sleap/nn/tracking.py (15 hunks)
  • tests/fixtures/datasets.py (1 hunks)
  • tests/nn/test_inference.py (4 hunks)
  • tests/nn/test_tracker_components.py (1 hunks)
Files skipped from review as they are similar to previous changes (4)
  • sleap/config/pipeline_form.yaml
  • sleap/gui/learning/runners.py
  • tests/fixtures/datasets.py
  • tests/nn/test_inference.py
Additional context used
Ruff
tests/nn/test_tracker_components.py

12-12: sleap.io.dataset.Labels imported but unused

Remove unused import: sleap.io.dataset.Labels

(F401)

sleap/nn/tracking.py

625-625: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

Additional comments not posted (3)
tests/nn/test_tracker_components.py (2)

50-51: Note: Test is slow.

This test is slow, so it should be optimized in the future if possible.

Ensure that the test is run sparingly and with as few frames as possible.


73-74: Note: Test is slow.

This test is slow, so it should be optimized in the future if possible.

Ensure that the test is run sparingly and with as few frames as possible.

sleap/nn/tracking.py (1)

479-479: Potential off-by-one error in track count check.

The condition len(tracks) <= self.max_tracks might need to be len(tracks) < self.max_tracks to avoid an off-by-one error.

-            if not max_tracking or len(tracks) <= self.max_tracks:
+            if not max_tracking or len(tracks) < self.max_tracks:

Likely invalid or redundant comment.

Comments failed to post (5)
tests/nn/test_tracker_components.py

20-21: Replace print statements with logging.

The print statements are useful for debugging but should be replaced with logging for production code.

-    print(kwargs)
-    print(t.candidate_maker)
+    logger.debug(kwargs)
+    logger.debug(t.candidate_maker)
Committable suggestion


    logger.debug(kwargs)
    logger.debug(t.candidate_maker)
sleap/nn/tracker/components.py

104-121: Handle size mismatch in kp_precision more gracefully.

Consider warning the user and using a good substitute instead of raising an error.

-            raise ValueError(
-                "keypoint_errors array should have the same size as the number of "
-                f"keypoints in the instance: {kp_precision.size} != {n_points}"
-            )
+            mess = (
+                "keypoint_errors array should have the same size as the number of "
+                f"keypoints in the instance: {kp_precision.size} != {n_points}"
+            )
+            if kp_precision.size > n_points:
+                kp_precision = kp_precision[:n_points]
+                mess += "\nTruncating keypoint_errors array."
+            else:  # elif kp_precision.size < n_points:
+                pad = n_points - kp_precision.size
+                kp_precision = np.pad(kp_precision, (0, pad), "edge")
+                mess += "\nPadding keypoint_errors array by repeating the last value."
+            logger.warning(mess)
Committable suggestion


        # Make sure the sizes of kp_precision and n_points match
        if kp_precision.size > 1 and 2 * kp_precision.size != ref_points.size:
            # Correct kp_precision size to fit number of points
            n_points = ref_points.size // 2
            mess = (
                "keypoint_errors array should have the same size as the number of "
                f"keypoints in the instance: {kp_precision.size} != {n_points}"
            )

            if kp_precision.size > n_points:
                kp_precision = kp_precision[:n_points]
                mess += "\nTruncating keypoint_errors array."

            else:  # elif kp_precision.size < n_points:
                pad = n_points - kp_precision.size
                kp_precision = np.pad(kp_precision, (0, pad), "edge")
                mess += "\nPadding keypoint_errors array by repeating the last value."
            logger.warning(mess)
sleap/nn/tracking.py

883-888: Enhance maintainability by adding a helper function.

Consider adding a helper function to create the similarity function for object keypoint similarity.

def create_similarity_function(similarity, oks_errors, oks_score_weighting, oks_normalization):
    if similarity == "object_keypoint":
        return factory_object_keypoint_similarity(
            keypoint_errors=oks_errors,
            score_weighting=oks_score_weighting,
            normalization_keypoints=oks_normalization,
        )
    return similarity_policies[similarity]

# Usage
similarity_function = create_similarity_function(
    similarity, oks_errors, oks_score_weighting, oks_normalization
)

409-409: Potential off-by-one error in track count check.

The condition len(tracks) <= self.max_tracks might need to be len(tracks) < self.max_tracks to avoid an off-by-one error.

-            if not max_tracking or len(tracks) <= self.max_tracks:
+            if not max_tracking or len(tracks) < self.max_tracks:
Committable suggestion


            if not max_tracking or len(tracks) < self.max_tracks:

742-742: Potential off-by-one error in track count check.

The condition len(self.track_matching_queue_dict) < self.max_tracks might need to be len(self.track_matching_queue_dict) <= self.max_tracks to avoid an off-by-one error.

-                elif not self.max_tracking or len(self.track_matching_queue_dict) < self.max_tracks:
+                elif not self.max_tracking or len(self.track_matching_queue_dict) <= self.max_tracks:
Committable suggestion


                elif not self.max_tracking or len(self.track_matching_queue_dict) <= self.max_tracks:

@getzze (Contributor, Author) commented Jul 23, 2024

Hi @roomrys @talmo
I rebased this PR; I hope it will make it into version 1.4 (it's the oldest open PR now!).
The UI problem has been solved, so nothing should block it.

Tests in tests/nn/test_tracker_components.py were not passing because of bugs introduced by the new max_tracks feature, which was not actually tested. Now max_tracks is also tested and fixed.

Cheers

@getzze (Contributor, Author) commented Jul 23, 2024

The max_tracks bugs happen when calling Tracker.make_tracker_by_name, which should rarely happen, even when scripting.
They are due to mismatches between the tracker, max_tracks, and max_tracking arguments to this method. Because of how the command line and GUI process the inputs, invalid combinations were not possible there, but it's safer to correct them in Tracker as well.
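
A sketch of the kind of reconciliation being described (a hypothetical helper, not the merged code; it mirrors the guard quoted in the review above, max_tracking = max_tracking if max_tracks else False):

def reconcile_tracking_args(max_tracks, max_tracking):
    """Drop invalid combinations: capped tracking only makes sense with a track limit."""
    # Hypothetical helper; the actual fix lives inside Tracker.make_tracker_by_name.
    if not max_tracks:
        max_tracking = False
    return max_tracks, max_tracking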

@talmo (Collaborator) left a comment

Thanks for following up here @getzze!! This looks great to me :)

@talmo talmo merged commit 38a5ca7 into talmolab:develop Jul 23, 2024
7 of 8 checks passed
@getzze (Contributor, Author) commented Jul 23, 2024

Thanks!

roomrys added a commit that referenced this pull request Dec 19, 2024
* Remove no-op code from #1498

* Add options to set background color when exporting video (#1328)

* implement #921

* simplified form / refractor

* Add test function and update cli docs

* Improve test function to check background color

* Improve comments

* Change background options to lowercase

* Use coderabbitai suggested `fill`

---------

Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Increase range on batch size (#1513)

* Increase range on batch size

* Set maximum to a factor of 2

* Set default callable for `match_lists_function` (#1520)

* Set default for `match_lists_function`

* Move test code to official tests

* Check using expected values

* Allow passing in `Labels` to `app.main` (#1524)

* Allow passing in `Labels` to `app.main`

* Load the labels object through command

* Add warning when unable to switch back to CPU mode

* Replace (broken) `--unrag` with `--ragged` (#1539)

* Fix unrag always set to true in sleap-export

* Replace unrag with ragged

* Fix typos

* Add function to create app (#1546)

* Refactor `AddInstance` command (#1561)

* Refactor AddInstance command

* Add staticmethod wrappers

* Return early from set_visible_nodes

* Import DLC with uniquebodyparts, add Tracks (#1562)

* Import DLC with uniquebodyparts, add Tracks

* add tests

* correct tests

* Make the hdf5 videos store as int8 format (#1559)

* make the hdf5 video dataset type as proper int8 by padding with zeros

* add gzip compression

* Scale new instances to new frame size (#1568)

* Fix typehinting in `AddInstance`

* brought over changes from my own branch

* added suggestions

* Ensured google style comments

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: sidharth srinath <[email protected]>

* Fix package export (#1619)

Add check for empty videos

* Add resize/scroll to training GUI (#1565)

* Make resizable training GUI and add adaptive scroll bar

* Set a maximum window size

---------

Co-authored-by: Liezl Maree <[email protected]>

* support loading slp files with non-compound types and str in metadata (#1566)

Co-authored-by: Liezl Maree <[email protected]>

* change inference pipeline option to tracking-only (#1666)

change inference pipeline none option to tracking-only

* Add ABL:AOC 2023 Workshop link (#1673)

* Add ABL:AOC 2023 Workshop link

* Trigger website build

* Graceful failing with seeking errors (#1712)

* Don't try to seek to faulty last frame on provider initialization

* Catch seeking errors and pass

* Lint

* Fix IndexError for hdf5 file import for single instance analysis files (#1695)

* Fix hdf5 read for single instance analysis files

* Add test

* Small test files

* removing unneccessary fixtures

* Replace imgaug with albumentations (#1623)

What's the worst that could happen?

* Initial commit

* Fix augmentation

* Update more deps requirements

* Use pip for installing albumentations and avoid reinstalling OpenCV

* Update other conda envs

* Fix out of bounds albumentations issues and update dependencies (#1724)

* Install albumentations using conda-forge in environment file

* Conda install albumentations

* Add ndx-pose to pypi requirements

* Keep out of bounds points

* Black

* Add ndx-pose to conda install in environment file

* Match environment file without cuda

* Ordered dependencies

* Add test

* Delete comments

* Add conda packages to mac environment file

* Order dependencies in pypi requirements

* Add tests with zeroes and NaNs for augmentation

* Back

* Black

* Make comment one line

* Add todo for later

* Black

* Update to new TensorFlow conda package (#1726)

* Build conda package locally

* Try 2.8.4

* Merge develop into branch to fix dependencies

* Change tensorflow version to 2.7.4 in where conda packages are used

* Make tensorflow requirements in pypi looser

* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda

* Change tensorflow version in `environment_no_cuda.yml` to test using CI

* Test new sleap/tensorflow package

* Reset build number

* Bump version

* Update mac deps

* Update to Arm64 Mac runners

* pin `importlib-metadata`

* Pin more stuff on mac

* constrain `opencv` version due to new qt dependencies

* Update more mac stuff

* Patches to get to green

* More mac skipping

---------

Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Fix CI on macosx-arm64 (#1734)

* Build conda package locally

* Try 2.8.4

* Merge develop into branch to fix dependencies

* Change tensorflow version to 2.7.4 in where conda packages are used

* Make tensorflow requirements in pypi looser

* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda

* Change tensorflow version in `environment_no_cuda.yml` to test using CI

* Test new sleap/tensorflow package

* Reset build number

* Bump version

* Update mac deps

* Update to Arm64 Mac runners

* pin `importlib-metadata`

* Pin more stuff on mac

* constrain `opencv` version due to new qt dependencies

* Update more mac stuff

* Patches to get to green

* More mac skipping

* Re-enable mac tests

* Handle GPU re-init

* Fix mac build CI

* Widen tolerance for movenet correctness test

* Fix build ci

* Try for manual build without upload

* Try to reduce training CI time

* Rework actions

* Fix miniforge usage

* Tweaks

* Fix build ci

* Disable manual build

* Try merging CI coverage

* GPU/CPU usage in tests

* Lint

* Clean up

* Fix test skip condition

* Remove scratch test

---------

Co-authored-by: eberrigan <[email protected]>

* Add option to export to CSV via sleap-convert and API (#1730)

* Add csv as a format option

* Add analysis to format

* Add csv suffix to output path

* Add condition for csv analysis file

* Add export function to Labels class

* delete print statement

* lint

* Add `analysis.csv` as parametrize input for `sleap-convert` tests

* test `export_csv` method added to `Labels` class

* black formatting

* use `Path` to construct filename

* add `analysis.csv` to cli guide for `sleap-convert`

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Only propagate Transpose Tracks when propagate is checked (#1748)

Fix always-propagate transpose tracks issue

* View Hyperparameter nonetype fix (#1766)

Pass config getter argument to fetch hyperparameters

* Adding ragged metadata to `info.json` (#1765)

Add ragged metadata to info.json file

* Add batch size to GUI for inference (#1771)

* Fix conda builds (#1776)

* test conda packages in a test environment as part of CI

* do not test sleap import using conda build

* use github environment variables to define build path for each OS in the matrix and add print statements for testing

* figure out paths one OS at a time

* github environment variables work in subsequent steps not current step

* use local builds first

* print env info

* try simple environment creation

* try conda instead of mamba

* fix windows build path

* fix windows build path

* add comment to reference pull request

* remove test stage from conda build for macs and test instead by creating the environment in a workflow

* test workflow by pushing to current branch

* test conda package on macos runner

* Mac build does not need nvidia channel

* qudida and albumentations are conda installed now

* add comment with original issue

* use python 3.9

* use conda match specifications syntax

* make print statements more readable for troubleshooting python versioning

* clean up build file

* update version for pre-release

* add TODO

* add tests for conda packages before uploading

* update ci comments and branches

* remove macos test of pip wheel since python 3.9 is not supported by setup-python action

* Upgrade build actions for release (#1779)

* update `build.yml` so it matches updates from `build_manual.yml`

* test `build.yml` without uploading

* test again using build_manual.yml

* build pip wheel with Ubuntu and turn off caching so build.yml exactly matches build_manual.yml

* `build.yml` on release only and upload

* testing caching

* `use-only-tar-bz2: true` makes environment unsolvable, change it back

* Update .github/workflows/build_manual.yml

Co-authored-by: Liezl Maree <[email protected]>

* Update .github/workflows/build.yml

Co-authored-by: Liezl Maree <[email protected]>

* bump pre-release version

* fix version for pre-release

* run build and upload on release!

* try setting `CACHE_NUMBER` to 1 with `use-only-tar-bz2` set to true

* increasing the cache number to reset the cache does work when `use-only-tar-bz2` is set to true

* publish and upload on release only

---------

Co-authored-by: Liezl Maree <[email protected]>

* Add ZMQ support via GUI and CLI (#1780)

* Add ZMQ support via GUI and CLI, automatic port handler, separate utils module for the functions

* Change menu name to match deleting predictions beyond max instance (#1790)

Change menu and function names

* Fix website build and remove build cache across workflows (#1786)

* test with build_manual on push

* comment out caching in build manual

* remove cache step from builad manual since environment resolves when this is commented out

* comment out cache in build ci

* remove cache from build on release

* remove cache from website build

* test website build on push

* add name to checkout step

* update checkout to v4

* update checkout to v4 in build ci

* remove cache since build ci works without it

* update upload-artifact to v4 in build ci

* update second chechout to v4 in build ci

* update setup-python to v5 in build ci

* update download-artifact to v4 in build ci

* update checkout to v4 in build ci

* update checkout to v4 in website build

* update setup-miniconda to v3.0.3 in website build

* update actions-gh-pages to v4 in website build

* update actions checkout and setup-python in ci

* update checkout action in ci to v4

* pip install lxml[html_clean] because of error message during action

* add error message to website to explain why pip install lxml[html_clean]

* remove my branch for pull request

* Bump to 1.4.1a1 (#1791)

* bump versions to 1.4.1a1

* we can change the version on the installation page since this will be merged into the develop branch and not main

* Fix windows conda package upload and build ci (#1792)

* windows OS is 2022 not 2019 on runner

* upload windows conda build manually but not pypi build

* remove comment and run build ci

* change build manual back so that it doesn't upload

* remove branch from build manual

* update installation docs for 1.4.1a1

* Fix zmq inference (#1800)

* Ensure that we always pass in the zmq_port dict to LossViewer

* Ensure zmq_ports has correct keys inside LossViewer

* Use specified controller and publish ports for first attempted addresses

* Add test for ports being set in LossViewer

* Add max attempts to find unused port

* Fix find free port loop and add for controller port also

* Improve code readablility and reuse

* Improve error message when unable to find free port

* Set selected instance to None after removal (#1808)

* Add test that selected instance set to None after removal

* Set selected instance to None after removal

* Add `InstancesList` class to handle backref to `LabeledFrame` (#1807)

* Add InstancesList class to handle backref to LabeledFrame

* Register structure/unstructure hooks for InstancesList

* Add tests for the InstanceList class

* Handle case where instance are passed in but labeled_frame is None

* Add tests relevant methods in LabeledFrame

* Delegate setting frame to InstancesList

* Add test for PredictedInstance.frame after complex merge

* Add todo comment to not use Instance.frame

* Add rtest for InstasnceList.remove

* Use normal list for informative `merged_instances`

* Add test for copy and clear

* Add copy and clear methods, use normal lists in merge method

* Bump to v1.4.1a2 (#1835)

bump to 1.4.1a2

* Updated trail length viewing options (#1822)

* updated trail length optptions

* Updated trail length options in the view menu

* Updated `prefs` to include length info from `preferences.yaml`

* Added trail length as method of `MainWindow`

* Updated trail length documentation

* black formatting

---------

Co-authored-by: Keya Loding <[email protected]>

* Handle case when no frame selection for trail overlay (#1832)

* Menu option to open preferences directory and update to util functions to pathlib (#1843)

* Add menu to view preferences directory and update to pathlib

* text formatting

* Add `Keep visualizations` checkbox to training GUI (#1824)

* Renamed save_visualizations to view_visualizations for clarity

* Added Delete Visualizations button to the training pipeline gui, exposed del_viz_predictions config option to the user

* Reverted view_ back to save_ and changed new training checkbox to Keep visualization images after training.

* Fixed keep_viz config option state override bug and updated keep_viz doc description

* Added test case for reading training CLI argument correctly

* Removed unnecessary testing code

* Creating test case to check for viz folder

* Finished tests to check CLI argument reading and viz directory existence

* Use empty string instead of None in cli args test

* Use keep_viz_images false in most all test configs (except test to override config)

---------

Co-authored-by: roomrys <[email protected]>

* Allowing inference on multiple videos via `sleap-track` (#1784)

* implementing proposed code changes from issue #1777

* comments

* configuring output_path to support multiple video inputs

* fixing errors from preexisting test cases

* Test case / code fixes

* extending test cases for mp4 folders

* test case for output directory

* black and code rabbit fixes

* code rabbit fixes

* as_posix errors resolved

* syntax error

* adding test data

* black

* output error resolved

* edited for push to dev branch

* black

* errors fixed, test cases implemented

* invalid output test and invalid input test

* deleting debugging statements

* deleting print statements

* black

* deleting unnecessary test case

* implemented tmpdir

* deleting extraneous file

* fixing broken test case

* fixing test_sleap_track_invalid_output

* removing support for multiple slp files

* implementing talmo's comments

* adding comments

* Add object keypoint similarity method (#1003)

* Add object keypoint similarity method

* fix max_tracking

* correct off-by-one error

* correct off-by-one error

* Generate suggestions using max point displacement threshold (#1862)

* create function max_point_displacement, _max_point_displacement_video. Add to yaml file. Create test for new function . . . will need to edit

* remove unnecessary for loop, calculate proper displacement, adjusted tests accordingly

* Increase range for displacement threshold

* Fix frames not found bug

* Return the latter frame index

* Lint

---------

Co-authored-by: roomrys <[email protected]>

* Added Three Different Cases for Adding a New Instance (#1859)

* implemented paste with offset

* right click and then default will paste the new instance at the location of the cursor

* modified the logics for creating new instance

* refined the logic

* fixed the logic for right click

* refined logics for adding new instance at a specific location

* Remove print statements

* Comment code

* Ensure that we choose a non nan reference node

* Move OOB nodes to closest in-bounds position

---------

Co-authored-by: roomrys <[email protected]>

* Allow csv and text file support on sleap track (#1875)

* initial changes

* csv support and test case

* increased code coverage

* Error fixing, black, deletion of (self-written) unused code

* final edits

* black

* documentation changes

* documentation changes

* Fix GUI crash on scroll (#1883)

* Only pass wheelEvent to children that can handle it

* Add test for wheelEvent

* Fix typo to allow rendering videos with mp4 (Mac) (#1892)

Fix typo to allow rendering videos with mp4

* Do not apply offset when double clicking a `PredictedInstance` (#1888)

* Add offset argument to newInstance and AddInstance

* Apply offset of 10 for Add Instance menu button (Ctrl + I)

* Add offset for docks Add Instance button

* Make the QtVideoPlayer context menu unit-testable

* Add test for creating a new instance

* Add test for "New Instance" button in `InstancesDock`

* Fix typo in docstring

* Add docstrings and typehinting

* Remove unused imports and sort imports

* Refactor video writer to use imageio instead of skvideo (#1900)

* modify `VideoWriter` to use imageio with ffmpeg backend

* check to see if ffmpeg is present

* use the new check for ffmpeg

* import imageio.v2

* add imageio-ffmpeg to environments to test

* using avi format for now

* remove SKvideo videowriter

* test `VideoWriterImageio` minimally

* add more documentation for ffmpeg

* default output format for ffmpeg should be mp4

* print using `IMAGEIO` when using ffmpeg

* mp4 for ffmpeg

* use mp4 ending in test

* test `VideoWriterImageio` with avi file extension

* test video with odd size

* remove redundant filter since imageio-ffmpeg resizes automatically

* black

* remove unused import

* use logging instead of print statement

* import cv2 is needed for resize

* remove logging
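
For reference, a minimal sketch of writing frames through imageio's ffmpeg backend (requires `imageio-ffmpeg`), roughly the pattern these commits move to; the codec and options shown here are illustrative rather than SLEAP's actual defaults.

```python
import imageio.v2 as iio
import numpy as np

# Open an mp4 writer backed by ffmpeg (codec/options are illustrative).
writer = iio.get_writer(
    "clip.mp4",
    fps=30,
    codec="libx264",
    macro_block_size=2,  # only pad to even sizes instead of multiples of 16
)
for _ in range(30):
    frame = np.random.randint(0, 255, size=(240, 320, 3), dtype=np.uint8)
    writer.append_data(frame)  # frames are HxWx3 uint8 arrays
writer.close()
```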

* Use `Video.from_filename` when structuring videos (#1905)

* Use Video.from_filename when structuring videos

* Modify removal_test_labels to have extension in filename

* Use | instead of + in key commands (#1907)

* Use | instead of + in key commands

* Lint
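
A tiny example of the pattern, assuming qtpy imports; the shortcut itself is made up for illustration.

```python
from qtpy.QtCore import Qt
from qtpy.QtGui import QKeySequence

# Qt 6 drops support for combining key enums with "+", so the modifier and
# key are OR-ed together instead (this particular shortcut is illustrative).
shortcut = QKeySequence(Qt.CTRL | Qt.Key_S)
```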

* Replace QtDesktop widget in preparation for PySide6 (#1908)

* Replace to-be-deprecated QDesktopWidget

* Remove unused imports and sort remaining imports

* Remove unsupported |= operand to prepare for PySide6 (#1910)

Fixes TypeError: unsupported operand type(s) for |=: 'int' and 'Option'

* Use positional argument for exception type (#1912)

traceback.format_exception changed its first positional argument's name from etype to exc between Python 3.7 and 3.10
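
For example (a small illustration, not code from the PR), passing the arguments positionally keeps the call compatible across versions:

```python
import sys
import traceback

try:
    1 / 0
except ZeroDivisionError:
    exc_type, exc_value, exc_tb = sys.exc_info()
    # Positional arguments work on Python 3.7 through 3.10+, whereas the
    # keyword name changed (etype= on older versions, exc= on newer ones).
    message = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    print(message)
```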

* Replace all Video structuring with Video.cattr() (#1911)

* Remove unused AsyncVideo class (#1917)

Remove unused AsyncVideo

* Refactor `LossViewer` to use matplotlib (#1899)

* use updated syntax for QtAgg backend of matplotlib

* start add features to `MplCanvas` to replace QtCharts features in `LossViewer` (untested)

* remove QtCharts imports and replace with MplCanvas

* remove QtCharts imports and replace with MplCanvas

* start using MplCanvas in LossViewer instead of QtCharts (untested)

* use updated syntax

* Uncomment all commented out QtChart

* Add debug code

* Refactor monitor to use LossViewer._init_series method

* Add monitor only debug code

* Add methods for setting up axes and legend

* Add the matplotlib canvas to the widget

* Resize axis with data (no log support yet)

* Try using PathCollection for "batch"

* Get "batch" plotting with ax.scatter (no log support yet)

* Add log support

* Add a _resize_axis method

* Modify init_series to work for ax.plot as well

* Use matplotlib to plot epoch_loss line

* Add method _add_data_to_scatter

* Add _add_data_to_plot method

* Add docstring to _resize_axes

* Add matplotlib plot for val_loss

* Add matplotlib scatter for val_loss_best

* Avoid errors with setting log scale before any positive values

* Add x and y axes labels

* Set title (removing html tags)

* Add legend

* Adjust positioning of plot

* Lint

* Leave MplCanvas unchanged

* Removed unused training_monitor.LossViewer

* Resize fonts

* Move legend outside of plot

* Add debug code for monitor aesthetics

* Use latex formatting to bold parts of title

* Make axes aesthetic

* Add midpoint grid lines

* Set initial limits on x and y axes to be 0+

* Ensure x axis minimum is always resized to 0+

* Adjust plot to account for plateau patience title

* Add debug code for plateau patience title line

* Lint

* Set thicker line width

* Remove unused import

* Set log axis on initialization

* Make tick labels smaller

* Move plot down a smidge

* Move ylabel left a bit

* Lint

* Add class LossPlot

* Refactor LossViewer to use LossPlot

* Remove QtCharts code

* Remove debug codes

* Allocate space for figure items based on item's size

* Refactor LossPlot to use underscores for internal methods

* Ensure y_min, y_max not equal
Otherwise we get an unnecessary terminal message:
UserWarning: Attempting to set identical bottom == top == 3.0 results in singular transformations; automatically expanding.
  self.axes.set_ylim(y_min, y_max)

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: roomrys <[email protected]>
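
A condensed sketch of the embedding pattern behind these commits: a matplotlib `FigureCanvasQTAgg` subclass that a Qt layout can host, with a log-scaled loss axis, a line for epoch loss, and a scatter for batch loss. Class and method names are illustrative, not the actual `MplCanvas`/`LossPlot` API, and a recent matplotlib (QtAgg backend) is assumed.

```python
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg
from matplotlib.figure import Figure


class LossCanvas(FigureCanvasQTAgg):
    """Illustrative matplotlib canvas that a Qt layout can host."""

    def __init__(self, parent=None):
        fig = Figure(figsize=(5, 3))
        super().__init__(fig)
        self.setParent(parent)
        self.ax = fig.add_subplot(111)
        self.ax.set_xlabel("Batches")
        self.ax.set_ylabel("Loss")
        self.ax.set_yscale("log")  # log-scaled loss axis
        (self.epoch_line,) = self.ax.plot([], [], lw=2, label="Epoch loss")
        self.batch_scatter = self.ax.scatter([], [], s=4, label="Batch loss")
        self.ax.legend(loc="upper right", fontsize="small")

    def add_epoch_point(self, x, y):
        """Append one (x, y) point to the epoch-loss line and redraw lazily."""
        xs = [*self.epoch_line.get_xdata(), x]
        ys = [*self.epoch_line.get_ydata(), y]
        self.epoch_line.set_data(xs, ys)
        self.ax.relim()
        self.ax.autoscale_view()
        self.draw_idle()
```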

* Refactor `LossViewer` to use underscores for internal method names (#1919)

Refactor LossViewer to use underscores for internal method names

* Manually handle `Instance.from_predicted` structuring when not `None` (#1930)

* Use `tf.math.mod` instead of `%` (#1931)

* Option for Max Stride to be 128 (#1941)

Co-authored-by: Max  Weinberg <[email protected]>

* Add discussion comment workflow (#1945)

* Add a bot to autocomment on workflow

* Use github markdown warning syntax

* Add a multiline warning

* Change happy coding to happy SLEAPing

Co-authored-by: Talmo Pereira <[email protected]>

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Add comment on issue workflow (#1946)

* Add workflow to test conda packages (#1935)

* Add missing imageio-ffmpeg to meta.ymls (#1943)

* Update installation docs 1.4.1 (#1810)

* [wip] Updated installation docs

* Add tabs for different OS installations

* Move installation methods to tabs

* Use tabs.css

* Fix styling error (line under last tab in terminal hint)

* Add installation instructions before TOC

* Replace mamba with conda

* Lint

* Find good light colors
that were not switching when changing dark/light themes

* Get color scheme switching
with dark/light toggle button

* Upgrade website build dependencies

* Remove seemingly unneeded dependencies from workflow

* Add myst-nb>=0.16.0 lower bound

* Trigger dev website build

* Fix minor typo in css

* Add miniforge and one-liner installs for package managers

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Add imageio dependencies for pypi wheel (#1950)

Add imageio dependencies for pypi wheel

Co-authored-by: roomrys <[email protected]>

* Do not always color skeletons table black (#1952)

Co-authored-by: roomrys <[email protected]>

* Remove no module named work error (#1956)

* Do not always color skeletons table black

* Remove offending (possibly unneeded) line
that causes the no module named work error to print in terminal

* Remove offending (possibly unneeded) line
that causes the no module named work error to print in terminal

* Remove accidentally added changes

* Add (failing) test to ensure menu-item updates with state change

* Reconnect callback for menu-item (using lambda)

* Add (failing) test to ensure menu-item updates with state change

Do not assume initial state

* Reconnect callback for menu-item (using lambda)

---------

Co-authored-by: roomrys <[email protected]>

* Add `normalized_instance_similarity` method  (#1939)

* Add normalize function

* Expose normalization function

* Fix tests

* Expose object keypoint sim function

* Fix tests
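
A guess at the intent, sketched under the assumption that "normalized" means scaling keypoint coordinates by the image size before computing similarity, so scores do not depend on video resolution; the real `normalized_instance_similarity` signature may differ.

```python
import numpy as np


def normalize_points(points: np.ndarray, img_hw: tuple) -> np.ndarray:
    """Scale (n_nodes, 2) pixel coordinates into [0, 1] by (height, width).

    Assumes points are (x, y) ordered; purely illustrative.
    """
    height, width = img_hw
    return points / np.array([width, height], dtype=float)


ref = np.array([[100.0, 50.0], [200.0, 80.0]])
print(normalize_points(ref, (480, 640)))
```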

* Handle skeleton decoding internally (#1961)

* Reorganize (and add) imports

* Add (and reorganize) imports

* Modify decode_preview_image to return bytes if specified

* Implement (minimally tested) replace_jsonpickle_decode

* Add support for using idx_to_node map
i.e. loading from Labels (slp file)

* Ignore None items in reduce_list

* Convert large function to SkeletonDecoder class

* Update SkeletonDecoder.decode docstring

* Move decode_preview_image to SkeletonDecoder

* Use SkeletonDecoder instead of jsonpickle in tests

* Remove unused imports

* Add test for decoding dict vs tuple pystates

* Handle skeleton encoding internally (#1970)

* start class `SkeletonEncoder`

* _encoded_objects needs to be a dict to add to

* add notebook for testing

* format

* fix type in docstring

* finish classmethod for encoding Skeleton as a json string

* test encoded Skeleton as json string by decoding it

* add test for decoded encoded skeleton

* update jupyter notebook for easy testing

* constraining attrs in dev environment to make sure decode format is always the same locally

* encode links first then encode source then target then type

* save first encoding statically as an input to _get_or_assign_id so that we do not always get py/id

* save first encoding statically

* first encoding is passed to _get_or_assign_id

* use first_encoding variable to determine if we should assign a py/id

* add print statements for debugging

* update notebook for easy testing

* black

* remove comment

* adding attrs constraint to show this passes for certain attrs version only

* add import

* switch out jsonpickle.encode

* oops remove import

* can attrs be unconstrained?

* forgot comma

* pin attrs for testing

* test Skeleton from json, from template, and with symmetries

* use SkeletonEncoder.encode

* black

* try removing None values in EdgeType reduced

* Handle case when nodes are replaced by integer indices from caller

* Remove prototyping notebook

* Remove attrs pins

* Remove sort keys (which flips the necessary ordering of our py/ids)

* Do not add extra indents to encoded file

* Only append links after fully encoded (fat-finger)

* Remove outdated comment

* Lint

---------

Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: roomrys <[email protected]>
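
To make the py/id bookkeeping in these commits concrete, here is a toy sketch of the jsonpickle-style scheme being reproduced: the first occurrence of an object is written in full and every later occurrence becomes a `{"py/id": n}` reference. Class and field names are invented for illustration and do not match `SkeletonEncoder`.

```python
import json


class TinyPyIdEncoder:
    """Toy jsonpickle-like encoder: full object once, then py/id references."""

    def __init__(self):
        self._ids = {}  # node name -> assigned py/id

    def encode_node(self, node: dict) -> dict:
        name = node["name"]
        if name in self._ids:
            # Already encoded earlier in the document: emit only a reference.
            return {"py/id": self._ids[name]}
        # First encounter: assign the next id and emit the full object.
        self._ids[name] = len(self._ids) + 1
        return {"py/object": "sleap.skeleton.Node", "py/state": dict(node)}

    def encode_edges(self, edges) -> str:
        links = [
            {"source": self.encode_node(src), "target": self.encode_node(dst)}
            for src, dst in edges
        ]
        return json.dumps({"links": links})


enc = TinyPyIdEncoder()
print(enc.encode_edges([({"name": "head"}, {"name": "thorax"}),
                        ({"name": "head"}, {"name": "abdomen"})]))
# The second "head" is emitted as {"py/id": 1} rather than a full object.
```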

* Pin ndx-pose<0.2.0 (#1978)

* Pin ndx-pose<0.2.0

* Typo

* Sort encoded `Skeleton` dictionary for backwards compatibility  (#1975)

* Add failing test to check that encoded Skeleton is sorted

* Sort Skeleton dictionary before encoding

* Remove unused import

* Disable comment bot for now

* Fix COCO Dataset Loading for Invisible Keypoints (#2035)

Update coco.py

# Fix COCO Dataset Loading for Invisible Keypoints

## Issue
When loading COCO datasets, keypoints marked as invisible (flag=0) are currently skipped and later placed randomly within the instance's bounding box. However, in COCO format, these keypoints may still have valid coordinate information that should be preserved (see toy_dataset for expected vs. current behavior).

## Changes
Modified the COCO dataset loading logic to:
- Check if invisible keypoints (flag=0) have non-zero coordinates
- If coordinates are (0,0), skip the point (existing behavior)
- If coordinates are not (0,0), create the point at those coordinates but mark it as not visible
- Maintain existing behavior for visible (flag=2) and labeled-but-not-visible (flag=1) keypoints (see the sketch below)

* Lint
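
A small sketch of the rule described above, assuming the flat COCO `[x1, y1, v1, x2, y2, v2, ...]` keypoint layout; the helper name and return structure are illustrative, not the actual `sleap.io.format.coco` code.

```python
def parse_coco_keypoints(kpts: list) -> list:
    """Apply the visibility rule above to a flat [x1, y1, v1, x2, y2, v2, ...] list."""
    points = []
    for x, y, flag in zip(kpts[0::3], kpts[1::3], kpts[2::3]):
        if flag == 0 and x == 0 and y == 0:
            points.append(None)            # no coordinate info at all: skip
        elif flag == 0:
            points.append((x, y, False))   # keep the coordinates, mark not visible
        elif flag == 1:
            points.append((x, y, False))   # labeled but not visible (COCO semantics)
        else:
            points.append((x, y, True))    # flag == 2: labeled and visible
    return points


# The second keypoint has real coordinates but flag=0, so it is kept invisible
# instead of being skipped; the third has no information and is skipped.
print(parse_coco_keypoints([10, 12, 2, 33, 40, 0, 0, 0, 0]))
```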

* Add tracking score as seekbar header options (#2047)

* Add `tracking_score` as a constructor arg for `PredictedInstance`

* Add `tracking_score` to ID models

* Add fixture with tracking scores

* Add tracking score to seekbar header

* Add bonsai guide for sleap docs (#2050)

* [WIP] Add bonsai guide page

* Add more information to the guide with images

* add branch for website build

* Typos

* fix links

* Include suggestions

* Add more screenshots and refine the doc

* Remove branch from website workflow

* Completed documentation edits from PR made by reviewer + review bot.

---------

Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Don't mark complete on instance scaling (#2049)

* Add check for instances with track assigned before training ID models (#2053)

* Add menu item for deleting instances beyond frame limit (#1797)

* Add menu item for deleting instances beyond frame limit

* Add test function to test the instances returned

* typos

* Update docstring

* Add frame range form

* Extend command to use frame range

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Highlight instance box on hover (#2055)

* Make node marker and label sizes configurable via preferences (#2057)

* Make node marker and label sizes configurable via preferences

* Fix test

* Enable touchpad pinch to zoom (#2058)

* Fix import PySide2 -> qtpy (#2065)

* Fix import PySide2 -> qtpy

* Remove unnecessary print statements.

* Add channels for pip conda env (#2067)

* Add channels for pypi conda env

* Trigger dev website build

* Separate the video name and its filepath columns in `VideoTablesModel` (#2052)

* add option to show video names with filepath

* add doc

* new feature added successfully

* delete unnecessary code

* remove attributes from video object

* Update dataviews.py

* remove all properties

* delete toggle option

* remove video show

* fix the order of the columns

* remove options

* Update sleap/gui/dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* Update sleap/gui/dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* use pathlib instead of substrings

* Update dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* Use Path instead of pathlib.Path
and sort imports and remove unused imports

* Use item.filename instead of getattr

---------

Co-authored-by: Liezl Maree <[email protected]>
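
The pathlib split mentioned above boils down to something like this (column names are illustrative):

```python
from pathlib import Path

filename = "/data/session1/video.mp4"
video_name = Path(filename).name           # "video.mp4"      -> name column
video_folder = str(Path(filename).parent)  # "/data/session1" -> filepath column
print(video_name, video_folder)
```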

* Make status bar dependent on UI mode (#2063)

* fix bug in dark mode

* fix toggle case

---------

Co-authored-by: Liezl Maree <[email protected]>

* Bump version to 1.4.1 (#2062)

* Bump version to 1.4.1

* Trigger conda/pypi builds (no upload)

* Trigger website build

* Add dev channel to installation instructions

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Add -c sleap/label/dev channel for win/linux
- also trigger website build

---------

Co-authored-by: Scott Yang <[email protected]>
Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: getzze <[email protected]>
Co-authored-by: Lili Karashchuk <[email protected]>
Co-authored-by: Sidharth Srinath <[email protected]>
Co-authored-by: sidharth srinath <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: KevinZ0217 <[email protected]>
Co-authored-by: Elizabeth <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: eberrigan <[email protected]>
Co-authored-by: vaibhavtrip29 <[email protected]>
Co-authored-by: Keya Loding <[email protected]>
Co-authored-by: Keya Loding <[email protected]>
Co-authored-by: Hajin Park <[email protected]>
Co-authored-by: Elise Davis <[email protected]>
Co-authored-by: gqcpm <[email protected]>
Co-authored-by: Andrew Park <[email protected]>
Co-authored-by: roomrys <[email protected]>
Co-authored-by: MweinbergUmass <[email protected]>
Co-authored-by: Max  Weinberg <[email protected]>
Co-authored-by: DivyaSesh <[email protected]>
Co-authored-by: Felipe Parodi <[email protected]>
Co-authored-by: croblesMed <[email protected]>