
Scale new instances to new frame size #1568

Merged

Conversation

@ssrinath22 (Contributor) commented Oct 24, 2023

Description

Fixes the x and y values used when copying an instance across frames so that the copied coordinates are scaled to the new frame size and remain within its boundaries.
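
The gist of the fix, as a minimal illustrative sketch (variable names follow the diff discussed in the review below; `skeleton.nodes` and the loop structure are assumptions, not an exact copy of `set_visible_nodes` in sleap/gui/commands.py):

```python
# Illustrative sketch only: scale copied node coordinates to the new frame size.
# Assumes Video.shape is (n_frames, height, width, channels), as used in the diff.
old_height, old_width = copy_instance.frame.video.shape[1:3]
new_height, new_width = new_instance.frame.video.shape[1:3]

# Compute the scale factors once, outside the per-node loop.
scale_width = new_width / old_width
scale_height = new_height / old_height

for node in skeleton.nodes:  # hypothetical iteration over the skeleton's nodes
    if node in copy_instance and not copy_instance[node].isnan():
        x_new = copy_instance[node].x * scale_width
        y_new = copy_instance[node].y * scale_height
        # ...copy x_new, y_new, and the visibility flag into the new instance
```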

Types of changes

  • Bugfix
  • New feature
  • Refactor / Code style update (no logical changes)
  • Build / CI changes
  • Documentation Update
  • Other (explain)

Does this address any currently open issues?

Outside contributors checklist

  • Review the guidelines for contributing to this repository
  • Read and sign the CLA and add yourself to the authors list
  • Make sure you are making a pull request against the develop branch (not main), and that your branch is based off develop
  • Add tests that prove your fix is effective or that your feature works
  • Add necessary documentation (if appropriate)

Thank you for contributing to SLEAP!

❤️

Summary by CodeRabbit

New Feature:

  • Enhanced video frame resizing in the SLEAP GUI. The update ensures that when the size of a video frame changes, the coordinates of visible nodes are accurately scaled to fit within the new frame size. This improvement maintains the correct positioning of nodes, providing a more accurate and consistent user experience during video analysis.

Documentation:

  • Added a new entry for Lili Karashchuk with email [email protected] and affiliation with the Allen Institute of Neural Dynamics.

Refactor:

  • Modified the read_frames function in deeplabcut.py to improve efficiency and track information.

Bug Fix:

  • Fixed a spelling error in the summary() function of nn/system.py.

Style:

  • Updated the encoding process in the to_hdf5 function of video.py to optimize storage of video frames in the HDF5 file.

Test:

  • Modified the test case test_import_labels_from_dlc_folder() in test_commands.py to assert the correct number of tracks in the labels object.

Chores:

  • Updated dependencies to the latest versions.

Note: The changes in tests/io/test_formats.py and sleap/gui/commands.py are not categorized as they do not fall under any of the specified categories.

@coderabbitai (bot) commented Oct 24, 2023

Walkthrough

The introduced changes focus on improving the set_visible_nodes function in sleap/gui/commands.py. The new logic ensures that the coordinates of nodes are correctly scaled when the video frame size changes, maintaining the relative positions of the nodes within the frame.

Changes

  • sleap/gui/commands.py: The set_visible_nodes function now calculates new x and y coordinates based on the ratio of old and new frame sizes. This ensures that the nodes' positions are scaled correctly when the frame size changes.
  • AUTHORS: Added a new entry for Lili Karashchuk with email [email protected] and affiliation with the Allen Institute of Neural Dynamics.
  • sleap/io/format/deeplabcut.py: Various changes including import statements, modifications to the read_frames function, and updates to video file searching logic. These changes enhance the functionality and efficiency of reading frames and track information.
  • sleap/io/video.py: Changes to the to_hdf5 function optimize the encoding and storage of video frames in the HDF5 file. The frames are now encoded as int8 type and padded with zeroes. The dataset creation has been modified accordingly.
  • sleap/nn/system.py: A spelling correction in the summary() function, changing "Initalized" to "Initialized".
  • tests/gui/test_commands.py: Modification in the test case test_import_labels_from_dlc_folder(), changing the assertion for the number of tracks in the labels object.
  • tests/io/test_formats.py: Removal of a parameterization line and addition of a new test function test_maudlc with parameterization for different test data. The function performs various assertions on the labels object and checks the correctness of tracks.

🐇

"In the land of code, where the shadows lie,

A rabbit hops, under the binary sky.

With each key press, a new world unfurls,

In the dance of brackets, and the swirl of curls.

A change is made, a node takes flight,

Scaling with grace, in the video's light.

AUTHORS grow, new names appear,

Lili Karashchuk joins, spreading cheer.

DeepLabCut, a format refined,

Tracks and frames, in harmony aligned.

Video encoding, a touch so fine,

Frames stored with care, in HDF5's shrine.

System summary, a word corrected,

Attention to detail, never neglected.

Tests evolve, assertions renewed,

Labels and tracks, correctness pursued.

In this coder's realm, changes take flight,

A dance of progress, in the day and night." 🎉


Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • If you reply to a review comment from CodeRabbit, the bot will automatically respond.
  • To engage with CodeRabbit bot directly around the specific lines of code in the PR, mention @coderabbitai in your review comment
  • Note: Review comments are made on code diffs or files, not on the PR overview.
  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai help to get help.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.

Note: For conversation with the bot, please use the review comments on code diffs or files.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.json

@codecov (bot) commented Oct 25, 2023

Codecov Report

Merging #1568 (4367467) into develop (dbe14a8) will increase coverage by 0.01%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##           develop    #1568      +/-   ##
===========================================
+ Coverage    73.36%   73.38%   +0.01%     
===========================================
  Files          134      134              
  Lines        24012    24024      +12     
===========================================
+ Hits         17616    17629      +13     
+ Misses        6396     6395       -1     
Files Coverage Δ
sleap/gui/commands.py 61.96% <100.00%> (+0.28%) ⬆️

... and 1 file with indirect coverage changes


@roomrys (Collaborator) commented Oct 25, 2023

@coderabbitai review

@coderabbitai (bot) left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between ae5764c and 6f73dfe.
Files selected for processing (1)
  • sleap/gui/commands.py (1 hunks)
Files skipped from review due to trivial changes (1)
  • sleap/gui/commands.py

@roomrys (Collaborator) left a comment

Just some optimization suggestions

Comment on lines 3008 to 3009
# x=copy_instance[node].x,
# y=copy_instance[node].y,

Suggested change (remove these commented-out lines):
# x=copy_instance[node].x,
# y=copy_instance[node].y,

@@ -2995,9 +2995,20 @@ def set_visible_nodes(
if node in copy_instance and not copy_instance[node].isnan():
# just copy x, y, and visible

Suggested change
# just copy x, y, and visible
# Ensure x, y inside current frame, then copy x, y, and visible

Comment on lines 3000 to 3003
old_size_width = copy_instance.frame.video.shape[2]
old_size_height = copy_instance.frame.video.shape[1]
new_size_width = new_instance.frame.video.shape[2]
new_size_height = new_instance.frame.video.shape[1]

Can we move this calculation outside the for loop?

Comment on lines 3004 to 3005
x_new = (x_old/old_size_width) * new_size_width
y_new = (y_old/old_size_height) * new_size_height

Similarly, we can probably optimize by calculating (outside the for loop) a single factor to multiply by as

scale_width = new_size_width / old_size_width

same for height and then

Suggested change
x_new = (x_old/old_size_width) * new_size_width
y_new = (y_old/old_size_height) * new_size_height
x_new = x_old * scale_width
y_new = y_old * scale_height
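
Precomputing scale_width and scale_height once outside the per-node loop avoids repeating the frame-size lookups and divisions for every node and makes the intent (a fixed frame-to-frame scale factor) explicit; the final version of the diff shown further down adopts this form.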

@roomrys changed the title from "Merged changes to fix issue 1551" to "Scale new instances to new frame size" on Oct 25, 2023
@coderabbitai (bot) left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 6f73dfe and 78560e1.
Files selected for processing (1)
  • sleap/gui/commands.py (1 hunks)
Files skipped from review due to trivial changes (1)
  • sleap/gui/commands.py

…ath22/sleap_sid into liezl/fix-typehinting-add-instance

This merge refactors the scaling change a little bit and optimizes code
readability and runtime.
@roomrys (Collaborator) left a comment

Nits and worries, but no serious action required.

sleap/gui/commands.py: two outdated review comment threads (resolved)
@coderabbitai (bot) left a comment

Review Status

Actionable comments generated: 4

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 78560e1 and 4367467.
Files ignored due to filter (4)
  • tests/data/dlc/labeled-data/video/CollectedData_LM.csv
  • tests/data/dlc/labeled-data/video/maudlc_testdata.csv
  • tests/data/dlc/labeled-data/video/maudlc_testdata_v2.csv
  • tests/data/dlc/madlc_230_config.yaml
Files selected for processing (7)
  • AUTHORS (1 hunks)
  • sleap/gui/commands.py (1 hunks)
  • sleap/io/format/deeplabcut.py (5 hunks)
  • sleap/io/video.py (1 hunks)
  • sleap/nn/system.py (1 hunks)
  • tests/gui/test_commands.py (1 hunks)
  • tests/io/test_formats.py (2 hunks)
Files skipped from review due to trivial changes (2)
  • AUTHORS
  • sleap/nn/system.py
Additional comments: 13
sleap/gui/commands.py (2)
  • 2992-2998: The calculation of the scale factors assumes that the aspect ratio of the old and new frames are the same. If this is not the case, the scaling might distort the positions of the nodes. Please verify if this assumption holds true in all cases.

  • 3001-3013: The code assumes that the copy_instance and new_instance have the same set of nodes. If this is not the case, the code might fail or lead to incorrect results. Please verify if this assumption holds true in all cases.

sleap/io/video.py (2)
  • 1446-1446: The astype("int8") conversion might cause data loss if the encoded image data contains values outside the range of int8 (-128 to 127). Please verify that the encoded image data is within this range before the conversion.

  • 1455-1460: The use of gzip compression is a good practice for reducing the size of the stored data. However, it might increase the time it takes to read and write data. If performance is a concern, consider using a faster compression algorithm or no compression at all. (A brief storage sketch follows this comment list.)

sleap/io/format/deeplabcut.py (5)
  • 19-28: The import statements are well organized and follow the PEP8 guidelines. The imports are grouped in the correct order: standard library imports, related third-party imports, and local application/library specific imports.

  • 119-130: The logic for creating a dictionary to keep track of starting frame indices for each individual/track is correct. The use of a dictionary here is efficient for lookups when checking if an animal_name already exists in the keys.

  • 178-209: The logic for creating instances and tracks is correct. The use of a dictionary to store instance points is efficient for lookups. The check for any non-missing points before creating an instance is a good practice to avoid creating instances with no valid data.

  • 281-287: The logic for creating a skeleton based on the project data is correct. The use of the get method with a default value of False is a good practice to avoid KeyError if the key does not exist in the dictionary.

  • 311-334: The logic for finding a full video corresponding to a subdirectory is correct. The use of os.scandir is efficient for iterating over the directory. The check for the existence of the videos directory before scanning it is a good practice to avoid errors. The check for the existence of the video file before creating a Video object is also a good practice to avoid errors.

tests/io/test_formats.py (3)
  • 198-203: The new test function test_madlc is parameterized with different test data. It reads labels from the test data and performs various assertions on the labels and their properties. This is a good practice as it allows for testing the function with different inputs, increasing the coverage of the test.

  • 234-242: The new test function test_maudlc is parameterized with different test data. It reads labels from the test data and performs various assertions on the labels and their properties. This is a good practice as it allows for testing the function with different inputs, increasing the coverage of the test.

  • 294-303: The test function test_maudlc checks the correctness of tracks in the labels object. This is a good practice as it ensures that the tracks are being correctly read and processed.

tests/gui/test_commands.py (1)
  • 65-68: The test case test_import_labels_from_dlc_folder() has been updated to assert that the number of tracks in the labels object is 3. This change is consistent with the pull request summary, which mentions that the number of tracks should now be 3 instead of 0. Ensure that the test data in tests/data/dlc_multiple_datasets has been updated to reflect this change.

The test case test_ExportAnalysisFile() is comprehensive and covers a wide range of scenarios. It tests the ExportAnalysisFile command with different parameters and asserts the expected behavior. The test case also handles exceptions and asserts that the correct number of output files are written. No changes have been made to this test case in the current pull request.
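
Regarding the int8 encoding and gzip compression of frames in sleap/io/video.py noted above, here is a minimal sketch of the general technique: per-frame encoded byte buffers are zero-padded to a common length and written as a compressed dataset. This is illustrative only; `encoded_frame_buffers`, the file name, and the dataset path are hypothetical and not taken from SLEAP's actual implementation.

```python
import h5py
import numpy as np

# encoded_frame_buffers: hypothetical list of bytes objects, one per encoded frame.
encoded = [np.frombuffer(buf, dtype=np.int8) for buf in encoded_frame_buffers]
max_len = max(len(e) for e in encoded)

# Zero-pad each encoded frame to the same length so they fit a 2D int8 dataset.
padded = np.zeros((len(encoded), max_len), dtype=np.int8)
for i, e in enumerate(encoded):
    padded[i, : len(e)] = e

with h5py.File("frames.h5", "w") as f:
    # gzip keeps the stored size down at some cost in read/write speed.
    f.create_dataset("video/frames", data=padded, compression="gzip")
```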

Comment on lines 3003 to +3009
if node in copy_instance and not copy_instance[node].isnan():
# just copy x, y, and visible
# we don't want to copy a PredictedPoint or score attribute
# Ensure x, y inside current frame, then copy x, y, and visible.
# We don't want to copy a PredictedPoint or score attribute.
x_old = copy_instance[node].x
y_old = copy_instance[node].y
x_new = x_old * scale_width
y_new = y_old * scale_height
@coderabbitai (bot) commented Oct 30, 2023

The code does not check if the new x and y values are within the bounds of the new frame. If the old frame is larger than the new frame, the scaled x and y values might be outside the bounds of the new frame. Consider adding a check to ensure that the new x and y values are within the bounds of the new frame.

x_new = min(x_old * scale_width, new_size_width - 1)
y_new = min(y_old * scale_height, new_size_height - 1)

@roomrys (Collaborator) replied:

I don't think that is true since we are using a ratio of new frame size to old frame size.

@coderabbitai (bot) replied:

@roomrys You're correct that the code is using a ratio of new frame size to old frame size to scale the x and y values. However, the issue arises when the old frame is larger than the new frame. In such a case, the scaled x and y values could potentially be larger than the new frame size, resulting in coordinates that are outside the bounds of the new frame.

For example, let's say the old frame size is 200x200 and the new frame size is 100x100. If a point in the old frame has coordinates (150, 150), when scaled down using the ratio, the new coordinates would be (75, 75). This is within the bounds of the new frame and works perfectly.

However, if a point in the old frame has coordinates (250, 250), when scaled down using the ratio, the new coordinates would be (125, 125). This is outside the bounds of the new frame, which could lead to issues.

To prevent this, we can add a check to ensure that the new x and y values are within the bounds of the new frame, as suggested in the previous comment. This would look like:

x_new = min(x_old * scale_width, new_size_width - 1)
y_new = min(y_old * scale_height, new_size_height - 1)

This ensures that even if the scaled x and y values are larger than the new frame size, they are capped at the maximum possible value within the new frame.

@roomrys (Collaborator) replied:

But... the points would be within the old frame size to begin with.

@coderabbitai (bot) replied:

@roomrys You're absolutely correct, and I apologize for the oversight. The points in the old frame would indeed be within the bounds of the old frame size. Therefore, when scaled down using the ratio of new frame size to old frame size, they should also fall within the bounds of the new frame size. There's no need for the additional check I suggested earlier. Thank you for pointing out this mistake.

sleap/io/video.py: two outdated review comment threads (resolved)
tests/io/test_formats.py: outdated review comment thread (resolved)
@roomrys (Collaborator) left a comment

Hey, it looks like a bunch of extra files were added and committed to this PR (we only care about the changes in commands.py). Please revert/remove the changes to the other files.

@roomrys changed the base branch from liezl/fix-typehinting-add-instance to develop on October 31, 2023, 20:22
@roomrys merged commit cb82d36 into talmolab:develop on Nov 1, 2023
9 checks passed
@roomrys (Collaborator) commented Feb 7, 2024

I've run into a bug that originates from this PR - but might actually just be a side-effect of another issue.

It looks like copy_instance has a frame attribute of None.

Is this the case for all PredictedInstances? No, just for the PredictedInstances that were created when I ran inference for my bottom-up id model with max-instances of 1:

[p_inst for p_inst in labels.predicted_instances if p_inst.frame is None]
[PredictedInstance(vi...core=0.00), PredictedInstance(vi...core=0.00), PredictedInstance(vi...core=0.00)]
0:
PredictedInstance(video=None, frame_idx=None, points=[head: (780.3, 676.1, 0.89), thorax: (795.8, 643.9, 0.71), abdomen: (808.2, 608.2, 0.54), wingL: (816.1, 595.6, 0.44), wingR: (808.2, 592.4, 0.42), forelegL4: (795.7, 683.9, 0.48), forelegR4: (771.7, 676.0, 0.51), midlegL4: (824.2, 660.0, 0.45), midlegR4: (771.7, 627.9, 0.39), hindlegL4: (827.7, 623.9, 0.41), hindlegR4: (787.6, 611.6, 0.40), eyeL: (792.5, 675.7, 0.87), eyeR: (776.1, 667.9, 0.85)], score=8.61, track=None, tracking_score=0.00)
1:
PredictedInstance(video=None, frame_idx=None, points=[head: (479.9, 663.9, 0.99), thorax: (492.3, 631.7, 0.70), abdomen: (508.0, 603.6, 0.42), wingL: (516.2, 583.8, 0.35), wingR: (511.9, 580.2, 0.31), forelegL4: (492.0, 672.0, 0.52), forelegR4: (468.1, 667.5, 0.59), midlegL4: (524.0, 644.3, 0.42), midlegR4: (467.6, 619.7, 0.36), hindlegL4: (524.1, 615.8, 0.39), hindlegR4: (480.2, 596.1, 0.36), eyeL: (492.0, 663.5, 0.95), eyeR: (472.8, 652.4, 0.89)], score=8.26, track=None, tracking_score=0.00)
2:
PredictedInstance(video=None, frame_idx=None, points=[head: (536.1, 472.4, 0.80), thorax: (556.3, 443.9, 0.67), abdomen: (584.3, 411.7, 0.58), wingL: (596.1, 396.2, 0.55), wingR: (592.1, 396.1, 0.54), forelegL4: (547.9, 484.0, 0.48), forelegR4: (524.3, 475.8, 0.43), midlegL4: (584.2, 464.3, 0.40), midlegR4: (531.8, 427.7, 0.35), hindlegL4: (599.6, 427.9, 0.41), hindlegR4: (563.9, 400.1, 0.34), eyeL: (551.8, 472.3, 0.76), eyeR: (532.3, 464.0, 0.71)], score=8.06, track=None, tracking_score=0.00)
len():
3

In BottomUpPredictor._make_labeled_frames_from_generator, we seem to make the LabeledFrame correctly (specifying the frame_ind):

sleap/nn/inference.py, lines 3251–3257 (commit 5fa1079):

predicted_frames.append(
LabeledFrame(
video=data_provider.videos[video_ind],
frame_idx=frame_ind,
instances=predicted_instances,
)
)

Passing the list of PredictedInstances to the instances argument for LabeledFrame should then run through this LabeledFrame.__attrs_post_init__ method which adds the frame reference to each PredictedInstance:

sleap/instance.py, lines 1304–1306 (commit 5fa1079):

# Make sure all instances have a reference to this frame
for instance in self.instances:
instance.frame = self

So what is going on? It seemed to be an issue that we faced when predictions were merged into an existing LabeledFrame. A solution was suggested in

Steps

The steps I took to create the bug were:

  1. Predicted on user-labeled frames
  2. Double-clicked prediction to create a user Instance
  3. Received error

Traceback

Traceback (most recent call last):
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\app.py", line 1592, in _handle_instance_double_click
    copy_instance=instance, mark_complete=mark_complete
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 522, in newInstance
    mark_complete=mark_complete,
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 241, in execute
    command().execute(context=self, params=kwargs)
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 137, in execute
    self.do_with_signal(context, params)
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 161, in do_with_signal
    cls.do_action(context, params)
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 2870, in do_action
    from_prev_frame=from_prev_frame,
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 2902, in create_new_instance
    mark_complete=mark_complete,
  File "d:\social-leap-estimates-animal-poses\source\sleap\sleap\gui\commands.py", line 2993, in set_visible_nodes
    old_size_width = copy_instance.frame.video.shape[2]
AttributeError: 'NoneType' object has no attribute 'video'

Variables

(screenshot of debugger variables omitted)
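
A minimal defensive sketch for the failure mode described above (hypothetical, not part of this PR; whether to skip scaling, raise, or repair the missing frame reference is a separate design decision):

```python
# Hypothetical guard: fall back to no scaling when either instance lacks a
# frame/video reference, e.g. PredictedInstances whose .frame ended up as None
# after a merge (the case shown in the traceback above).
if copy_instance.frame is None or new_instance.frame is None:
    scale_width = scale_height = 1.0
else:
    old_height, old_width = copy_instance.frame.video.shape[1:3]
    new_height, new_width = new_instance.frame.video.shape[1:3]
    scale_width = new_width / old_width
    scale_height = new_height / old_height
```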

roomrys added a commit that referenced this pull request Dec 19, 2024
* Remove no-op code from #1498

* Add options to set background color when exporting video (#1328)

* implement #921

* simplified form / refractor

* Add test function and update cli docs

* Improve test function to check background color

* Improve comments

* Change background options to lowercase

* Use coderabbitai suggested `fill`

---------

Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Increase range on batch size (#1513)

* Increase range on batch size

* Set maximum to a factor of 2

* Set default callable for `match_lists_function` (#1520)

* Set default for `match_lists_function`

* Move test code to official tests

* Check using expected values

* Allow passing in `Labels` to `app.main` (#1524)

* Allow passing in `Labels` to `app.main`

* Load the labels object through command

* Add warning when unable to switch back to CPU mode

* Replace (broken) `--unrag` with `--ragged` (#1539)

* Fix unrag always set to true in sleap-export

* Replace unrag with ragged

* Fix typos

* Add function to create app (#1546)

* Refactor `AddInstance` command (#1561)

* Refactor AddInstance command

* Add staticmethod wrappers

* Return early from set_visible_nodes

* Import DLC with uniquebodyparts, add Tracks (#1562)

* Import DLC with uniquebodyparts, add Tracks

* add tests

* correct tests

* Make the hdf5 videos store as int8 format (#1559)

* make the hdf5 video dataset type as proper int8 by padding with zeros

* add gzip compression

* Scale new instances to new frame size (#1568)

* Fix typehinting in `AddInstance`

* brought over changes from my own branch

* added suggestions

* Ensured google style comments

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: sidharth srinath <[email protected]>

* Fix package export (#1619)

Add check for empty videos

* Add resize/scroll to training GUI (#1565)

* Make resizable training GUI and add adaptive scroll bar

* Set a maximum window size

---------

Co-authored-by: Liezl Maree <[email protected]>

* support loading slp files with non-compound types and str in metadata (#1566)

Co-authored-by: Liezl Maree <[email protected]>

* change inference pipeline option to tracking-only (#1666)

change inference pipeline none option to tracking-only

* Add ABL:AOC 2023 Workshop link (#1673)

* Add ABL:AOC 2023 Workshop link

* Trigger website build

* Graceful failing with seeking errors (#1712)

* Don't try to seek to faulty last frame on provider initialization

* Catch seeking errors and pass

* Lint

* Fix IndexError for hdf5 file import for single instance analysis files (#1695)

* Fix hdf5 read for single instance analysis files

* Add test

* Small test files

* removing unneccessary fixtures

* Replace imgaug with albumentations (#1623)

What's the worst that could happen?

* Initial commit

* Fix augmentation

* Update more deps requirements

* Use pip for installing albumentations and avoid reinstalling OpenCV

* Update other conda envs

* Fix out of bounds albumentations issues and update dependencies (#1724)

* Install albumentations using conda-forge in environment file

* Conda install albumentations

* Add ndx-pose to pypi requirements

* Keep out of bounds points

* Black

* Add ndx-pose to conda install in environment file

* Match environment file without cuda

* Ordered dependencies

* Add test

* Delete comments

* Add conda packages to mac environment file

* Order dependencies in pypi requirements

* Add tests with zeroes and NaNs for augmentation

* Back

* Black

* Make comment one line

* Add todo for later

* Black

* Update to new TensorFlow conda package (#1726)

* Build conda package locally

* Try 2.8.4

* Merge develop into branch to fix dependencies

* Change tensorflow version to 2.7.4 in where conda packages are used

* Make tensorflow requirements in pypi looser

* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda

* Change tensorflow version in `environment_no_cuda.yml` to test using CI

* Test new sleap/tensorflow package

* Reset build number

* Bump version

* Update mac deps

* Update to Arm64 Mac runners

* pin `importlib-metadata`

* Pin more stuff on mac

* constrain `opencv` version due to new qt dependencies

* Update more mac stuff

* Patches to get to green

* More mac skipping

---------

Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Fix CI on macosx-arm64 (#1734)

* Build conda package locally

* Try 2.8.4

* Merge develop into branch to fix dependencies

* Change tensorflow version to 2.7.4 in where conda packages are used

* Make tensorflow requirements in pypi looser

* Conda package has TensorFlow 2.7.0 and h5py and numpy installed via conda

* Change tensorflow version in `environment_no_cuda.yml` to test using CI

* Test new sleap/tensorflow package

* Reset build number

* Bump version

* Update mac deps

* Update to Arm64 Mac runners

* pin `importlib-metadata`

* Pin more stuff on mac

* constrain `opencv` version due to new qt dependencies

* Update more mac stuff

* Patches to get to green

* More mac skipping

* Re-enable mac tests

* Handle GPU re-init

* Fix mac build CI

* Widen tolerance for movenet correctness test

* Fix build ci

* Try for manual build without upload

* Try to reduce training CI time

* Rework actions

* Fix miniforge usage

* Tweaks

* Fix build ci

* Disable manual build

* Try merging CI coverage

* GPU/CPU usage in tests

* Lint

* Clean up

* Fix test skip condition

* Remove scratch test

---------

Co-authored-by: eberrigan <[email protected]>

* Add option to export to CSV via sleap-convert and API (#1730)

* Add csv as a format option

* Add analysis to format

* Add csv suffix to output path

* Add condition for csv analysis file

* Add export function to Labels class

* delete print statement

* lint

* Add `analysis.csv` as parametrize input for `sleap-convert` tests

* test `export_csv` method added to `Labels` class

* black formatting

* use `Path` to construct filename

* add `analysis.csv` to cli guide for `sleap-convert`

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Only propagate Transpose Tracks when propagate is checked (#1748)

Fix always-propagate transpose tracks issue

* View Hyperparameter nonetype fix (#1766)

Pass config getter argument to fetch hyperparameters

* Adding ragged metadata to `info.json` (#1765)

Add ragged metadata to info.json file

* Add batch size to GUI for inference (#1771)

* Fix conda builds (#1776)

* test conda packages in a test environment as part of CI

* do not test sleap import using conda build

* use github environment variables to define build path for each OS in the matrix and add print statements for testing

* figure out paths one OS at a time

* github environment variables work in subsequent steps not current step

* use local builds first

* print env info

* try simple environment creation

* try conda instead of mamba

* fix windows build path

* fix windows build path

* add comment to reference pull request

* remove test stage from conda build for macs and test instead by creating the environment in a workflow

* test workflow by pushing to current branch

* test conda package on macos runner

* Mac build does not need nvidia channel

* qudida and albumentations are conda installed now

* add comment with original issue

* use python 3.9

* use conda match specifications syntax

* make print statements more readable for troubleshooting python versioning

* clean up build file

* update version for pre-release

* add TODO

* add tests for conda packages before uploading

* update ci comments and branches

* remove macos test of pip wheel since python 3.9 is not supported by setup-python action

* Upgrade build actions for release (#1779)

* update `build.yml` so it matches updates from `build_manual.yml`

* test `build.yml` without uploading

* test again using build_manual.yml

* build pip wheel with Ubuntu and turn off caching so build.yml exactly matches build_manual.yml

* `build.yml` on release only and upload

* testing caching

* `use-only-tar-bz2: true` makes environment unsolvable, change it back

* Update .github/workflows/build_manual.yml

Co-authored-by: Liezl Maree <[email protected]>

* Update .github/workflows/build.yml

Co-authored-by: Liezl Maree <[email protected]>

* bump pre-release version

* fix version for pre-release

* run build and upload on release!

* try setting `CACHE_NUMBER` to 1 with `use-only-tar-bz2` set to true

* increasing the cache number to reset the cache does work when `use-only-tar-bz2` is set to true

* publish and upload on release only

---------

Co-authored-by: Liezl Maree <[email protected]>

* Add ZMQ support via GUI and CLI (#1780)

* Add ZMQ support via GUI and CLI, automatic port handler, separate utils module for the functions

* Change menu name to match deleting predictions beyond max instance (#1790)

Change menu and function names

* Fix website build and remove build cache across workflows (#1786)

* test with build_manual on push

* comment out caching in build manual

* remove cache step from builad manual since environment resolves when this is commented out

* comment out cache in build ci

* remove cache from build on release

* remove cache from website build

* test website build on push

* add name to checkout step

* update checkout to v4

* update checkout to v4 in build ci

* remove cache since build ci works without it

* update upload-artifact to v4 in build ci

* update second chechout to v4 in build ci

* update setup-python to v5 in build ci

* update download-artifact to v4 in build ci

* update checkout to v4 in build ci

* update checkout to v4 in website build

* update setup-miniconda to v3.0.3 in website build

* update actions-gh-pages to v4 in website build

* update actions checkout and setup-python in ci

* update checkout action in ci to v4

* pip install lxml[html_clean] because of error message during action

* add error message to website to explain why pip install lxml[html_clean]

* remove my branch for pull request

* Bump to 1.4.1a1 (#1791)

* bump versions to 1.4.1a1

* we can change the version on the installation page since this will be merged into the develop branch and not main

* Fix windows conda package upload and build ci (#1792)

* windows OS is 2022 not 2019 on runner

* upload windows conda build manually but not pypi build

* remove comment and run build ci

* change build manual back so that it doesn't upload

* remove branch from build manual

* update installation docs for 1.4.1a1

* Fix zmq inference (#1800)

* Ensure that we always pass in the zmq_port dict to LossViewer

* Ensure zmq_ports has correct keys inside LossViewer

* Use specified controller and publish ports for first attempted addresses

* Add test for ports being set in LossViewer

* Add max attempts to find unused port

* Fix find free port loop and add for controller port also

* Improve code readablility and reuse

* Improve error message when unable to find free port

* Set selected instance to None after removal (#1808)

* Add test that selected instance set to None after removal

* Set selected instance to None after removal

* Add `InstancesList` class to handle backref to `LabeledFrame` (#1807)

* Add InstancesList class to handle backref to LabeledFrame

* Register structure/unstructure hooks for InstancesList

* Add tests for the InstanceList class

* Handle case where instance are passed in but labeled_frame is None

* Add tests relevant methods in LabeledFrame

* Delegate setting frame to InstancesList

* Add test for PredictedInstance.frame after complex merge

* Add todo comment to not use Instance.frame

* Add rtest for InstasnceList.remove

* Use normal list for informative `merged_instances`

* Add test for copy and clear

* Add copy and clear methods, use normal lists in merge method

* Bump to v1.4.1a2 (#1835)

bump to 1.4.1a2

* Updated trail length viewing options (#1822)

* updated trail length optptions

* Updated trail length options in the view menu

* Updated `prefs` to include length info from `preferences.yaml`

* Added trail length as method of `MainWindow`

* Updated trail length documentation

* black formatting

---------

Co-authored-by: Keya Loding <[email protected]>

* Handle case when no frame selection for trail overlay (#1832)

* Menu option to open preferences directory and update to util functions to pathlib (#1843)

* Add menu to view preferences directory and update to pathlib

* text formatting

* Add `Keep visualizations` checkbox to training GUI (#1824)

* Renamed save_visualizations to view_visualizations for clarity

* Added Delete Visualizations button to the training pipeline gui, exposed del_viz_predictions config option to the user

* Reverted view_ back to save_ and changed new training checkbox to Keep visualization images after training.

* Fixed keep_viz config option state override bug and updated keep_viz doc description

* Added test case for reading training CLI argument correctly

* Removed unnecessary testing code

* Creating test case to check for viz folder

* Finished tests to check CLI argument reading and viz directory existence

* Use empty string instead of None in cli args test

* Use keep_viz_images false in most all test configs (except test to override config)

---------

Co-authored-by: roomrys <[email protected]>

* Allowing inference on multiple videos via `sleap-track` (#1784)

* implementing proposed code changes from issue #1777

* comments

* configuring output_path to support multiple video inputs

* fixing errors from preexisting test cases

* Test case / code fixes

* extending test cases for mp4 folders

* test case for output directory

* black and code rabbit fixes

* code rabbit fixes

* as_posix errors resolved

* syntax error

* adding test data

* black

* output error resolved

* edited for push to dev branch

* black

* errors fixed, test cases implemented

* invalid output test and invalid input test

* deleting debugging statements

* deleting print statements

* black

* deleting unnecessary test case

* implemented tmpdir

* deleting extraneous file

* fixing broken test case

* fixing test_sleap_track_invalid_output

* removing support for multiple slp files

* implementing talmo's comments

* adding comments

* Add object keypoint similarity method (#1003)

* Add object keypoint similarity method

* fix max_tracking

* correct off-by-one error

* correct off-by-one error

* Generate suggestions using max point displacement threshold (#1862)

* create function max_point_displacement, _max_point_displacement_video. Add to yaml file. Create test for new function . . . will need to edit

* remove unnecessary for loop, calculate proper displacement, adjusted tests accordingly

* Increase range for displacement threshold

* Fix frames not found bug

* Return the latter frame index

* Lint

---------

Co-authored-by: roomrys <[email protected]>

* Added Three Different Cases for Adding a New Instance (#1859)

* implemented paste with offset

* right click and then default will paste the new instance at the location of the cursor

* modified the logics for creating new instance

* refined the logic

* fixed the logic for right click

* refined logics for adding new instance at a specific location

* Remove print statements

* Comment code

* Ensure that we choose a non nan reference node

* Move OOB nodes to closest in-bounds position

---------

Co-authored-by: roomrys <[email protected]>

* Allow csv and text file support on sleap track (#1875)

* initial changes

* csv support and test case

* increased code coverage

* Error fixing, black, deletion of (self-written) unused code

* final edits

* black

* documentation changes

* documentation changes

* Fix GUI crash on scroll (#1883)

* Only pass wheelEvent to children that can handle it

* Add test for wheelEvent

* Fix typo to allow rendering videos with mp4 (Mac) (#1892)

Fix typo to allow rendering videos with mp4

* Do not apply offset when double clicking a `PredictedInstance` (#1888)

* Add offset argument to newInstance and AddInstance

* Apply offset of 10 for Add Instance menu button (Ctrl + I)

* Add offset for docks Add Instance button

* Make the QtVideoPlayer context menu unit-testable

* Add test for creating a new instance

* Add test for "New Instance" button in `InstancesDock`

* Fix typo in docstring

* Add docstrings and typehinting

* Remove unused imports and sort imports

* Refactor video writer to use imageio instead of skvideo (#1900)

* modify `VideoWriter` to use imageio with ffmpeg backend

* check to see if ffmpeg is present

* use the new check for ffmpeg

* import imageio.v2

* add imageio-ffmpeg to environments to test

* using avi format for now

* remove SKvideo videowriter

* test `VideoWriterImageio` minimally

* add more documentation for ffmpeg

* default mp4 for ffmpeg should be mp4

* print using `IMAGEIO` when using ffmpeg

* mp4 for ffmpeg

* use mp4 ending in test

* test `VideoWriterImageio` with avi file extension

* test video with odd size

* remove redundant filter since imageio-ffmpeg resizes automatically

* black

* remove unused import

* use logging instead of print statement

* import cv2 is needed for resize

* remove logging

* Use `Video.from_filename` when structuring videos (#1905)

* Use Video.from_filename when structuring videos

* Modify removal_test_labels to have extension in filename

* Use | instead of + in key commands (#1907)

* Use | instead of + in key commands

* Lint

* Replace QtDesktop widget in preparation for PySide6 (#1908)

* Replace to-be-depreciated QDesktopWidget

* Remove unused imports and sort remaining imports

* Remove unsupported |= operand to prepare for PySide6 (#1910)

Fixes TypeError: unsupported operand type(s) for |=: 'int' and 'Option'

* Use positional argument for exception type (#1912)

traceback.format_exception has changed it's first positional argument's name from etype to exc in python 3.7 to 3.10

* Replace all Video structuring with Video.cattr() (#1911)

* Remove unused AsyncVideo class (#1917)

Remove unused AsyncVideo

* Refactor `LossViewer` to use matplotlib (#1899)

* use updated syntax for QtAgg backend of matplotlib

* start add features to `MplCanvas` to replace QtCharts features in `LossViewer` (untested)

* remove QtCharts imports and replace with MplCanvas

* remove QtCharts imports and replace with MplCanvas

* start using MplCanvas in LossViwer instead of QtCharts (untested)

* use updated syntax

* Uncomment all commented out QtChart

* Add debug code

* Refactor monitor to use LossViewer._init_series method

* Add monitor only debug code

* Add methods for setting up axes and legend

* Add the matplotlib canvas to the widget

* Resize axis with data (no log support yet)

* Try using PathCollection for "batch"

* Get "batch" plotting with ax.scatter (no log support yet)

* Add log support

* Add a _resize_axis method

* Modify init_series to work for ax.plot as well

* Use matplotlib to plot epoch_loss line

* Add method _add_data_to_scatter

* Add _add_data_to_plot method

* Add docstring to _resize_axes

* Add matplotlib plot for val_loss

* Add matplotlib scatter for val_loss_best

* Avoid errors with setting log scale before any positive values

* Add x and y axes labels

* Set title (removing html tags)

* Add legend

* Adjust positioning of plot

* Lint

* Leave MplCanvas unchanged

* Removed unused training_monitor.LossViewer

* Resize fonts

* Move legend outside of plot

* Add debug code for montitor aesthetics

* Use latex formatting to bold parts of title

* Make axes aesthetic

* Add midpoint grid lines

* Set initial limits on x and y axes to be 0+

* Ensure x axis minimum is always resized to 0+

* Adjust plot to account for plateau patience title

* Add debug code for plateau patience title line

* Lint

* Set thicker line width

* Remove unused import

* Set log axis on initialization

* Make tick labels smaller

* Move plot down a smidge

* Move ylabel left a bit

* Lint

* Add class LossPlot

* Refactor LossViewer to use LossPlot

* Remove QtCharts code

* Remove debug codes

* Allocate space for figure items based on item's size

* Refactor LossPlot to use underscores for internal methods

* Ensure y_min, y_max not equal
Otherwise we get an unnecessary teminal message:
UserWarning: Attempting to set identical bottom == top == 3.0 results in singular transformations; automatically expanding.
  self.axes.set_ylim(y_min, y_max)

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: roomrys <[email protected]>

* Refactor `LossViewer` to use underscores for internal method names (#1919)

Refactor LossViewer to use underscores for internal method names

* Manually handle `Instance.from_predicted` structuring when not `None` (#1930)

* Use `tf.math.mod` instead of `%` (#1931)

* Option for Max Stride to be 128 (#1941)

Co-authored-by: Max  Weinberg <[email protected]>

* Add discussion comment workflow (#1945)

* Add a bot to autocomment on workflow

* Use github markdown warning syntax

* Add a multiline warning

* Change happy coding to happy SLEAPing

Co-authored-by: Talmo Pereira <[email protected]>

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Add comment on issue workflow (#1946)

* Add workflow to test conda packages (#1935)

* Add missing imageio-ffmpeg to meta.ymls (#1943)

* Update installation docs 1.4.1 (#1810)

* [wip] Updated installation docs

* Add tabs for different OS installations

* Move installation methods to tabs

* Use tabs.css

* FIx styling error (line under last tab in terminal hint)

* Add installation instructions before TOC

* Replace mamba with conda

* Lint

* Find good light colors
not switching when change dark/light themes

* Get color scheme switching
with dark/light toggle button

* Upgrade website build dependencies

* Remove seemingly unneeded dependencies from workflow

* Add myst-nb>=0.16.0 lower bound

* Trigger dev website build

* Fix minor typo in css

* Add miniforge and one-liner installs for package managers

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>

* Add imageio dependencies for pypi wheel (#1950)

Add imagio dependencies for pypi wheel

Co-authored-by: roomrys <[email protected]>

* Do not always color skeletons table black (#1952)

Co-authored-by: roomrys <[email protected]>

* Remove no module named work error (#1956)

* Do not always color skeletons table black

* Remove offending (possibly unneeded) line
that causes the no module named work error to print in terminal

* Remove offending (possibly unneeded) line
that causes the no module named work error to print in terminal

* Remove accidentally added changes

* Add (failing) test to ensure menu-item updates with state change

* Reconnect callback for menu-item (using lambda)

* Add (failing) test to ensure menu-item updates with state change

Do not assume inital state

* Reconnect callback for menu-item (using lambda)

---------

Co-authored-by: roomrys <[email protected]>

* Add `normalized_instance_similarity` method  (#1939)

* Add normalize function

* Expose normalization function

* Fix tests

* Expose object keypoint sim function

* Fix tests

* Handle skeleton decoding internally (#1961)

* Reorganize (and add) imports

* Add (and reorganize) imports

* Modify decode_preview_image to return bytes if specified

* Implement (minimally tested) replace_jsonpickle_decode

* Add support for using idx_to_node map
i.e. loading from Labels (slp file)

* Ignore None items in reduce_list

* Convert large function to SkeletonDecoder class

* Update SkeletonDecoder.decode docstring

* Move decode_preview_image to SkeletonDecoder

* Use SkeletonDecoder instead of jsonpickle in tests

* Remove unused imports

* Add test for decoding dict vs tuple pystates

* Handle skeleton encoding internally (#1970)

* start class `SkeletonEncoder`

* _encoded_objects need to be a dict to add to

* add notebook for testing

* format

* fix type in docstring

* finish classmethod for encoding Skeleton as a json string

* test encoded Skeleton as json string by decoding it

* add test for decoded encoded skeleton

* update jupyter notebook for easy testing

* constraining attrs in dev environment to make sure decode format is always the same locally

* encode links first then encode source then target then type

* save first enconding statically as an input to _get_or_assign_id so that we do not always get py/id

* save first encoding statically

* first encoding is passed to _get_or_assign_id

* use first_encoding variable to determine if we should assign a py/id

* add print statements for debugging

* update notebook for easy testing

* black

* remove comment

* adding attrs constraint to show this passes for certain attrs version only

* add import

* switch out jsonpickle.encode

* oops remove import

* can attrs be unconstrained?

* forgot comma

* pin attrs for testing

* test Skeleton from json, template, with symmetries, and template

* use SkeletonEncoder.encode

* black

* try removing None values in EdgeType reduced

* Handle case when nodes are replaced by integer indices from caller

* Remove prototyping notebook

* Remove attrs pins

* Remove sort keys (which flips the neccessary ordering of our py/ids)

* Do not add extra indents to encoded file

* Only append links after fully encoded (fat-finger)

* Remove outdated comment

* Lint

---------

Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: roomrys <[email protected]>

* Pin ndx-pose<0.2.0 (#1978)

* Pin ndx-pose<0.2.0

* Typo

* Sort encoded `Skeleton` dictionary for backwards compatibility  (#1975)

* Add failing test to check that encoded Skeleton is sorted

* Sort Skeleton dictionary before encoding

* Remove unused import

* Disable comment bot for now

* Fix COCO Dataset Loading for Invisible Keypoints (#2035)

Update coco.py

# Fix COCO Dataset Loading for Invisible Keypoints

## Issue
When loading COCO datasets, keypoints marked as invisible (flag=0) are currently skipped and later placed randomly within the instance's bounding box. However, in COCO format, these keypoints may still have valid coordinate information that should be preserved (see toy_dataset for expected vs. current behavior).

## Changes
Modified the COCO dataset loading logic to:
- Check if invisible keypoints (flag=0) have non-zero coordinates
- If coordinates are (0,0), skip the point (existing behavior)
- If coordinates are not (0,0), create the point at those coordinates but mark it as not visible
- Maintain existing behavior for visible (flag=2) and labeled
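
A minimal sketch of the keypoint handling described above (illustrative only; `Point` stands in for whatever point class the loader actually constructs):

```python
def keypoint_to_point(x, y, flag):
    """Illustrative: map a COCO keypoint triplet (x, y, flag) to a point or None."""
    if flag == 0:
        # Not-labeled keypoint: COCO files may still store real coordinates here.
        if (x, y) == (0.0, 0.0):
            return None  # no usable coordinates: skip (existing behavior)
        return Point(x=x, y=y, visible=False)  # keep coordinates, mark invisible
    # Labeled keypoints: visible only when flag == 2.
    return Point(x=x, y=y, visible=(flag == 2))
```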

* Lint

* Add tracking score as seekbar header options (#2047)

* Add `tracking_score` as a constructor arg for `PredictedInstance`

* Add `tracking_score` to ID models

* Add fixture with tracking scores

* Add tracking score to seekbar header

* Add bonsai guide for sleap docs (#2050)

* [WIP] Add bonsai guide page

* Add more information to the guide with images

* add branch for website build

* Typos

* fix links

* Include suggestions

* Add more screenshots and refine the doc

* Remove branch from website workflow

* Completed documentation edits from PR made by reviewer + review bot.

---------

Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Don't mark complete on instance scaling (#2049)

* Add check for instances with track assigned before training ID models (#2053)

* Add menu item for deleting instances beyond frame limit (#1797)

* Add menu item for deleting instances beyond frame limit

* Add test function to test the instances returned

* typos

* Update docstring

* Add frame range form

* Extend command to use frame range

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Highlight instance box on hover (#2055)

* Make node marker and label sizes configurable via preferences (#2057)

* Make node marker and label sizes configurable via preferences

* Fix test

* Enable touchpad pinch to zoom (#2058)

* Fix import PySide2 -> qtpy (#2065)

* Fix import PySide2 -> qtpy

* Remove unnecessary print statements.

* Add channels for pip conda env (#2067)

* Add channels for pypi conda env

* Trigger dev website build

* Separate the video name and its filepath columns in `VideoTablesModel` (#2052)

* add option to show video names with filepath

* add doc

* new feature added successfully

* delete unnecessary code

* remove attributes from video object

* Update dataviews.py

* remove all properties

* delete toggle option

* remove video show

* fix the order of the columns

* remove options

* Update sleap/gui/dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* Update sleap/gui/dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* use pathlib instead of substrings

* Update dataviews.py

Co-authored-by: Liezl Maree <[email protected]>

* Use Path instead of pathlib.Path
and sort imports and remove unused imports

* Use item.filename instead of getattr

---------

Co-authored-by: Liezl Maree <[email protected]>

* Make status bar dependent on UI mode (#2063)

* remove bug for dark mode

* fix toggle case

---------

Co-authored-by: Liezl Maree <[email protected]>

* Bump version to 1.4.1 (#2062)

* Bump version to 1.4.1

* Trigger conda/pypi builds (no upload)

* Trigger website build

* Add dev channel to installation instructions

---------

Co-authored-by: Talmo Pereira <[email protected]>

* Add -c sleap/label/dev channel for win/linux
- also trigger website build

---------

Co-authored-by: Scott Yang <[email protected]>
Co-authored-by: Shrivaths Shyam <[email protected]>
Co-authored-by: getzze <[email protected]>
Co-authored-by: Lili Karashchuk <[email protected]>
Co-authored-by: Sidharth Srinath <[email protected]>
Co-authored-by: sidharth srinath <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: KevinZ0217 <[email protected]>
Co-authored-by: Elizabeth <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: eberrigan <[email protected]>
Co-authored-by: vaibhavtrip29 <[email protected]>
Co-authored-by: Keya Loding <[email protected]>
Co-authored-by: Keya Loding <[email protected]>
Co-authored-by: Hajin Park <[email protected]>
Co-authored-by: Elise Davis <[email protected]>
Co-authored-by: gqcpm <[email protected]>
Co-authored-by: Andrew Park <[email protected]>
Co-authored-by: roomrys <[email protected]>
Co-authored-by: MweinbergUmass <[email protected]>
Co-authored-by: Max  Weinberg <[email protected]>
Co-authored-by: DivyaSesh <[email protected]>
Co-authored-by: Felipe Parodi <[email protected]>
Co-authored-by: croblesMed <[email protected]>
@roomrys mentioned this pull request on Dec 19, 2024 (11 tasks)