Update frozen graph import to support both ragged and unragged tensors #21
Conversation
Thanks for submitting the PR! I tried to implement this in the past but couldn't find a satisfying way to support both versions. If we are merging this (and thus breaking support for previous SLEAP versions), it would be nice to converge on a stable metadata format for the network export in SLEAP. This is also related to #10. I haven't thought about this in a while, but it would be ideal if there was a way to generate a single, merged config file that also includes the tensor information on top of the training metadata. Let me know if this makes sense, and happy to discuss further!
FYI both - just commented on the relevant issue in the SLEAP repo.
Also FYI, I suggested a format to @ssrinath22 a few months ago, and I believe this will be implemented soon.
Hi guys, just had a chance to review this. The changes implemented here will fix compatibility with non-ragged exported models (the new default in newer SLEAP versions), but won't work for ragged models (the default in older versions). Since we support both and they have potentially different use cases, I'd like to support both on this side as well. I've asked @shrivaths16 to make a couple of additional changes to the current PR to hardcode some heuristic checks for whether it's ragged or unragged, based on clues available in the shape and keys. For example, checking for the rank of … After this PR is merged in, we can add support for using the metadata model spec file. Technically, we already export an …
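To make the idea above concrete, here is a minimal sketch of a shape-based heuristic for guessing whether a frozen graph was exported with ragged outputs. The output name `"instance_peaks"` and the expected ranks are illustrative assumptions, not the actual SLEAP export spec; the comment that single-instance models are never ragged comes from the discussion below.

```python
# Hypothetical sketch of a shape-based raggedness heuristic (assumed output
# names and ranks, for illustration only -- not the actual SLEAP export spec).

def looks_ragged(output_shapes: dict) -> bool:
    """Guess raggedness from static output tensor shapes.

    `output_shapes` maps output tensor names to shape tuples, with None
    standing in for dynamic dimensions reported by the frozen graph.
    """
    # Assumption: an unragged export pads every instance, keeping a fixed
    # rank of 4 for the peaks output: (batch, n_instances, n_nodes, 2).
    # A ragged export flattens the instance dimension, dropping the rank.
    peaks = output_shapes.get("instance_peaks")
    if peaks is None:
        return False  # e.g. single-instance models, which are never ragged
    return len(peaks) < 4


# Example: a dense (unragged) export vs. a ragged one.
dense = {"instance_peaks": (None, 5, 13, 2)}
ragged = {"instance_peaks": (None, 13, 2)}
print(looks_ragged(dense), looks_ragged(ragged))  # False True
```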
I have added support for both ragged and unragged versions of the tensors of the frozen graphs. It might be a little repetitive, but this seems to work.
Ok, not the prettiest solution, but it gets this working again for the foreseeable future until we update the metadata we store in … This is fine by me if it's cool with the Bonsai folks! @jkbhagatio @bruno-f-cruz
@shrivaths16 Thanks for taking a jab at this! I agree with @bruno-f-cruz that it would be nicer to have a more stable JSON spec, but I also understand this is getting people stuck, so it's probably worth resolving in a backwards-compatible way for now. I have a few ideas to remove repetition and also to avoid detecting "raggedness" by catching exceptions (it's actually enough to do a null test). I'm happy to either directly edit your branch or make a PR, but would you mind first removing the double quotes from your commit messages? Not sure how they got there in the first place, but it is a little bit distracting. If you are able to edit them, I would also recommend making them slightly shorter to fit on GitHub without trimming, e.g.:
If you need to, you can always add an extended description using extra lines below the message title.
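The "null test" mentioned above can be sketched as follows: instead of attempting to fetch a ragged-only tensor and catching the resulting exception, look it up in a way that returns None when the tensor is absent. The output names here (`"values"`, `"row_splits"`) are illustrative assumptions about how a ragged export exposes its extra outputs, not the actual graph tensor names.

```python
# Sketch of raggedness detection via a null test (assumed output names,
# for illustration only).

def find_output(graph_outputs: dict, name: str):
    # Returns the tensor if present, otherwise None -- no exception handling.
    return graph_outputs.get(name)

def is_ragged(graph_outputs: dict) -> bool:
    # Assumption: a ragged export exposes an extra row-splits output; its
    # absence means the graph produces plain dense tensors.
    return find_output(graph_outputs, "row_splits") is not None

print(is_ragged({"values": object(), "row_splits": object()}))  # True
print(is_ragged({"values": object()}))                          # False
```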
(force-pushed from 514782d to 649b2e9)
@shrivaths16 I was able to rebase and reword the commit messages accordingly so no need to do anything for now, will push my proposed refactoring soon. |
@shrivaths16 @talmo one last question: I can see that …
This all sounds great @glopesdev! Thanks for making the changes! Yes, single instance models always produce fixed size outputs so are never ragged. |
@shrivaths16 @talmo I have tentatively refactored the code assuming both assumptions stated above hold, and pushed the changes to @shrivaths16's fork. Not sure how much time I will have today to test this on all possible model combinations, but if you have a setup ready to do this, it would be great to see whether we could have a single code base to deal with both ragged and unragged versions.
This looks great! Thanks for the refactor @glopesdev! @shrivaths16 let us know when you get a chance to test out all the configurations. |
@shrivaths16 @talmo Sorry for the late review comments, I had forgotten to actually submit the review. These are both outdated now, but they are meaningful since they describe the assumptions underlying my refactor. The code simplification I propose should be valid only if the above two comments hold. |
Hi @glopesdev, …
@shrivaths16 Is there a way for us to access a test suite of exported frozen graph models for all these different configurations? That would greatly accelerate iteration time on this. |
For top-down supervised ID:

Models: …

Test data: …

Extract, then run (in a conda environment with SLEAP installed): …

(optionally with …)

@shrivaths16 since you've already done this, maybe you can share the exported model folder in a Google Drive link or something?

P.S.: if it's helpful for covering other model types, here's some models we use in our core SLEAP test fixtures: …
@talmo @shrivaths16 thank you for the feedback and all the information, I think everything is much clearer now. I fixed the ragged detection for the top-down centroid model and also relaxed the constraints around copying of unjagged arrays, where the TF wrapper library was being overly conservative (in our case we want to ignore shape differences as long as we can ensure the array type and size matches). My only remaining question is around the pose-identity top-down model. In this case, e.g. … @shrivaths16 can you confirm this is the case? I think this is really the only detail left to wrap this up.
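The relaxed copy check described in the comment above could be sketched like this: accept a source buffer whose shape differs from the target as long as the element type and total element count match. This is an illustrative sketch, not the actual Bonsai/TF wrapper code.

```python
# Illustrative sketch (not the actual wrapper code): allow copying between
# arrays with different shapes as long as dtype and total size match.

def size(shape) -> int:
    n = 1
    for d in shape:
        n *= d
    return n

def can_copy(src_shape, dst_shape, src_dtype="float32", dst_dtype="float32") -> bool:
    # Ignore shape differences; require matching element type and count.
    return src_dtype == dst_dtype and size(src_shape) == size(dst_shape)

print(can_copy((6,), (2, 3)))   # True: same 6 elements, different shape
print(can_copy((4,), (2, 3)))   # False: element counts differ
```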
@glopesdev …
@shrivaths16 great, in that case this last commit should finish the refactor and work for all cases of both ragged and unragged. If you have a set of models you could use for double-checking and testing this that would be great, and I will then merge and publish a new version. |
@glopesdev …
@shrivaths16 @talmo merged and release deployed, thanks for contributing this! |
@glopesdev …
While trying to import frozen graphs from SLEAP, an object reference error pops up on import. This is because previous versions of SLEAP (< 1.3) exported ragged tensors by default, while versions > 1.3 export unragged tensors by default.
To fix this, I have modified the code to read the correct tensors with the correct shapes.
This could potentially solve talmolab/sleap#1542.
However, there is a need to store more metadata to distinguish reading ragged tensors from unragged tensors, as a future enhancement in SLEAP.
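The ragged vs. unragged distinction above can be illustrated with a small example: ragged outputs store a variable number of detected instances per frame, while an unragged export pads every frame to a fixed shape with NaN. The data values here are made up for illustration.

```python
import math

# Illustration (not SLEAP code): per-frame predictions with a variable number
# of instances (ragged), and the NaN-padded dense form that newer SLEAP
# versions export by default (unragged).

frames = [
    [(10.0, 20.0), (30.0, 40.0)],  # frame 0: 2 detected instances
    [(50.0, 60.0)],                # frame 1: 1 detected instance
]

max_instances = max(len(f) for f in frames)
pad = (math.nan, math.nan)
dense = [list(f) + [pad] * (max_instances - len(f)) for f in frames]

# Ragged: row lengths differ per frame; dense: uniform (2, 2, 2) layout.
print([len(f) for f in frames])    # [2, 1]
print(len(dense), len(dense[0]))   # 2 2
print(math.isnan(dense[1][1][0]))  # True: padded slot
```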