Add segment-anything models (SAM) to model zoo #3019
Conversation
Codecov Report

@@            Coverage Diff             @@
##           develop    #3019       +/-   ##
============================================
- Coverage    62.24%   48.61%    -13.63%
============================================
  Files          250      227        -23
  Lines        45578    33359     -12219
  Branches       319      319
============================================
- Hits         28368    16217     -12151
+ Misses       17210    17142        -68
@brimoor Need your input on the following issue. I'm following this tutorial to add SAM to the model zoo; so far, AMG mode has worked fine. In order to implement prompting SAM with points and boxes, I want to let users specify a keypoint/detection label field while calling ...
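For illustration, the user-facing call I have in mind might look roughly like this (the `prompt_field` keyword and the zoo model name below are assumptions, not a settled API):

```python
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")
model = foz.load_zoo_model("segment-anything-vitb-torch")  # assumed zoo name

# Prompted mode: use an existing keypoints/detections field as SAM prompts
# (`prompt_field` is an illustrative parameter name)
dataset.apply_model(model, label_field="sam_masks", prompt_field="ground_truth")
```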
Also an idea, and I don't know if it would bloat this PR: if you integrated GroundingDINO first, you could also add a parameter like "objects" and then just prompt the model for objects.
Thanks for the suggestion @timmermansjoy. I'm not familiar with GroundingDINO myself, which is why I want to limit the scope to segment-anything for the moment. After it is integrated, we can think about adding GroundingDINO.
@Rusteam hmm, I see what you mean. Perhaps there should be a way for a model to receive the current sample(s) along with the image data, e.g.:

```python
prediction = model.predict(img, sample)
predictions = model.predict_all(imgs, samples)
```

One stylistic way to achieve this would be to implement a mixin class that such models inherit from.

Note that this line:

fiftyone/fiftyone/core/models.py, line 312 in 963418d

will need to be excluded for models that inherit from the mixin. Alternatively, there could be a check like:

```python
if isinstance(model, SampleMixin):
    samples = samples.select_fields(model.needs_fields)
else:
    samples = samples.select_fields()

...

if isinstance(model, SampleMixin):
    predictions_batch = model.predict_all(imgs_batch, samples_batch)
else:
    predictions_batch = model.predict_all(imgs_batch)
```
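To make the idea concrete, here is a minimal sketch of what such a mixin and a prompt-aware model could look like (class and method names are illustrative, not a final design):

```python
class SampleMixin(object):
    """Illustrative mixin (matching the snippet above) for models whose
    predictions need access to the current sample, e.g. to read prompt fields."""

    @property
    def needs_fields(self):
        """The sample field(s) that must be loaded for prediction."""
        raise NotImplementedError("subclasses must implement needs_fields")


class PromptedSAMModel(SampleMixin):
    """Hypothetical SAM wrapper; a real implementation would also subclass
    FiftyOne's Model interface in fiftyone.core.models."""

    def __init__(self, prompt_field="keypoints"):
        self.prompt_field = prompt_field

    @property
    def needs_fields(self):
        return [self.prompt_field]

    def predict(self, img, sample):
        # Read point/box prompts from sample[self.prompt_field] and run SAM
        raise NotImplementedError("sketch only")

    def predict_all(self, imgs, samples):
        return [self.predict(img, sample) for img, sample in zip(imgs, samples)]
```

The `needs_fields` hook is what would let `apply_model()` know which extra fields to keep when it calls `select_fields()`.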
I've slightly changed the behavior of Keypoints from the @jacobmarks blog post. In this implementation, each Keypoint object is processed individually, with all of its points passed as positive labels (and no background labels). Also, a user can select which mask (0, 1, or 2) to use. I guess this requires a short tutorial, which I can work on after we complete this integration.
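For reference, the per-Keypoint prompting logic boils down to something like this sketch against the `segment_anything` API (the checkpoint path and the `image`, `keypoints`, and `mask_index` variables are assumed inputs):

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Assumed inputs: `image` is an RGB uint8 numpy array, `keypoints` is a
# fiftyone.core.labels.Keypoints instance, and `mask_index` in {0, 1, 2}
# selects which of SAM's three candidate masks to keep
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed checkpoint path
predictor = SamPredictor(sam)
predictor.set_image(image)

height, width = image.shape[:2]
masks_out = []
for kp in keypoints.keypoints:  # each Keypoint object is prompted individually
    # FiftyOne stores keypoints in relative [0, 1] coords; SAM expects pixels
    coords = np.array([[x * width, y * height] for x, y in kp.points])
    labels = np.ones(len(coords), dtype=int)  # all points are positive (foreground)

    masks, scores, _ = predictor.predict(
        point_coords=coords,
        point_labels=labels,
        multimask_output=True,
    )
    masks_out.append(masks[mask_index])  # user-selected mask (0, 1, or 2)
```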
@brimoor hi, this is now ready for testing and integration. |
@brimoor hey any updates? |
@Rusteam LGTM (sorry for the delay! 😅)
I'm going to merge this into a feature branch and add a couple of additional enhancements. Then we'll merge into develop and it will be included in the next release 📈
What changes are proposed in this pull request?
Add three segment-anything models to the model zoo.
How is this patch tested? If it is not, please explain why.
Added an extra intensive model test for the SAM backbone.
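In spirit, the test looks something like the following sketch (the zoo model name and field names are assumptions; the real test lives in the repo's test suite):

```python
import unittest

import fiftyone.zoo as foz


class SegmentAnythingTests(unittest.TestCase):
    """Illustrative intensive test: load a SAM zoo model and run automatic
    mask generation on a few samples."""

    def test_segment_anything_amg(self):
        dataset = foz.load_zoo_dataset("quickstart", max_samples=2)
        model = foz.load_zoo_model("segment-anything-vitb-torch")  # assumed name

        dataset.apply_model(model, label_field="sam_auto")

        self.assertTrue(dataset.has_sample_field("sam_auto"))


if __name__ == "__main__":
    unittest.main()
```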
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
Yes. Give a description of this change to be included in the release notes for FiftyOne users.
The Segment Anything model (SAM) is now available in the model zoo. It can perform automatic mask generation as well as guided mask generation with prompts.
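For example, usage might look like this (the zoo model names below are assumed; check the zoo documentation for the final identifiers):

```python
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart", max_samples=5)

# Assumed zoo names for the three SAM backbones (ViT-B, ViT-L, ViT-H)
for name in (
    "segment-anything-vitb-torch",
    "segment-anything-vitl-torch",
    "segment-anything-vith-torch",
):
    model = foz.load_zoo_model(name)

    # Automatic mask generation (no prompts)
    dataset.apply_model(model, label_field=name.replace("-", "_"))
```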
What areas of FiftyOne does this PR affect?
fiftyone
Python library changes