Support on-the-fly tensorflow inference #197
Comments
It is definitely interesting and a good idea. If you can help to implement the feature, it would be awesome.
@nmanovic - I assume by the first point you're meaning "on the fly" as I described it? (I.e. not just running from …)

Thinking more - we should probably cache the tensorflow annotations for any of these on-the-fly approaches. E.g. if you go back to a frame, there's no point recomputing the annotations. That should be pretty straightforward though; a sketch follows below.

Re a PR - I think it makes sense to wait for the PR from @savan77 (hopefully soon?). Does that seem reasonable?
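To make the caching idea concrete, here is a minimal sketch - all names are hypothetical (`run_model` stands in for whatever actually invokes the TensorFlow model; this is not CVAT's real API):

```python
# Hypothetical sketch: memoize inference results per frame so that
# revisiting a frame never recomputes its annotations.
from typing import Callable, Dict, List


class AnnotationCache:
    def __init__(self, run_model: Callable[[int], List[dict]]):
        self._run_model = run_model  # assumed TF inference entry point
        self._results: Dict[int, List[dict]] = {}

    def get(self, frame: int) -> List[dict]:
        # Compute once on first request; afterwards serve from the cache.
        if frame not in self._results:
            self._results[frame] = self._run_model(frame)
        return self._results[frame]
```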
Currently, I believe the inference runs entirely first, before any annotation can begin. This can be pretty slow, especially on slow hardware, large videos, etc. I'm requesting that the model run 'on demand'. To prevent a visible slow-down for the user, we should buffer (e.g. when I start annotating frame N, inference starts on frame N + 3, so that it's complete by the time I get to it). This seems effective, as I can start annotating nearly immediately, and there's no noticeable slow-down while annotating (since the time spent annotating a frame - during which things are currently idle - should hopefully exceed the model evaluation time). See the sketch below.
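A rough sketch of the buffering idea, assuming a `run_model(frame)` callable and a look-ahead of 3 frames (both are assumptions for illustration, not anything CVAT exposes today):

```python
# Hypothetical sketch of look-ahead buffering: when the annotator opens
# frame N, inference for frame N + LOOKAHEAD is kicked off in the
# background so its result is ready by the time the user reaches it.
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Callable, Dict, List

LOOKAHEAD = 3  # assumed buffer size; would need tuning in practice


class OnTheFlyInference:
    def __init__(self, run_model: Callable[[int], List[dict]]):
        self._run_model = run_model
        self._pending: Dict[int, Future] = {}
        self._pool = ThreadPoolExecutor(max_workers=1)

    def _schedule(self, frame: int) -> Future:
        # Submit inference for a frame at most once.
        if frame not in self._pending:
            self._pending[frame] = self._pool.submit(self._run_model, frame)
        return self._pending[frame]

    def annotations_for(self, frame: int) -> List[dict]:
        # Kick off inference for the frame the user will reach next...
        self._schedule(frame + LOOKAHEAD)
        # ...and block (usually briefly, thanks to the buffer) on this one.
        return self._schedule(frame).result()
```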
Note - we'd also have to consider how to add support for evaluating the model only every Nth frame (e.g. when I'm annotating I generally only annotate every 10th frame and use interpolation for the rest, so it makes sense to only run tensorflow on every 10th frame too). See the sketch after this paragraph.
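Combined with the buffering above, the look-ahead would step by the annotation interval rather than by single frames. A tiny sketch, assuming a fixed step of 10 (again, hypothetical names and values):

```python
# Hypothetical extension: only schedule inference on keyframes (every
# Nth frame), matching a workflow that interpolates between keyframes.
FRAME_STEP = 10  # assumed annotation interval


def keyframes_to_prefetch(current: int, lookahead: int = 3):
    """Yield the next `lookahead` keyframes strictly after `current`."""
    nxt = ((current // FRAME_STEP) + 1) * FRAME_STEP
    for i in range(lookahead):
        yield nxt + i * FRAME_STEP
```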
This seems reasonably feasible - is there likely to be interest?