Support on-the-fly tensorflow inference #197

Closed
kodonnell opened this issue Nov 16, 2018 · 2 comments · Fixed by #1767 or #2102

@kodonnell

Currently, I believe the inference runs entirely first, before any annotation can begin. This can be pretty slow, especially on slow hardware, with large videos, etc. I'm requesting that the model run 'on demand'. To prevent a visible slow-down for the user, we should buffer (e.g. when I start annotating frame N, inference starts on frame N + 3, so that it's complete by the time I get to it). This seems effective as I can start annotating almost immediately, and there's no noticeable slow-down while annotating (as the time taken annotating a frame - during which things are currently idle - should hopefully exceed the model evaluation time).
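
A rough sketch of the buffering idea in Python (run_model(), the frame-indexed cache, and on_frame_opened() are hypothetical placeholders to illustrate the idea, not CVAT code):

```python
import threading
import queue

BUFFER_AHEAD = 3         # start inference this many frames ahead of the user
results = {}             # frame index -> cached inference output
pending = queue.Queue()  # frames queued for background inference

def run_model(frame_index):
    """Placeholder for the real TensorFlow inference call."""
    return []

def inference_worker():
    # Runs in the background so the UI never waits on the model.
    while True:
        idx = pending.get()
        if idx not in results:
            results[idx] = run_model(idx)
        pending.task_done()

threading.Thread(target=inference_worker, daemon=True).start()

def on_frame_opened(current_frame):
    # When the user opens frame N, make sure N .. N + BUFFER_AHEAD are queued,
    # so results are usually ready by the time the user reaches them.
    for idx in range(current_frame, current_frame + BUFFER_AHEAD + 1):
        if idx not in results:
            pending.put(idx)
```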

Note - we'd also have to consider how to add support for evaluating the model only every Nth frame (e.g. when I'm annotating I generally only annotate every 10th frame and use interpolation for the rest - so it makes sense to only run TensorFlow on every 10th frame too).
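
To fold in the step, the look-ahead could simply be restricted to keyframes; a minimal sketch (FRAME_STEP and frames_to_infer() are illustrative names, not existing code):

```python
FRAME_STEP = 10  # e.g. annotate (and run inference on) every 10th frame only

def frames_to_infer(current_frame, buffer_ahead=3):
    # Only keyframes (multiples of FRAME_STEP) within the look-ahead window;
    # intermediate frames are filled by interpolation, not by the model.
    first_key = ((current_frame + FRAME_STEP - 1) // FRAME_STEP) * FRAME_STEP
    return [first_key + i * FRAME_STEP for i in range(buffer_ahead + 1)]
```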

This seems reasonably feasible - is there likely to be interest?

@nmanovic
Contributor

@kodonnell ,

It is definitely interesting and a good idea.
Let's add the following features:

  • Annotate data with a step, from a start frame to an end frame.
  • Add a button to annotate a specific frame (during task annotation). Thus when you are annotating a task you can click the button, the client will send a request to the server and wait for a reply. After a couple of seconds the server will send a response with the annotation results (see the sketch below).

If you can help to implement the feature, it will be awesome.
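
A rough sketch of how the second point might look from the client side, assuming a hypothetical /annotate-frame endpoint (not an existing CVAT API):

```python
import requests

def annotate_frame(server_url, task_id, frame):
    # The client sends the frame number and waits for the server's reply
    # with the predicted shapes for that single frame.
    response = requests.post(
        f"{server_url}/api/tasks/{task_id}/annotate-frame",  # hypothetical endpoint
        json={"frame": frame},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. a list of predicted shapes for the frame
```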

@nmanovic nmanovic added the enhancement New feature or request label Nov 16, 2018
@nmanovic nmanovic added this to the Backlog milestone Nov 16, 2018
@nmanovic nmanovic changed the title Feature request: support on-the-fly tensorflow inference Support on-the-fly tensorflow inference Nov 16, 2018
@kodonnell
Author

@nmanovic - I assume by the first point you mean "on the fly" as I described it? (I.e. not just running from start to end by step with the current batch processing.) If not, we might need to add my 'buffered' on-the-fly annotation (which could be enabled/disabled via the GUI). I'm unsure about your second option - clicking and having to wait is not great from a UI perspective, and is much less efficient than the 'buffered' approach.

Thinking about it more - we should probably cache the TensorFlow annotations for any of these on-the-fly approaches. E.g. if you go back to a frame, there's no point recomputing the annotations. That should be pretty straightforward though.
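
A minimal sketch of that kind of caching (run_model() is again just a placeholder for the real TensorFlow call):

```python
from functools import lru_cache

def run_model(frame_index):
    """Placeholder for the real TensorFlow inference call."""
    return []

@lru_cache(maxsize=None)
def get_annotations(frame_index):
    # Computed at most once per frame; going back to a frame is a cache hit.
    return run_model(frame_index)
```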

Re a PR - I think it makes sense to wait for the PR from @savan77 (hopefully soon?). Does that seem reasonable?

@bsekachev bsekachev self-assigned this Jul 9, 2020
@bsekachev bsekachev modified the milestones: Backlog, 1.1.0-release Jul 9, 2020
@bsekachev bsekachev reopened this Jul 29, 2020
TOsmanov pushed a commit to TOsmanov/cvat that referenced this issue Aug 23, 2021
* bugfix - ignore subsets of near-zero-ratio (cvat-ai#187)

* Ignore subsets of near-zero-ratio in splitter

Co-authored-by: Maxim Zhiltsov <[email protected]>

* Fix validator imbalance threshold (cvat-ai#190)

* Validator threshold adjustment + style correction

Co-authored-by: Maxim Zhiltsov <[email protected]>

* Allow undeclared label attributes on CVAT format (cvat-ai#192)

* Add saving and parsing of attributes in label categories in Datumaro format

* Support common label attributes in CVAT format, add an option to ignore undeclared attributes

* Add logging for parsed parameters in plugins

* update changelog

* Fix export of masks with holes (cvat-ai#188)

* Fix export of masks with holes in polygons (background class should not introduce a new instance)

* update changelog

* Format fixes in COCO and VOC (cvat-ai#195)

* Allow splitting and merging of image directories in COCO export

* Avoid producing conflicting attributes in VOC segmentation

Co-authored-by: Jihyeon Yi <[email protected]>