
AI Tools performance issues #2936

Closed
2 of 7 tasks
bsekachev opened this issue Mar 10, 2021 · 3 comments
Labels
enhancement New feature or request

Comments

@bsekachev
Member

My actions before raising this issue

According to user feedback, the AI tools are ineffective in practice:

  • Generated objects are usually not precise, so an annotator has to adjust them, often spending more time than fully manual annotation would take.
  • Another issue relates to the number of generated points: sometimes there are too few, sometimes too many.
  • Some objects are not annotated by the model at all.

Proposed next steps:

  • We should probably check whether these models run at "full power".
  • Future work should focus on reducing the number of corrections needed after automatic annotation.
  • It was noticed that models work better on higher-quality images, so we need to check whether we use the best quality available.
  • Introducing a tracking step to accelerate the feature was proposed.
  • Interactors don't work well for thin objects and occluded objects. This needs investigation.

Interactor (f-BRS)

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 43 | 0.4 | 107.5 |
| f-BRS | 43 | 0.41 | 104.88 |
| f-BRS with correction | 43 | 0.96 | 44.79 |

f-BRS with correction is 58.33% slower than manual annotation.

Detector (Mask RCNN)

People

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 53 | 0.25 | 212.00 |
| Mask RCNN | 53 | 0.015 | 3533.33 |
| Mask RCNN with correction | 53 | 0.44 | 120.45 |

Mask RCNN with correction is 43.18% slower than manual annotation.

Cars

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 44 | 0.25 | 176.00 |
| Mask RCNN | 44 | 0.015 | 2933.33 |
| Mask RCNN with correction | 44 | 0.5 | 88 |

Mask RCNN with correction is 50% slower than manual annotation.

Common

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 97 | 0.5 | 194.00 |
| Mask RCNN | 97 | 0.03 | 3233.33 |
| Mask RCNN with correction | 97 | 0.94 | 103.19 |

Mask RCNN with correction is 46.8% slower than manual annotation.

Detector (YOLO)

People

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 132 | 0.11 | 1200 |
| YOLO | 132 | 0.008 | 16500 |
| YOLO with correction | 132 | 0.29 | 825 |

YOLO with correction is 31.25% slower than manual annotation.

Cars

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 188 | 0.19 | 989.47 |
| YOLO | 188 | 0.008 | 23500 |
| YOLO with correction | 188 | 0.29 | 648.28 |

YOLO with correction is 34.48% slower than manual annotation.

Common

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 320 | 0.3 | 1066.67 |
| YOLO | 320 | 0.016 | 20000 |
| YOLO with correction | 320 | 0.45 | 711.11 |

YOLO with correction is 33.3% slower than manual annotation.

Tracker (SiamMask)

| Annotation type | Objects | Time (h) | Speed (objects per hour) |
| --- | --- | --- | --- |
| manual | 360 | 0.2 | 1800 |
| SiamMask | 360 | 0.15 | 2400 |
| SiamMask with correction | 360 | 0.46 | 782.61 |

SiamMask with correction is 56.52% slower than manual annotation.

Please refer to the original report for details.
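The "slower" percentages above are derived from the objects-per-hour columns: the relative drop in annotation speed of the corrected AI output compared with fully manual annotation. As a sanity check, a minimal sketch:

```python
def slowdown(manual_speed: float, corrected_speed: float) -> float:
    """Percentage drop in annotation speed relative to manual annotation."""
    return (manual_speed - corrected_speed) / manual_speed * 100

# f-BRS with correction vs. manual (objects per hour, from the first table)
print(round(slowdown(107.5, 44.79), 2))   # 58.33

# Mask RCNN "Common" with correction vs. manual
print(round(slowdown(194.0, 103.19), 2))  # 46.81
```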

@bsekachev added the enhancement (New feature or request) label on Mar 10, 2021
@f3rm4rf3r

f3rm4rf3r commented Mar 10, 2021

Same here. We spent some time enabling nuclio and linking the services in a Kubernetes-based deployment (plus allocating dedicated GPU nodes, etc.), and we ended up asking the annotators not to use the AI assistant (we used it with SiamMask) because the performance is very poor.
I don't know where the issue is, but I have the feeling it has something to do with moving data between services and the serialization/deserialization of the images (we use PNGs of around 2 MiB each, so serializing them to JSON and moving them around over REST APIs doesn't seem like a good idea, tbh).
Sorry I couldn't help, @bsekachev
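The overhead of pushing PNG frames through JSON bodies is easy to quantify: binary data must be base64-encoded first, which inflates every payload by roughly a third before any transport cost. A quick illustration (the 2 MiB buffer below is a synthetic stand-in, not data measured from CVAT):

```python
import base64
import json

# Stand-in for a ~2 MiB PNG frame (synthetic, all-zero bytes).
png_bytes = bytes(2 * 1024 * 1024)

# Embedding binary data in a JSON request body requires base64,
# which encodes every 3 input bytes as 4 output characters.
payload = json.dumps({"image": base64.b64encode(png_bytes).decode("ascii")})

overhead = len(payload) / len(png_bytes)
print(f"{overhead:.2f}x")  # prints 1.33x
```

So each 2 MiB frame becomes roughly 2.7 MiB of JSON per request, in both directions if the function echoes image data back.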

@nmanovic nmanovic added this to the Backlog milestone Mar 10, 2021
@rzuidhof

> Same here. We expend some time enabling nuclio and linking services in a kubernetes based deployment (+ allocating specific GPU nodes, etc...) and we end up asking the annotators to not use the AI assistant (we used it with SiamMask) because the performance is very bad.
> I don't know where the issue is but I got the feeling it is something to do with moving data between services and the serialisation/deserialization of the images (we use PNGs and they are around 2MiB each so serialising and deserialization to JSON and moving these using REST APIs doesn't seems like a good idea tbh).

The bottleneck in automatic annotation performance seems to be in CVAT, partly because the jobs go to the low queue with 1 concurrent worker. (A task in CVAT is a job in Django RQ.) After splitting videos over multiple tasks and raising the number of workers for the low queue in supervisord.conf we obtained more parallelism. Of course one should also increase the number of Nuclio function workers.
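The supervisord change described above might look roughly like the sketch below. This is illustrative only: the program name, command, and paths are assumptions, and the actual entry differs between CVAT versions, so check the `supervisord.conf` shipped with your deployment.

```ini
; Sketch only -- "rqworker_low" and the command are assumed names;
; find the real "low queue" worker entry in your own supervisord.conf.
[program:rqworker_low]
command=python3 manage.py rqworker low
; Raising numprocs from 1 allows several automatic-annotation jobs
; to run concurrently instead of queueing behind a single worker.
numprocs=4
; process_name must include process_num when numprocs > 1.
process_name=%(program_name)s_%(process_num)d
```

Note that this only removes the CVAT-side queue bottleneck; the Nuclio function worker count must be raised separately, as the comment says.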

@nmanovic
Contributor

@bsekachev, I will close the issue. You made a number of enhancements. It is a huge topic, and I don't see any reason to keep the issue open.
