
User Experience Issues with Auto Annotation Run Model Script #913

Closed
benhoff opened this issue Dec 7, 2019 · 1 comment · Fixed by #934
Labels
enhancement New feature or request

Comments

@benhoff
Contributor

benhoff commented Dec 7, 2019

If an auto annotation script fails, it is currently difficult for a casual user to debug.

Ref: #896

Ref: There is a conversation on Gitter where Boris walked someone through copying the relevant files from their local machine into Docker so they could use the run_model.py script.

This is a current deficiency of the auto annotation run_model.py script. The script is set up for local debugging, but it would be better as a more general, user-facing debugging tool.

Need to implement a second interface where users can identify:

  1. The relevant model
  2. The relevant task

and get a traceback that they can share for easier debugging.

This would remove the need to explain to users how to copy files from their local machines into Docker. I also had a user (with a functioning CVAT Docker installation) try to run the run_model.py script without correctly activating OpenVINO's setupvars.sh on their local machine. This led to OpenVINO import errors that were unrelated to the problem the user was trying to solve (poor user experience).

In order to solve this:

Lines 21-24 of the auto annotation script will need to be changed so that the py, json, xml, and bin files are no longer required: https://github.com/opencv/cvat/blob/32027ce884c0584015874c4ae99ba7f53ffb46c0/utils/auto_annotation/run_model.py#L21

Instead, the script would enforce passing either the four files mentioned above, or a "model name"/"task reference" number, as sketched below.
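
A minimal sketch of what the relaxed argument handling could look like. The `--model-name` and `--task-id` flags are hypothetical names used only to illustrate the "either all four files, or a reference" rule; the real flag names and wiring would be decided in the implementation:

```python
import argparse

def make_parser():
    parser = argparse.ArgumentParser(description='Debug an auto annotation model')

    # Existing local-file interface (currently required; would become optional)
    parser.add_argument('--py', help='Path to the interpretation script')
    parser.add_argument('--xml', help='Path to the OpenVINO .xml model file')
    parser.add_argument('--bin', help='Path to the OpenVINO .bin weights file')
    parser.add_argument('--json', help='Path to the label mapping .json file')

    # Hypothetical second interface: reference a model/task already in CVAT
    parser.add_argument('--model-name', help='Name of a model already uploaded to CVAT')
    parser.add_argument('--task-id', type=int, help='ID of an existing task to run against')

    return parser

def validate_args(parser, args):
    file_args = (args.py, args.xml, args.bin, args.json)
    uses_files = all(a is not None for a in file_args)
    uses_reference = args.model_name is not None and args.task_id is not None

    # Enforce exactly one of the two interfaces
    if uses_files == uses_reference:
        parser.error('Pass either all of --py/--xml/--bin/--json, '
                     'or both --model-name and --task-id')
    return args

if __name__ == '__main__':
    parser = make_parser()
    args = validate_args(parser, parser.parse_args())
```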

We would need to explore which implementation (Docker based, REST based, etc.) makes the most sense.
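
For the REST-based option, the flow might look roughly like the following. This is only a sketch of the idea; the host, credentials, and endpoint path are assumptions and would need to be checked against the CVAT REST API version in use:

```python
import requests

CVAT_HOST = 'http://localhost:8080'  # assumption: a locally running CVAT instance

def fetch_task_frame(task_id, frame=0, auth=('user', 'password')):
    """Download one frame from an existing task so the model can be debugged
    against real data instead of files copied into the container by hand.

    The endpoint below is an assumption, not a confirmed CVAT API path.
    """
    url = '{}/api/v1/tasks/{}/frames/{}'.format(CVAT_HOST, task_id, frame)
    response = requests.get(url, auth=auth)
    response.raise_for_status()
    return response.content  # raw image bytes to feed into the debug run
```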

@benhoff
Contributor Author

benhoff commented Dec 7, 2019

Alternatively, we could set up Auto Annotation's Load Model UI so that it can use an existing task to test/debug the model. This would catch significantly more errors and present them back to the user on upload, since testing currently uses an all-black image.
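
A rough sketch of the difference, assuming the interpretation script is exercised with a NumPy image either way; the Pillow-based loader is just a stand-in for whatever CVAT's frame provider would actually supply for the selected task:

```python
import numpy as np
from PIL import Image

# Current behaviour (roughly): the uploaded model is smoke-tested against a
# synthetic all-black frame, so errors that only show up on real data slip through.
def make_black_test_image(width=1920, height=1080):
    return np.zeros((height, width, 3), dtype=np.uint8)

# Proposed behaviour (sketch): run the same smoke test against a real frame
# from an existing task. The file path is a placeholder for the frame data
# CVAT would provide for the task chosen in the Load Model UI.
def load_task_frame(path):
    return np.asarray(Image.open(path).convert('RGB'))
```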

@benhoff benhoff changed the title New Users find Auto Annotation Run Model Script Intimidating User Experience Issues with Auto Annotation Run Model Script Dec 7, 2019
@nmanovic nmanovic added enhancement New feature or request help wanted labels Dec 8, 2019
@nmanovic nmanovic added this to the Backlog milestone Dec 8, 2019
@benhoff benhoff closed this as completed Dec 22, 2019