Ref: There's a conversation in Gitter where Boris walked someone through copying the relevant files from their local machine into Docker so they could use the `run_model.py` script.

This is a current deficiency with the auto annotation `run_model.py` script. The script is set up for local debugging, but it would be better as a more general, user-facing debugging script.
Need to implement a second interface where users can identify:

- the relevant model
- the relevant task

and get a traceback that they can share for easier debugging.
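A minimal sketch of what that second interface could look like (names like `run_task_debug` are hypothetical placeholders, not existing CVAT code):

```python
import argparse
import sys
import traceback


def run_task_debug(model_name, task_id):
    # Placeholder for the real logic: look up the uploaded model and the
    # task inside CVAT, then run the model over the task's frames.
    raise NotImplementedError(
        "resolve {!r} and task {} inside CVAT".format(model_name, task_id))


def main():
    parser = argparse.ArgumentParser(
        description="Debug an auto annotation model against an existing task")
    parser.add_argument("--model-name", required=True,
                        help="name of an already-uploaded model")
    parser.add_argument("--task-id", type=int, required=True,
                        help="ID of an existing task to run the model against")
    args = parser.parse_args()

    try:
        run_task_debug(args.model_name, args.task_id)
    except Exception:
        # Print the full traceback so the user can paste it into a bug report
        traceback.print_exc()
        sys.exit(1)


if __name__ == "__main__":
    main()
```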
This would solve the issue of explaining to users how to copy files from their local machines to Docker. I also had a user (with a functioning CVAT Docker installation) try to run the `run_model.py` script without correctly activating OpenVINO's `setupvars.sh` on their local machine. This led to OpenVINO import errors that were unrelated to the problem the user was trying to solve (a poor user experience).
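One way to turn that into a clearer failure would be to guard the import and point the user at `setupvars.sh` instead of surfacing a raw `ImportError`. A sketch, assuming the script imports OpenVINO's Python API directly:

```python
import sys

try:
    # run_model.py imports OpenVINO's Python API; the exact import should
    # match whatever the script already uses.
    from openvino.inference_engine import IECore
except ImportError as err:
    sys.exit(
        "Could not import OpenVINO ({}).\n"
        "Did you activate the environment first? e.g.\n"
        "    source /opt/intel/openvino/bin/setupvars.sh".format(err))
```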
Alternatively, the Auto Annotation Load Model UI could be set up to test/debug the model against an existing task. This would catch significantly more of the errors presented back to the user on upload, as testing currently uses an all-black image.
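For reference, the gap is roughly the one below: a synthetic all-black frame versus a real frame pulled from an existing task (`get_task_frame` is a hypothetical helper, not an existing CVAT function):

```python
import numpy as np


def get_task_frame(task_id, frame_number):
    # Hypothetical helper: the real implementation would read the frame
    # from CVAT's stored data for the given task.
    raise NotImplementedError


def make_test_image(task_id=None, width=1920, height=1080):
    """Return an image for exercising the model during upload validation."""
    if task_id is None:
        # Current behaviour: a synthetic all-black frame. It exercises the
        # code path but rarely triggers data-dependent failures.
        return np.zeros((height, width, 3), dtype=np.uint8)
    # Proposed behaviour: a real frame from an existing task, which is far
    # more likely to surface label/shape/postprocessing errors.
    return get_task_frame(task_id, frame_number=0)
```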
*benhoff changed the title from "New Users find Auto Annotation Run Model Script Intimidating" to "User Experience Issues with Auto Annotation Run Model Script" on Dec 7, 2019.*
If an auto annotation script fails for someone, it is currently difficult for a casual user to debug.
Ref: #896
In order to solve this:
Lines 21-24 of the auto annotation script will need to be changed so that the `py`, `json`, `xml`, and `bin` files are no longer required: https://github.com/opencv/cvat/blob/32027ce884c0584015874c4ae99ba7f53ffb46c0/utils/auto_annotation/run_model.py#L21

Instead, the script would require either all four of those arguments, or a "model name"/"task reference" number.
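A sketch of how the argument handling could change; the flag names mirror the current script, but the mode selection is illustrative:

```python
import argparse

parser = argparse.ArgumentParser()

# Existing interface: the four files, now optional instead of required
parser.add_argument("--py", help="path to the interpretation script")
parser.add_argument("--json", help="path to the label mapping file")
parser.add_argument("--xml", help="path to the model definition")
parser.add_argument("--bin", help="path to the model weights")

# New interface: reference things CVAT already knows about
parser.add_argument("--model-name", help="name of an already-uploaded model")
parser.add_argument("--task-id", type=int, help="ID of an existing task")

args = parser.parse_args()

if all((args.py, args.json, args.xml, args.bin)):
    mode = "local-files"        # behave as the script does today
elif args.model_name and args.task_id is not None:
    mode = "cvat-reference"     # resolve model and data from CVAT itself
else:
    parser.error("pass either all of --py/--json/--xml/--bin, "
                 "or both --model-name and --task-id")
```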
Would need to explore which implementation (Docker based, REST based, etc.) would make the most sense.
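For example, a REST-based variant might look like this from the user's side (an entirely hypothetical endpoint and payload, just to make the option concrete):

```python
import requests

# Hypothetical endpoint; no such route exists in CVAT today
response = requests.post(
    "http://localhost:8080/api/v1/auto_annotation/debug",
    auth=("user", "password"),
    json={"model_name": "my-model", "task_id": 42},
)
# The server would run the model against the task and return any traceback
print(response.json().get("traceback", "model ran cleanly"))
```

Either way, the point is that the user only supplies identifiers CVAT already knows about, rather than local file paths.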