Add in ability to run Auto Annotation model using model name and task id #934

Merged · 1 commit · Dec 24, 2019
88 changes: 52 additions & 36 deletions utils/auto_annotation/README.md
@@ -4,49 +4,65 @@
A small command line program to test and run AutoAnnotation Scripts.

## Instructions

There are two modes in which this script can be run. If you already have a model uploaded to the server and you're
having issues running it in production, you can pass in the model name and the ID of a task you want to test against.

```shell
# Note that this module can be found in cvat/utils/auto_annotation/run_model.py
$ python /path/to/run_model.py --model-name mymodel --task-id 4
```

If you're running in Docker, this can be a useful way to debug your model.

```shell
$ docker exec -it cvat bash -ic 'python3 ~/cvat/apps/auto_annotation/run_model.py --model-name my-model --task-id 4'
```

If you are developing an auto annotation model, or you can't get one uploaded to the server,
then you'll need to specify the individual inputs.
```shell
# Note that this module can be found in cvat/utils/auto_annotation/run_model.py
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json
```
Some programs need to run unrestricted or as an administrator. Use the `--unrestricted` flag to simulate this.
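For example, a sketch that adds the flag to the same placeholder command used throughout this README:
```shell
# Simulate running without restrictions
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--unrestricted
```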
You can pass in image files to fully simulate a real run. Images are passed in as a list:
```shell
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--image-files /path/to/img.jpg /path2/to/img2.png /path/to/img3.jpg
```
Additionally, it's sometimes useful to visualize your images.
Use the `--show-images` flag to have each image pop up with its annotations.

```shell
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--image-files /path/to/img.jpg /path2/to/img2.png /path/to/img3.jpg \
--show-images
```

If you'd like to see the labels printed on the image, use the `--show-labels` flag:
```shell
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--image-files /path/to/img.jpg /path2/to/img2.png /path/to/img3.jpg \
--show-images \
--show-labels
```
There's a flag that lets you scan through images quickly by setting the length of time (in milliseconds) to display each image.
@@ -55,13 +71,13 @@
In this example, 2000 milliseconds is 2 seconds for each image.
```shell
# Display each image in a window for 2 seconds
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--image-files /path/to/img.jpg /path2/to/img2.png /path/to/img3.jpg \
--show-images \
--show-image-delay 2000
```
Visualization isn't always enough.
@@ -70,10 +86,10 @@
You must have the necessary packages installed, but then you can add the `--serialize` flag to ensure that your
results will serialize correctly.

```shell
$ python /path/to/run_model.py --py /path/to/python/interp.py \
--xml /path/to/xml/file.xml \
--bin /path/to/bin/file.bin \
--json /path/to/json/mapping/mapping.json \
--image-files /path/to/img.jpg /path2/to/img2.png /path/to/img3.jpg \
--serialize
```
87 changes: 78 additions & 9 deletions utils/auto_annotation/run_model.py
@@ -4,6 +4,8 @@
import argparse
import random
import logging
import fnmatch
from operator import xor

import numpy as np
import cv2
@@ -18,10 +20,14 @@

def _get_kwargs():
    parser = argparse.ArgumentParser()
    parser.add_argument('--py', help='Path to the python interpretation file')
    parser.add_argument('--xml', help='Path to the xml file')
    parser.add_argument('--bin', help='Path to the bin file')
    parser.add_argument('--json', help='Path to the JSON mapping file')

    parser.add_argument('--model-name', help='Name of the model in the Model Manager')
    parser.add_argument('--task-id', type=int, help='ID of the task used to test the model')

    parser.add_argument('--restricted', dest='restricted', action='store_true')
    parser.add_argument('--unrestricted', dest='restricted', action='store_false')
    parser.add_argument('--image-files', nargs='*', help='Paths to image files you want to test')
@@ -56,14 +62,75 @@ def find_min_y(array):

    return array[index]

def _get_docker_files(model_name: str, task_id: int):
    # Point Django at the CVAT settings so the ORM models can be imported
    os.environ['DJANGO_SETTINGS_MODULE'] = 'cvat.settings.development'

    import django
    django.setup()

    from cvat.apps.auto_annotation.models import AnnotationModel
    from cvat.apps.engine.models import Task as TaskModel

    task = TaskModel(pk=task_id)
    model = AnnotationModel.objects.get(name=model_name)

    images_dir = task.get_data_dirname()

    # Paths to the files that were uploaded with the model
    py_file = model.interpretation_file.name
    mapping_file = model.labelmap_file.name
    xml_file = model.model_file.name
    bin_file = model.weights_file.name

    # Collect every jpg belonging to the task
    image_files = []
    for root, _, filenames in os.walk(images_dir):
        for filename in fnmatch.filter(filenames, '*.jpg'):
            image_files.append(os.path.join(root, filename))

    return py_file, mapping_file, bin_file, xml_file, image_files


def main():
    kwargs = _get_kwargs()

    py_file = kwargs.get('py')
    bin_file = kwargs.get('bin')
    mapping_file = kwargs.get('json')
    xml_file = kwargs.get('xml')

    model_name = kwargs.get('model_name')
    task_id = kwargs.get('task_id')

    # Both --model-name and --task-id given: pull the files from the server's database
    is_docker = model_name and task_id

    # xor is `exclusive or`: true if one or the other is set, but not both
    if xor(bool(model_name), bool(task_id)):
        logging.critical('Must provide both `--model-name` and `--task-id` together!')
        return

    if is_docker:
        files = _get_docker_files(model_name, task_id)
        py_file = files[0]
        mapping_file = files[1]
        bin_file = files[2]
        xml_file = files[3]
        image_files = files[4]
    else:
        return_ = False
        if not py_file:
            logging.critical('Must provide --py file!')
            return_ = True
        if not bin_file:
            logging.critical('Must provide --bin file!')
            return_ = True
        if not xml_file:
            logging.critical('Must provide --xml file!')
            return_ = True
        if not mapping_file:
            logging.critical('Must provide --json file!')
            return_ = True

        if return_:
            return

    if not os.path.isfile(py_file):
        logging.critical('Py file not found! Check the path')
@@ -98,7 +165,9 @@ def main():
    mapping = {int(k): v for k, v in mapping.items()}

    restricted = kwargs['restricted']

    if not is_docker:
        image_files = kwargs.get('image_files')

    if image_files:
        image_data = [cv2.imread(f) for f in image_files]