Az/custom annotation #233

Merged: 42 commits, Dec 26, 2018

Commits (diff shows changes from 7 commits)
176d632
initial version of custom annotation application
Dec 10, 2018
98760d1
added readme for custom annotation
Dec 13, 2018
20b5d01
Update Readme
azhavoro Dec 13, 2018
b5d90eb
Update README
azhavoro Dec 13, 2018
fdb1d51
update README
Dec 13, 2018
145beea
Merge branch 'az/custom_annotation' of https://github.com/opencv/cvat…
Dec 13, 2018
e81bbbb
minor fixes
Dec 13, 2018
897ad63
custom annotation -> auto annotation
Dec 14, 2018
095bfbb
fixed typos
Dec 14, 2018
8d608db
remove unused method
Dec 14, 2018
039d70f
fixed indents
Dec 14, 2018
0f19d63
restricting usage of built-ins in user's code
Dec 14, 2018
95ac2d9
updted README
Dec 14, 2018
515aafb
fixed typos
Dec 17, 2018
ae3e565
fixed typo
Dec 17, 2018
96b57cc
fixed comments from Boris
Dec 17, 2018
f6a9563
switch from OpenCV to IE to infer directly
Dec 20, 2018
78d19a4
fixed some codacy issues
Dec 25, 2018
ba53010
updated README
Dec 25, 2018
f3e33a1
fix codacy issue
Dec 25, 2018
0240061
fix codacy issues
Dec 25, 2018
dd7ae04
fix codacy issues
Dec 25, 2018
284df75
updated readme
Dec 25, 2018
1d7204d
Update README.md
azhavoro Dec 25, 2018
565d442
Update README.md
azhavoro Dec 26, 2018
8c0c8fd
fixed some typos
Dec 26, 2018
c854dfb
fixed some typos
Dec 26, 2018
6dc611b
Moved information about components into root README.md
Dec 26, 2018
9142fd8
Merge remote-tracking branch 'origin/develop' into az/custom_annotation
Dec 26, 2018
b2805b2
Slightly improved documentation.
Dec 26, 2018
1daf222
Fix typo
nmanovic Dec 26, 2018
f0ff2f1
added public results class to interact with interpretation script
Dec 26, 2018
4f1dc0d
attributes support
Dec 26, 2018
e98080f
fixed codacy issues
Dec 26, 2018
4c20011
added several points for Point shape support
Dec 26, 2018
cd68501
added polylines and polygons support
Dec 26, 2018
0254757
rename add_point -> add_points
Dec 26, 2018
28d86e2
updated readme
Dec 26, 2018
8370bb7
Merge remote-tracking branch 'origin/develop' into az/custom_annotation
Dec 26, 2018
fde65d8
Update CHANGELOG.md
Dec 26, 2018
015571c
support OpenVINO R5
Dec 26, 2018
6cc79f5
Merge branch 'az/custom_annotation' of https://github.com/opencv/cvat…
Dec 26, 2018
94 changes: 94 additions & 0 deletions cvat/apps/custom_annotation/README.md
@@ -0,0 +1,94 @@
## Custom annotation

### Description

This application is enabled automatically if the OpenVINO component is installed. It allows you to use custom detection models for pre-annotation.
Supported frameworks:
* DLDT from the OpenVINO toolkit

The application uses the OpenCV DNN module with the DLDT backend for inference.
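
For orientation, a minimal sketch of this inference path with OpenCV (file names and parameter values below are placeholders; the blob parameters mirror the example configuration shown in the Usage section):

```python
import cv2

# Read a DLDT model: the .xml topology plus the .bin weights (placeholder paths).
net = cv2.dnn.readNet('model.xml', 'model.bin')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# Preprocess a frame according to the "blob_params" settings and run inference.
image = cv2.imread('frame.png')
blob = cv2.dnn.blobFromImage(image, scalefactor=0.0078431372549, size=(300, 300),
                             mean=(127.5, 127.5, 127.5), swapRB=False, crop=False)
net.setInput(blob)
detections = net.forward()
```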

### Usage
To annotate a task with a custom model you need to prepare 4 files:
1. **Model config** - a text file that contains the network configuration. It could be a file with the following extensions:
   * *.xml (DLDT)
1. **Model weights** - a binary file that contains the trained weights. The following file extensions are expected for models from different frameworks:
   * *.bin (DLDT)
1. **Preprocessing configuration and label map** - a simple JSON file that describes the image dimensions and preprocessing options. For more details please see the [OpenCV](https://docs.opencv.org/3.4/d6/d0f/group__dnn.html#ga0b7b7c3c530b747ef738178835e1e70f) documentation.
Label values in `label_map` must exactly match the labels the task was created with, otherwise they will be ignored.
Example:
```json
{
  "blob_params": {
    "width": 300,
    "height": 300,
    "mean": "127.5, 127.5, 127.5",
    "scalefactor": 0.0078431372549,
    "swapRB": false,
    "crop": false
  },
  "label_map": {
    "0": "background",
    "1": "aeroplane",
    "2": "bicycle",
    "3": "bird",
    "4": "boat",
    "5": "bottle",
    "6": "bus",
    "7": "car",
    "8": "cat",
    "9": "chair",
    "10": "cow",
    "11": "diningtable",
    "12": "dog",
    "13": "horse",
    "14": "motorbike",
    "15": "person",
    "16": "pottedplant",
    "17": "sheep",
    "18": "sofa",
    "19": "train",
    "20": "tvmonitor"
  }
}
```
1. **Interpretation script** - a Python script that converts the network output into the CVAT format. The file must contain a function with the following signature: `process_detections(detections)`. Here `detections` is a list of dictionaries, one per task frame, with the following keys:
   * frame_id - frame number
   * frame_height - frame height
   * frame_width - frame width
   * detections - output blob (see [cv::dnn::Net::forward](https://docs.opencv.org/3.4/db/d30/classcv_1_1dnn_1_1Net.html#a98ed94cb6ef7063d3697259566da310b) for details).
Example for an SSD-based network:
```python
def process_detections(detections):
    def clip(value):
        return max(min(1.0, value), 0.0)

    boxes = []
    for frame_results in detections:
        frame_height = frame_results['frame_height']
        frame_width = frame_results['frame_width']
        frame_number = frame_results['frame_id']

        for i in range(frame_results['detections'].shape[2]):
            confidence = frame_results['detections'][0, 0, i, 2]
            if confidence < 0.4: continue

            class_id = str(int(frame_results['detections'][0, 0, i, 1]))
            xtl = '{:.2f}'.format(clip(frame_results['detections'][0, 0, i, 3]) * frame_width)
            ytl = '{:.2f}'.format(clip(frame_results['detections'][0, 0, i, 4]) * frame_height)
            xbr = '{:.2f}'.format(clip(frame_results['detections'][0, 0, i, 5]) * frame_width)
            ybr = '{:.2f}'.format(clip(frame_results['detections'][0, 0, i, 6]) * frame_height)

            boxes.append({
                'label': class_id,
                'frame': frame_number,
                'xtl': xtl,
                'ytl': ytl,
                'xbr': xbr,
                'ybr': ybr,
                'attributes': {
                    'confidence': '{:.2f}'.format(confidence),
                }
            })
    return {'boxes': boxes}
```
8 changes: 8 additions & 0 deletions cvat/apps/custom_annotation/__init__.py
@@ -0,0 +1,8 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from cvat.settings.base import JS_3RDPARTY

JS_3RDPARTY['dashboard'] = JS_3RDPARTY.get('dashboard', []) + ['custom_annotation/js/custom_annotation.js']
9 changes: 9 additions & 0 deletions cvat/apps/custom_annotation/admin.py
@@ -0,0 +1,9 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.contrib import admin

# Register your models here.

11 changes: 11 additions & 0 deletions cvat/apps/custom_annotation/apps.py
@@ -0,0 +1,11 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.apps import AppConfig


class CustomAnnotationConfig(AppConfig):
    name = 'custom_annotation'

23 changes: 23 additions & 0 deletions cvat/apps/custom_annotation/image_loader.py
@@ -0,0 +1,23 @@
import cv2

class ImageLoader():
    def __init__(self, image_list):
        self.image_list = image_list

    def __getitem__(self, i):
        return self.image_list[i]

    def __iter__(self):
        for imagename in self.image_list:
            yield imagename, self.load_image(imagename)

    def __len__(self):
        return len(self.image_list)

    @staticmethod
    def load_image(path_to_image):
        return cv2.imread(path_to_image)

    @staticmethod
    def _resize_image(image, size):
        return cv2.resize(image, size)
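
A minimal usage sketch of this loader (the frame paths below are hypothetical):

```python
# Hypothetical paths: ImageLoader yields (name, image) pairs lazily,
# reading each file with cv2.imread only when it is iterated.
loader = ImageLoader(['/data/task/frame_000.png', '/data/task/frame_001.png'])
print(len(loader))
for name, image in loader:
    print(name, None if image is None else image.shape)
```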
5 changes: 5 additions & 0 deletions cvat/apps/custom_annotation/migrations/__init__.py
@@ -0,0 +1,5 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

51 changes: 51 additions & 0 deletions cvat/apps/custom_annotation/model_loader.py
@@ -0,0 +1,51 @@
import cv2
import json

class BlobParamaters():
    def __init__(self, scalefactor, input_size, mean, swapRB, crop):
        self.scalefactor = scalefactor
        self.input_size = input_size
        self.mean = mean
        self.swapRB = swapRB
        self.crop = crop

class ModelLoader():
    def __init__(self, path_to_model, blob_params):
        self.path_to_model = path_to_model
        self.blob_params = blob_params

    def load(self):
        self.net = cv2.dnn.readNet(*self.path_to_model)
        self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
        self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)

    def setInput(self, images):
        blob = cv2.dnn.blobFromImages(images,
            self.blob_params.scalefactor,
            self.blob_params.input_size,
            self.blob_params.mean,
            self.blob_params.swapRB,
            self.blob_params.crop)

        self.net.setInput(blob)

    def forward(self):
        return self.net.forward()

def read_model_config(config_path):
    with open(config_path, 'r') as f:
        return json.load(f)

def get_blob_props(config):
    blob_params = config['blob_params']
    return BlobParamaters(
        scalefactor=blob_params['scalefactor'] if 'scalefactor' in blob_params else 1.0,
        # cv2.dnn.blobFromImages expects size as (width, height)
        input_size=(blob_params['width'], blob_params['height']) if 'height' in blob_params and 'width' in blob_params else (),
        mean=tuple(float(v) for v in blob_params['mean'].split(',')) if 'mean' in blob_params else tuple(),
        swapRB=blob_params['swapRB'] if 'swapRB' in blob_params else False,
        crop=blob_params['crop'] if 'crop' in blob_params else False,
    )

def get_model_label_map(config):
    return config['label_map']
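
Taken together, a hedged sketch of how these helpers are meant to be used (file names are placeholders and assume the example config from the README):

```python
import cv2

# Placeholder paths: parse the JSON config, build the network from the
# DLDT .xml/.bin pair, and run one forward pass over a single frame.
config = read_model_config('config.json')
props = get_blob_props(config)
labels = get_model_label_map(config)

model = ModelLoader(('model.xml', 'model.bin'), props)
model.load()
model.setInput([cv2.imread('frame_000.png')])
detections = model.forward()
print(detections.shape, len(labels))
```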

9 changes: 9 additions & 0 deletions cvat/apps/custom_annotation/models.py
@@ -0,0 +1,9 @@

# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: MIT

from django.db import models

# Create your models here.

@@ -0,0 +1,179 @@
/*
* Copyright (C) 2018 Intel Corporation
*
* SPDX-License-Identifier: MIT
*/

"use strict";

window.cvat = window.cvat || {};
window.cvat.dashboard = window.cvat.dashboard || {};
window.cvat.dashboard.uiCallbacks = window.cvat.dashboard.uiCallbacks || [];
window.cvat.dashboard.uiCallbacks.push(function(newElements) {
let tids = [];
for (let el of newElements) {
tids.push(el.id.split('_')[1]);
}

$.ajax({
type: 'POST',
url: '/custom_annotation/meta/get',
data: JSON.stringify(tids),
contentType: "application/json; charset=utf-8",
success: (data) => {
newElements.each(function(idx) {
let elem = $(newElements[idx]);
let tid = +elem.attr('id').split('_')[1];

const customAnnoButton = $('<button> Run custom annotation </button>').addClass('semiBold dashboardButtonUI dashboardCustomAnno');
customAnnoButton.appendTo(elem.find('div.dashboardButtonsUI')[0]);

if ((tid in data) && (data[tid].active)) {
customAnnoButton.text('Cancel custom annotation');
customAnnoButton.addClass('customAnnotationProcess');
window.cvat.custom_annotation.checkCustomAnnotationRequest(tid, customAnnoButton);
}

customAnnoButton.on('click', () => {
if (customAnnoButton.hasClass('customAnnotationProcess')) {
$.post(`/custom_annotation/cancel/task/${tid}`).fail( (data) => {
let message = `Error occurred during the cancel custom annotation request. Code: ${data.status}. Message: ${data.responseText || data.statusText}`;
showMessage(message);
throw Error(message);
});
}
else {
let dialogWindow = $(`#${window.cvat.custom_annotation.modalWindowId}`);
dialogWindow.attr('current_tid', tid);
dialogWindow.removeClass('hidden');
}
});
});
},
error: (data) => {
let message = `Cannot get custom annotation meta info. Code: ${data.status}. Message: ${data.responseText || data.statusText}`;
showMessage(message);
throw Error(message);
}
});
});

window.cvat.custom_annotation = {
modalWindowId: 'customAnnotationWindow',
customAnnoFromId: 'customAnnotationForm',
customAnnoModelFieldId: 'customAnnotationModelField',
customAnnoWeightsFieldId: 'customAnnotationWeightsField',
customAnnoConfigFieldId: 'customAnnotationConfigField',
customAnnoConvertFieldId: 'customAnnotationConvertField',
customAnnoCloseButtonId: 'customAnnoCloseButton',
customAnnoSubmitButtonId: 'customAnnoSubmitButton',

checkCustomAnnotationRequest: (tid, customAnnoButton) => {
setTimeout(timeoutCallback, 1000);
function timeoutCallback() {
$.get(`/custom_annotation/check/task/${tid}`).done((data) => {
if (data.status == "started" || data.status == "queued") {
let progress = Math.round(data.progress) || 0;
customAnnoButton.text(`Cancel custom annotation (${progress}%)`);
setTimeout(timeoutCallback, 1000);
}
else {
customAnnoButton.text("Run custom annotation");
customAnnoButton.removeClass("customAnnotationProcess");
}
}).fail((data) => {
let message = `Error occurred while checking the annotation status. ` +
`Code: ${data.status}, text: ${data.responseText || data.statusText}`;
showMessage(message);
});
}
},
};

document.addEventListener("DOMContentLoaded", () => {
$(`<div id="${window.cvat.custom_annotation.modalWindowId}" class="modal hidden">
<form id="${window.cvat.custom_annotation.customAnnoFromId}" class="modal-content" autocomplete="on" onsubmit="return false" style="width: 700px;">
<center>
<label class="semiBold h1"> Custom annotation setup </label>
</center>

<table style="width: 100%; text-align: left;">
<tr>
<td style="width: 25%"> <label class="regular h2"> Model </label> </td>
<td> <input id="${window.cvat.custom_annotation.customAnnoModelFieldId}" type="file" name="model" /> </td>
</tr>
<tr>
<td style="width: 25%"> <label class="regular h2"> Weights </label> </td>
<td> <input id="${window.cvat.custom_annotation.customAnnoWeightsFieldId}" type="file" name="weights" /> </td>
</tr>
<tr>
<td style="width: 25%"> <label class="regular h2"> Config </label> </td>
<td> <input id="${window.cvat.custom_annotation.customAnnoConfigFieldId}" type="file" name="config" accept=".json" /> </td>
</tr>
<tr>
<td style="width: 25%"> <label class="regular h2"> Convertation script </label> </td>
<td> <input id="${window.cvat.custom_annotation.customAnnoConvertFieldId}" type="file" name="convert" /> </td>
</tr>
</table>
<div>
<button id="${window.cvat.custom_annotation.customAnnoCloseButtonId}" class="regular h2"> Close </button>
<button id="${window.cvat.custom_annotation.customAnnoSubmitButtonId}" class="regular h2"> Submit </button>
</div>
</form>

</div>`).appendTo('body');

let annoWindow = $(`#${window.cvat.custom_annotation.modalWindowId}`);
let closeWindowButton = $(`#${window.cvat.custom_annotation.customAnnoCloseButtonId}`);
let submitButton = $(`#${window.cvat.custom_annotation.customAnnoSubmitButtonId}`);

closeWindowButton.on('click', () => {
annoWindow.addClass('hidden');
});

submitButton.on('click', function() {
const tid = annoWindow.attr('current_tid');
const modelInput = $(`#${window.cvat.custom_annotation.customAnnoModelFieldId}`);
const weightsInput = $(`#${window.cvat.custom_annotation.customAnnoWeightsFieldId}`);
const configInput = $(`#${window.cvat.custom_annotation.customAnnoConfigFieldId}`);
const convFileInput = $(`#${window.cvat.custom_annotation.customAnnoConvertFieldId}`);

const modelFile = modelInput.prop('files')[0];
const weightsFile = weightsInput.prop('files')[0];
const configFile = configInput.prop('files')[0];
const convFile = convFileInput.prop('files')[0];

if (!modelFile || !weightsFile || !configFile || !convFile) {
showMessage("All files must be selected");
return;
}

let taskData = new FormData();
taskData.append('model', modelFile);
taskData.append('weights', weightsFile);
taskData.append('config', configFile);
taskData.append('conv_script', convFile);

$.ajax({
url: `/custom_annotation/create/task/${tid}`,
type: 'POST',
data: taskData,
contentType: false,
processData: false,
}).done(() => {
annoWindow.addClass('hidden');
const customAnnoButton = $(`#dashboardTask_${tid} div.dashboardButtonsUI button.dashboardCustomAnno`);
customAnnoButton.addClass('customAnnotationProcess');
window.cvat.custom_annotation.checkCustomAnnotationRequest(tid, customAnnoButton);
}).fail((data) => {
let message = `Error occurred during the run annotation request. ` +
`Code: ${data.status}, text: ${data.responseText || data.statusText}`;
badResponse(message);
});

function badResponse(message) {
showMessage(message);
throw Error(message);
}
});
});
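
For reference, the endpoints this script calls can be exercised directly. Below is a hedged sketch using Python's `requests` library (host, port, and file names are assumptions; session and CSRF handling is omitted):

```python
import requests

BASE = 'http://localhost:8080'  # assumed CVAT host and port
tid = 1                         # hypothetical task id

# Start custom annotation for a task: the same multipart fields the form submits.
files = {
    'model': open('model.xml', 'rb'),
    'weights': open('model.bin', 'rb'),
    'config': open('config.json', 'rb'),
    'conv_script': open('interp.py', 'rb'),
}
requests.post(f'{BASE}/custom_annotation/create/task/{tid}', files=files)

# Poll the status endpoint, as checkCustomAnnotationRequest does once per second.
status = requests.get(f'{BASE}/custom_annotation/check/task/{tid}').json()
print(status.get('status'), status.get('progress'))

# Cancel a running request.
requests.post(f'{BASE}/custom_annotation/cancel/task/{tid}')
```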