
REST API tests with skeletons #4987

Merged
merged 49 commits
Sep 28, 2022
39c03be
Added changes to CvatExtractor
yasakova-anastasia Aug 24, 2022
4355f43
Added Skeleton annotation type
yasakova-anastasia Aug 26, 2022
8496436
Some fixes
yasakova-anastasia Aug 30, 2022
5f39c10
Resolve conflicts
yasakova-anastasia Sep 13, 2022
12208be
Fix an issue with backups
yasakova-anastasia Sep 13, 2022
2c0eeae
Fixes
yasakova-anastasia Sep 13, 2022
c57d6bc
Update Datumaro version
yasakova-anastasia Sep 13, 2022
912c312
Resolve conflicts
yasakova-anastasia Sep 13, 2022
ca4def9
Fix an issue with backups
yasakova-anastasia Sep 15, 2022
bcd7b00
Fix tests
yasakova-anastasia Sep 19, 2022
2a565ed
Fix Pylint
yasakova-anastasia Sep 19, 2022
1fc4d42
Small fix
yasakova-anastasia Sep 19, 2022
cc72a50
Fix test
yasakova-anastasia Sep 19, 2022
8cf1267
Merge branch 'develop' into ay/fix-dataset-import
yasakova-anastasia Sep 19, 2022
07e1e5f
Small fix
yasakova-anastasia Sep 19, 2022
3a36bbe
Add a test to create a task
yasakova-anastasia Sep 20, 2022
f99c605
Merge branch 'develop' into ay/tests-with-skeletons
yasakova-anastasia Sep 21, 2022
c705550
Merge remote-tracking branch 'remotes/origin/ay/fix-dataset-import' i…
yasakova-anastasia Sep 21, 2022
b1c41a5
Add tests
yasakova-anastasia Sep 21, 2022
46e1b5b
Merge branch 'develop' into ay/fix-dataset-import
yasakova-anastasia Sep 21, 2022
3beb2f2
Update Datumaro version
yasakova-anastasia Sep 21, 2022
aa2dccc
Merge branch 'develop' into ay/datumaro-update
yasakova-anastasia Sep 21, 2022
5f99f37
Update Changelog
yasakova-anastasia Sep 22, 2022
a5f5f80
Fixes
yasakova-anastasia Sep 22, 2022
3907ec9
Merge branch 'ay/fix-dataset-import' into ay/tests-with-skeletons
yasakova-anastasia Sep 22, 2022
fbfadf2
Fix assets
yasakova-anastasia Sep 23, 2022
c153d93
Small fix
yasakova-anastasia Sep 23, 2022
a31b1a4
Update tests
yasakova-anastasia Sep 23, 2022
dd937ff
Fixes
yasakova-anastasia Sep 23, 2022
fed0296
Merge branch 'ay/fix-dataset-import' into ay/tests-with-skeletons
yasakova-anastasia Sep 23, 2022
dd849f3
Fix tests
yasakova-anastasia Sep 23, 2022
9f83cd7
Add tests
yasakova-anastasia Sep 23, 2022
7fa94e7
Small fix
yasakova-anastasia Sep 23, 2022
8b669e9
Merge branch 'ay/fix-dataset-import' into ay/tests-with-skeletons
yasakova-anastasia Sep 23, 2022
a88aeaf
Add test for COCO Keypoints
yasakova-anastasia Sep 25, 2022
39ed8df
Fix Pylint
yasakova-anastasia Sep 25, 2022
96724e1
Merge branch 'develop' into ay/tests-with-skeletons
yasakova-anastasia Sep 25, 2022
4139a1a
Fix sdk tests
yasakova-anastasia Sep 25, 2022
395b5f5
Small fix
yasakova-anastasia Sep 26, 2022
f20c915
Update documentation
yasakova-anastasia Sep 26, 2022
1bc417c
Resolve conflicts
yasakova-anastasia Sep 26, 2022
b33527f
Update Changelog
yasakova-anastasia Sep 26, 2022
a178140
Some fixes
yasakova-anastasia Sep 26, 2022
2052141
Merge branch 'ay/fix-dataset-import' into ay/tests-with-skeletons
yasakova-anastasia Sep 26, 2022
caa3ea4
Small fix
yasakova-anastasia Sep 27, 2022
d7db4fc
Fix test
yasakova-anastasia Sep 27, 2022
8400912
Add test to remove skeleton label
yasakova-anastasia Sep 27, 2022
7b3beed
Resolve conflicts
yasakova-anastasia Sep 28, 2022
900af1a
Remove useless changes
yasakova-anastasia Sep 28, 2022
5 changes: 3 additions & 2 deletions CHANGELOG.md
@@ -12,6 +12,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Changed
- `api/docs`, `api/swagger`, `api/schema` endpoints now allow unauthorized access (<https://github.com/opencv/cvat/pull/4928>)
- Datumaro version (<https://github.com/opencv/cvat/pull/4984>)

### Deprecated
- TDB
@@ -20,9 +21,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- TDB

### Fixed
- Removed possibly duplicated encodeURI() calls in `server-proxy.ts` to prevent doubly encoding
non-ascii paths while adding files from "Connected file share" (issue #4428)
- Removed unnecessary volumes defined in docker-compose.serverless.yml
(<https://github.com/openvinotoolkit/cvat/pull/4659>)

### Security
190 changes: 114 additions & 76 deletions cvat/apps/dataset_manager/bindings.py

Large diffs are not rendered by default.

151 changes: 122 additions & 29 deletions cvat/apps/dataset_manager/formats/cvat.py
@@ -13,7 +13,7 @@

from datumaro.components.annotation import (AnnotationType, Bbox, Label,
LabelCategories, Points, Polygon,
PolyLine)
PolyLine, Skeleton)
from datumaro.components.dataset import Dataset, DatasetItem
from datumaro.components.extractor import (DEFAULT_SUBSET_NAME, Extractor,
Importer)
@@ -118,23 +118,34 @@ def _parse(cls, path):
items = OrderedDict()

track = None
track_element = None
track_shapes = None
shape = None
shape_element = None
tag = None
attributes = None
element_attributes = None
image = None
subset = None
for ev, el in context:
if ev == 'start':
if el.tag == 'track':
frame_size = tasks_info[int(el.attrib.get('task_id'))]['frame_size'] if el.attrib.get('task_id') else tuple(tasks_info.values())[0]['frame_size']
track = {
'id': el.attrib['id'],
'label': el.attrib.get('label'),
'group': int(el.attrib.get('group_id', 0)),
'height': frame_size[0],
'width': frame_size[1],
}
subset = el.attrib.get('subset')
if track:
track_element = {
'id': el.attrib['id'],
'label': el.attrib.get('label'),
}
else:
frame_size = tasks_info[int(el.attrib.get('task_id'))]['frame_size'] if el.attrib.get('task_id') else tuple(tasks_info.values())[0]['frame_size']
track = {
'id': el.attrib['id'],
'label': el.attrib.get('label'),
'group': int(el.attrib.get('group_id', 0)),
'height': frame_size[0],
'width': frame_size[1],
}
subset = el.attrib.get('subset')
track_shapes = {}
elif el.tag == 'image':
image = {
'name': el.attrib.get('name'),
@@ -144,16 +155,28 @@ def _parse(cls, path):
}
subset = el.attrib.get('subset')
elif el.tag in cls._SUPPORTED_SHAPES and (track or image):
attributes = {}
shape = {
'type': None,
'attributes': attributes,
}
if track:
shape.update(track)
shape['track_id'] = int(track['id'])
if image:
shape.update(image)
if shape and shape['type'] == 'skeleton':
element_attributes = {}
shape_element = {
'type': 'rectangle' if el.tag == 'box' else el.tag,
'attributes': element_attributes,
}
shape_element.update(image)
else:
attributes = {}
shape = {
'type': 'rectangle' if el.tag == 'box' else el.tag,
'attributes': attributes,
}
shape['elements'] = []
if track_element:
shape.update(track_element)
shape['track_id'] = int(track_element['id'])
elif track:
shape.update(track)
shape['track_id'] = int(track['id'])
if image:
shape.update(image)
elif el.tag == 'tag' and image:
attributes = {}
tag = {
@@ -164,7 +187,19 @@ def _parse(cls, path):
}
subset = el.attrib.get('subset')
elif ev == 'end':
if el.tag == 'attribute' and attributes is not None:
if el.tag == 'attribute' and element_attributes is not None and shape_element is not None:
attr_value = el.text or ''
attr_type = attribute_types.get(el.attrib['name'])
if el.text in ['true', 'false']:
attr_value = attr_value == 'true'
elif attr_type is not None and attr_type != 'text':
try:
attr_value = float(attr_value)
except ValueError:
pass
element_attributes[el.attrib['name']] = attr_value

if el.tag == 'attribute' and attributes is not None and shape_element is None:
attr_value = el.text or ''
attr_type = attribute_types.get(el.attrib['name'])
if el.text in ['true', 'false']:
@@ -175,6 +210,37 @@ def _parse(cls, path):
except ValueError:
pass
attributes[el.attrib['name']] = attr_value

elif el.tag in cls._SUPPORTED_SHAPES and shape["type"] == "skeleton" and el.tag != "skeleton":
shape_element['label'] = el.attrib.get('label')
shape_element['group'] = int(el.attrib.get('group_id', 0))

shape_element['type'] = el.tag
shape_element['z_order'] = int(el.attrib.get('z_order', 0))

if el.tag == 'box':
shape_element['points'] = list(map(float, [
el.attrib['xtl'], el.attrib['ytl'],
el.attrib['xbr'], el.attrib['ybr'],
]))
else:
shape_element['points'] = []
for pair in el.attrib['points'].split(';'):
shape_element['points'].extend(map(float, pair.split(',')))

if el.tag == 'points' and el.attrib.get('occluded') == '1':
shape_element['visibility'] = [Points.Visibility.hidden] * (len(shape_element['points']) // 2)
else:
shape_element['occluded'] = (el.attrib.get('occluded') == '1')

if el.tag == 'points' and el.attrib.get('outside') == '1':
shape_element['visibility'] = [Points.Visibility.absent] * (len(shape_element['points']) // 2)
else:
shape_element['outside'] = (el.attrib.get('outside') == '1')

shape['elements'].append(shape_element)
shape_element = None

elif el.tag in cls._SUPPORTED_SHAPES:
if track is not None:
shape['frame'] = el.attrib['frame']
@@ -193,15 +259,22 @@ def _parse(cls, path):
el.attrib['xtl'], el.attrib['ytl'],
el.attrib['xbr'], el.attrib['ybr'],
]))
elif el.tag == 'skeleton':
shape['points'] = []
else:
shape['points'] = []
for pair in el.attrib['points'].split(';'):
shape['points'].extend(map(float, pair.split(',')))
if track_element:
track_shapes[shape['frame']]['elements'].append(shape)
elif track:
track_shapes[shape['frame']] = shape
else:
frame_desc = items.get((subset, shape['frame']), {'annotations': []})
frame_desc['annotations'].append(
cls._parse_shape_ann(shape, categories))
items[(subset, shape['frame'])] = frame_desc

frame_desc = items.get((subset, shape['frame']), {'annotations': []})
frame_desc['annotations'].append(
cls._parse_shape_ann(shape, categories))
items[(subset, shape['frame'])] = frame_desc
shape = None

elif el.tag == 'tag':
@@ -211,7 +284,15 @@ def _parse(cls, path):
items[(subset, tag['frame'])] = frame_desc
tag = None
elif el.tag == 'track':
track = None
if track_element:
track_element = None
else:
for track_shape in track_shapes.values():
frame_desc = items.get((subset, track_shape['frame']), {'annotations': []})
frame_desc['annotations'].append(
cls._parse_shape_ann(track_shape, categories))
items[(subset, track_shape['frame'])] = frame_desc
track = None
elif el.tag == 'image':
frame_desc = items.get((subset, image['frame']), {'annotations': []})
frame_desc.update({
@@ -376,7 +457,8 @@ def _parse_shape_ann(cls, ann, categories):
id=ann_id, attributes=attributes, group=group)

elif ann_type == 'points':
return Points(points, label=label_id, z_order=z_order,
visibility = ann.get('visibility', None)
return Points(points, visibility, label=label_id, z_order=z_order,
id=ann_id, attributes=attributes, group=group)

elif ann_type == 'box':
Expand All @@ -385,6 +467,14 @@ def _parse_shape_ann(cls, ann, categories):
return Bbox(x, y, w, h, label=label_id, z_order=z_order,
id=ann_id, attributes=attributes, group=group)

elif ann_type == 'skeleton':
elements = []
for element in ann.get('elements', []):
elements.append(cls._parse_shape_ann(element, categories))

return Skeleton(elements, label=label_id, z_order=z_order,
id=ann_id, attributes=attributes, group=group)

else:
raise NotImplementedError("Unknown annotation type '%s'" % ann_type)

@@ -409,7 +499,7 @@ def _load_items(self, parsed, image_items):
di.subset = subset or DEFAULT_SUBSET_NAME
di.annotations = item_desc.get('annotations')
di.attributes = {'frame': int(frame_id)}
di.image = image if isinstance(image, Image) else di.image
di.media = image if isinstance(image, Image) else di.media
image_items[(subset, osp.splitext(name)[0])] = di
return image_items

@@ -962,7 +1052,10 @@ def dump_track(idx, track):
elements=[],
) for element in shape.elements]
}
if isinstance(annotations, ProjectData): track['task_id'] = shape.task_id
if isinstance(annotations, ProjectData):
track['task_id'] = shape.task_id
for element in track['elements']:
element.task_id = shape.task_id
dump_track(counter, annotations.Track(**track))
counter += 1

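The cvat.py parser changes above can be sketched in miniature: a `skeleton` shape carries no points of its own; its geometry lives in child `points` elements, which are collected into the parent shape's `elements` list, and attribute values are coerced (`'true'`/`'false'` to bool, otherwise float where possible). The XML fragment and dict layout below are illustrative assumptions, not the exact CVAT schema, and the real parser uses streaming `iterparse` rather than `fromstring`.

```python
# Simplified sketch of the skeleton-parsing flow added in this PR
# (assumed, minimal XML; not the exact CVAT for-images schema).
import xml.etree.ElementTree as ET

XML = """
<image name="frame_0">
  <skeleton label="person" z_order="0">
    <points label="head" points="10.0,20.0" occluded="0">
      <attribute name="visible">true</attribute>
    </points>
    <points label="neck" points="10.0,40.0" occluded="1"/>
  </skeleton>
</image>
"""

def coerce(value):
    # Mirrors the parser's attribute handling: bools first, then numbers,
    # falling back to the raw string.
    if value in ('true', 'false'):
        return value == 'true'
    try:
        return float(value)
    except ValueError:
        return value

def parse_skeleton(xml_text):
    image = ET.fromstring(xml_text)
    skel = image.find('skeleton')
    # The parent skeleton gets an empty points list, matching the
    # `elif el.tag == 'skeleton': shape['points'] = []` branch above.
    shape = {'type': 'skeleton', 'label': skel.get('label'),
             'points': [], 'elements': []}
    for el in skel.findall('points'):
        points = [float(v) for pair in el.get('points').split(';')
                  for v in pair.split(',')]
        shape['elements'].append({
            'type': 'points',
            'label': el.get('label'),
            'points': points,
            'occluded': el.get('occluded') == '1',
            'attributes': {a.get('name'): coerce(a.text)
                           for a in el.findall('attribute')},
        })
    return shape
```

A `Skeleton` annotation is then built from the parsed dict by recursing `_parse_shape_ann` over each entry in `elements`, as the new `ann_type == 'skeleton'` branch shows.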
6 changes: 3 additions & 3 deletions cvat/apps/dataset_manager/formats/icdar.py
@@ -90,7 +90,7 @@ def _import(src_file, instance_data, load_data_callback=None):
with TemporaryDirectory() as tmp_dir:
zipfile.ZipFile(src_file).extractall(tmp_dir)
dataset = Dataset.import_from(tmp_dir, 'icdar_word_recognition', env=dm_env)
dataset.transform(CaptionToLabel, 'icdar')
dataset.transform(CaptionToLabel, label='icdar')
if load_data_callback is not None:
load_data_callback(dataset, instance_data)
import_dm_annotations(dataset, instance_data)
@@ -110,7 +110,7 @@ def _import(src_file, instance_data, load_data_callback=None):
zipfile.ZipFile(src_file).extractall(tmp_dir)

dataset = Dataset.import_from(tmp_dir, 'icdar_text_localization', env=dm_env)
dataset.transform(AddLabelToAnns, 'icdar')
dataset.transform(AddLabelToAnns, label='icdar')
if load_data_callback is not None:
load_data_callback(dataset, instance_data)
import_dm_annotations(dataset, instance_data)
@@ -133,7 +133,7 @@ def _import(src_file, instance_data, load_data_callback=None):
with TemporaryDirectory() as tmp_dir:
zipfile.ZipFile(src_file).extractall(tmp_dir)
dataset = Dataset.import_from(tmp_dir, 'icdar_text_segmentation', env=dm_env)
dataset.transform(AddLabelToAnns, 'icdar')
dataset.transform(AddLabelToAnns, label='icdar')
dataset.transform('masks_to_polygons')
if load_data_callback is not None:
load_data_callback(dataset, instance_data)
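The icdar.py, market1501.py, and vggface2.py edits all make the same mechanical change: extra `dataset.transform(...)` arguments are now passed by keyword instead of positionally, presumably because the updated Datumaro forwards them as keyword arguments only. A minimal stand-in illustrating the pattern — `CaptionToLabel` and `Dataset` here are stubs for illustration, not the real Datumaro classes:

```python
# Stub sketch of why `dataset.transform(CaptionToLabel, label='icdar')`
# is required once extra transform arguments are forwarded as keywords
# (assumed behavior; the real Datumaro API differs in detail).
class CaptionToLabel:
    def __init__(self, extractor, *, label):
        self.extractor = extractor
        self.label = label

class Dataset:
    def transform(self, method, **kwargs):
        # Only keyword arguments are forwarded to the transform.
        return method(self, **kwargs)

ds = Dataset()
t = ds.transform(CaptionToLabel, label='icdar')  # new call style: works
# ds.transform(CaptionToLabel, 'icdar')          # old style: TypeError
```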
4 changes: 2 additions & 2 deletions cvat/apps/dataset_manager/formats/market1501.py
@@ -65,7 +65,7 @@ def _export(dst_file, instance_data, save_images=False):
dataset = Dataset.from_extractors(GetCVATDataExtractor(
instance_data, include_images=save_images), env=dm_env)
with TemporaryDirectory() as temp_dir:
dataset.transform(LabelAttrToAttr, 'market-1501')
dataset.transform(LabelAttrToAttr, label='market-1501')
dataset.export(temp_dir, 'market1501', save_images=save_images)
make_zip_archive(temp_dir, dst_file)

@@ -75,7 +75,7 @@ def _import(src_file, instance_data, load_data_callback=None):
zipfile.ZipFile(src_file).extractall(tmp_dir)

dataset = Dataset.import_from(tmp_dir, 'market1501', env=dm_env)
dataset.transform(AttrToLabelAttr, 'market-1501')
dataset.transform(AttrToLabelAttr, label='market-1501')
if load_data_callback is not None:
load_data_callback(dataset, instance_data)
import_dm_annotations(dataset, instance_data)
2 changes: 1 addition & 1 deletion cvat/apps/dataset_manager/formats/vggface2.py
@@ -29,7 +29,7 @@ def _import(src_file, instance_data, load_data_callback=None):
zipfile.ZipFile(src_file).extractall(tmp_dir)

dataset = Dataset.import_from(tmp_dir, 'vgg_face2', env=dm_env)
dataset.transform('rename', r"|([^/]+/)?(.+)|\2|")
dataset.transform('rename', regex=r"|([^/]+/)?(.+)|\2|")
if load_data_callback is not None:
load_data_callback(dataset, instance_data)
import_dm_annotations(dataset, instance_data)
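The vggface2.py change only names the `rename` transform's argument, but the pattern itself is worth unpacking: `|([^/]+/)?(.+)|\2|` uses sed-style `|pattern|replacement|` delimiters and strips at most one leading directory component from each item id. A rough stdlib equivalent, assuming the transform applies the pattern as a regex substitution on the item id:

```python
import re

# '|([^/]+/)?(.+)|\2|' split on the '|' delimiter gives:
pattern, repl = r"([^/]+/)?(.+)", r"\2"

def strip_leading_dir(item_id):
    # Drops at most one leading "dir/" prefix; deeper paths keep the rest.
    return re.sub(pattern, repl, item_id)

print(strip_leading_dir("n000001/0001_01"))  # -> 0001_01
print(strip_leading_dir("0001_01"))          # -> 0001_01
```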