If you want to use prepared configs to run the Accuracy Checker tool and the Model Quantizer, you need to organize the `<DATASET_DIR>` folder with validation datasets in a certain way. Instructions for preparing the validation data are described in this document.
To download images from ImageNet, you need to have an account and agree to the Terms of Access. Follow the steps below:

- Go to the ImageNet homepage
- If you have an account, click `Login`. Otherwise, click `Signup` in the upper right corner, provide your data, and wait for a confirmation email
- Log in after receiving the confirmation email and go to the `Download` tab
- Select `Download Original Images`
- You will be redirected to the Terms of Access page. If you agree to the Terms, continue by clicking `Agree and Sign`
- Click one of the links in the `Download as one tar file` section to select it
- Unpack the archive
To download the annotation files, follow the steps below:

- `val.txt`
  - Download the archive `caffe_ilsvrc12.tar.gz`
  - Unpack `val.txt` from the archive
- `val15.txt`
  - Download the annotation file `ILSVRC2017_val.txt`
  - Rename `ILSVRC2017_val.txt` to `val15.txt`
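After unpacking, a quick way to sanity-check the annotation files is to parse them. The sketch below assumes the usual layout of the `val.txt` shipped inside `caffe_ilsvrc12.tar.gz`, where each line is an image name followed by a space-separated class index; the `parse_annotation` helper is illustrative, not part of the OMZ tools.

```python
# Sanity-check an ImageNet-style annotation file such as val.txt.
# Assumption: each line reads "<image_name> <class_index>", as in the
# val.txt shipped inside caffe_ilsvrc12.tar.gz.

def parse_annotation(lines):
    """Return a list of (image_name, class_index) pairs."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:  # skip blank lines
            continue
        name, label = line.rsplit(maxsplit=1)
        pairs.append((name, int(label)))
    return pairs
```

Running this over `val.txt` should yield (image name, integer label) pairs; a parse error usually means the archive was unpacked incorrectly.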
To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the following:

- `ILSVRC2012_img_val` - directory containing the ILSVRC 2012 validation images
- `val.txt` - annotation file used for ILSVRC 2012
- `val15.txt` - annotation file used for ILSVRC 2015
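Before running the tools, this layout can be verified with a small script. This is a sketch; `check_imagenet_layout` is a hypothetical helper that only checks the names listed above.

```python
from pathlib import Path

def check_imagenet_layout(dataset_dir):
    """Return a list of entries missing from <DATASET_DIR> (empty = OK)."""
    root = Path(dataset_dir)
    missing = []
    # The validation images must sit in a directory with this exact name.
    if not (root / "ILSVRC2012_img_val").is_dir():
        missing.append("ILSVRC2012_img_val/")
    # Both annotation files are expected at the top level.
    for ann in ("val.txt", "val15.txt"):
        if not (root / ann).is_file():
            missing.append(ann)
    return missing
```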
The following datasets can be used:

- `imagenet_1000_classes` - used for evaluating models trained on the ILSVRC 2012 dataset with 1000 classes. (model examples: `alexnet`, `vgg16`)
- `imagenet_1000_classes_2015` - used for evaluating models trained on the ILSVRC 2015 dataset with 1000 classes. (model examples: `se-resnet-152`, `se-resnext-50`)
- `imagenet_1001_classes` - used for evaluating models trained on the ILSVRC 2012 dataset with 1001 classes (background label + original labels). (model examples: `googlenet-v2-tf`, `resnet-50-tf`)
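The practical difference between the 1000- and 1001-class label sets is a one-position shift: the background label occupies index 0, pushing every original class index up by one. A minimal sketch of the mapping (function names are illustrative, not part of the OMZ tools):

```python
BACKGROUND_INDEX = 0  # the extra label prepended in the 1001-class scheme

def to_1001_classes(label_1000):
    """Map a 0-based 1000-class index to its 1001-class index."""
    return label_1000 + 1

def to_1000_classes(label_1001):
    """Map a 1001-class index back; the background label has no
    1000-class equivalent."""
    if label_1001 == BACKGROUND_INDEX:
        raise ValueError("background label has no 1000-class equivalent")
    return label_1001 - 1
```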
To download the COCO dataset, follow the steps below:

- Download `2017 Val images` and `2017 Train/Val annotations`
- Unpack the archives
To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the following:

- `val2017` - directory containing the COCO 2017 validation images
- `instances_val2017.json` - annotation file used for object detection and instance segmentation tasks
- `person_keypoints_val2017.json` - annotation file used for human pose estimation tasks
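These annotation files are JSON with top-level `images` and `annotations` lists, where each annotation carries an `image_id` (the standard COCO format). A sketch of loading such a file and ordering its annotations by ascending image ID, which is the ordering convention the dataset definitions rely on; the helper name is illustrative:

```python
import json

def annotations_by_image_id(path):
    """Load a COCO-style JSON annotation file and return its
    annotations sorted in ascending order of image ID."""
    with open(path) as f:
        coco = json.load(f)
    return sorted(coco["annotations"], key=lambda ann: ann["image_id"])
```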
The following datasets can be used:

- `ms_coco_mask_rcnn` - used for evaluating models trained on the COCO dataset for object detection and instance segmentation tasks. A background label plus a label map with 80 publicly available object categories is used. Annotations are stored in ascending order of image ID.
- `ms_coco_detection_91_classes` - used for evaluating models trained on the COCO dataset for object detection tasks. A background label plus a label map with 80 publicly available object categories is used (the original indexing to 91 categories is preserved; you can find more information about the object category labels here). Annotations are stored in ascending order of image ID. (model examples: `faster_rcnn_resnet50_coco`, `ssd_resnet50_v1_fpn_coco`)
- `ms_coco_detection_80_class_with_background` - used for evaluating models trained on the COCO dataset for object detection tasks. A background label plus a label map with 80 publicly available object categories is used. Annotations are stored in ascending order of image ID. (model examples: `faster-rcnn-resnet101-coco-sparse-60-0001`, `ssd-resnet34-1200-onnx`)
- `ms_coco_detection_80_class_without_background` - used for evaluating models trained on the COCO dataset for object detection tasks. A label map with 80 publicly available object categories is used. Annotations are stored in ascending order of image ID. (model examples: `ctdet_coco_dlav0_384`, `yolo-v3-tf`)
- `ms_coco_keypoints` - used for evaluating models trained on the COCO dataset for human pose estimation tasks. Each annotation stores all keypoints for one image. (model examples: `human-pose-estimation-0001`)
- `ms_coco_single_keypoints` - used for evaluating models trained on the COCO dataset for human pose estimation tasks. Each annotation stores the keypoints for a single person, so several annotations can be associated with one image. (model examples: `single-human-pose-estimation-0001`)
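The difference between the two keypoint datasets is how person annotations are grouped: all persons of an image in one record, versus one record per person. A sketch over COCO-style annotation dicts (field names follow the COCO keypoints format; the helper names are illustrative):

```python
from collections import defaultdict

def group_per_image(annotations):
    """One record per image, holding the keypoints of every person in it."""
    grouped = defaultdict(list)
    for ann in annotations:
        grouped[ann["image_id"]].append(ann["keypoints"])
    return dict(grouped)

def split_per_person(annotations):
    """One record per person; an image containing several people
    therefore yields several records."""
    return [(ann["image_id"], ann["keypoints"]) for ann in annotations]
```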
To download the WIDER Face dataset, follow the steps below:

- Go to the WIDER FACE website
- Go to the `Download` section
- Select `WIDER Face Validation images` and download them from Google Drive or Tencent Drive
- Select and download `Face annotations`
- Unpack the archives
To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the following:

- `WIDER_val` - directory containing the images directory
  - `images` - directory containing the WIDER Face validation images
- `wider_face_split` - directory with the annotation file
  - `wider_face_val_bbx_gt.txt` - annotation file
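As a sanity check after unpacking, the annotation file can be parsed with a short script. The sketch below assumes the published WIDER Face ground-truth layout: an image path line, then a face count, then that many lines whose first four numbers are `x y w h` (images with zero faces are followed by a single placeholder line). The `parse_wider_gt` helper is illustrative, not part of the OMZ tools.

```python
def parse_wider_gt(lines):
    """Parse wider_face_val_bbx_gt.txt-style lines into
    {image_path: [(x, y, w, h), ...]}."""
    it = iter(lines)
    records = {}
    for path in it:
        path = path.strip()
        if not path:  # skip blank lines
            continue
        count = int(next(it))
        if count == 0:
            next(it)  # skip the placeholder all-zero box line
            records[path] = []
            continue
        boxes = []
        for _ in range(count):
            # Only the first four numbers are the box; the rest are
            # per-face attribute flags.
            x, y, w, h = map(int, next(it).split()[:4])
            boxes.append((x, y, w, h))
        records[path] = boxes
    return records
```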
The following datasets can be used:

- `wider` - used for evaluating models on the WIDER Face dataset where the face is the first class. (model examples: `mtcnn`, `retinaface-resnet50`)
- `wider_without_bkgr` - used for evaluating models on the WIDER Face dataset where the face is class zero. (model examples: `mobilefacedet-v1-mxnet`)