This training extension is designed for running SSD MobileNet with FPN on Intel devices that cannot accommodate the original 640-size model. By resizing and fine-tuning the network, the model runs with OpenVINO across various Intel devices while achieving accuracy close to that of the original 640-size model. Check the accuracy table below.
| Model Name | Size | AP IoU=0.50:0.95 | AP IoU=0.50 | AR maxDets=100 |
| --- | --- | --- | --- | --- |
| Pretrained[*1] | 640 | 0.291 | 0.445 | 0.370 |
| Resized[*2] | 602 | 0.234 | 0.424 | 0.309 |
| Fine-tuned[*3] | 602 | 0.283 | 0.439 | 0.361 |
Note:
- [*1] is the 640-size model downloaded from the TensorFlow Model Zoo;
- [*2] is the 602-size model resized directly from [*1], without fine-tuning;
- [*3] is the fine-tuned 602-size model produced by this extension.
Prerequisites:
- Ubuntu 16.04
- Python 3.5
- TensorFlow 1.13.1
- OpenVINO 2019 R1 with built Inference Engine samples
- Download submodules:
```bash
cd openvino_training_extensions
git submodule update --init --recursive
```
- Create a virtual environment:
```bash
virtualenv venv -p python3 --prompt="(ssd_mobilenet_fpn_602)"
```
- Modify `venv/bin/activate` to set the environment variables:
```bash
cat <<EOT >> venv/bin/activate
export PYTHONPATH=\$PYTHONPATH:$(git rev-parse --show-toplevel)/external/models/research
export PYTHONPATH=\$PYTHONPATH:$(git rev-parse --show-toplevel)/external/models/research/slim
. /opt/intel/openvino/bin/setupvars.sh
EOT
```
- Activate the virtual environment; this also sets up the OpenVINO variables:
```bash
. venv/bin/activate
```
- Install the required modules:
```bash
pip3 install -r requirements.txt
pip3 install -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_tf.txt
```
- Build and install the COCO API for Python:
```bash
cd $(git rev-parse --show-toplevel)/external/cocoapi
2to3 . -w
cd PythonAPI
make install
```
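A quick way to confirm the installation is to import pycocotools and load one of the COCO2017 annotation files downloaded in the dataset step below; the annotation path here is only a placeholder:
```python
# Sanity check (hedged): pycocotools should be importable from the
# virtualenv and able to parse a COCO annotation file. The path assumes
# the dataset layout created in the data preparation step below.
from pycocotools.coco import COCO

coco = COCO('dataset/annotations/instances_val2017.json')
print('%d images, %d categories' % (len(coco.imgs), len(coco.cats)))
```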
- Compile the protobufs:
```bash
cd openvino_training_extensions/external/models/research/
protoc object_detection/protos/*.proto --python_out=.
```
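To verify that the protos were generated correctly, and to see where the 602 input size comes from, the pipeline config can be parsed programmatically. A minimal sketch, assuming the standard TF Object Detection API proto layout; run it from the extension directory:
```python
# Parse configs/pipeline.config using the protos compiled above and print
# the input resolution configured in the fixed_shape_resizer
# (expected: 602x602 for this extension).
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open('configs/pipeline.config') as f:
    text_format.Merge(f.read(), config)

resizer = config.model.ssd.image_resizer.fixed_shape_resizer
print('input size: %dx%d' % (resizer.height, resizer.width))
```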
- Download images and annotations from cocodataset.org. COCO2017 is used in this repo for training and validation:
```bash
# From openvino_training_extensions/tensorflow_toolkit/ssd_mobilenet_fpn_602/
mkdir -p dataset/images
wget -P dataset http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget -P dataset http://images.cocodataset.org/zips/train2017.zip
wget -P dataset http://images.cocodataset.org/zips/val2017.zip
unzip dataset/annotations_trainval2017.zip -d dataset
unzip dataset/train2017.zip -d dataset/images
unzip dataset/val2017.zip -d dataset/images
```
- A data generation script is provided to build the training and validation sets, since the TensorFlow Object Detection API evaluates on the COCO minival set (note that this split differs from COCO2017 val). See create_coco_tfrecord.py for more details. To run the script:
```bash
python tools/create_coco_tfrecord.py \
  --image_folder=dataset/images \
  --annotation_folder=dataset/annotations \
  --coco_minival_ids_file=../../external/models/research/object_detection/data/mscoco_minival_ids.txt \
  --output_folder=dataset/tfrecord
```
- After data preparation, the tfrecords are located in dataset/tfrecord. Files prefixed with coco_train2017_plus.record and coco_minival2017.record can be used for training and validation, respectively.
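To spot-check the generated records, a single example can be decoded with the TF 1.x record iterator. A minimal sketch, assuming the standard TF Object Detection API feature keys:
```python
import glob
import tensorflow as tf  # TF 1.13

# Pick the first training shard; exact shard names may vary.
path = glob.glob('dataset/tfrecord/coco_train2017_plus.record*')[0]
record = next(tf.python_io.tf_record_iterator(path))
features = tf.train.Example.FromString(record).features.feature

print('image: %dx%d, %d boxes' % (
    features['image/width'].int64_list.value[0],
    features['image/height'].int64_list.value[0],
    len(features['image/object/bbox/xmin'].float_list.value)))
```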
- Download the pre-trained model from the TensorFlow Object Detection Model Zoo and extract it:
```bash
# From openvino_training_extensions/tensorflow_toolkit/ssd_mobilenet_fpn_602/
mkdir -p models
wget -P models http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz
tar xzvf models/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz -C models
```
- Run fine-tuning as follows (note that evaluation metrics will be printed out during fine-tuning):
```bash
python ../../external/models/research/object_detection/model_main.py \
  --model_dir=./models/checkpoint \
  --pipeline_config_path=./configs/pipeline.config
```
- Run the following command for visualization, and follow the terminal instructions to view the fine-tuning and evaluation results in a browser:
```bash
tensorboard --logdir=./models/checkpoint
```
- After fine-tuning, a single evaluation can be run as follows:
```bash
python ../../external/models/research/object_detection/model_main.py \
  --model_dir=./models/checkpoint \
  --pipeline_config_path=./configs/pipeline.config \
  --sample_1_of_n_eval_examples=1 \
  --checkpoint_dir=./models/checkpoint \
  --run_once=True
```
- New TF checkpoints are generated during fine-tuning; they can be converted into a frozen inference graph as follows:
```bash
python ../../external/models/research/object_detection/export_inference_graph.py \
  --input_type=image_tensor \
  --pipeline_config_path=./configs/pipeline.config \
  --trained_checkpoint_prefix=./models/checkpoint/model.ckpt-2500 \
  --output_directory=./models/frozen_graph
```
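The exported graph can be loaded back to confirm the standard TF Object Detection API input/output tensors are present; a minimal sketch:
```python
import tensorflow as tf  # TF 1.13

# Load the frozen graph and list the usual TF OD API input/output tensors.
graph_def = tf.GraphDef()
with tf.gfile.GFile('./models/frozen_graph/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

for name in ('image_tensor', 'detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'):
    print(graph.get_tensor_by_name(name + ':0'))
```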
- frozen_inference_graph.pb is generated by the step above. It can then be converted into OpenVINO IR:
```bash
python "${INTEL_OPENVINO_DIR}"/deployment_tools/model_optimizer/mo_tf.py \
  --input_model=./models/frozen_graph/frozen_inference_graph.pb \
  --output_dir=./models/openvino_ir \
  --tensorflow_object_detection_api_pipeline_config=./configs/pipeline.config \
  --tensorflow_use_custom_operations_config="${INTEL_OPENVINO_DIR}"/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
```
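Besides the C++ sample below, the IR can also be exercised from Python. A hedged sketch using the IEPlugin interface shipped with OpenVINO 2019 R1 (the paths and the confidence threshold are illustrative only):
```python
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model='./models/openvino_ir/frozen_inference_graph.xml',
                weights='./models/openvino_ir/frozen_inference_graph.bin')
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape  # expected: 1 x 3 x 602 x 602

image = cv2.imread('./assets/000000322211.jpg')
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)[np.newaxis]  # NCHW, BGR

plugin = IEPlugin(device='CPU')  # or MYRIAD / GPU, as available
exec_net = plugin.load(network=net)
detections = exec_net.infer({input_blob: blob})[out_blob]

# SSD output rows: [image_id, label, confidence, xmin, ymin, xmax, ymax]
for det in detections[0][0]:
    if det[2] > 0.5:
        print('label %d, conf %.2f' % (int(det[1]), det[2]))
```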
- Build the OpenVINO samples:
```bash
cd ${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/samples
bash build_samples.sh
```
- After the OpenVINO IR is generated, run the OpenVINO SSD sample as follows:
```bash
$HOME/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd \
  -i ./assets/000000322211.jpg \
  -m ./models/openvino_ir/frozen_inference_graph.xml \
  -d ${DEVICE}
```
- Check the output image using:
```bash
eog output_0.bmp
```
Note: The OpenVINO 2019 R1 Inference Engine samples are assumed to be built already (see the build step above). Please refer to the OpenVINO documentation on building the sample applications if you have not done this before.
Note: Try `export DEVICE=MYRIAD` to run this model on a VPU.