The crossroad detection network model detects objects of three classes: vehicle, pedestrian, and non-vehicle (e.g., bikes). The detector was trained on data from crossroad cameras.
- Ubuntu 16.04
- Python 3.6
- TensorFlow 1.13.1
- OpenVINO 2019 R1 with Python API
- Download submodules

  ```bash
  cd openvino_training_extensions
  git submodule update --init --recommend-shallow external/cocoapi external/models
  ```
- Create virtual environment

  ```bash
  virtualenv venv -p python3 --prompt="(pvb)"
  ```
- Modify `venv/bin/activate` to set environment variables

  ```bash
  cat <<EOT >> venv/bin/activate
  export PYTHONPATH=\$PYTHONPATH:$(git rev-parse --show-toplevel)/external/models/research
  export PYTHONPATH=\$PYTHONPATH:$(git rev-parse --show-toplevel)/external/models/research/slim
  . /opt/intel/openvino/bin/setupvars.sh
  EOT
  ```
- Activate virtual environment and set up OpenVINO variables

  ```bash
  . venv/bin/activate
  ```
- Install modules

  ```bash
  pip3 install -r requirements.txt
  ```
- Build and install COCO API for Python

  ```bash
  cd $(git rev-parse --show-toplevel)/external/cocoapi
  2to3 . -w
  cd PythonAPI
  make install
  ```
- Compile Protobuf libraries

  ```bash
  cd $(git rev-parse --show-toplevel)/external/models/research/
  protoc object_detection/protos/*.proto --python_out=.
  ```
**NOTE**: To train the model on your own dataset, you should change `num_steps: 10` in `configs/pipeline.config`.
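If you prefer to patch the config programmatically rather than by hand, a minimal sketch is enough; the helper name and the 20000-step value below are illustrative, not part of the toolkit:

```python
import re

def set_num_steps(config_text, num_steps):
    """Replace the num_steps value in a pipeline.config string (illustrative helper)."""
    return re.sub(r"num_steps:\s*\d+", f"num_steps: {num_steps}", config_text)

# Example: raise the step count for a real training run
cfg = "train_config {\n  batch_size: 32\n  num_steps: 10\n}\n"
print(set_num_steps(cfg, 20000))
```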
- Go to the `openvino_training_extensions/tensorflow_toolkit/veh_ped_nonveh_ssd_mobilenetv2_detector/` directory.
- The example dataset is annotated in the COCO object detection format. You can find it in `openvino_training_extensions/tensorflow_toolkit/veh_ped_nonveh_ssd_mobilenetv2_detector/dataset`.
- To convert the dataset to TFRecords, run:

  ```bash
  python ./tools/create_crossroad_extra_tf_records.py \
      --train_image_dir=../../data/airport/train/ \
      --val_image_dir=../../data/airport/val/ \
      --train_annotations_file=../../data/airport/annotation_example_train.json \
      --val_annotations_file=../../data/airport/annotation_example_val.json \
      --output_dir=../../data/airport/tfrecords
  ```
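For reference, a minimal sketch of the COCO-style JSON layout such converters expect; the file name, ids, and box values below are made up for illustration:

```python
import json

# Minimal COCO object-detection annotation: top-level "images",
# "annotations", and "categories" lists (values are illustrative).
annotation = {
    "images": [
        {"id": 1, "file_name": "image_000000.jpg", "width": 1920, "height": 1080},
    ],
    "annotations": [
        # bbox is [x_min, y_min, width, height] in pixels
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 200, 50, 120], "iscrowd": 0, "area": 6000},
    ],
    "categories": [
        {"id": 1, "name": "pedestrian"},
        {"id": 2, "name": "vehicle"},
        {"id": 3, "name": "non-vehicle"},
    ],
}

print(json.dumps(annotation, indent=2))
</antml>```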
- To start training, run:

  ```bash
  python ../../external/models/research/object_detection/model_main.py \
      --model_dir=./model \
      --pipeline_config_path=./configs/pipeline.config
  ```

  Training artifacts will be stored by default in the `model` directory.
- Evaluation artifacts will be stored by default in `openvino_training_extensions/tensorflow_toolkit/veh_ped_nonveh_ssd_mobilenetv2_detector/model/eval_0/`. To view training and evaluation results, run tensorboard:

  ```bash
  tensorboard --logdir=./model
  ```

  And view results in a browser: http://localhost:6006.
- To export the trained model into a frozen inference graph, run:

  ```bash
  python ../../external/models/research/object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=./configs/pipeline.config \
    --trained_checkpoint_prefix=./model/model.ckpt-10 \
    --output_directory ./model/export_10
  ```
- To run inference with the frozen graph, run:

  ```bash
  python tools/infer.py --model=model/export_10/frozen_inference_graph.pb \
    --label_map=../../data/airport/crossroad_label_map.pbtxt \
    ../../data/airport/val/image_000000.jpg
  ```
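The `*.pbtxt` label map is a small protobuf text file mapping integer class ids to names. If you need that mapping outside the TF Object Detection API (which provides `label_map_util` for the robust version), a quick regex parse works for simple flat files; this helper and its sample input are a sketch, not toolkit code:

```python
import re

def parse_label_map(pbtxt_text):
    """Parse a simple, flat label_map.pbtxt into {id: name} (illustrative only)."""
    labels = {}
    for item in re.finditer(r"item\s*\{(.*?)\}", pbtxt_text, re.S):
        body = item.group(1)
        id_m = re.search(r"\bid:\s*(\d+)", body)
        name_m = re.search(r"name:\s*['\"]([^'\"]+)['\"]", body)
        if id_m and name_m:
            labels[int(id_m.group(1))] = name_m.group(1)
    return labels

sample = """
item {
  id: 1
  name: 'pedestrian'
}
item {
  id: 2
  name: 'vehicle'
}
"""
print(parse_label_map(sample))  # {1: 'pedestrian', 2: 'vehicle'}
```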
- To convert the frozen graph to OpenVINO Intermediate Representation (IR) with the Model Optimizer, run:

  ```bash
  "${INTEL_OPENVINO_DIR}"/deployment_tools/model_optimizer/mo_tf.py \
    --model_name veh_ped_nonveh_ssd_mobilenetv2_detector \
    --input_model=./model/export_10/frozen_inference_graph.pb \
    --output_dir=./model/export_10/IR \
    --tensorflow_object_detection_api_pipeline_config=./configs/pipeline.config \
    --tensorflow_use_custom_operations_config="${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json"
  ```
- To run inference on the converted IR with the OpenVINO Inference Engine, run:

  ```bash
  python tools/infer_ie.py --model model/export_10/IR/veh_ped_nonveh_ssd_mobilenetv2_detector.xml \
    --device=CPU \
    --cpu_extension="${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/lib/intel64/libcpu_extension_avx2.so" \
    --label_map dataset/crossroad_label_map.pbtxt \
    dataset/ssd_mbv2_data_val/image_000000.jpg
  ```
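For SSD topologies the Inference Engine's `DetectionOutput` result is a `[1, 1, N, 7]` tensor where each row is `[image_id, label, conf, x_min, y_min, x_max, y_max]` with coordinates normalized to `[0, 1]`. A sketch of post-processing such an output (the helper and sample rows are illustrative, not part of `infer_ie.py`):

```python
def filter_detections(detections, image_w, image_h, conf_threshold=0.5):
    """Convert raw SSD detection rows into pixel-space boxes above a threshold.

    Each row: [image_id, label, conf, x_min, y_min, x_max, y_max],
    coordinates normalized to [0, 1].
    """
    results = []
    for image_id, label, conf, x0, y0, x1, y1 in detections:
        if conf < conf_threshold:
            continue
        results.append({
            "label": int(label),
            "conf": conf,
            "box": (round(x0 * image_w), round(y0 * image_h),
                    round(x1 * image_w), round(y1 * image_h)),
        })
    return results

# Made-up detection rows for illustration
raw = [
    [0, 2, 0.91, 0.10, 0.20, 0.30, 0.60],
    [0, 1, 0.12, 0.50, 0.50, 0.55, 0.58],  # below threshold, dropped
]
print(filter_detections(raw, 1920, 1080))
```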