This code is for the video-based vehicle re-identification task in AIC19 Track 1 and Track 2 [link]. The code is based on Jiyang Gao's Video-Person-ReID [code].
- PyTorch 0.3.1
- Torchvision 0.2.0
- Python 2.7
First download the AIC19 dataset [link], and use the Python scripts in data_util/ to convert images, keypoints and metadata into the desired file structure. For simplicity, copy the scripts into your aic19-track2-reid path.
- Run `xml_reader_testdata.py` and `xml_reader_traindata.py` to convert images into the desired file structure: `image_train_deepreid/carId/camId/imgId.jpg`.
- Run `create_feature_files.py` to convert the keypoints into the same file structure as the images: `keypoint_train_deepreid/carId/camId/imgId.txt`.
- Run `convert_metadata_imglistprob.py` to convert the metadata inference results of the query (and test) tracks into `prob_v2m100_query.txt` and `imglist_v2m100_query.txt`. Then run `create_metadata_files.py` to convert the metadata into the same file structure as the images: `metadata_v2m100_query_deepreid/carId/camId/imgId.txt`. If using another metadata model, change `v2m100` to its name. Example txt output from the provided metadata model [link] can be downloaded here.
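After conversion, a quick sanity check can confirm the layout. The following is a minimal sketch, not part of the repo (directory names are taken from the structure above; everything else is illustrative); it walks `image_train_deepreid/` and counts images per car and camera:

```python
import os

root = 'image_train_deepreid'  # layout: carId/camId/imgId.jpg (see above)
for car_id in sorted(os.listdir(root)):
    for cam_id in sorted(os.listdir(os.path.join(root, car_id))):
        cam_dir = os.path.join(root, car_id, cam_id)
        # count converted images for this car/camera pair
        n_imgs = len([f for f in os.listdir(cam_dir) if f.endswith('.jpg')])
        print('car %s / cam %s: %d images' % (car_id, cam_id, n_imgs))
```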
To train the model, please run
python main_video_person_reid.py --train-batch 16 --workers 0 --seq-len 4 --arch resnet50ta_surface_nu --width 224 --height 224 --dataset aictrack2 --use-surface --save-dir log --learning-rate 0.0001 --eval-step 50 --save-step 50 --gpu-devices 0 --re-ranking --metadata-model v2m100 --bstri
`arch` can be `resnet50ta_surface_nu` (Temporal Attention with keypoint features, for AIC19 Track 2) or `resnet50ta` (Temporal Attention, for AIC19 Track 1). If using `resnet50ta`, do not use `--use-surface`.
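For reference, a Track 1 training command could look like the following (a sketch that simply swaps `--arch` and drops `--use-surface`, assuming the remaining flags stay unchanged):

python main_video_person_reid.py --train-batch 16 --workers 0 --seq-len 4 --arch resnet50ta --width 224 --height 224 --dataset aictrack2 --save-dir log --learning-rate 0.0001 --eval-step 50 --save-step 50 --gpu-devices 0 --re-ranking --metadata-model v2m100 --bstri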
To test the model, please run
python main_video_person_reid.py --train-batch 16 --workers 0 --seq-len 4 --arch resnet50ta_surface_nu --width 224 --height 224 --dataset aictrack2 --use-surface --evaluate --pretrained-model log/checkpoint_ep300.pth.tar --save-dir log-test --gpu-devices 0 --re-ranking --metadata-model v2m100
Optionally, start from previously saved features without redoing inference:
python main_video_person_reid.py --dataset aictrack2 --save-dir log --re-ranking --metadata-model v2m100 --load-feature --feature-dir feature_dir
`feature_dir` can point to a previously saved feature directory, e.g. `log/feature_ep0300`.
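For example, to re-evaluate with the features saved at epoch 300 from the run above:

python main_video_person_reid.py --dataset aictrack2 --save-dir log --re-ranking --metadata-model v2m100 --load-feature --feature-dir log/feature_ep0300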
The pre-trained model can be downloaded here.
In addition, the confusion matrix of the metadata model needs to be put under `metadata/`. An example confusion matrix can be downloaded here.
To generate features for our AIC19 Track 1 testing [code], run
python Graph_ModelDataGen.py
The pre-trained model can be downloaded here and should be put under `log/`.
In addition, the data should be processed differently:
- Create a `video2img` folder in the downloaded project (i.e., Video-Person-ReID/video2img/).
- Put `crop_img.py` in the corresponding folder of the downloaded dataset (i.e., aic19-track1-mtmc/test) and run `python crop_img.py`. You need to create a folder `track1_test_img` in the same path (i.e., aic19-track1-mtmc/test/track1_test_img). After that, create a folder `track1_sct_img_test_big` and run `python crop_img_big.py`.
- Create a folder `log` in the downloaded project (i.e., Video-Person-ReID/log) and put the downloaded Track 1 ReID model file in this folder. Finally, run `python Graph_ModelDataGen.py` to obtain the feature files (`q_camids3_no_nms_big0510.npy`, `qf3_no_nms_big0510.npy` and `q_pids3_no_nms_big0510.npy`).
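The outputs are .npy files, so they can be inspected with `np.load`. A minimal sketch (file names are from this README; the array shapes are not documented here, so it only prints them):

```python
import numpy as np

qf = np.load('qf3_no_nms_big0510.npy')              # query track features
q_pids = np.load('q_pids3_no_nms_big0510.npy')      # query vehicle IDs
q_camids = np.load('q_camids3_no_nms_big0510.npy')  # query camera IDs
print('features %s, pids %s, camids %s'
      % (qf.shape, q_pids.shape, q_camids.shape))
```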
The code is based on Jiyang Gao's Video-Person-ReID.
The visualization code is adapted from KaiyangZhou's deep-person-reid.
The re-ranking code is modified from zhunzhong07's person-re-ranking.