TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning with Structure-Trajectory Prompted Reconstruction for Person Re-Identification
By Haocong Rao and Chunyan Miao. In CVPR 2023 (Arxiv), (Paper), (Appendices).
This is the official implementation of TranSG presented in "TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning with Structure-Trajectory Prompted Reconstruction for Person Re-Identification". The code reproduces the experimental results of the proposed TranSG framework in the paper.
Abstract: Person re-identification (re-ID) via 3D skeleton data is an emerging topic with prominent advantages. Existing methods usually design skeleton descriptors with raw body joints or perform skeleton sequence representation learning. However, they typically cannot concurrently model different body-component relations, and rarely explore useful semantics from fine-grained representations of body joints. In this paper, we propose a generic Transformer-based Skeleton Graph prototype contrastive learning (TranSG) approach with structure-trajectory prompted reconstruction to fully capture skeletal relations and valuable spatial-temporal semantics from skeleton graphs for person re-ID. Specifically, we first devise the Skeleton Graph Transformer (SGT) to simultaneously learn body and motion relations within skeleton graphs, so as to aggregate key correlative node features into graph representations. Then, we propose the Graph Prototype Contrastive learning (GPC) to mine the most typical graph features (graph prototypes) of each identity, and contrast the inherent similarity between graph representations and different prototypes from both skeleton and sequence levels to learn discriminative graph representations. Last, a graph Structure-Trajectory Prompted Reconstruction (STPR) mechanism is proposed to exploit the spatial and temporal contexts of graph nodes to prompt skeleton graph reconstruction, which facilitates capturing more valuable patterns and graph semantics for person re-ID. Empirical evaluations demonstrate that TranSG significantly outperforms existing state-of-the-art methods. We further show its generality under different graph modeling, RGB-estimated skeletons, and unsupervised scenarios.
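As a quick illustration of the graph prototype contrastive idea described above, below is a minimal PyTorch sketch that contrasts batch graph representations against per-identity prototypes (here approximated as in-batch mean features) with an InfoNCE-style loss. The function name and temperature value are illustrative assumptions, not the paper's exact formulation:

# A minimal, illustrative sketch of prototype contrastive learning (not the official implementation)
import torch
import torch.nn.functional as F

def graph_prototype_contrastive_loss(features, labels, temperature=0.07):
    # features: (N, D) graph representations; labels: (N,) identity labels
    features = F.normalize(features, dim=1)
    # Approximate each identity's prototype by its in-batch mean feature
    classes, targets = labels.unique(return_inverse=True)
    prototypes = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    prototypes = F.normalize(prototypes, dim=1)
    # Contrast each representation against all prototypes (InfoNCE-style)
    logits = features @ prototypes.t() / temperature
    return F.cross_entropy(logits, targets)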
- Python >= 3.5
- Tensorflow-gpu >= 1.14.0
- Pytorch >= 1.1.0
- Faiss-gpu >= 1.6.3
Here we provide a configuration file to install the extra requirements (if needed):
conda install --file requirements.txt
Note: This file will not install tensorflow/tensorflow-gpu, faiss-gpu, or pytorch/torch; please install them according to the CUDA version of your graphics card: Tensorflow, Pytorch. Take CUDA 9.0 as an example:
conda install faiss-gpu cuda90 -c pytorch
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch
conda install tensorflow-gpu==1.14
conda install scikit-learn
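After installation, you can verify that the GPU stack is visible with a quick check (a convenience snippet, not part of the repo):

import torch
import tensorflow as tf
import faiss

# Print versions and confirm GPU visibility for each library
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("TensorFlow:", tf.__version__)
print("Faiss GPUs:", faiss.get_num_gpus())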
We provide three already pre-processed datasets (IAS-Lab, BIWI, KGBD) with various sequence lengths (f=4/6/8/10/12) here (pwd: 7je2), and the pre-trained models here (pwd: gvub) or on Google Drive. Since we report the average performance of our approach on all datasets, the provided models may produce better results than those reported in the paper.
Please download the pre-processed datasets and model files, then unzip them to the Datasets/ and ReID_Models/ folders in the current directory.
Note: Access to the Vislab Multi-view KS20 dataset and the large-scale RGB-based gait dataset CASIA-B is available upon request. If you have signed the license agreement and been granted the right to use them, please email us with the signed agreement and we will share the complete pre-processed KS20 and CASIA-B data. The original datasets can be downloaded here: IAS-Lab, BIWI, KGBD, KS20, CASIA-B. We also provide Preprocess.py for directly transforming the original datasets into the formatted training and testing data.
[Update in March 2023]: The pre-trained models on CASIA-B (Discussion Section 5) are available here (pwd: vh3w).
To (1) extract 3D skeleton sequences of length f=6 from the original datasets and (2) process them into a unified format (.npy) for the model inputs, please simply run the following command:
python Preprocess.py 6
Note: If you hope to preprocess manually (or you can get the already preprocessed data here (pwd: 7je2)), please first download and unzip the original datasets to the current directory with the following folder structure:
[Current Directory]
├─ BIWI
│ ├─ Testing
│ │ ├─ Still
│ │ └─ Walking
│ └─ Training
├─ IAS
│ ├─ TestingA
│ ├─ TestingB
│ └─ Training
├─ KGBD
│ └─ kinect gait raw dataset
└─ KS20
├─ frontal
├─ left_diagonal
├─ left_lateral
├─ right_diagonal
└─ right_lateral
After dataset preprocessing, the auto-generated folder structure of datasets is as follows:
Datasets
├─ BIWI
│ └─ 6
│ ├─ test_npy_data
│ │ ├─ Still
│ │ └─ Walking
│ └─ train_npy_data
├─ IAS
│ └─ 6
│ ├─ test_npy_data
│ │ ├─ A
│ │ └─ B
│ └─ train_npy_data
├─ KGBD
│ └─ 6
│ ├─ test_npy_data
│ │ ├─ gallery
│ │ └─ probe
│ └─ train_npy_data
└─ KS20
└─ 6
├─ test_npy_data
│ ├─ gallery
│ └─ probe
└─ train_npy_data
Note: KS20 data must first be converted from ".mat" to ".txt" format. If you are interested in the complete preprocessing of KS20 and CASIA-B, please contact us and we will share it. We recommend directly downloading the preprocessed data here (pwd: 7je2).
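To sanity-check the generated data, you can load a few of the .npy files and inspect their shapes (the exact file names inside the folders depend on Preprocess.py, so the glob pattern below is an assumption to adjust):

import glob
import numpy as np

# Inspect the first few generated training files for KS20 with f=6
for path in sorted(glob.glob("Datasets/KS20/6/train_npy_data/*.npy"))[:3]:
    data = np.load(path, allow_pickle=True)
    print(path, "->", getattr(data, "shape", type(data)))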
To (1) train TranSG to obtain skeleton representations and (2) validate their effectiveness on the person re-ID task on a specific dataset (probe), please simply run the following command:
python TranSG.py --dataset KS20 --probe probe
# Default options: --dataset KS20 --probe probe --length 6 --gpu 0
# --dataset [IAS, KS20, BIWI, KGBD]
# --probe ['probe' (the only probe for KS20 or KGBD), 'A' (for IAS-A probe), 'B' (for IAS-B probe), 'Walking' (for BIWI-Walking probe), 'Still' (for BIWI-Still probe)]
# --length [4, 6, 8, 10, 12]
# --(H, n_heads, L_transfomer, seq_lambda, prompt_lambda, GPC_lambda, lr, etc.) with default settings for each dataset
# --mode [Train (for training), Eval (for testing)]
# --gpu [0, 1, ...]
Please see TranSG.py for more details.
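If you want to train on all datasets and probes in one go, a simple optional driver script (not part of the repo) can loop over the flag combinations listed above:

import subprocess

# Each (dataset, probe) pair follows the valid options documented above
RUNS = [("KS20", "probe"), ("KGBD", "probe"), ("IAS", "A"), ("IAS", "B"),
        ("BIWI", "Walking"), ("BIWI", "Still")]

for dataset, probe in RUNS:
    subprocess.run(["python", "TranSG.py", "--dataset", dataset,
                    "--probe", probe, "--length", "6", "--gpu", "0"], check=True)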
To print evaluation results (Top-1, Top-5, Top-10 Accuracy, mAP) of the best model saved in the default directory (ReID_Models/(Dataset)/(Probe)), run:
python TranSG.py --dataset KS20 --probe probe --mode Eval
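For reference, the sketch below shows how Top-k accuracy and mAP are typically computed for re-ID from probe/gallery features; it is illustrative only, as the repo's Eval mode implements its own evaluation:

import numpy as np

def evaluate(probe_feat, probe_ids, gallery_feat, gallery_ids, ks=(1, 5, 10)):
    # Rank gallery samples by Euclidean distance to each probe
    dist = np.linalg.norm(probe_feat[:, None] - gallery_feat[None], axis=2)
    matches = gallery_ids[np.argsort(dist, axis=1)] == probe_ids[:, None]
    # Top-k: a correct identity appears within the first k ranked matches
    topk = {k: matches[:, :k].any(axis=1).mean() for k in ks}
    # mAP: mean of per-probe average precision over the ranked gallery
    precision = np.cumsum(matches, axis=1) / np.arange(1, matches.shape[1] + 1)
    ap = (precision * matches).sum(axis=1) / np.maximum(matches.sum(axis=1), 1)
    return topk, ap.mean()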
To apply our TranSG to person re-ID under large-scale RGB scenes (CASIA-B), we exploit pose estimation methods to extract 3D skeletons from RGB videos of CASIA-B as follows:
- Step 1: Download CASIA-B Dataset
- Step 2: Extract the 2D human body joints by using OpenPose
- Step 3: Estimate the 3D human body joints by using 3DHumanPose
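After Step 3, the per-frame 3D joint estimates need to be grouped into fixed-length sequences for the model input. A hedged sketch (the joint count and array layout are assumptions, not the repo's exact format):

import numpy as np

def to_sequences(joints, f=40):
    # joints: (T, J, 3) per-frame 3D body joints of one video
    n = len(joints) // f
    # Split into n non-overlapping sequences of f frames each
    return joints[: n * f].reshape(n, f, joints.shape[1], 3)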
We provide already pre-processed skeleton data of CASIA B for single-condition (Nm-Nm, Cl-Cl, Bg-Bg) and cross-condition evaluation (Cl-Nm, Bg-Nm) (f=40/50/60) here (pwd: 07id).
Please download the pre-processed datasets into the directory Datasets/.
To (1) train TranSG to obtain skeleton representations and (2) validate their effectiveness on the person re-ID task on CASIA-B under single-condition and cross-condition settings, please simply run the following command:
python TranSG.py --dataset CASIA_B --probe_type nm.nm --length 40
# --length [40, 50, 60]
# --probe_type ['nm.nm' (for 'Nm' probe and 'Nm' gallery), 'cl.cl', 'bg.bg', 'cl.nm' (for 'Cl' probe and 'Nm' gallery), 'bg.nm']
# --(H, n_heads, L_transfomer, seq_lambda, prompt_lambda, GPC_lambda, lr, etc.) with default settings
# --gpu [0, 1, ...]
Please see TranSG.py for more details.
If you find our work useful for your research, please cite our paper:
@inproceedings{rao2023transg,
title={{TranSG}: Transformer-Based Skeleton Graph Prototype Contrastive Learning With Structure-Trajectory Prompted Reconstruction for Person Re-Identification},
author={Rao, Haocong and Miao, Chunyan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={22118--22128},
year={2023}
}
More awesome skeleton-based models are collected in our Awesome-Skeleton-Based-Models.
TranSG is released under the MIT License. Our models and code must only be used for research purposes.