The code was tested on Ubuntu 16.04 with Anaconda Python 3.6 and PyTorch v0.4.1. NVIDIA GPUs are needed for both training and testing. After installing Anaconda:
- [Optional but recommended] Create a new conda environment:

  ```
  conda create --name CenterNet python=3.6
  ```

  And activate the environment:

  ```
  conda activate CenterNet
  ```
- Install PyTorch 0.4.1:

  ```
  conda install pytorch=0.4.1 torchvision -c pytorch
  ```
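  To quickly confirm the install worked, a minimal check (a sketch by this guide, not part of the repo):

  ```python
  # Sanity check: print the PyTorch version and whether CUDA sees a GPU.
  import torch

  print(torch.__version__)          # expected: 0.4.1
  print(torch.cuda.is_available())  # expected: True on a working GPU setup
  ```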
  And disable cudnn batch normalization (due to this issue):

  ```
  # PYTORCH=/path/to/pytorch # usually ~/anaconda3/envs/CenterNet/lib/python3.6/site-packages/
  # for pytorch v0.4.0
  sed -i "1194s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
  # for pytorch v0.4.1
  sed -i "1254s/torch\.backends\.cudnn\.enabled/False/g" ${PYTORCH}/torch/nn/functional.py
  ```

  For other PyTorch versions, you can manually open `torch/nn/functional.py`, find the line with `torch.batch_norm`, and replace `torch.backends.cudnn.enabled` with `False`. We observed slightly worse training results without doing so.
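  To verify the patch took effect, the following sketch (our addition, not part of the repo) inspects the live function:

  ```python
  # Check whether batch_norm still references torch.backends.cudnn.enabled;
  # after the sed patch above, the string should be gone from its source.
  import inspect
  import torch.nn.functional as F

  src = inspect.getsource(F.batch_norm)
  print("patched" if "torch.backends.cudnn.enabled" not in src else "NOT patched")
  ```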
- Install COCOAPI:

  ```
  # COCOAPI=/path/to/clone/cocoapi
  git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
  cd $COCOAPI/PythonAPI
  make
  python setup.py install --user
  ```
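  A minimal import check (our addition) to confirm pycocotools and its compiled extension installed correctly:

  ```python
  # Importing COCO and the mask module exercises the compiled _mask extension.
  from pycocotools.coco import COCO
  from pycocotools import mask as mask_utils

  print("pycocotools OK")
  ```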
- Clone this repo:

  ```
  CenterNet_ROOT=/path/to/clone/CenterNet
  git clone https://github.com/xingyizhou/CenterNet $CenterNet_ROOT
  ```
- Install the requirements:

  ```
  pip install -r requirements.txt
  ```
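  A quick import check (a sketch; the authoritative package list is requirements.txt, and the module names below are assumptions based on it):

  ```python
  # Import the main dependencies to catch missing or broken installs early.
  import cv2        # opencv-python
  import numba
  import matplotlib
  import easydict
  import scipy

  print("requirements OK")
  ```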
- Compile the deformable convolution layers (from DCNv2):

  ```
  cd $CenterNet_ROOT/src/lib/models/networks/DCNv2
  ./make.sh
  ```
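  A smoke test for the compiled op (a sketch; the module path and the DCN signature below mirror how CenterNet's backbones call it, but treat them as assumptions). Run it from the DCNv2 directory on a GPU machine:

  ```python
  # Build one deformable conv layer and push a random tensor through it.
  import torch
  from dcn_v2 import DCN

  layer = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1,
              dilation=1, deformable_groups=1).cuda()
  x = torch.randn(2, 64, 32, 32).cuda()
  print(layer(x).shape)  # expected: torch.Size([2, 64, 32, 32])
  ```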
- [Optional] Compile NMS if you want to use multi-scale testing or test ExtremeNet:

  ```
  cd $CenterNet_ROOT/src/lib/external
  make
  ```
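  The build produces a soft-NMS extension used by multi-scale testing. A usage sketch (the signature follows the common soft-NMS Cython implementation; treat the exact keywords as assumptions), run from $CenterNet_ROOT:

  ```python
  # Apply soft-NMS to two overlapping detections [x1, y1, x2, y2, score].
  import sys
  sys.path.insert(0, "src/lib")

  import numpy as np
  from external.nms import soft_nms

  dets = np.array([[0, 0, 10, 10, 0.9],
                   [1, 1, 11, 11, 0.8]], dtype=np.float32)
  keep = soft_nms(dets, Nt=0.5, method=2)  # decays scores in place
  print(dets[:, 4])
  ```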
- Download pretrained models for detection or pose estimation and move them to `$CenterNet_ROOT/models/`. More models can be found in the Model zoo.
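  To confirm a download is intact, a small sketch (the filename is one example Model zoo entry, and the checkpoint keys are assumptions about CenterNet's save format):

  ```python
  # Load a checkpoint on CPU and report its contents.
  import torch

  ckpt = torch.load("models/ctdet_coco_dla_2x.pth", map_location="cpu")
  print(ckpt.keys())  # typically includes 'epoch' and 'state_dict'
  ```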