PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection, CVPR'23
Linfeng Zhang*, Runpei Dong*, Hung-Shuo Tai, and Kaisheng Ma
OpenAccess | arXiv | Logs
This repository contains the implementation of the paper PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection (CVPR 2023).
This codebase was tested with the following environment configurations. It may work with other versions.
- Ubuntu 18.04/20.04
- CUDA 10.2/11.3
- GCC 7.5.0/9.4.0
- Python 3.7.11/3.8.8
- PyTorch 1.9.0/1.10.0
- MMCV v1.4.8
- MMDetection3D v1.0.0rc0+
- MMDetection v2.22.0
- MMSegmentation v0.22.1
Please refer to getting_started.md for installation.
We use the KITTI and nuScenes datasets. Please follow the official instructions to set them up.
Please make sure you have set up the environment. You can then start knowledge distillation by running:

```shell
DEVICE_ID=<gpu_id>
CUDA_VISIBLE_DEVICES=$DEVICE_ID python tools/train.py <student_cfg> --use-kd  # single GPU
bash ./tools/dist_train.sh <student_cfg> 8 --use-kd  # multiple GPUs
```
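To illustrate what the `--use-kd` flag enables conceptually, below is a minimal, generic feature-mimicking distillation loss sketch in PyTorch. Note that this is NOT the exact PointDistiller objective (which distills structured local geometry and uses reweighted learning); the class name `FeatureKDLoss` and all shapes are illustrative assumptions only.

```python
# A minimal, generic teacher->student feature-mimicking KD loss.
# NOTE: an illustrative sketch, not the actual PointDistiller objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureKDLoss(nn.Module):
    """Align student features to teacher features after a 1x1 projection."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # Project student features to the teacher's channel width.
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, feat_s: torch.Tensor, feat_t: torch.Tensor) -> torch.Tensor:
        # feat_s: (N, C_s, H, W) student BEV feature map
        # feat_t: (N, C_t, H, W) teacher BEV feature map
        # Detach the teacher so no gradients flow into it.
        return F.mse_loss(self.proj(feat_s), feat_t.detach())


# Toy usage with random feature maps (shapes are arbitrary).
loss_fn = FeatureKDLoss(student_channels=64, teacher_channels=128)
feat_s = torch.randn(2, 64, 32, 32)
feat_t = torch.randn(2, 128, 32, 32)
loss = loss_fn(feat_s, feat_t)
```

In a distillation training loop, this auxiliary loss would be added (with a weighting factor) to the student's ordinary detection loss.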
PointDistiller is released under the MIT License. See the LICENSE file for more details.
Many thanks to the following codebases, which helped us a lot in building this one:
If you find our work useful in your research, please consider citing:
```bibtex
@inproceedings{pointdistiller23,
  title={PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection},
  author={Linfeng Zhang and Runpei Dong and Hung-Shuo Tai and Kaisheng Ma},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}
```