[ALGORITHM]
We implement PointPillars with the shape-aware grouping heads used in SSN and provide the results and checkpoints on the nuScenes and Lyft datasets.
```latex
@inproceedings{zhu2020ssn,
  title={SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds},
  author={Zhu, Xinge and Ma, Yuexin and Wang, Tai and Xu, Yan and Shi, Jianping and Lin, Dahua},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2020}
}
```
nuScenes results:

Backbone | Lr schd | Mem (GB) | Inf time (fps) | mAP | NDS | Download
---|---|---|---|---|---|---
SECFPN | 2x | 16.4 | | 35.17 | 49.76 | model \| log
SSN | 2x | 9.62 | | 41.56 | 54.83 | model \| log
RegNetX-400MF-SECFPN | 2x | 16.4 | | 41.15 | 55.20 | model \| log
RegNetX-400MF-SSN | 2x | 10.26 | | 46.95 | 58.24 | model \| log
Lyft results:

Backbone | Lr schd | Mem (GB) | Inf time (fps) | Private Score | Public Score | Download
---|---|---|---|---|---|---
SECFPN | 2x | 13.4 | | 13.4 | | |
SSN | 2x | 8.30 | | 17.4 | 17.5 | model \| log
RegNetX-400MF-SSN | 2x | 9.98 | | 18.1 | 18.3 | model \| log
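
The checkpoints above can be loaded with the high-level MMDetection3D inference API. The snippet below is only a minimal sketch, assuming an SSN nuScenes config under `configs/ssn/` and a locally downloaded checkpoint; the exact config filename, the checkpoint path, the point cloud path, and the return type of `inference_detector` are assumptions that may differ between MMDetection3D versions.

```python
# Minimal inference sketch (assumed paths; adjust to your local setup and to
# the exact config filenames under configs/ssn/).
from mmdet3d.apis import inference_detector, init_model

config_file = 'configs/ssn/hv_ssn_secfpn_sbn-all_2x16_2x_nus-3d.py'  # assumed SSN nuScenes config name
checkpoint_file = 'checkpoints/ssn_secfpn_nus.pth'  # placeholder for a checkpoint downloaded from the table above

# Build the model from the config and load the trained weights.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run detection on a single LiDAR point cloud stored as a .bin file (placeholder path).
# Depending on the MMDetection3D version this returns the predictions directly
# or a (result, data) tuple.
result = inference_detector(model, 'demo/data/nuscenes/sample.bin')
```
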
Note:
- The main difference between the shape-aware grouping heads and the original SECOND FPN heads is that the former group objects with similar sizes and shapes together and design a shape-specific head for each group. Heavier heads (with more convolutions and larger strides) are used for large objects, while lighter heads are used for small objects. Note that the outputs may therefore have different feature map sizes, so an anchor generator tailored to these feature maps is also needed in the implementation (see the sketch after these notes).
- Users could try other settings for the head design. We basically refer to the implementation HERE.
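
Below is a minimal PyTorch sketch of the grouping idea described in the first note. It is not the actual MMDetection3D head: the class name `ShapeAwareGroupingHead`, the group configuration, and the channel sizes are illustrative assumptions. Each shape group gets its own convolutional branch; heavier branches use more convolutions and a larger stride, so the branches emit feature maps of different sizes and each group needs its own anchor-generator settings.

```python
# Sketch of a shape-aware grouping head: one detection branch per shape group.
# Class/parameter names and the group configuration are illustrative, not the
# actual MMDetection3D implementation.
import torch
from torch import nn


def conv_block(in_channels, out_channels, num_convs, stride):
    """Stack of 3x3 conv-BN-ReLU layers; only the first conv applies the stride."""
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_channels if i == 0 else out_channels, out_channels,
                      kernel_size=3, stride=stride if i == 0 else 1, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)


class ShapeAwareGroupingHead(nn.Module):
    """One branch per shape group (e.g. pedestrian-like, car-like, bus-like classes)."""

    def __init__(self, in_channels=384, feat_channels=64,
                 group_cfgs=(dict(num_convs=1, stride=1, num_anchors=2, num_classes=2),   # small objects
                             dict(num_convs=2, stride=1, num_anchors=2, num_classes=4),   # medium objects
                             dict(num_convs=3, stride=2, num_anchors=2, num_classes=2)),  # large objects
                 box_code_size=7):
        super().__init__()
        self.branches = nn.ModuleList()
        self.cls_heads = nn.ModuleList()
        self.reg_heads = nn.ModuleList()
        for cfg in group_cfgs:
            self.branches.append(
                conv_block(in_channels, feat_channels, cfg['num_convs'], cfg['stride']))
            self.cls_heads.append(
                nn.Conv2d(feat_channels, cfg['num_anchors'] * cfg['num_classes'], 1))
            self.reg_heads.append(
                nn.Conv2d(feat_channels, cfg['num_anchors'] * box_code_size, 1))

    def forward(self, x):
        # One (cls_score, bbox_pred) pair per group; spatial sizes can differ
        # between groups because of the per-group strides, which is why each
        # group needs its own anchor generator.
        outs = []
        for branch, cls_head, reg_head in zip(self.branches, self.cls_heads, self.reg_heads):
            feat = branch(x)
            outs.append((cls_head(feat), reg_head(feat)))
        return outs


if __name__ == '__main__':
    head = ShapeAwareGroupingHead()
    bev_feat = torch.randn(1, 384, 128, 128)  # BEV feature map from the backbone/neck
    for i, (cls_score, bbox_pred) in enumerate(head(bev_feat)):
        print(i, cls_score.shape, bbox_pred.shape)  # large-object group is 64x64, others 128x128
```

The point of the grouping is that each branch can specialize its receptive field and anchor set to one size regime instead of a single shared head serving every class.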