This repository implements a perception system for autonomous driving. In particular, it covers object detection, tracking, sensor fusion, and trajectory prediction. We use YOLOv5 and PointPillars for object detection on the camera and LiDAR sensors, respectively. The overall pipeline is as follows.
The prediction output, visualized in ROS Rviz, is shown in the videos above.
- Tested on Ubuntu 20.04 (ROS Noetic) with an NVIDIA GeForce RTX 3070
- Other required libraries are listed in `requirements.txt`
Clone this repository and change into its directory.
cd path_to_your_ws
git clone https://github.com/s-duuu/pred_fusion.git
cd pred_fusion
Install the modules listed in `requirements.txt`.
pip install -r requirements.txt
Clone the official repository of PointPillars.
git clone https://github.com/zhulf0804/PointPillars.git
Clone the official repository of OpenPCDet.
git clone https://github.com/open-mmlab/OpenPCDet.git
Clone the official repository of CRAT-Pred.
git clone https://github.com/schmidt-ju/crat-pred.git
Build the package in your workspace.
cd path_to_your_ws
catkin_make  # or: catkin build
source ./devel/setup.bash
Execute the launch file, which starts all ROS nodes required by the system.
roslaunch fusion_prediction integrated.launch
You can test the system with a ROS bag file. Download the file and play it in another terminal; Rviz will display the result of the system.
cd path_to_bagfile
rosbag play test.bag
We trained a YOLOv5s model, located at `pred_fusion/fusion_prediction/yolo.pt`. Since the model was trained on image data extracted from the CarMaker simulator, you should replace it if you need a YOLOv5 model for real vehicles. You can train a new model by following the official yolov5 GitHub repository.
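To sanity-check the bundled weights outside ROS, you can load them with the official YOLOv5 `torch.hub` interface. A minimal sketch (the image path `sample.jpg` is a placeholder):

```python
# Minimal sketch: load the bundled CarMaker-trained weights with the
# official YOLOv5 hub API and run them on a single test image.
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='pred_fusion/fusion_prediction/yolo.pt')
results = model('sample.jpg')  # 'sample.jpg' is a placeholder image path
results.print()                # prints classes, confidences, and boxes
```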
We also trained a PointPillars model, located at `pred_fusion/fusion_prediction/pillars.pth`. This model was trained on the KITTI dataset, so you do not need to replace it.
Sensor fusion is based on a late fusion approach using bounding box projection. Each 3D bounding box predicted by the PointPillars model is projected onto the image plane; the algorithm then decides whether a projected LiDAR box and a camera box belong to the same object based on their IoU, as sketched below.
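The following is a minimal sketch of that projection-and-matching step, not the repository's actual node code. The intrinsic matrix `K` and the LiDAR-to-camera extrinsic transform `T` are placeholders for the real sensor calibration.

```python
# Minimal sketch of projection-based late fusion, assuming a pinhole
# camera model. K (3x3 intrinsics) and T (4x4 LiDAR-to-camera extrinsics)
# stand in for the real calibration of the sensor setup.
import numpy as np

def project_box(corners_3d, K, T):
    """Project the 8 corners of a 3D box (LiDAR frame) to a 2D image box."""
    pts = np.hstack([corners_3d, np.ones((8, 1))])  # homogeneous coordinates
    cam = (T @ pts.T)[:3]                           # transform into camera frame
    uv = (K @ cam) / cam[2]                         # perspective divide
    return uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max()

def iou(a, b):
    """IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(0.0, iw) * max(0.0, ih)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union

# Two detections are treated as the same object when the IoU between the
# projected LiDAR box and the camera box exceeds a threshold, e.g.:
# if iou(project_box(lidar_corners, K, T), camera_box) > 0.5: fuse them
```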
Object tracking is based on SORT (Simple Online and Realtime Tracking). The algorithm tracks each BEV (Bird's Eye View) bounding box: state estimation uses a Kalman filter, matching uses IoU, and assignment uses the Hungarian algorithm.
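As an illustration of the association step only (not the repository's actual tracker code), here is a SORT-style sketch: an IoU matrix between the Kalman-predicted track boxes and the new detections is solved with the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`.

```python
# Minimal SORT-style association sketch: Hungarian assignment over an
# IoU cost matrix between predicted track boxes and new detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(0.0, iw) * max(0.0, ih)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to detections; returns (track, det) index pairs."""
    cost = np.zeros((len(tracks), len(detections)))
    for t, trk in enumerate(tracks):
        for d, det in enumerate(detections):
            cost[t, d] = -iou(trk, det)  # negate: the solver minimizes cost
    rows, cols = linear_sum_assignment(cost)
    # Reject matches whose overlap is too small to be the same object
    return [(t, d) for t, d in zip(rows, cols) if -cost[t, d] >= iou_threshold]
```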
Trajectories are predicted by the CRAT-Pred model, located at `pred_fusion/fusion_prediction/crat.ckpt`. This model was trained on the Argoverse dataset, so you do not need to replace it.
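As a rough sketch of loading this checkpoint outside ROS: CRAT-Pred is built on PyTorch Lightning, so its `load_from_checkpoint` classmethod applies, but the import path and class name below are assumptions; check the cloned crat-pred repository for the actual module layout.

```python
# Sketch only: the module path and class name are assumptions, not the
# verified layout of the crat-pred repository.
from model.crat_pred import CratPred  # hypothetical import path

model = CratPred.load_from_checkpoint('pred_fusion/fusion_prediction/crat.ckpt')
model.eval()  # inference mode: disables dropout and gradient-dependent behavior
```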
Kim SeongJu, School of Mechanical Engineering, Sungkyunkwan University, South Korea
e-mail: [email protected]