
ParkingE2E

ParkingE2E: Camera-based End-to-end Parking Network, from Images to Planning

Autonomous parking is a crucial task in the intelligent driving field. Traditional parking algorithms are usually implemented with rule-based schemes, but these become less effective in complex parking scenarios because of the intricate, hand-crafted design they require. Neural-network-based methods, in contrast, tend to be more intuitive and versatile. By collecting a large amount of expert parking trajectory data and emulating human strategy via learning-based methods, the parking task can be addressed effectively. We employ imitation learning to perform end-to-end planning directly from RGB images to a planned path by imitating human driving trajectories. The proposed approach uses a target query encoder to fuse image and target features, and a transformer-based decoder to autoregressively predict future waypoints.
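
As a rough illustration of that description (not the repository's actual implementation), the sketch below fuses a target-pose query token with image features and rolls a transformer decoder out autoregressively over waypoints. All dimensions, layer counts, and the (x, y, yaw) target parameterization are assumptions:

import torch
import torch.nn as nn

class ParkingPlannerSketch(nn.Module):
    # Illustrative only: a target-pose query token is fused with flattened
    # image features via joint self-attention, then a transformer decoder
    # autoregressively predicts 2D waypoints. All sizes are placeholders.
    def __init__(self, d_model=256, n_heads=8, n_layers=4, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.target_proj = nn.Linear(3, d_model)   # target pose (x, y, yaw) -> query token
        self.fuse = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.wp_embed = nn.Linear(2, d_model)      # embed already-predicted waypoints
        self.wp_head = nn.Linear(d_model, 2)       # regress the next (x, y) waypoint

    def forward(self, img_tokens, target_pose):
        # img_tokens: (B, N, d_model) image features; target_pose: (B, 3)
        query = self.target_proj(target_pose).unsqueeze(1)          # (B, 1, d_model)
        memory = self.fuse(torch.cat([query, img_tokens], dim=1))   # target-conditioned features
        wps = [img_tokens.new_zeros(img_tokens.size(0), 1, 2)]      # start token at origin
        for _ in range(self.horizon):                               # autoregressive rollout
            h = self.decoder(self.wp_embed(torch.cat(wps, dim=1)), memory)
            wps.append(self.wp_head(h[:, -1:]))                     # keep newest waypoint
        return torch.cat(wps[1:], dim=1)                            # (B, horizon, 2)

For example, ParkingPlannerSketch()(torch.randn(1, 64, 256), torch.randn(1, 3)) returns a (1, 30, 2) tensor of predicted waypoints.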

Video:

Supplementary video material is available at: [Link].

Related Papers:

  • Changze Li, Ziheng Ji, Zhe Chen, Tong Qin and Ming Yang. "ParkingE2E: Camera-based End-to-end Parking Network, from Images to Planning." 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024. [Link]

  • Yunfan Yang, Denglong Chen, Tong Qin, Xiangru Mu, Chunjing Xu and Ming Yang. "E2E Parking: Autonomous Parking by the End-to-end Neural Network on the CARLA Simulator." 2024 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2024. [Link]

1. Prerequisites

Ubuntu 20.04, CUDA, ROS Noetic and OpenCV 4.

2. Setup

Clone the code:

git clone https://github.com/ChauncyLeee/e2e_parking_imitation.git
cd e2e_parking_imitation/

Create the conda environment:

conda env create -f environment.yaml

Set up the ROS interface:

conda activate ParkingE2E
PARKINGE2E_PYTHON_PATH=`which python`
cd catkin_ws
catkin_make -DPYTHON_EXECUTABLE=${PARKINGE2E_PYTHON_PATH}
source devel/setup.bash

3. Run

Download pretrained model and test data:

First, download the pretrained model [Google Drive or Baidunetdisk] and the test data [Google Drive or Baidunetdisk]. Then set model_ckpt_path in ./config/inference_real.yaml to the downloaded checkpoint.
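
Optionally, a quick sanity check (a sketch assuming a top-level model_ckpt_path key and PyYAML available in the conda environment) can confirm the path resolves before launching:

import os
import yaml

# Load the inference config and verify the checkpoint path resolves.
with open("./config/inference_real.yaml") as f:
    cfg = yaml.safe_load(f)

ckpt = cfg["model_ckpt_path"]
assert os.path.isfile(ckpt), f"Checkpoint not found: {ckpt}"
print(f"Using checkpoint: {ckpt}")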

Run the driver program:

roslaunch core driven_core.launch

The first time the command is executed, progress bars appear while the distortion maps are computed. Once all four progress bars (one per fisheye camera) finish, you can proceed with the subsequent steps.
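
For reference, the precomputation behind those progress bars has this general shape: OpenCV builds per-camera remap tables once, after which each frame can be undistorted cheaply. The intrinsics below are placeholder values, not this repository's calibration:

import cv2
import numpy as np

# Placeholder fisheye intrinsics; real values come from camera calibration.
K = np.array([[320.0, 0.0, 640.0],
              [0.0, 320.0, 480.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.01, -0.002])    # fisheye distortion coefficients
size = (1280, 960)                          # image (width, height)

# Build the remap tables once (the slow step the progress bar tracks).
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, size, cv2.CV_16SC2)

# Per-frame undistortion is then a cheap lookup.
frame = np.zeros((960, 1280, 3), np.uint8)  # stand-in camera frame
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)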

Start inference with the E2E algorithm:

conda activate ParkingE2E
python ros_inference.py

The first time this command is executed, the pretrained EfficientNet backbone will be downloaded.

Run the test demo:

unzip demo_scene.zip
cd demo_scene
# scene_index = 1, 2, 3, 4, 5, 6, 7. For example: sh ./demo.sh 1
sh ./demo.sh ${scene_index}

In RViz, you can also select the parking target with the 2D Nav Goal tool on the RViz panel.
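
The 2D Nav Goal tool publishes a geometry_msgs/PoseStamped (by default on /move_base_simple/goal; check the launch files in case the topic is remapped here). As a sketch, the same goal can be sent programmatically, with placeholder coordinates:

import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("send_parking_goal")
pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1, latch=True)

goal = PoseStamped()
goal.header.frame_id = "map"            # use the fixed frame shown in RViz
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 5.0              # placeholder slot position
goal.pose.position.y = -2.0
goal.pose.orientation.w = 1.0           # identity orientation (yaw = 0)

rospy.sleep(0.5)                        # let the connection establish
pub.publish(goal)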

4. Train

We provide a demo rosbag [Google Drive or Baidunetdisk] for creating a mini dataset and training a model.

Generate the dataset

First, create the dataset from the demo rosbag:

python toolkit/dataset_generation.py --bag_file_path ${DEMO_BAG_PATH} --output_folder_path ./e2e_dataset

If you use your own rosbag, confirm that its topics match those in ./catkin_ws/src/core/config/params.yaml and modify the camera configuration accordingly.
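
A quick way to inspect your own rosbag's topics and message types before aligning them with params.yaml (the bag filename below is a placeholder):

import rosbag

# List every topic in the bag with its message type and count.
with rosbag.Bag("your_recording.bag") as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(f"{topic}: {meta.msg_type} ({meta.message_count} msgs)")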

Train your model

python train.py --config ./config/training_real.yaml

You can modify the training configuration in ./config/training_real.yaml.
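
If you prefer not to edit the file in place, one option is to load it, override fields, and save a variant; the keys shown are hypothetical, so substitute the names actually present in training_real.yaml:

import yaml

with open("./config/training_real.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["batch_size"] = 16      # hypothetical key: smaller batch for a small GPU
cfg["epochs"] = 10          # hypothetical key: short run on the mini dataset

with open("./config/training_real_mini.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
# then: python train.py --config ./config/training_real_mini.yaml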

5. License

The source code is released under the GPLv3 license.
