# Programming a Real Self-Driving Car

Team members:

- Yongzhi Liu (team leader)
- Yan Zhang
- Yanyan Peng
- Rajiv Sreedhar
- Pradhap Moorthi
In this project, we used ROS nodes to implement the core functionality of the autonomous vehicle system, including traffic light detection, drive-by-wire control, and waypoint following. The following is a system architecture diagram showing the ROS nodes and topics used in the project.
In this project, perception comes mainly from camera images used for traffic light detection. The car drives on a highway in the simulator or on a test site with no obstacles, so obstacle detection is not considered.
- Input:
  - `/image_color`: color images from the camera
  - `/current_pose`: the vehicle's current position
  - `/base_waypoints`: a complete list of reference waypoints the car will follow
- Output:
  - `/traffic_waypoint`: the waypoint index at which the car should stop for a red traffic light
To reduce latency, we run the traffic light classifier only when the car is within 100 waypoints of the closest traffic light, as sketched below.
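A minimal sketch of this gating logic, with hypothetical names (`car_wp_idx`, `light_wp_idx`) for the nearest-waypoint indices of the car and the light; the project's actual node may structure this differently:

```python
LOOKAHEAD_WPS = 100  # classify only when a light is within 100 waypoints

def should_classify(car_wp_idx, light_wp_idx):
    # Skip the expensive camera-image classification unless the next
    # traffic light is at most LOOKAHEAD_WPS waypoints ahead of the car.
    return 0 <= light_wp_idx - car_wp_idx <= LOOKAHEAD_WPS
```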
The traffic light classification task is more involved than the deep learning and transfer learning exercises we did in the course, since Udacity does not provide the camera images for the simulator or the site test in advance. We chose to train our classification model on top of existing object detection models, such as MobileNet, that have already been tested and used successfully. Out of the box, these models can detect the traffic light box but cannot tell its color.
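A minimal sketch of querying such a detector, assuming a model exported with the TensorFlow Object Detection API under TF 1.x; the graph path is a placeholder and the tensor names are the API's standard ones, so this is an illustration rather than our exact node:

```python
import numpy as np
import tensorflow as tf  # TF 1.x, as used by the project

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    # 'frozen_inference_graph.pb' is the API's conventional export name.
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)

def detect(image):
    """image: HxWx3 uint8 numpy array; returns boxes, scores, classes."""
    boxes, scores, classes = sess.run(
        [detection_graph.get_tensor_by_name('detection_boxes:0'),
         detection_graph.get_tensor_by_name('detection_scores:0'),
         detection_graph.get_tensor_by_name('detection_classes:0')],
        feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'):
                   np.expand_dims(image, 0)})
    return boxes[0], scores[0], classes[0]
```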
We learned a lot about preparing data and training models from the TensorFlow Object Detection API, and we were inspired by the work of past Udacity students (Jose and Marco).
The images from the simulator and from the test site are very different in shape and position, so our traffic light classification uses one model for the simulator and another for the test site. Training a model involves five steps:
1. Collecting images:
   - Simulator: use the state of `/vehicle/traffic_lights` in the simulator to get ground-truth images.
   - Site: the images come from the rosbag provided by Udacity.
2. Labelling images:
   - We used LabelImg to label the collected images.
3. Creating the TFRecord file:
   - To use our own dataset with the TensorFlow Object Detection API, we must convert it into the TFRecord file format (see the sketch after this list).
4. Training the model.
5. Exporting the model:
   - Models have to be exported for TensorFlow v1.4 to work with the capstone project environment and Carla.
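A minimal sketch of step 3, using the TF 1.x Object Detection API's `dataset_util` helpers; the box dictionary layout, file paths, and label ids here are illustrative assumptions, not our exact conversion script:

```python
import tensorflow as tf
from object_detection.utils import dataset_util

def create_tf_example(image_path, width, height, boxes, classes_text, classes):
    """One labelled image -> one tf.train.Example in the API's schema.
    boxes: list of dicts with pixel keys xmin/xmax/ymin/ymax (assumed layout).
    classes_text / classes: label names (bytes) and label-map ids."""
    with tf.gfile.GFile(image_path, 'rb') as fid:
        encoded_jpg = fid.read()
    # The API expects box coordinates normalized to [0, 1].
    xmins = [b['xmin'] / float(width) for b in boxes]
    xmaxs = [b['xmax'] / float(width) for b in boxes]
    ymins = [b['ymin'] / float(height) for b in boxes]
    ymaxs = [b['ymax'] / float(height) for b in boxes]
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(image_path.encode('utf8')),
        'image/source_id': dataset_util.bytes_feature(image_path.encode('utf8')),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(b'jpeg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
```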
Path planning for this project reduces to producing a trajectory that obeys the traffic lights. The resulting waypoints are the green points ahead of the car shown in the snapshot below.
A package that loads the static waypoint data and publishes it to `/base_waypoints`.
The purpose of this node is to update the target velocity of each waypoint based on the traffic light data. The detected traffic light determines whether the car needs to stop for a red light or continue driving, and the corresponding velocities are calculated for each waypoint ahead of the car (see the sketch after the topic list below).
- Input:
  - `/base_waypoints`
  - `/current_pose`
  - `/traffic_waypoint`
- Output:
  - `/final_waypoints`: a list of waypoints ahead of the car with target velocities
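A minimal sketch of the deceleration calculation, assuming plain floats and a hypothetical comfort limit `MAX_DECEL`; the actual node works on ROS waypoint messages rather than scalars:

```python
import math

MAX_DECEL = 0.5  # m/s^2, an assumed comfortable deceleration limit

def decelerate(target_speed, dist_to_stop_line):
    # Cap the waypoint's target velocity so the car can come to rest at
    # the stop line: v = sqrt(2 * a * d), clipped to the normal target.
    v = math.sqrt(2.0 * MAX_DECEL * max(dist_to_stop_line, 0.0))
    if v < 1.0:  # snap very small speeds to a full stop
        v = 0.0
    return min(v, target_speed)
```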
Carla is equipped with a drive-by-wire (DBW) system, meaning the throttle, brake, and steering are electronically controlled. This package contains the files responsible for control of the vehicle: the node `dbw_node.py` and the file `twist_controller.py`, along with a PID controller and a low-pass filter.
This node should only publish control commands when `dbw_enabled` is true, to avoid accumulating error while the car is in manual mode. The DBW node uses a PID controller for the throttle (a sketch follows the topic list below).
- Input:
  - `/current_velocity`
  - `/twist_cmd`: target linear and angular velocities
  - `/vehicle/dbw_enabled`: indicates whether the car is under DBW or manual driver control
- Output:
  - `/vehicle/throttle_cmd`
  - `/vehicle/brake_cmd`
  - `/vehicle/steering_cmd`
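A minimal PID sketch along these lines, with illustrative gains rather than the values tuned for Carla; the `reset()` method is what prevents integral error from accumulating while DBW is disabled:

```python
class PID(object):
    """Minimal PID controller sketch; gains and output limits are
    illustrative assumptions, not the tuned values used on Carla."""

    def __init__(self, kp, ki, kd, mn=float('-inf'), mx=float('inf')):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0
        self.last_error = 0.0

    def reset(self):
        # Called whenever dbw_enabled goes false, so integral error does
        # not accumulate while a safety driver controls the car.
        self.int_val = 0.0
        self.last_error = 0.0

    def step(self, error, dt):
        # dt is the elapsed time in seconds since the previous step.
        integral = self.int_val + error * dt
        derivative = (error - self.last_error) / dt if dt > 0 else 0.0
        val = self.kp * error + self.ki * integral + self.kd * derivative
        val = max(self.min, min(val, self.max))
        if self.min < val < self.max:
            # Simple anti-windup: keep the integral only when not saturated.
            self.int_val = integral
        self.last_error = error
        return val
```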
A package containing code from Autoware that subscribes to `/final_waypoints` and publishes target vehicle linear and angular velocities, in the form of twist commands, to the `/twist_cmd` topic.
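For illustration only (this is not the Autoware source), a twist command on `/twist_cmd` is a `geometry_msgs/TwistStamped` message and can be published with rospy like so:

```python
import rospy
from geometry_msgs.msg import TwistStamped

rospy.init_node('twist_sketch')
pub = rospy.Publisher('/twist_cmd', TwistStamped, queue_size=1)

def publish_twist(linear_velocity, angular_velocity):
    cmd = TwistStamped()
    cmd.header.stamp = rospy.Time.now()
    cmd.twist.linear.x = linear_velocity    # m/s along the vehicle heading
    cmd.twist.angular.z = angular_velocity  # yaw rate in rad/s
    pub.publish(cmd)
```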
Here is a video capture of our solution running in the simulator.
Video clips for testing on Carla will be available soon.
Please use one of the two installation options: native or Docker.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a virtual machine to install Ubuntu, use the following configuration as a minimum:
  - 2 CPUs
  - 2 GB system memory
  - 25 GB of free hard drive space

  The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.

- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04
  - ROS Indigo if you have Ubuntu 14.04
- Install Dataspeed DBW. Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary).
- Download the Udacity Simulator.
Build the Docker container:

```bash
docker build . -t capstone
```

Run the Docker container:

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson) or instructions from term 2.
- Clone the project repository:

  ```bash
  git clone https://github.com/udacity/CarND-Capstone.git
  ```

- Install Python dependencies:

  ```bash
  cd CarND-Capstone
  pip install -r requirements.txt
  ```

- Make and run styx:

  ```bash
  cd ros
  catkin_make
  source devel/setup.sh
  roslaunch launch/styx.launch
  ```

- Run the simulator.
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file:

  ```bash
  unzip traffic_light_bag_file.zip
  ```

- Play the bag file:

  ```bash
  rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  ```

- Launch your project in site mode:

  ```bash
  cd CarND-Capstone/ros
  roslaunch launch/site.launch
  ```

- Confirm that traffic light detection works on real-life images.
Outside of `requirements.txt`, here is information on the other driver/library versions used by the simulator grader and Carla:
|               | Simulator | Carla   |
| ------------- | --------- | ------- |
| Nvidia driver | 384.130   | 384.130 |
| CUDA          | 8.0.61    | 8.0.61  |
| cuDNN         | 6.0.21    | 6.0.21  |
| TensorRT      | N/A       | N/A     |
| OpenCV        | 3.2.0-dev | 2.4.8   |
| OpenMP        | N/A       | N/A     |