Programming a Real Self-Driving Car

The capstone project of Self-Driving Car Engineer Nanodegree


Team Doudoufei

  • Yongzhi Liu (Team leader)
  • Yan Zhang
  • Yanyan Peng
  • Rajiv Sreedhar
  • Pradhap Moorthi


Overall Structure

In this project, we used ROS nodes to implement the core functionality of the autonomous vehicle system, including traffic light detection, drive-by-wire control, and waypoint following. The following is a system architecture diagram showing the ROS nodes and topics used in the project.

System architecture Diagram

Perception

In this project, perception comes mainly from camera images used for traffic light detection. The car drives on a highway (in the simulator) or on the test site, neither of which contains obstacles, so no obstacle detection is implemented.

Traffic Light Detection

  • Input:
    • /image_color: colored images from camera,
    • /current_pose: the vehicle's current position,
    • /base_waypoints: a complete list of reference waypoints the car will be following,
  • Output:
    • /traffic_waypoint: the locations to stop for red traffic lights

To reduce latency, we run the traffic light classifier only when the car is within 100 waypoints of the closest traffic light.
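The gating logic described above can be sketched as follows. This is a simplified illustration of the idea, not the project's actual code; the function names are ours, and the modulo handles the circular track:

```python
import math

def closest_waypoint_index(waypoints, x, y):
    """Index of the waypoint nearest to (x, y); waypoints are (x, y) tuples."""
    return min(range(len(waypoints)),
               key=lambda i: math.hypot(waypoints[i][0] - x, waypoints[i][1] - y))

def should_classify(car_idx, light_idx, num_waypoints, horizon=100):
    """Run the classifier only when the light is at most `horizon` waypoints
    ahead of the car (indices wrap around the circular track)."""
    return (light_idx - car_idx) % num_waypoints <= horizon
```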

Traffic Light Classification

Traffic light classification is more involved than the deep learning and transfer learning exercises from the course, because camera images for the simulator and the site test were not provided by Udacity in advance. We chose to build our classifier on existing object detection models, such as MobileNet, that have already been tested and used successfully. These models can detect the traffic light box but cannot tell its color.

We learned a great deal about data preparation and model training from the TensorFlow Object Detection API, and drew inspiration from past Udacity students (Jose and Marco):

The images from the simulator and the test site differ greatly in shape and position, so our classifier uses one model for the simulator and a separate model for the test site. Training each model involved five steps:

  • Collecting images:

    • simulator: use the state of /vehicle/traffic_lights in the simulator to obtain ground-truth labels for the captured images.
    • site: the images are extracted from a rosbag provided by Udacity.
  • Labelling images:

    • We used LabelImg to label the collected images.
  • Creating TFRecord files:

    • To use our own dataset with the TensorFlow Object Detection API, we must convert it into the TFRecord file format.
  • Training the model:

    • Our models were trained on both AWS and GCP to speed up the overall process.
    • For more detailed steps, please refer to the following repos:
  • Exporting the model:

    • Models must be exported for TensorFlow v1.4 to work with the capstone project environment and Carla.
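As a concrete reference for the labelling and TFRecord steps, the TensorFlow Object Detection API consumes a label map in protobuf text format. A hypothetical four-class map for traffic light states might look like this (the actual class names and ids depend on how the images were labelled):

```
item {
  id: 1
  name: 'Red'
}
item {
  id: 2
  name: 'Yellow'
}
item {
  id: 3
  name: 'Green'
}
item {
  id: 4
  name: 'Off'
}
```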

Planning

Path planning in this project simply produces a trajectory that obeys the traffic lights. The resulting waypoints are the green points ahead of the car in the snapshot below.

final waypoints

Waypoint Loader

A package which loads the static waypoint data and publishes to /base_waypoints.

Waypoint Updater

The purpose of this node is to update the target velocity of each waypoint based on traffic light data. The detected traffic light state determines whether the car needs to stop for a red light or continue driving, and the corresponding target velocities are calculated for each waypoint ahead of the car.

  • Input:
    • /base_waypoints
    • /current_pose
    • /traffic_waypoint
  • Output:
    • /final_waypoints: a list of waypoints ahead of the car with target velocities.
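The velocity shaping for a red light can be sketched with the kinematic relation v = sqrt(2·a·d). This is a simplified illustration only: the unit waypoint spacing and the 0.5 m/s² deceleration limit are our assumptions, not the project's tuned values.

```python
import math

MAX_DECEL = 0.5  # m/s^2, an assumed comfort limit

def decelerate_to_stop(target_speeds, stop_idx):
    """Cap each waypoint's target velocity so the car comes to rest at
    stop_idx. Distance to the stop line is approximated by waypoint count,
    assuming roughly 1 m spacing; speeds below 1 m/s snap to zero so the
    car actually stops instead of creeping."""
    out = []
    for i, v in enumerate(target_speeds):
        dist = max(stop_idx - i, 0)              # waypoints to the stop line
        v_stop = math.sqrt(2 * MAX_DECEL * dist)  # max speed that still stops in time
        out.append(min(v, v_stop if v_stop > 1.0 else 0.0))
    return out
```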

Control

Carla is equipped with a drive-by-wire (DBW) system, meaning the throttle, brake, and steering have electronic control. This package contains the files that are responsible for control of the vehicle: the node dbw_node.py and the file twist_controller.py, along with a PID controller and lowpass filter.

DBW

This node publishes control commands only when dbw_enabled is true, to avoid error accumulation while the car is under manual control. The DBW node uses a PID controller to control the throttle.

  • Input:
    • /current_velocity
    • /twist_cmd: target linear and angular velocities.
    • /vehicle/dbw_enabled: indicates if the car is under dbw or manual driver control.
  • Output:
    • /vehicle/throttle_cmd
    • /vehicle/brake_cmd
    • /vehicle/steering_cmd
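A minimal sketch of the throttle PID with the integral reset mentioned above. The structure mirrors the description, but the gains and clamping range here are illustrative assumptions, not the tuned values used on Carla:

```python
class PID:
    """PID controller with output clamping and an explicit reset, so the
    integral term can be cleared whenever DBW is disengaged."""

    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mn, self.mx = mn, mx
        self.reset()

    def reset(self):
        """Called when /vehicle/dbw_enabled goes false, preventing integral
        wind-up while a safety driver has control."""
        self.int_val = 0.0
        self.last_error = 0.0

    def step(self, error, dt):
        """One control step: error is target minus current velocity."""
        self.int_val += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        return max(self.mn, min(self.mx, val))
```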

Waypoint Follower

A package containing code from Autoware which subscribes to /final_waypoints and publishes target vehicle linear and angular velocities in the form of twist commands to the /twist_cmd topic.
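Conceptually, the pure-pursuit follower in Autoware turns the lookahead waypoint into a twist command by fitting an arc through it. The following is our own simplified sketch of that geometry, not the Autoware source:

```python
def pure_pursuit_twist(v, dx, dy):
    """Given the lookahead waypoint at (dx, dy) in the vehicle frame and a
    target linear speed v, return (linear, angular) velocity for a twist
    command. The arc through the point has curvature
    kappa = 2*dy / (dx^2 + dy^2), and angular velocity = v * kappa."""
    ld2 = dx * dx + dy * dy
    kappa = 0.0 if ld2 == 0 else 2.0 * dy / ld2
    return v, v * kappa
```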

Demo

Here is a video capture of our solution running in the simulator.

Watch the video

Video clips for testing on Carla will be available soon.

Environment

Please use one of the two installation options, either native or docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPUs
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

```bash
docker build . -t capstone
```

Run the docker file

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```

Port Forwarding

To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson) or instructions from term 2.

Usage

  1. Clone the project repository

```bash
git clone https://github.com/udacity/CarND-Capstone.git
```

  2. Install python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```

  3. Make and run styx

```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```

  4. Run the simulator

Real world testing

  1. Download training bag that was recorded on the Udacity self-driving car.

  2. Unzip the file

```bash
unzip traffic_light_bag_file.zip
```

  3. Play the bag file

```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```

  4. Launch your project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```

  5. Confirm that traffic light detection works on real-life images

Other library/driver information

Outside of requirements.txt, the simulator grader and Carla use the following driver/library versions:

|               | Simulator | Carla   |
| ------------- | --------- | ------- |
| Nvidia driver | 384.130   | 384.130 |
| CUDA          | 8.0.61    | 8.0.61  |
| cuDNN         | 6.0.21    | 6.0.21  |
| TensorRT      | N/A       | N/A     |
| OpenCV        | 3.2.0-dev | 2.4.8   |
| OpenMP        | N/A       | N/A     |