This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
The main goal of this project was to program a car to drive through a controlled course that included traffic lights. The car had to drive around the track while stopping at the appropriate lights. We were able to test the code in a simulator, as seen below.
The members of the team:
Name | GitHub account | Udacity Email |
---|---|---|
Mohamed Elhayany (Team Lead) | Melhaya | [email protected] |
Dixon Liang | dixonliang | [email protected] |
Shahariar Rabby | ShahariarRabby | [email protected] |
Carla's system can be broken down into three main parts:
- The perception subsystem detects traffic lights and obstacles.
- The planning subsystem (the waypoint updater node) updates the waypoints and the associated target velocities.
- The control subsystem actuates the throttle, steering, and brake to navigate the waypoints with the target velocity.
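At a high level, the data flows through the ROS graph roughly like this (topic and node names as used in the project; the waypoint follower is a provided node that sits between the updater and the controller):

```
/image_color, /current_pose, /base_waypoints      -> tl_detector       -> /traffic_waypoint
/base_waypoints, /current_pose, /traffic_waypoint -> waypoint_updater  -> /final_waypoints
/final_waypoints                                  -> waypoint_follower -> /twist_cmd
/twist_cmd, /current_velocity                     -> dbw_node          -> steering, throttle, brake commands
```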
There were three sections to the code that we implemented: the waypoint updater node, the drive-by-wire (DBW) node, and the traffic light detection node.
The primary purpose of the waypoint updater node is to provide the waypoints ahead of the car, each with the correct target velocity. We began by subscribing to the following topics: /base_waypoints and /current_pose.
/base_waypoints provides all of the waypoints on the track. /current_pose provides the current localization of the vehicle in the simulator. The node publishes /final_waypoints, a subset of /base_waypoints whose target velocities take into account the output of the traffic light classifier (see more below). Through these waypoints, the car knows when to anticipate slowing down. Below is the part of the code that finds the closest waypoint ahead of the car:
```python
def get_closest_waypoint_idx(self):
    x = self.pose.pose.position.x
    y = self.pose.pose.position.y
    closest_idx = self.waypoints_tree.query([x, y], 1)[1]

    # Check if the closest waypoint is ahead of or behind the vehicle
    closest_coord = self.waypoints_2d[closest_idx]
    prev_coord = self.waypoints_2d[closest_idx - 1]

    # Equation for a hyperplane through closest_coord
    cl_vect = np.array(closest_coord)
    prev_vect = np.array(prev_coord)
    pos_vect = np.array([x, y])

    val = np.dot(cl_vect - prev_vect, pos_vect - cl_vect)
    if val > 0:
        # Closest waypoint is behind us, so take the next one
        closest_idx = (closest_idx + 1) % len(self.waypoints_2d)
    return closest_idx
```
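For context, here is a minimal sketch of how the 2D waypoint list and the KD-tree queried above can be built, and how the next batch of waypoints is then published to /final_waypoints. The structure follows the project skeleton (names such as LOOKAHEAD_WPS, waypoints_cb, and final_waypoints_pub), but treat the details as illustrative rather than our exact code:

```python
from scipy.spatial import KDTree
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 200  # number of waypoints to publish ahead of the car

def waypoints_cb(self, waypoints):
    # Cache the full track once and index it for fast nearest-neighbor queries
    # (assumes self.waypoints_2d was initialized to None in __init__)
    self.base_waypoints = waypoints
    if not self.waypoints_2d:
        self.waypoints_2d = [[wp.pose.pose.position.x, wp.pose.pose.position.y]
                             for wp in waypoints.waypoints]
        self.waypoints_tree = KDTree(self.waypoints_2d)

def publish_waypoints(self, closest_idx):
    # Slice the next LOOKAHEAD_WPS waypoints and publish them as /final_waypoints
    lane = Lane()
    lane.header = self.base_waypoints.header
    lane.waypoints = self.base_waypoints.waypoints[closest_idx:closest_idx + LOOKAHEAD_WPS]
    self.final_waypoints_pub.publish(lane)
```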
This node is responsible for drive-by-wire control, publishing the appropriate steering, throttle, and brake commands so that the car follows the waypoints at the target velocity. For the throttle controller, we implemented a generic PID controller.
The code below is part of the control() method in twist_controller.py. It is responsible for our throttle changes and decides whether the car is held at a stop by applying the brake.
```python
if linear_vel == 0. and current_vel < 0.1:
    # Hold the car in place when stopped at a light
    throttle = 0
    brake = 700  # N*m of torque to hold the car in place (deceleration ~ 1 m/s^2)
elif throttle < .1 and vel_error < 0:
    # We are faster than the target velocity: ease off the throttle and brake
    throttle = 0
    decel = max(vel_error, self.decel_limit)
    brake = abs(decel) * (self.vehicle_mass + self.fuel_capacity * GAS_DENSITY) * self.wheel_radius  # Torque in N*m

return throttle, brake, steering
```
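The throttle value fed into the logic above comes from the generic PID controller mentioned earlier. A minimal sketch of such a controller, with illustrative (untuned) limits rather than our exact gains:

```python
# Minimal sketch of a generic PID controller for the throttle.
# The gains kp, ki, kd and the output limits are illustrative only.
class PID(object):
    def __init__(self, kp, ki, kd, mn=0.0, mx=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0
        self.last_error = 0.0

    def step(self, error, sample_time):
        # Integrate the error and approximate its derivative
        integral = self.int_val + error * sample_time
        derivative = (error - self.last_error) / sample_time

        val = self.kp * error + self.ki * integral + self.kd * derivative

        # Clamp the output; only keep the integral when we are within limits
        # to avoid integral wind-up
        if val > self.max:
            val = self.max
        elif val < self.min:
            val = self.min
        else:
            self.int_val = integral
        self.last_error = error
        return val
```

As a sanity check on the brake line above: with the project's default parameters (vehicle_mass = 1736.35 kg, fuel_capacity = 13.5 gal, GAS_DENSITY = 2.858, wheel_radius = 0.2413 m), a deceleration of 1 m/s^2 works out to roughly (1736.35 + 13.5 * 2.858) * 0.2413 ≈ 428 N*m of brake torque, which is why 700 N*m comfortably holds the car at a stop.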
This node consists of a detector and a classifier. Its main purpose is to detect any upcoming traffic light and then classify the light's state to direct the car.
We implemented the detector by updating two functions, process_traffic_lights() and get_closest_waypoint(). Effectively, these two functions use the vehicle's location to find the coordinates of the closest upcoming traffic light.
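A condensed sketch of how those two functions fit together, following the project skeleton (self.config['stop_line_positions'], self.lights, and get_light_state() come from the tl_detector skeleton; treat the details as illustrative):

```python
def process_traffic_lights(self):
    # Find the stop line closest to the vehicle that is still ahead of it
    closest_light = None
    line_wp_idx = None
    stop_line_positions = self.config['stop_line_positions']
    if self.pose:
        car_wp_idx = self.get_closest_waypoint(self.pose.pose.position.x,
                                               self.pose.pose.position.y)
        # self.waypoints is the Lane message cached from /base_waypoints
        diff = len(self.waypoints.waypoints)
        for i, light in enumerate(self.lights):
            line = stop_line_positions[i]
            temp_wp_idx = self.get_closest_waypoint(line[0], line[1])
            d = temp_wp_idx - car_wp_idx
            if 0 <= d < diff:
                diff = d
                closest_light = light
                line_wp_idx = temp_wp_idx

    if closest_light:
        state = self.get_light_state(closest_light)
        return line_wp_idx, state
    return -1, TrafficLight.UNKNOWN
```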
For the classifier, we decided on an OpenCV approach. Below is the main piece of code: we chose two HSV ranges that cover red on the traffic light, and if enough red pixels remain after filtering, we classify the light as red, which directs the car to stop. In this implementation, anything that is not red ends up as Unknown, which allows the car to keep moving forward.
```python
# Threshold the HSV image, keeping only the red pixels (red wraps around
# the hue axis in OpenCV, so two hue ranges are combined)
lower_red_hue_range = cv2.inRange(hsv_image, (0, 100, 100), (10, 255, 255))
upper_red_hue_range = cv2.inRange(hsv_image, (160, 100, 100), (179, 255, 255))
mask = cv2.bitwise_or(lower_red_hue_range, upper_red_hue_range)

num_of_red_pixels = cv2.countNonZero(mask)
if num_of_red_pixels >= 200:
    return TrafficLight.RED
return TrafficLight.UNKNOWN
```
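One detail worth noting: hsv_image above is the camera frame converted out of BGR color space first, since the inRange hue thresholds assume HSV. A one-line sketch of that conversion (camera_image as the incoming BGR frame is an assumed name):

```python
import cv2

# Convert the incoming BGR camera frame to HSV before thresholding
hsv_image = cv2.cvtColor(camera_image, cv2.COLOR_BGR2HSV)
```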
At the end of this node, we publish the index of the waypoint for the nearest upcoming red light's stop line to /traffic_waypoint.
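A minimal sketch of that publication, assuming the publisher is set up in the node's __init__ as in the project skeleton (light_wp is the stop-line waypoint index, or -1 when no red light is upcoming):

```python
import rospy
from std_msgs.msg import Int32

# In __init__: advertise the topic that the waypoint updater listens to
self.upcoming_red_light_pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)

# After detection/classification: publish the stop-line waypoint index
self.upcoming_red_light_pub.publish(Int32(light_wp))
```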
Please use one of the two installation options below, either the native or the Docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
  - 2 CPU
  - 2 GB system memory
  - 25 GB of free hard drive space
The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Install Dataspeed DBW.
  - Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the Docker container:
```bash
docker build . -t capstone
```
Run the Docker container:
```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson).
- Clone the project repository
```bash
git clone https://github.com/udacity/CarND-Capstone.git
```
- Install python dependencies
```bash
cd CarND-Capstone
pip install -r requirements.txt
```
- Make and run styx
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file
```bash
unzip traffic_light_bag_file.zip
```
- Play the bag file
```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
- Launch your project in site mode
```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real-life images
If you get the following error message:
```
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
  Could not find a package configuration file provided by "dbw_mkz_msgs" with
  any of the following names:

    dbw_mkz_msgsConfig.cmake
    dbw_mkz_msgs-config.cmake

  Add the installation prefix of "dbw_mkz_msgs" to CMAKE_PREFIX_PATH or set
  "dbw_mkz_msgs_DIR" to a directory containing one of the above files. If
  "dbw_mkz_msgs" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  styx/CMakeLists.txt:10 (find_package)

-- Could not find the required component 'dbw_mkz_msgs'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
  Could not find a package configuration file provided by "dbw_mkz_msgs" with
  any of the following names:

    dbw_mkz_msgsConfig.cmake
    dbw_mkz_msgs-config.cmake

  Add the installation prefix of "dbw_mkz_msgs" to CMAKE_PREFIX_PATH or set
  "dbw_mkz_msgs_DIR" to a directory containing one of the above files. If
  "dbw_mkz_msgs" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  styx/CMakeLists.txt:10 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/workspace/CarND-Capstone/ros/build/CMakeFiles/CMakeOutput.log".
See also "/home/workspace/CarND-Capstone/ros/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
```
run the following commands to install the missing dbw_mkz_msgs package:
```bash
sudo apt-get update
sudo apt-get install -y ros-kinetic-dbw-mkz-msgs
cd /home/workspace/CarND-Capstone/ros
rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
```
Outside of requirements.txt, here is information on other driver/library versions used in the simulator and Carla:
Specific to these libraries, the simulator grader and Carla use the following:
| | Simulator | Carla |
|---|---|---|
Nvidia driver | 384.130 | 384.130 |
CUDA | 8.0.61 | 8.0.61 |
cuDNN | 6.0.21 | 6.0.21 |
TensorRT | N/A | N/A |
OpenCV | 3.2.0-dev | 2.4.8 |
OpenMP | N/A | N/A |
We are working on a fix to line up the OpenCV versions between the two.