
Self-Driving-Car


1. Introduction

Steering-angle prediction for self-driving cars is one of the Udacity challenges. The goal of this project is to build two models, 3DCNN+LSTM and TransferLearning, that analyse real-world driving videos and predict steering angles from the road situation, so that an autonomous car can steer itself.

2. Data

The data source is available on the Udacity GitHub.
Ⅰ. It includes steering angles, speed, and torque, together with images from the left, center, and right cameras.
Ⅱ. Image size: 640*320

Training data: Ch2_002.tar.gz.torrent
Testing data: Ch2_001.tar.gz.torrent

2.1 Reading Tool

Data format: ROSBAG. Tool: the udacity-driving-reader tool from Ross Wightman.

2.2 DataLoading

The 3DCNN+LSTM and TransferLearning models load images at different sizes. The data-loading code consists of two files:

    ConsecutiveBatchSampler.py & UdacityDataset.py
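
A minimal usage sketch is given below. The class names come from the two files above, but the constructor arguments, the batch layout, and the returned tuple are assumptions made for illustration; check the files for the actual signatures.

```python
from torch.utils.data import DataLoader

# Module names match the repository files; the arguments below are illustrative assumptions.
from UdacityDataset import UdacityDataset
from ConsecutiveBatchSampler import ConsecutiveBatchSampler

# Hypothetical arguments: a folder of extracted frames plus the camera to use.
dataset = UdacityDataset(root_dir="data/Ch2_002", camera="center")

# Assumed to yield windows of temporally adjacent frame indices, so the
# 3DCNN+LSTM model receives ordered sequences rather than shuffled frames.
sampler = ConsecutiveBatchSampler(dataset, batch_size=8, sequence_length=5)

loader = DataLoader(dataset, batch_sampler=sampler, num_workers=4)
for images, angles in loader:
    # Expected layout for the 3D model: [batch, sequence, channels, height, width]
    print(images.shape, angles.shape)
    break
```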

3. Neural Network

The 3DCNN+LSTM and TransferLearning architectures are defined in two files:

    Convolution3D.py & TransferLearning.py

3.1 3DCNN+LSTM

image

    * Loading Size: [Batch_size, sequence_length, channels, height, width]
    * Feeding Size: images of Batch_size * sequence_length * channels * 320 * 120
Main Characteristics:

Ⅰ. Inside the 3D convolution layers, residual connections are added to counteract vanishing gradients. When the features pass through the LSTM, its memory draws information from earlier frames and feeds the integrated representation to the following fully connected layers.
Ⅱ. Because the input is a time sequence, 3DCNN+LSTM takes video-style data, and the LSTM can memorize driving history from the frame features extracted by the 3D convolution layers; a minimal sketch follows below.
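
The sketch below illustrates the data flow described above: 3D convolutions over a stack of frames, followed by an LSTM and a linear regressor that outputs one steering angle per frame. The layer sizes are illustrative assumptions, and the residual connections are omitted for brevity; Convolution3D.py contains the actual architecture.

```python
import torch
import torch.nn as nn

class Conv3DLSTMSketch(nn.Module):
    """Simplified stand-in for Convolution3D.py: 3D conv blocks -> LSTM -> regressor."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),   # keep the time dimension, shrink H and W
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)     # one steering angle per frame

    def forward(self, x):                          # x: [B, T, C, H, W] as loaded
        x = x.permute(0, 2, 1, 3, 4)               # -> [B, C, T, H, W] for Conv3d
        f = self.features(x)                       # -> [B, 32, T, 4, 4]
        f = f.permute(0, 2, 1, 3, 4).flatten(2)    # -> [B, T, 32*4*4]
        out, _ = self.lstm(f)                      # -> [B, T, hidden]
        return self.head(out).squeeze(-1)          # -> [B, T] predicted angles

model = Conv3DLSTMSketch()
dummy = torch.randn(2, 5, 3, 120, 320)             # two sequences of five frames
print(model(dummy).shape)                          # torch.Size([2, 5])
```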

3.2 TransferLearning

[Figure: architecture used for the transfer-learning model]
Reference: Self-Driving Car Steering Angle Prediction Based on Image Recognition, Shuyang Du, Haoli Guo, Andrew Simpson, arXiv:1912.05440v1 [cs.CV], 11 Dec 2019, page 4, "Figure 3. Architecture used for transfer learning model".

    * Loading Size: [Batch_size, channels, height, width]
    * Feeding Size: images of Batch_size * channels * 224 * 224
Main Characteristics:

Ⅰ. Instead of modelling the time sequence, the TransferLearning model takes individual frames as input and uses a pretrained ResNet50 to extract richer features from its convolutional layers, as sketched below.
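
A minimal transfer-learning sketch under these assumptions: a ResNet50 pretrained on ImageNet, its classification head replaced by a small regression head that outputs a single steering angle, with the convolutional backbone optionally frozen. The head layers are illustrative; TransferLearning.py contains the layers actually used.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained ResNet50 and swap its 1000-class head for a regressor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Sequential(
    nn.Linear(backbone.fc.in_features, 256),  # head sizes are illustrative assumptions
    nn.ReLU(),
    nn.Linear(256, 1),                        # single steering-angle output
)

# Optionally freeze the pretrained convolutional layers and train only the head.
for name, param in backbone.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

dummy = torch.randn(4, 3, 224, 224)           # Batch_size * channels * 224 * 224
print(backbone(dummy).shape)                  # torch.Size([4, 1])
```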

4. Results

Below are the loss values of the two models at different training stages.

[Figure: loss values of the two models at different stages]

To visualize the outputs of our models, go to the Visualization notebook:

Visualization.ipynb

4.1 3DCNN+LSTM

  • Below is the attention map:
    [Attention map for the 3DCNN+LSTM model]
  • Below are the kernel images:
    [Kernel visualizations for the 3DCNN+LSTM model]
    Using this 3DCNN+LSTM model, you should obtain results like these.

4.2 TransferLearning

  • Below is the attention map:
    [Attention map for the TransferLearning model]
  • Below are the kernel images:
    [Kernel visualizations for the TransferLearning model]
    Using this TransferLearning model, you should obtain results like these.

4.3 Result video

For the video, you can check the attention map video.

5. Pointers and Acknowledgements

  • Some of the model architectures are based on Self-Driving Car Steering Angle Prediction Based on Image Recognition.
  • rwightman's docker tool was used to convert the round 2 data from ROSBAG to JPG.
  • PyTorch was used to build the neural network models.
