Forked from Boris's project: original repo here.
Self-driving cars require a deep understanding of their surroundings. Camera images are used to recognize roads, pedestrians, cars, sidewalks, etc. with pixel-level accuracy. In this repository, we define a neural network and optimize it to perform semantic segmentation.
The AI framework used is fast.ai and the dataset comes from Berkeley Deep Drive. It is highly diverse, with labeled segmentation data recorded from a wide range of cars, in multiple cities and weather conditions.
Every experiment is automatically logged to Weights & Biases for easier analysis and interpretation of results, and to guide optimization of the architecture.
Dependencies can be installed through `requirements.txt` or `Pipfile`.
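For instance, from the repository root (commands assume a standard pip or pipenv setup; adapt to your environment):

```shell
# Install dependencies with pip
pip install -r requirements.txt

# or, equivalently, with pipenv using the Pipfile
pipenv install
```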
The dataset needs to be downloaded from Berkeley Deep Drive.
The following files are present in the `src` folder:

- `pre_process.py` must be run once on the dataset to make it more user friendly (segmentation masks with consecutive values);
- `prototype.ipynb` is a Jupyter Notebook used to prototype our solution;
- `train.py` is a script to run several experiments and log them on Weights & Biases.
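The pre-processing step above remaps mask values so they are consecutive. A minimal sketch of that idea, assuming masks are loaded as NumPy arrays with arbitrary (non-consecutive) class ids (the function name and label values here are hypothetical, not taken from `pre_process.py`):

```python
import numpy as np

def remap_mask(mask: np.ndarray) -> np.ndarray:
    """Map the distinct values of a segmentation mask to consecutive
    integers 0..N-1, as most loss functions and metrics expect."""
    values = np.unique(mask)                    # sorted distinct class ids
    lut = {v: i for i, v in enumerate(values)}  # old id -> consecutive id
    remapped = np.vectorize(lut.get)(mask)      # apply the lookup elementwise
    return remapped.astype(mask.dtype)

# Example: ids 0, 7 and 255 become 0, 1 and 2
mask = np.array([[0, 0, 7], [7, 255, 255]], dtype=np.uint8)
print(remap_mask(mask))
```

In practice this would be run once over every mask file in the dataset and the results saved back to disk, so training never pays the remapping cost.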
See my results and conclusions: