semantic segmentation on Berkeley Deep Drive 100K with wandb and fastai

staceysv/deep-drive

Semantic Segmentation

Forked from Boris's project: original repo here.

Introduction

Self-driving cars require a deep understanding of their surroundings. Camera images are used to recognize roads, pedestrians, cars, sidewalks, etc. with pixel-level accuracy. In this repository, we define a neural network and optimize it to perform semantic segmentation.
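
Pixel accuracy and per-class intersection-over-union (IoU) are the standard ways to score segmentation at the pixel level. A minimal NumPy sketch of both metrics (the function names are illustrative, not part of this repo):

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the label."""
    return (pred == target).mean()

def class_iou(pred, target, cls):
    """Intersection-over-union for a single class id."""
    inter = np.logical_and(pred == cls, target == cls).sum()
    union = np.logical_or(pred == cls, target == cls).sum()
    return inter / union if union else float("nan")

# Toy 2x2 "images": 0 = road, 1 = car
pred   = np.array([[0, 1], [0, 0]])
target = np.array([[0, 1], [1, 0]])
print(pixel_accuracy(pred, target))  # 0.75 (3 of 4 pixels correct)
print(class_iou(pred, target, 1))    # 0.5  (1 pixel overlap / 2 pixel union)
```

In practice these are averaged over all classes and images (mean IoU); fastai and W&B can track them as training metrics.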

The deep learning framework used is fast.ai, and the dataset is from Berkeley Deep Drive. It provides labeled segmentation data captured from a diverse range of cars, in multiple cities and weather conditions.

Every experiment is automatically logged to Weights & Biases for easier analysis and interpretation of results, and to guide optimization of the architecture.

Usage

Dependencies can be installed through requirements.txt or Pipfile.

The dataset needs to be downloaded from Berkeley Deep Drive.

The following files are present in src folder:

  • pre_process.py must be run once on the dataset to make it more user-friendly (it rewrites the segmentation masks to use consecutive class values);
  • prototype.ipynb is a Jupyter notebook used to prototype the solution;
  • train.py is a script to run several experiments and log them to Weights & Biases.
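
The remapping done by pre_process.py can be sketched as follows: raw BDD masks use non-consecutive class ids, while segmentation losses expect classes numbered 0..K-1. This NumPy version is a hedged illustration of the idea, not the repo's actual implementation:

```python
import numpy as np

def remap_to_consecutive(mask):
    """Map arbitrary class ids in a mask to 0..K-1, preserving sorted order.

    Returns the remapped mask and the id -> new-id lookup table.
    """
    values = np.unique(mask)                      # sorted unique class ids
    lut = {int(v): i for i, v in enumerate(values)}
    remapped = np.vectorize(lut.get)(mask).astype(mask.dtype)
    return remapped, lut

# A tiny mask with non-consecutive ids (255 is often an "ignore" label)
mask = np.array([[7, 7, 255],
                 [0, 7, 26]])
new_mask, lut = remap_to_consecutive(mask)
print(new_mask)  # ids become 0..3
print(lut)       # {0: 0, 7: 1, 26: 2, 255: 3}
```

Running this once over the dataset and saving the results avoids paying the remapping cost on every training epoch.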

Results

See my results and conclusions:
