There is strong demand, both globally and locally in KSA, for deep learning (DL) skills and expertise to solve challenging business problems. This course will help learners build capacity in the core DL tools and methods used in the computer vision field and enable them to develop their own computer vision applications. This course covers the basic theory behind key DL computer vision algorithms, but the majority of the focus is on building computer vision applications using PyTorch.
The primary learning objective of this course is to provide students with practical, hands-on experience with state-of-the-art machine learning and deep learning tools that are widely used in the computer vision field.
This course covers relevant portions of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* and *Machine Learning with PyTorch and Scikit-Learn*. The following topics will be discussed.
- Convolutional Neural Networks (CNNs)
- Autoencoders
- Generative Adversarial Networks (GANs)
- Diffusion Models
The lessons are organized into modules with the idea that they can be taught somewhat independently to accommodate specific audiences. It is assumed that learners will have sufficient background in the basics of DL, equivalent to having taken *Introduction to Deep Learning*.
Materials should be completed prior to arriving at any in-person training.
Module 1: Introduction to CNNs
- Review of Deep Learning fundamentals.
- The morning session will focus on the theory behind Convolutional Neural Networks (CNNs) by covering relevant portions of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* and *Machine Learning with PyTorch and Scikit-Learn*.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform. A minimal CNN sketch in PyTorch follows the tutorial table below.
| Tutorial | Open in Google Colab | Open in Kaggle |
| --- | --- | --- |
| Introduction to Computer Vision with PyTorch | | |
| Introduction to Computer Vision with PyTorch Lightning | | |
| Introduction to Convolutional Neural Networks (CNNs) | | |
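As referenced above, here is a minimal convolutional network in PyTorch. This is an illustrative sketch that assumes 28x28 grayscale inputs (e.g. MNIST) and ten output classes; the course notebooks may use different datasets and architectures.

```python
import torch
import torch.nn as nn

# Minimal CNN for 28x28 grayscale images; illustrative only.
class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 1x28x28 -> 32x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x28x28 -> 32x14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 32x14x14 -> 64x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x14x14 -> 64x7x7
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = SimpleCNN()
logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 random "images"
print(logits.shape)                        # torch.Size([8, 10])
```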
Module 2: Introduction to Autoencoders
- Consolidation of previous content via Q/A and live coding demonstrations.
- The morning session will focus on the theory behind Autoencoders by covering relevant portions of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* and *Machine Learning with PyTorch and Scikit-Learn*.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform. A minimal autoencoder sketch follows the tutorial table below.
| Tutorial | Open in Google Colab | Open in Kaggle |
| --- | --- | --- |
| Introduction to Autoencoders with PyTorch Lightning | | |
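In the same spirit, here is a minimal autoencoder sketch in plain PyTorch, assuming flattened 28x28 inputs; the course notebooks, which use PyTorch Lightning, will differ in detail.

```python
import torch
import torch.nn as nn

# Minimal fully-connected autoencoder for flattened 28x28 images; illustrative only.
class Autoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),             # compressed representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(8, 28 * 28)                  # batch of 8 random "images"
loss = nn.functional.mse_loss(model(x), x)  # reconstruction loss
```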
Module 3: Introduction to GANs
- Consolidation of previous content via Q/A and live coding demonstrations.
- The morning session will focus on the theory behind Generative Adversarial Networks (GANs) by covering relevant portions of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* and *Machine Learning with PyTorch and Scikit-Learn*.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform. A minimal GAN sketch follows the tutorial table below.
| Tutorial | Open in Google Colab | Open in Kaggle |
| --- | --- | --- |
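Here is a minimal GAN sketch in PyTorch showing the two adversarial networks and the generator's loss; the alternating generator/discriminator training loop is omitted, and all shapes and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim = 64

# Generator: maps noise vectors to fake flattened 28x28 images.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 28 * 28), nn.Tanh(),  # outputs in [-1, 1]
)

# Discriminator: scores images as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # real/fake logit
)

bce = nn.BCEWithLogitsLoss()
z = torch.randn(8, latent_dim)  # batch of 8 noise vectors
fake = generator(z)

# The generator is trained to make the discriminator label fakes as real.
g_loss = bce(discriminator(fake), torch.ones(8, 1))
```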
Module 4: Introduction to Diffusion Models
- Consolidation of previous content via Q/A and live coding demonstrations.
- The morning session will focus on the theory behind Diffusion Models by covering relevant portions of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow* and *Machine Learning with PyTorch and Scikit-Learn*.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform. A minimal diffusion forward-process sketch follows the tutorial table below.
| Tutorial | Open in Google Colab | Open in Kaggle |
| --- | --- | --- |
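Here is a minimal sketch of the forward (noising) process behind DDPM-style diffusion models; the noise schedule and shapes are illustrative assumptions, and the reverse (denoising) network is omitted.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def add_noise(x0: torch.Tensor, t: torch.Tensor):
    """Sample x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return x_t, noise  # a denoising network is trained to predict this noise

x0 = torch.randn(8, 1, 28, 28)  # batch of 8 random "images"
t = torch.randint(0, T, (8,))   # a random timestep for each image
x_t, target_noise = add_noise(x0, t)
```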
Student performance on the course will be assessed through participation in a Kaggle classroom competition.
Repository organization is based on ideas from Good Enough Practices for Scientific Computing.
- Put each project in its own directory, which is named after the project.
- Put external scripts or compiled programs in the `bin` directory.
- Put raw data and metadata in a `data` directory.
- Put text documents associated with the project in the `doc` directory.
- Put all Docker related files in the `docker` directory.
- Install the Conda environment into an `env` directory.
- Put all notebooks in the `notebooks` directory.
- Put files generated during cleanup and analysis in a `results` directory.
- Put project source code in the `src` directory.
- Name all files to reflect their content or function.
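Putting these conventions together, a project following this layout looks something like the sketch below (file names under the top-level directories will vary by project).

```
project/
├── bin/               # external scripts or compiled programs
├── data/              # raw data and metadata
├── doc/               # text documents associated with the project
├── docker/            # Docker related files
├── env/               # Conda environment (not under version control)
├── notebooks/         # Jupyter notebooks
├── results/           # files generated during cleanup and analysis
├── src/               # project source code
├── environment.yml    # conda dependencies (see below)
└── requirements.txt   # pip dependencies (see below)
```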
After adding any necessary dependencies that should be downloaded via `conda` to the `environment.yml` file, and any dependencies that should be downloaded via `pip` to the `requirements.txt` file, create the Conda environment in a sub-directory `./env` of your project directory by running the following commands.
```bash
export ENV_PREFIX=$PWD/env
mamba env create --prefix $ENV_PREFIX --file environment.yml --force
```
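For reference, a minimal `environment.yml` along these lines might look like the following sketch; the specific packages and version pins shown here are illustrative assumptions, not the course's actual dependencies.

```yaml
channels:
  - conda-forge

dependencies:
  - python=3.10  # illustrative version pin
  - pytorch      # core DL dependency for this course
  - pip
  - pip:
      - -r requirements.txt  # pull in any pip dependencies
```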
Once the new environment has been created you can activate the environment with the following command.
```bash
conda activate $ENV_PREFIX
```
Note that the `ENV_PREFIX` directory is not under version control as it can always be re-created as necessary.
For your convenience these commands have been combined in a shell script `./bin/create-conda-env.sh`. Running the shell script will create the Conda environment, activate the Conda environment, and build JupyterLab with any additional extensions. The script should be run from the project root directory as follows.
```bash
./bin/create-conda-env.sh
```
The most efficient way to build Conda environments on Ibex is to launch the environment creation script as a job on the debug partition via Slurm. For your convenience a Slurm job script `./bin/create-conda-env.sbatch` is included. The script should be run from the project root directory as follows.
```bash
sbatch ./bin/create-conda-env.sbatch
```
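For readers unfamiliar with Slurm, a job script wrapping the environment creation script might look roughly like the sketch below; the partition name comes from the text above, while the shebang, resource requests, and time limit are assumptions rather than the actual contents of the included script.

```bash
#!/bin/bash --login
#SBATCH --partition=debug   # short debug partition, as described above
#SBATCH --time=00:30:00     # assumed time limit
#SBATCH --cpus-per-task=4   # assumed CPU request
#SBATCH --mem=16G           # assumed memory request

# Re-use the same helper script used on a workstation.
./bin/create-conda-env.sh
```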
The explicit dependencies for the project are listed in the `environment.yml` file. To see the full list of packages installed into the environment, run the following command.
```bash
conda list --prefix $ENV_PREFIX
```
If you add (remove) dependencies to (from) the `environment.yml` file or the `requirements.txt` file after the environment has already been created, then you can re-create the environment with the following command.
```bash
mamba env create --prefix $ENV_PREFIX --file environment.yml --force
```
In order to build Docker images for your project and run containers with GPU acceleration, you will need to install Docker, Docker Compose, and the NVIDIA Docker runtime. Detailed instructions for using Docker to build an image and launch containers can be found in the `docker/README.md`.