There is strong demand for deep learning (DL) skills and expertise to solve challenging business problems, both globally and locally in KSA. This course will help learners build capacity in core DL tools and methods and enable them to develop their own deep learning applications. This course covers the basic theory behind DL algorithms, but the majority of the focus is on hands-on examples using PyTorch.
The primary learning objective of this course is to provide students with practical, hands-on experience with state-of-the-art machine learning and deep learning tools that are widely used in industry.
This course covers portions of chapters 10-19 of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and chapters 11-19 of Machine Learning with PyTorch and Scikit-Learn. The following topics will be discussed.
- Introduction to Artificial Neural Networks (ANNs)
- Training Deep Neural Networks (DNNs)
- Custom Models and Training with PyTorch and Lightning
- Strategies for Loading and Preprocessing Data
- Training and Deploying PyTorch Models at Scale
The lessons are organized into modules and sub-modules so that they can be taught somewhat independently to accommodate specific audiences.
Module 1: Introduction to Deep Learning
- The morning session will focus on the theory behind neural networks for solving both classification and regression problems by covering relevant portions of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and Machine Learning with PyTorch and Scikit-Learn.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform.
| Tutorial | Open in Google Colab | Open in Kaggle |
|---|---|---|
| First Steps with PyTorch | | |
| Building Data Pipelines with PyTorch | | |
| Building Neural Networks with PyTorch | | |
| Introduction to PyTorch Lightning | | |
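As a taste of the afternoon material, the snippet below sketches the kind of model built in the "Building Neural Networks with PyTorch" tutorial; the data, architecture, and hyperparameters are toy placeholders, not the tutorial's actual example.

```python
import torch
from torch import nn

# Toy data: 64 samples, 10 features, 3 classes (placeholder, not course data).
X = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

# A small feed-forward classifier.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# A basic training loop: forward pass, loss, backward pass, parameter update.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```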
Module 2: Training DNNs
- Consolidation of the previous day's content via Q&A and live coding demonstrations.
- The morning session will focus on the theory behind training deep neural networks by covering chapters 12-13 of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and chapters 12-13 of Machine Learning with PyTorch and Scikit-Learn.
- The afternoon session will focus on applying the techniques learned in the morning session using PyTorch, followed by a short assessment on the Kaggle data science competition platform.
Module 3: Deploying and Scaling PyTorch Models
- Consolidation of the previous day's content via Q&A and live coding demonstrations.
- The morning session will focus on various topics related to training and deploying PyTorch models at scale by covering chapter 19 of Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow.
- The afternoon session will allow time for a final assessment as well as additional time for learners to complete any of the previous assessments.
Student performance in the course will be assessed through participation in a Kaggle classroom competition.
Repository organization is based on ideas from Good Enough Practices for Scientific Computing.
- Put each project in its own directory, which is named after the project.
- Put external scripts or compiled programs in the `bin` directory.
- Put raw data and metadata in a `data` directory.
- Put text documents associated with the project in the `doc` directory.
- Put all Docker related files in the `docker` directory.
- Install the Conda environment into an `env` directory.
- Put all notebooks in the `notebooks` directory.
- Put files generated during cleanup and analysis in a `results` directory.
- Put project source code in the `src` directory.
- Name all files to reflect their content or function.
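Taken together, these conventions give a project layout along the following lines (the tree is illustrative; only the directories described above are shown).

```
project-name/
├── bin/            # external scripts or compiled programs
├── data/           # raw data and metadata
├── doc/            # text documents
├── docker/         # Docker-related files
├── env/            # Conda environment (not under version control)
├── notebooks/      # notebooks
├── results/        # files generated during cleanup and analysis
├── src/            # project source code
├── environment.yml
└── requirements.txt
```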
After adding any dependencies that should be installed via `conda` to the `environment.yml` file and any dependencies that should be installed via `pip` to the `requirements.txt` file, create the Conda environment in the `./env` sub-directory of your project by running the following commands.

```bash
export ENV_PREFIX=$PWD/env
mamba env create --prefix $ENV_PREFIX --file environment.yml --force
```
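A minimal `environment.yml` might look like the following; the channels and pinned versions are illustrative and will differ for this project. Note how pip dependencies are pulled in from `requirements.txt` via the `pip:` section.

```yaml
channels:
  - pytorch
  - conda-forge

dependencies:
  - python=3.10
  - pytorch
  - pip
  - pip:
    - -r requirements.txt
```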
Once the new environment has been created you can activate the environment with the following command.
```bash
conda activate $ENV_PREFIX
```
Note that the `ENV_PREFIX` directory is not under version control as it can always be re-created as necessary.
For your convenience these commands have been combined in a shell script `./bin/create-conda-env.sh`. Running the shell script will create the Conda environment, activate the Conda environment, and build JupyterLab with any additional extensions. The script should be run from the project root directory as follows.

```bash
./bin/create-conda-env.sh
```
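The script itself is short; it does roughly the following (a sketch only, see the actual file in `./bin` for details).

```bash
#!/bin/bash --login
set -e

# Create the environment in ./env and activate it.
export ENV_PREFIX=$PWD/env
mamba env create --prefix $ENV_PREFIX --file environment.yml --force
conda activate $ENV_PREFIX

# Rebuild JupyterLab so any lab extensions in the environment are included.
jupyter lab build
```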
The most efficient way to build Conda environments on Ibex is to launch the environment creation script as a job on the debug partition via Slurm. For your convenience a Slurm job script `./bin/create-conda-env.sbatch` is included. The script should be run from the project root directory as follows.

```bash
sbatch ./bin/create-conda-env.sbatch
```
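Such a job script is mostly a thin Slurm wrapper around the shell script above; the resource requests below are illustrative, not the actual values used in the included script.

```bash
#!/bin/bash --login
#SBATCH --partition=debug
#SBATCH --time=00:30:00
#SBATCH --job-name=create-conda-env

# Build the Conda environment on the compute node.
./bin/create-conda-env.sh
```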
The explicit dependencies for the project are listed in the `environment.yml` file. To see the full list of packages installed into the environment run the following command.

```bash
conda list --prefix $ENV_PREFIX
```
If you add (remove) dependencies to (from) the `environment.yml` file or the `requirements.txt` file after the environment has already been created, then you can re-create the environment with the following command.

```bash
mamba env create --prefix $ENV_PREFIX --file environment.yml --force
```
In order to build Docker images for your project and run containers with GPU acceleration you will need to install Docker, Docker Compose, and the NVIDIA Docker runtime. Detailed instructions for using Docker to build an image and launch containers can be found in `docker/README.md`.
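Once those are installed, a quick way to confirm that containers can see the GPU is to run a CUDA-enabled image with Docker's `--gpus` flag; the image below is the stock PyTorch image, used here purely for illustration rather than one of the project's own images.

```bash
# Should print "True" if the NVIDIA runtime is configured correctly.
docker run --rm --gpus all pytorch/pytorch \
    python -c "import torch; print(torch.cuda.is_available())"
```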