Ever felt overwhelmed by Machine Learning jargon? We get it! This course explains Deep Learning concepts using simple analogies and practical code examples using PyTorch.
Learning Machine Learning and Deep Learning can be tough. While there are many great resources out there, we believe the best way to learn is through:
- Simple analogies that relate to everyday life
- Clear explanations of why we do what we do
- Hands-on coding examples
- No complicated math (just the essential concepts)

What this course offers:

- Everything explained with analogies: complex topics broken down using real-world examples
- Beginner-friendly: No prior Machine Learning/Deep Learning knowledge needed
- Learn by doing: Practical code examples and exercises
- AI-assisted learning: Recommended use of AI chat tools (like Perplexity) when stuck

What you'll need:

- Basic Python programming knowledge
- Ability to run Python code (locally or using cloud platforms like Google Colab)
- Curiosity to learn!

What you'll learn:

- Linear Regression: Your First ML Model
- Gradient Descent: How Models Learn
- Normal Equations: A Different Approach
- PyTorch Basics: Your New ML Friend
- Neural Networks: Building Blocks
- Convolutional Neural Networks (CNNs): Image Processing Magic
- Recurrent Neural Networks (RNNs): Understanding Sequences
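
To give you a taste of the hands-on style, here is a minimal sketch (not taken from the course notebooks) of the first two topics on that list: fitting a linear regression with gradient descent in PyTorch. The toy data, learning rate, and number of steps are made up for illustration.

```python
import torch

# Toy data: y = 2x + 1 with a little noise
X = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * X + 1 + 0.1 * torch.randn_like(X)

# One linear layer = linear regression
model = torch.nn.Linear(in_features=1, out_features=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for step in range(200):          # gradient descent loop
    optimizer.zero_grad()        # reset gradients from the previous step
    loss = loss_fn(model(X), y)  # how wrong are we right now?
    loss.backward()              # compute gradients
    optimizer.step()             # nudge the weights downhill

print(model.weight.item(), model.bias.item())  # should be close to 2 and 1
```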

How to use this course:

- Go to the notebooks folder and open the notebook that you want to learn from
- Read each piece of text carefully, then run the code cell below it; it often helps to retype the code yourself rather than just running the existing cell
- Stuck on a concept? Use AI chat tools for personalized analogies
- Found an issue? Create a new issue in this repository
- Need visual learning? Check out Amazon's MLU-Explain
This is an open-source project! Feel free to:
- Suggest better analogies
- Add new examples
- Fix errors
- Share your learning experience
Ready to begin? Head to the first notebook in the notebooks folder!
Remember: Everyone learns differently. If this approach doesn't click with you, that's okay! Check out other resources like MLU-Explain for visual learning.
[Introduction to Deep Learning with PyTorch.ipynb](<notebooks/deep-learning/Introduction to Deep Learning with PyTorch.ipynb>)
This is important because, if you go through the course the way it is intended, you'll get the maximum value out of it.
There are two ways to use this repository:

- To run the code examples locally:

  ```bash
  pip install -r requirements.txt
  ```

  This will install all the ML/DL libraries needed to run the example code.

- To build the book locally:

  ```bash
  conda env create -f requirements-book.yml
  conda activate jupyter-book
  jupyter-book build . --all
  ```
Keeping the code requirements (requirements.txt) separate from the book-building environment (requirements-book.yml) provides several benefits:
- Faster deployments since only necessary packages are installed
- Clearer separation of concerns between book building and code running
- Conda environment for book building helps avoid dependency conflicts
- Users can choose to install only what they need
You might also want to add a `.gitignore` entry for the conda environment.
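
This only applies if the environment is created inside the repository (for example with `conda env create -f requirements-book.yml --prefix ./envs/jupyter-book`); by default conda stores environments elsewhere. The path below is just an illustrative guess:

```
# Local conda environment for building the book (only if created inside the repo)
envs/
```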
For the target audience of Python-familiar beginners, here is a comprehensive course structure:
Chapter 1: Introduction to Machine Learning
- What is Machine Learning
- Types of Machine Learning
- Real-world Applications
- ML Project Lifecycle
- Basic ML Terminology
Chapter 2: Data Fundamentals
- Data Types and Structures
- Exploratory Data Analysis
- Data Preprocessing
- Feature Scaling
- Handling Missing Values
- Feature Selection
- Data Visualization[1]
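
A minimal sketch of the kind of preprocessing Chapter 2 covers, using pandas and scikit-learn; the column names and values are invented for illustration:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Tiny made-up dataset with one missing value
df = pd.DataFrame({"age": [25, 32, None, 41],
                   "income": [40_000, 52_000, 61_000, 58_000]})

df["age"] = df["age"].fillna(df["age"].median())   # handling missing values

scaler = StandardScaler()                          # feature scaling to mean 0, std 1
scaled = scaler.fit_transform(df[["age", "income"]])
print(scaled)
```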
Chapter 3: Supervised Learning
- Classification vs Regression
- Linear Regression
- Logistic Regression
- Decision Trees
- Random Forests
- XGBoost
- Model Evaluation Metrics[1][3]
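
As a sketch of the supervised workflow in Chapter 3 (the built-in dataset and hyperparameters here are chosen only for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A built-in classification dataset, just for demonstration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)   # a simple classification model
model.fit(X_train, y_train)                 # train on the training split

print(accuracy_score(y_test, model.predict(X_test)))  # one evaluation metric
```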
Chapter 4: Unsupervised Learning
- Clustering Algorithms
- Dimensionality Reduction
- Association Rules
- Principal Component Analysis
- Practical Applications[1]
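
A small illustrative sketch of clustering and dimensionality reduction with scikit-learn (dataset and parameters picked arbitrarily):

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)   # labels are ignored: this is unsupervised

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # clustering
X_2d = PCA(n_components=2).fit_transform(X)   # reduce 4 features to 2 dimensions

print(labels[:10], X_2d.shape)
```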
Chapter 5: Neural Networks Basics
- Artificial Neural Networks
- Activation Functions
- Forward and Backward Propagation
- Training Neural Networks
- Optimization Techniques[2]
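
A minimal sketch of forward and backward propagation on a tiny network in PyTorch; the layer sizes and fake data are illustrative only:

```python
import torch
import torch.nn as nn

# A tiny fully connected network: 4 inputs -> 8 hidden units -> 3 outputs
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 4)                # a batch of 16 fake samples
targets = torch.randint(0, 3, (16,))  # fake class labels

logits = model(x)                     # forward propagation
loss = loss_fn(logits, targets)       # how wrong were the predictions?
loss.backward()                       # backward propagation (compute gradients)
optimizer.step()                      # one optimization step
```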
Chapter 6: Deep Learning Tools
- Introduction to TensorFlow/PyTorch
- Working with Tensors
- Building Neural Networks
- Model Training and Evaluation
- GPU Acceleration[2]
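
A quick sketch of working with tensors and (optionally) a GPU in PyTorch:

```python
import torch

a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # creating a tensor by hand
b = torch.ones(2, 2)                        # a tensor of ones

device = "cuda" if torch.cuda.is_available() else "cpu"  # use the GPU if present
a, b = a.to(device), b.to(device)

print(a @ b)                      # matrix multiplication
print(a.sum(), a.shape, a.device) # reductions, shapes, and placement
```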
Chapter 7: Convolutional Neural Networks
- Image Processing Basics
- CNN Architecture
- Pooling and Padding
- Transfer Learning
- Computer Vision Applications[3]
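
A sketch of what a convolution plus pooling step does to a batch of images in PyTorch (the channel counts and image size are arbitrary):

```python
import torch
import torch.nn as nn

images = torch.randn(8, 3, 32, 32)  # a batch of 8 fake RGB 32x32 images

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)  # padding keeps 32x32
pool = nn.MaxPool2d(kernel_size=2)                                          # halves height and width

features = pool(conv(images))
print(features.shape)  # torch.Size([8, 16, 16, 16])
```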
Chapter 8: Sequential Data and RNNs
- Sequence Processing
- RNN Architecture
- LSTM and GRU
- Natural Language Processing Basics
- Text Classification[3]
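
A sketch of running a batch of sequences through an LSTM in PyTorch; the sequence length and feature sizes are made up:

```python
import torch
import torch.nn as nn

# A batch of 4 sequences, each 10 steps long, with 8 features per step
x = torch.randn(4, 10, 8)

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
outputs, (h_n, c_n) = lstm(x)

print(outputs.shape)  # torch.Size([4, 10, 16]) - one hidden state per time step
print(h_n.shape)      # torch.Size([1, 4, 16])  - the final hidden state
```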
Chapter 9: Modern Deep Learning
- Transformers
- Attention Mechanisms
- Generative AI Basics
- Model Deployment
- Best Practices[2]
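
A back-of-the-envelope sketch of the attention computation at the heart of transformers, written directly with tensor operations (the shapes are illustrative):

```python
import math
import torch

# Toy setup: 1 sequence of 5 tokens, each embedded in 16 dimensions
q = k = v = torch.randn(1, 5, 16)

scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # similarity of every token pair
weights = scores.softmax(dim=-1)                          # attention weights sum to 1 per token
attended = weights @ v                                    # weighted mix of the values

print(weights.shape, attended.shape)  # (1, 5, 5) and (1, 5, 16)
```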
Each chapter should include:
- Theoretical concepts with real-world examples
- Code demonstrations using Python
- Hands-on exercises
- Mini-projects
- Practice questions
- Jupyter notebooks for experimentation[1]
The course should progress from basic concepts to more complex topics, with each chapter building upon previous knowledge. Include practical exercises that demonstrate real-world applications while keeping mathematical complexity to a minimum[4].