diff --git a/Gemfile b/Gemfile
new file mode 100644
index 00000000..82716951
--- /dev/null
+++ b/Gemfile
@@ -0,0 +1,28 @@
+source "https://rubygems.org"
+# Hello! This is where you manage which Jekyll version is used to run.
+# When you want to use a different version, change it below, save the
+# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
+#
+# bundle exec jekyll serve
+#
+# This will help ensure the proper Jekyll version is running.
+# Happy Jekylling!
+# gem "jekyll", "~> 4.1.1"
+# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
+# uncomment the line below. To upgrade, run `bundle update github-pages`.
+gem "github-pages", group: :jekyll_plugins
+# If you have any plugins, put them here!
+group :jekyll_plugins do
+ gem "jekyll-remote-theme"
+end
+
+# Windows and JRuby do not include zoneinfo files, so bundle the tzinfo-data gem
+# and associated library.
+platforms :mingw, :x64_mingw, :mswin, :jruby do
+ gem "tzinfo", "~> 1.2"
+ gem "tzinfo-data"
+end
+
+# Performance-booster for watching directories on Windows
+gem "wdm", "~> 0.1.1", :platforms => [:mingw, :x64_mingw, :mswin]
+
diff --git a/README.md b/README.md
new file mode 100644
index 00000000..6ef06346
--- /dev/null
+++ b/README.md
@@ -0,0 +1,63 @@
+# README
+
+These notes accompany the Stanford CS class [**CS131**](http://cs131.stanford.edu/), Computer Vision: Foundations
+and Applications. This is a development space for the class notes where your team can commit changes as you build
+the notes for your assigned lecture; once you're done, we will merge your notes into the finished website.
+
+Head over to [https://anarcomey.github.io/cs131_notes_dev/](https://anarcomey.github.io/cs131_notes_dev/) to see what the web page notes you'll create will look like!
+
+## Steps to create your own notes
+To begin writing your own notes, which will appear on a website like [https://anarcomey.github.io/cs131_notes_dev/](https://anarcomey.github.io/cs131_notes_dev/), have one team member fork the repository that builds this web page and configure the fork to build a web page your team can develop on.
+
+- **Step 1: Fork the repository:** Fork this repository [https://github.com/ANarcomey/cs131_notes_dev](https://github.com/ANarcomey/cs131_notes_dev) into your own GitHub account
+
+
+
+
+
+- **Step 2: Enable GitHub Pages:** Open Settings from the menu bar in your forked repo, find the "GitHub Pages" heading, and choose the defaults of "master" branch and "root" directory so that your settings look like the figure below, except that "anarcomey" will be replaced with the GitHub username of the team member who created the fork. Don't worry about choosing a theme or any other settings; we've configured all of that for you.
+
+
+
+
+
+
+- **Step 3: Link the repository to your GitHub Page:** In your forked repository, edit the file `_config.yml`. Update the `url` field to `https://your_github_username.github.io` and either remove the `email` field or set it to one of your team members' emails if you want to receive build updates by email.
+
+
+
+
+- **Step 4: Submit:** Create a group Gradescope submission with all of your teammates and submit the URL of your GitHub Page containing your notes (e.g. `https://your_github_username.github.io/cs131_notes_dev/`) and the URL of your repository (e.g. `https://github.com/your_GitHub_username/cs131_notes_dev`).
+
+
+## Steps to update your notes
+Now that your notes are live in your own GitHub fork and running at `https://your_github_username.github.io/cs131_notes_dev/`, you'll want to add content and update them. To do that, find the Markdown (.md) file for your lecture in the `_chapters` directory and edit it. Once you've made your desired updates and want to see what they look like online, commit and push your changes to the master branch. The newly pushed code will render online in under a minute and you can see your notes! Once you have a handle on the basic mechanics of Markdown, you can write most of your notes without having to push and render very often. Take a look at some examples and a template with Markdown guidance in the `Instructions` module of the website, and also look at the Markdown code creating those pages in the .md files in `_chapters/instructions`.
+
+Since you're working in groups and editing the same Markdown file, it may be easier to collaboratively edit a shared document. Since this code is in Markdown, Google Colab notebooks are a great tool! They're the Google Docs of Jupyter notebooks. We've provided an example Colab notebook that you can copy and use for collaboratively developing your notes: [link](https://colab.research.google.com/drive/19B1VAXjzQaxuwxwl8VmERDaZPKHqCjkX?usp=sharing), but you're free to use any tools or collaboration structures you wish!
+
+
+## Optional local setup
+If waiting for the GitHub web page to load your changes makes iteration too slow for you, you can install the Ruby and Jekyll software behind the web page on your own machine and render the web page locally. Typically the load time for new changes is under a minute, so you really shouldn't need to do this.
+
+### Local setup
+
+1. Install Jekyll: https://jekyllrb.com/docs/installation/
+2. Clone this repository if you haven't already:
+```sh
+git clone ...
+```
+3. Install dependencies:
+```sh
+# From cs131_notes_dev/ directory
+bundle install
+```
+
+### Local development
+
+1. Launch server:
+```sh
+# From cs131_notes_dev/ directory
+bundle exec jekyll serve
+```
+2. Navigate to `localhost:4000/cs131_notes_dev/` in your web browser! That / at the end matters!
+3. Modify notes and save — local site should update automatically (just refresh).
diff --git a/_chapters/cameras/camera_parameters_and_stereo.md b/_chapters/cameras/camera_parameters_and_stereo.md
new file mode 100644
index 00000000..b96cad77
--- /dev/null
+++ b/_chapters/cameras/camera_parameters_and_stereo.md
@@ -0,0 +1,12 @@
+---
+title: Camera parameters and stereo
+keywords: (insert comma-separated keywords here)
+order: 18 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/cameras/pinhole_computational_corners.md b/_chapters/cameras/pinhole_computational_corners.md
new file mode 100644
index 00000000..2951866e
--- /dev/null
+++ b/_chapters/cameras/pinhole_computational_corners.md
@@ -0,0 +1,6 @@
+---
+title: Pinhole, computational, and corner cameras
+keywords: (insert comma-separated keywords here)
+order: 17 # Lecture number for 2020
+---
+
diff --git a/_chapters/images/clustering.md b/_chapters/images/clustering.md
new file mode 100644
index 00000000..f97cb88c
--- /dev/null
+++ b/_chapters/images/clustering.md
@@ -0,0 +1,12 @@
+---
+title: Clustering
+keywords: (insert comma-separated keywords here)
+order: 10 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/images/color.md b/_chapters/images/color.md
new file mode 100644
index 00000000..82b160d3
--- /dev/null
+++ b/_chapters/images/color.md
@@ -0,0 +1,12 @@
+---
+title: Color
+keywords: (insert comma-separated keywords here)
+order: 8 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/images/image_models_and_priors.md b/_chapters/images/image_models_and_priors.md
new file mode 100644
index 00000000..86a4f8bc
--- /dev/null
+++ b/_chapters/images/image_models_and_priors.md
@@ -0,0 +1,12 @@
+---
+title: Image models and priors
+keywords: (insert comma-separated keywords here)
+order: 7 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/images/segmentation.md b/_chapters/images/segmentation.md
new file mode 100644
index 00000000..226b76ff
--- /dev/null
+++ b/_chapters/images/segmentation.md
@@ -0,0 +1,12 @@
+---
+title: Segmentation
+keywords: (insert comma-separated keywords here)
+order: 9 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/instructions/example1.md b/_chapters/instructions/example1.md
new file mode 100644
index 00000000..fdb13c6b
--- /dev/null
+++ b/_chapters/instructions/example1.md
@@ -0,0 +1,109 @@
+---
+title: Example 1 from CS 231N
+keywords:
+order: 1
+---
+
+This year, the recommended way to work on assignments is through [Google Colaboratory](https://colab.research.google.com/). However, if you already own GPU-backed hardware and would prefer to work locally, we provide you with instructions for setting up a virtual environment.
+
+- [Working remotely on Google Colaboratory](#working-remotely-on-google-colaboratory)
+- [Working locally on your machine](#working-locally-on-your-machine)
+ - [Anaconda virtual environment](#anaconda-virtual-environment)
+ - [Python venv](#python-venv)
+ - [Installing packages](#installing-packages)
+
+### Working remotely on Google Colaboratory
+
+Google Colaboratory is basically a combination of Jupyter notebook and Google Drive. It runs entirely in the cloud and comes
+preinstalled with many packages (e.g. PyTorch and Tensorflow) so everyone has access to the same
+dependencies. Even cooler is the fact that Colab benefits from free access to hardware accelerators
+like GPUs (K80, P100) and TPUs, which will be particularly useful for assignments 2 and 3.
+
+**Requirements**. To use Colab, you must have a Google account with an associated Google Drive. Assuming you have both, you can connect Colab to your Drive with the following steps:
+
+1. Click the wheel in the top right corner and select `Settings`.
+2. Click on the `Manage Apps` tab.
+3. At the top, select `Connect more apps` which should bring up a `GSuite Marketplace` window.
+4. Search for **Colab** then click `Add`.
+
+**Workflow**. Every assignment provides you with a download link to a zip file containing Colab notebooks and Python starter code. You can upload the folder to Drive, open the notebooks in Colab and work on them, then save your progress back to Drive. We encourage you to watch the tutorial video below which covers the recommended workflow using assignment 1 as an example.
+
+
+
+**Best Practices**. There are a few things you should be aware of when working with Colab. The first thing to note is that resources aren't guaranteed (this is the price for being free). If you are idle for a certain amount of time or your total connection time exceeds the maximum allowed time (~12 hours), the Colab VM will disconnect. This means any unsaved progress will be lost. Thus, get into the habit of frequently saving your code whilst working on assignments. To read more about resource limitations in Colab, read their FAQ [here](https://research.google.com/colaboratory/faq.html).
+
+**Using a GPU**. Using a GPU is as simple as switching the runtime in Colab. Specifically, click `Runtime -> Change runtime type -> Hardware Accelerator -> GPU` and your Colab instance will automatically be backed by GPU compute.
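+
+As a quick sanity check once you've switched to a GPU runtime, you can confirm from a notebook cell that your framework actually sees the accelerator. A minimal sketch, assuming PyTorch (which Colab preinstalls) is the framework you're working with:
+
+```python
+import torch
+
+# True once the notebook is attached to a GPU-backed runtime
+print(torch.cuda.is_available())
+# Name of the attached accelerator (only valid when a GPU is present), e.g. a K80 or P100
+print(torch.cuda.get_device_name(0))
+```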
+
+If you're interested in learning more about Colab, we encourage you to visit the resources below:
+
+* [Intro to Google Colab](https://www.youtube.com/watch?v=inN8seMm7UI)
+* [Welcome to Colab](https://colab.research.google.com/notebooks/intro.ipynb)
+* [Overview of Colab Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)
+
+### Working locally on your machine
+If you wish to work locally, you should use a virtual environment. You can install one via Anaconda (recommended) or via Python's native `venv` module. Ensure you are using Python 3.7 as **we are no longer supporting Python 2**.
+
+#### Anaconda virtual environment
+We strongly recommend using the free [Anaconda Python distribution](https://www.anaconda.com/download/), which provides an easy way for you to handle package dependencies. Please be sure to download the Python 3 version, which currently installs Python 3.7. The neat thing about Anaconda is that it ships with [MKL optimizations](https://docs.anaconda.com/mkl-optimizations/) by default, which means your `numpy` and `scipy` code benefit from significant speed-ups without having to change a single line of code.
+
+Once you have Anaconda installed, it makes sense to create a virtual environment for the course. If you choose not to use a virtual environment (strongly not recommended!), it is up to you to make sure that all dependencies for the code are installed globally on your machine. To set up a virtual environment called `cs231n`, run the following in your terminal:
+
+```bash
+# this will create an anaconda environment
+# called cs231n in 'path/to/anaconda3/envs/'
+conda create -n cs231n python=3.7
+```
+
+To activate and enter the environment, run `conda activate cs231n`. To deactivate the environment, either run `conda deactivate cs231n` or exit the terminal. Note that every time you want to work on the assignment, you should rerun `conda activate cs231n`.
+
+```bash
+# sanity check that the path to the python
+# binary matches that of the anaconda env
+# after you activate it
+which python
+# for example, on my machine, this prints
+# $ '/Users/kevin/anaconda3/envs/sci/bin/python'
+```
+
+You may refer to [this page](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) for more detailed instructions on managing virtual environments with Anaconda.
+
+**Note:** If you've chosen to go the Anaconda route, you can safely skip the next section and move straight to [Installing Packages](#installing-packages).
+
+
+#### Python venv
+
+As of version 3.3, Python natively ships with a lightweight virtual environment module called [venv](https://docs.python.org/3/library/venv.html). Each virtual environment packages its own independent set of installed Python packages that are isolated from system-wide Python packages and runs a Python version that matches that of the binary that was used to create it. To set up a virtual environment called `cs231n`, run the following in your terminal:
+
+```bash
+# this will create a virtual environment
+# called cs231n in your home directory
+python3.7 -m venv ~/cs231n
+```
+
+To activate and enter the environment, run `source ~/cs231n/bin/activate`. To deactivate the environment, either run `deactivate` or exit the terminal. Note that every time you want to work on the assignment, you should rerun `source ~/cs231n/bin/activate`.
+
+```bash
+# sanity check that the path to the python
+# binary matches that of the virtual env
+# after you activate it
+which python
+# for example, on my machine, this prints
+# $ '/Users/kevin/cs231n/bin/python'
+```
+
+
+#### Installing packages
+
+Once you've **set up** and **activated** your virtual environment (via `conda` or `venv`), you should install the libraries needed to run the assignments using `pip`. To do so, run:
+
+```bash
+# again, ensure your virtual env (either conda or venv)
+# has been activated before running the commands below
+cd assignment1 # cd to the assignment directory
+
+# install assignment dependencies.
+# since the virtual env is activated,
+# this pip is associated with the
+# python binary of the environment
+pip install -r requirements.txt
+```
\ No newline at end of file
diff --git a/_chapters/instructions/example2.md b/_chapters/instructions/example2.md
new file mode 100644
index 00000000..5e070d3d
--- /dev/null
+++ b/_chapters/instructions/example2.md
@@ -0,0 +1,291 @@
+---
+title: Example 2 from CS 231N
+keywords:
+order: 2
+---
+
+This is an introductory lecture designed to introduce people from outside of Computer Vision to the Image Classification problem, and the data-driven approach. The Table of Contents:
+
+- [Image Classification](#image-classification)
+ - [Nearest Neighbor Classifier](#nearest-neighbor-classifier)
+ - [k - Nearest Neighbor Classifier](#k---nearest-neighbor-classifier)
+ - [Validation sets for Hyperparameter tuning](#validation-sets-for-hyperparameter-tuning)
+ - [Summary](#summary)
+ - [Summary: Applying kNN in practice](#summary-applying-knn-in-practice)
+ - [Further Reading](#further-reading)
+
+
+
+## Image Classification
+
+**Motivation**. In this section we will introduce the Image Classification problem, which is the task of assigning an input image one label from a fixed set of categories. This is one of the core problems in Computer Vision that, despite its simplicity, has a large variety of practical applications. Moreover, as we will see later in the course, many other seemingly distinct Computer Vision tasks (such as object detection, segmentation) can be reduced to image classification.
+
+**Example**. For example, in the image below an image classification model takes a single image and assigns probabilities to 4 labels, *{cat, dog, hat, mug}*. As shown in the image, keep in mind that to a computer an image is represented as one large 3-dimensional array of numbers. In this example, the cat image is 248 pixels wide, 400 pixels tall, and has three color channels Red,Green,Blue (or RGB for short). Therefore, the image consists of 248 x 400 x 3 numbers, or a total of 297,600 numbers. Each number is an integer that ranges from 0 (black) to 255 (white). Our task is to turn this quarter of a million numbers into a single label, such as *"cat"*.
+
+
+
+
+The task in Image Classification is to predict a single label (or a distribution over labels as shown here to indicate our confidence) for a given image. Images are 3-dimensional arrays of integers from 0 to 255, of size Width x Height x 3. The 3 represents the three color channels Red, Green, Blue.
+
+
+**Challenges**. Since this task of recognizing a visual concept (e.g. cat) is relatively trivial for a human to perform, it is worth considering the challenges involved from the perspective of a Computer Vision algorithm. As we present (an inexhaustive) list of challenges below, keep in mind the raw representation of images as a 3-D array of brightness values:
+
+- **Viewpoint variation**. A single instance of an object can be oriented in many ways with respect to the camera.
+- **Scale variation**. Visual classes often exhibit variation in their size (size in the real world, not only in terms of their extent in the image).
+- **Deformation**. Many objects of interest are not rigid bodies and can be deformed in extreme ways.
+- **Occlusion**. The objects of interest can be occluded. Sometimes only a small portion of an object (as little as a few pixels) could be visible.
+- **Illumination conditions**. The effects of illumination are drastic on the pixel level.
+- **Background clutter**. The objects of interest may *blend* into their environment, making them hard to identify.
+- **Intra-class variation**. The classes of interest can often be relatively broad, such as *chair*. There are many different types of these objects, each with their own appearance.
+
+A good image classification model must be invariant to the cross product of all these variations, while simultaneously retaining sensitivity to the inter-class variations.
+
+
+
+
+
+
+**Data-driven approach**. How might we go about writing an algorithm that can classify images into distinct categories? Unlike writing an algorithm for, for example, sorting a list of numbers, it is not obvious how one might write an algorithm for identifying cats in images. Therefore, instead of trying to specify what every one of the categories of interest look like directly in code, the approach that we will take is not unlike one you would take with a child: we're going to provide the computer with many examples of each class and then develop learning algorithms that look at these examples and learn about the visual appearance of each class. This approach is referred to as a *data-driven approach*, since it relies on first accumulating a *training dataset* of labeled images. Here is an example of what such a dataset might look like:
+
+
+
+
+An example training set for four visual categories. In practice we may have thousands of categories and hundreds of thousands of images for each category.
+
+
+**The image classification pipeline**. We've seen that the task in Image Classification is to take an array of pixels that represents a single image and assign a label to it. Our complete pipeline can be formalized as follows:
+
+- **Input:** Our input consists of a set of *N* images, each labeled with one of *K* different classes. We refer to this data as the *training set*.
+- **Learning:** Our task is to use the training set to learn what every one of the classes looks like. We refer to this step as *training a classifier*, or *learning a model*.
+- **Evaluation:** In the end, we evaluate the quality of the classifier by asking it to predict labels for a new set of images that it has never seen before. We will then compare the true labels of these images to the ones predicted by the classifier. Intuitively, we're hoping that a lot of the predictions match up with the true answers (which we call the *ground truth*).
+
+
+
+### Nearest Neighbor Classifier
+As our first approach, we will develop what we call a **Nearest Neighbor Classifier**. This classifier has nothing to do with Convolutional Neural Networks and it is very rarely used in practice, but it will allow us to get an idea about the basic approach to an image classification problem.
+
+**Example image classification dataset: CIFAR-10.** One popular toy image classification dataset is the CIFAR-10 dataset. This dataset consists of 60,000 tiny images that are 32 pixels high and wide. Each image is labeled with one of 10 classes (for example *"airplane, automobile, bird, etc"*). These 60,000 images are partitioned into a training set of 50,000 images and a test set of 10,000 images. In the image below you can see 10 random example images from each one of the 10 classes:
+
+
+
+
+Left: Example images from the CIFAR-10 dataset. Right: first column shows a few test images and next to each we show the top 10 nearest neighbors in the training set according to pixel-wise difference.
+
+
+Suppose now that we are given the CIFAR-10 training set of 50,000 images (5,000 images for every one of the labels), and we wish to label the remaining 10,000. The nearest neighbor classifier will take a test image, compare it to every single one of the training images, and predict the label of the closest training image. In the image above and on the right you can see an example result of such a procedure for 10 example test images. Notice that in only about 3 out of 10 examples an image of the same class is retrieved, while in the other 7 examples this is not the case. For example, in the 8th row the nearest training image to the horse head is a red car, presumably due to the strong black background. As a result, this image of a horse would in this case be mislabeled as a car.
+
+You may have noticed that we left unspecified the details of exactly how we compare two images, which in this case are just two blocks of 32 x 32 x 3. One of the simplest possibilities is to compare the images pixel by pixel and add up all the differences. In other words, given two images and representing them as vectors \\( I_1, I_2 \\) , a reasonable choice for comparing them might be the **L1 distance**:
+
+$$
+d_1 (I_1, I_2) = \sum_{p} \left| I^p_1 - I^p_2 \right|
+$$
+
+Where the sum is taken over all pixels. Here is the procedure visualized:
+
+
+
+
+An example of using pixel-wise differences to compare two images with L1 distance (for one color channel in this example). Two images are subtracted elementwise and then all differences are added up to a single number. If two images are identical the result will be zero. But if the images are very different the result will be large.
+
+
+Let's also look at how we might implement the classifier in code. First, let's load the CIFAR-10 data into memory as 4 arrays: the training data/labels and the test data/labels. In the code below, `Xtr` (of size 50,000 x 32 x 32 x 3) holds all the images in the training set, and a corresponding 1-dimensional array `Ytr` (of length 50,000) holds the training labels (from 0 to 9):
+
+```python
+Xtr, Ytr, Xte, Yte = load_CIFAR10('data/cifar10/') # a magic function we provide
+# flatten out all images to be one-dimensional
+Xtr_rows = Xtr.reshape(Xtr.shape[0], 32 * 32 * 3) # Xtr_rows becomes 50000 x 3072
+Xte_rows = Xte.reshape(Xte.shape[0], 32 * 32 * 3) # Xte_rows becomes 10000 x 3072
+```
+
+Now that we have all images stretched out as rows, here is how we could train and evaluate a classifier:
+
+```python
+nn = NearestNeighbor() # create a Nearest Neighbor classifier class
+nn.train(Xtr_rows, Ytr) # train the classifier on the training images and labels
+Yte_predict = nn.predict(Xte_rows) # predict labels on the test images
+# and now print the classification accuracy, which is the fraction
+# of examples that are correctly predicted (i.e. label matches)
+print('accuracy: %f' % np.mean(Yte_predict == Yte))
+```
+
+Notice that as an evaluation criterion, it is common to use the **accuracy**, which measures the fraction of predictions that were correct. Notice that all classifiers we will build satisfy this one common API: they have a `train(X,y)` function that takes the data and the labels to learn from. Internally, the class should build some kind of model of the labels and how they can be predicted from the data. And then there is a `predict(X)` function, which takes new data and predicts the labels. Of course, we've left out the meat of things - the actual classifier itself. Here is an implementation of a simple Nearest Neighbor classifier with the L1 distance that satisfies this template:
+
+```python
+import numpy as np
+
+class NearestNeighbor(object):
+ def __init__(self):
+ pass
+
+ def train(self, X, y):
+ """ X is N x D where each row is an example. Y is 1-dimension of size N """
+ # the nearest neighbor classifier simply remembers all the training data
+ self.Xtr = X
+ self.ytr = y
+
+ def predict(self, X):
+ """ X is N x D where each row is an example we wish to predict label for """
+ num_test = X.shape[0]
+ # lets make sure that the output type matches the input type
+ Ypred = np.zeros(num_test, dtype = self.ytr.dtype)
+
+ # loop over all test rows
+ for i in range(num_test):
+ # find the nearest training image to the i'th test image
+ # using the L1 distance (sum of absolute value differences)
+ distances = np.sum(np.abs(self.Xtr - X[i,:]), axis = 1)
+ min_index = np.argmin(distances) # get the index with smallest distance
+ Ypred[i] = self.ytr[min_index] # predict the label of the nearest example
+
+ return Ypred
+```
+
+If you ran this code, you would see that this classifier only achieves **38.6%** on CIFAR-10. That's more impressive than guessing at random (which would give 10% accuracy since there are 10 classes), but nowhere near human performance (which is [estimated at about 94%](https://karpathy.github.io/2011/04/27/manually-classifying-cifar10/)) or near state-of-the-art Convolutional Neural Networks that achieve about 95%, matching human accuracy (see the [leaderboard](https://www.kaggle.com/c/cifar-10/leaderboard) of a recent Kaggle competition on CIFAR-10).
+
+**The choice of distance.**
+There are many other ways of computing distances between vectors. Another common choice could be to instead use the **L2 distance**, which has the geometric interpretation of computing the euclidean distance between two vectors. The distance takes the form:
+
+$$
+d_2 (I_1, I_2) = \sqrt{\sum_{p} \left( I^p_1 - I^p_2 \right)^2}
+$$
+
+In other words we would be computing the pixelwise difference as before, but this time we square all of them, add them up and finally take the square root. In numpy, using the code from above we would need to only replace a single line of code. The line that computes the distances:
+
+```python
+distances = np.sqrt(np.sum(np.square(self.Xtr - X[i,:]), axis = 1))
+```
+
+Note that I included the `np.sqrt` call above, but in a practical nearest neighbor application we could leave out the square root operation because square root is a *monotonic function*. That is, it scales the absolute sizes of the distances but it preserves the ordering, so the nearest neighbors with or without it are identical. If you ran the Nearest Neighbor classifier on CIFAR-10 with this distance, you would obtain **35.4%** accuracy (slightly lower than our L1 distance result).
+
+**L1 vs. L2.** It is interesting to consider differences between the two metrics. In particular, the L2 distance is much more unforgiving than the L1 distance when it comes to differences between two vectors. That is, the L2 distance prefers many medium disagreements to one big one. L1 and L2 distances (or equivalently the L1/L2 norms of the differences between a pair of images) are the most commonly used special cases of a [p-norm](https://planetmath.org/vectorpnorm).
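+
+To make the "many medium disagreements vs. one big one" point concrete, here is a tiny numpy illustration (not part of the assignment code): two difference vectors with the same L1 distance from the origin but very different L2 distances.
+
+```python
+import numpy as np
+
+many_small = np.array([1.0, 1.0, 1.0, 1.0])  # four medium disagreements
+one_big    = np.array([4.0, 0.0, 0.0, 0.0])  # one big disagreement
+
+print(np.sum(np.abs(many_small)), np.sum(np.abs(one_big)))          # L1: 4.0 vs 4.0
+print(np.sqrt(np.sum(many_small**2)), np.sqrt(np.sum(one_big**2)))  # L2: 2.0 vs 4.0
+```
+
+Under L1 the two vectors are equally far from zero, but under L2 the single large disagreement is penalized much more heavily.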
+
+
+
+### k - Nearest Neighbor Classifier
+
+You may have noticed that it is strange to only use the label of the nearest image when we wish to make a prediction. Indeed, it is almost always the case that one can do better by using what's called a **k-Nearest Neighbor Classifier**. The idea is very simple: instead of finding the single closest image in the training set, we will find the top **k** closest images, and have them vote on the label of the test image. In particular, when *k = 1*, we recover the Nearest Neighbor classifier. Intuitively, higher values of **k** have a smoothing effect that makes the classifier more resistant to outliers:
+
+
+
+
+An example of the difference between Nearest Neighbor and a 5-Nearest Neighbor classifier, using 2-dimensional points and 3 classes (red, blue, green). The colored regions show the decision boundaries induced by the classifier with an L2 distance. The white regions show points that are ambiguously classified (i.e. class votes are tied for at least two classes). Notice that in the case of a NN classifier, outlier datapoints (e.g. green point in the middle of a cloud of blue points) create small islands of likely incorrect predictions, while the 5-NN classifier smooths over these irregularities, likely leading to better generalization on the test data (not shown). Also note that the gray regions in the 5-NN image are caused by ties in the votes among the nearest neighbors (e.g. 2 neighbors are red, next two neighbors are blue, last neighbor is green).
+
+
+In practice, you will almost always want to use k-Nearest Neighbor. But what value of *k* should you use? We turn to this problem next.
+
+
+
+### Validation sets for Hyperparameter tuning
+
+The k-nearest neighbor classifier requires a setting for *k*. But what number works best? Additionally, we saw that there are many different distance functions we could have used: L1 norm, L2 norm, there are many other choices we didn't even consider (e.g. dot products). These choices are called **hyperparameters** and they come up very often in the design of many Machine Learning algorithms that learn from data. It's often not obvious what values/settings one should choose.
+
+You might be tempted to suggest that we should try out many different values and see what works best. That is a fine idea and that's indeed what we will do, but this must be done very carefully. In particular, **we cannot use the test set for the purpose of tweaking hyperparameters**. Whenever you're designing Machine Learning algorithms, you should think of the test set as a very precious resource that should ideally never be touched until one time at the very end. Otherwise, the very real danger is that you may tune your hyperparameters to work well on the test set, but if you were to deploy your model you could see a significantly reduced performance. In practice, we would say that you **overfit** to the test set. Another way of looking at it is that if you tune your hyperparameters on the test set, you are effectively using the test set as the training set, and therefore the performance you achieve on it will be too optimistic with respect to what you might actually observe when you deploy your model. But if you only use the test set once at end, it remains a good proxy for measuring the **generalization** of your classifier (we will see much more discussion surrounding generalization later in the class).
+
+> Evaluate on the test set only a single time, at the very end.
+
+Luckily, there is a correct way of tuning the hyperparameters and it does not touch the test set at all. The idea is to split our training set in two: a slightly smaller training set, and what we call a **validation set**. Using CIFAR-10 as an example, we could for example use 49,000 of the training images for training, and leave 1,000 aside for validation. This validation set is essentially used as a fake test set to tune the hyper-parameters.
+
+Here is what this might look like in the case of CIFAR-10:
+
+```python
+# assume we have Xtr_rows, Ytr, Xte_rows, Yte as before
+# recall Xtr_rows is 50,000 x 3072 matrix
+Xval_rows = Xtr_rows[:1000, :] # take first 1000 for validation
+Yval = Ytr[:1000]
+Xtr_rows = Xtr_rows[1000:, :] # keep last 49,000 for train
+Ytr = Ytr[1000:]
+
+# find hyperparameters that work best on the validation set
+validation_accuracies = []
+for k in [1, 3, 5, 10, 20, 50, 100]:
+
+ # use a particular value of k and evaluation on validation data
+ nn = NearestNeighbor()
+ nn.train(Xtr_rows, Ytr)
+ # here we assume a modified NearestNeighbor class that can take a k as input
+ Yval_predict = nn.predict(Xval_rows, k = k)
+ acc = np.mean(Yval_predict == Yval)
+  print('accuracy: %f' % (acc,))
+
+ # keep track of what works on the validation set
+ validation_accuracies.append((k, acc))
+```
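+
+The loop above assumes a modified `NearestNeighbor` class whose `predict` accepts `k`. As a minimal sketch of what that voting step could look like (`knn_predict` is a hypothetical helper written for illustration, not the provided starter code):
+
+```python
+import numpy as np
+
+def knn_predict(Xtr, ytr, X, k=1):
+  """Label each row of X by a majority vote among its k nearest training rows,
+  using the L1 distance. Assumes integer class labels (0-9 for CIFAR-10)."""
+  Ypred = np.zeros(X.shape[0], dtype=ytr.dtype)
+  for i in range(X.shape[0]):
+    distances = np.sum(np.abs(Xtr - X[i, :]), axis=1)  # L1 to every training row
+    closest_y = ytr[np.argsort(distances)[:k]]         # labels of the k nearest rows
+    Ypred[i] = np.bincount(closest_y).argmax()         # majority vote over those labels
+  return Ypred
+```
+
+With `k = 1` this reduces to the Nearest Neighbor classifier from earlier.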
+
+By the end of this procedure, we could plot a graph that shows which values of *k* work best. We would then stick with this value and evaluate once on the actual test set.
+
+> Split your training set into a training set and a validation set. Use the validation set to tune all hyperparameters. At the end run a single time on the test set and report performance.
+
+**Cross-validation**.
+In cases where the size of your training data (and therefore also the validation data) might be small, people sometimes use a more sophisticated technique for hyperparameter tuning called **cross-validation**. Working with our previous example, the idea is that instead of arbitrarily picking the first 1000 datapoints to be the validation set and rest training set, you can get a better and less noisy estimate of how well a certain value of *k* works by iterating over different validation sets and averaging the performance across these. For example, in 5-fold cross-validation, we would split the training data into 5 equal folds, use 4 of them for training, and 1 for validation. We would then iterate over which fold is the validation fold, evaluate the performance, and finally average the performance across the different folds.
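+
+A rough sketch of that procedure, assuming the same `Xtr_rows`, `Ytr`, and modified `NearestNeighbor` as above (again, illustrative rather than the official assignment code):
+
+```python
+import numpy as np
+
+num_folds = 5
+X_folds = np.array_split(Xtr_rows, num_folds)
+y_folds = np.array_split(Ytr, num_folds)
+
+k = 7  # evaluate one candidate value of k
+fold_accuracies = []
+for i in range(num_folds):
+  # fold i is the validation fold; the remaining folds form the training set
+  X_train = np.concatenate(X_folds[:i] + X_folds[i+1:])
+  y_train = np.concatenate(y_folds[:i] + y_folds[i+1:])
+  nn = NearestNeighbor()
+  nn.train(X_train, y_train)
+  acc = np.mean(nn.predict(X_folds[i], k = k) == y_folds[i])
+  fold_accuracies.append(acc)
+
+print('mean accuracy for k = %d: %f' % (k, np.mean(fold_accuracies)))
+```
+
+Repeating this for every candidate value of *k* produces the kind of plot shown below.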
+
+
+
+
+Example of a 5-fold cross-validation run for the parameter k. For each value of k we train on 4 folds and evaluate on the 5th. Hence, for each k we receive 5 accuracies on the validation fold (accuracy is the y-axis, each result is a point). The trend line is drawn through the average of the results for each k and the error bars indicate the standard deviation. Note that in this particular case, the cross-validation suggests that a value of about k = 7 works best on this particular dataset (corresponding to the peak in the plot). If we used more than 5 folds, we might expect to see a smoother (i.e. less noisy) curve.
+
+
+
+
+**In practice**. In practice, people prefer to avoid cross-validation in favor of having a single validation split, since cross-validation can be computationally expensive. The splits people tend to use are between 50%-90% of the training data for training and the rest for validation. However, this depends on multiple factors: for example, if the number of hyperparameters is large you may prefer to use bigger validation splits. If the number of examples in the validation set is small (perhaps only a few hundred or so), it is safer to use cross-validation. Typical numbers of folds you can see in practice would be 3-fold, 5-fold or 10-fold cross-validation.
+
+
+
+
+Common data splits. A training and test set is given. The training set is split into folds (for example 5 folds here). The folds 1-4 become the training set. One fold (e.g. fold 5 here in yellow) is denoted as the Validation fold and is used to tune the hyperparameters. Cross-validation goes a step further and iterates over the choice of which fold is the validation fold, separately from 1-5. This would be referred to as 5-fold cross-validation. In the very end once the model is trained and all the best hyperparameters were determined, the model is evaluated a single time on the test data (red).
+
+
+
+
+**Pros and Cons of Nearest Neighbor classifier.**
+
+It is worth considering some advantages and drawbacks of the Nearest Neighbor classifier. Clearly, one advantage is that it is very simple to implement and understand. Additionally, the classifier takes no time to train, since all that is required is to store and possibly index the training data. However, we pay that computational cost at test time, since classifying a test example requires a comparison to every single training example. This is backwards, since in practice we often care about the test time efficiency much more than the efficiency at training time. In fact, the deep neural networks we will develop later in this class shift this tradeoff to the other extreme: They are very expensive to train, but once the training is finished it is very cheap to classify a new test example. This mode of operation is much more desirable in practice.
+
+As an aside, the computational complexity of the Nearest Neighbor classifier is an active area of research, and several **Approximate Nearest Neighbor** (ANN) algorithms and libraries exist that can accelerate the nearest neighbor lookup in a dataset (e.g. [FLANN](https://github.com/mariusmuja/flann)). These algorithms allow one to trade off the correctness of the nearest neighbor retrieval with its space/time complexity during retrieval, and usually rely on a pre-processing/indexing stage that involves building a kdtree, or running the k-means algorithm.
+
+The Nearest Neighbor Classifier may be a good choice in some settings (especially if the data is low-dimensional), but it is rarely appropriate for use in practical image classification settings. One problem is that images are high-dimensional objects (i.e. they often contain many pixels), and distances over high-dimensional spaces can be very counter-intuitive. The image below illustrates the point that the pixel-based L2 similarities we developed above are very different from perceptual similarities:
+
+
+
+
+Pixel-based distances on high-dimensional data (and images especially) can be very unintuitive. An original image (left) and three other images next to it that are all equally far away from it based on L2 pixel distance. Clearly, the pixel-wise distance does not correspond at all to perceptual or semantic similarity.
+
+
+Here is one more visualization to convince you that using pixel differences to compare images is inadequate. We can use a visualization technique called t-SNE to take the CIFAR-10 images and embed them in two dimensions so that their (local) pairwise distances are best preserved. In this visualization, images that are shown nearby are considered to be very near according to the L2 pixelwise distance we developed above:
+
+
+
+
+CIFAR-10 images embedded in two dimensions with t-SNE. Images that are nearby on this image are considered to be close based on the L2 pixel distance. Notice the strong effect of background rather than semantic class differences.
+
+
+In particular, note that images that are nearby each other are much more a function of the general color distribution of the images, or the type of background rather than their semantic identity. For example, a dog can be seen very near a frog since both happen to be on white background. Ideally we would like images of all of the 10 classes to form their own clusters, so that images of the same class are nearby to each other regardless of irrelevant characteristics and variations (such as the background). However, to get this property we will have to go beyond raw pixels.
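+
+If you want to reproduce a (much smaller) version of this embedding yourself, one option is scikit-learn's t-SNE implementation. A minimal sketch, assuming `Xtr_rows` from earlier and subsampling for speed:
+
+```python
+import numpy as np
+from sklearn.manifold import TSNE
+
+subset = Xtr_rows[:1000].astype(np.float64)  # t-SNE is slow on the full training set
+embedding = TSNE(n_components=2).fit_transform(subset)
+print(embedding.shape)                       # (1000, 2): one 2-D point per image
+```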
+
+
+
+### Summary
+
+In summary:
+
+- We introduced the problem of **Image Classification**, in which we are given a set of images that are all labeled with a single category. We are then asked to predict these categories for a novel set of test images and measure the accuracy of the predictions.
+- We introduced a simple classifier called the **Nearest Neighbor classifier**. We saw that there are multiple hyper-parameters (such as value of k, or the type of distance used to compare examples) that are associated with this classifier and that there was no obvious way of choosing them.
+- We saw that the correct way to set these hyperparameters is to split your training data into two: a training set and a fake test set, which we call a **validation set**. We try different hyperparameter values and keep the values that lead to the best performance on the validation set.
+- If the lack of training data is a concern, we discussed a procedure called **cross-validation**, which can help reduce noise in estimating which hyperparameters work best.
+- Once the best hyperparameters are found, we fix them and perform a single **evaluation** on the actual test set.
+- We saw that Nearest Neighbor can get us about 40% accuracy on CIFAR-10. It is simple to implement but requires us to store the entire training set and it is expensive to evaluate on a test image.
+- Finally, we saw that the use of L1 or L2 distances on raw pixel values is not adequate since the distances correlate more strongly with backgrounds and color distributions of images than with their semantic content.
+
+In the next lectures we will embark on addressing these challenges and eventually arrive at solutions that give 90% accuracies, allow us to completely discard the training set once learning is complete, and allow us to evaluate a test image in less than a millisecond.
+
+
+
+### Summary: Applying kNN in practice
+
+If you wish to apply kNN in practice (hopefully not on images, or perhaps as only a baseline) proceed as follows:
+
+1. Preprocess your data: Normalize the features in your data (e.g. one pixel in images) to have zero mean and unit variance (see the short numpy sketch after this list). We will cover this in more detail in later sections, and chose not to cover data normalization in this section because pixels in images are usually homogeneous and do not exhibit widely different distributions, alleviating the need for data normalization.
+2. If your data is very high-dimensional, consider using a dimensionality reduction technique such as PCA ([wiki ref](https://en.wikipedia.org/wiki/Principal_component_analysis), [CS229ref](http://cs229.stanford.edu/notes/cs229-notes10.pdf), [blog ref](https://web.archive.org/web/20150503165118/http://www.bigdataexaminer.com:80/understanding-dimensionality-reduction-principal-component-analysis-and-singular-value-decomposition/)), NCA ([wiki ref](https://en.wikipedia.org/wiki/Neighbourhood_components_analysis), [blog ref](https://kevinzakka.github.io/2020/02/10/nca/)), or even [Random Projections](https://scikit-learn.org/stable/modules/random_projection.html).
+3. Split your training data randomly into train/val splits. As a rule of thumb, between 70-90% of your data usually goes to the train split. This setting depends on how many hyperparameters you have and how much of an influence you expect them to have. If there are many hyperparameters to estimate, you should err on the side of having a larger validation set to estimate them effectively. If you are concerned about the size of your validation data, it is best to split the training data into folds and perform cross-validation. If you can afford the computational budget it is always safer to go with cross-validation (the more folds the better, but more expensive).
+4. Train and evaluate the kNN classifier on the validation data (for all folds, if doing cross-validation) for many choices of **k** (e.g. the more the better) and across different distance types (L1 and L2 are good candidates)
+5. If your kNN classifier is running too long, consider using an Approximate Nearest Neighbor library (e.g. [FLANN](https://github.com/mariusmuja/flann)) to accelerate the retrieval (at cost of some accuracy).
+6. Take note of the hyperparameters that gave the best results. There is a question of whether you should use the full training set with the best hyperparameters, since the optimal hyperparameters might change if you were to fold the validation data into your training set (since the size of the data would be larger). In practice it is cleaner to not use the validation data in the final classifier and consider it to be *burned* on estimating the hyperparameters. Evaluate the best model on the test set. Report the test set accuracy and declare the result to be the performance of the kNN classifier on your data.
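+
+As referenced in step 1, here is a short numpy sketch of the preprocessing and random train/val split (`normalize_and_split` is a hypothetical helper written for illustration, not part of the assignment code):
+
+```python
+import numpy as np
+
+def normalize_and_split(X, y, val_fraction=0.2, seed=0):
+  """Zero-mean / unit-variance features, then a random train/val split."""
+  mean, std = X.mean(axis=0), X.std(axis=0) + 1e-8  # epsilon avoids division by zero
+  Xn = (X - mean) / std
+  idx = np.random.default_rng(seed).permutation(X.shape[0])
+  num_val = int(val_fraction * X.shape[0])
+  val_idx, train_idx = idx[:num_val], idx[num_val:]
+  return Xn[train_idx], y[train_idx], Xn[val_idx], y[val_idx]
+```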
+
+
+
+#### Further Reading
+
+Here are some (optional) links you may find interesting for further reading:
+
+- [A Few Useful Things to Know about Machine Learning](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf), where especially section 6 is related, but the whole paper is warmly recommended reading.
+
+- [Recognizing and Learning Object Categories](https://people.csail.mit.edu/torralba/shortCourseRLOC/index.html), a short course on object categorization at ICCV 2005.
\ No newline at end of file
diff --git a/_chapters/instructions/template.md b/_chapters/instructions/template.md
new file mode 100644
index 00000000..8b299061
--- /dev/null
+++ b/_chapters/instructions/template.md
@@ -0,0 +1,110 @@
+---
+title: Template
+keywords:
+order: 0
+---
+
+You can start your notes here before you start diving into specific topics under each heading. This is a useful place to define the topic of the day and lay out the structure of your lecture notes. You'll edit the Markdown file, in this case template.md, which will automatically convert to HTML and serve this web page you're reading!
+
+The table of contents can link to each section so long as you match the names right (see comments in template.md for more elaboration on this!). This Markdown to HTML mapping doesn't like periods in the section titles and won't link them from the table of contents, so use dashes instead if you need to.
+
+- [First Big Topic](#first-big-topic)
+ - [Subtopic 1-1](#subtopic-1-1)
+ - [Subtopic 1-2](#subtopic-1-2)
+ - [Subtopic 1-3](#subtopic-1-3)
+- [Second Big Topic](#topic2)
+- [Third Big Topic](#topic3)
+
+[//]: # (This is how you can make a comment that won't appear in the web page! It might be visible on some machines/browsers so use this only for development.)
+
+[//]: # (Notice in the table of contents that [First Big Topic] matches #first-big-topic, except for all lowercase and spaces are replaced with dashes. This is important so that the table of contents links properly to the sections)
+
+[//]: # (Leave this line here, but you can replace the name field with anything! It's used in the HTML structure of the page but isn't visible to users)
+
+
+## First Big Topic
+
+Here you can start to talk about the first topic of your notes. You can bold text like **this**, or italicize text like *this*. If you want to make a numbered list it's as easy as
+1.
+2.
+3.
+
+- Bullet
+- points
+- are
+- similar
+
+For a more detailed cheatsheet on the most important functionality of Markdown, check out this link https://wordpress.com/support/markdown-quick-reference/, which you can format in Markdown with your own [link title](https://wordpress.com/support/markdown-quick-reference/)
+
+
+
+### Subtopic 1-1
+You might want to include images in your notes, since Computer Vision as a field is blessed with tons of cool visualizations. Here's an example from the CS 231N notes page we included as a reference for you:
+
+
+
+
+Put your informative caption here! If you really want to mess around with the classes in this div container then feel free, but inserting images just like this should work great!
+
+
+
+### Subtopic 1-2
+Sometimes you might want to insert some code snippets into your notes. As an example, here's a snippet of python code taken from the CS 231N notes:
+```python
+Xtr, Ytr, Xte, Yte = load_CIFAR10('data/cifar10/') # a magic function we provide
+# flatten out all images to be one-dimensional
+Xtr_rows = Xtr.reshape(Xtr.shape[0], 32 * 32 * 3) # Xtr_rows becomes 50000 x 3072
+Xte_rows = Xte.reshape(Xte.shape[0], 32 * 32 * 3) # Xte_rows becomes 10000 x 3072
+```
+
+
+### Subtopic 1-3
+Sometimes you might want to write some mathematical equations, and LaTeX is a great tool for that! You can write an inline equation like this \\( a^2 = b^2 \\), or you can display an equation on its own line like this! \\[ a^2 = b^2 + c^2 \\]
+
+You can also apply LaTeX syntax to label your equations and refer to them later! Here's the equation:
+
+$$ \begin{equation} \label{your_label} a^2 = b^2 + c^2 + d^2 + e^2 \end{equation} $$
+
+and here's a linked reference to it: \eqref{your_label}. For now, this configuration likes the \\"\\$\\$ equation stuff ... \\$\\$\\" syntax to have an empty line above and below it, but it displays the same anyway.
+
+**For a guide on LaTeX syntax and how to write mathematical equations and formulas with it, check out [this link](https://www.overleaf.com/learn/latex/mathematical_expressions)**
+
+**Here's a short guide on how to use the basics of LaTeX**
+- You've seen above the syntax to start and end an equation, so now let's work on what you fill in the middle
+- You can make variables and expressions **bold** in equations too: \\(\mathbf{x} + y\\)
+- Superscripts and subscripts are easy: use the ^ and _ symbol and bound your super/sub script by {} if it's more than one character. For example: \\(e^{-x+10}\\)
+- Greek letters are also simple: use the \ character with their written name with optional capitalization, such as alpha or Alpha. For example: \\(\alpha + \beta + \gamma + \delta + \Gamma + \Delta\\). Not all capital Greek letters work like this, but you can search online for solutions if this trick fails or reach out to the CAs. In general the \ character in LaTeX is the gateway to all kinds of special characters and functionalities.
+- Sums and Products are really useful in LaTeX. You can use both superscripts and subscripts to mark the bounds: \\(\log(\prod_{i=0}^{2n}i^2) = \sum_{i=0}^{2n}\log (i^2)\\)
+- Another useful trick is to write out a matrix or a vector in LaTeX. There's a lot of customization you can do with this, so check out this [page](https://www.overleaf.com/learn/latex/Matrices) for more details. Here are some examples in our Markdown environment:
+
+
+$$\begin{bmatrix}
+1 & 2 & 3\\
+a & b & c
+\end{bmatrix}$$
+
+$$\begin{bmatrix}
+1\\
+2\\
+3\\
+\end{bmatrix}$$
+
+$$\begin{bmatrix}
+1 & 2 & 3\\
+\end{bmatrix}$$
+
+As with the labelled equations, it makes a difference whether the lines above and below the equation are blank, so keep that in mind while debugging!
+
+
+
+
+
+## Second Big Topic
+
+How can you experiment with your Markdown visualization? At least one group member can follow the instructions in the README.md to install the Ruby and Jekyll requirements in order to visualize your notes on localhost for quick iteration. Once you have a handle on the basic mechanics of Markdown, you can write most of your notes without every team member needing to visualize on their own machine.
+
+We recommend writing your notes in a shared document that everyone can simultaneously edit. Since this code is in Markdown, Google Colab notebooks are a great tool! They're the Google Docs of Jupyter notebooks. We've provided an example Colab notebook that you can copy and use for collaboratively developing your notes: [link](https://colab.research.google.com/drive/19B1VAXjzQaxuwxwl8VmERDaZPKHqCjkX?usp=sharing), but you're free to use any tools you wish to collaborate!
+
+
+
+## Third Big Topic
+This should give you the primary tools to develop your notes. Check out the [markdown quick reference](https://wordpress.com/support/markdown-quick-reference/) for any further Markdown functionality that you may find useful, and reach out to the teaching team on Piazza if you have any questions about how to create your lecture notes.
diff --git a/_chapters/pixels/edge_detection.md b/_chapters/pixels/edge_detection.md
new file mode 100644
index 00000000..54234887
--- /dev/null
+++ b/_chapters/pixels/edge_detection.md
@@ -0,0 +1,12 @@
+---
+title: Edge detection
+keywords: (insert comma-separated keywords here)
+order: 4 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/pixels/feature_descriptors.md b/_chapters/pixels/feature_descriptors.md
new file mode 100644
index 00000000..dba5277e
--- /dev/null
+++ b/_chapters/pixels/feature_descriptors.md
@@ -0,0 +1,12 @@
+---
+title: Feature descriptors
+keywords: (insert comma-separated keywords here)
+order: 6 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/pixels/features_and_fitting.md b/_chapters/pixels/features_and_fitting.md
new file mode 100644
index 00000000..63e89603
--- /dev/null
+++ b/_chapters/pixels/features_and_fitting.md
@@ -0,0 +1,12 @@
+---
+title: Features and fitting
+keywords: (insert comma-separated keywords here)
+order: 5 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/pixels/filters_and_convolutions.md b/_chapters/pixels/filters_and_convolutions.md
new file mode 100644
index 00000000..4fd4833c
--- /dev/null
+++ b/_chapters/pixels/filters_and_convolutions.md
@@ -0,0 +1,12 @@
+---
+title: Filters and convolutions
+keywords: (insert comma-separated keywords here)
+order: 3 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/pixels/images_and_transformations.md b/_chapters/pixels/images_and_transformations.md
new file mode 100644
index 00000000..56837539
--- /dev/null
+++ b/_chapters/pixels/images_and_transformations.md
@@ -0,0 +1,14 @@
+---
+title: Images and transformations
+keywords: (insert comma-separated keywords here)
+order: 2 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
+
+Testing student changes to markdown.
\ No newline at end of file
diff --git a/_chapters/recognition/detecting_objects_by_parts.md b/_chapters/recognition/detecting_objects_by_parts.md
new file mode 100644
index 00000000..2d8a9322
--- /dev/null
+++ b/_chapters/recognition/detecting_objects_by_parts.md
@@ -0,0 +1,12 @@
+---
+title: Detecting objects by parts
+keywords: (insert comma-separated keywords here)
+order: 14 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/recognition/visual_bag_of_words.md b/_chapters/recognition/visual_bag_of_words.md
new file mode 100644
index 00000000..7f2bbb26
--- /dev/null
+++ b/_chapters/recognition/visual_bag_of_words.md
@@ -0,0 +1,12 @@
+---
+title: Visual bag of words
+keywords: (insert comma-separated keywords here)
+order: 13 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/recognition/visual_recognition.md b/_chapters/recognition/visual_recognition.md
new file mode 100644
index 00000000..d3c4e7b7
--- /dev/null
+++ b/_chapters/recognition/visual_recognition.md
@@ -0,0 +1,12 @@
+---
+title: Visual recognition
+keywords: (insert comma-separated keywords here)
+order: 12 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/videos/motion.md b/_chapters/videos/motion.md
new file mode 100644
index 00000000..f6ea249b
--- /dev/null
+++ b/_chapters/videos/motion.md
@@ -0,0 +1,12 @@
+---
+title: Motion
+keywords: (insert comma-separated keywords here)
+order: 15 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_chapters/videos/tracking.md b/_chapters/videos/tracking.md
new file mode 100644
index 00000000..1da207a6
--- /dev/null
+++ b/_chapters/videos/tracking.md
@@ -0,0 +1,12 @@
+---
+title: Tracking
+keywords: (insert comma-separated keywords here)
+order: 16 # Lecture number for 2020
+---
+
+**Lorem ipsum** dolor sit amet, consectetur adipiscing elit, sed do eiusmod
+tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
+quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
+consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum
+dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident,
+sunt in culpa qui officia deserunt mollit anim id est laborum.
diff --git a/_config.yml b/_config.yml
new file mode 100644
index 00000000..8fada278
--- /dev/null
+++ b/_config.yml
@@ -0,0 +1,63 @@
+# Welcome to Jekyll!
+#
+# This config file is meant for settings that affect your whole blog, values
+# which you are expected to set up once and rarely edit after that. If you find
+# yourself editing this file very often, consider using Jekyll's data files
+# feature for the data you need to update frequently.
+#
+# For technical reasons, this file is *NOT* reloaded automatically when you use
+# 'bundle exec jekyll serve'. If you change this file, please restart the server process.
+#
+# If you need help with YAML syntax, here are some quick references for you:
+# https://learn-the-web.algonquindesign.ca/topics/markdown-yaml-cheat-sheet/#yaml
+# https://learnxinyminutes.com/docs/yaml/
+#
+# Site settings
+# These are used to personalize your new site. If you look in the HTML files,
+# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
+# You can create any custom variable you would like, and they will be accessible
+# in the templates via {{ site.myvariable }}.
+
+title: CS131 Course Notes
+description: >- # this means to ignore newlines until "baseurl:"
+ Notes for Stanford's CS131 course.
+baseurl: /cs131_notes_dev # the subpath of your site, e.g. /blog
+
+########### ONLY CHANGE THESE FIELDS
+# replace with your own email for build notifications or leave it blank
+url: "https://anarcomey.github.io" # the base hostname & protocol for your site, e.g. http://example.com
+###########
+
+# Theme-specific
+courseurl: "https://cs131.stanford.edu"
+theme_color: "#8c1515"
+
+collections:
+ chapters:
+ output: true
+ permalink: /:path/
+defaults:
+ - scope:
+ path: ""
+ type: "chapters"
+ values:
+ layout: chapter
+
+
+# Build settings
+markdown: kramdown
+highlighter: rouge
+kramdown:
+ input: GFM
+ auto_ids: true
+ syntax_highlighter: rouge
+exclude:
+ - Gemfile
+ - Gemfile.lock
+jekyll_prettier:
+ exclude: ["*.css"]
+
+
+
+
+
diff --git a/_includes/css/highlight.css b/_includes/css/highlight.css
new file mode 100644
index 00000000..fe7ad7b0
--- /dev/null
+++ b/_includes/css/highlight.css
@@ -0,0 +1,61 @@
+
+.highlight { background: #ffffff; }
+.highlight .c { color: #999988; font-style: italic } /* Comment */
+.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
+.highlight .k { font-weight: bold } /* Keyword */
+.highlight .o { font-weight: bold } /* Operator */
+.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
+.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
+.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
+.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
+.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
+.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
+.highlight .ge { font-style: italic } /* Generic.Emph */
+.highlight .gr { color: #aa0000 } /* Generic.Error */
+.highlight .gh { color: #999999 } /* Generic.Heading */
+.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
+.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
+.highlight .go { color: #888888 } /* Generic.Output */
+.highlight .gp { color: #555555 } /* Generic.Prompt */
+.highlight .gs { font-weight: bold } /* Generic.Strong */
+.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
+.highlight .gt { color: #aa0000 } /* Generic.Traceback */
+.highlight .kc { font-weight: bold } /* Keyword.Constant */
+.highlight .kd { font-weight: bold } /* Keyword.Declaration */
+.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
+.highlight .kr { font-weight: bold } /* Keyword.Reserved */
+.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
+.highlight .m { color: #009999 } /* Literal.Number */
+.highlight .s { color: #d14 } /* Literal.String */
+.highlight .na { color: #008080 } /* Name.Attribute */
+.highlight .nb { color: #0086B3 } /* Name.Builtin */
+.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
+.highlight .no { color: #008080 } /* Name.Constant */
+.highlight .ni { color: #800080 } /* Name.Entity */
+.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
+.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
+.highlight .nn { color: #555555 } /* Name.Namespace */
+.highlight .nt { color: #000080 } /* Name.Tag */
+.highlight .nv { color: #008080 } /* Name.Variable */
+.highlight .ow { font-weight: bold } /* Operator.Word */
+.highlight .w { color: #bbbbbb } /* Text.Whitespace */
+.highlight .mf { color: #009999 } /* Literal.Number.Float */
+.highlight .mh { color: #009999 } /* Literal.Number.Hex */
+.highlight .mi { color: #009999 } /* Literal.Number.Integer */
+.highlight .mo { color: #009999 } /* Literal.Number.Oct */
+.highlight .sb { color: #d14 } /* Literal.String.Backtick */
+.highlight .sc { color: #d14 } /* Literal.String.Char */
+.highlight .sd { color: #d14 } /* Literal.String.Doc */
+.highlight .s2 { color: #d14 } /* Literal.String.Double */
+.highlight .se { color: #d14 } /* Literal.String.Escape */
+.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
+.highlight .si { color: #d14 } /* Literal.String.Interpol */
+.highlight .sx { color: #d14 } /* Literal.String.Other */
+.highlight .sr { color: #009926 } /* Literal.String.Regex */
+.highlight .s1 { color: #d14 } /* Literal.String.Single */
+.highlight .ss { color: #990073 } /* Literal.String.Symbol */
+.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
+.highlight .vc { color: #008080 } /* Name.Variable.Class */
+.highlight .vg { color: #008080 } /* Name.Variable.Global */
+.highlight .vi { color: #008080 } /* Name.Variable.Instance */
+.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */
diff --git a/_includes/css/normalize.css b/_includes/css/normalize.css
new file mode 100644
index 00000000..192eb9ce
--- /dev/null
+++ b/_includes/css/normalize.css
@@ -0,0 +1,349 @@
+/*! normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */
+
+/* Document
+ ========================================================================== */
+
+/**
+ * 1. Correct the line height in all browsers.
+ * 2. Prevent adjustments of font size after orientation changes in iOS.
+ */
+
+html {
+ line-height: 1.15; /* 1 */
+ -webkit-text-size-adjust: 100%; /* 2 */
+}
+
+/* Sections
+ ========================================================================== */
+
+/**
+ * Remove the margin in all browsers.
+ */
+
+body {
+ margin: 0;
+}
+
+/**
+ * Render the `main` element consistently in IE.
+ */
+
+main {
+ display: block;
+}
+
+/**
+ * Correct the font size and margin on `h1` elements within `section` and
+ * `article` contexts in Chrome, Firefox, and Safari.
+ */
+
+h1 {
+ font-size: 2em;
+ margin: 0.67em 0;
+}
+
+/* Grouping content
+ ========================================================================== */
+
+/**
+ * 1. Add the correct box sizing in Firefox.
+ * 2. Show the overflow in Edge and IE.
+ */
+
+hr {
+ box-sizing: content-box; /* 1 */
+ height: 0; /* 1 */
+ overflow: visible; /* 2 */
+}
+
+/**
+ * 1. Correct the inheritance and scaling of font size in all browsers.
+ * 2. Correct the odd `em` font sizing in all browsers.
+ */
+
+pre {
+ font-family: monospace, monospace; /* 1 */
+ font-size: 1em; /* 2 */
+}
+
+/* Text-level semantics
+ ========================================================================== */
+
+/**
+ * Remove the gray background on active links in IE 10.
+ */
+
+a {
+ background-color: transparent;
+}
+
+/**
+ * 1. Remove the bottom border in Chrome 57-
+ * 2. Add the correct text decoration in Chrome, Edge, IE, Opera, and Safari.
+ */
+
+abbr[title] {
+ border-bottom: none; /* 1 */
+ text-decoration: underline; /* 2 */
+ text-decoration: underline dotted; /* 2 */
+}
+
+/**
+ * Add the correct font weight in Chrome, Edge, and Safari.
+ */
+
+b,
+strong {
+ font-weight: bolder;
+}
+
+/**
+ * 1. Correct the inheritance and scaling of font size in all browsers.
+ * 2. Correct the odd `em` font sizing in all browsers.
+ */
+
+code,
+kbd,
+samp {
+ font-family: monospace, monospace; /* 1 */
+ font-size: 1em; /* 2 */
+}
+
+/**
+ * Add the correct font size in all browsers.
+ */
+
+small {
+ font-size: 80%;
+}
+
+/**
+ * Prevent `sub` and `sup` elements from affecting the line height in
+ * all browsers.
+ */
+
+sub,
+sup {
+ font-size: 75%;
+ line-height: 0;
+ position: relative;
+ vertical-align: baseline;
+}
+
+sub {
+ bottom: -0.25em;
+}
+
+sup {
+ top: -0.5em;
+}
+
+/* Embedded content
+ ========================================================================== */
+
+/**
+ * Remove the border on images inside links in IE 10.
+ */
+
+img {
+ border-style: none;
+}
+
+/* Forms
+ ========================================================================== */
+
+/**
+ * 1. Change the font styles in all browsers.
+ * 2. Remove the margin in Firefox and Safari.
+ */
+
+button,
+input,
+optgroup,
+select,
+textarea {
+ font-family: inherit; /* 1 */
+ font-size: 100%; /* 1 */
+ line-height: 1.15; /* 1 */
+ margin: 0; /* 2 */
+}
+
+/**
+ * Show the overflow in IE.
+ * 1. Show the overflow in Edge.
+ */
+
+button,
+input { /* 1 */
+ overflow: visible;
+}
+
+/**
+ * Remove the inheritance of text transform in Edge, Firefox, and IE.
+ * 1. Remove the inheritance of text transform in Firefox.
+ */
+
+button,
+select { /* 1 */
+ text-transform: none;
+}
+
+/**
+ * Correct the inability to style clickable types in iOS and Safari.
+ */
+
+button,
+[type="button"],
+[type="reset"],
+[type="submit"] {
+ -webkit-appearance: button;
+}
+
+/**
+ * Remove the inner border and padding in Firefox.
+ */
+
+button::-moz-focus-inner,
+[type="button"]::-moz-focus-inner,
+[type="reset"]::-moz-focus-inner,
+[type="submit"]::-moz-focus-inner {
+ border-style: none;
+ padding: 0;
+}
+
+/**
+ * Restore the focus styles unset by the previous rule.
+ */
+
+button:-moz-focusring,
+[type="button"]:-moz-focusring,
+[type="reset"]:-moz-focusring,
+[type="submit"]:-moz-focusring {
+ outline: 1px dotted ButtonText;
+}
+
+/**
+ * Correct the padding in Firefox.
+ */
+
+fieldset {
+ padding: 0.35em 0.75em 0.625em;
+}
+
+/**
+ * 1. Correct the text wrapping in Edge and IE.
+ * 2. Correct the color inheritance from `fieldset` elements in IE.
+ * 3. Remove the padding so developers are not caught out when they zero out
+ * `fieldset` elements in all browsers.
+ */
+
+legend {
+ box-sizing: border-box; /* 1 */
+ color: inherit; /* 2 */
+ display: table; /* 1 */
+ max-width: 100%; /* 1 */
+ padding: 0; /* 3 */
+ white-space: normal; /* 1 */
+}
+
+/**
+ * Add the correct vertical alignment in Chrome, Firefox, and Opera.
+ */
+
+progress {
+ vertical-align: baseline;
+}
+
+/**
+ * Remove the default vertical scrollbar in IE 10+.
+ */
+
+textarea {
+ overflow: auto;
+}
+
+/**
+ * 1. Add the correct box sizing in IE 10.
+ * 2. Remove the padding in IE 10.
+ */
+
+[type="checkbox"],
+[type="radio"] {
+ box-sizing: border-box; /* 1 */
+ padding: 0; /* 2 */
+}
+
+/**
+ * Correct the cursor style of increment and decrement buttons in Chrome.
+ */
+
+[type="number"]::-webkit-inner-spin-button,
+[type="number"]::-webkit-outer-spin-button {
+ height: auto;
+}
+
+/**
+ * 1. Correct the odd appearance in Chrome and Safari.
+ * 2. Correct the outline style in Safari.
+ */
+
+[type="search"] {
+ -webkit-appearance: textfield; /* 1 */
+ outline-offset: -2px; /* 2 */
+}
+
+/**
+ * Remove the inner padding in Chrome and Safari on macOS.
+ */
+
+[type="search"]::-webkit-search-decoration {
+ -webkit-appearance: none;
+}
+
+/**
+ * 1. Correct the inability to style clickable types in iOS and Safari.
+ * 2. Change font properties to `inherit` in Safari.
+ */
+
+::-webkit-file-upload-button {
+ -webkit-appearance: button; /* 1 */
+ font: inherit; /* 2 */
+}
+
+/* Interactive
+ ========================================================================== */
+
+/*
+ * Add the correct display in Edge, IE 10+, and Firefox.
+ */
+
+details {
+ display: block;
+}
+
+/*
+ * Add the correct display in all browsers.
+ */
+
+summary {
+ display: list-item;
+}
+
+/* Misc
+ ========================================================================== */
+
+/**
+ * Add the correct display in IE 10+.
+ */
+
+template {
+ display: none;
+}
+
+/**
+ * Add the correct display in IE 10.
+ */
+
+[hidden] {
+ display: none;
+}
diff --git a/_includes/footer.html b/_includes/footer.html
new file mode 100644
index 00000000..88bb5681
--- /dev/null
+++ b/_includes/footer.html
@@ -0,0 +1,70 @@
+
diff --git a/_includes/head.html b/_includes/head.html
new file mode 100644
index 00000000..678e8ee2
--- /dev/null
+++ b/_includes/head.html
@@ -0,0 +1,23 @@
+
+
+
+
+ {% if page.title %}{{ page.title }} • {% endif %}{{ site.title }}
+
+
+
+
+
+
+
+
+
+
+
diff --git a/_includes/header.html b/_includes/header.html
new file mode 100644
index 00000000..d2f4ad70
--- /dev/null
+++ b/_includes/header.html
@@ -0,0 +1,4 @@
+
+ {{ site.title }}
+ Course Website
+
diff --git a/_layouts/chapter.html b/_layouts/chapter.html
new file mode 100644
index 00000000..1bb608b2
--- /dev/null
+++ b/_layouts/chapter.html
@@ -0,0 +1,8 @@
+---
+layout: default
+---
+
+
+
{{ page.title }}
+ {{ content }}
+
diff --git a/_layouts/default.html b/_layouts/default.html
new file mode 100644
index 00000000..1f192cfb
--- /dev/null
+++ b/_layouts/default.html
@@ -0,0 +1,44 @@
+
+
+ {% include head.html %}
+
+ {% include header.html %}
+ {{ content }}
+ {% include footer.html %}
+
+
+
+
+
+
+
+
+
+
diff --git a/_layouts/index.html b/_layouts/index.html
new file mode 100644
index 00000000..643b7dd7
--- /dev/null
+++ b/_layouts/index.html
@@ -0,0 +1,27 @@
+---
+layout: default
+---
+
+{{ content }}
+
+{%- for module in page.modules -%}
+
+ {%- endfor %}
+
+{%- endfor -%}
diff --git a/assets/examples/challenges.jpeg b/assets/examples/challenges.jpeg
new file mode 100644
index 00000000..e03acc90
Binary files /dev/null and b/assets/examples/challenges.jpeg differ
diff --git a/assets/examples/classify.png b/assets/examples/classify.png
new file mode 100644
index 00000000..4509dbc0
Binary files /dev/null and b/assets/examples/classify.png differ
diff --git a/assets/examples/crossval.jpeg b/assets/examples/crossval.jpeg
new file mode 100644
index 00000000..59c9f874
Binary files /dev/null and b/assets/examples/crossval.jpeg differ
diff --git a/assets/examples/cvplot.png b/assets/examples/cvplot.png
new file mode 100644
index 00000000..461aeeac
Binary files /dev/null and b/assets/examples/cvplot.png differ
diff --git a/assets/examples/knn.jpeg b/assets/examples/knn.jpeg
new file mode 100644
index 00000000..63e45ce9
Binary files /dev/null and b/assets/examples/knn.jpeg differ
diff --git a/assets/examples/nn.jpg b/assets/examples/nn.jpg
new file mode 100644
index 00000000..c7590bed
Binary files /dev/null and b/assets/examples/nn.jpg differ
diff --git a/assets/examples/nneg.jpeg b/assets/examples/nneg.jpeg
new file mode 100644
index 00000000..a45b7305
Binary files /dev/null and b/assets/examples/nneg.jpeg differ
diff --git a/assets/examples/pixels_embed_cifar10.jpg b/assets/examples/pixels_embed_cifar10.jpg
new file mode 100644
index 00000000..bea60c42
Binary files /dev/null and b/assets/examples/pixels_embed_cifar10.jpg differ
diff --git a/assets/examples/pixels_embed_cifar10_big.jpg b/assets/examples/pixels_embed_cifar10_big.jpg
new file mode 100644
index 00000000..d4630204
Binary files /dev/null and b/assets/examples/pixels_embed_cifar10_big.jpg differ
diff --git a/assets/examples/samenorm.png b/assets/examples/samenorm.png
new file mode 100644
index 00000000..71bc21cc
Binary files /dev/null and b/assets/examples/samenorm.png differ
diff --git a/assets/examples/trainset.jpg b/assets/examples/trainset.jpg
new file mode 100644
index 00000000..e2c87875
Binary files /dev/null and b/assets/examples/trainset.jpg differ
diff --git a/assets/images/.keep b/assets/images/.keep
new file mode 100644
index 00000000..e69de29b
diff --git a/assets/instructions/config.png b/assets/instructions/config.png
new file mode 100644
index 00000000..f0e1cbfd
Binary files /dev/null and b/assets/instructions/config.png differ
diff --git a/assets/instructions/fork.png b/assets/instructions/fork.png
new file mode 100644
index 00000000..f748d51f
Binary files /dev/null and b/assets/instructions/fork.png differ
diff --git a/assets/instructions/pages.png b/assets/instructions/pages.png
new file mode 100644
index 00000000..3d0d199c
Binary files /dev/null and b/assets/instructions/pages.png differ
diff --git a/assets/instructions/settings.png b/assets/instructions/settings.png
new file mode 100644
index 00000000..e6aea37a
Binary files /dev/null and b/assets/instructions/settings.png differ
diff --git a/assets/main.css b/assets/main.css
new file mode 100644
index 00000000..43b80e6a
--- /dev/null
+++ b/assets/main.css
@@ -0,0 +1,273 @@
+---
+layout: null
+---
+
+{% include css/normalize.css %}
+{% include css/highlight.css %}
+
+/* Base */
+/* ----------------------------------------------------------*/
+
+html {
+ font-size: 62.5%;
+ overflow-y: scroll;
+}
+
+body {
+ /* font-family: Helvetica, Arial, sans-serif; */
+ font-family: "Roboto", sans-serif;
+ font-size: 1.6rem;
+ line-height: 1.5;
+ font-weight: 300;
+ background-color: #fdfdfd;
+ padding: 0;
+ margin: 0;
+}
+
+a:link,
+a:visited {
+ color: #2a7ae2;
+ text-decoration: none;
+}
+a:hover {
+ color: #000;
+ text-decoration: underline;
+}
+
+h1,
+h2,
+h3,
+h4,
+h5,
+h6 {
+ font-weight: 400;
+}
+
+/* Layout Styles */
+/* ----------------------------------------------------------*/
+
+body > main {
+ max-width: 80rem;
+ padding: 3rem;
+ margin: 0 auto;
+}
+
+.module-header {
+ font-size: 2.4rem;
+ color: {{ site.theme_color }};
+ margin-top: 2rem;
+ margin-bottom: 0.5rem;
+}
+
+ol.module-chapter-list {
+ color: #333;
+ display: block;
+ list-style: none;
+ padding: 0;
+ margin: 0 0 1rem 0;
+ font-size: 1.8rem;
+}
+ol.module-chapter-list li {
+ border-bottom: 1px solid #ccc;
+ padding: 0.5rem 1.5rem 0.3rem 1.5rem;
+}
+ol.module-chapter-list li:nth-child(odd) {
+ background-color: #f7f6f1;
+}
+ol.module-chapter-list li h3 {
+ font-size: inherit;
+ padding: 0;
+ margin: 0;
+}
+ol.module-chapter-list li .keywords {
+ font-size: 1.6rem;
+}
+
+/* Site header */
+/* ----------------------------------------------------------*/
+
+.site-header {
+ position: relative;
+ border-bottom: 1px solid #e8e8e8;
+ background-color: {{ site.theme_color }};
+ padding: 1.5rem;
+ text-align: center;
+}
+
+.site-title,
+.site-title:hover,
+.site-title:visited {
+ display: inline-block;
+ padding: 1rem;
+ font-size: 2.6rem;
+ line-height: 1.2em;
+ letter-spacing: -0.1rem;
+ color: #fff;
+ font-weight: 100;
+}
+
+.site-link:link,
+.site-link:hover,
+.site-link:visited {
+ margin-bottom: 1rem;
+ display: inline-block;
+ text-align: center;
+ font-size: 1.8rem;
+ line-height: 2em;
+ height: 2em;
+ padding: 0 1rem;
+ color: #fff;
+ border: 2px solid #fff;
+ font-weight: 50;
+}
+@media (min-width: 1080px) {
+ .site-link:link,
+ .site-link:hover,
+ .site-link:visited {
+ position: absolute;
+ right: 2rem;
+ top: 50%;
+ transform: translateY(-50%);
+ }
+}
+
+/* Site footer */
+/* ----------------------------------------------------------*/
+
+.site-footer {
+ border-top: 1px solid #e8e8e8;
+ padding: 3rem 0;
+}
+
+.site-footer ul {
+ list-style: none;
+}
+
+.site-footer li,
+.site-footer p {
+ font-size: 1.5rem;
+ letter-spacing: -0.3rem;
+ color: #828282;
+}
+
+.github-icon-svg,
+.twitter-icon-svg {
+ display: inline-block;
+ width: 1.6rem;
+ height: 1.6rem;
+ position: relative;
+ top: 0.3rem;
+}
+
+/* Custom CSS for pages */
+/* ----------------------------------------------------------*/
+
+.figcenter {
+ text-align: center;
+}
+.fig img {
+ max-width: 98%;
+}
+.figleft img {
+ max-width: 50%;
+ float: left;
+ margin-right: 2rem;
+}
+.figleft svg {
+ float: left;
+ margin-right: 2rem;
+}
+.fighighlight {
+ padding: 2rem 0.4rem 2rem 0.4rem;
+ border-bottom: 1px solid #999;
+ border-top: 1px solid #999;
+}
+.figcaption {
+ font-weight: 400;
+ font-size: 1.4rem;
+ color: #575651;
+ text-align: justify;
+}
+
+/* Post styles */
+/* ----------------------------------------------------------*/
+
+.post {
+ margin: 0 0 3rem;
+}
+
+.post > * {
+ margin: 2rem 0;
+}
+
+.post h1,
+.post h2,
+.post h3,
+.post h4,
+.post h5,
+.post h6 {
+ line-height: 1;
+ font-weight: 300;
+ margin: 4rem 0 2rem;
+}
+
+.post h1 {
+ margin-top: 2rem;
+ font-size: 3.6rem;
+  letter-spacing: -0.1rem;
+}
+
+.post h2 {
+ font-size: 3.2rem;
+  letter-spacing: -0.1rem;
+}
+
+.post h3 {
+ font-size: 2.6rem;
+ letter-spacing: -0.1rem;
+}
+
+.post h4 {
+ font-size: 2rem;
+ letter-spacing: -0.1rem;
+}
+
+.post blockquote {
+ border-left: 0.4rem solid #e8e8e8;
+ padding-left: 2rem;
+ font-size: 1.8rem;
+ opacity: 0.6;
+ letter-spacing: -0.1rem;
+ font-style: italic;
+ margin: 3rem 0;
+}
+
+.post ul,
+.post ol {
+ padding-left: 2rem;
+}
+
+.post pre,
+.post code {
+ background-color: #eef;
+ border: 1px solid #d5d5e9;
+ padding: 0.8rem 1.2rem;
+ border-radius: 0.3rem;
+ font-size: 1.5rem;
+ overflow: auto;
+}
+
+.post code {
+ padding: 0.1rem 0.5rem;
+}
+
+.post ul,
+.post ol {
+ margin-left: 1.35em;
+}
+
+.post pre code {
+ border: 0;
+ padding-right: 0;
+ padding-left: 0;
+}
diff --git a/index.md b/index.md
new file mode 100644
index 00000000..e90424bc
--- /dev/null
+++ b/index.md
@@ -0,0 +1,54 @@
+---
+layout: index
+
+# Configure modules to show
+modules:
+ - name: Instructions
+ chapter_dir: instructions
+ - name: Pixels
+ chapter_dir: pixels
+ - name: Images
+ chapter_dir: images
+ - name: Recognition
+ chapter_dir: recognition
+ - name: Videos
+ chapter_dir: videos
+ - name: Cameras
+ chapter_dir: cameras
+---
+
+These notes accompany the Stanford CS class [**CS131**](http://cs131.stanford.edu/), Computer Vision: Foundations
+and Applications. This is a development space for the class notes where you can commit your changes as your team builds
+the notes for your assigned lecture, and once you're done we will merge your notes onto the finished website.
+
+Head over to [https://github.com/ANarcomey/cs131_notes_dev](https://github.com/ANarcomey/cs131_notes_dev) to see the code that creates these web pages!
+
+
+## Steps to create your own notes
+To begin writing your own notes that will appear on a website like this, have one team member fork the repository that builds this web page and configure the fork to build a web page for your team to develop on!
+
+- **Step 1: Fork the repository:** Fork this repository [https://github.com/ANarcomey/cs131_notes_dev](https://github.com/ANarcomey/cs131_notes_dev) into your own GitHub account
+
+
+
+
+
+- **Step 2: Enable GitHub Pages:** Open Settings from the menu bar in your forked repo, find the "GitHub Pages" heading, and choose the defaults of the "master" branch and "root" directory so that your settings look like the figure below, except that "anarcomey" will be replaced with the GitHub username of the team member who created the fork. Don't worry about choosing a theme or any other settings; we've configured all of that for you.
+
+
+
+
+
+
+- **Step 3: Link the repository to your GitHub Page:** In your forked repository, edit the file `_config.yml`. Update the `url` field to `https://your_github_username.github.io` and either remove the `email` field or set it to one of your team members' emails if you want to receive build updates by email (see the example snippet after this list).
+
+
+
+
+- **Step 4: Submit:** Create a group Gradescope submission with all of your teammates and submit the URL of your GitHub Page containing your notes (e.g. `https://your_github_username.github.io/cs131_notes_dev/`) and the URL of your repository (e.g. `https://github.com/your_GitHub_username/cs131_notes_dev`).
+
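+As a concrete illustration of Step 3, the edited block of `_config.yml` would look roughly like this (`your_github_username` is a placeholder):
+
+```yaml
+# In _config.yml:
+url: "https://your_github_username.github.io" # the base hostname & protocol for your site
+# If an `email:` field is present, set it to a team member's email or remove it.
+```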
+
+## Steps to update your notes
+Now that your notes are live in your own GitHub fork and running at `https://your_github_username.github.io/cs131_notes_dev/`, you'll want to add content and update them. To do that, find the Markdown file for your lecture in the `_chapters` directory and edit the .md file. Once you've made your desired updates and want to see what they look like online, commit and push your changes to the master branch. The newly pushed code will render online in about a minute, and you can see your notes! Once you have a handle on the basic mechanics of Markdown, you can write most of your notes without having to push code and render very often. Take a look at some examples and a template with Markdown guidance in the `Instructions` module of the website, and also look at the Markdown code creating those pages in the .md files in `_chapters/instructions`.
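+
+The commit-and-push step is ordinary git; a minimal sketch (the file path is just an example for one assigned lecture):
+
+```bash
+git add _chapters/pixels/edge_detection.md    # stage your edited lecture notes
+git commit -m "Update edge detection notes"
+git push origin master                        # GitHub Pages rebuilds the site from the master branch
+```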
+
+Since you're working in groups and editing the same Markdown file, it may be easier to collaboratively edit a shared document. Since the notes are written in Markdown, Google Colab notebooks are a great tool: they're the Google Docs of Jupyter notebooks. We've provided an example Colab notebook that you can copy and use for collaboratively developing your notes: [link](https://colab.research.google.com/drive/19B1VAXjzQaxuwxwl8VmERDaZPKHqCjkX?usp=sharing), but you're free to use any tools or collaboration structures you wish!