Torch in Action

This repository contains the code for the Torch in Action book.

Chapter 1: Meeting Torch

  • facedetect: a toy face detection dataset (a directory with only four samples);
  • train.lua: an example face detection training script (listings 1.1, 1.2, and 1.3); a minimal sketch of such a training loop follows this list;
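
To give a sense of what train.lua does, here is a minimal sketch of a Torch training loop in the style of the book's listings. The data, layer sizes, and hyperparameters below are illustrative stand-ins, not the values from listings 1.1-1.3:

```lua
require 'nn'

-- Illustrative stand-ins for the facedetect samples (4 samples, 256 features).
local inputs = torch.randn(4, 256)
local targets = torch.Tensor{1, 2, 1, 2} -- class 1 = face, class 2 = background

-- A linear classifier followed by a log-softmax, trained with NLL loss.
local model = nn.Sequential()
model:add(nn.Linear(256, 2))
model:add(nn.LogSoftMax())
local criterion = nn.ClassNLLCriterion()

for epoch = 1, 100 do
   for i = 1, inputs:size(1) do
      local output = model:forward(inputs[i])
      criterion:forward(output, targets[i])
      model:zeroGradParameters()
      model:backward(inputs[i], criterion:backward(output, targets[i]))
      model:updateParameters(0.1) -- SGD step with learning rate 0.1
   end
end
```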

Chapter 2: Preparing a dataset

  • mnist: the MNIST dataset in binary format, as downloaded from yann.lecun.com;
  • createdataset.lua: code for serializing the MNIST dataset into .t7 files and generating samples (section 2.3); a serialization sketch follows this list;
  • dataloader.lua: code for listings 2.1, 2.2, 2.3, and 2.5; defines the DataLoader and TensorLoader classes;
  • iteratedataset.lua: code for listing 2.5. This script tests dataloader.lua by iterating through the dataset; it only works if createdataset.lua was executed beforehand;
  • getmnistsample.lua: script for generating MNIST samples consolidated into a single image (used to generate figure 2.1);
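
For readers unfamiliar with the .t7 format, the core of what createdataset.lua does is plain torch.save/torch.load serialization. The tensor shapes and file name below are illustrative, not those produced by the actual script:

```lua
require 'torch'

-- Illustrative stand-in for a prepared MNIST subset.
local trainset = {
   inputs  = torch.randn(100, 1, 28, 28),        -- 100 grayscale 28x28 images
   targets = torch.LongTensor(100):random(1, 10) -- class labels 1..10
}

torch.save('train.t7', trainset)      -- serialize the Lua table to disk
local loaded = torch.load('train.t7') -- read it back
assert(loaded.inputs:size(1) == 100)
```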

Chapter 3: Training simple neural networks

  • trainlogreg.lua: training script that applies binary logistic regression to the OR dataset. The model is trained using stochastic gradient descent (listing 3.1); a sketch of this setup follows this list;
  • logreg.log: log file created by running th trainlogreg.lua > logreg.log;
  • trainlogreg-mnist.lua: script for training a multinomial logistic regression model (saved as logreg-mnist.t7) using SGD on the MNIST dataset. Training stops after 200 epochs (each epoch consists of 10000 samples divided into mini-batches of 32 random samples) or upon reaching an estimated empirical risk below 0.007, whichever comes first. The resulting model is evaluated on the entire training set of 50000 samples and saved to disk (listing 3.2);
  • logreg-mnist.log: log file created by running th trainlogreg-mnist.lua > logreg-mnist.log. The data can be used to generate a learning curve: open the file in your favorite spreadsheet application (Microsoft Excel, LibreOffice Calc, etc.) and specify that values are separated by semicolons;
  • backward.lua: demonstrates gradient descent through a criterion. Treating the input as a parameter, the loss is minimized by taking a step in the opposite direction of the gradient (section 8.1.3); a sketch of this demonstration also follows this list;
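
The following sketch illustrates the setup in trainlogreg.lua: binary logistic regression on the OR dataset using the nn package and per-sample SGD. The learning rate and step count here are illustrative and need not match listing 3.1:

```lua
require 'nn'

local inputs  = torch.Tensor{{0,0},{0,1},{1,0},{1,1}} -- OR truth table inputs
local targets = torch.Tensor{0,1,1,1}                 -- OR truth table outputs

-- Logistic regression: a linear layer squashed by a sigmoid, with BCE loss.
local model = nn.Sequential()
model:add(nn.Linear(2, 1))
model:add(nn.Sigmoid())
local criterion = nn.BCECriterion()

for step = 1, 1000 do
   local i = torch.random(1, inputs:size(1)) -- stochastic: pick a random sample
   local target = targets:narrow(1, i, 1)
   local output = model:forward(inputs[i])
   criterion:forward(output, target)
   model:zeroGradParameters()
   model:backward(inputs[i], criterion:backward(output, target))
   model:updateParameters(0.5)
end

print(model:forward(inputs)) -- probabilities should approach 0, 1, 1, 1
```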
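
The next sketch illustrates what backward.lua demonstrates: when the input itself is treated as the parameter, the loss can be minimized by stepping the input against the gradient returned by the criterion's backward pass. The criterion, target, and step size below are illustrative:

```lua
require 'nn'

local target = torch.Tensor{1, 2, 3}
local input  = torch.zeros(3)       -- the "parameter" being optimized
local criterion = nn.MSECriterion()

for step = 1, 100 do
   criterion:forward(input, target)                  -- loss at the current input
   local gradInput = criterion:backward(input, target)
   input:add(-0.1, gradInput)                        -- step opposite the gradient
end

print(input) -- converges toward the target {1, 2, 3}
```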

Chapter 4: Generalizing deep neural networks
