Welcome to this class. We hope you will enjoy it!
This is the first time this class is offered by the Mechanical Engineering Department, so we will be experimenting with the content a bit. Here is the tentative outline; we will adjust it as we go, depending on interest and the time remaining:
- Gaussian process regression
- Support vector machine for classification; kernel machines
- Deep learning
- Recurrent neural networks
- Generative adversarial networks (GANs)
- Physics-informed learning machines (a new method specific to ME!)
- Reinforcement learning
- Markov decision processes, Bellman equation, Monte-Carlo tree search, and dynamic programming
- Temporal-difference learning (if time allows)
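To give a concrete flavor of the reinforcement-learning topics above (Markov decision processes, the Bellman equation, and dynamic programming), here is a minimal value-iteration sketch for a toy two-state MDP. The transition probabilities and rewards are made up for illustration and are not taken from the class material.

```python
import numpy as np

# Toy 2-state, 2-action MDP (all numbers are illustrative).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(500):
    # Bellman optimality update:
    # V(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s') ]
    Q = R + gamma * P @ V          # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy with respect to V
```

Value iteration is an instance of dynamic programming: the Bellman optimality operator is a gamma-contraction, so the iterates converge to the unique fixed point V*.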
The material for this class is hosted on GitHub. It can be downloaded from the main repository page: https://github.com/stanford-me343/stanford-me343.github.io
Click the green "Clone or download" button to download all the files as a zip archive.
Office hours are held weekly:
- Tuesday: 7 PM to 8 PM (Hojat)
- Wednesday: 10 AM to 11 AM (Ziyi/Hojat)
- Thursday: 10 AM to 11 AM (Ziyi)
- Friday: 9 AM to 11 AM (Prof. Darve)
Office hours with TAs are held in the Huang basement. Prof. Darve's office hours are in building 520, room 125.
Class resources:
- AlphaGo slides
- Class feedback form
- Final project
- Homework assignment folder
- Demo computer code
- Piazza forum
- Gradescope: used to submit your assignments, see your grades, and request regrades.
- Mailing list: the instructors use this list to send important announcements; please check that you are registered.
- Syllabus
- Interactive polls on pollev
Teaching staff:
- Main instructor: Eric Darve, [email protected], office 520-125
- Lecturer: Hojat Ghorbanidehno, [email protected]
- TA: Ziyi Yang, [email protected]
Curated list of scientific machine learning papers from Paul Constantine.
Contributors: Nathan Baker, Jed Brown, Reagan Cronin, Ian Grooms, Jan Hesthaven, Des Higham, Katy Huff, Mark Kamuda, Julia Ling, Vasudeva Murthy, Houman Owhadi, Christoph Schwab.
Curation criteria:
- has ML, AI, Big Data, or related terms in the title
- comes from a scientific journal
- bias toward broad audience journals
- claims application to a scientific field or problem
- bias toward computational sciences
- bias toward recent publications
- bias toward perspective/prospective-type articles (e.g., "opportunities and challenges") and surveys/reviews
- bias toward materials design, fluid dynamics, and some environmental sciences
- bias against arXiv papers and preprints
- bias against medicine and related fields
- bias against social sciences and related fields
- bias against fast algorithms or HPC implementations
General book about machine learning: The Hundred-Page Machine Learning Book, by Andriy Burkov. Relatively easy to read, with a discussion of all the fundamental concepts, though it does not cover more advanced topics.
- AlphaGo, "Mastering the game of Go with deep neural networks and tree search," by Silver et al.
- AlphaGo Zero, "Mastering the game of Go without human knowledge," by Silver et al.
- AlphaZero, "Mastering Chess and Shogi by self-play with a general reinforcement learning algorithm," by Silver et al.
- Reinforcement learning: an introduction by R.S. Sutton and A.G. Barto; draft of second edition.
- Course on reinforcement learning by David Silver (2015). David was one of the lead researchers on AlphaGo.
- Curated list of resources on reinforcement learning, by H. Kim and J. Kim.
- OpenAI Gym, toolkit for developing and comparing reinforcement learning algorithms
- G.E. Karniadakis, physics-informed learning papers on arXiv
- G.E. Karniadakis, machine-learning papers on arXiv
- Physics-Informed Generative Adversarial Networks for Stochastic Differential Equations by L. Yang, D. Zhang, and G.E. Karniadakis
- Neural-net-induced Gaussian process regression for function approximation and PDE solution by G. Pang, L. Yang, and G.E. Karniadakis
- GAN Lab by M. Kahng, N. Thorat, D.H. Chau, F.B. Viégas, and M. Wattenberg
- GAN series, blog by Jonathan Hui
- An overview of gradient descent optimization algorithms, blog by Sebastian Ruder
- GAN tutorial
- Generative adversarial nets by I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio
- Wasserstein GAN by M. Arjovsky, S. Chintala, L. Bottou
- Improved training of Wasserstein GANs by I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. Courville
- InfoGAN: interpretable representation learning by information maximizing generative adversarial nets by X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel
- Conditional generative adversarial nets by M. Mirza, S. Osindero
- "Deep learning," by Y. LeCun, Y. Bengio, and G. Hinton, Nature 521.7553 (2015): 436.
- Deep learning by I. Goodfellow, Y. Bengio, and A. Courville
- Deep learning summer school, Montreal 2015, with many video presentations and tutorials
- Deep learning for perception, course from Virginia Tech
- Deep learning methods and applications, online book by L. Deng and D. Yu
- Neural networks and deep learning, online book by M. Nielsen
- "Optimization methods for large-scale machine learning," by L. Bottou, F.E. Curtis, and J. Nocedal. This paper discusses, among other things, the stochastic gradient method.
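As a minimal illustration of the stochastic gradient method discussed in the paper above, the sketch below fits a least-squares model by sampling one data point per update. The synthetic data, step size, and epoch count are arbitrary choices made for this example.

```python
import numpy as np

# Synthetic least-squares problem (all sizes and constants are illustrative).
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Stochastic gradient descent: one randomly chosen sample per update.
w = np.zeros(d)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(n):
        # gradient of the single-sample loss 0.5 * (x_i . w - y_i)^2
        g = (X[i] @ w - y[i]) * X[i]
        w -= lr * g

err = np.linalg.norm(w - w_true)
```

Each SGD step costs O(d) instead of the O(nd) of a full-batch gradient step; the price is gradient noise, whose floor is set by the step size.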
- A tutorial on support vector regression by Smola and Schölkopf
- A tutorial on support vector machines for pattern recognition by Burges. It presents a very interesting mechanical analogy, in terms of forces and torques, for the separating hyperplane.
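To complement the tutorials above, here is a sketch of a linear soft-margin SVM trained by subgradient descent on the regularized hinge loss. The synthetic data and the hyperparameters (`lam`, `lr`, iteration count) are assumptions made for the example; the dual/kernel formulation covered in the tutorials is not shown.

```python
import numpy as np

# Two well-separated Gaussian blobs with labels -1 and +1 (illustrative data).
rng = np.random.default_rng(1)
n = 200
X = np.vstack([rng.normal(loc=-2.0, size=(n // 2, 2)),
               rng.normal(loc=+2.0, size=(n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Minimize lam/2 |w|^2 + (1/n) sum_i max(0, 1 - y_i (w . x_i + b)).
w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.01
for _ in range(2000):
    margins = y * (X @ w + b)
    viol = margins < 1                       # points violating the margin
    # subgradient of the regularized hinge loss
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
    gb = -y[viol].sum() / n
    w -= lr * gw
    b -= lr * gb

acc = np.mean(np.sign(X @ w + b) == y)       # training accuracy
```

Only the margin violators contribute to the subgradient; at the optimum these are exactly the support vectors of the soft-margin formulation.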
- Gaussian processes for machine learning by Carl Edward Rasmussen and Christopher K. I. Williams, The MIT Press, 2006. ISBN 0-262-18253-X. This is the reference textbook on Gaussian processes: very extensive, with everything you ever wanted to know about GPR.
- Short review paper; Gaussian processes for regression by Williams and Rasmussen
- Intermediate review paper; Introduction to Gaussian processes by Mackay
- Intermediate review paper; Prediction with Gaussian processes from linear regression to linear prediction and beyond by Williams
- Longer review paper with an introduction to Gaussian processes on a fairly elementary level; Gaussian processes for machine learning by Seeger