mustard-seed/basic_knowledge

Things to learn for new students in Lab611

Software Programming

Learn the following two programming languages:

  • C (C99)
  • Python (version 3)

Do the following projects to practice and to build good habits when programming in C:

Here is another good repository with many projects you can use for practice.

Learn the following coding style guides and apply them in your research projects:

Hardware Basic Knowledge

Read the following two books to learn the basic concepts of computer architecture. Important concepts to understand include:

  • pipelining
  • memory hierarchy
  • the roofline model
  • Amdahl's law
  • ILP (instruction-level parallelism), TLP (task-level parallelism), and DLP (data-level parallelism)
  • SIMD and VLIW processors
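Two of these concepts, Amdahl's law and the roofline model, are simple enough to compute directly. The snippet below is an illustrative sketch (not taken from either book); the function names and example numbers are made up for illustration.

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: overall speedup when a fraction p of the work
    is accelerated by a factor n; the serial part limits the total."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: attainable throughput is capped by either the
    compute roof or the memory-bandwidth slope."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Even with (almost) unlimited parallel units, a 5% serial fraction
# caps the achievable speedup near 1/0.05 = 20x.
print(amdahl_speedup(0.95, 1e12))

# A kernel at 4 FLOPs/byte on a 100 GB/s machine is memory-bound (400 GFLOP/s),
# well below a hypothetical 1000 GFLOP/s compute roof.
print(roofline(1000.0, 100.0, 4.0))
```

Plugging in the arithmetic intensity of your own kernels this way is a quick check on whether an accelerator design is compute-bound or memory-bound.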

FPGA Design

Read the following book to learn OpenCL programming (GPU/FPGA):

Also, refer to Intel's and Xilinx's OpenCL user guides to learn the specific techniques that will be used in the projects.

Finally, learn our opensource project PipeCNN. Run the examples, such as caffenet, vgg-16, resnet, YOLO on the DE10-nano and DE5-net platforms. Learn how to configure, compile, debug the source codes and profile the performance of the accelerator.
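Before reading the accelerator code, it helps to be clear on exactly what it computes. The loop nest below is a plain-Python sketch of a direct convolution layer, the dominant workload in these networks; it is for illustration only and is not PipeCNN's actual kernel code.

```python
def conv2d(inp, weights, H, W, C_in, C_out, K):
    """Direct convolution, no padding or stride:
    inp[C_in][H][W] * weights[C_out][C_in][K][K] -> out[C_out][H-K+1][W-K+1]."""
    H_out, W_out = H - K + 1, W - K + 1
    out = [[[0.0] * W_out for _ in range(H_out)] for _ in range(C_out)]
    for co in range(C_out):              # output feature maps
        for y in range(H_out):           # output rows
            for x in range(W_out):       # output columns
                acc = 0.0
                for ci in range(C_in):       # input feature maps
                    for ky in range(K):      # kernel rows
                        for kx in range(K):  # kernel columns
                            acc += inp[ci][y + ky][x + kx] * weights[co][ci][ky][kx]
                out[co][y][x] = acc
    return out
```

The three inner loops are the multiply-accumulate core that OpenCL-based FPGA accelerators unroll and pipeline, and the heavy reuse of `inp` and `weights` across iterations is what on-chip buffering exploits.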

GPU Design

Learn TensorRT and CUDA programming. Try the examples on our TX2/TK1 platforms.

Tutorials on Hardware Architectures for DNNs

Students who are working on hardware designs for deep neural networks should read the following tutorials.

For Ph.D. Students

First, read the following articles to learn how to write research papers.

Second, read the following papers, which are very good examples in the related fields.

FPGA accelerator design

  • Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks, FPGA 2015.
  • Throughput-Optimized OpenCL-based FPGA Accelerator for Large-Scale Convolutional Neural Networks, FPGA 2016.
  • An OpenCL Deep Learning Accelerator on Arria 10, FPGA 2017.
  • Improving the Performance of OpenCL-based FPGA Accelerator for Convolutional Neural Network, FPGA 2017.
  • A Framework for Generating High Throughput CNN Implementations on FPGAs, FPGA 2018.
  • An Efficient Hardware Accelerator for Sparse Convolutional Neural Networks on FPGAs, FCCM 2019.

The following survey papers are also worth reading.

  • A Survey of FPGA Based Neural Network Accelerator, ACM TRETS 2017.
  • Deep Neural Network Approximation for Custom Hardware: Where We’ve Been, Where We’re Going, ACM Computing Surveys 2019.

Our own research papers on FPGA accelerators:

  • PipeCNN: An OpenCL-Based Open-Source FPGA Accelerator for Convolution Neural Networks, FPT 2017
  • ABM-SpConv: A Novel Approach to FPGA-Based Acceleration of Convolutional Neural Network Inference, DAC 2019

Neural network optimization (quantization, pruning, etc.)

Quantization

  • Ristretto: A Framework for Empirical Study of Resource-Efficient Inference in Convolutional Neural Networks, IEEE T-NNLS 2018.
  • 8-bit Inference with TensorRT, Nvidia 2017.
  • Quantizing deep convolutional networks for efficient inference: A whitepaper, Google, 2018.
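The common thread in these papers is linear quantization of weights and activations to low-precision integers. The sketch below shows the simplest symmetric int8 scheme, for orientation only; the calibration methods in the papers above (for example, TensorRT's entropy-based scale selection) are considerably more involved.

```python
def quantize_int8(values):
    """Symmetric linear quantization of floats to int8: one scale, zero-point 0."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to (approximate) floats."""
    return [x * scale for x in q]

q, s = quantize_int8([-1.0, 0.0, 0.25, 1.0])
print(q, s)  # the extreme values map to -127 and 127
```

The quantization error per element is bounded by half the scale, which is why networks with well-behaved weight distributions often tolerate 8-bit inference with little accuracy loss.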

A more complete list is here.

Pruning and Compression

A more complete list is here.
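The baseline most pruning papers compare against is magnitude pruning: zero out the smallest-magnitude weights. A minimal sketch (illustrative only; real methods prune iteratively with retraining, and ties at the threshold may prune slightly more than the requested fraction):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out (roughly) the smallest-magnitude fraction `sparsity` of weights."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

print(prune_by_magnitude([0.1, -0.5, 0.05, 2.0], 0.5))
```

Sparse weights only pay off in hardware when the accelerator can skip the zeros, which is exactly the problem the sparse-CNN FPGA papers above address.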

Neural Architecture Search

A more complete list is here.
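Before reading the NAS literature, it helps to see the problem at its simplest: search over a discrete space of architecture choices, scored by some evaluation metric. Everything below (the search space, the scoring formula) is made up for illustration; real NAS replaces `score` with training and validation, and random search with something smarter.

```python
import random

# Hypothetical toy search space: network depth, width, and kernel size.
SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [1, 3, 5]}

def random_arch(rng):
    """Sample one architecture: a choice per dimension of the space."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def score(arch):
    # Stand-in for "train and evaluate": reward capacity, penalize parameter cost.
    params = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return arch["depth"] + arch["width"] / 16 - params / 5000

def random_search(n_trials=50, seed=0):
    """Random-search NAS: keep the best-scoring sampled architecture."""
    rng = random.Random(seed)
    return max((random_arch(rng) for _ in range(n_trials)), key=score)
```

Random search like this is the standard sanity-check baseline in NAS papers; a proposed search method has to beat it to justify its extra cost.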

Object Detection

A more complete list is here.
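One concept that recurs throughout the object-detection literature is intersection-over-union (IoU), used both to match predicted boxes to ground truth and inside non-maximum suppression. A quick self-contained sketch:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2), x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents clamp to zero when the boxes do not intersect.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # two 2x2 boxes overlapping in a 1x1 cell
```

Detectors such as YOLO (which you will run on our FPGA platforms) typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold like 0.5.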
