
C++ translations (from Python) of the "Make Your Own Neural Network" book code, using cBLAS, cuBLAS and Intel® MKL


cpp_neural_network_mnist

This is a coding experiment to compare the speed of a C++ implementation for training an MNIST network against a Python implementation using NumPy/Scikit in a Jupyter notebook.

The original code comes from the "Code for the Make Your Own Neural Network book" repository by Tariq Rashid: https://github.com/makeyourownneuralnetwork/makeyourownneuralnetwork

The MNIST datasets for training and testing the neural network can be found here: https://pjreddie.com/projects/mnist-in-csv/
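
Each CSV record is a label followed by the 784 pixel values of a 28×28 image. As a minimal sketch (not code from this repository), reading one record in C++ could look like this; the 0.01..1.0 input scaling follows the book's convention:

#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: parse one CSV line of the form
// "label,p0,p1,...,p783" into a label and 784 scaled inputs.
struct Record {
    int label;
    std::vector<double> pixels;
};

Record parse_line(const std::string& line) {
    Record rec;
    std::stringstream ss(line);
    std::string field;
    std::getline(ss, field, ',');  // first field is the label
    rec.label = std::stoi(field);
    while (std::getline(ss, field, ',')) {
        // Scale 0..255 into 0.01..1.0 to keep inputs away from zero.
        rec.pixels.push_back(std::stod(field) / 255.0 * 0.99 + 0.01);
    }
    return rec;
}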

The same training is performed with different flavours of the code:

BLAS

Uses a BLAS library, through the cblas interface, for the matrix operations. The library comes preinstalled with the OS (here, the macOS Accelerate framework).
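
All flavours funnel the training math through the same CBLAS calls; a minimal sketch of the central cblas_dgemm call (illustrative only, not lifted from mnist.cpp):

#include <cblas.h>
#include <vector>

int main() {
    // C (2x2) = 1.0 * A (2x3) * B (3x2) + 0.0 * C, row-major doubles.
    std::vector<double> A = {1, 2, 3,
                             4, 5, 6};
    std::vector<double> B = {7,  8,
                             9, 10,
                            11, 12};
    std::vector<double> C(4, 0.0);

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,           // M, N, K
                1.0, A.data(), 3,  // alpha, A, lda
                B.data(), 2,       // B, ldb
                0.0, C.data(), 2); // beta, C, ldc
    // C now holds {58, 64, 139, 154}.
}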

BLIS

BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries. The library was cloned from GitHub and built with:

./configure --prefix=/usr/local --enable-cblas -t pthreads CFLAGS="-std=c11 -msse4.2 -mfpmath=sse -O3" CC=clang auto
make -j8
sudo make install

MKL

Uses the Intel® Math Kernel Library and its cblas interface, with matrix operations optimized for Intel processors.
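
MKL exposes the same CBLAS signatures, so one code path can serve all CPU flavours. A plausible sketch of what the -DTARGET_CBLAS / -DTARGET_MKL flags from the build commands below might select (an assumption about the code layout, not a quote from mnist.cpp):

#if defined(TARGET_MKL)
#include <mkl.h>    // MKL ships its own CBLAS declarations
#elif defined(TARGET_CBLAS)
#include <cblas.h>  // Accelerate, BLIS, or any other CBLAS provider
#endif

// The same cblas_dgemm(...) calls then compile unchanged against
// whichever backend the compiler command line selects.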

CUDA and cuBLAS

Uses NVIDIA CUDA and NVIDIA cuBLAS to perform training on a GPU.
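
For comparison, a minimal sketch of the same GEMM through cuBLAS (illustrative only, not lifted from mnist_cublas.cu). Note that cuBLAS expects column-major storage and device-resident buffers, so the data must be copied to the GPU first:

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    // C (2x2) = A (2x3) * B (3x2), stored column-major for cuBLAS.
    std::vector<double> A = {1, 4, 2, 5, 3, 6};
    std::vector<double> B = {7, 9, 11, 8, 10, 12};
    std::vector<double> C(4);

    double *dA, *dB, *dC;
    cudaMalloc(&dA, A.size() * sizeof(double));
    cudaMalloc(&dB, B.size() * sizeof(double));
    cudaMalloc(&dC, C.size() * sizeof(double));
    cudaMemcpy(dA, A.data(), A.size() * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), B.size() * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                2, 2, 3,       // M, N, K
                &alpha, dA, 2, // A, lda
                dB, 3,         // B, ldb
                &beta, dC, 2); // C, ldc
    cudaMemcpy(C.data(), dC, C.size() * sizeof(double), cudaMemcpyDeviceToHost);
    // C (column-major) now holds {58, 139, 64, 154}.

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
}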

Building

cblas:

clang++ mnist.cpp -I /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers/ -lcblas -std=c++17 -msse4.2 -mfpmath=sse -pthread -O3 -DTARGET_CBLAS

blis:

clang++ mnist.cpp -I /usr/local/include/blis /usr/local/lib/libblis.a -std=c++17 -msse4.2 -mfpmath=sse -pthread -O3 -DTARGET_CBLAS

MKL:

clang++ mnist.cpp ${MKLROOT}/lib/libmkl_intel_ilp64.a ${MKLROOT}/lib/libmkl_sequential.a ${MKLROOT}/lib/libmkl_core.a -lpthread -lm -ldl -std=c++17 -msse4.2 -mfpmath=sse -pthread  -DMKL_ILP64 -m64 -I${MKLROOT}/include -O3 -DTARGET_MKL 

CUDA:

nvcc mnist_cublas.cu -lcublas -O3 -Xptxas -O3,-v

Running

./a.out
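
The timings in the table below are presumably wall-clock measurements around the training and testing phases. A minimal sketch of such a harness with std::chrono (train() and test() are hypothetical stand-ins for the repository's actual functions):

#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for the repository's training/testing phases.
void train() { /* run the training epochs */ }
double test() { return 0.0; /* evaluate and return the performance */ }

int main() {
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    train();
    auto t1 = clock::now();
    double performance = test();
    auto t2 = clock::now();

    std::printf("performance: %.4f\n", performance);
    std::printf("train: %.3f s, test: %.3f s\n",
                std::chrono::duration<double>(t1 - t0).count(),
                std::chrono::duration<double>(t2 - t1).count());
}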

Performance

Flavour   Performance   Train Time [s]   Test Time [s]
cblas     0.9668        37.192           0.200
blis      0.9667        17.471           0.122
MKL       0.9664        16.406           0.098
cuBLAS    0.9624        66.196           0.735
Python    0.9668        260.706          1.362

MKL is currently about four times faster than cuBLAS (16.4 s vs. 66.2 s to train, after refactoring and unifying the cblas/MKL code), so I guess there is still room for improvement in my CUDA implementation.

Hardware used:
MacBook Pro (15-inch, 2018), 2.9 GHz Intel Core i9
GeForce GTX 1080 Ti
