# Deep-Cloud

### Please refer to my blog Deep Cloud for the implementation details.

Deep learning algorithms are known to be computationally expensive and can take several days to train, depending on the size of the dataset. To speed up training, two approaches are commonly used: GPUs and distributed computing via MapReduce. In this project I have tried to combine both of these processing paradigms.

Features of the project:

  • mrjob is used as a MapReduce abstraction to implement a two-layer neural network.
  • Each mapper (an individual machine) is equipped with a GPU and uses Theano/TensorFlow for GPU multi-threading.
  • Gradients are computed in the mappers through backpropagation.
  • The reducer performs the weight updates.
  • The number of epochs equals the number of times the step function is called.
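The map/reduce split above can be sketched as follows. This is a minimal NumPy simulation of the idea, not the repository's mrjob/Theano code: the `mapper`, `reducer`, shard count, and the toy sin-fitting task are all illustrative assumptions. Each "mapper" runs backpropagation on its own data shard, and the "reducer" averages the shards' gradients and updates the shared weights; one full map/reduce pass corresponds to one epoch.

```python
import numpy as np

# Hypothetical sketch: a two-layer network trained by simulating the
# MapReduce split locally (no mrjob, Theano, or GPU required).

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden, n_out):
    # Small random weights for a two-layer (hidden + output) network.
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
    }

def mapper(shard_X, shard_y, w):
    """Backpropagation on one data shard; emits that shard's gradients."""
    h = np.tanh(shard_X @ w["W1"])           # hidden layer (tanh)
    out = h @ w["W2"]                        # linear output layer
    err = out - shard_y                      # d(squared error)/d(out)
    dW2 = h.T @ err / len(shard_X)
    dh = (err @ w["W2"].T) * (1 - h ** 2)    # backprop through tanh
    dW1 = shard_X.T @ dh / len(shard_X)
    return {"W1": dW1, "W2": dW2}

def reducer(grads, w, lr=0.2):
    """Average the mappers' gradients and apply one weight update."""
    for k in w:
        w[k] -= lr * np.mean([g[k] for g in grads], axis=0)
    return w

# Toy task: fit y = sin(x) on 4 shards; one epoch = one map/reduce step.
X = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.sin(X)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = init_weights(1, 16, 1)
for epoch in range(1000):
    grads = [mapper(sx, sy, w) for sx, sy in shards]   # map phase
    w = reducer(grads, w)                              # reduce phase

mse = float(np.mean((np.tanh(X @ w["W1"]) @ w["W2"] - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

In the real mrjob version, the gradient computation inside `mapper` would be handed to Theano/TensorFlow so it runs on each machine's GPU, while the reduce step stays cheap (a sum/average of gradient arrays).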
