Integrate with PS
Parameter servers provide a way to communicate efficiently between learners. Minerva can be integrated with Mu's parameter server to perform distributed machine learning.
- Download and compile the parameter server:

```bash
git clone https://github.com/hjk41/parameter_server
cd parameter_server
make minerva
```

If everything goes well, you will get a file named `libminervaps.a` under the `build` directory.
Writing distributed applications is different from writing single-machine ones. Fortunately, Minerva provides a wrapper that makes this task easy: implement three functions and you are ready to go. As an example, you can read {minerva_dir}/apps/ps/mnist_mlp_ps.cpp.
To leverage PS, you need to implement the following functions:

```cpp
void InitLayer(const std::string& name, float* data, size_t size);
void UpdateLayer(const std::string& name, float* weight, float* grad, size_t size);
int MinervaWorkerMain(int workerId, int numWorkers, int argc, char** argv);
```
`InitLayer` and `UpdateLayer` are executed on the parameter server, while `MinervaWorkerMain` is executed on the workers. As the names suggest, the first two functions initialize and update a layer, while the last is the main function run by each worker. The `workerId` and `numWorkers` parameters can be used to decide which part of the input each worker should work on.
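Here is a minimal sketch of the three functions, assuming a single layer trained with plain SGD. The random initialization, the learning rate, and the even data sharding are illustrative assumptions rather than anything prescribed by Minerva; see {minerva_dir}/apps/ps/mnist_mlp_ps.cpp for a complete program.

```cpp
#include <cstddef>
#include <random>
#include <string>

// Runs on the parameter server: fill in the initial weights for the layer.
void InitLayer(const std::string& name, float* data, size_t size) {
  std::mt19937 gen(0);                          // fixed seed, illustrative choice
  std::normal_distribution<float> dist(0.0f, 0.01f);
  for (size_t i = 0; i < size; ++i) {
    data[i] = dist(gen);                        // small random initial weights
  }
}

// Runs on the parameter server: apply one SGD step using the gradient
// pushed by the workers.
void UpdateLayer(const std::string& name, float* weight, float* grad, size_t size) {
  const float lr = 0.01f;                       // hypothetical learning rate
  for (size_t i = 0; i < size; ++i) {
    weight[i] -= lr * grad[i];
  }
}

// Runs on each worker: use workerId/numWorkers to pick this worker's
// shard of the training data, then run the usual training loop on it.
int MinervaWorkerMain(int workerId, int numWorkers, int argc, char** argv) {
  const size_t num_samples = 60000;             // e.g. the MNIST training set
  size_t shard = num_samples / numWorkers;
  size_t begin = workerId * shard;
  size_t end = (workerId == numWorkers - 1) ? num_samples : begin + shard;
  // ... load samples [begin, end), compute gradients with Minerva,
  // and exchange them with the parameter server ...
  return 0;
}
```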
Once these three functions are implemented, you should be able to compile the code and integrate it with the parameter server:
- Put your program under apps/ps, say apps/ps/myapp.cpp.
- Configure Minerva to support PS: set `BUILD_WITH_PS=1` in configure.in, and set `PS_ROOT` to your parameter server folder (see the snippet after these steps).
- Rebuild: `./build.sh --clean && ./build.sh`

If all goes well, you should get release/apps/myapp, which can then be used by the parameter server.
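For reference, the configuration change in the second step might look like the following, assuming configure.in takes shell-style variable assignments; the `PS_ROOT` path is a placeholder for your own checkout:

```sh
BUILD_WITH_PS=1
PS_ROOT=/path/to/parameter_server
```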