How to use
At the moment the code base of the project consists of a single Visual Studio (2019) solution, DeepLearning, containing 3 VS projects:
- DeepLearning, containing the code of the actual machine-learning algorithms;
- DeepLearningTest, containing tests for those algorithms (via the VSTest framework);
- NetrunnerConsole, which produces a standalone executable to run training of neural networks from the console.
All the functions, classes and methods have rather comprehensive summary sections. Most of the methods/components also have dedicated test suites in the testing project, which, besides serving the obvious validation/verification purposes, provide examples of how the corresponding functionality is supposed to be invoked.
The solution requires CUDA SDK 11.7.
Currently there are 3 classes representing the 3 types of layers one can use to build a neural net:
- NLayer (see NLayer.h) - a fully connected neural layer;
- CLayer (see CLayer.h) - a convolutional neural layer;
- PLayer (see PLayer.h) - a "pooling" layer.
The net itself is represented by the class Net (see Net.h).
NLayer is a fully connected layer. To instantiate it, one needs to provide 3 pieces of information: the number of input neurons, the number of output neurons and the type of activation function to use. For example, this line of code
NLayer(100, 10, ActivationFunctionId::SIGMOID)
constructs a fully connected neural layer with 100 input neurons and 10 output neurons, using "sigmoid" as its activation function. By default, the weights and biases of the layer are initialized with uniformly distributed pseudo-random floating-point values from [-1, 1]. The constructor also provides 3 default parameters allowing for a more customized initialization. Besides the "sigmoid" activation, one can use the "hyperbolic tangent" (ActivationFunctionId::TANH), "rectified linear unit" (ActivationFunctionId::RELU) and "soft-max" (ActivationFunctionId::SOFTMAX) activation functions (see ActivationFunction.h for details).
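For instance, a softmax output layer of a hypothetical 10-class classifier over flattened 28x28 images (the sizes here are purely illustrative, not prescribed by the library) could be constructed as
NLayer(784, 10, ActivationFunctionId::SOFTMAX)
with the weights and biases again falling back to the default initialization.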
CLayer is a convolutional layer. To instantiate this type of layer, one should provide the input data (image) size (3d: number of channels, height, width), the filter window size (2d: height, width), the number of filters to use (this defines the number of channels in the layer's output) and the type of activation function to use. For example, the following line of code
CLayer({1,28,28}, {5,5}, 20, ActivationFunctionId::RELU)
instantiates a convolutional layer that takes a single-channel image of size 28x28 as its input and applies 20 filters (convolution kernels) of size 1x5x5, which implies that its output will have size 20x24x24. Optionally, one can specify the "paddings" and "strides" used during the convolution via a couple of default parameters of the constructor.
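The relation between these sizes follows the usual convolution arithmetic; the helper below is an illustrative sketch (not part of the library), assuming zero padding and unit stride as the defaults:
// Spatial output size of a convolution along one dimension.
int conv_out_size(int in_size, int kernel_size, int padding = 0, int stride = 1)
{
    return (in_size + 2 * padding - kernel_size) / stride + 1;
}
// conv_out_size(28, 5) == 24, which matches the 20x24x24 output above.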
PLayer implements the pooling operation; the current implementation supports "min", "max" and "average" pooling. To instantiate it, one should specify 3 parameters: the 3d size of the input data (image), the 2d size of the "pool" window (which is applied to each channel of the input data) and the "pool type" identifier (PoolTypeId::MAX, PoolTypeId::MIN or PoolTypeId::AVERAGE).
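For example, a max-pooling layer consuming the 20x24x24 convolution output from above could presumably be instantiated as
PLayer({20, 24, 24}, {2, 2}, PoolTypeId::MAX)
(the brace-initializer syntax is inferred from the CLayer example and is an assumption). Assuming the window slides with a stride equal to its own size, each 2x2 window halves the spatial dimensions, producing an output of size 20x12x12.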
The code below gives an example of how a simple neural network can be assembled layer by layer:
Net net;
auto size_in_next = in_data_size; // 3d size of the input images, e.g. {1, 28, 28}
// Two convolution + max-pooling stages ("run_long_test" switches between a smaller and a larger net).
size_in_next = net.append_layer<CLayer<D>>(size_in_next, Index2d{ 5 }, run_long_test ? 20 : 5, ActivationFunctionId::RELU);
size_in_next = net.append_layer<PLayer<D>>(size_in_next, Index2d{ 2 }, PoolTypeId::MAX);
size_in_next = net.append_layer<CLayer<D>>(size_in_next, Index2d{ 5 }, run_long_test ? 40 : 10, ActivationFunctionId::RELU);
size_in_next = net.append_layer<PLayer<D>>(size_in_next, Index2d{ 2 }, PoolTypeId::MAX);
// Flatten the 3d output (coord_prod() takes the product of its dimensions) and append two fully connected layers.
size_in_next = net.append_layer<NLayer<D>>(size_in_next.coord_prod(), 100, ActivationFunctionId::RELU, Real(-1), Real(1), true);
size_in_next = net.append_layer<NLayer<D>>(size_in_next.coord_prod(), out_size, ActivationFunctionId::SOFTMAX, Real(-1), Real(1), true);
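Each append_layer call returns the output size of the layer it has just appended, which then serves as the input size of the next layer; for the fully connected layers the 3d size is first flattened via coord_prod(). The variables in_data_size, out_size and run_long_test, as well as the template parameter D, come from the surrounding (apparently test) code and are not defined in this snippet; the three trailing arguments of the NLayer calls are, presumably, the optional initialization parameters mentioned earlier.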