Releases: pulp-platform/pulp-trainlib
pulp-trainlib-v0.4
Release v0.4 of PULP-TrainLib with stable learning primitives in FP32 and FP16.
General features:
- G1) Conv2D, Fully-Connected layers - FP32 and FP16
- G2) DepthWise Convolution, PointWise Convolution (no stride, no padding, no biases) - FP32 and FP16
- G3) Max and Average Pooling
- G4) ReLU activation
- G5) Gradient Descent Optimizer
- G6) MSELoss and CrossEntropyLoss
- G7) Multi-Head Self Attention (FP32, FP16) and RNN (FP32), with enhancements
Check README.md for more details.
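To make the optimizer primitive in the lists above concrete, here is a minimal sketch of a plain gradient-descent step. The function name, argument layout, and flat-array convention are assumptions for illustration, not the PULP-TrainLib API.

```c
#include <stddef.h>
#include <assert.h>
#include <math.h>

/* Illustrative gradient-descent step (not the PULP-TrainLib API):
 * each weight is updated in place as w[i] -= lr * dw[i], where dw
 * holds the gradient of the loss with respect to the weights. */
void sgd_step(float *w, const float *dw, size_t n, float lr)
{
    for (size_t i = 0; i < n; i++) {
        w[i] -= lr * dw[i];
    }
}
```

In a training loop this step runs once per iteration, after the backward pass has filled the gradient buffers.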
pulp-trainlib-v0.3
Release v0.3 of PULP-TrainLib with stable learning primitives in FP32 and FP16.
General features:
- G1) DepthWise Convolution, PointWise Convolution, Conv2D, Fully-Connected layers (no pad, no stride, no biases) - FP32 and FP16
- G2) Max and Average Pooling
- G3) ReLU activation
- G4) Gradient Descent Optimizer
- G5) MSELoss and CrossEntropyLoss
- G6) Multi-Head Self Attention (FP32, FP16) and RNN (FP32), with enhancements
Check README.md for more details.
Also, the TrainLib_Deployer supports G1-5 features and layers.
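As a reference for what the PointWise convolution in G1 computes, here is a sketch of a 1x1 convolution forward pass in CHW layout, with unit stride, no padding, and no bias, matching the constraints in the feature list. Names and signature are illustrative, not the library API.

```c
#include <stddef.h>
#include <assert.h>
#include <math.h>

/* Illustrative PointWise (1x1) convolution forward pass, CHW layout.
 * in:  c_in  x h x wdt input activations
 * w:   c_out x c_in 1x1 filter weights
 * out: c_out x h x wdt output activations
 * Each output pixel is a dot product across input channels at the
 * same spatial position (no stride, no padding, no bias). */
void pointwise_conv_fw(const float *in, const float *w, float *out,
                       size_t c_in, size_t c_out, size_t h, size_t wdt)
{
    for (size_t co = 0; co < c_out; co++) {
        for (size_t p = 0; p < h * wdt; p++) {   /* each spatial position */
            float acc = 0.0f;
            for (size_t ci = 0; ci < c_in; ci++) {
                acc += w[co * c_in + ci] * in[ci * h * wdt + p];
            }
            out[co * h * wdt + p] = acc;
        }
    }
}
```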
pulp-trainlib-v0.2
Release v0.2 of PULP-TrainLib with stable learning primitives in FP32 and FP16.
General features:
- G1) DepthWise Convolution, PointWise Convolution, Conv2D, Fully-Connected layers (no pad, no stride, no biases) - FP32 and FP16
- G2) Max and Average Pooling
- G3) ReLU activation
- G4) Gradient Descent Optimizer
- G5) MSELoss and CrossEntropyLoss
- G6) Multi-Head Self Attention (FP32, FP16) and RNN (FP32)
Check README.md for more details.
Also, the TrainLib_Deployer supports G1-5 features.
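For the MSELoss primitive in G5, a loss function used for training must also produce the gradient of the loss with respect to its input, so the backward pass can start from it. The sketch below shows this for mean squared error, L = (1/N) * sum (y - t)^2, whose input gradient is dL/dy = 2 * (y - t) / N. Function and argument names are illustrative, not the PULP-TrainLib API.

```c
#include <stddef.h>
#include <assert.h>
#include <math.h>

/* Illustrative MSE loss with input gradient (not the library API):
 * returns L = (1/n) * sum (y[i] - t[i])^2 and fills dy[i] with
 * dL/dy[i] = 2 * (y[i] - t[i]) / n, the seed for backpropagation. */
float mse_loss(const float *y, const float *t, float *dy, size_t n)
{
    float loss = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float diff = y[i] - t[i];
        loss += diff * diff;
        dy[i] = 2.0f * diff / (float)n;
    }
    return loss / (float)n;
}
```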
pulp-trainlib-v0.1
First release of PULP-TrainLib with stable learning primitives in FP32 and FP16.
General features:
- G1) DepthWise Convolution, PointWise Convolution, Conv2D, Fully-Connected layers (no pad, no stride, no biases) - FP32 and FP16
- G2) Max and Average Pooling
- G3) ReLU activation
- G4) Gradient Descent Optimizer
- G5) MSELoss and CrossEntropyLoss
- G6) Multi-Head Self Attention and RNN - FP32
Check README.md for more details.
Also, the TrainLib_Deployer supports G1-5 features.