"Sometimes, the best way to learn how something works is to build it yourself."
PyCandle 2 is not just another deep learning library: it's your gateway to understanding what happens under the hood of frameworks like PyTorch. Designed for students, researchers, and deep learning enthusiasts, PyCandle lets you dive deep into the mathematics and magic of neural networks, gradients, and computational graphs.
Built entirely with NumPy (and optional CuPy for GPU acceleration), PyCandle is minimal, scalable, and, most importantly, educational.
✨ This is an improved and enhanced version of the original PyCandle library, now with full computational graph support, extended modules, and GPU acceleration!
- Build and experiment with neural networks, layer by layer.
- Mimics the PyTorch API to keep learning smooth and intuitive.
- A fully functional computational graph that tracks gradients for backpropagation.
- Support for Linear, Convolutional, Embedding, BatchNorm, LayerNorm, and more.
- Key activation functions: ReLU, Sigmoid, Tanh, Leaky ReLU, Softmax, and GeLU.
- Gradient-based optimizers: SGD, NAG, RMSProp, and Adam.
- Seamless GPU acceleration using CuPy.
- Every operation, from matrix multiplications to activations, integrates smoothly into the computational graph.
- Backpropagation is automatic and transparent, allowing you to see how gradients flow through your network.
import candle
import candle.nn as nn
from candle.utils.data import DataLoader
from candle.utils import accuracy
from candle.optim import SGD, ADAM, RMSProp, NAG
from candle import Tensor
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        logits = self.fc2(x)
        return logits
# Instantiate model, loss, and optimizer
model = SimpleNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = ADAM(model, lr=0.001)  # ADAM is imported from candle.optim above
if __name__ == '__main__':
    # x: a batch of flattened input images; target: the matching labels (loaded elsewhere)
    logits = model(x)
    model.zero_grad()
    loss = loss_fn(logits, target)
    loss.backward()
    optimizer.step()
model.to('gpu') # Switch from CPU to GPU seamlessly
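The DataLoader and accuracy helpers imported above can be combined with these pieces into a full training loop. The snippet below is only an illustrative sketch, not PyCandle's canonical API: it assumes a train_loader that yields (inputs, targets) batches and an accuracy(logits, targets) helper that returns a float, which may differ from the library's exact signatures.

# Hedged sketch of a training loop built from the pieces above.
# Assumes train_loader yields (inputs, targets) batches and
# accuracy(logits, targets) returns a float.
for epoch in range(5):
    total_acc, num_batches = 0.0, 0
    for inputs, targets in train_loader:
        model.zero_grad()                  # clear gradients from the previous step
        logits = model(inputs)             # forward pass
        loss = loss_fn(logits, targets)    # cross-entropy loss
        loss.backward()                    # backpropagate through the computational graph
        optimizer.step()                   # update the parameters
        total_acc += accuracy(logits, targets)
        num_batches += 1
    print(f"epoch {epoch}: accuracy = {total_acc / num_batches:.4f}")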
At the heart of PyCandle is the Tensor: a flexible, gradient-aware structure that integrates seamlessly into the computational graph.
a = Tensor([1, 2, 3], requires_grad=True)
b = Tensor([4, 5, 6])
c = a + b # Automatically builds the graph
gradients = c.backward() # Computes gradients
print("Gradients:\n", gradients[a])
>>> Gradients:
>>> [1. 1. 1.]
The Tensor class supports arithmetic operations, activation functions, and more:
x = Tensor([[1, -2], [3, -4]], requires_grad=True)
y = Tensor.relu(x) # Apply ReLU activation
z = y ** 2 # Element-wise square
gradients = z.backward() # Computes gradients through the computational graph
print("Input Tensor:\n", x)
print("Gradients:\n", gradients[x])
>>> Input Tensor:
>>> array([[ 1., -2.],
[ 3., -4.]])
>>> Gradients:
>>> [[2. 0.]
[6. 0.]]
Matrix multiplications and advanced tensor manipulations are also fully supported:
w = Tensor([[1, 2], [3, 4]], requires_grad=True)
v = Tensor([[2, 0], [1, 3]])
result = w @ v # Matrix multiplication
gradients = result.backward() # Backpropagation
print("Result of Matrix Multiplication:\n", result)
print("Gradient of w:\n", gradients[w])
>>> array([[ 4., 6.],
[10., 12.]])
>>> Gradient of w:
>>> [[2. 4.]
[2. 4.]]
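For reference, with an implicit all-ones upstream gradient, the derivative of result = w @ v with respect to w is ones @ v.T, which is exactly the output shown above. A quick plain-NumPy check (independent of PyCandle) confirms the numbers:

import numpy as np

w = np.array([[1., 2.], [3., 4.]])
v = np.array([[2., 0.], [1., 3.]])
upstream = np.ones((2, 2))     # gradient flowing back into the product
grad_w = upstream @ v.T        # chain rule for matrix multiplication
print(grad_w)                  # [[2. 4.] [2. 4.]] -- matches the gradient above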
Switch seamlessly between CPU and GPU using CuPy:
a = Tensor([[6.0, 2.0], [-1.0, 4.0]], requires_grad=True).to('gpu')
b = Tensor([[2.0, 10.0], [1.0, 3.0]]).to('gpu')
c = a @ b # GPU-based matrix multiplication
gradients = c.backward()
print("Result on GPU:\n", c)
print(c.device)
>>> array([[14., 66.],
[ 2., 2.]])
>>> gpu
- Fully Connected: Linear
- Convolutional: Conv
- Normalization: BatchNorm, LayerNorm
- Regularization: Dropout
- Submodules: ModuleList, Embedding (a usage sketch follows this list)
- Recurrent Networks: coming soon: RNN, LSTM, and GRU!
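Because PyCandle mimics the PyTorch API, these modules compose in the familiar way. The snippet below is only a sketch: the constructor arguments (embedding size, normalized shape, dropout probability) are assumed to follow PyTorch's conventions and may differ slightly in PyCandle.

class TinyTokenModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # token ids -> dense vectors
        self.norm = nn.LayerNorm(embed_dim)               # normalize each embedding
        self.drop = nn.Dropout(0.1)                       # regularization
        self.blocks = nn.ModuleList([                     # a small stack of Linear + ReLU
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
        ])
        self.head = nn.Linear(embed_dim, num_classes)     # per-token output logits

    def forward(self, token_ids):
        x = self.drop(self.norm(self.embed(token_ids)))
        for block in self.blocks:
            x = block(x)
        return self.head(x)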
Choose from classic optimization algorithms:
- SGD: Stochastic Gradient Descent
- NAG: Nesterov Accelerated Gradient
- RMSProp: Root Mean Square Propagation
- Adam: Adaptive Moment Estimation (see the update-rule sketch below)
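To make the last of these concrete, here is the standard Adam update rule written in plain NumPy. It illustrates the algorithm itself, not PyCandle's internal implementation; the hyperparameter names are the conventional ones from the Adam paper.

import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a NumPy array; returns the new param and state."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for the mean
    v_hat = v / (1 - beta2 ** t)              # bias correction for the variance
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v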
class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv(1, 8, (3, 3))  # Convolutional layer: 1 input channel, 8 output channels
        self.relu1 = nn.ReLU()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(8 * 26 * 26, 10)  # Fully connected; 28x28 inputs with a 3x3 kernel give 26x26 feature maps

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.flatten(x)
        return self.fc1(x)
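As a quick smoke test, you might push a random MNIST-shaped batch through the network. This sketch assumes the Tensor constructor accepts a NumPy array and that Conv expects (batch, channels, height, width) inputs; PyCandle's actual conventions may differ.

import numpy as np

model = ConvNet()
dummy_images = Tensor(np.random.rand(4, 1, 28, 28))  # 4 fake 28x28 grayscale images
logits = model(dummy_images)                          # expected shape: (4, 10)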
- Add RNN, LSTM, and GRU modules.
- Verify the correctness of the convolutional operations.
- Verify the correctness of NAG, RMSProp, and Adam.
- Add computational-graph support for indexing, stack, split, cat, min, max, and similar operations.
- Add a state dictionary for saving and loading model parameters.
- Replace CuPy with Triton or CUDA kernels.
- Replace NumPy with a C++ backend.
- Make the Embedding.forward() method more efficient; training transformers is currently impractical because of its low performance.
- Redesign the library.
Start exploring the building blocks of deep learning today! Whether you're a student eager to understand neural networks or a researcher experimenting with custom implementations, PyCandle is your perfect companion.
🔥 Light up your learning with PyCandle! 🔥