The project investigates the potential of spiking neural networks (SNNs) compared with more traditional deep learning architectures such as CNNs and RNNs.
In a few words, SNNs [1] are an alternative to conventional artificial neural networks: they mimic the behavior of biological neurons by transmitting information as discrete spikes rather than continuous values. By mirroring these natural processes, SNNs aim to enhance the capabilities of artificial intelligence systems while reducing computational demands [2]. We wrote a broad-audience article on the topic, which can be found here.
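To make the spiking mechanism concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the neuron model most commonly used in SNN frameworks. This is an illustrative toy in plain Python, not code from this repo; the parameter names `beta` and `threshold` are our own choices.

```python
def lif_neuron(inputs, beta=0.9, threshold=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron over time steps.

    The membrane potential leaks by a factor `beta` each step,
    accumulates the input current, and emits a binary spike
    (then resets) whenever it crosses `threshold`.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = beta * v + current    # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)      # emit a spike...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input makes the neuron integrate for a few
# steps and then fire periodically.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The output stream of 0s and 1s, rather than continuous activations, is exactly the "information as spikes" behavior described above.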
This project focuses on applying SNNs to new modalities, specifically assessing their performance on tabular data classification with the Iris dataset, image classification with CIFAR, and time series classification with the Human Activity dataset. We found that SNNs do not compete with other architectures on images or tabular data, but they can achieve results competitive with CNNs and RNNs on time series data. Further experiments are needed to understand the potential of SNNs in this domain. We also wrote a more detailed report on the project, which can be found here.
[1] Maass, W. (1997). Networks of spiking neurons: the third generation of neural network models. Neural networks, 10(9), 1659-1671.
[2] Maass, W., & Schmitt, M. (1999). On the complexity of learning for spiking neurons with temporal coding. Information and Computation, 153(1), 26-46.
Please use Python 3.9 or above (`python>=3.9`).
Install SpikingJelly from source using:

```shell
git clone https://github.com/fangwei123456/spikingjelly.git
cd spikingjelly
pip install -e .
```
Note that installing SpikingJelly via pip is not yet compatible with this repo.
Install the other dependencies from the `requirements.txt` file using:

```shell
pip install -r requirements.txt
```
The first thing to do after installing all the dependencies is to specify the `datasets_path` in `config.py`. Simply create an empty data directory, preferably with two subdirectories, one for SHD and the other for SSC. The `datasets_path` should correspond to these subdirectories.
The datasets will then be downloaded and preprocessed automatically. For example:

```shell
cd SNN-delays
mkdir -p Datasets/SHD
mkdir -p Datasets/SSC
```
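For reference, the corresponding `config.py` entry might look like the sketch below. Only `datasets_path` is documented in this README; everything else about the file's layout is an assumption.

```python
# config.py (sketch): point datasets_path at one of the dataset
# subdirectories created above. Any other settings in the real
# config file are omitted here.
datasets_path = "Datasets/SHD"  # or "Datasets/SSC" to use the SSC dataset
```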
To train a new model as defined by `config.py`, simply use:

```shell
python main.py
```
The loss and accuracy for training and validation at every epoch will be printed to `stdout`, and the best model will be saved to the current directory.
If the `use_wandb` parameter is set to `True`, a more detailed log will be available in the wandb project specified in the configuration.
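To enable the detailed wandb logs, the relevant `config.py` entries might look like the sketch below. Only `use_wandb` appears in this README; the project-name parameter shown here is a hypothetical illustration.

```python
# config.py (sketch): enable Weights & Biases logging.
use_wandb = True              # documented parameter: turn wandb logging on
wandb_project = "snn-delays"  # hypothetical parameter name and project name
```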