Accenture_Hybrid_Guided_VAE

The Hybrid Guided VAE (HG-VAE) is an open-source library built on PyTorch.

HG-VAE learns disentangled feature representations of event-based vision data streams by encoding an event stream with a spiking Variational Auto Encoder (VAE).

The encoded features are then guided with supervised learning to disentangle specific factors, such as the features that characterize different digits; a T-SNE plot of the disentangled encoded representations clusters the digits by class.

The features can then be decoded and visualized with a convolutional neural network decoder.
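As a rough illustration, the sketch below implements a hybrid guided objective in plain (non-spiking) PyTorch: a few latent dimensions are supervised by a small classifier head while the full latent code is trained with the usual reconstruction and KL terms. The class and argument names (GuidedVAE, guide_head, beta, gamma) and the layer sizes are illustrative assumptions, not this repository's API, and the guiding procedure described in the paper is more elaborate than this single classification term; see the training scripts for the actual spiking models.

# Illustrative sketch only -- names, layer sizes, and the dense (non-spiking)
# encoder are assumptions, not this repository's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedVAE(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=32, guided_dims=11, n_classes=11):
        super().__init__()
        self.guided_dims = guided_dims                        # label-guided latent dims
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.guide_head = nn.Linear(guided_dims, n_classes)   # classifies z[:, :guided_dims]

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), self.guide_head(z[:, :self.guided_dims]), mu, logvar

def guided_vae_loss(x, y, x_hat, logits, mu, logvar, beta=1.0, gamma=1.0):
    recon = F.mse_loss(x_hat, x)                                   # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    guide = F.cross_entropy(logits, y)                             # supervised guidance term
    return recon + beta * kl + gamma * guide

The guided slice of the latent code (z[:, :guided_dims]) is the part that would feed a T-SNE plot like the one described above.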

Accenture Labs created the HG-VAE in collaboration with the UCI NMI Lab.

By open-sourcing the components we used to train SNN models with our method, we hope to encourage adoption for other datasets and problem domains, as well as collaboration on improvements to the methods.

For more details, refer to our paper (see How to Cite below).

If you use this library in your own work, please cite it.

Table of Contents

  • Installation
  • Datasets
  • Training Models
  • Loading Trained Models
  • Licensing
  • How to Contribute
  • How to Cite
  • Contacts

Installation

Prerequisites

  • Linux
  • Python 3.6+

Provision a Virtual Environment

Create and activate a virtual environment (venv):

$ python3 -m venv hgvae
$ source hgvae/bin/activate

To clone the repository and install the required dependencies, run the following:

$ git clone https://github.com/kennetms/Accenture_Hybrid_Guided_VAE
$ cd Accenture_Hybrid_Guided_VAE
$ pip install -r requirements.txt

The HG-VAE is built on PyTorch. To install the correct PyTorch version for your machine, see the installation selector at https://pytorch.org/get-started/locally/.

Datasets

DVSGestures

To download the DVSGestures dataset used to train and run Hybrid Guided VAE models, run the following:

$ cd data
$ wget https://www.dropbox.com/s/3v0t4hn0c9asior/dvs_zipped.zip\?dl=0
$ unzip 'dvs_zipped.zip?dl=0'
$ cd ..

N-MNIST

The torchneuromorphic library will automatically download the N-MNIST dataset if it is not on your local machine when you try to train or run an N-MNIST model.
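For reference, creating the N-MNIST dataloaders with torchneuromorphic usually looks like the sketch below. The function name, arguments, and dataset path are assumptions that may differ between torchneuromorphic versions; check the call used in nmnist/train_nmnist.py for the exact usage.

# Hypothetical sketch -- argument names and the dataset path are assumptions;
# see nmnist/train_nmnist.py for the call this library actually uses.
from torchneuromorphic.nmnist import nmnist_dataloaders

train_loader, test_loader = nmnist_dataloaders.create_dataloader(
    root='data/nmnist/n_mnist.hdf5',   # downloaded and packed on first use
    batch_size=32,
    num_workers=4)

frames, labels = next(iter(train_loader))  # time-binned event frames and digit labels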

Training Models

Currently, the HG-VAE library supports training three kinds of models.

Two use the DVSGestures dataset, while the third uses the N-MNIST dataset.

To train and evaluate on DVSGestures, run the following:

$ cd dvs_gestures
$ python train_gestures.py

To train and evaluate with DVSGestures guided on lighting conditions:

$ cd dvs_gesture_lighting
$ python train_lights.py

To train and evaluate on the N-MNIST digits:

$ cd nmnist
$ python train_nmnist.py

To train a model with SLAYER-Loihi that is compatible with Loihi, run the Jupyter notebook in the slayer_loihi_example folder:

$ cd slayer_loihi_example
$ jupyter notebook Loihi_Simulator_training.ipynb

Models typically take several hours to train, with intermediate results and models stored in the logs/ directory.

Loading Trained Models

Example models with their parameters can be found in the subfolders named

example_model

Checkpoints with saved models are placed in the logs directory.

Intermediate results can be viewed with tensorboard.

For example:

$ tensorboard --logdir logs/train_gestures/default/Mar23_20-57-42 --port 6007 --bind_all

The files in the parameters/ directory can be edited to load a model from a saved checkpoint.

For example, to resume a model from a checkpoint, change the line

resume_from: None

so that it points to the checkpoints folder of the model you want to resume from. For example, to use the provided DVSGestures model:

resume_from: ../dvs_gestures/example_model/checkpoints/

To train a model from scratch, change the line back to:

resume_from: None
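If you want to inspect a saved checkpoint outside of the training scripts, a generic PyTorch pattern such as the sketch below works. The file name and the keys inside the checkpoint are assumptions, so check how the training scripts save and restore state before relying on them.

# Generic PyTorch checkpoint inspection -- the file name and the keys inside
# the loaded object are assumptions, not guaranteed by this repository.
import torch

ckpt_path = 'dvs_gestures/example_model/checkpoints/last.ckpt'  # assumed file name
ckpt = torch.load(ckpt_path, map_location='cpu')

if isinstance(ckpt, dict):
    print(list(ckpt.keys()))            # e.g. model weights, optimizer state, epoch
    # A stored state_dict can be loaded into a model with the same architecture:
    # model.load_state_dict(ckpt['state_dict'])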

Licensing

These assets are licensed under the Apache 2.0 License.

How to Contribute

We welcome community contributions, especially for new models, improvements, and documentation.

If you'd like to contribute your work to the repository, you can do so by opening a pull request.

How to Cite

If you use or adopt the models, code, or methods presented here, please cite our work as follows:

@inproceedings{10.1145/3517343.3517372,
author = {Stewart, Kenneth and Danielescu, Andreea and Shea, Timothy and Neftci, Emre},
title = {Encoding Event-Based Data With a Hybrid SNN Guided Variational Auto-Encoder in Neuromorphic Hardware},
year = {2022},
isbn = {9781450395595},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3517343.3517372},
doi = {10.1145/3517343.3517372},
abstract = {Neuromorphic hardware equipped with learning capabilities can adapt to new, real-time data. While models of Spiking Neural Networks (SNNs) can now be trained using gradient descent to reach an accuracy comparable to equivalent conventional neural networks, such learning often relies on external labels. However, real-world data is unlabeled which can make supervised methods inapplicable. To solve this problem, we propose a Hybrid Guided Variational Autoencoder (VAE) which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation using an SNN. These representations can be used as an embedding to measure data similarity and predict labels in real-world data. We show that the Hybrid Guided-VAE achieves 87% classification accuracy on the DVSGesture dataset and it can encode the sparse, noisy inputs into an interpretable latent space representation, visualized through T-SNE plots. We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time learning from real-world event data.},
booktitle = {Neuro-Inspired Computational Elements Conference},
pages = {88–97},
numpages = {10},
keywords = {event-based sensing, spiking neural networks, neuromorphic computing, generative models},
location = {Virtual Event, USA},
series = {NICE 2022}
}

Contacts

Andreea Danielescu
Future Technologies, Accenture Labs
[email protected]

Kenneth Stewart
PhD Candidate, University of California, Irvine
[email protected]
