Authors: David Ruhe, Johannes Brandstetter, Patrick Forré
Related work:
- Clifford Group-Equivariant Simplicial Message Passing Networks
- Clifford-Steerable CNNs
- Geometric Algebra Transformer
- Multivector Neurons: Better and Faster O(n)-Equivariant Clifford GNNs
We introduce Clifford Group Equivariant Neural Networks: a novel approach for constructing O(n)- and E(n)-equivariant models.
Requirements:
- Python 3.10.8
- torch 1.13.1+cu116
- PyYAML 6.0
- scikit-learn 1.2.2
- h5py 3.8.0
- tqdm 4.65.0
Check `notebooks/tutorial.ipynb` for a short introduction to the Clifford equivariant layers. There's also a tutorial given at .
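For a taste of the algebra the layers build on, here is a minimal, self-contained sketch (not the repository's implementation, which lives in `algebra/`) of the geometric product in Cl(2,0) and of rotor-based rotation. The property it demonstrates, R(xy)R̃ = (RxR̃)(RyR̃), is the Clifford-group equivariance of the geometric product that the networks exploit:

```python
import math

# A multivector in Cl(2,0) as coefficients [scalar, e1, e2, e12],
# with e1*e1 = e2*e2 = 1 and e12 = e1*e2.
def gp(a, b):
    """Geometric product of two multivectors."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return [
        a0*b0 + a1*b1 + a2*b2 - a12*b12,   # scalar part
        a0*b1 + a1*b0 - a2*b12 + a12*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b12 - a12*b1,   # e2 part
        a0*b12 + a12*b0 + a1*b2 - a2*b1,   # e12 part
    ]

def rotor(theta):
    """Unit rotor R = cos(theta/2) - sin(theta/2) e12 (counterclockwise rotation)."""
    return [math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2)]

def reverse(x):
    """Reversion: flips the sign of the bivector part in Cl(2,0)."""
    return [x[0], x[1], x[2], -x[3]]

def rotate(R, x):
    """Apply the rotor sandwich R x R~ to a multivector x."""
    return gp(gp(R, x), reverse(R))
```

Rotating the vector e1 by 90 degrees yields e2, and the geometric product commutes with rotation: `rotate(R, gp(x, y))` equals `gp(rotate(R, x), rotate(R, y))` for any multivectors x, y.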
Repository structure:
- `algebra/`: Contains the Clifford algebra implementation.
- `configs/`: Contains the configuration files.
- `data/`: Contains the data loading scripts.
- `engineer/`: Contains the training and evaluation scripts.
- `models/`: Contains model and layer implementations.
- `notebooks/`: Contains the tutorial notebook.
Set a `DATAROOT` environment variable, e.g.:

```shell
export DATAROOT=./datasets/
```

For the signed volumes and convex hull experiments, run `data/o3.py` to generate the data.
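For intuition about the signed volumes target: the signed volume of a tetrahedron is the determinant of its edge vectors divided by 3!, which flips sign under reflection. A minimal pure-Python sketch of generating such data (an illustration only; the exact format produced by `data/o3.py` may differ):

```python
import random

def signed_volume(p):
    """Signed volume of a tetrahedron given as four 3D points:
    det([p1 - p0, p2 - p0, p3 - p0]) / 3!."""
    a = [p[1][i] - p[0][i] for i in range(3)]
    b = [p[2][i] - p[0][i] for i in range(3)]
    c = [p[3][i] - p[0][i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
         - a[1] * (b[0] * c[2] - b[2] * c[0])
         + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det / 6.0

# Toy dataset: random tetrahedra and their signed-volume labels.
random.seed(0)
data = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
        for _ in range(1024)]
labels = [signed_volume(p) for p in data]
```

Swapping two vertices negates the label, which is exactly the O(3)-pseudoscalar behavior the model has to learn.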
For the signed volumes experiment:

```shell
python o3.py -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml -C configs/dataset/o3.yaml -C configs/model/o3_cgmlp.yaml --trainer.max_steps=131072 --trainer.val_check_interval=1024 --dataset.batch_size=128 --dataset.num_samples=65536 --model.hidden_features=96 --model.num_layers=4 --optimizer.lr=0.001
```

For the convex hulls experiment:

```shell
python hulls.py -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml -C configs/dataset/hulls.yaml -C configs/model/hulls_cgmlp.yaml --trainer.max_steps=131072 --trainer.val_check_interval=1024 --dataset.batch_size=128 --dataset.num_samples=65536 --model.hidden_features=32 --model.num_layers=4 --optimizer.lr=0.001
```

For the O(5) regression experiment:

```shell
python o5_regression.py -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml -C configs/dataset/o5_regression.yaml -C configs/model/o5_cgmlp.yaml --trainer.max_steps=131072 --trainer.val_check_interval=1024 --dataset.batch_size=32 --dataset.num_samples=50000 --optimizer.lr=0.001
```

For the n-body experiment:

```shell
python nbody.py -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml -C configs/dataset/nbody.yaml -C configs/model/nbody_cggnn.yaml --trainer.val_check_interval=128 --trainer.max_steps=131072 --dataset.batch_size=100 --dataset.num_samples=3000 --optimizer.lr=0.004 --optimizer.weight_decay=0.0001
```

For the top tagging experiment (distributed via `torchrun`):

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --standalone --nproc_per_node=1 top_tagging.py -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml -C configs/dataset/top_tagging.yaml -C configs/model/lorentz_cggnn.yaml --trainer.max_steps=331126 --trainer.val_check_interval=8192 --dataset.batch_size=32 --optimizer.lr=0.001
```
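All runs above follow the same pattern: `-C` config files are overlaid left to right, and dotted CLI flags override individual keys afterwards. A hypothetical sketch of that overlay logic (an illustration only; the repository's actual config loader may differ):

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`; later values win."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

def apply_flag(cfg, dotted_key, value):
    """Set e.g. 'trainer.max_steps' inside a nested config dict."""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value
    return cfg

# Stand-ins for -C configs/engineer/trainer.yaml -C configs/optimizer/adam.yaml
trainer = {"trainer": {"max_steps": 1000, "val_check_interval": 512}}
adam = {"optimizer": {"lr": 3e-4, "weight_decay": 0.0}}
cfg = deep_merge(trainer, adam)

# Stand-ins for --trainer.max_steps=131072 --optimizer.lr=0.001
apply_flag(cfg, "trainer.max_steps", 131072)
apply_flag(cfg, "optimizer.lr", 0.001)
```

Keys not touched by a flag (here `trainer.val_check_interval`) keep their values from the config files.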
If you found this code useful, please cite our paper:
```bibtex
@inproceedings{ruhe2023clifford,
  title={Clifford Group Equivariant Neural Networks},
  author={Ruhe, David and Brandstetter, Johannes and Forr{\'e}, Patrick},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=n84bzMrGUD}
}
```