🚀 Feature
PyTorch Lightning implementation of the VICReg model, following the style of other SSL models already available in PL Bolts.
Motivation
VICReg is a SOTA model that is frequently used in self-supervised learning benchmarks but is not available in PL Bolts.
The method avoids the collapse problem with two regularization terms applied to both embeddings separately: (1) a variance term that maintains the variance of each embedding dimension above a threshold, and (2) a covariance term that decorrelates each pair of embedding variables. Unlike most other approaches to the same problem, VICReg does not require techniques such as weight sharing between the branches, batch normalization, feature-wise normalization, output quantization, stop gradient, or memory banks, and it achieves results on par with the state of the art on several downstream tasks.
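For context, here is a minimal sketch of the VICReg loss combining the invariance, variance, and covariance terms described above. The function name is illustrative, and the default coefficients and the threshold of 1 follow the values recommended in the original paper; my actual module may differ in details:

```python
# Minimal sketch of the VICReg loss (illustrative, not the final module code).
import torch
import torch.nn.functional as F


def vicreg_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                sim_coeff: float = 25.0, std_coeff: float = 25.0,
                cov_coeff: float = 1.0) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of two augmented views."""
    n, d = z_a.shape

    # Invariance term: mean squared distance between the two embeddings.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance term: hinge loss keeping the standard deviation of each
    # embedding dimension above a threshold of 1.
    std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
    std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
    std_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance term: push the off-diagonal entries of the covariance
    # matrix toward zero to decorrelate each pair of embedding variables.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (n - 1)
    cov_b = (z_b.T @ z_b) / (n - 1)
    off_diag_a = cov_a.flatten()[:-1].view(d - 1, d + 1)[:, 1:].flatten()
    off_diag_b = cov_b.flatten()[:-1].view(d - 1, d + 1)[:, 1:].flatten()
    cov_loss = off_diag_a.pow(2).sum() / d + off_diag_b.pow(2).sum() / d

    return sim_coeff * sim_loss + std_coeff * std_loss + cov_coeff * cov_loss
```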
Pitch
I have already ported my VICReg code from plain PyTorch to PyTorch Lightning, and I wrote the module keeping the style used in PL Bolts for other SOTA self-supervised models (e.g. SimSiam, SimCLR).
I would now like to add this module to the PyTorch Lightning Bolts repo; you can find the code on my repo. Additionally, similar to other SSL models in PL Bolts, the module supports the following datasets: CIFAR10, STL10, and ImageNet.
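A hypothetical usage sketch, mirroring how other SSL modules in PL Bolts are trained; the `VICReg` import path and constructor arguments are assumptions (the module is what this issue proposes), while the datamodule and SimCLR transforms are the existing PL Bolts ones:

```python
import pytorch_lightning as pl
from pl_bolts.datamodules import CIFAR10DataModule
from pl_bolts.models.self_supervised.simclr import (
    SimCLRTrainDataTransform,
    SimCLREvalDataTransform,
)

from vicreg import VICReg  # hypothetical: the proposed module

# Standard PL Bolts datamodule with two-view SSL augmentations.
dm = CIFAR10DataModule(batch_size=256)
dm.train_transforms = SimCLRTrainDataTransform(32)
dm.val_transforms = SimCLREvalDataTransform(32)

# Constructor signature assumed to match other PL Bolts SSL models.
model = VICReg(num_samples=dm.num_samples, batch_size=dm.batch_size, dataset="cifar10")

trainer = pl.Trainer(gpus=1, max_epochs=800)
trainer.fit(model, datamodule=dm)
```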
Additional context
I have pre-trained the model on CIFAR10 (here: WandB eval metrics for CIFAR10), but pre-training on STL10 and ImageNet is still pending.