
This notebook plots the attention maps of a vision transformer (ViT) trained on MNIST digits.


Check out my YouTube Video on ViT in JAX


VisionTransformer (ViT) Attention Maps using MNIST

Code walkthrough (YouTube Video)

An attention map for a test image (code); a minimal sketch follows this list

An attention map for query and key images (code); see the second sketch after this list
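
The first notebook overlays an attention heatmap on the input digit. Below is a minimal sketch of the plotting step, assuming you already have one layer's attention weights for a single test image with shape (num_heads, num_tokens, num_tokens), where token 0 is the CLS token and the 28x28 digit is split into 4x4 patches (a 7x7 grid). The function name, patch size, and plotting choices are illustrative assumptions, not taken verbatim from the notebook.

```python
# Sketch: visualize CLS-token attention over image patches for one MNIST digit.
# Assumed shapes (not from the notebook): 28x28 input, 4x4 patches -> 7x7 = 49
# patch tokens plus one CLS token, so `attn` is (num_heads, 50, 50).
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt

def plot_cls_attention(image, attn, grid=7):
    """image: (28, 28) array; attn: (num_heads, 1 + grid*grid, 1 + grid*grid)."""
    # Average over heads, then take the CLS token's attention to the patch tokens.
    cls_to_patches = attn.mean(axis=0)[0, 1:]                # (grid*grid,)
    attn_map = cls_to_patches.reshape(grid, grid)
    # Upsample the patch-level map to image resolution for overlaying.
    attn_map = jax.image.resize(attn_map, image.shape, method="bilinear")

    fig, axes = plt.subplots(1, 2, figsize=(6, 3))
    axes[0].imshow(image, cmap="gray")
    axes[0].set_title("test image")
    axes[1].imshow(image, cmap="gray")
    axes[1].imshow(attn_map, cmap="jet", alpha=0.5)          # attention overlay
    axes[1].set_title("CLS attention")
    for ax in axes:
        ax.axis("off")
    plt.show()
```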
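The second notebook compares two digits by treating the patch tokens of one image as queries and the patch tokens of another as keys. The repository's exact implementation may differ; the sketch below only assumes you can extract (num_patches, dim) patch embeddings for each image from the trained encoder, and the names `q_emb` and `k_emb` are placeholders for those embeddings.

```python
# Sketch: scaled dot-product attention between the patches of a "query" digit
# and the patches of a "key" digit. How you obtain the patch embeddings depends
# on your model code; this function only shows the attention computation.
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt

def cross_image_attention(q_emb, k_emb):
    """q_emb, k_emb: (num_patches, dim) patch embeddings from the trained ViT."""
    d = q_emb.shape[-1]
    scores = q_emb @ k_emb.T / jnp.sqrt(d)       # scaled dot products
    return jax.nn.softmax(scores, axis=-1)       # each query row sums to 1

# Example usage (49 patches per image with a 7x7 grid):
# attn = cross_image_attention(q_emb, k_emb)     # (49, 49)
# plt.imshow(attn, cmap="viridis")
# plt.xlabel("key-image patches"); plt.ylabel("query-image patches"); plt.show()
```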

References

@misc{dosovitskiy2021image,
  title         = {An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author        = {Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
  year          = {2021},
  eprint        = {2010.11929},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
