By Ze Liu*, Yutong Lin*, Yue Cao*, Han Hu*, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.
This repo is the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". It currently includes code and models for the following tasks:
Image Classification: Included in this repo. See get_started.md for a quick start.
Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.
Semantic Segmentation: See Swin Transformer for Semantic Segmentation.
Video Action Recognition: See Video Swin Transformer.
Semi-Supervised Object Detection: See Soft Teacher.
SSL: Contrastive Learning: See Transformer-SSL.
🔥 SSL: Masked Image Modeling: See SimMIM.
News
- 10/12/2021: Swin Transformer received the ICCV 2021 best paper award (Marr Prize).
- 08/09/2021: Soft Teacher will appear at ICCV 2021. The code will be released at its GitHub Repo. Soft Teacher is an end-to-end semi-supervised object detection method, setting a new record on COCO test-dev: 61.3 box AP and 53.0 mask AP.
- 07/03/2021: Added Swin MLP, an adaptation of Swin Transformer that replaces all multi-head self-attention (MHSA) blocks with MLP layers (more precisely, grouped linear layers); see the sketch after the Swin MLP model table below. The shifted window configuration also significantly improves the performance of vanilla MLP architectures.
- 06/25/2021: Video Swin Transformer is released at Video-Swin-Transformer. Video Swin Transformer achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2).
- 05/12/2021: Used as a backbone for Self-Supervised Learning: Transformer-SSL. Using Swin Transformer as the backbone for self-supervised learning lets us evaluate the transfer performance of the learned representations on downstream tasks, which was missing in previous works because they used ViT/DeiT, which has not been well tamed for downstream tasks.
- 04/12/2021: Initial commits:
  - Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
  - Supported code and models for ImageNet-1K image classification, COCO object detection, and ADE20K semantic segmentation are provided.
  - The CUDA kernel implementation for the local relation layer is provided in the LR-Net branch.
Swin Transformer (the name Swin stands for Shifted window) was initially described in an arXiv paper and capably serves as a general-purpose backbone for computer vision. It is essentially a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing cross-window connections, as sketched below.
Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.
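The following is a minimal, illustrative PyTorch sketch of the windowing mechanism described above (helper names are mine, not the repository's API): the feature map is partitioned into non-overlapping windows, and in alternating blocks a cyclic shift is applied first so that attention computed inside the shifted windows connects tokens from neighbouring windows.

```python
import torch

def window_partition(x, window_size):
    """Split a feature map (B, H, W, C) into non-overlapping windows
    of shape (num_windows*B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

def shift_then_partition(x, window_size, shift):
    """Cyclic shift (used in every other block) before window partition, so that
    attention within the shifted windows spans the borders of the regular windows."""
    if shift > 0:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    return window_partition(x, window_size)

feat = torch.randn(1, 56, 56, 96)                  # stage-1 feature map of Swin-T
regular = window_partition(feat, 7)                # (64, 7, 7, 96): an 8x8 grid of windows
shifted = shift_then_partition(feat, 7, shift=3)   # same shape, shifted window layout
print(regular.shape, shifted.shape)
```

Self-attention is then applied independently within each window, so its cost grows linearly with image size rather than quadratically.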
ImageNet-1K and ImageNet-22K Pretrained Models
name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
---|---|---|---|---|---|---|---|---|---|
Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | github/baidu/config/log |
Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | github/baidu/config/log |
Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | github/baidu/config/log |
Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | github/baidu/config |
Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | github/baidu | github/baidu/config |
Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | github/baidu | github/baidu/config |
Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | github/baidu | github/baidu/config |
Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | github/baidu | github/baidu/config |
ImageNet-1K Pretrained Swin MLP Models
name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 1K model |
---|---|---|---|---|---|---|---|---|
Mixer-B/16 | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | official repo |
ResMLP-S24 | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | timm |
ResMLP-B24 | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | timm |
Swin-T/C24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | github/baidu/config |
SwinMLP-T/C24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | github/baidu/config |
SwinMLP-T/C12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | github/baidu/config |
SwinMLP-T/C6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | github/baidu/config |
SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | github/baidu/config |
Note: the access code for Baidu is swin. C24 means each head has 24 channels.
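Swin MLP replaces window attention with a grouped linear layer over the tokens of each local window, with the groups playing the role of attention heads (hence C24: 24 channels per head). Below is a minimal, illustrative PyTorch sketch of that idea, not the repository's module; class and parameter names are mine.

```python
import torch
import torch.nn as nn

class WindowGroupedMLP(nn.Module):
    """Illustrative stand-in for the MHSA block: tokens inside a local window are
    mixed by a grouped 1x1 convolution (a grouped linear layer), with `num_heads`
    groups playing the role of attention heads."""
    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.window_tokens = window_size * window_size
        self.num_heads = num_heads
        # one linear map per head over the spatial positions of the window
        self.spatial_mlp = nn.Conv1d(num_heads * self.window_tokens,
                                     num_heads * self.window_tokens,
                                     kernel_size=1, groups=num_heads)

    def forward(self, x):
        # x: (num_windows * B, window_size * window_size, dim)
        B_, N, C = x.shape
        x = x.view(B_, N, self.num_heads, C // self.num_heads)
        x = x.transpose(1, 2).reshape(B_, self.num_heads * N, C // self.num_heads)
        x = self.spatial_mlp(x)  # mix tokens within each head group
        x = x.view(B_, self.num_heads, N, C // self.num_heads)
        return x.transpose(1, 2).reshape(B_, N, C)

tokens = torch.randn(8, 49, 96)          # 8 windows of 7x7 tokens, 96 channels
out = WindowGroupedMLP(96, 7, 4)(tokens)
print(out.shape)                          # torch.Size([8, 49, 96])
```

The same shifted-window scheme as in Swin Transformer is applied around this block, which is why the shifted configuration also helps these attention-free variants.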
COCO Object Detection (2017 val)
Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
---|---|---|---|---|---|---|---|
Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
Swin-L | HTC++* | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |
Note: * indicates multi-scale testing.
ADE20K Semantic Segmentation (val)
Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
---|---|---|---|---|---|---|---|---|
Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |
@article{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
journal={International Conference on Computer Vision (ICCV)},
year={2021}
}
@misc{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
year={2021},
eprint={2111.09883},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
- For Image Classification, please see get_started.md for detailed instructions.
- For Object Detection and Instance Segmentation, please see Swin Transformer for Object Detection.
- For Semantic Segmentation, please see Swin Transformer for Semantic Segmentation.
- For Self-Supervised Learning, please see Transformer-SSL.
- For Video Recognition, please see Video Swin Transformer.
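For a quick way to try a pretrained classifier without cloning this repo, the models are also available in pytorch-image-models (timm, cross-linked below). The snippet is an illustrative sketch; the model name and its availability are assumptions about the installed timm version, not this repository's API.

```python
# Minimal inference sketch using the timm package (pytorch-image-models).
# Assumes a timm version that ships Swin weights under this model name;
# pretrained=True downloads the ImageNet-1K checkpoint on first use.
import torch
import timm

model = timm.create_model('swin_tiny_patch4_window7_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)      # dummy batch; use real preprocessing in practice
with torch.no_grad():
    logits = model(x)                 # (1, 1000) ImageNet-1K class scores
print(logits.argmax(dim=1))
```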
In this section, we cross-link third-party repositories that use Swin and report results. You can let us know about such work by raising an issue. (Please report accuracy numbers and provide trained models in your new repository so that others can get a sense of correctness and model behavior.)
[12/21/2021] Swin Transformer for StyleGAN: StyleSwin
[12/13/2021] Swin Transformer for Face Recognition: FaceX-Zoo
[08/29/2021] Swin Transformer for Image Restoration: SwinIR
[08/12/2021] Swin Transformer for person reID: https://github.com/layumi/Person_reID_baseline_pytorch
[06/29/2021] Swin-Transformer in PaddleClas and inference based on whl package: https://github.com/PaddlePaddle/PaddleClas
[04/14/2021] Swin for RetinaNet in Detectron2: https://github.com/xiaohu2015/SwinT_detectron2.
[04/16/2021] Included in a famous model zoo: https://github.com/rwightman/pytorch-image-models.
[04/20/2021] Swin-Transformer classifier inference using TorchServe: https://github.com/kamalkraj/Swin-Transformer-Serve
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.