
Commit

update top news (PaddlePaddle#179)
GuoxiaWang authored Jan 11, 2023
1 parent 6cf251d commit 970d8c3
Showing 1 changed file, README.md, with 4 additions and 2 deletions.
@@ -27,15 +27,17 @@

## Top News 🔥

-**Update (2022-07-18):** PLSC v2.3 is released: a new, more modular and highly extensible upgrade that supports more tasks, such as [ViT](https://arxiv.org/abs/2010.11929) and [DeiT](https://arxiv.org/abs/2012.12877). The ``static`` graph mode will no longer be maintained as of this release.
+**Update (2023-01-11):** PLSC v2.4 is released: the entire repository has been refactored by task type and adapted to PaddlePaddle release 2.4. Four new models have been added: [FaceViT](https://arxiv.org/abs/2203.15565), [CaiT](https://arxiv.org/abs/2103.17239), [MoCo v3](https://arxiv.org/abs/2104.02057), and [MAE](https://arxiv.org/abs/2111.06377). Every model in the repository can now be trained from scratch to its original official accuracy, notably ViT-Large on the ImageNet21K dataset, and a method for ImageNet21K data preprocessing is also provided. For AMP training, PLSC uses FP16 O2 training by default, which speeds up training while maintaining accuracy.
+
+**Update (2022-07-18):** PLSC v2.3 is released: a new, more modular and highly extensible upgrade that supports more tasks, such as [ViT](https://arxiv.org/abs/2010.11929) and [DeiT](https://arxiv.org/abs/2012.12877). The `static` graph mode will no longer be maintained as of this release.

**Update (2022-01-11):** Supported the NHWC data format for FP16, improving throughput by 10% and reducing GPU memory by 30%. It supports 92 million classes on a single node with 8 NVIDIA V100 (32G) GPUs with high training throughput. Added best-checkpoint saving, and released 18 pretrained models and PLSC v2.2.

**Update (2021-12-11):** Released a [Zhihu Technical Article](https://zhuanlan.zhihu.com/p/443091282) and a [Bilibili Open Class](https://www.bilibili.com/video/BV1VP4y1G73X).

**Update (2021-10-10):** Added FP16 training, improving throughput and reducing GPU memory usage. It supports 60 million classes on a single node with 8 NVIDIA V100 (32G) GPUs with high training throughput.

-**Update (2021-09-10):** This repository supports both ``static`` mode and ``dynamic`` mode with paddlepaddle v2.2 and supports 48 million classes on a single node with 8 NVIDIA V100 (32G) GPUs. It adds [PartialFC](https://arxiv.org/abs/2010.05222), SparseMomentum, and [ArcFace](https://arxiv.org/abs/1801.07698), [CosFace](https://arxiv.org/abs/1801.09414) (referred to collectively as MarginLoss). Backbones include IResNet and MobileNet.
+**Update (2021-09-10):** This repository supports both `static` mode and `dynamic` mode with paddlepaddle v2.2 and supports 48 million classes on a single node with 8 NVIDIA V100 (32G) GPUs. It adds [PartialFC](https://arxiv.org/abs/2010.05222), SparseMomentum, and [ArcFace](https://arxiv.org/abs/1801.07698), [CosFace](https://arxiv.org/abs/1801.09414) (referred to collectively as MarginLoss). Backbones include IResNet and MobileNet.


## Installation
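
The 2023-01-11 entry above notes that PLSC now uses FP16 O2 AMP training by default. As a rough illustration of what O2-level mixed precision looks like in plain PaddlePaddle dynamic-graph code (the model, optimizer, and dummy batch below are placeholders for illustration, not PLSC's actual configuration system), a minimal sketch:

```python
import paddle
import paddle.nn.functional as F
from paddle.vision.models import resnet50

# Placeholder model and optimizer; PLSC wires these up through its own configs.
model = resnet50(num_classes=1000)
optimizer = paddle.optimizer.Momentum(learning_rate=0.1, parameters=model.parameters())

# O2 level: parameters are cast to FP16 up front, with FP32 master weights kept internally.
model, optimizer = paddle.amp.decorate(models=model, optimizers=optimizer, level='O2')
scaler = paddle.amp.GradScaler(init_loss_scaling=2.0 ** 16)

# Dummy batch just to make the sketch runnable.
images = paddle.randn([8, 3, 224, 224])
labels = paddle.randint(0, 1000, [8])

with paddle.amp.auto_cast(level='O2'):
    logits = model(images)
    loss = F.cross_entropy(logits, labels)

scaler.scale(loss).backward()   # scale the loss to avoid FP16 gradient underflow
scaler.step(optimizer)          # unscale gradients and apply the update
scaler.update()                 # adjust the loss scale for the next step
optimizer.clear_grad()
```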

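The 2021-09-10 entry refers to ArcFace and CosFace collectively as MarginLoss. The sketch below, with hypothetical function and parameter names rather than PLSC's actual API, shows the basic idea of such additive-margin softmax: compute cosine similarities between normalized embeddings and class centers, push an extra margin onto the ground-truth class only, then rescale and feed the result to ordinary cross-entropy.

```python
import paddle
import paddle.nn.functional as F

def margin_logits(embeddings, weight, labels, m2=0.5, m3=0.0, s=64.0):
    """Hypothetical ArcFace/CosFace-style margin sketch (not PLSC's implementation).

    m2 is the additive angular margin (ArcFace), m3 the additive cosine
    margin (CosFace), and s the feature scale.
    """
    # Cosine similarity between L2-normalized embeddings and class centers.
    cosine = paddle.matmul(F.normalize(embeddings, axis=1),
                           F.normalize(weight, axis=0))            # [N, num_classes]
    theta = paddle.acos(cosine.clip(-1.0 + 1e-7, 1.0 - 1e-7))
    one_hot = F.one_hot(labels, num_classes=cosine.shape[1])
    # Apply the margins only to the ground-truth class, then rescale by s.
    return s * (paddle.cos(theta + m2 * one_hot) - m3 * one_hot)

# Usage: the adjusted logits go through ordinary softmax cross-entropy.
# embeddings: [N, feat_dim], weight: [feat_dim, num_classes], labels: [N]
# loss = F.cross_entropy(margin_logits(embeddings, weight, labels), labels)
```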