diff --git a/README.md b/README.md
index 0cc4948d..a757559a 100644
--- a/README.md
+++ b/README.md
@@ -60,13 +60,16 @@ The repo name detrex has several interpretations:
 
 - de-t.rex : de means 'the' in Dutch. T.rex, also called Tyrannosaurus Rex, means 'king of the tyrant lizards' and connects to our research work 'DINO', which is short for Dinosaur.
 
 ## What's New
-v0.2.1 was released on 01/02/2023:
-- Support **MaskDINO** coco instance segmentation.
-- Support new **DINO** baselines: `ViTDet-DINO`, `Focal-DINO`.
-- Support `FocalNet` Backbone.
-- Add tutorial about `downloading pretrained backbones`, `verify installation`.
-- Modified learning rate scheduler usage and add tutorial on `customized scheduler`.
-- Add more readable logging information for criterion and matcher.
+v0.3.0 was released on 03/17/2023:
+- Support new algorithms: `Anchor-DETR` and `DETA`.
+- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
+- Support **MaskDINO** on the ADE20K semantic segmentation task.
+- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance. DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
+- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
+- Support `train.fast_dev_run=True` for **fast debugging**.
+- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by **30%**.
+- Support Slurm training scripts contributed by @rayleizhu; see [#213](https://github.com/IDEA-Research/detrex/issues/213) for more details.
+
 Please see [changelog.md](./changlog.md) for details and release history.
 
diff --git a/changlog.md b/changlog.md
index 6b057c92..a6e6f653 100644
--- a/changlog.md
+++ b/changlog.md
@@ -1,5 +1,16 @@
 ## Change Log
+### v0.3.0 (17/03/2023)
+- Support new algorithms: `Anchor-DETR` and `DETA`.
+- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
+- Support **MaskDINO** on the ADE20K semantic segmentation task.
+- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance. DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
+- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
+- Support `train.fast_dev_run=True` for **fast debugging**.
+- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by **30%**.
+- Support Slurm training scripts contributed by @rayleizhu; see [#213](https://github.com/IDEA-Research/detrex/issues/213) for more details.
+
+
 
 ### v0.2.1 (01/02/2023)
 #### New Algorithm
 - MaskDINO COCO instance-seg/panoptic-seg pre-release [#154](https://github.com/IDEA-Research/detrex/pull/154)
diff --git a/docs/source/changelog.md b/docs/source/changelog.md
index 0cbf3271..4e2c5c4b 100644
--- a/docs/source/changelog.md
+++ b/docs/source/changelog.md
@@ -1,5 +1,16 @@
 ## Change Log
+### v0.3.0 (17/03/2023)
+- Support new algorithms: `Anchor-DETR` and `DETA`.
+- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
+- Support **MaskDINO** on the ADE20K semantic segmentation task.
+- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance. DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
+- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
+- Support `train.fast_dev_run=True` for **fast debugging**.
+- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by **30%**.
+- Support Slurm training scripts contributed by @rayleizhu; see [#213](https://github.com/IDEA-Research/detrex/issues/213) for more details.
+
+
 
 ### v0.2.1 (01/02/2023)
 #### New Algorithm
 - MaskDINO COCO instance-seg/panoptic-seg pre-release [#154](https://github.com/IDEA-Research/detrex/pull/154)
diff --git a/setup.py b/setup.py
index 569163d4..128c9bba 100644
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@ from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
 
 
 # detrex version info
-version = "0.2.1"
+version = "0.3.0"
 package_name = "detrex"
 cwd = os.path.dirname(os.path.abspath(__file__))
 
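Several of the new switches in these release notes (`train.amp.enabled=True`, `train.model_ema.enabled=True`, `train.fast_dev_run=True`) are dotted-key overrides applied to a nested `train` config node. As a rough, hypothetical sketch only — detrex itself builds on detectron2's LazyConfig system, not this toy code — the override mechanism behaves roughly like this:

```python
# Hypothetical sketch of dotted-key config overrides such as
# `train.amp.enabled=True`, illustrated on plain nested dicts.
def apply_override(cfg: dict, override: str) -> dict:
    """Apply a single 'a.b.c=value' override to a nested dict config."""
    dotted, _, raw = override.partition("=")
    # Interpret booleans; leave anything else as a plain string.
    value = {"True": True, "False": False}.get(raw, raw)
    *path, leaf = dotted.split(".")
    node = cfg
    for key in path:
        node = node.setdefault(key, {})  # walk/create nested nodes
    node[leaf] = value
    return cfg

cfg = {"train": {"amp": {"enabled": False}, "model_ema": {"enabled": False}}}
apply_override(cfg, "train.amp.enabled=True")        # enable mixed precision
apply_override(cfg, "train.model_ema.enabled=True")  # enable EMA
print(cfg["train"]["amp"]["enabled"])  # → True
```

On the command line, the equivalent is appending such overrides after the config file, e.g. `python tools/train_net.py --config-file <config.py> train.amp.enabled=True` (flag names taken from the notes above; the exact entry-point path may vary by setup).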