add changelog (#236)
rentainhe authored Mar 17, 2023
1 parent 4ff92af commit 69b50b8
Showing 4 changed files with 33 additions and 8 deletions.
17 changes: 10 additions & 7 deletions README.md
@@ -60,13 +60,16 @@ The repo name detrex has several interpretations:
- <font color=#008000> <b> de-t.rex </b> </font>: de means 'the' in Dutch. T.rex, also called Tyrannosaurus Rex, means 'king of the tyrant lizards' and connects to our research work 'DINO', which is short for Dinosaur.

## What's New
v0.2.1 was released on 01/02/2023:
- Support **MaskDINO** coco instance segmentation.
- Support new **DINO** baselines: `ViTDet-DINO`, `Focal-DINO`.
- Support `FocalNet` Backbone.
- Add tutorials on `downloading pretrained backbones` and `verifying installation`.
- Modify the learning rate scheduler usage and add a tutorial on `customized scheduler`.
- Add more readable logging information for criterion and matcher.
v0.3.0 was released on 03/17/2023:
- Support new algorithms including `Anchor-DETR` and `DETA`.
- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
- Support **MaskDINO** on the ADE20K semantic segmentation task.
- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance; DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
- Support `train.fast_dev_run=True` for **fast debugging** (a config sketch covering these `train.*` flags follows this list).
- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by about **30%**.
- Support Slurm training scripts contributed by @rayleizhu; see issue [#213](https://github.com/IDEA-Research/detrex/issues/213) for details.
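
A minimal, hedged sketch of the new `train.*` switches, assuming detrex's LazyConfig layout and its `get_config` helper (key names follow the changelog entries above but may differ between releases):

```python
# Hypothetical sketch: enabling the v0.3.0 training options in a detrex
# LazyConfig file. Assumes the standard detrex config layout; the exact
# keys are taken from the changelog entries above, not verified per release.
from detrex.config import get_config

train = get_config("common/train.py").train

train.model_ema.enabled = True  # EMA of model weights (can boost AP)
train.amp.enabled = True        # mixed precision: roughly 20-30% less GPU memory
train.fast_dev_run = True       # tiny run for quick debugging
```

The same flags can typically also be appended as `key=value` overrides when launching `tools/train_net.py`.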


Please see [changelog.md](./changlog.md) for details and release history.

11 changes: 11 additions & 0 deletions changlog.md
@@ -1,5 +1,16 @@
## Change Log

### v0.3.0 (17/03/2023)
- Support new algorithms including `Anchor-DETR` and `DETA`.
- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
- Support **MaskDINO** on the ADE20K semantic segmentation task.
- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance; DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
- Support `train.fast_dev_run=True` for **fast debugging**.
- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by about **30%** (a generic checkpointing sketch follows this list).
- Support Slurm training scripts contributed by @rayleizhu; see issue [#213](https://github.com/IDEA-Research/detrex/issues/213) for details.
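
The encoder-decoder checkpointing above refers to activation (gradient) checkpointing. As a generic illustration of the technique, not detrex's actual implementation, transformer blocks can be wrapped with `torch.utils.checkpoint` so activations are recomputed during the backward pass instead of stored:

```python
# Generic activation-checkpointing sketch (illustrative only; detrex's
# real encoder/decoder wrapping may differ).
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint


class CheckpointedStack(nn.Module):
    """Runs a stack of layers, recomputing activations during backward."""

    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            # Store only the block inputs; intermediate activations are
            # recomputed in the backward pass, trading compute for memory.
            x = checkpoint(layer, x, use_reentrant=False)
        return x


# Usage: wrap a few encoder-style blocks and backprop through them.
blocks = [nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
          for _ in range(6)]
encoder = CheckpointedStack(blocks)
out = encoder(torch.randn(2, 100, 256))
out.sum().backward()
```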


### v0.2.1 (01/02/2023)
#### New Algorithm
- MaskDINO COCO instance-seg/panoptic-seg pre-release [#154](https://github.com/IDEA-Research/detrex/pull/154)
11 changes: 11 additions & 0 deletions docs/source/changelog.md
@@ -1,5 +1,16 @@
## Change Log

### v0.3.0 (17/03/2023)
- Support new algorithms including `Anchor-DETR` and `DETA`.
- Release more than 10 pretrained models (including converted weights): `DETR-R50 & R101`, `DETR-R50 & R101-DC5`, `DAB-DETR-R50 & R101-DC5`, `DAB-DETR-R50-3patterns`, `Conditional-DETR-R50 & R101-DC5`, `DN-DETR-R50-DC5`, `Anchor-DETR`, and the `DETA-Swin-o365-finetune` model, which achieves **`62.9 AP`** on COCO val.
- Support **MaskDINO** on the ADE20K semantic segmentation task.
- Support `EMAHook` during training by setting `train.model_ema.enabled=True`, which can improve model performance; DINO with EMA achieves **`49.4 AP`** with only 12 epochs of training.
- Support mixed precision training by setting `train.amp.enabled=True`, which **reduces GPU memory usage by 20% to 30%**.
- Support `train.fast_dev_run=True` for **fast debugging**.
- Support **encoder-decoder checkpointing** in DINO, which may reduce GPU memory usage by about **30%**.
- Support Slurm training scripts contributed by @rayleizhu; see issue [#213](https://github.com/IDEA-Research/detrex/issues/213) for details.


### v0.2.1 (01/02/2023)
#### New Algorithm
- MaskDINO COCO instance-seg/panoptic-seg pre-release [#154](https://github.com/IDEA-Research/detrex/pull/154)
2 changes: 1 addition & 1 deletion setup.py
@@ -31,7 +31,7 @@
from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension

# detrex version info
version = "0.2.1"
version = "0.3.0"
package_name = "detrex"
cwd = os.path.dirname(os.path.abspath(__file__))

