[Feature] Support features_only in TIMMBackbone #668

Merged
mzr1996 merged 4 commits into open-mmlab:dev on Jan 25, 2022

Conversation

Contributor

@shinya7y shinya7y commented Jan 23, 2022

Motivation

See the discussion in open-mmlab/mmdetection#7020. In MMDet, MMSeg, and other downstream repos, we wish to use the backbones supported by TIMM directly through MMCls. Therefore, TIMMBackbone needs to support the features_only option so that downstream tasks can obtain multi-scale feature maps.
This PR will close #665.
This PR (especially test_timm_backbone_features_only) is based on open-mmlab/mmsegmentation#998.

Modification

  • update TIMMBackbone to enable extracting a feature pyramid (multi-scale feature maps) with features_only=True (see the sketch after this list)
  • update and enhance docstrings and log messages
  • add unit tests for features_only=True
  • fix a minor bug in test_timm_backbone
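
For context, here is a minimal sketch of the underlying idea, assuming only the public timm API (the defaults below are illustrative, not the exact code added in this PR): timm.create_model already supports features_only, so the wrapper mainly needs to forward that flag and return the multi-scale outputs.

# Rough sketch of the features_only behaviour using only the public timm API;
# the actual TIMMBackbone implementation in this PR may differ in detail.
import timm
import torch

model = timm.create_model(
    'tv_resnet50',
    pretrained=False,        # set True to load the torchvision weights
    features_only=True,      # return multi-scale feature maps instead of logits
    out_indices=(1, 2, 3, 4))

feats = model(torch.randn(1, 3, 224, 224))
# four feature maps at strides 4, 8, 16, 32
print([tuple(f.shape) for f in feats])
# [(1, 256, 56, 56), (1, 512, 28, 28), (1, 1024, 14, 14), (1, 2048, 7, 7)]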

Use cases (Optional)

MMDetection

Here is an example config retinanet_timm_tv_resnet50_fpn_fp16_4x4_1x_coco.py.
The results at epoch 1 (bbox_mAP_copypaste: 0.162 0.285 0.164 0.084 0.201 0.201) are similar to those of retinanet_r50_fpn_fp16_1x_coco.py (bbox_mAP_copypaste: 0.164 0.284 0.168 0.082 0.189 0.203).

_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]

# import to trigger register_module in mmcls
custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)
model = dict(
    backbone=dict(
        _delete_=True,
        type='mmcls.TIMMBackbone',
        model_name='tv_resnet50',  # ResNet-50 with torchvision weights
        features_only=True,
        pretrained=True,
        out_indices=(1, 2, 3, 4)))

optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)

# disable NumClassCheckHook
custom_hooks = []

data = dict(samples_per_gpu=4)

fp16 = dict(loss_scale=dict(init_scale=512))
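
If I read the base config correctly, the FPN neck in retinanet_r50_fpn.py expects in_channels=[256, 512, 1024, 2048], which is why out_indices=(1, 2, 3, 4) can replace the original backbone without touching the neck. A quick standalone check (a sketch using only the public timm API):

# Sketch: the channel dims and strides reported by timm should match the FPN
# in_channels of the base retinanet_r50_fpn.py config.
import timm

backbone = timm.create_model(
    'tv_resnet50', features_only=True, pretrained=False,
    out_indices=(1, 2, 3, 4))
print(backbone.feature_info.channels())   # [256, 512, 1024, 2048]
print(backbone.feature_info.reduction())  # [4, 8, 16, 32]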

MMSegmentation

Here is an example config upernet_timm_resnet50d_512x512_20k_voc12aug.py.
It reaches mIoU 69.75 at 2000 iterations. Umm... too high? I'm not familiar with MMSeg, so the config may be wrong.
In any case, training and evaluation work.

_base_ = [
    '../_base_/models/upernet_r50.py',
    '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_20k.py'
]

# import to trigger register_module in mmcls
custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)
model = dict(
    pretrained=None,
    backbone=dict(
        _delete_=True,
        type='mmcls.TIMMBackbone',
        model_name='resnet50d',  # instead of ResNet-50-C
        features_only=True,
        pretrained=True,
        out_indices=(1, 2, 3, 4),
        norm_layer='SyncBN'),
    decode_head=dict(num_classes=21),
    auxiliary_head=dict(num_classes=21))
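
As in the MMDet example, the config builds the backbone through the MMCls registry (custom_imports triggers register_module). Below is a rough sketch of checking the backbone standalone; the import path and the exact return type are assumptions that may vary across MMCls versions, and SyncBN is omitted because it needs a distributed process group:

# Sketch: build TIMMBackbone via the MMCls registry, as the configs above do
# with type='mmcls.TIMMBackbone', and inspect the multi-scale outputs.
import torch
from mmcls.models import build_backbone

backbone = build_backbone(
    dict(
        type='TIMMBackbone',
        model_name='resnet50d',
        features_only=True,
        pretrained=False,
        out_indices=(1, 2, 3, 4)))
backbone.eval()

feats = backbone(torch.randn(1, 3, 512, 512))
print([tuple(f.shape) for f in feats])  # four maps at strides 4, 8, 16, 32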

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.


CLAassistant commented Jan 23, 2022

CLA assistant check
All committers have signed the CLA.


codecov bot commented Jan 24, 2022

Codecov Report

Merging #668 (ba64f5a) into dev (e694269) will increase coverage by 0.01%.
The diff coverage is 84.84%.


@@            Coverage Diff             @@
##              dev     #668      +/-   ##
==========================================
+ Coverage   82.12%   82.14%   +0.01%     
==========================================
  Files         119      119              
  Lines        6865     6893      +28     
  Branches     1184     1192       +8     
==========================================
+ Hits         5638     5662      +24     
- Misses       1063     1066       +3     
- Partials      164      165       +1     
Flag        Coverage Δ
unittests   82.14% <84.84%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                              Coverage Δ
mmcls/models/backbones/timm_backbone.py    82.97% <84.84%> (+4.03%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@mzr1996 mzr1996 (Member) left a comment

LGTM

@mzr1996 mzr1996 merged commit 16864c7 into open-mmlab:dev Jan 25, 2022
Ezra-Yu pushed a commit to Ezra-Yu/mmclassification that referenced this pull request Feb 14, 2022
* Support features_only in TIMMBackbone

based on open-mmlab/mmsegmentation#998

* update test for mmdet

* fix unit test for build_without_timm

* Update docstring

Co-authored-by: mzr1996 <[email protected]>
mzr1996 added a commit to mzr1996/mmpretrain that referenced this pull request Nov 24, 2022
* Support features_only in TIMMBackbone

based on open-mmlab/mmsegmentation#998

* update test for mmdet

* fix unit test for build_without_timm

* Update docstring

Co-authored-by: mzr1996 <[email protected]>