[MMSIG-92] Integrate WFLW deeppose model to dev-1.x branch #2265

Merged: 6 commits into open-mmlab:dev-1.x, Apr 21, 2023

Conversation

xin-li-67 (Contributor):

Motivation

MMSIG-92

Modification

configs/face_2d_keypoint/topdown_regression/wflw/td-reg_res50_8x64e-210e_wflw-256x256.py
configs/face_2d_keypoint/topdown_regression/wflw/README.md

BC-breaking (Optional)

Use cases (Optional)

Checklist

Before PR:

  • I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
  • Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
  • Bug fixes are covered by unit tests, the case that causes the bug should be added in the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • CLA has been signed and all committers have signed the CLA in this PR.

codecov bot commented Apr 20, 2023

Codecov Report

Patch coverage has no change; project coverage changes by -0.01% ⚠️

Comparison: base (896e9d5) 82.25% vs. head (9ef24e7) 82.24%.

❗ The current head 9ef24e7 differs from the pull request's most recent head a819ed4. Consider uploading reports for commit a819ed4 to get more accurate results.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x    #2265      +/-   ##
===========================================
- Coverage    82.25%   82.24%   -0.01%     
===========================================
  Files          228      230       +2     
  Lines        13387    13552     +165     
  Branches      2268     2301      +33     
===========================================
+ Hits         11011    11146     +135     
- Misses        1862     1876      +14     
- Partials       514      530      +16     
Flag       Coverage Δ
unittests  82.24% <ø> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

21 files have indirect coverage changes.


xin-li-67 (Contributor, Author):

Result of running test.py with the downloaded 0.x model and the new config file:

04/21 09:52:35 - mmengine - INFO - Config:
default_scope = 'mmpose'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(
        type='CheckpointHook', interval=10, save_best='NME', rule='greater'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='PoseVisualizationHook', enable=False))
custom_hooks = [dict(type='SyncBuffersHook')]
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='PoseLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    name='visualizer')
log_processor = dict(
    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
log_level = 'INFO'
load_from = 'work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth'
resume = False
backend_args = dict(backend='local')
train_cfg = dict(by_epoch=True, max_epochs=210, val_interval=1)
val_cfg = dict()
test_cfg = dict()
dataset_info = dict(
    dataset_name='wflw',
    paper_info=dict(
        author=
        'Wu, Wayne and Qian, Chen and Yang, Shuo and Wang, Quan and Cai, Yici and Zhou, Qiang',
        title='Look at boundary: A boundary-aware face alignment algorithm',
        container=
        'Proceedings of the IEEE conference on computer vision and pattern recognition',
        year='2018',
        homepage='https://wywu.github.io/projects/LAB/WFLW.html'),
    keypoint_info=dict({
        0:
        dict(name='kpt-0', id=0, color=[255, 0, 0], type='', swap='kpt-32'),
        1:
        dict(name='kpt-1', id=1, color=[255, 0, 0], type='', swap='kpt-31'),
        2:
        dict(name='kpt-2', id=2, color=[255, 0, 0], type='', swap='kpt-30'),
        3:
        dict(name='kpt-3', id=3, color=[255, 0, 0], type='', swap='kpt-29'),
        4:
        dict(name='kpt-4', id=4, color=[255, 0, 0], type='', swap='kpt-28'),
        5:
        dict(name='kpt-5', id=5, color=[255, 0, 0], type='', swap='kpt-27'),
        6:
        dict(name='kpt-6', id=6, color=[255, 0, 0], type='', swap='kpt-26'),
        7:
        dict(name='kpt-7', id=7, color=[255, 0, 0], type='', swap='kpt-25'),
        8:
        dict(name='kpt-8', id=8, color=[255, 0, 0], type='', swap='kpt-24'),
        9:
        dict(name='kpt-9', id=9, color=[255, 0, 0], type='', swap='kpt-23'),
        10:
        dict(name='kpt-10', id=10, color=[255, 0, 0], type='', swap='kpt-22'),
        11:
        dict(name='kpt-11', id=11, color=[255, 0, 0], type='', swap='kpt-21'),
        12:
        dict(name='kpt-12', id=12, color=[255, 0, 0], type='', swap='kpt-20'),
        13:
        dict(name='kpt-13', id=13, color=[255, 0, 0], type='', swap='kpt-19'),
        14:
        dict(name='kpt-14', id=14, color=[255, 0, 0], type='', swap='kpt-18'),
        15:
        dict(name='kpt-15', id=15, color=[255, 0, 0], type='', swap='kpt-17'),
        16:
        dict(name='kpt-16', id=16, color=[255, 0, 0], type='', swap=''),
        17:
        dict(name='kpt-17', id=17, color=[255, 0, 0], type='', swap='kpt-15'),
        18:
        dict(name='kpt-18', id=18, color=[255, 0, 0], type='', swap='kpt-14'),
        19:
        dict(name='kpt-19', id=19, color=[255, 0, 0], type='', swap='kpt-13'),
        20:
        dict(name='kpt-20', id=20, color=[255, 0, 0], type='', swap='kpt-12'),
        21:
        dict(name='kpt-21', id=21, color=[255, 0, 0], type='', swap='kpt-11'),
        22:
        dict(name='kpt-22', id=22, color=[255, 0, 0], type='', swap='kpt-10'),
        23:
        dict(name='kpt-23', id=23, color=[255, 0, 0], type='', swap='kpt-9'),
        24:
        dict(name='kpt-24', id=24, color=[255, 0, 0], type='', swap='kpt-8'),
        25:
        dict(name='kpt-25', id=25, color=[255, 0, 0], type='', swap='kpt-7'),
        26:
        dict(name='kpt-26', id=26, color=[255, 0, 0], type='', swap='kpt-6'),
        27:
        dict(name='kpt-27', id=27, color=[255, 0, 0], type='', swap='kpt-5'),
        28:
        dict(name='kpt-28', id=28, color=[255, 0, 0], type='', swap='kpt-4'),
        29:
        dict(name='kpt-29', id=29, color=[255, 0, 0], type='', swap='kpt-3'),
        30:
        dict(name='kpt-30', id=30, color=[255, 0, 0], type='', swap='kpt-2'),
        31:
        dict(name='kpt-31', id=31, color=[255, 0, 0], type='', swap='kpt-1'),
        32:
        dict(name='kpt-32', id=32, color=[255, 0, 0], type='', swap='kpt-0'),
        33:
        dict(name='kpt-33', id=33, color=[255, 0, 0], type='', swap='kpt-46'),
        34:
        dict(name='kpt-34', id=34, color=[255, 0, 0], type='', swap='kpt-45'),
        35:
        dict(name='kpt-35', id=35, color=[255, 0, 0], type='', swap='kpt-44'),
        36:
        dict(name='kpt-36', id=36, color=[255, 0, 0], type='', swap='kpt-43'),
        37:
        dict(name='kpt-37', id=37, color=[255, 0, 0], type='', swap='kpt-42'),
        38:
        dict(name='kpt-38', id=38, color=[255, 0, 0], type='', swap='kpt-50'),
        39:
        dict(name='kpt-39', id=39, color=[255, 0, 0], type='', swap='kpt-49'),
        40:
        dict(name='kpt-40', id=40, color=[255, 0, 0], type='', swap='kpt-48'),
        41:
        dict(name='kpt-41', id=41, color=[255, 0, 0], type='', swap='kpt-47'),
        42:
        dict(name='kpt-42', id=42, color=[255, 0, 0], type='', swap='kpt-37'),
        43:
        dict(name='kpt-43', id=43, color=[255, 0, 0], type='', swap='kpt-36'),
        44:
        dict(name='kpt-44', id=44, color=[255, 0, 0], type='', swap='kpt-35'),
        45:
        dict(name='kpt-45', id=45, color=[255, 0, 0], type='', swap='kpt-34'),
        46:
        dict(name='kpt-46', id=46, color=[255, 0, 0], type='', swap='kpt-33'),
        47:
        dict(name='kpt-47', id=47, color=[255, 0, 0], type='', swap='kpt-41'),
        48:
        dict(name='kpt-48', id=48, color=[255, 0, 0], type='', swap='kpt-40'),
        49:
        dict(name='kpt-49', id=49, color=[255, 0, 0], type='', swap='kpt-39'),
        50:
        dict(name='kpt-50', id=50, color=[255, 0, 0], type='', swap='kpt-38'),
        51:
        dict(name='kpt-51', id=51, color=[255, 0, 0], type='', swap=''),
        52:
        dict(name='kpt-52', id=52, color=[255, 0, 0], type='', swap=''),
        53:
        dict(name='kpt-53', id=53, color=[255, 0, 0], type='', swap=''),
        54:
        dict(name='kpt-54', id=54, color=[255, 0, 0], type='', swap=''),
        55:
        dict(name='kpt-55', id=55, color=[255, 0, 0], type='', swap='kpt-59'),
        56:
        dict(name='kpt-56', id=56, color=[255, 0, 0], type='', swap='kpt-58'),
        57:
        dict(name='kpt-57', id=57, color=[255, 0, 0], type='', swap=''),
        58:
        dict(name='kpt-58', id=58, color=[255, 0, 0], type='', swap='kpt-56'),
        59:
        dict(name='kpt-59', id=59, color=[255, 0, 0], type='', swap='kpt-55'),
        60:
        dict(name='kpt-60', id=60, color=[255, 0, 0], type='', swap='kpt-72'),
        61:
        dict(name='kpt-61', id=61, color=[255, 0, 0], type='', swap='kpt-71'),
        62:
        dict(name='kpt-62', id=62, color=[255, 0, 0], type='', swap='kpt-70'),
        63:
        dict(name='kpt-63', id=63, color=[255, 0, 0], type='', swap='kpt-69'),
        64:
        dict(name='kpt-64', id=64, color=[255, 0, 0], type='', swap='kpt-68'),
        65:
        dict(name='kpt-65', id=65, color=[255, 0, 0], type='', swap='kpt-75'),
        66:
        dict(name='kpt-66', id=66, color=[255, 0, 0], type='', swap='kpt-74'),
        67:
        dict(name='kpt-67', id=67, color=[255, 0, 0], type='', swap='kpt-73'),
        68:
        dict(name='kpt-68', id=68, color=[255, 0, 0], type='', swap='kpt-64'),
        69:
        dict(name='kpt-69', id=69, color=[255, 0, 0], type='', swap='kpt-63'),
        70:
        dict(name='kpt-70', id=70, color=[255, 0, 0], type='', swap='kpt-62'),
        71:
        dict(name='kpt-71', id=71, color=[255, 0, 0], type='', swap='kpt-61'),
        72:
        dict(name='kpt-72', id=72, color=[255, 0, 0], type='', swap='kpt-60'),
        73:
        dict(name='kpt-73', id=73, color=[255, 0, 0], type='', swap='kpt-67'),
        74:
        dict(name='kpt-74', id=74, color=[255, 0, 0], type='', swap='kpt-66'),
        75:
        dict(name='kpt-75', id=75, color=[255, 0, 0], type='', swap='kpt-65'),
        76:
        dict(name='kpt-76', id=76, color=[255, 0, 0], type='', swap='kpt-82'),
        77:
        dict(name='kpt-77', id=77, color=[255, 0, 0], type='', swap='kpt-81'),
        78:
        dict(name='kpt-78', id=78, color=[255, 0, 0], type='', swap='kpt-80'),
        79:
        dict(name='kpt-79', id=79, color=[255, 0, 0], type='', swap=''),
        80:
        dict(name='kpt-80', id=80, color=[255, 0, 0], type='', swap='kpt-78'),
        81:
        dict(name='kpt-81', id=81, color=[255, 0, 0], type='', swap='kpt-77'),
        82:
        dict(name='kpt-82', id=82, color=[255, 0, 0], type='', swap='kpt-76'),
        83:
        dict(name='kpt-83', id=83, color=[255, 0, 0], type='', swap='kpt-87'),
        84:
        dict(name='kpt-84', id=84, color=[255, 0, 0], type='', swap='kpt-86'),
        85:
        dict(name='kpt-85', id=85, color=[255, 0, 0], type='', swap=''),
        86:
        dict(name='kpt-86', id=86, color=[255, 0, 0], type='', swap='kpt-84'),
        87:
        dict(name='kpt-87', id=87, color=[255, 0, 0], type='', swap='kpt-83'),
        88:
        dict(name='kpt-88', id=88, color=[255, 0, 0], type='', swap='kpt-92'),
        89:
        dict(name='kpt-89', id=89, color=[255, 0, 0], type='', swap='kpt-91'),
        90:
        dict(name='kpt-90', id=90, color=[255, 0, 0], type='', swap=''),
        91:
        dict(name='kpt-91', id=91, color=[255, 0, 0], type='', swap='kpt-89'),
        92:
        dict(name='kpt-92', id=92, color=[255, 0, 0], type='', swap='kpt-88'),
        93:
        dict(name='kpt-93', id=93, color=[255, 0, 0], type='', swap='kpt-95'),
        94:
        dict(name='kpt-94', id=94, color=[255, 0, 0], type='', swap=''),
        95:
        dict(name='kpt-95', id=95, color=[255, 0, 0], type='', swap='kpt-93'),
        96:
        dict(name='kpt-96', id=96, color=[255, 0, 0], type='', swap='kpt-97'),
        97:
        dict(name='kpt-97', id=97, color=[255, 0, 0], type='', swap='kpt-96')
    }),
    skeleton_info=dict(),
    joint_weights=[
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
    ],
    sigmas=[])
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
    dict(
        type='MultiStepLR',
        begin=0,
        end=210,
        milestones=[170, 200],
        gamma=0.1,
        by_epoch=True)
]
auto_scale_lr = dict(base_batch_size=512)
codec = dict(type='RegressionLabel', input_size=(256, 256))
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='RegressionHead',
        in_channels=2048,
        num_joints=98,
        loss=dict(type='SmoothL1Loss', use_target_weight=True),
        decoder=dict(type='RegressionLabel', input_size=(256, 256))),
    train_cfg=dict(),
    test_cfg=dict(flip_test=True, shift_coords=True))
dataset_type = 'WFLWDataset'
data_mode = 'topdown'
data_root = 'data/wflw/'
train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(type='RandomBBoxTransform', scale_factor=[0.25], rotate_factor=80),
    dict(type='TopdownAffine', input_size=(256, 256)),
    dict(
        type='GenerateTarget',
        encoder=dict(type='RegressionLabel', input_size=(256, 256))),
    dict(type='PackPoseInputs')
]
val_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=(256, 256)),
    dict(type='PackPoseInputs')
]
train_dataloader = dict(
    batch_size=64,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='WFLWDataset',
        data_root='data/wflw/',
        data_mode='topdown',
        ann_file='annotations/face_landmarks_wflw_train.json',
        data_prefix=dict(img='images/'),
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='RandomFlip', direction='horizontal'),
            dict(
                type='RandomBBoxTransform',
                scale_factor=[0.25],
                rotate_factor=80),
            dict(type='TopdownAffine', input_size=(256, 256)),
            dict(
                type='GenerateTarget',
                encoder=dict(type='RegressionLabel', input_size=(256, 256))),
            dict(type='PackPoseInputs')
        ]))
val_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type='WFLWDataset',
        data_root='data/wflw/',
        data_mode='topdown',
        ann_file='annotations/face_landmarks_wflw_test.json',
        data_prefix=dict(img='images/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(256, 256)),
            dict(type='PackPoseInputs')
        ]))
test_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type='WFLWDataset',
        data_root='data/wflw/',
        data_mode='topdown',
        ann_file='annotations/face_landmarks_wflw_test.json',
        data_prefix=dict(img='images/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(256, 256)),
            dict(type='PackPoseInputs')
        ]))
val_evaluator = dict(type='NME', norm_mode='keypoint_distance')
test_evaluator = dict(type='NME', norm_mode='keypoint_distance')
launcher = 'none'
work_dir = 'work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/old_version/'

04/21 09:52:37 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
04/21 09:52:37 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
loading annotations into memory...
Done (t=0.07s)
creating index...
index created!
04/21 09:52:39 - mmengine - WARNING - The prefix is not set in metric class NME.
Loads checkpoint by local backend from path: work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth
04/21 09:52:41 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth
/home/xinli/Projects/mmpose/mmpose/datasets/transforms/common_transforms.py:70: UserWarning: Use the existing "bbox_center" and "bbox_scale". The padding will still be applied.
  warnings.warn('Use the existing "bbox_center" and "bbox_scale"'
/home/xinli/Projects/mmpose/mmpose/datasets/transforms/common_transforms.py:70: UserWarning: Use the existing "bbox_center" and "bbox_scale". The padding will still be applied.
  warnings.warn('Use the existing "bbox_center" and "bbox_scale"'
04/21 09:52:45 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095799  data_time: 0.040775  memory: 564  
04/21 09:52:48 - mmengine - INFO - Evaluating NME...
04/21 09:52:48 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.048875  data_time: 0.041100  time: 0.078425
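For reference, a run like the one above can be reproduced with the standard MMPose tools/test.py entry point; a minimal sketch, assuming the config and checkpoint paths shown in the log and that test.py accepts the usual positional config/checkpoint arguments plus --work-dir:

# sketch: evaluate the downloaded 0.x checkpoint with the new 1.x config
python tools/test.py \
       configs/face_2d_keypoint/topdown_regression/wflw/td-reg_res50_8x64e-210e_wflw-256x256.py \
       work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth \
       --work-dir work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/old_version/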

dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(type='RandomFlip', direction='horizontal'),
dict(type='RandomBBoxTransform', scale_factor=[0.25], rotate_factor=80),
Collaborator (review comment on the train_pipeline lines above):

scale_factor should be a Tuple[float, float]

xin-li-67 (Contributor, Author) replied:

scale_factor should be a Tuple[float, float]

Got it. Also, I noticed that the rotate_factor here was incorrect. I changed this part to:

# 1.x
dict(
        type='RandomBBoxTransform',
        scale_factor=[0.75, 1.25],
        rotate_factor=60),

# 0.x
dict(
        type='TopDownGetRandomScaleRotation', rot_factor=30,
        scale_factor=0.25),
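For context, the mapping between the 0.x and 1.x augmentation arguments can be written out explicitly. A minimal sketch with a hypothetical helper (not part of MMPose), assuming the 1.x scale range is [1 - sf, 1 + sf] and the 1.x rotate_factor is twice the 0.x rot_factor, consistent with the values above:

# hypothetical helper for illustration only; not part of MMPose
def convert_scale_rotate_0x_to_1x(scale_factor_0x, rot_factor_0x):
    """Map 0.x TopDownGetRandomScaleRotation args to 1.x RandomBBoxTransform args."""
    scale_range = [1.0 - scale_factor_0x, 1.0 + scale_factor_0x]  # 0.25 -> [0.75, 1.25]
    rotate_factor = 2 * rot_factor_0x                             # 30 -> 60
    return dict(
        type='RandomBBoxTransform',
        scale_factor=scale_range,
        rotate_factor=rotate_factor)

print(convert_scale_rotate_0x_to_1x(0.25, 30))
# {'type': 'RandomBBoxTransform', 'scale_factor': [0.75, 1.25], 'rotate_factor': 60}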

@@ -0,0 +1,121 @@
_base_ = [
'../../../_base_/default_runtime.py', '../../../_base_/datasets/wflw.py'
Collaborator (review comment on the _base_ list of the new config):

Suggested change:
-    '../../../_base_/default_runtime.py', '../../../_base_/datasets/wflw.py'
+    '../../../_base_/default_runtime.py'

xin-li-67 (Contributor, Author) replied:

Yes, this is redundant. I removed it in the latest commit.

xin-li-67 (Contributor, Author):

The latest test result:

2023/04/21 13:08:36 - mmengine - WARNING - The prefix is not set in metric class NME.
2023/04/21 13:08:38 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/best_NME_epoch_1.pth
2023/04/21 13:08:42 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095423  data_time: 0.040140  memory: 564  
2023/04/21 13:08:45 - mmengine - INFO - Evaluating NME...
2023/04/21 13:08:45 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.453260  data_time: 0.041329  time: 0.078028

xin-li-67 (Contributor, Author):

Training command:

# single RTX4090
python tools/train.py \
       configs/face_2d_keypoint/topdown_regression/wflw/td-reg_res50_8x64e-210e_wflw-256x256.py \
       --auto-scale-lr
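For reference, the 8-GPU setting implied by the config name (8x64e) could be launched with the standard distributed wrapper; a sketch, assuming the stock tools/dist_train.sh forwards extra arguments to tools/train.py:

# sketch: 8-GPU training, matching the 8x64 batch setting encoded in the config name
bash tools/dist_train.sh \
     configs/face_2d_keypoint/topdown_regression/wflw/td-reg_res50_8x64e-210e_wflw-256x256.py \
     8 \
     --auto-scale-lr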

The first seven epochs of the latest training log:

2023/04/21 11:11:22 - mmengine - INFO - LR is set based on batch size of 512 and the current batch size is 64. Scaling the original LR by 0.125.
2023/04/21 11:11:22 - mmengine - WARNING - The prefix is not set in metric class NME.
2023/04/21 11:11:24 - mmengine - INFO - load model from: torchvision://resnet50
2023/04/21 11:11:24 - mmengine - INFO - Loads checkpoint by torchvision backend from path: torchvision://resnet50
2023/04/21 11:11:24 - mmengine - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2023/04/21 11:11:24 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2023/04/21 11:11:24 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2023/04/21 11:11:24 - mmengine - INFO - Checkpoints will be saved to /home/xinli/Projects/mmpose/work_dirs/td-reg_res50_8x64e-210e_wflw-256x256.
2023/04/21 11:11:36 - mmengine - INFO - Epoch(train)   [1][ 50/118]  lr: 6.193637e-06  eta: 1:40:06  time: 0.242895  data_time: 0.092960  memory: 7279  loss: 0.147453  loss_kpt: 0.147453  acc_pose: 0.004794
2023/04/21 11:11:47 - mmengine - INFO - Epoch(train)   [1][100/118]  lr: 1.244990e-05  eta: 1:35:49  time: 0.223018  data_time: 0.088869  memory: 7279  loss: 0.067057  loss_kpt: 0.067057  acc_pose: 0.023216
2023/04/21 11:11:51 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:11:55 - mmengine - INFO - Epoch(val)   [1][50/79]    eta: 0:00:02  time: 0.079252  data_time: 0.041528  memory: 7279  
2023/04/21 11:11:57 - mmengine - INFO - Evaluating NME...
2023/04/21 11:11:57 - mmengine - INFO - Epoch(val) [1][79/79]    NME: 0.453260  data_time: 0.038558  time: 0.075132
2023/04/21 11:11:59 - mmengine - INFO - The best checkpoint with 0.4533 NME at 1 epoch is saved to best_NME_epoch_1.pth.
2023/04/21 11:12:10 - mmengine - INFO - Epoch(train)   [2][ 50/118]  lr: 2.095842e-05  eta: 1:33:50  time: 0.225209  data_time: 0.099267  memory: 7279  loss: 0.012245  loss_kpt: 0.012245  acc_pose: 0.080672
2023/04/21 11:12:21 - mmengine - INFO - Epoch(train)   [2][100/118]  lr: 2.721468e-05  eta: 1:32:44  time: 0.219198  data_time: 0.091030  memory: 7279  loss: 0.006557  loss_kpt: 0.006557  acc_pose: 0.183330
2023/04/21 11:12:25 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:12:28 - mmengine - INFO - Epoch(val)   [2][50/79]    eta: 0:00:02  time: 0.076775  data_time: 0.040070  memory: 7279  
2023/04/21 11:12:31 - mmengine - INFO - Evaluating NME...
2023/04/21 11:12:31 - mmengine - INFO - Epoch(val) [2][79/79]    NME: 0.183142  data_time: 0.038449  time: 0.075051
2023/04/21 11:12:42 - mmengine - INFO - Epoch(train)   [3][ 50/118]  lr: 3.572320e-05  eta: 1:31:46  time: 0.224758  data_time: 0.098777  memory: 7279  loss: 0.003118  loss_kpt: 0.003118  acc_pose: 0.292696
2023/04/21 11:12:53 - mmengine - INFO - Epoch(train)   [3][100/118]  lr: 4.197946e-05  eta: 1:31:10  time: 0.217985  data_time: 0.091819  memory: 7279  loss: 0.002136  loss_kpt: 0.002136  acc_pose: 0.369440
2023/04/21 11:12:56 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:13:00 - mmengine - INFO - Epoch(val)   [3][50/79]    eta: 0:00:02  time: 0.077606  data_time: 0.040596  memory: 7279  
2023/04/21 11:13:02 - mmengine - INFO - Evaluating NME...
2023/04/21 11:13:02 - mmengine - INFO - Epoch(val) [3][79/79]    NME: 0.130631  data_time: 0.037954  time: 0.074449
2023/04/21 11:13:14 - mmengine - INFO - Epoch(train)   [4][ 50/118]  lr: 5.048798e-05  eta: 1:30:40  time: 0.224528  data_time: 0.098099  memory: 7279  loss: 0.001604  loss_kpt: 0.001604  acc_pose: 0.491262
2023/04/21 11:13:25 - mmengine - INFO - Epoch(train)   [4][100/118]  lr: 5.674424e-05  eta: 1:30:28  time: 0.223061  data_time: 0.095466  memory: 7279  loss: 0.001346  loss_kpt: 0.001346  acc_pose: 0.477106
2023/04/21 11:13:29 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:13:33 - mmengine - INFO - Epoch(val)   [4][50/79]    eta: 0:00:02  time: 0.077998  data_time: 0.040809  memory: 7279  
2023/04/21 11:13:35 - mmengine - INFO - Evaluating NME...
2023/04/21 11:13:35 - mmengine - INFO - Epoch(val) [4][79/79]    NME: 0.110295  data_time: 0.038356  time: 0.074723
2023/04/21 11:13:46 - mmengine - INFO - Epoch(train)   [5][ 50/118]  lr: 6.250000e-05  eta: 1:30:12  time: 0.227513  data_time: 0.101975  memory: 7279  loss: 0.001146  loss_kpt: 0.001146  acc_pose: 0.522257
2023/04/21 11:13:57 - mmengine - INFO - Epoch(train)   [5][100/118]  lr: 6.250000e-05  eta: 1:29:53  time: 0.219175  data_time: 0.093254  memory: 7279  loss: 0.001119  loss_kpt: 0.001119  acc_pose: 0.640810
2023/04/21 11:14:01 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:14:05 - mmengine - INFO - Epoch(val)   [5][50/79]    eta: 0:00:02  time: 0.077880  data_time: 0.040756  memory: 7279  
2023/04/21 11:14:07 - mmengine - INFO - Evaluating NME...
2023/04/21 11:14:07 - mmengine - INFO - Epoch(val) [5][79/79]    NME: 0.098834  data_time: 0.038761  time: 0.075731
2023/04/21 11:14:18 - mmengine - INFO - Epoch(train)   [6][ 50/118]  lr: 6.250000e-05  eta: 1:29:34  time: 0.226910  data_time: 0.101114  memory: 7279  loss: 0.000997  loss_kpt: 0.000997  acc_pose: 0.682640
2023/04/21 11:14:29 - mmengine - INFO - Epoch(train)   [6][100/118]  lr: 6.250000e-05  eta: 1:29:15  time: 0.218270  data_time: 0.090192  memory: 7279  loss: 0.000939  loss_kpt: 0.000939  acc_pose: 0.639634
2023/04/21 11:14:33 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:14:37 - mmengine - INFO - Epoch(val)   [6][50/79]    eta: 0:00:02  time: 0.078331  data_time: 0.040750  memory: 7279  
2023/04/21 11:14:39 - mmengine - INFO - Evaluating NME...
2023/04/21 11:14:39 - mmengine - INFO - Epoch(val) [6][79/79]    NME: 0.091563  data_time: 0.038537  time: 0.075114
2023/04/21 11:14:50 - mmengine - INFO - Epoch(train)   [7][ 50/118]  lr: 6.250000e-05  eta: 1:29:06  time: 0.230306  data_time: 0.099948  memory: 7279  loss: 0.000848  loss_kpt: 0.000848  acc_pose: 0.591614
2023/04/21 11:15:02 - mmengine - INFO - Epoch(train)   [7][100/118]  lr: 6.250000e-05  eta: 1:28:57  time: 0.224550  data_time: 0.094634  memory: 7279  loss: 0.000743  loss_kpt: 0.000743  acc_pose: 0.690668
2023/04/21 11:15:05 - mmengine - INFO - Exp name: td-reg_res50_8x64e-210e_wflw-256x256_20230421_111115
2023/04/21 11:15:09 - mmengine - INFO - Epoch(val)   [7][50/79]    eta: 0:00:02  time: 0.081883  data_time: 0.042011  memory: 7279  
2023/04/21 11:15:12 - mmengine - INFO - Evaluating NME...
2023/04/21 11:15:12 - mmengine - INFO - Epoch(val) [7][79/79]    NME: 0.086411  data_time: 0.038975  time: 0.078306

Tau-J (Collaborator) commented Apr 21, 2023:

The latest test result:

2023/04/21 13:08:36 - mmengine - WARNING - The prefix is not set in metric class NME.
2023/04/21 13:08:38 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/best_NME_epoch_1.pth
2023/04/21 13:08:42 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095423  data_time: 0.040140  memory: 564  
2023/04/21 13:08:45 - mmengine - INFO - Evaluating NME...
2023/04/21 13:08:45 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.453260  data_time: 0.041329  time: 0.078028

This doesn't seem to be a correct test result, compared with your previous test.

Tau-J (Collaborator) commented Apr 21, 2023:

Could you please also complete README.md, resnet_wflw.md, and resnet_wflw.yml, referring to the existing ones? It would be much appreciated!

xin-li-67 (Contributor, Author) replied:

The latest test result:

2023/04/21 13:08:36 - mmengine - WARNING - The prefix is not set in metric class NME.
2023/04/21 13:08:38 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/best_NME_epoch_1.pth
2023/04/21 13:08:42 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095423  data_time: 0.040140  memory: 564  
2023/04/21 13:08:45 - mmengine - INFO - Evaluating NME...
2023/04/21 13:08:45 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.453260  data_time: 0.041329  time: 0.078028

This doesn't seem to be a correct test result, compared with your previous test.

The 0.453260 NME above was measured on best_NME_epoch_1.pth, a checkpoint saved after only the first epoch of the new training run. Re-running the test with the downloaded 0.x checkpoint reproduces the expected result:

04/21 13:46:50 - mmengine - WARNING - The prefix is not set in metric class NME.
Loads checkpoint by local backend from path: work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth
04/21 13:46:50 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth
/home/xinli/Projects/mmpose/mmpose/datasets/transforms/common_transforms.py:70: UserWarning: Use the existing "bbox_center" and "bbox_scale". The padding will still be applied.
  warnings.warn('Use the existing "bbox_center" and "bbox_scale"'
/home/xinli/Projects/mmpose/mmpose/datasets/transforms/common_transforms.py:70: UserWarning: Use the existing "bbox_center" and "bbox_scale". The padding will still be applied.
  warnings.warn('Use the existing "bbox_center" and "bbox_scale"'
04/21 13:46:55 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095939  data_time: 0.040763  memory: 564  
04/21 13:46:57 - mmengine - INFO - Evaluating NME...
04/21 13:46:57 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.048875  data_time: 0.040308  time: 0.077922

Could you please also complete README.md, resnet_wflw.md, and resnet_wflw.yml, referring to the existing ones? It would be much appreciated!

Sure!

xin-li-67 (Contributor, Author):

Test result using deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth from 0.x:

2023/04/21 20:09:45 - mmengine - WARNING - The prefix is not set in metric class NME.
2023/04/21 20:09:45 - mmengine - INFO - Load checkpoint from work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth
2023/04/21 20:09:49 - mmengine - INFO - Epoch(test) [50/79]    eta: 0:00:02  time: 0.095429  data_time: 0.040326  memory: 564  
2023/04/21 20:09:52 - mmengine - INFO - Evaluating NME...
2023/04/21 20:09:52 - mmengine - INFO - Epoch(test) [79/79]    NME: 0.048875  data_time: 0.038478  time: 0.075723
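As a quick sanity check before evaluation, the 0.x checkpoint can be inspected directly with plain PyTorch; a minimal sketch, assuming the checkpoint path from the log above and that the weights are wrapped under a 'state_dict' key (falling back to the raw object otherwise):

import torch

# Inspect the 0.x checkpoint before testing it against the 1.x config.
ckpt_path = ('work_dirs/td-reg_res50_8x64e-210e_wflw-256x256/'
             'deeppose_res50_wflw_256x256-92d0ba7f_20210303.pth')
ckpt = torch.load(ckpt_path, map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt

# Print a few parameter names/shapes to confirm backbone and head naming
# line up with the 1.x TopdownPoseEstimator definition.
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))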

Tau-J (Collaborator) left a review:

LGTM

@Tau-J Tau-J merged commit 222ef57 into open-mmlab:dev-1.x Apr 21, 2023
@xin-li-67 xin-li-67 deleted the wflw_deeppose_dev branch April 21, 2023 12:38
Tau-J pushed a commit to Tau-J/mmpose that referenced this pull request Apr 25, 2023
Tau-J pushed a commit to Tau-J/mmpose that referenced this pull request Apr 25, 2023
shuheilocale pushed a commit to shuheilocale/mmpose that referenced this pull request May 6, 2023