
Pretrained backbone network setting #69

Closed · MooreManor opened this issue Oct 24, 2022 · 6 comments

@MooreManor commented Oct 24, 2022

@HongwenZhang

  1. In this issue, you said that PyMAF followed the PARE setting. It seems that PyMAF should use weights pretrained on MPII to initialize the backbone for fast convergence, but the config file doesn't offer an mpii option. Should I just use the backbone pretrained on ImageNet (IM) as the default, or add an mpii option myself? (A sketch of what such an option might look like follows this list.)

  2. Could you provide the checkpoint from the first stage, i.e., training on COCO?

  3. Besides, do you have PyMAF scores after adding the RandCrop and synthetic occlusion augmentations?
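For reference, here is a minimal sketch of what an mpii config option could look like, assuming a PyTorch backbone and locally stored checkpoints. The `PRETRAINED` keys, the paths, and the `init_backbone` helper are all hypothetical illustrations, not PyMAF's actual config:

```python
import torch

# Hypothetical mapping from a config option to checkpoint paths; the keys
# and paths below are illustrative, not PyMAF's actual config values.
PRETRAINED = {
    'imagenet': 'data/pretrained/backbone_imagenet.pth',  # the "IM" default
    'mpii':     'data/pretrained/pose_hrnet_mpii.pth',    # 2D-pose pretrained
    'coco':     'data/pretrained/pose_hrnet_coco.pth',    # as in the PARE code
}

def init_backbone(backbone: torch.nn.Module, source: str = 'imagenet'):
    """Load pretrained weights; strict=False tolerates missing/extra heads."""
    state = torch.load(PRETRAINED[source], map_location='cpu')
    state = state.get('state_dict', state)  # unwrap a training checkpoint
    missing, unexpected = backbone.load_state_dict(state, strict=False)
    print(f'{source}: {len(missing)} missing / {len(unexpected)} unexpected keys')
    return backbone
```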

@HongwenZhang (Owner)

Hi, thanks for your questions.

Note that it is the PyMAF-X version (pdf) that followed the PARE setting. The checkpoint in the PyMAF (smpl) branch has not been updated and still follows the SPIN setting.

  1. The PARE code used the COCO 2D pose pre-trained backbone, see here.

  2. Currently, only the PyMAF-X version uses COCO-EFT for training, and a PyMAF-X checkpoint cannot be directly loaded into PyMAF (sorry about that). We plan to release an updated PyMAF trained with COCO-EFT in the future. (A generic partial-loading workaround is sketched after this list.)

  3. We did not use RandCrop or synthetic occlusion augmentations during training, but such augmentations should certainly improve robustness to occlusion. (See the augmentation sketch below.)
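On point 2: since the PyMAF-X state dict does not match PyMAF's, one generic PyTorch workaround is to copy only the tensors whose names and shapes agree. This is just a sketch of that idea, not an official PyMAF utility, and it only helps where the two architectures genuinely overlap:

```python
import torch

def load_matching_weights(model, ckpt_path):
    """Copy only parameters whose name and shape match the target model.
    A generic PyTorch workaround, not an official PyMAF utility."""
    ckpt = torch.load(ckpt_path, map_location='cpu')
    src = ckpt.get('model', ckpt.get('state_dict', ckpt))  # unwrap if wrapped
    dst = model.state_dict()
    kept = {k: v for k, v in src.items()
            if k in dst and v.shape == dst[k].shape}
    dst.update(kept)
    model.load_state_dict(dst)
    print(f'loaded {len(kept)}/{len(dst)} tensors')
    return model
```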
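And on point 3, for anyone who wants to try such augmentations: torchvision's stock ops can approximate RandCrop plus synthetic occlusion. Note that synthetic-occlusion augmentation in the literature typically pastes real object segments (e.g., from Pascal VOC) onto the image, so erasing rectangles is only a rough stand-in:

```python
import torchvision.transforms as T

# A sketch of RandCrop + synthetic-occlusion-style augmentation using stock
# torchvision ops; parameter values are illustrative.
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),   # RandCrop-style jitter
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    T.RandomErasing(p=0.5, scale=(0.02, 0.2)),    # rectangular "occluder"
])
```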

@MooreManor (Author)

@HongwenZhang

Thanks for your quick reply!

  1. The PARE paper says that they used pretrained weights from MPII. Is the PARE code implementation different from the paper's setting?
     [screenshot of the relevant passage from the PARE paper]
  2. The SPIN setting uses H36M for the first training stage, while according to the PyMAF docs, PyMAF uses COCO at the first stage. The settings seem slightly different. (A sketch of such a mixed second stage follows the quoted line below.)

PyMAF is trained on COCO at the first stage and then trained on the mixture of both 2D and 3D datasets at the second stage.
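For the mixture stage described in that quoted README line, here is a hedged sketch of sampling from a mix of 2D and 3D datasets with fixed per-dataset ratios. The placeholder datasets and the ratios are illustrative only, not PyMAF's actual data partition:

```python
import torch
from torch.utils.data import (TensorDataset, ConcatDataset,
                              WeightedRandomSampler, DataLoader)

# Placeholder datasets standing in for H36M and the 2D-keypoint sets;
# sizes and ratios below are illustrative, not PyMAF's actual partition.
h36m    = TensorDataset(torch.randn(1000, 8))
coco_2d = TensorDataset(torch.randn(600, 8))
mpii_2d = TensorDataset(torch.randn(400, 8))

parts  = [h36m, coco_2d, mpii_2d]
ratios = [0.5, 0.3, 0.2]            # desired sampling probability per dataset
mixed  = ConcatDataset(parts)

# Per-sample weight = dataset ratio spread evenly over that dataset's samples.
weights = torch.cat([
    torch.full((len(ds),), r / len(ds)) for ds, r in zip(parts, ratios)
])
sampler = WeightedRandomSampler(weights, num_samples=len(mixed), replacement=True)
loader  = DataLoader(mixed, batch_size=64, sampler=sampler)
```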

@HongwenZhang (Owner)

Hi,

  1. I am not sure about the actual implementation of PARE, but pretraining on MPII or COCO would likely give similar results (just my thoughts).
  2. Sorry about the misleading README. It was modified recently by another pull request, which may have introduced some conflicting descriptions. According to the conference version of PyMAF, we indeed used H36M at the first stage. The original README can be found at https://github.com/HongwenZhang/PyMAF/tree/a5feca1623d2f890e3cec516a3d9f5136efc00da#training
     Since we have not updated the PyMAF checkpoint, I will correct the README now. Thank you for pointing this out!

@MooreManor (Author)

Thanks! I'm clear now:)

@MooreManor (Author)

@HongwenZhang Have you compared the results of PyMAF-X across first-stage training data (i.e., COCO-EFT vs. H36M)? Is there a big difference in the training results?

@HongwenZhang (Owner)

I did not check the results of the first stage. The performance gap should be significant if the evaluation dataset is 3DPW.
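For anyone reproducing that comparison, 3DPW results are usually reported as MPJPE and PA-MPJPE (MPJPE after Procrustes alignment). A minimal NumPy sketch of both metrics, for a single pose of J joints:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the same units as the input (mm)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of pred to gt.
    pred, gt: (J, 3) arrays of joint positions."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the cross-covariance matrix (Kabsch).
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()    # optimal similarity scale
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```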
