Hi Yutong and Cihang,

Thank you for contributing such great work. I have recently been studying the DMAE code, and I would like to ask a quick question.

In `models_mae_distill.py`, the intermediate features collected from the ViT encoder (in the `forward_encoder_customized` function) are cast to fp32 via `torch.float32`. Is this because the teacher model's tensors are fp32 by default, so the student's and teacher's features must have matching dtypes for the distillation loss?

Thanks a lot.
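For context, a minimal sketch of the dtype-alignment situation the question describes. This is an illustrative assumption, not DMAE's actual code: the tensor names, shapes, and the use of an MSE distillation loss here are hypothetical. Under mixed-precision training the student's intermediate features may be fp16, while a teacher running outside autocast produces fp32, so casting the student features to fp32 puts both operands in the same dtype before the loss is computed.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch 2, 196 patch tokens, embed dim 8.
# Under AMP, student activations may be fp16; a teacher run in
# full precision produces fp32 features.
student_feat = torch.randn(2, 196, 8, dtype=torch.float16)
teacher_feat = torch.randn(2, 196, 8, dtype=torch.float32)

# Cast the student features to fp32 so both operands of the
# distillation loss share the same (full) precision.
aligned = student_feat.to(torch.float32)
loss = F.mse_loss(aligned, teacher_feat)
```

Besides satisfying dtype requirements, computing the loss in fp32 also avoids the reduced precision and narrower dynamic range of fp16 when accumulating the per-element squared errors.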