Hello,
I have been reviewing the MAE implementation, and I noticed that during both inference and finetuning the model continues to use the same mask ratio (0.75) that was used during pretraining. Could you clarify why the model does not simply encode the entire image instead of masking patches? I am curious about the advantages of, or the rationale behind, keeping this masking strategy after pretraining.
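For reference, here is a minimal sketch of what I mean by encoding the entire image; it assumes the encoder exposes `mask_ratio` as a forward argument (as `forward_encoder` does in `models_mae.py` of the reference implementation), so passing `0.0` keeps every patch. The import path and model builder name are taken from that repo and may differ in other forks.

```python
# Hedged sketch: run a pretrained MAE encoder on ALL patches by passing
# mask_ratio=0.0, assuming forward_encoder(x, mask_ratio) as in the
# reference models_mae.py. Names below may differ in other forks.
import torch
from models_mae import mae_vit_base_patch16  # assumed importable from the MAE repo

model = mae_vit_base_patch16()
model.eval()

imgs = torch.randn(1, 3, 224, 224)  # dummy input batch
with torch.no_grad():
    # mask_ratio=0.0 -> random_masking keeps every patch, so the encoder
    # sees the whole image rather than the 25% visible subset from pretraining.
    latent, mask, ids_restore = model.forward_encoder(imgs, mask_ratio=0.0)

print(latent.shape)  # (1, 1 + num_patches, embed_dim) when nothing is masked
```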
Thank you for your insights!