Hello! Thank you for open-sourcing this great work. @yaoyuanTHU @guozonghao96 @xrorrim
I tried pretraining and fine-tuning LLaVA-UHD but found a small error.
I calculated the number of trainable parameters of the LLM with the following snippet:

```python
if model_args.freeze_backbone:
    model.model.requires_grad_(False)

# Count total and trainable parameters of the LLM backbone.
trainable_params_info["LLM_backbone"] = {
    "#params": sum(p.numel() for p in model.model.parameters()),
    "#trainable_params": sum(p.numel() for p in model.model.parameters() if p.requires_grad),
}
```
When pretraining with pretrain.sh, the number of trainable LLM parameters is not 0, even though the paper states: "Stage 1: Pretraining details. During this stage, only the perceiver resampler is tuned."
Could you please clarify this small error? Thanks in advance.
In the pre-training stage, the LLM is frozen. The repository has since been substantially updated, and you can now select which parts of the model to train.
For details, please refer to the main branch and the LLaVA-UHD v1 branch.
If there are any new problems, feel free to open a new issue.
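For anyone checking the same thing, here is a minimal sketch of how one might freeze everything except the perceiver resampler before counting trainable parameters. The `"resampler"` substring is an assumption about the parameter names, not the repository's actual API; verify it against the module layout in the branch you use.

```python
# Minimal sketch (assumes a loaded LLaVA-UHD model; "resampler" is a guessed
# substring of the perceiver resampler's parameter names -- check the actual
# module names in your branch before relying on this).
def freeze_all_but_resampler(model):
    for name, param in model.named_parameters():
        # Keep only the perceiver resampler trainable, as described for Stage 1.
        param.requires_grad_("resampler" in name)

def count_params(module):
    total = sum(p.numel() for p in module.parameters())
    trainable = sum(p.numel() for p in module.parameters() if p.requires_grad)
    return total, trainable

# Usage (after building the model):
# freeze_all_but_resampler(model)
# total, trainable = count_params(model.model)  # LLM backbone
# print(f"LLM backbone: {total} params, {trainable} trainable")  # expect trainable == 0
```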