Inconsistency between get_nb_trainable_parameters and num_parameters(only_trainable=True) for prompt tuning #1526
Comments
When I tried, I got 40960 and 0 parameters, respectively. But that isn't too surprising. The …
Right, we should have trainable parameters > 0; this is consistent with LoRA. For CodeLlama 7B I get: …
However, for prompt tuning, …
I agree with the outputs from get_nb_trainable_parameters(). …
I see, does that mean LoRA is modifying the base model in place with the adapters?
I see. For prompt tuning, the prompt_encoder is added separately, since it is not meant to modify the base model, so num_parameters(only_trainable=True) reports it as 0.
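To illustrate why the two counts diverge, here is a minimal sketch (the model name "gpt2" and the config values are placeholders, not the setup from this issue): LoRA injects its adapter weights into the modules of the wrapped base model, so the base model's own parameter listing contains them, whereas the prompt-tuning prompt_encoder is a module on the PeftModel wrapper, which the base model never sees.

```python
# Sketch of where the trainable parameters live (placeholder model/configs).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptTuningConfig, TaskType, get_peft_model

# LoRA: adapter weights are injected into the wrapped base model's modules,
# so the base model's own parameter listing contains them.
lora_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(task_type=TaskType.CAUSAL_LM),
)
print(any("lora_" in name for name, _ in lora_model.get_base_model().named_parameters()))  # True

# Prompt tuning: the prompt encoder is a module on the PeftModel wrapper,
# not on the base model, so the base model never sees it (and the base model
# itself is frozen).
pt_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=10),
)
print(any("prompt_encoder" in name for name, _ in pt_model.get_base_model().named_parameters()))  # False
print(any("prompt_encoder" in name for name, _ in pt_model.named_parameters()))  # True
```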
Do you think overriding num_parameters …
Yes, exactly, this is the reason. I agree that it could be confusing. In the PEFT docs, we only advertise …
Your call @BenjaminBossan. I'm happy to raise a PR for either of them.
I think documenting is the better solution. Overriding num_parameters …
@BenjaminBossan raised a PR: #1531. Thanks.
fixed in #1531 |
System Info
peft==0.8.2
transformers==4.37.2
Who can help?
No response
Information
Tasks
examples folder
Reproduction
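A minimal sketch of the comparison (the model name "gpt2" and the configs below are placeholders, not the reporter's CodeLlama 7B setup):

```python
# Compare PEFT's own count with the count forwarded to the base model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptTuningConfig, TaskType, get_peft_model

def compare(peft_config):
    base = AutoModelForCausalLM.from_pretrained("gpt2")
    model = get_peft_model(base, peft_config)
    trainable, _total = model.get_nb_trainable_parameters()
    print(type(peft_config).__name__)
    print("  get_nb_trainable_parameters:", trainable)
    # PeftModel does not define num_parameters, so the call is forwarded to
    # the wrapped transformers model.
    print("  num_parameters(only_trainable=True):", model.num_parameters(only_trainable=True))

# LoRA: both counts are > 0, because the adapter weights live inside the
# base model's modules.
compare(LoraConfig(task_type=TaskType.CAUSAL_LM))

# Prompt tuning: get_nb_trainable_parameters() reports the virtual-token
# embedding (num_virtual_tokens * hidden_size), while the forwarded
# num_parameters(only_trainable=True) reports 0 because the prompt encoder
# is not part of the frozen base model.
compare(PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=10))
```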
Expected behavior
[UPDATED]
We should have trainable parameters > 0; this is consistent with LoRA. For CodeLlama 7B I get: …
However, for prompt tuning, …
I agree with the outputs from get_nb_trainable_parameters(). However, I am trying to understand this inconsistent behaviour of num_parameters(only_trainable=True) for LoRA and prompt tuning techniques.