FIX: Error in forward of 4bit linear lora layer #878

Merged

Conversation

BenjaminBossan (Member)

This bug was introduced during the refactoring of the forward function. It should now be fixed, and the method is again equivalent to the forward function before the refactoring:

peft/src/peft/tuners/lora.py

Lines 1207 to 1231 in 4df9c5a

def forward(self, x: torch.Tensor):
    result = super().forward(x)

    if self.disable_adapters or self.active_adapter not in self.lora_A.keys():
        return result
    elif self.r[self.active_adapter] > 0:
        result = result.clone()
        if not torch.is_autocast_enabled():
            expected_dtype = result.dtype
            x = x.to(self.lora_A[self.active_adapter].weight.dtype)
            output = (
                self.lora_B[self.active_adapter](
                    self.lora_A[self.active_adapter](self.lora_dropout[self.active_adapter](x))
                ).to(expected_dtype)
                * self.scaling[self.active_adapter]
            )
        else:
            output = (
                self.lora_B[self.active_adapter](
                    self.lora_A[self.active_adapter](self.lora_dropout[self.active_adapter](x))
                )
                * self.scaling[self.active_adapter]
            )
        result += output
    return result

Bug reported by @jiqing-feng
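
For illustration only, here is a minimal sketch of the dtype-handling pattern used in the forward above. It is not the PEFT implementation: plain nn.Linear modules stand in for the LoRA matrices (the real layer wraps a bitsandbytes 4-bit base layer), and the helper name lora_delta is hypothetical.

import torch
import torch.nn as nn

def lora_delta(x, lora_A, lora_B, dropout, scaling, expected_dtype):
    # When autocast is off, cast the input to the (typically fp32) LoRA
    # weight dtype, then cast the LoRA output back to the dtype of the
    # base layer's result before it is added.
    if not torch.is_autocast_enabled():
        x = x.to(lora_A.weight.dtype)
        return lora_B(lora_A(dropout(x))).to(expected_dtype) * scaling
    # Under autocast, let autocast manage the dtypes.
    return lora_B(lora_A(dropout(x))) * scaling

# Example: fp16 base result combined with fp32 LoRA weights.
base_result = torch.randn(2, 16, dtype=torch.float16)
x = torch.randn(2, 16, dtype=torch.float16)
lora_A = nn.Linear(16, 4, bias=False)   # fp32 by default
lora_B = nn.Linear(4, 16, bias=False)
out = base_result.clone() + lora_delta(
    x, lora_A, lora_B, nn.Identity(), scaling=0.5, expected_dtype=base_result.dtype
)
print(out.dtype)  # torch.float16

The point of the cast-and-restore step is that the result added to the base output keeps the base output's dtype, which is what the pre-refactoring forward did.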

HuggingFaceDocBuilderDev commented Aug 29, 2023

The documentation is not available anymore as the PR was closed or merged.

@younesbelkada (Contributor) left a comment

Thanks for the fix!

@BenjaminBossan BenjaminBossan merged commit 0b2f950 into huggingface:main Aug 30, 2023
11 checks passed
@BenjaminBossan BenjaminBossan deleted the fix-4bit-linear-lora-forward branch August 30, 2023 08:52