Thanks for your amazing work! As the title mentions, my task is to fine-tune a large language model via different LoRA-like adapters. Currently I am exploring fine-tuning with LoKr on our customized model across different frameworks (e.g., PyTorch, Paddle).
Is there any documentation that explains how to apply a LoKr adapter to such an LLM? I used the LoKr API from HuggingFace recently, but it seems to have bugs, which I suspect is because the HF developers didn't test LoKr on any LLMs (see here for why I think so). Thanks.
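For reference, this is roughly what my HuggingFace attempt looks like: a minimal sketch using PEFT's `LoKrConfig` with `get_peft_model`. The checkpoint name and `target_modules` below are illustrative placeholders, not our actual setup:

```python
from transformers import AutoModelForCausalLM
from peft import LoKrConfig, get_peft_model

# Load a small base causal LM (illustrative checkpoint name)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoKr factorizes each weight update as a Kronecker product of two
# smaller matrices, instead of the plain low-rank product LoRA uses.
config = LoKrConfig(
    r=16,                                 # rank of the low-rank factor
    alpha=16,                             # scaling, analogous to LoRA's alpha
    target_modules=["q_proj", "v_proj"],  # illustrative attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoKr factors are trainable
```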
In HakuPhi I already show how to use LyCORIS on LLMs.
LyCORIS is now designed to be a general PEFT library that can wrap ANY PyTorch module, no matter what it is for. We just won't guarantee that its performance will surpass or be on par with algorithms that LyCORIS didn't implement.
Check the example: if your model is implemented in PyTorch and uses custom attention (instead of PyTorch's built-in MHA), it will definitely work.
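For anyone landing here, a minimal sketch of the wrapper flow described above, following the `create_lycoris` API from the LyCORIS README. The toy model, target-name regex, and hyperparameters are illustrative, so adapt them to your own module names:

```python
import torch
import torch.nn as nn
from lycoris import create_lycoris, LycorisNetwork

# Any plain PyTorch model works; here a toy stand-in for an LLM block.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)

# Optionally restrict which modules get wrapped (regex on module names).
LycorisNetwork.apply_preset({"target_name": [".*"]})

# Build a LoKr network over the model.
lycoris_net = create_lycoris(
    model,
    1.0,              # multiplier applied to the adapter's output
    linear_dim=16,    # rank for the low-rank part inside LoKr
    linear_alpha=2.0,
    algo="lokr",
)
lycoris_net.apply_to()  # patches the wrapped modules' forward passes

# Train only the LoKr parameters; the base model stays frozen.
optimizer = torch.optim.AdamW(lycoris_net.parameters(), lr=1e-4)
```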