I'm very interested in the Vicuna model and just came across this project, so I'd like to ask a question. LoRA finetuning produces an adapter. Is the continuous-finetune mentioned on the project homepage done by merging the LoRA adapters from separate finetuning runs on different corpora, or is it implemented by merging the training corpora? Thanks.
By merging the corpora.
I see there is a merge.py under the tools directory that merges the LoRA adapter parameters into the base model. Could continuous finetuning then be done by merging parameters instead of merging corpora?
Simply adding multiple sets of LoRA parameters together definitely won't give useful results; what you have in mind is more like an MoE-style setup, see this issue for reference. As for the merge in the tools directory, the output is actually still wrapped in a peft shell (and peft itself should be a wrapper around loralib). Although it does implement merging the matrices, at the code level they are still two separate pieces.
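For reference, a minimal sketch of what the single-adapter weight merge looks like with peft (the model and adapter paths below are illustrative placeholders, not this project's actual checkpoints):

```python
# Minimal sketch: fold one LoRA adapter's delta into the base weights with peft.
# Paths are placeholders for illustration only.
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16
)
# Load the trained LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# merge_and_unload() folds W' = W + (alpha / r) * B @ A into each wrapped
# linear layer and strips the peft wrappers, returning a plain HF model.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```

Note that this only folds a single adapter's delta into the weights; naively summing several adapters trained on different corpora mixes their deltas with no gating, which is why an MoE-style approach is suggested above instead.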