
This is the most comprehensive LLM training code I've seen so far #17

Closed
nieallen opened this issue Jun 7, 2023 · 8 comments

nieallen commented Jun 7, 2023

This codebase covers pre-training, the RLHF pipeline, and the LoRA and QLoRA techniques. It is really comprehensive.
It would be even better if multi-turn dialogue construction were also supported, e.g. given [q1, a1, q2, a2, q3, a3], build the training sample as prompt: q1*[IGNORE_INDEX] + a1 + q2*[IGNORE_INDEX] + a2 + q3*[IGNORE_INDEX], response: a3.
That would be great, haha.

hiyouga commented Jun 7, 2023

The current training code already supports multi-turn dialogue; you need to specify the history column in dataset_info.json.
For multi-turn dialogue training, the commonly adopted scheme is:

inputs: q1 + a1 + q2 + a2 + q3 + a3
labels: [IGNORE] + [IGNORE] + [IGNORE] + [IGNORE] + [IGNORE] + a3

So the current implementation already fits multi-turn dialogue training.
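
Below is a minimal Python sketch of the masking scheme described above, purely for illustration (it is not the repository's actual preprocessing code; the helper name, the plain-text turn format, and the use of a Hugging Face tokenizer are assumptions). IGNORE_INDEX is set to -100, the value ignored by the standard cross-entropy loss.

```python
# Illustrative sketch only: mask the entire dialogue history and supervise
# just the final answer, as in the labels shown above.
IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default

def build_last_turn_sample(tokenizer, turns):
    """turns: [q1, a1, q2, a2, q3, a3] -> (input_ids, labels)."""
    *history, final_answer = turns
    prompt_ids = []
    for text in history:
        prompt_ids += tokenizer.encode(text, add_special_tokens=False)
    answer_ids = tokenizer.encode(final_answer, add_special_tokens=False)
    answer_ids += [tokenizer.eos_token_id]

    input_ids = prompt_ids + answer_ids
    # Supervise only the final answer; mask the whole dialogue history.
    labels = [IGNORE_INDEX] * len(prompt_ids) + answer_ids
    return input_ids, labels
```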

hiyouga added the "pending" (This problem is yet to be addressed) label on Jun 7, 2023

nieallen commented Jun 7, 2023

The current training code already supports multi-turn dialogue; you need to specify the history column in dataset_info.json. For multi-turn dialogue training, the commonly adopted scheme is:

inputs: q1 + a1 + q2 + a2 + q3 + a3
labels: [IGNORE] + [IGNORE] + [IGNORE] + [IGNORE] + [IGNORE] + a3

So the current implementation already fits multi-turn dialogue training.

For multi-turn corpora, wouldn't it be better to mask only the question in each turn and leave the answers unmasked? That way the model learns the response of every turn, which should help it do dialogue better.

hiyouga commented Jun 7, 2023

That might break the semantic information of the BOS and EOS tokens, so we don't recommend it.

hiyouga commented Jun 7, 2023

Sorry, my previous statement may have been inaccurate. I took another look at Vicuna's training code, and this approach can indeed speed up training on multi-turn dialogue. We will consider implementing a similar feature soon. Thanks for the suggestion!
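
For comparison, here is a hedged sketch of the Vicuna-style scheme being discussed, where only the user turns are masked and every assistant answer contributes to the loss (again an illustration under assumed plain-text turns and a Hugging Face tokenizer, not the code that was later added in b6faf02).

```python
# Illustrative sketch only: Vicuna-style multi-turn masking, where user turns
# are ignored in the loss and every assistant answer is supervised.
IGNORE_INDEX = -100

def build_multi_turn_sample(tokenizer, turns):
    """turns: [q1, a1, q2, a2, ...] alternating user/assistant texts."""
    input_ids, labels = [], []
    for i, text in enumerate(turns):
        ids = tokenizer.encode(text, add_special_tokens=False)
        if i % 2 == 0:
            # user turn: visible in the inputs, masked out of the loss
            input_ids += ids
            labels += [IGNORE_INDEX] * len(ids)
        else:
            # assistant turn: supervised and terminated with EOS
            ids = ids + [tokenizer.eos_token_id]
            input_ids += ids
            labels += ids
    return input_ids, labels
```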

hiyouga added the "enhancement" (New feature or request) label on Jun 7, 2023

nieallen commented Jun 8, 2023

Sorry, my previous statement may have been inaccurate. I took another look at Vicuna's training code, and this approach can indeed speed up training on multi-turn dialogue. We will consider implementing a similar feature soon. Thanks for the suggestion!

Looking forward to it! In my LoRA fine-tuning experiments, the Vicuna-style multi-turn construction performed better than masking the whole prompt. I'm not sure whether QLoRA would behave differently, but it will probably improve as well.

flaviadeutsch commented Jun 8, 2023

Looking forward to it, +1

nieallen commented Jun 8, 2023

Also, could LoRA fine-tuning for RWKV be supported later? RWKV is really fast, roughly twice GPT's generation speed. But since it is not a pure Transformer architecture, PEFT cannot be used for LoRA training, and there is no training script for it right now.

hiyouga commented Jun 14, 2023

In the latest code (b6faf02), we have implemented training on multi-turn dialogue corpora.

In addition, we are not going to consider adding RWKV fine-tuning for now.

hiyouga added the "solved" (This problem has been already solved) label and removed the "pending" (This problem is yet to be addressed) label on Jun 14, 2023
hiyouga closed this as completed on Jun 16, 2023
hiyouga removed the "enhancement" (New feature or request) label on Feb 6, 2024