Chinese-Vicuna-lora-13b-belle-and-guanaco: how do I run finetune_continue training? I want to continue training on new data from the 13b model #106

Closed
greatewei opened this issue Apr 24, 2023 · 8 comments

@greatewei

[screenshot: the intermediate checkpoint files referenced below]

However, the 13b LoRA doesn't seem to include these files. How are they generated?

@Facico (Owner) commented Apr 24, 2023

Those are the intermediate checkpoints saved during training.
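
A minimal sketch of where those files come from, assuming finetune.py uses the standard transformers Trainer / PEFT checkpointing (directory names below are illustrative):

```python
# Every `save_steps` optimizer steps, the Trainer writes a
# checkpoint-<step> directory under output_dir, roughly:
#
#   output_dir/
#     checkpoint-50/
#       adapter_model.bin    # LoRA weights at step 50 (with PEFT)
#       optimizer.pt         # optimizer state, needed to resume
#       scheduler.pt         # LR scheduler state
#       trainer_state.json   # global step, epoch, log history
#     checkpoint-100/
#       ...
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    save_steps=50,                   # write a checkpoint every 50 steps
    evaluation_strategy="steps",
    eval_steps=50,
)
```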

@greatewei (Author)

> Those are the intermediate checkpoints saved during training.

If I train a 13b LoRA myself on merge.json, how do I get these checkpoints saved?

@Facico (Owner) commented Apr 24, 2023

If you ran the training yourself, our scripts save these automatically; check your output directory.

@greatewei (Author)

> If you ran the training yourself, our scripts save these automatically; check your output directory.

This is a continued finetune based on Chinese-Vicuna-lora-7b-belle-and-guanaco, but the only files produced at the end were [adapter_config.json, adapter_model.bin]. Am I missing some parameter?

```bash
python finetune.py \
    --data_path /data/chat/Chinese-Vicuna/data/sql.json \
    --output_path /data/chat/models/llama_lora/sql-lora/ \
    --model_path /data/chat/models/llama_base/llama-7b-hf \
    --eval_steps 50 \
    --save_steps 50 \
    --resume_from_checkpoint /data/chat/models/llama_lora/Chinese-Vicuna-lora-7b-belle-and-guanaco \
    --ignore_data_skip True
```

@Facico (Owner) commented Apr 24, 2023

Could it be that your dataset is too small, so training finished before ever reaching save_steps?

@greatewei (Author)

> Could it be that your dataset is too small, so training finished before ever reaching save_steps?

The dataset has 300 examples.

@Facico (Owner) commented Apr 24, 2023

Just do the math: 300 / 128 is fewer than 3 steps, and a checkpoint is only saved every 50 steps.
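
A quick back-of-the-envelope version of that arithmetic (assumptions not taken from this thread: effective batch size 128, i.e. micro batch size times gradient accumulation, and 3 epochs; adjust to your actual settings):

```python
# Sketch: estimate whether training ever reaches a save point.
num_examples = 300
effective_batch_size = 128   # assumed: micro batch * gradient accumulation
epochs = 3                   # assumed

steps_per_epoch = -(-num_examples // effective_batch_size)  # ceil -> 3
total_steps = steps_per_epoch * epochs                      # 9

save_steps = 50
# 9 < 50, so no intermediate checkpoint-* directory is ever written;
# only the final adapter_config.json / adapter_model.bin appear.
print(total_steps >= save_steps)  # False
```

Lowering --save_steps below the total step count (or increasing the dataset size or epochs) would make the intermediate checkpoints appear.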

@greatewei (Author)

> Just do the math: 300 / 128 is fewer than 3 steps, and a checkpoint is only saved every 50 steps.

I see, that explains it.
