Resuming from a "PEFT checkpoint" is not the same as resuming from a regular checkpoint. You'll want to set lora_model_dir to point to the checkpoint directory, IIRC. @NanoCode012 does that sound right?
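For reference, a sketch of what that suggestion might look like with the paths from this report (the checkpoint-1500 directory name is illustrative and would need to match an actual checkpoint folder inside the output directory):

python3 -m axolotl.cli.merge_lora sft_34b.yml --lora_model_dir="/workspace/axolotl/output/Yi-34B/ljf-yi-34b-lora/checkpoint-1500" --output_dir=/data1/ljf2/data-check-test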
Please check that this issue hasn't been reported before.
Expected Behavior
The correct LoRA adapter files should be generated after training completes; with save_safetensors: true set, the adapter should be written in the safetensors format.
Current behaviour
Running the following command to merge the model produces an error:
python3 -m axolotl.cli.merge_lora sft_34b.yml --lora_model_dir="/workspace/axolotl/output/Yi-34B/ljf-yi-34b-lora" --output_dir=/data1/ljf2/data-check-test
Steps to reproduce
The following parameter does not take effect:
save_safetensors: true
The adapter file actually generated after training is:
adapter_model.bin
rather than the expected adapter_model.safetensors.
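As a stopgap, the already-written adapter_model.bin can be re-saved in the safetensors format with the safetensors library. This is only a minimal sketch, assuming the adapter directory is the one passed to merge_lora above and that the .bin file is a plain state dict of tensors:

# Re-save the LoRA adapter weights as safetensors (workaround sketch).
import torch
from safetensors.torch import save_file

adapter_dir = "/workspace/axolotl/output/Yi-34B/ljf-yi-34b-lora"  # directory from the merge command above

# Load the pickled adapter weights onto CPU.
state_dict = torch.load(f"{adapter_dir}/adapter_model.bin", map_location="cpu")

# safetensors requires contiguous tensors; LoRA adapter weights normally are.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}

# Write the same weights in the format that save_safetensors: true should have produced.
save_file(state_dict, f"{adapter_dir}/adapter_model.safetensors")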
Config yaml
No response
Possible solution
No response
Which Operating Systems are you using?
Python Version
3.10
axolotl branch-commit
main
Acknowledgements