AssertionError: The given checkpoint is not a LoRA checkpoint, please specify --finetuning_type full/freeze instead.
#34
Labels
solved
This problem has already been solved
Training arguments:
CUDA_VISIBLE_DEVICES=0 python src/train_sft.py \
    --model_name_or_path ./Bloom/ \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --checkpoint_dir path_to_pt_checkpoint \
    --output_dir path_to_sft_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --resume_lora_training False \
    --lora_target query_key_value \
    --plot_loss \
    --fp16
Does Bloom not support LoRA? Thanks.
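The assertion concerns the checkpoint, not BLOOM itself: with --finetuning_type lora, the directory passed to --checkpoint_dir must contain LoRA adapter weights, and path_to_pt_checkpoint apparently does not. A minimal sketch for checking the directory, assuming the PEFT-style adapter layout (adapter_config.json plus adapter_model.bin) that LoRA checkpoints typically use; the path below is the placeholder from the command, substitute your actual directory:

# A PEFT-style LoRA checkpoint carries adapter files, while a full/freeze
# checkpoint stores plain model weights (e.g. pytorch_model.bin) instead.
if [ -f path_to_pt_checkpoint/adapter_model.bin ]; then
    echo "LoRA checkpoint: --finetuning_type lora is consistent"
else
    echo "not a LoRA checkpoint: pass --finetuning_type full or freeze,"
    echo "or point --checkpoint_dir at a directory produced by LoRA training"
fi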