[Finetune] Scripts for Llama2-7b lora finetune example using stock pytorch #327
Conversation
finetune/llama/README.md (outdated diff excerpt)

```
## Step-by-step run guide
# Prepare dependency
wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/cpu/oneccl_bind_pt-2.0.0%2Bcpu-cp39-cp39-linux_x86_64.whl
```
It would be better to format the relevant commands as markdown fenced code blocks: https://www.markdownguide.org/extended-syntax/#fenced-code-blocks
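For example, the dependency lines from the README could be presented as a fenced shell block (a sketch of the suggested formatting; the install step is an assumed follow-up to the wget and is not in the original excerpt):

```bash
# Prepare dependency
wget https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/cpu/oneccl_bind_pt-2.0.0%2Bcpu-cp39-cp39-linux_x86_64.whl
# Install the downloaded wheel (assumed follow-up step; glob matches either encoded or decoded filename)
pip install ./oneccl_bind_pt-*.whl
```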
You can place the WHL package URL directly in the Python requirements.txt file, and pip will download it automatically. However, you first need to account for the Python version: the package you provided is only compatible with Python 3.9.
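A minimal sketch of that approach using a PEP 508 direct reference with an environment marker (the marker is an assumption based on the cp39 wheel tag):

```
# requirements.txt sketch: pip downloads the wheel straight from the URL, Python 3.9 only
oneccl_bind_pt @ https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/cpu/oneccl_bind_pt-2.0.0%2Bcpu-cp39-cp39-linux_x86_64.whl ; python_version == "3.9"
```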
Hi, thx for the comments. Updated accordingly.
Suggest moving finetune/llama/README.md to finetune/README.md to cover all finetune operations.
…nts.txt to upper level folder
Hi changqing, thx for pointing that out. I have moved the common README.md and requirements.txt to the upper-level folder so that more finetune models can share them.
Hi @marvin-Yu, here I introduced an upgrade to accelerate==0.27.2, which is not compatible with the xFT requirement of version 0.23.0. The main reason is that with transformers==4.38.1, the training args include an accelerator_config option named 'use_seedable_sampler'; with accelerate==0.23.0 this raises an "unexpected keyword argument 'use_seedable_sampler'" error.
The issue is very similar to hiyouga/LLaMA-Factory#2552. The fix is either to upgrade to accelerate==0.27.2 or to downgrade to transformers==4.36.0.
So I would like some suggestions: can we just upgrade the accelerate version, or should I modify the transformers source code so that it works with accelerate==0.23.0 and transformers==4.38.1?
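In requirements.txt terms, the two options mentioned above would look roughly like this (pins taken from the discussion; no other combinations have been tried here):

```
# Option A: upgrade accelerate to work with transformers 4.38.1
transformers==4.38.1
accelerate==0.27.2

# Option B: keep accelerate at the xFT-required version and downgrade transformers
transformers==4.36.0
accelerate==0.23.0
```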
It's not recommended to modify the transformers source code and version. Currently, the new models have relatively high requirements for the transformers version. You can try to find an accelerate version that is compatible. Alternatively, you could simply remove the version specifications from the requirements.txt file, allowing pip to determine the appropriate versions automatically.
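A sketch of the unpinned alternative, letting pip resolve an accelerate release that matches the required transformers (pip check is just an extra sanity step):

```bash
# Let pip resolve an accelerate release that is compatible with transformers
pip install transformers accelerate
# Verify that the resolved environment has no conflicting requirements
pip check
```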
Hi Marvin, thx for the quick response! I also agree that changing the transformers source code and version is not a proper solution for the whole project. That is why I introduced the upgrade to accelerate==0.27.2 in the requirements, which I have verified works with transformers==4.38.1 for training. I also noticed that our web demo simply upgrades to accelerate==0.26.1 (https://github.com/intel/xFasterTransformer/blob/main/examples/web_demo/requirements.txt), so I'm guessing it is acceptable to upgrade the accelerate version here for finetune. Thx!
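As a quick sanity check after installing with the proposed pins, the resolved versions can be printed directly (assuming both packages are importable in the current environment):

```bash
python -c "import transformers, accelerate; print(transformers.__version__, accelerate.__version__)"
```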
This PR is a step-by-step example for running LLaMA2-7B LoRA finetuning with stock PyTorch on CPU, covering both single-instance and distributed runs. Pls help to review, thx!
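For reference, the two modes might be launched roughly as sketched below; the script name and arguments are placeholders for illustration, not the actual files added in this PR:

```bash
# Single instance (hypothetical script and argument names)
python finetune_llama2_lora.py --model_name_or_path meta-llama/Llama-2-7b-hf

# Distributed on one CPU node with 2 ranks via torchrun (same placeholder script;
# the process-group backend, e.g. gloo or oneCCL, is assumed to be configured inside the script)
torchrun --nnodes=1 --nproc_per_node=2 finetune_llama2_lora.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf
```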