
[BUG] <title> #1326

Open
bxhsort opened this issue Oct 24, 2024 · 2 comments

bxhsort commented Oct 24, 2024

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Traceback (most recent call last):
  File "/root/autodl-tmp/Qwen/finetune.py", line 374, in <module>
    train()
  File "/root/autodl-tmp/Qwen/finetune.py", line 363, in train
    trainer = Trainer(
  File "/root/miniconda3/envs/qwen/lib/python3.10/site-packages/transformers/trainer.py", line 340, in __init__
    self.create_accelerator_and_postprocess()
  File "/root/miniconda3/envs/qwen/lib/python3.10/site-packages/transformers/trainer.py", line 3883, in create_accelerator_and_postprocess
    self.accelerator = Accelerator(
TypeError: Accelerator.__init__() got an unexpected keyword argument 'dispatch_batches'

Environment

- OS: linux
- Python: 3.10
- Transformers: 4.32.0
- PyTorch: 2.1.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1

Anything else?

No response
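
The traceback points to a version mismatch between transformers and accelerate: the installed transformers release still passes `dispatch_batches` directly to `Accelerator.__init__`, while the installed accelerate release no longer accepts that keyword. A minimal diagnostic sketch (not part of the original report; it only relies on the public `inspect` and `accelerate` APIs) to check which side the local environment is on:

```python
# Diagnostic sketch (an assumption, not from the original report): check whether
# the installed accelerate release still accepts the `dispatch_batches` keyword
# that transformers' Trainer passes to Accelerator() in the traceback above.
import inspect

import accelerate
from accelerate import Accelerator

print("accelerate version:", accelerate.__version__)

params = inspect.signature(Accelerator.__init__).parameters
if "dispatch_batches" in params:
    # The keyword still exists, so this particular TypeError should not occur
    # with this accelerate install.
    print("Accelerator.__init__ accepts dispatch_batches")
else:
    # The keyword has been removed: either upgrade transformers or install an
    # older accelerate release that still exposes it.
    print("Accelerator.__init__ does NOT accept dispatch_batches")
```

If the keyword is missing, upgrading transformers (recent releases pass these options through a data-loader configuration object rather than individual keywords) or downgrading accelerate to a release that predates the removal usually clears this TypeError; exact versions to pin are not given here because they depend on the rest of the environment.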

@guoping1127

I am getting this error too.


This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.
