Training got stuck when I used DistributedDataParallel mode, but DataParallel mode works #2375
👋 Hello @wuqi930907, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
@wuqi930907 I think in your first command you need to also specify which two devices you are assigning if you use DDP. For example you might add --device 0,1 or --device 6,7:

python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 32 --data test.yaml --weights pretrained_model/yolov5l.pt --device 0,1
@glenn-jocher Thank you for your reply. I have added the device parameter and changed my command to "python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 32 --data test.yaml --weights pretrained_model/yolov5l.pt --device 0,1". However, training still gets stuck.
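When DDP hangs like this while the single-process DataParallel command works, it helps to first check whether raw NCCL communication works at all outside of train.py. Below is a minimal sketch of such a check; the filename ddp_check.py and the --local_rank handling are assumptions for a torch.distributed.launch-style launcher, not part of YOLOv5. If this script also hangs, the problem is in the NCCL/driver/environment setup rather than in YOLOv5 itself.

```python
# ddp_check.py -- minimal NCCL sanity check (hypothetical helper, not part of YOLOv5).
# Launch with: python -m torch.distributed.launch --nproc_per_node 2 ddp_check.py
import argparse
import os

import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
# torch.distributed.launch passes --local_rank to every process it spawns
parser.add_argument('--local_rank', type=int, default=int(os.getenv('LOCAL_RANK', 0)))
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

# One all_reduce exercises the same NCCL path DDP uses to synchronize gradients.
t = torch.ones(1, device=args.local_rank)
dist.all_reduce(t)
print(f'rank {dist.get_rank()}/{dist.get_world_size()}: all_reduce ok, sum={t.item()}')
```

Setting NCCL_DEBUG=INFO in the environment before launching also makes NCCL log which transports and interfaces it selects, which often points to the cause of a hang.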
@Finnis @wuqi930907 you should train in a Docker environment if you are having issues with your local environment. See Docker quickstart.

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are passing. These tests evaluate proper operation of basic YOLOv5 functionality, including training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu.
@roytseng-tw @Finnis @wuqi930907 this problem seems to be related to #2405. @wudashuo found that rolling back to commit a3ecf0f allowed for proper DDP training. The way to do this would be:

git clone https://github.com/ultralytics/yolov5
cd yolov5
git checkout a3ecf0fd640465f9a7c009e81bcc5ecabf381004

Alternatively I would highly recommend training DDP in a Docker image. We do all of our training in Docker images for speed and reproducibility. You can get started with the Docker image via the Docker quickstart above.
@glenn-jocher Update:
@roytseng-tw which suggestion did you try? If you believe you have a reproducible issue (happens on a common environment like Docker, on a common dataset like COCO/COCO128/VOC), on unmodified and current master code, then I would recommend filing a complete bug report using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and reproduce the problem. Thank you!
@roytseng-tw @glenn-jocher Thanks, I have cloned the latest code and this problem was solved.
Hi, I have built a Docker image according to the Dockerfile. But my training got stuck with the command "python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 32 --data test.yaml --weights pretrained_model/yolov5l.pt". Following is my log. Although the terminal log has stopped, the training processes still exist. Then I changed my command to "python train.py --batch-size 32 --data test.yaml --weights pretrained_model/yolov5m.pt --device 0,1", and everything is normal.
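For context on why the second command behaves differently: without the torch.distributed.launch wrapper, a single process drives both GPUs, whereas the launcher starts one process per GPU and all of them must rendezvous before training begins. A simplified sketch of the two modes, using generic PyTorch for illustration rather than YOLOv5's actual train.py internals:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2).cuda()

# DataParallel (the command that works): a single process replicates the model
# across GPUs 0 and 1 on each forward pass, so there is no inter-process
# rendezvous that could hang.
dp_model = nn.DataParallel(model, device_ids=[0, 1])

# DistributedDataParallel (the command that hangs): torch.distributed.launch
# spawns one process per GPU, and each process blocks in init_process_group()
# until every rank has joined -- a wrong --device mapping or leftover processes
# from a previous run (like the ones described above) can stall all ranks here.
# torch.distributed.init_process_group(backend='nccl', init_method='env://')
# ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```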