Syncing wandb runs more than once when evolution #6065

Closed
SarperYurttas opened this issue Dec 22, 2021 · 25 comments · Fixed by #6374
Labels
bug Something isn't working

Comments

@SarperYurttas

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Evolution

Bug

When I try to evolve hyperparameters, it launches a huge number of runs and the number keeps growing; furthermore, after 20 generations it tries to upload a file of roughly 600 MB, whereas it was about 200 MB at first. This makes evolution slower and slower. How can I fix this? I looked through the code to try to fix it but couldn't find anything wrong.
I also have a question that the tutorials didn't answer: should I train first and use the trained weights for evolution (python train.py --evolve --weights best.pt), or should I use the default weights (python train.py --evolve --weights yolov5s.pt)? Which is correct?

Screenshot 2021-12-22 142229
Screenshot 2021-12-22 142911
Screenshot 2021-12-22 142259
Screenshot 2021-12-22 142447

Environment

  • YOLOv5 2021-11-17 torch 1.10.0 CUDA:0 (NVIDIA GeForce RTX 2070, 8192MiB)
  • OS: Windows 10
  • Python 3.9.7

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@SarperYurttas added the bug Something isn't working label Dec 22, 2021
@github-actions
Contributor

github-actions bot commented Dec 22, 2021

👋 Hello @SarperYurttas, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@AyushExel
Contributor

@SarperYurttas Let's take a look at each of these issues one by one:

  • each evolve run is picked up as a different run so it syncs multiple times. Is this not ideal for your use case? Would you prefer some other setting by default?
  • The incremental size is something I can investigate. One possible reason is network errors, which cause large tqdm progress-bar logs to be uploaded multiple times.

Is this behaviour reproducible? Are you seeing this even without any network issues?

@SarperYurttas
Author

Thank you for your reply @AyushExel. What I'm trying to say is that by the 25th evolve generation it had synced roughly 400 runs, most of them empty, and it keeps syncing more. It seems to behave like 1 run for the 1st generation, 2 runs for the 2nd, ... 25 runs for the 25th, so it ends up with hundreds of runs in wandb. I suspected a loop like "for i in range(evolve): wandb.run" somewhere, but I couldn't find anything like that. I also tried it in a Kaggle notebook and the behaviour was the same. Finally, the network errors aren't the issue, because it just prints them and continues uploading, but the file keeps getting bigger and uploading takes more and more time even at the 25th generation; I expect it to be about 2 GB by the 100th.

@AyushExel
Contributor

@SarperYurttas So you're saying that the number of wandb runs != the number of evolve runs?

@SarperYurttas
Author

SarperYurttas commented Dec 22, 2021

@SarperYurttas So you're saying that the number of wandb runs != the number of evolve runs?

Yes, that's true! While the number of evolve runs equals 25, the number of wandb runs equals 440.

Apparently the problem is somewhere between these two lines of code (train.py).

Screenshot 2021-12-22 191241
Screenshot 2021-12-22 190805
Screenshot 2021-12-22 190846

@AyushExel
Contributor

@SarperYurttas yes, I was able to reproduce this behavior in Colab. I'll try to find the cause and get back to you.

@AyushExel
Contributor

AyushExel commented Dec 23, 2021

@glenn-jocher can you confirm that the number of wandb runs should equal the number of evolve runs? We're seeing some extra runs generated.
The size of the runs also increases after every run because the number of synced media files grows; I don't know the reason for this yet. I'm investigating.

@glenn-jocher
Member

@AyushExel if we --evolve 300 generations then we should see 300 runs.

@glenn-jocher
Member

@AyushExel @SarperYurttas ah sorry let me explain more. --evolve 300 will not generate 300 runs/train/exp directories locally, it will just generate one runs/evolve/exp directory.

In wandb I think this does create 300 runs though.

@SarperYurttas
Author

SarperYurttas commented Dec 23, 2021

Thank you for your reply @glenn-jocher
The actual problem arises from the wandb logger or something related; there is an obvious bug, but I don't know whether it is happening to everyone.

This is my command:
python train.py --epochs 10 --img 1280 --data ../dataset.yaml --workers 4 --batch-size 8 --adam --evolve 100

Normally it should have taken about 50 hours (3 min per epoch × 10 epochs × 100 evolve generations = 3000 min). However, after 50 hours it had completed only 50 generations.

I had to press Ctrl+C at the 50th generation because it had started creating hundreds of runs.

This is the yolov5/wandb directory; it created almost 1000 runs (I had already deleted nearly 300 run folders):
Screenshot 2021-12-23 164401
This is my wandb workspace:
Screenshot 2021-12-23 164437

@glenn-jocher
Member

@SarperYurttas hmm, yes, this seems like a possible wandb bug. You should have at most 1 run per generation, i.e. if you --evolve 300 there should be 300 wandb runs and no more, unless they pertain to previous evolutions.

@SarperYurttas can you provide exact steps to reproduce what you are seeing to help @AyushExel debug this? Thanks!

@SarperYurttas
Author

@glenn-jocher I'm doing nothing more than running this command:
python train.py --epochs 10 --img 1280 --data ../dataset.yaml --workers 4 --batch-size 8 --adam --evolve 100

@AyushExel
Contributor

@SarperYurttas Thanks for the detailed info. I'm taking a look at this. Wasn't able to pinpoint the cause yesterday. Expect a delayed response as I might be in and out today. Thanks for your patience :)

@AyushExel
Contributor

@glenn-jocher I'm working on fixing this, and I just noticed that in the evolve setting the on_train_end hook is called twice, which is why multiple runs were initialized. Do you think it is a bug?

@glenn-jocher
Member

@AyushExel are you sure? I only see the on_train_end hook appear once in train.py L444 here:

yolov5/train.py

Lines 443 to 449 in db1f83b

callbacks.run('on_train_end', last, best, plots, epoch, results)
LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
torch.cuda.empty_cache()
return results

@AyushExel
Contributor

@glenn-jocher yeah I saw that and I confirmed it by running this command:
python train.py --epochs 1 --data coco128.yaml --weights yolov5s.pt --evolve 2. When executing this, the on_train_end function should be called exactly 2 times, right? But it's being called 3 times. I verified it by setting a pdb.set_trace() breakpoint. You can also verify it by just logging something at the top of the function.
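
For reference, a minimal sketch of that kind of check, assuming the hook is implemented as an on_train_end method on the Loggers class in utils/loggers/__init__.py (the signature below is illustrative, not copied from the repo):

# utils/loggers/__init__.py -- illustrative debugging sketch only
import pdb

class Loggers:
    def on_train_end(self, *args, **kwargs):
        print('on_train_end called')  # quick visibility check in the console
        # pdb.set_trace()             # uncomment to pause here and walk up the call stack
        ...                           # normal end-of-training logging would continue here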

@glenn-jocher
Member

@AyushExel strange. Is --evolve 2 just running 3 evolutions? Maybe there's a zero-indexed mixup in there somewhere so it loops through 0,1,2?

@AyushExel
Contributor

@glenn-jocher I just confirmed this multiple times to be sure. I ran an evolution of 10 generations, but the on_train_end function was called more than 21 times.
I created a global variable in utils/loggers/__init__.py and incremented the count each time on_train_end was called. It should be easy to reproduce.
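
Roughly, the counter described above could look like the following (a sketch only; the variable name and placement are assumptions, not the actual debugging code used):

# utils/loggers/__init__.py -- illustrative sketch of the call counter
ON_TRAIN_END_CALLS = 0  # module-level counter, assumed name

class Loggers:
    def on_train_end(self, *args, **kwargs):
        global ON_TRAIN_END_CALLS
        ON_TRAIN_END_CALLS += 1
        print(f'on_train_end call #{ON_TRAIN_END_CALLS}')  # should match the generation count
        ...  # normal end-of-training logging would continue here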

@AyushExel
Contributor

Still going
Screenshot 2022-01-21 at 2 25 23 AM

@glenn-jocher
Member

@AyushExel ok, thanks! Really strange. If you just train normally once, is on_train_end only called once?

@glenn-jocher
Member

glenn-jocher commented Jan 20, 2022

@AyushExel do you have any idea why this might be occurring? The on_train_end callback line is not part of any for loop, and train() is only called once per generation.

Maybe something outside of train.py is calling on_train_end? I'll try to reproduce and investigate myself in the next few days; we've gotten pretty busy with increased issues in mid-January.

EDIT: TODO: Investigate multiple on_train_end calls when evolving (should only be 1 per gen).

@glenn-jocher added the TODO High priority items label Jan 20, 2022
@AyushExel
Contributor

@glenn-jocher Yeah I'm trying to track the cause of this. Will let you know if I find something

@AyushExel
Contributor

Okay, I found something interesting. I'm investigating where the call originates, here in callbacks.py.
At the end of the first evolve generation, the length of self._callbacks[hook] is 1, which is what it should be.

Screenshot 2022-01-21 at 3 01 39 AM

But its length changes to 2 after the 2nd evolve train ends, which explains the multiple calls.
Screenshot 2022-01-21 at 3 01 23 AM

So there is probably a need to flush something when train.py ends. I'll see if I can propose a solution for this.
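
To illustrate the behaviour described above, here is a simplified toy version of the Callbacks registry (the method names follow utils/callbacks.py, but the class is heavily reduced and the evolve loop is only schematic):

# toy sketch: reusing one Callbacks object across evolve generations accumulates hooks
class Callbacks:
    def __init__(self):
        self._callbacks = {'on_train_end': []}

    def register_action(self, hook, callback):
        self._callbacks[hook].append(callback)

    def run(self, hook, *args, **kwargs):
        for cb in self._callbacks[hook]:
            cb(*args, **kwargs)

callbacks = Callbacks()                 # created once, as in main()
for generation in range(3):             # stand-in for the evolve loop
    # each generation registers its logger hooks onto the same shared object
    callbacks.register_action('on_train_end', lambda g=generation: print(f'on_train_end (gen {g})'))
    callbacks.run('on_train_end')       # gen 0 -> 1 call, gen 1 -> 2 calls, gen 2 -> 3 calls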

@AyushExel
Contributor

AyushExel commented Jan 20, 2022

Okay, I'm pretty sure the cause is the same callbacks object being used for all the evolve runs. It is initialized just once in def main(.., callback=Callbacks()), which is fine for a single train run but causes problems with evolve. I'm testing a fix; if it works I'll make a PR.
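
Continuing the toy sketch from the previous comment, the kind of fix being described would give each generation its own registry (a sketch of the idea only; the actual change is the one that landed in PR #6374):

# toy sketch: a fresh Callbacks object per evolve generation avoids duplicate hooks
for generation in range(3):
    callbacks = Callbacks()             # new, empty registry for this generation
    callbacks.register_action('on_train_end', lambda g=generation: print(f'on_train_end (gen {g})'))
    callbacks.run('on_train_end')       # exactly 1 call per generation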

@glenn-jocher
Member

glenn-jocher commented Jan 20, 2022

@SarperYurttas good news 😃! Your original issue may now be fixed ✅ in PR #6374 by @AyushExel. To receive this update:

  • Git – git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View the updated notebooks (Open In Colab, Open In Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher removed the TODO High priority items label Jan 20, 2022