
less scary overflow notice #833

Merged: 4 commits into microsoft:master, Mar 11, 2021

Conversation

stas00 (Collaborator) commented Mar 8, 2021

This all-caps OVERFLOW in:

[deepspeed] OVERFLOW! Skipping step. Attempted loss

is quite intimidating to users and makes them feel that something is wrong.

This PR suggests a more info-style, no-caps version:

[deepspeed] Overflow! Skipping step. Attempted loss

The biggest problem here is that the "overflow" message lacks context. It would be even more informative and less confusing to say specifically what it is an overflow of. Could it perhaps say:

[deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss

? Now users can act on it, since they know where to look for information on adjusting this if they prefer to get the optimizer to step from step 1.
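
For reference, the knob involved is the fp16 dynamic loss scale in the DeepSpeed config. A minimal sketch, assuming the documented fp16 config keys (the values shown are illustrative defaults, not a recommendation; see the DeepSpeed config docs for the authoritative schema):

```python
# Illustrative DeepSpeed config dict; only the fp16 section is shown.
ds_config = {
    "fp16": {
        "enabled": True,
        "loss_scale": 0,            # 0 selects dynamic loss scaling
        "initial_scale_power": 16,  # initial scale = 2**16; lowering this
                                    # reduces overflow-skipped steps early on
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1,
    }
}
```

Lowering initial_scale_power makes the scaler start closer to its steady-state value, so fewer startup steps overflow and get skipped.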

It would be even better, IMHO, if only one message were printed, at the point when the first step is finally taken, e.g.:

First 22 steps were skipped due to fp16 dynamic loss scale overflow, starting stepping from step 23.

Otherwise there is a huge flurry of these messages.
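
A minimal sketch of that deferred, single-notice behavior (the helper class and names here are hypothetical, not DeepSpeed's actual implementation):

```python
import logging

logger = logging.getLogger(__name__)


class StartupOverflowNotice:
    """Collapse the per-step overflow warnings at the start of training
    into one summary line, printed when the first step is actually taken."""

    def __init__(self):
        self.skipped_steps = 0
        self.first_step_taken = False

    def record(self, overflow: bool):
        if self.first_step_taken:
            return  # only the startup run of skipped steps is summarized
        if overflow:
            # The fp16 dynamic loss scaler skipped this step; stay quiet.
            self.skipped_steps += 1
            return
        self.first_step_taken = True
        if self.skipped_steps:
            logger.info(
                "First %d steps were skipped due to fp16 dynamic loss scale "
                "overflow, starting stepping from step %d.",
                self.skipped_steps, self.skipped_steps + 1)
```

Here record(overflow) would be called once per optimizer step with the scaler's overflow flag, so a run of startup skips produces exactly one line.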

stas00 (Collaborator, Author) commented Mar 11, 2021

OK, as discussed, changed it to:

[deepspeed] fp16 dynamic loss scale overflow! Skipping step. Attempted loss 

Also fixed the stray [deepscale] prefix; I hear that was the original name.

jeffra merged commit 29853c3 into microsoft:master on Mar 11, 2021
jeffra added a commit to jeffra/DeepSpeed that referenced this pull request Aug 25, 2021
* set adamw_mode default true (follows FusedAdam and < 0.3.11 logic) (microsoft#844)

* less scary overflow notice (microsoft#833)

Co-authored-by: Jeff Rasley <[email protected]>

* Add optimizers and schedules to RTD and updated the corresponding part in the website (microsoft#799)

* add optimizers and schedules to rtd

* update ds website and fix links

* add optimizers and schedules to rtd

* update ds website and fix links

* add flops profiler to rtd

* fix

Co-authored-by: Shaden Smith <[email protected]>

* small tweaks (microsoft#839)

* Control ZeRO wall clock timers (microsoft#849)

* Control ZeRO wall clock timers

* Disable more ZeRO3 debug prints

Co-authored-by: Jeff Rasley <[email protected]>

* [WarmupDecayLR] fix log(0) & 1/log(1) bugs (microsoft#772)

* fix log(0) & 1/log(1) bugs

* simplify

Co-authored-by: Jeff Rasley <[email protected]>
Co-authored-by: Reza Yazdani <[email protected]>
Co-authored-by: Cheng Li <[email protected]>

* bump to v0.3.12

* Bug fix: Remove client optimizer param_group list item that does not have 'params' (microsoft#827)

Co-authored-by: Jeff Rasley <[email protected]>

* [doc] pipeline doc typos/improvements (microsoft#659)

Admin merging for pure-doc PR that does not trigger build.

* Samyamr/inference hook fix (microsoft#851)

* Fix mis-aligned-grad

When a parameter is not divisible by world size, the partitioned gradients are mis-aligned due to incorrect padding handling. This PR should fix that.

* Formatting fix

* Adding static_scale test back for Z3, and also changing hidden size to be not divisible by world_size

* also removing alignment from flat fp16 buffers

* Testing for hidden dim alignment

* inference hook fix

* Update stage3.py

* formatting

* [bug-fix] move params to gpu if offload params is turned off

Co-authored-by: Samyam Rajbhandari <[email protected]>
Co-authored-by: Shaden Smith <[email protected]>
Co-authored-by: Jeff Rasley <[email protected]>

* ZeRO Stage 2: Clear reduced gradients (microsoft#856)

* Ensure gradients of other partitions are cleared after reduction

* Remove redundant code

Co-authored-by: Jeff Rasley <[email protected]>

* Squash stage3 v1 (microsoft#146)

Co-authored-by: Samyam <[email protected]>
Co-authored-by: Jeff Rasley <[email protected]>
Co-authored-by: Samyam Rajbhandari <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Shaden Smith <[email protected]>
Co-authored-by: Shaden Smith <[email protected]>
Co-authored-by: eltonzheng <[email protected]>

* formatting fix (microsoft#150)

* stage3 bugfix (API) update and simplified FP16 Z3 tests (microsoft#151)

* fp16 Z3 API update and bugfix

* revert debug change

* docs

* filling in allocation docs

* better assumption docs

* doc progress

* config json

* major docs edits

* auto registration works for accessed cases

* working on small models.

* debugging large-model discovery?

* fix discovery to first forward pass?

* return obj ext param

* support None parameters in auto-discovery

Co-authored-by: Jeff Rasley <[email protected]>
Co-authored-by: Stas Bekman <[email protected]>
Co-authored-by: Cheng Li <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Reza Yazdani <[email protected]>
Co-authored-by: Samyam Rajbhandari <[email protected]>
Co-authored-by: eltonzheng <[email protected]>