
Update trainer for easier handling of accumulate, compile fixes, and proper reporting #34511

Merged: 17 commits into main from muellerzr-final-gradaccum-check on Nov 4, 2024

Conversation

muellerzr
Contributor

What does this PR do?

Alternative to #34442

TL;DR: we just need to remove lru_cache and everything will work fine (this PR also adds a test).
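For context, a minimal, hypothetical sketch of the pattern involved (illustration only; the resolver and its names are made up, this is not the actual transformers code): a loss-function resolver wrapped in functools.lru_cache sits on the compiled loss path, and the cache wrapper can interact badly with torch.compile / Dynamo tracing, so the fix amounts to dropping the decorator.

```python
# Hypothetical illustration only, not the actual transformers code: a cached
# resolver like this on the loss path can trip up torch.compile / Dynamo tracing.
import functools
import torch.nn.functional as F

@functools.lru_cache
def get_loss_fn(loss_type: str):
    return {"cross_entropy": F.cross_entropy}[loss_type]

# The fix described above amounts to dropping the cache and resolving directly.
def get_loss_fn_uncached(loss_type: str):
    return {"cross_entropy": F.cross_entropy}[loss_type]
```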

This PR also takes the full lessons from my article and applies them to the Trainer for a simpler approach to the grad accum calculation (we shouldn't rely on the Accelerator from now on because its highest-level API can't handle the nuances of the grad accum fix, so we use a lower-level version instead).

Fixes #34402

I would recommend a patch release after this.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@ArthurZucker @Rocketknight1

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@Rocketknight1 Rocketknight1 left a comment


Couple comments about the test!

tests/trainer/test_trainer.py (review thread resolved)
src/transformers/testing_utils.py (outdated, resolved)
Member

@Rocketknight1 Rocketknight1 left a comment


Tests look clean to me now, and I'm trusting you on the accelerate side of things! 😅

cc @LysandreJik / @ArthurZucker for core maintainer review

Comment on lines +2494 to +2498:

```python
context = (
    functools.partial(self.accelerator.no_sync, model=model)
    if i == len(batch_samples) - 1
    else contextlib.nullcontext
)
```
Contributor Author


For an explanation of what's going on here @Rocketknight1: during DDP we use model.no_sync() so that gradients are only communicated across all GPUs on the step outside it (this speeds up training by skipping synchronization that isn't needed while doing gradient accumulation). accelerator.no_sync() is the lower-level API behind accumulate(), which makes that op backend-independent (on a single GPU it's just a nullcontext).
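A minimal sketch of the pattern described above (not the Trainer's actual code; model, optimizer, micro_batches, and loss_fn are stand-ins): wrap every micro-batch except the last in the no-sync context, so gradients are only all-reduced once per optimizer step.

```python
import contextlib

def accumulation_step(model, optimizer, micro_batches, loss_fn):
    # Sketch only: skip DDP gradient synchronization on all but the last micro-batch.
    optimizer.zero_grad()
    for i, batch in enumerate(micro_batches):
        is_last = i == len(micro_batches) - 1
        # model.no_sync() is the DDP-specific context; accelerator.no_sync(model=model)
        # is the backend-independent equivalent (a null context on a single GPU).
        ctx = contextlib.nullcontext() if is_last else model.no_sync()
        with ctx:
            loss = loss_fn(model(**batch), batch["labels"])
            loss.backward()
    optimizer.step()
```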


@Milad335t Milad335t left a comment


M

@ArthurZucker
Collaborator

@Milad335t just warning you to stop spamming or we'll have to block you 😢

Collaborator

@ArthurZucker ArthurZucker left a comment


Thanks, let's hope this gets stabilized!

src/transformers/trainer.py (outdated, resolved)
```python
num_items_in_batch = sum(
    [data_batch["labels"][..., 1:].ne(-100).sum().item() for data_batch in batch_samples]
)
num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
```
Collaborator


Weird to me that we have to use -100 here instead of a general parameter, but this was already the case.

Contributor Author


IIRC we use -100 as the padding index by default in the Trainer. I can align it with self.processor if it exists, falling back to -100 otherwise, if that's better? :)

Contributor Author


Actually our padding index is -100 everywhere.
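As an illustration of what is being counted (sketch only, assuming -100 marks the ignored/padded label positions as described above):

```python
import torch

labels = torch.tensor([[15, 27, 3, -100, -100],
                       [ 8, -100, -100, -100, -100]])
num_items = labels.ne(-100).sum().item()  # 4 label tokens contribute to the loss
```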

Collaborator


Okay, sounds good then, sorry!

Contributor Author


No worries, it's weird for me too :)



Why do we no longer need to shift the labels (["labels"][..., 1:]) when computing num_items_in_batch?
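For reference, a small illustration of what the shift changes (assumption: causal-LM style labels where the first position is never predicted; this is not an answer from the PR authors):

```python
import torch

labels = torch.tensor([[10, 11, 12, -100]])
unshifted = labels.ne(-100).sum().item()           # 3
shifted = labels[..., 1:].ne(-100).sum().item()    # 2 (drops the first position)
```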

Collaborator

@ArthurZucker ArthurZucker left a comment


Thanks, patching today!


@muellerzr muellerzr merged commit ef976a7 into main Nov 4, 2024
27 checks passed
@muellerzr muellerzr deleted the muellerzr-final-gradaccum-check branch November 4, 2024 12:47
ArthurZucker pushed a commit that referenced this pull request Nov 5, 2024
…proper reporting (#34511)

* Update trainer for easier handling of accumulate + proper reporting

* test

* Fixup tests

* Full fix

* Fix style

* rm comment

* Fix tests

* Minimize test + remove py 311 check

* Unused import

* Forward contrib credits from discussions

* Fix reported metrics

* Refactor, good as it's going to get

* rm pad tok id check

* object detection and audio are being annoying

* Fin

* Fin x2

---------

Co-authored-by: Gyanateet Dutta <[email protected]>
Successfully merging this pull request may close these issues: Accelerate + Dynamo broken in 4.46.0 due to model loss functions refactor.