chore: remove PyTorch 2.5.0 checks #1877
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1877.
Note: Links to docs will display an error until the docs builds have been completed. ❗ There are 2 currently active SEVs.
✅ No failures as of commit 37d5a01 with merge base 4b6877a. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks again for doing these - looks great. A few minor nits and one comment we should decide on.
@@ -108,7 +108,7 @@ checkpointing, where all activations will either be recomputed later in the back
  To enable activation offloading, use the ``enable_activation_offloading`` config entry or flag
  in our lora finetuning single device recipe, e.g. ``enable_activation_offloading=True``. To allow
- usage of streams, make sure you are on a torch version later than PyTorch 2.5.0.dev20240907.
+ usage of streams, make sure you are on a torch version equal to or later than PyTorch.
Suggested change:
- usage of streams, make sure you are on a torch version equal to or later than PyTorch.
+ usage of streams, make sure you are on a torch version equal to or later than PyTorch 2.5.0.
bumping this
I think we need a merge here? The docs in main read:
To enable activation offloading, use enable_activation_offloading=True. If you are on torch version later than PyTorch 2.5.0, it will allow the usage of multiple CUDA streams automatically.
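For anyone reading along, here is a rough sketch of what that feature does when exercised directly from Python. It is not verbatim from the docs; the import path and the toy model are assumptions on my part.

```python
# Rough sketch (not verbatim torchtune docs): exercising activation offloading directly.
# Assumptions: OffloadActivations is importable from torchtune.training and a CUDA device is present.
import torch
import torch.nn as nn
from torchtune.training import OffloadActivations

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).cuda()
inputs = torch.randn(4, 64, device="cuda")

# use_streams overlaps offload copies with compute; per this thread it needs PyTorch >= 2.5.0.
with OffloadActivations(use_streams=True):
    loss = model(inputs).sum()
loss.backward()
```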
I believe we can just revert the changes here to leave the file as-is, since it's been updated in another PR. @JP-sDEV
@@ -33,7 +33,7 @@ class OffloadActivations(saved_tensors_hooks):
      use_streams (Optional[bool]): Whether or not to use streams for performance optimization where
          the communications get overlapped with the computation. Requires a torch build
-         after torch-2.5.0.dev20240907. Default: True if a later torch build is found, else False.
+         after torch-2.5.0.]. Default: True.
Suggested change:
- after torch-2.5.0.]. Default: True.
+ after torch-2.5.0. Default: True.
bumping this
torchtune/training/_compile.py (Outdated)
          Compiling full model with torch.compile...
          For faster compile times via per-layer compile, please run on PyTorch nightlies.
          """
      log.warning(
I'm not sure if we want to retain the fallback logic for older PyTorch versions. If so, then the if-else should remain the same and only the warning message should be updated. Any thoughts? @ebsmothers @felipemello1
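For context, the fallback being debated is roughly of this shape. This is an illustrative sketch, not torchtune's exact code; the `.layers` attribute and the version gate are assumptions.

```python
# Illustrative shape of the version-gated fallback under discussion -- not torchtune's exact code.
# Assumptions: the model exposes a .layers iterable and nn.Module.compile() is available.
import logging
import torch

log = logging.getLogger(__name__)

def compile_model(model: torch.nn.Module) -> None:
    if torch.__version__ >= "2.5.0":
        # Newer torch: compile each layer separately for faster compile times.
        for layer in model.layers:
            layer.compile()
    else:
        # Older torch: fall back to compiling the whole model and warn about slower compile times.
        log.warning(
            "Compiling full model with torch.compile... "
            "For faster compile times via per-layer compile, please run on PyTorch nightlies."
        )
        model.compile()
```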
Sorry, just seeing this now. I think if we claim to not support PyTorch < 2.5 then we shouldn't leave in the full-model compile option at all. For the same reason I'm ambivalent about leaving in the log warning; really, if we want to check someone is at least on the latest stable PyTorch, we should just do it in a single consolidated place. So not the end of the world to keep the warning in, but personally I'd just take it out.
Where did we land on this?
I vote remove it altogether. We claim to only support latest stable PyTorch so no need to keep this around. We can treat adding a consolidated PyTorch version check as a separate task
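If that follow-up task does happen, it could be as small as a helper along these lines. This is a hypothetical sketch only; the function name and call site are not an existing torchtune API.

```python
# Hypothetical consolidated gate -- illustrative only, not an existing torchtune API.
import torch

MIN_SUPPORTED_TORCH = "2.5.0"

def validate_torch_version(min_version: str = MIN_SUPPORTED_TORCH) -> None:
    """Fail fast if the installed torch predates the minimum supported stable release."""
    # torch.__version__ is a TorchVersion, which supports comparison against version strings.
    if torch.__version__ < min_version:
        raise RuntimeError(
            f"torchtune requires PyTorch >= {min_version}, found {torch.__version__}."
        )

# Called once, e.g. at recipe startup, instead of scattering per-feature checks.
validate_torch_version()
```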
I can't wait to simplify our attention dispatching
Thanks, will keep up with the updates! Once finalized, let me know what updates are needed. Cheers! :)
Hey @JP-sDEV, pinging this thread again to see if you're able to resolve the merge conflicts so we can get this landed. I believe we agreed that we can remove the fallback logic in torchtune/training/_compile.py.
Hi @RdoubleA, apologies for the delay - will resolve the conflicts and remove the fallback logic first thing tomorrow morning (EST).
@@ -67,7 +67,7 @@ class OffloadActivations(saved_tensors_hooks):
      def __init__(
          self,
          use_pin_memory: bool = True,
-         use_streams: Optional[bool] = None,
+         use_streams: Optional[bool] = True,
          max_fwd_stash_size: int = 5,
          min_offload_size: int = 1024,
      ) -> None:
let's remove the check below and make use_streams: bool = True
Are you referring to this use_streams check found in __init__?

    if use_streams is None:
        # Default to True if an acceptable torch is installed (later nightly/version or from source)
        self.use_streams = torch.__version__ >= "2.5.0.dev20240907"
    else:
        self.use_streams = use_streams

Or should it be changed like this, where use_streams is set to False?

    if use_streams is False:
        # Default to True if an acceptable torch is installed (later nightly/version or from source)
        self.use_streams = torch.__version__ >= "2.5.0.dev20240907"
    else:
        self.use_streams = use_streams
yep! it would just be:
self.use_streams = use_streams
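Putting the signature change from the hunk above together with this simplification, the constructor ends up looking roughly like the sketch below. This is based only on the diff in this thread; the rest of the real implementation is elided.

```python
# Sketch of the constructor after both changes (based on the hunk above; the real body is elided).
from torch.autograd.graph import saved_tensors_hooks

class OffloadActivations(saved_tensors_hooks):
    def __init__(
        self,
        use_pin_memory: bool = True,
        use_streams: bool = True,       # plain bool default, no torch-version probing
        max_fwd_stash_size: int = 5,
        min_offload_size: int = 1024,
    ) -> None:
        self.use_streams = use_streams  # the caller's value is taken as-is
        ...  # pack/unpack hooks and the rest of the real implementation omitted
```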
I have noticed that OffloadActivations in torchtune/training/_activation_offloading.py still checks for torch version 2.5.0.dev20240907. Should this check also be removed?

    # for streaming
    if self.use_streams:
        if torch.__version__ < "2.5.0.dev20240907":
            raise RuntimeError(
                "OffloadActivations with use_streams=True requires PyTorch 2.5.0.dev20240907 or later."
            )
Yes, good catch. Let's remove that as well. I believe these may have been added after I put the issue up, or maybe I just missed it
Sounds good, will update all 2.5.0.dev20240907 checks in the file.
Thanks for the follow-up, a few more comments and it's good to go.
The GPU test failure is interesting, let me take a closer look.
Can we update these checks here? Are we able to remove them?
torchtune/recipes/qat_distributed.py, Line 139 in 4b6877a
@@ -82,10 +82,6 @@ def test_packed_block_causal_mask_sdpa(self, seq_lens):
          )
          torch.testing.assert_close(actual, expected)

-     @pytest.mark.skipif(
Ok, looks like we need to keep this check, in case the hardware that runs the GPU tests on GitHub CI does not support flex attention.
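In other words, the guard that stays is something along these lines. The capability flag and threshold below are placeholders; the real test uses whatever helper torchtune already defines for this.

```python
# Illustrative form of the kept guard: skip the flex-attention test when the GPU can't run it.
# The capability flag name and threshold are placeholders, not torchtune's actual helper.
import pytest
import torch

_SUPPORTS_FLEX_ATTENTION = (
    torch.cuda.is_available() and torch.cuda.get_device_capability() >= (7, 5)
)

@pytest.mark.skipif(
    not _SUPPORTS_FLEX_ATTENTION,
    reason="Hardware does not support flex attention",
)
def test_packed_block_causal_mask_flex():
    ...  # flex-attention mask assertions would go here
```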
@JP-sDEV thanks for pushing this through, just one more comment which should fix the failing GPU test.
I am reading the comments and reviewing the commits, and if I understand correctly, the changes needed for checking hardware compatibility are something outside this issue? If so, I'll merge with the main branch once resolved :)
The update is that we just need to keep the @pytest.mark.skipif decorator that was removed.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1877      +/-   ##
==========================================
- Coverage   67.51%   64.66%   -2.86%
==========================================
  Files         318      318
  Lines       17684    17641      -43
==========================================
- Hits        11940    11408     -532
- Misses       5744     6233     +489

☔ View full report in Codecov by Sentry.
@JP-sDEV I went ahead and quickly made the change - thank you again for your help!
No problem! Thank you for guiding me through the issue and for being patient! :)
Context
What is the purpose of this PR? Is it to
- #1861

Changelog
What are the changes made in this PR?
Note: should the if block at Line 69 in torchtune/torchtune/training/_activation_offloading.py be removed, or modified to check for PyTorch 2.5.0? Modifying this block was not included in the issue.

Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- pre-commit install
- pytest tests
- pytest tests -m integration_test

UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.