Remove PyTorch 2.5.0 checks #1861
Comments

Hi @RdoubleA, can I be assigned this issue?

Hi @JP-sDEV, sure! Feel free to take on as many or as few of the items as you like, and link this issue in your PR. Comment if you run into any trouble, and please include me or @joecummings as reviewers. Appreciate it!

Thanks! I have opened a pull request that completes all the tasks in this issue.
PyTorch 2.5.0 is officially released; it includes FlexAttention as a public API along with other compile-related features. We can now remove the following version checks (a sketch of the pattern being removed follows the list):
- torchtune/torchtune/utils/_import_guard.py, line 13 in 7d29c21
- torchtune/torchtune/training/_compile.py, line 45 in 7d29c21
- torchtune/torchtune/modules/attention_utils.py, line 119 in 7d29c21
- torchtune/torchtune/modules/common_utils.py, line 150 in 7d29c21
- torchtune/torchtune/training/_activation_offloading.py, line 36 in 7d29c21
- torchtune/docs/source/tutorials/memory_optimizations.rst, line 111 in 7d29c21
- torchtune/tests/torchtune/modules/test_attention_utils.py, line 87 in 7d29c21
- torchtune/recipes/lora_finetune_distributed.py, line 77 in 7d29c21
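
For context, the checks listed above follow a common pattern: a feature flag computed from the installed torch version that gates FlexAttention, compile, or offloading code paths. The sketch below is illustrative only, not a copy of the actual torchtune code; the helper name `torch_version_ge` and the flag name are assumptions for demonstration, not confirmed contents of `_import_guard.py`.

```python
# Illustrative sketch only -- not the actual contents of
# torchtune/torchtune/utils/_import_guard.py. The helper and flag names
# here are assumptions for demonstration.
import torch
from packaging.version import Version


def torch_version_ge(min_version: str) -> bool:
    """Return True if the installed torch is at least `min_version`."""
    # torch.__version__ may carry a local suffix such as "2.5.0+cu124";
    # packaging.version.Version compares these correctly.
    return Version(torch.__version__) >= Version(min_version)


# Before: FlexAttention code paths are gated on a 2.5.0 version check.
_SUPPORTS_FLEX_ATTENTION = torch_version_ge("2.5.0")

# After 2.5.0 becomes the minimum supported release, the gate collapses to a
# constant, or the flag and its call sites are deleted outright.
_SUPPORTS_FLEX_ATTENTION = True
```

At each call site, removal typically means deleting the `if _SUPPORTS_FLEX_ATTENTION:` branch (or the fallback it guards) so the 2.5.0 code path runs unconditionally.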