Remove NVFUSER_DISTRIBUTED. #2155
Conversation
!build --dist
Can you please explain the motivation for this change?
@xwang233 talked about this in the nvFuser-MultiGPU chatroom. At this moment, there isn't much value in supporting a non-distributed build of PyTorch. The cost of additional CI and knobs like `NVFUSER_DISTRIBUTED`/`USE_DISTRIBUTED` just for that config isn't worthwhile.
!build
!build --dist
!build --dist
Glad that CI has recovered; marking this PR ready for review again. PTAL
LGTM for multidevice code and tests. I would wait for @xwang233's stamp on the build changes.
LGTM. Thanks for working on this.
@xwang233 discussed this in the nvFuser-MultiGPU chatroom. At this moment, supporting a non-distributed build of PyTorch isn't worth the cost of additional CI and knobs like `NVFUSER_DISTRIBUTED` or `USE_DISTRIBUTED`. Feel free to revert this PR if a non-distributed build becomes important.
This reverts commit 6e05931.
@xwang233 discussed this in the nvFuser-MultiGPU chatroom. At this moment, supporting a non-distributed build of PyTorch isn't worth the cost of additional CI and knobs like `NVFUSER_DISTRIBUTED` or `USE_DISTRIBUTED`. This PR reverts #1711. Feel free to revert this PR if a non-distributed build becomes important.