Add DefaultValuedOptionalAttr and use_global_device_ids #272

Conversation
Thanks a lot for your contributions! Keeping StableHLO and MHLO in sync is going to be a very important theme on our roadmap.
@@ -1242,7 +1242,8 @@ def StableHLO_AllGatherOp : StableHLO_Op<"all_gather", [SameOperandsAndResultEle
     HLO_Tensor:$operand,
     I64Attr:$all_gather_dim,
     I64ElementsAttr:$replica_groups,
-    OptionalAttr<StableHLO_ChannelHandle>:$channel_handle
+    OptionalAttr<StableHLO_ChannelHandle>:$channel_handle,
+    UnitAttr:$use_global_device_ids
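For context, here is a minimal sketch of how the new attribute might appear on an all_gather op in StableHLO assembly; the shapes, group contents, and channel handle values are illustrative only and are not taken from this PR:

// Illustrative sketch: an all_gather that sets the new unit attribute.
// Because use_global_device_ids is a UnitAttr, it appears as a bare keyword
// in the attribute dictionary when present.
func.func @all_gather_sketch(%operand : tensor<2x2xf32>) -> tensor<2x4xf32> {
  %result = "stablehlo.all_gather"(%operand) {
    all_gather_dim = 1 : i64,
    replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>,
    channel_handle = #stablehlo.channel_handle<handle = 1, type = 0>,
    use_global_device_ids
  } : (tensor<2x2xf32>) -> tensor<2x4xf32>
  return %result : tensor<2x4xf32>
}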
I've been thinking about the logistics of adding use_global_device_ids, given that: 1) we promised compatibility guarantees for StableHLO in #1, 2) we have a pending proposal in #115 to define the exact extent of these guarantees, 3) we have another pending proposal in #196 to define the StableHLO evolution process, 4) we don't yet have a spec for this op.
Given that this is a non-controversial backward-compatible change, and that at the moment we don't have policies that govern opset changes, I'm inclined to approve it. Let me just request another review from @GleasonK - our compatibility expert - and if he signs off, let's merge.
The "non-controversial" part is a judgement call, given that this change is synchronized with MHLO and isn't tied to functionality private to XLA (this functionality is used by JAX). Another example of a change that seems similarly non-controversial is #235. In the future, we'll have clear policies which significantly reduce the role of judgement calls for opset changes, but at the moment we're playing it by ear.
"Non-controversial and backward compatible"

Agree. These changes are also (somewhat) forward compatible, since default-valued attributes do not need to be present in the input IR.

The exception would be use_global_device_ids: if an op uses it, we should warn that the op may not be forward compatible, since this is a new feature, and I'm guessing that ignoring the value in a previous version could lead to some semantic differences? The machinery for this warning is not in place yet, but should be soon. If no semantic difference would be caused by ignoring the attr, then it's probably OK to approve. Interested in your thoughts @burmako.
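To illustrate the forward-compatibility point, here is a minimal sketch (again with illustrative shapes only): when the optional attributes are simply absent, the printed op is identical to what it was before this patch, so an older consumer can still parse it.

// Illustrative sketch: neither channel_handle nor use_global_device_ids is
// set, so the textual form is unchanged from before this patch.
func.func @all_gather_defaults(%operand : tensor<2x2xf32>) -> tensor<2x4xf32> {
  %result = "stablehlo.all_gather"(%operand) {
    all_gather_dim = 1 : i64,
    replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>
  } : (tensor<2x2xf32>) -> tensor<2x4xf32>
  return %result : tensor<2x4xf32>
}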
"I'm guessing ignoring the value in a previous version could lead to some semantic differences". I agree that it will lead to semantic differences. The old consumer will likely ignore the (from its perspective) unregistered use_global_device_ids
attribute, which will result in a semantic difference.
On the other hand, the only piece of documentation for StableHLO compatibility guarantees is the "Backward compatible ML compute opset inspired by HLO/MHLO" tagline on our homepage. #1 also talks about backward compatibility only. #115 aims to provide stronger guarantees, but it's still under review.
Moreover, the work of migrating MHLO users to StableHLO is still ongoing, so I don't think we have anyone at the moment who can rely on forward compatibility of StableHLO in the first place.
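As a concrete illustration of the kind of semantic difference at stake (a hypothetical reading based on how XLA uses this flag, where use_global_device_ids switches replica_groups from replica IDs to flattened device IDs):

// Hypothetical sketch, assuming 2 replicas x 2 partitions and the XLA-style
// interpretation of the flag:
//  * attribute honored: replica_groups = [[0, 1]] names flattened devices 0
//    and 1, i.e. (replica 0, partition 0) and (replica 0, partition 1);
//  * attribute dropped by an old consumer: the same group is read as replica
//    IDs 0 and 1, so a different set of devices participates in the gather.
func.func @semantic_difference(%operand : tensor<4xf32>) -> tensor<8xf32> {
  %result = "stablehlo.all_gather"(%operand) {
    all_gather_dim = 0 : i64,
    replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>,
    channel_handle = #stablehlo.channel_handle<handle = 1, type = 0>,
    use_global_device_ids
  } : (tensor<4xf32>) -> tensor<8xf32>
  return %result : tensor<8xf32>
}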
Given that, I think that both de jure and de facto we have good grounds for approving this change, and that would be my recommendation.
LGTM
Rationale for approval is the same as in #272, #388, #403 and #673: this is a non-controversial backward-compatible change, and accepting it doesn't violate any of the existing commitments (it sticks to existing HLO semantics, and it is compatible with the extent of the current compatibility commitments). MLIR-HLO commit: tensorflow/mlir-hlo@bd07cb9.
This PR has two patches that close #236 and #237.
cc: @burmako. These are two very small patches, but I am not sure whether you generally prefer each issue to be addressed in a separate commit. Apologies if I misjudged!