Clarification for non-portable behavior for in-place Python operations #828
Comments
Did we agree that it should always be implementation defined if the types are different, or should [...]

@asmeurer I believe that is what @oleksandr-pavlyk was getting at in the OP. IMO, it should be okay to allow type promotion to [...]
So there are two cases:

Case 1:

```python
x1 = asarray([0], dtype=int64)
x2 = asarray([0], dtype=int32)
x1 += x2
```

Case 2:

```python
x1 = asarray([0], dtype=int32)
x2 = asarray([0], dtype=int64)
x1 += x2
```

I think @oleksandr-pavlyk was asking about case 2:
In my opinion, this is actually spelled out already in https://data-apis.org/array-api/latest/API_specification/array_object.html#in-place-operators:
In other words, case 2 is currently required to error (which is stronger than implementation defined). Whether case 1 is required to work is perhaps more ambiguous; my reading of the current text is that it is. If we want to make it implementation defined, we should explicitly state that. I'm not aware of any reason why it would be a problem, though.
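For concreteness, here is a sketch (using NumPy as the assumed implementation) of how the two cases currently behave; the resulting dtypes, not the exact values, are the point:

```python
import numpy as np

# Case 1: x1 has the wider dtype. x1 + x2 promotes to int64, which
# matches x1's dtype, so the in-place result is unambiguous.
x1 = np.asarray([0], dtype=np.int64)
x2 = np.asarray([0], dtype=np.int32)
x1 += x2
print(x1.dtype)  # int64

# Case 2: x1 + x2 would promote to int64, but x1 is int32. NumPy
# accepts this anyway, casting x2 down to int32 before the add, so
# x1's dtype is unchanged; a strict reading of the spec text would
# require an error here instead.
x1 = np.asarray([0], dtype=np.int32)
x2 = np.asarray([0], dtype=np.int64)
x1 += x2
print(x1.dtype)  # int32
```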
Does this make current NumPy behavior non-compliant, then? Because in NumPy it simply casts the second array into the type of the first array, per [...]. And per the spec, [...] this seems like a very strong condition, implying that [...]
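As a sketch of the casting behavior in question, NumPy's in-place operator behaves like a ufunc call with `out=` under the default `casting='same_kind'` rule, which is why the int64 operand is silently cast down rather than the result being promoted (the `casting='safe'` variant below is shown for contrast):

```python
import numpy as np

x1 = np.asarray([7], dtype=np.int32)
x2 = np.asarray([5], dtype=np.int64)

# x1 += x2 is roughly equivalent to this call: the output buffer is x1,
# and the default casting='same_kind' permits the int64 -> int32 cast.
np.add(x1, x2, out=x1, casting="same_kind")
print(x1, x1.dtype)  # [12] int32

# With casting='safe', the same call refuses the lossy downcast.
try:
    np.add(x1, x2, out=x1, casting="safe")
except TypeError as e:
    print("rejected:", e)
```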
I think the [...]

The sentence before that, which I quoted ("An in-place operation must not change the data type or shape of the in-place array...") does indeed imply that NumPy is currently noncompliant, because it uses *must* and not *should*. We could potentially loosen this to be implementation defined. One question is whether the NumPy team agrees that this is nonideal behavior and should be deprecated. I certainly find the behavior surprising (it really is an in-place change of dtype: even views of [...]).

Also, we should check whether other libraries allow this. This is the behavior in PyTorch:

```python
>>> x1 = torch.asarray([0], dtype=torch.int32)
>>> x2 = torch.asarray([1], dtype=torch.int64)
>>> x1 += x2
>>> x1
tensor([1], dtype=torch.int32)
```

That actually could arguably be within what the spec says, because it didn't change the dtype of [...]
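One reason the behavior is surprising is that the downcast happens silently even when it loses information. A minimal NumPy sketch (the specific values are illustrative, not from the thread):

```python
import numpy as np

x1 = np.asarray([0], dtype=np.int32)
x2 = np.asarray([2**32], dtype=np.int64)  # does not fit in int32

# The int64 operand is cast to int32 before the add, so its value
# wraps around modulo 2**32 with no error raised.
x1 += x2
print(x1, x1.dtype)  # [0] int32 -- the 2**32 silently wrapped to 0
```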
The wording for in-place operators could be more explicit in warning users that in-place operations, e.g. `x1 += x2`, where the Type Promotion Rules require `x1 + x2` to have a data type different from the data type of `x1`, are implementation defined.

The present wording hints at it, but states that the result of the in-place operation must "equal" the result of the out-of-place operation, and the equality may hold true for arrays of different data types.
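The ambiguity described above can be seen directly. In NumPy (used here as the assumed implementation), the in-place and out-of-place results compare equal even though their data types differ:

```python
import numpy as np

x1 = np.asarray([1], dtype=np.int32)
x2 = np.asarray([2], dtype=np.int64)

out_of_place = x1 + x2   # promotes to int64 per the type promotion rules
x1 += x2                 # stays int32: NumPy casts x2 down instead

# The values are "equal", satisfying a literal reading of the spec,
# but the data types are not the same.
print(bool(out_of_place == x1))      # True
print(out_of_place.dtype, x1.dtype)  # int64 int32
```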