Forbid runtime broadcasting by Alloc
#390
Conversation
Force-pushed from 84747c9 to 8775f87
We might also want to run the pymc tests on this branch at some point? I'm a bit worried that there could be something in pymc that breaks due to this change.
):
    if v_static_dim is None:
        code += f"""
        if (PyArray_DIMS({vv})[{v_ndim - i - 1}] == 1 && shape[{o_ndim - i - 1}] != 1)
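To make the generated check concrete, here is a hypothetical Python sketch of what the C snippet guards against (the helper name and error message are illustrative, not PyTensor API): it walks the trailing dimensions, mirroring the reversed indexing `PyArray_DIMS(v)[v_ndim - i - 1]` vs `shape[o_ndim - i - 1]`, and rejects any value dimension of length 1 that would need to grow at runtime.

```python
def check_runtime_broadcast(value_shape, out_shape):
    # Hypothetical sketch of the per-dimension check the generated C code
    # performs: compare trailing dimensions of the value against the
    # requested output shape, and forbid a length-1 value dimension from
    # silently broadcasting to a larger output dimension at runtime.
    for i in range(len(value_shape)):
        v_dim = value_shape[len(value_shape) - 1 - i]
        o_dim = out_shape[len(out_shape) - 1 - i]
        if v_dim == 1 and o_dim != 1:
            raise ValueError(
                "Runtime broadcasting forbidden: "
                f"value dim 1 cannot expand to output dim {o_dim}"
            )

check_runtime_broadcast((3, 1, 2), (3, 1, 2))  # fine: shapes already match
check_runtime_broadcast((1,), (1,))            # fine: both dims are 1
# check_runtime_broadcast((3, 1), (3, 5)) would raise ValueError
```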
Can we be sure that the arrays are long enough for the indices? Unless this is guaranteed for some reason even for invalid inputs I think an explicit check would be good.
The docs say this is guaranteed.
https://pytensor.readthedocs.io/en/latest/extending/creating_a_c_op.html#simple-cop-example
Also, in the C code, it is very important to properly validate the inputs
and outputs storage. PyTensor guarantees that the inputs exist and have the
right number of dimensions but it does not guarantee their exact shape. For
instance, if an :class:`Op` computes the sum of two vectors, it needs to validate that
its two inputs have the same shape. In our case, we do not need to validate
the exact shapes of the inputs because we don't have a need that they match
in any way.
For the outputs, things are a little bit more subtle. PyTensor does not
guarantee that they have been allocated but it does guarantee that, if they
have been allocated, they have the right number of dimensions. Again, PyTensor
offers no guarantee on the exact shapes. This means that, in our example, we
need to validate that the output storage has been allocated and has the same
shape as our vector input. If it is not the case, we allocate a new output
storage with the right shape and number of dimensions.
This is not true if fn.trust_input=True, and/or if an Op returns an output with the wrong shape, so I don't know what they mean by "guarantee". However, it probably means most Ops are written with this assumption?
This seems to align with the pre-existing check that the output has the right shape (it indexes in a loop without checking that there are enough ndims).
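The output-storage contract quoted from the docs can be sketched in Python (an illustrative helper, not PyTensor API): reuse the output buffer only if it exists and already has the expected shape, otherwise allocate a fresh one.

```python
import numpy as np

def validate_output_storage(output_storage, out_shape, dtype):
    # Illustrative sketch of the documented contract: the output may or may
    # not already be allocated, and even if allocated its exact shape is not
    # guaranteed. Reuse it only when it exists with the expected shape;
    # otherwise allocate a new array of the right shape and ndim.
    out = output_storage[0]
    if out is None or out.shape != tuple(out_shape):
        out = np.empty(out_shape, dtype=dtype)
        output_storage[0] = out
    return out

storage = [None]
a = validate_output_storage(storage, (2, 3), "float64")  # allocates
b = validate_output_storage(storage, (2, 3), "float64")  # reuses the buffer
```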
Force-pushed from 8775f87 to 81e6d66
Force-pushed from 81e6d66 to d13eca7
Force-pushed from d13eca7 to ca9082a
Forbid runtime broadcasting by Alloc
Also fixes a bug encountered when doing alloc on a specify shape.
Closes #379
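For context, Alloc's value-filling behaves roughly like broadcasting the input to the target shape and materializing a copy; a minimal NumPy analogue (analogy only, not the PyTensor implementation, which after this PR additionally rejects broadcasting that was not statically known):

```python
import numpy as np

# Rough NumPy analogue of Alloc(value, *shape): broadcast the value to the
# requested shape, then materialize an owned copy of the result.
value = np.array([1.0, 2.0, 3.0])
out = np.broadcast_to(value, (2, 3)).copy()  # shape (2, 3), rows equal to value
```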