Add rewrite to merge multiple SVD Ops with different settings #732
Comments
The rewrite can do the merge immediately; it's just not a local rewrite but a global one in that case. Also, if an Op has compute_uv=True but the U/V arrays are not used in the graph, we can set it to False. That could be a local rewrite, but it's probably fine to handle both cases together in the same global rewrite.
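A minimal sketch of the unused-outputs case, written as a local rewrite. It assumes the Op stores its flags as `node.op.compute_uv` and `node.op.full_matrices` (as `pytensor.tensor.nlinalg.SVD` does); the function name is hypothetical, and this is not the implementation that eventually landed:

```python
from pytensor.graph.rewriting.basic import node_rewriter
from pytensor.tensor.nlinalg import SVD, svd


# Hypothetical name; a sketch, not the rewrite pytensor actually ships.
@node_rewriter([SVD])
def local_svd_uv_unused(fgraph, node):
    """If U and V are computed but never used, switch to compute_uv=False."""
    if not node.op.compute_uv:
        return None
    u, s, v = node.outputs
    # fgraph.clients maps each variable to the (node, input_index) pairs
    # that consume it; an empty list means the variable is dead.
    if fgraph.clients[u] or fgraph.clients[v]:
        return None  # U or V is actually needed; leave the graph alone.
    (x,) = node.inputs
    # Rebuild the SVD without U/V and replace only the singular values.
    new_s = svd(x, full_matrices=node.op.full_matrices, compute_uv=False)
    return {s: new_s}
```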
Hi, I want to work on this. As I understand it, I will need to create a class for the rewrite. From the documentation, I know roughly how I should do it, but I am still not 100% sure about all the decorators used (e.g., …).
The decorators tell pytensor at which step of the rewriting process the rewrite should be performed. This one can come last, so I guess it should be registered in the last stage. Tag me on your draft PR and I'm happy to walk you through the sharp bits.
This one is cheap enough that we can run it in all 3 stages. It will only be triggered if there's an SVD Op in the graph anyway.
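For reference, registering a node rewrite in all three standard stages is a matter of stacking the registration decorators. The registries below are real pytensor helpers; the rewrite body here is just a placeholder:

```python
from pytensor.graph.rewriting.basic import node_rewriter
from pytensor.tensor.nlinalg import SVD
from pytensor.tensor.rewriting.basic import (
    register_canonicalize,
    register_specialize,
    register_stabilize,
)


@register_canonicalize
@register_stabilize
@register_specialize
@node_rewriter([SVD])
def local_merge_svd(fgraph, node):
    # Placeholder body; the merging logic itself is sketched further below.
    return None
```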
Description
SVD comes with a bunch of keyword arguments, the most important of which is `compute_uv`. If False, it will return only the singular values for a given matrix. This is nice if you want to save on computation, but it can actually be inefficient if the user wants gradients. In reverse mode, we need to compute the U and V matrices anyway, and indeed the `L_op` for SVD (added in #614) adds a 2nd SVD Op to the graph with `compute_uv=True`.
When we see two SVD Ops with the same inputs in a graph, differing only by `compute_uv`, we should change `compute_uv=False` to `True` everywhere. This will allow pytensor to see that the outputs are equivalent and re-use them, rather than computing the decomposition multiple times.
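A minimal sketch of the merge itself, written as a local rewrite that walks the shared input's clients rather than as a dedicated global pass. The name `local_merge_svd` is hypothetical and the real implementation may differ in detail:

```python
from pytensor.graph.rewriting.basic import node_rewriter
from pytensor.tensor.nlinalg import SVD


# Hypothetical sketch under the assumptions stated above.
@node_rewriter([SVD])
def local_merge_svd(fgraph, node):
    """Replace a compute_uv=False SVD with the s output of a matching
    compute_uv=True SVD of the same input, if one exists in the graph."""
    if node.op.compute_uv:
        return None
    (x,) = node.inputs
    # Walk the other consumers of the same input matrix.
    for client, _ in fgraph.clients[x]:
        op = getattr(client, "op", None)  # skip non-Apply clients defensively
        if client is not node and isinstance(op, SVD) and op.compute_uv:
            # The singular values do not depend on full_matrices, so any
            # compute_uv=True SVD of x provides an equivalent s.
            _, s, _ = client.outputs
            return [s]
    return None
```

With this in place, the compute_uv=False node loses all its clients and is removed from the graph, so the decomposition is computed only once.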