Support multiple op users with different layouts #890
Paging @nobradovictt for the optimizer and @nsmithtt for tt-metal experience. Any thoughts on this, guys? For generality purposes, we could try copying tensors for the consumers, but it seems like a tricky problem even then in terms of memory usage (e.g. what if multiple consumers require different layout properties, but both need their inputs to be in L1, sharded?).
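To make the copy-per-consumer idea concrete, here is a minimal sketch of inserting a layout-conversion copy for each consumer whose required layout differs from what the producer emits. This is a toy graph model, not the actual tt-mlir IR or API; all class and op names (`Layout`, `Op`, `to_layout_*`) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class Layout:
    memory_layout: str   # "tile" or "row_major"
    buffer_type: str     # "l1" or "dram"
    sharded: bool

@dataclass
class Op:
    name: str
    inputs: List["Op"] = field(default_factory=list)
    required_input_layout: Optional[Layout] = None

def insert_layout_copies(producer: Op, consumers: List[Op], produced: Layout):
    """For each consumer whose required layout differs from the layout the
    producer emits, splice a layout-conversion copy op between the two."""
    for consumer in consumers:
        want = consumer.required_input_layout
        if want is not None and want != produced:
            copy = Op(name=f"to_layout_{producer.name}_{consumer.name}",
                      inputs=[producer])
            consumer.inputs = [copy if op is producer else op
                               for op in consumer.inputs]

# Fork: one producer, two consumers with conflicting layout requirements.
l1_tile = Layout("tile", "l1", sharded=True)
l1_rm = Layout("row_major", "l1", sharded=True)
prod = Op("matmul0")
c0 = Op("matmul1", inputs=[prod], required_input_layout=l1_tile)
c1 = Op("embedding0", inputs=[prod], required_input_layout=l1_rm)
insert_layout_copies(prod, [c0, c1], produced=l1_tile)
# c0 still reads the producer directly; c1 now reads through a copy.
```

Note that this is exactly where the memory concern above bites: after the pass, both the tile and row_major versions of the tensor would need to be resident at once, and if both consumers need their inputs sharded in L1, both copies compete for L1 space.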
So in the generality path this is a non-issue, right? Just to put down some terms to speak to, the case we're thinking of is a fork:
In the generality path the above graph is legal; we just leave it as is. If it's not, then there's a bug we need to file with TTNN; workarounds are potentially related and can be handled in the same way as the optimizer example below. For the optimizer we're thinking that
I see, let's have the TTNN Defaults talk and see where we land - it'd be ideal if we didn't have to worry about this.
The optimizer will definitely have to worry about it. With no optimizer path, yeah, ideally it shouldn't.
Closing as it's a non-issue for the default path. @nobradovictt feel free to reopen if this is something you wish to track. |
Today, in the TTIR -> TTNN conversion path, we don't handle scenarios where a producer op has multiple consumers that expect different layouts (tile vs row_major). The same might be true for other layout properties (sharded vs interleaved, device vs CPU, etc.).
Example: #863 (comment)
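For readers unfamiliar with the tile vs row_major distinction, the sketch below illustrates it with NumPy: tile layout reorders a row-major tensor into a grid of 32x32 tiles (32x32 being the standard Tenstorrent tile size). This is only an illustration of the data reordering, not the TTNN implementation:

```python
import numpy as np

TILE = 32  # Tenstorrent tiles are 32x32

def tilize(rm: np.ndarray) -> np.ndarray:
    """Reorder a row_major (H, W) tensor into tile layout:
    an (H//TILE, W//TILE, TILE, TILE) grid of 32x32 tiles."""
    h, w = rm.shape
    assert h % TILE == 0 and w % TILE == 0
    return (rm.reshape(h // TILE, TILE, w // TILE, TILE)
              .transpose(0, 2, 1, 3))

def untilize(tiled: np.ndarray) -> np.ndarray:
    """Inverse of tilize: flatten the tile grid back to row_major."""
    th, tw, _, _ = tiled.shape
    return tiled.transpose(0, 2, 1, 3).reshape(th * TILE, tw * TILE)

x = np.arange(64 * 64).reshape(64, 64)
assert np.array_equal(untilize(tilize(x)), x)  # lossless round trip
```

A consumer expecting tile layout reads the same bytes in a completely different order than one expecting row_major, which is why a single producer output cannot directly satisfy both.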