
Support multiple op users with different layouts #890

Closed
svuckovicTT opened this issue Oct 10, 2024 · 5 comments

Comments

@svuckovicTT
Contributor

Today, in the TTIR -> TTNN conversion path, we don't handle scenarios where a producer op has multiple consumers that expect different layouts (tile vs row_major). The same may be true for other layout properties (sharded vs interleaved, device vs host, etc.).

Example: #863 (comment)

@svuckovicTT
Contributor Author

Paging @nobradovictt for the optimizer, @nsmithtt for experience with tt-metal.

Any thoughts on this? For generality purposes, we could try copying tensors for the consumers, but even then it seems like a tricky problem in terms of memory usage (e.g. what if multiple consumers require different layout properties, but both need their inputs to be in L1, sharded?).

@nsmithtt
Contributor

nsmithtt commented Oct 10, 2024

So in the generality path this is a non-issue, right?

Just to put down some terms to speak to, the case we're thinking of is a fork:

a = opA(...)
b = opB(a)
c = opC(a)

In the generality path the above graph is legal, and we just leave it as is. If it's not, then there's a bug we need to file with TTNN; workarounds (WAs) are potentially related and can be handled in the same way as the optimizer example below.

For optimizer we're thinking that opB and opC prefer different input layouts. So don't we just insert ops to convert?

a = opA(...)
b = opB(a)
a' = toRowMajor(a)
c = opC(a')
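To make the rewrite concrete, here's a minimal sketch of that fork resolution over a hypothetical mini-IR (plain Python, not the actual tt-mlir API): for each consumer whose required layout differs from the producer's, insert a layout-conversion op and rewire that consumer to the converted value, reusing one conversion per distinct target layout.

```python
from dataclasses import dataclass, field

# Hypothetical mini-IR node; names and fields are illustrative only.
@dataclass
class Op:
    name: str
    layout: str                     # layout this op produces, e.g. "tile" or "row_major"
    inputs: list = field(default_factory=list)

def resolve_fork(producer, consumers, required_layouts):
    """Rewire each consumer to `producer`, inserting a layout-conversion op
    whenever that consumer's required layout differs from the producer's."""
    conversions = {}  # cache: at most one conversion op per distinct target layout
    for op, want in zip(consumers, required_layouts):
        if want == producer.layout:
            op.inputs.append(producer)          # layouts match: connect directly
        else:
            conv = conversions.setdefault(
                want, Op(f"to_{want}({producer.name})", want, [producer]))
            op.inputs.append(conv)              # connect through the conversion
    return conversions

# The fork from the thread: opB keeps the tile layout, opC wants row_major.
a = Op("a", "tile")
b = Op("b", "tile")
c = Op("c", "row_major")
convs = resolve_fork(a, [b, c], ["tile", "row_major"])
print(b.inputs[0].name)  # a (no conversion needed)
print(c.inputs[0].name)  # to_row_major(a)
```

Caching the conversion per target layout also covers the memory concern raised above: if several consumers want the same alternate layout, only one converted copy of the tensor is materialized.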

@svuckovicTT
Contributor Author

I see, let's have the TTNN Defaults talk and see where we land - it'd be ideal if we didn't have to worry about this.

@nsmithtt
Contributor

> I see, let's have the TTNN Defaults talk and see where we land - it'd be ideal if we didn't have to worry about this.

The optimizer will definitely have to worry about it. With no optimizer path, yeah, ideally we shouldn't.

@svuckovicTT
Contributor Author

Closing as it's a non-issue for the default path. @nobradovictt feel free to reopen if this is something you wish to track.
