Fix tensor read (device->host path) #747
Conversation
@svuckovicTT assuming you're getting this from core.py, I think there's a fix going in that flips these two operations: essentially untilize on device first before moving to host.
Oh awesome, thanks for the heads up! I'll test it and update accordingly. Do you happen to have a link to their issue/PR?
This is the PR AFAIK: tenstorrent/tt-metal#12001
LGTM after reordering calls as Jackson pointed out
```diff
      op.getLoc(), this->getTypeConverter()->convertType(op.getType()),
-     op->getOperand(0));
+     op.getOperand(0), device,
+     ttnn::LayoutAttr::get(op->getContext(), ttnn::Layout::RowMajor));
```
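For context, a minimal sketch of the conversion pattern this call would sit in, assuming an `OpConversionPattern` over `ttir::ToLayoutOp`; the `ttnn::ToLayoutOp` builder arguments and the `getOrInsertDevice()` helper are inferred from the diff above, not the exact tt-mlir API:

```cpp
// Sketch only: a TTIR ToLayout -> TTNN ToLayout conversion pattern built
// around the diff above. The ttnn::ToLayoutOp builder arguments and the
// getOrInsertDevice() helper are assumptions, not the real tt-mlir API.
#include "mlir/Transforms/DialectConversion.h"

class ToLayoutOpConversionPattern
    : public mlir::OpConversionPattern<ttir::ToLayoutOp> {
public:
  using mlir::OpConversionPattern<ttir::ToLayoutOp>::OpConversionPattern;

  mlir::LogicalResult
  matchAndRewrite(ttir::ToLayoutOp op, OpAdaptor adaptor,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    // Hypothetical helper that materializes/looks up the device value
    // that the TTNN ToLayout op needs as an operand.
    mlir::Value device = getOrInsertDevice(rewriter, op);
    // Emit a TTNN ToLayout targeting row-major layout, using the already
    // converted operand from the adaptor.
    auto newOp = rewriter.create<ttnn::ToLayoutOp>(
        op.getLoc(), this->getTypeConverter()->convertType(op.getType()),
        adaptor.getOperands().front(), device,
        ttnn::LayoutAttr::get(op->getContext(), ttnn::Layout::RowMajor));
    rewriter.replaceOp(op, newOp.getResult());
    return mlir::success();
  }
};
```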
If the operand is already row major, maybe we don't need to emit this layout op, since it would otherwise be a nop?
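What the suggested guard might look like at the top of the pattern's matchAndRewrite, as a sketch (`getTensorLayout` is a hypothetical helper; the real check would read the layout from the operand type's encoding):

```cpp
// Sketch of the suggested early-out: getTensorLayout() is hypothetical
// and stands in for however tt-mlir exposes the operand's current layout.
if (getTensorLayout(op->getOperand(0)) == ttnn::Layout::RowMajor) {
  // Already row major: forward the operand instead of emitting a
  // ToLayout op that the runtime would treat as a nop.
  rewriter.replaceOp(op, op->getOperand(0));
  return mlir::success();
}
```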
@svuckovicTT is on PTO right now, and assigned me to check in this change after it's approved. Is it okay to leave the toLayout no-op fix for him when he gets back? I can create a follow-up issue.
Chiming in from the airport - cheers to delayed flights! :)
Yeah, we could avoid it; I'll create a follow-up task (can't VPN from Hungary, so can't test the runtime...). However, I think it's okay to check this in as-is for now, since it'll end up being a nop: the method checks for this and returns early if the layout is already set to the desired one.
Currently, we're calling `to_memory_config` to get a tensor from device to host. This isn't correct, so we're changing to the approach used in the ttnn lib, which is to call `FromDevice` + `ToLayout` ops, as can be seen in the `to_torch` method in `third_party/tt-metal/src/tt-metal/ttnn/ttnn/operations/core.py`.

This PR updates the conversion of the `ToLayout` op from TTIR to TTNN. It also adds the `FromDevice` op.
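To make the new read path concrete, here is a rough sketch of the lowering the description and review imply (untilize on device via `ToLayout`, then `FromDevice`); the builder signatures and the reuse of `resultType` for the intermediate value are simplifying assumptions, not the actual tt-mlir code:

```cpp
// Sketch of the device->host read path: untilize on device (ToLayout to
// row major), then move the tensor off device (FromDevice). Types are
// simplified; the real code converts types via the TypeConverter.
mlir::Value lowerDeviceToHostRead(mlir::ConversionPatternRewriter &rewriter,
                                  mlir::Location loc, mlir::Value input,
                                  mlir::Type resultType, mlir::Value device) {
  // 1) Convert to row-major layout while still on device (untilize).
  auto rowMajor = rewriter.create<ttnn::ToLayoutOp>(
      loc, resultType, input, device,
      ttnn::LayoutAttr::get(rewriter.getContext(), ttnn::Layout::RowMajor));
  // 2) Move the row-major tensor from device memory to host.
  return rewriter.create<ttnn::FromDeviceOp>(loc, resultType,
                                             rowMajor.getResult());
}
```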