Test that fails with:

```
Assertion 'utils::isOnHost(inputTensor) && "Calling ttnn::to_device on a device tensor"' failed.
```
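For context, the assertion corresponds to a host-side guard in the runtime's `to_device` path. Below is a minimal, self-contained sketch of that failure mode using stand-in types; only `utils::isOnHost` and the assertion text come from the report, everything else is an assumption for illustration:

```cpp
#include <cassert>

// Stand-in types and helpers; the real ttnn types are far richer.
struct Tensor { bool onHost = true; };

namespace utils {
inline bool isOnHost(const Tensor &t) { return t.onHost; }
} // namespace utils

// Sketch of the guard implied by the assertion message.
Tensor toDevice(const Tensor &inputTensor) {
  assert(utils::isOnHost(inputTensor) &&
         "Calling ttnn::to_device on a device tensor");
  Tensor out = inputTensor;
  out.onHost = false; // tensor now lives on the device
  return out;
}

int main() {
  Tensor host;                 // fresh host tensor
  Tensor dev = toDevice(host); // fine: host -> device
  toDevice(dev);               // trips the assertion, as in this issue
}
```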
```mlir
func.func @softmax(%arg0: tensor<224x64xbf16>) -> tensor<224x64xbf16> {
  // CHECK: %[[C:.*]] = "ttnn.empty"[[C:.*]]
  %0 = tensor.empty() : tensor<224x64xbf16>
  // CHECK: %[[C:.*]] = "ttnn.softmax"[[C:.*]]
  // Check for positive dimension attribute
  %1 = "ttir.softmax"(%arg0, %0) <{dimension = 1 : si32, operand_constraints = [#l1_block_sharded_tile, #l1_block_sharded]}> : (tensor<224x64xbf16>, tensor<224x64xbf16>) -> tensor<224x64xbf16>
  // CHECK: %[[C:.*]] = "ttnn.empty"[[C:.*]]
  %2 = tensor.empty() : tensor<224x64xbf16>
  // CHECK: %[[C:.*]] = "ttnn.softmax"[[C:.*]]
  // Check for negative dimension attribute
  %3 = "ttir.softmax"(%1, %2) <{dimension = -1 : si32, operand_constraints = [#l1_block_sharded_tile, #l1_block_sharded]}> : (tensor<224x64xbf16>, tensor<224x64xbf16>) -> tensor<224x64xbf16>
  return %3 : tensor<224x64xbf16>
}
```
The resulting TTNN IR looks like this:
```mlir
func.func @softmax(%arg0: tensor<224x64xbf16, #layout11>) -> tensor<224x64xbf16, #layout11> {
  %0 = "ttnn.get_device"() <{mesh_shape = #ttnn<mesh_shape 1x1>}> : () -> !tt.device<#device>
  %1 = "ttnn.to_layout"(%arg0, %0) <{layout = #ttnn.layout<tile>}> : (tensor<224x64xbf16, #layout11>, !tt.device<#device>) -> tensor<224x64xbf16, #layout12>
  %2 = "ttnn.to_device"(%1, %0) <{memory_config = #ttnn.memory_config<<block_sharded>, <l1>>}> : (tensor<224x64xbf16, #layout12>, !tt.device<#device>) -> tensor<224x64xbf16, #layout12>
  %3 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<block_sharded>, <l1>>, shape = #ttnn.shape<224x64>}> : (!tt.device<#device>) -> tensor<224x64xbf16, #layout13>
  %4 = "ttnn.softmax"(%2, %3) <{dimension = 1 : si32}> : (tensor<224x64xbf16, #layout12>, tensor<224x64xbf16, #layout13>) -> tensor<224x64xbf16, #layout13>
  %5 = "ttnn.to_layout"(%4, %0) <{layout = #ttnn.layout<tile>}> : (tensor<224x64xbf16, #layout13>, !tt.device<#device>) -> tensor<224x64xbf16, #layout12>
  %6 = "ttnn.to_device"(%5, %0) <{memory_config = #ttnn.memory_config<<block_sharded>, <l1>>}> : (tensor<224x64xbf16, #layout12>, !tt.device<#device>) -> tensor<224x64xbf16, #layout12>
  %7 = "ttnn.empty"(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<row_major>, memory_config = #ttnn.memory_config<<block_sharded>, <l1>>, shape = #ttnn.shape<224x64>}> : (!tt.device<#device>) -> tensor<224x64xbf16, #layout13>
  %8 = "ttnn.softmax"(%6, %7) <{dimension = -1 : si32}> : (tensor<224x64xbf16, #layout12>, tensor<224x64xbf16, #layout13>) -> tensor<224x64xbf16, #layout13>
  %9 = "ttnn.to_memory_config"(%8, %0) : (tensor<224x64xbf16, #layout13>, !tt.device<#device>) -> tensor<224x64xbf16, #layout11>
  return %9 : tensor<224x64xbf16, #layout11>
}
```
`%6 = "ttnn.to_device"` is the problematic call: `to_device` should be called only on tensors that aren't already on the device. Here, `%5` is produced by `ttnn.to_layout` on `%4`, the output of the first `ttnn.softmax`, which already lives on the device, so the second `ttnn.to_device` trips the assertion at runtime.
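A defensive alternative on the runtime side would be to make the transfer a no-op for tensors that are already on device. A minimal sketch, reusing the stand-in types from the snippet above (illustrative only; not the actual API or the actual fix):

```cpp
// Sketch: skip the transfer when the tensor is already on device,
// instead of asserting. Illustrative only; see #840 for the real fix.
Tensor toDeviceIfNeeded(const Tensor &inputTensor) {
  if (!utils::isOnHost(inputTensor)) {
    return inputTensor; // already on device: nothing to do
  }
  return toDevice(inputTensor);
}
```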
Solved in #840.