Describe the bug
The `hardsigmoid_bw` function returns an invalid gradient value.
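For reference, PyTorch defines hardsigmoid(x) = clamp(x / 6 + 1/2, 0, 1), so its gradient is grad / 6 for inputs strictly inside (-3, 3) and 0 everywhere else. Below is a minimal sketch of that expected backward computation; the helper name is hypothetical and is not part of the tt_lib API:

```python
import torch


def hardsigmoid_bw_reference(grad: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # hardsigmoid(x) = clamp(x / 6 + 0.5, 0, 1), so d/dx is 1/6 on (-3, 3) and 0 outside.
    inside = (x > -3.0) & (x < 3.0)
    return torch.where(inside, grad / 6.0, torch.zeros_like(grad))


# For the inputs used in the failing test (x = 10, grad = 100), every element
# falls outside (-3, 3), so the expected gradient is 0 everywhere.
```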
To Reproduce
Steps to reproduce the behavior:
/tests/tt_eager/python_api_testing/unit_testing/backward_ops/test_backward_hardsigmoid.py
```python
# SPDX-FileCopyrightText: © 2023 Tenstorrent Inc.
# SPDX-License-Identifier: Apache-2.0

import torch
import pytest
import tt_lib
from tests.tt_eager.python_api_testing.unit_testing.backward_ops.utility_funcs import compare_results, data_gen_pt_tt


def data_gen_pt_tt(input_shapes, device, required_grad=False, val=1):
    pt_tensor = (torch.ones(input_shapes, requires_grad=required_grad) * val).bfloat16()
    tt_tensor = (
        tt_lib.tensor.Tensor(pt_tensor, tt_lib.tensor.DataType.BFLOAT16).to(tt_lib.tensor.Layout.TILE).to(device)
    )
    return pt_tensor, tt_tensor


@pytest.mark.parametrize(
    "input_shapes",
    ((torch.Size([1, 1, 32, 32])),),
)
def test_bw_hardsigmoid(input_shapes, device):
    in_data, input_tensor = data_gen_pt_tt(input_shapes, device, True, val=10)
    grad_data, grad_tensor = data_gen_pt_tt(input_shapes, device, False, 100)
    print("input_tensor", input_tensor)  # 10
    print("grad_tensor", grad_tensor)  # 100

    pyt_y = torch.nn.functional.hardsigmoid(in_data)
    tt_output_tensor_on_device = tt_lib.tensor.hardsigmoid_bw(grad_tensor, input_tensor)

    in_data.retain_grad()
    pyt_y.backward(gradient=grad_data)
    golden_tensor = [in_data.grad]

    comp_pass = compare_results(tt_output_tensor_on_device, golden_tensor)
    print("tt_output_tensor_on_device", tt_output_tensor_on_device)  # 16.62500
    print("golden_tensor", golden_tensor)  # 0
    assert comp_pass  # Assert False here
```
```
input_tensor ttnn.Tensor([[[[10.00000, 10.00000, ..., 10.00000, 10.00000], [10.00000, 10.00000, ..., 10.00000, 10.00000], ..., [10.00000, 10.00000, ..., 10.00000, 10.00000], [10.00000, 10.00000, ..., 10.00000, 10.00000]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)
grad_tensor ttnn.Tensor([[[[100.00000, 100.00000, ..., 100.00000, 100.00000], [100.00000, 100.00000, ..., 100.00000, 100.00000], ..., [100.00000, 100.00000, ..., 100.00000, 100.00000], [100.00000, 100.00000, ..., 100.00000, 100.00000]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)
tt_output_tensor_on_device [ttnn.Tensor([[[[16.62500, 16.62500, ..., 16.62500, 16.62500], [16.62500, 16.62500, ..., 16.62500, 16.62500], ..., [16.62500, 16.62500, ..., 16.62500, 16.62500], [16.62500, 16.62500, ..., 16.62500, 16.62500]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)]
golden_tensor [tensor([[[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]]], dtype=torch.bfloat16)]
```
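The golden gradient is all zeros because the input value 10 lies outside (-3, 3), where hardsigmoid is flat. The device result of 16.625 is the nearest bfloat16 value to 100 / 6 ≈ 16.67, which suggests the backward kernel multiplies the incoming gradient by the constant 1/6 slope without masking out-of-range inputs. A quick sanity check of that reading:

```python
import torch

# 100 / 6 ≈ 16.667; the closest bfloat16 value is 16.625, matching the device output.
print(torch.tensor(100.0 / 6.0).bfloat16())  # tensor(16.6250, dtype=torch.bfloat16)
```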
Expected behavior
I want `hardsigmoid_bw` to return the correct gradient.
#6534: Fix hardsigmoid for backward op
Linked commits: 568b4e3, 1f47f9d, 3528483, 813bf41
Merged to main