
[Bug Report] invalid asin_bw backward result #6536

Closed · Tracked by #6443
hschoi4448 opened this issue Mar 19, 2024 · 1 comment
Labels: backward, bug, moreh, op_cat: eltwise, P1

hschoi4448 (Contributor) commented Mar 19, 2024

Describe the bug

The asin_bw function returns an invalid gradient value: for an input of 0 and an incoming gradient of 0.95, it produces 2.96875 instead of the expected 0.9492.
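The expected value is easy to derive: d/dx asin(x) = 1/sqrt(1 - x^2), which is exactly 1 at x = 0, so the backward output should equal the incoming gradient (0.95, i.e. 0.9492 in bfloat16). A quick standalone PyTorch check (no tt_lib involved) reproduces the golden value:

import torch

# Same values as the failing test: x = 0, incoming gradient = 0.95, bfloat16
x = torch.zeros(1, 1, 32, 32, dtype=torch.bfloat16, requires_grad=True)
grad = torch.full_like(x, 0.95)
torch.asin(x).backward(gradient=grad)
# asin'(0) = 1/sqrt(1 - 0^2) = 1, so the gradient passes through unchanged
print(x.grad[0, 0, 0, 0])  # tensor(0.9492, dtype=torch.bfloat16)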

To Reproduce
Steps to reproduce the behavior:

  1. Copy and paste the code below into /tests/tt_eager/python_api_testing/unit_testing/backward_ops/test_backward_asin.py:
# SPDX-FileCopyrightText: © 2023 Tenstorrent Inc.

# SPDX-License-Identifier: Apache-2.0

import torch
import pytest
import tt_lib
from tests.tt_eager.python_api_testing.unit_testing.backward_ops.utility_funcs import compare_results


# Local variant of the utility data generator that fills the tensor with a chosen constant.
def data_gen_pt_tt(input_shapes, device, required_grad=False, val=1):
    pt_tensor = (torch.ones(input_shapes, requires_grad=required_grad) * val).bfloat16()
    tt_tensor = (
        tt_lib.tensor.Tensor(pt_tensor, tt_lib.tensor.DataType.BFLOAT16).to(tt_lib.tensor.Layout.TILE).to(device)
    )
    return pt_tensor, tt_tensor


@pytest.mark.parametrize(
    "input_shapes",
    (torch.Size([1, 1, 32, 32]),),
)
def test_bw_asin(input_shapes, device):
    # Input x = 0 everywhere; incoming gradient = 0.95 everywhere.
    in_data, input_tensor = data_gen_pt_tt(input_shapes, device, True, val=0)
    grad_data, grad_tensor = data_gen_pt_tt(input_shapes, device, False, val=0.95)

    print("input_tensor", input_tensor)  # 0
    print("grad_tensor", grad_tensor)  # 0.94922

    pyt_y = torch.asin(in_data)

    # Device result.
    tt_output_tensor_on_device = tt_lib.tensor.asin_bw(grad_tensor, input_tensor)

    # Golden result from PyTorch autograd.
    in_data.retain_grad()
    pyt_y.backward(gradient=grad_data)
    golden_tensor = [in_data.grad]

    comp_pass = compare_results(tt_output_tensor_on_device, golden_tensor)

    print("tt_output_tensor_on_device", tt_output_tensor_on_device)  # 2.96875
    print("golden_tensor", golden_tensor)  # 0.9492

    assert comp_pass
  2. Run pytest ./tests/tt_eager/python_api_testing/unit_testing/backward_ops/test_backward_asin.py

Observed output:
input_tensor ttnn.Tensor([[[[ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
               [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
               ...,
               [ 0.00000,  0.00000,  ...,  0.00000,  0.00000],
               [ 0.00000,  0.00000,  ...,  0.00000,  0.00000]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)
grad_tensor ttnn.Tensor([[[[ 0.94922,  0.94922,  ...,  0.94922,  0.94922],
               [ 0.94922,  0.94922,  ...,  0.94922,  0.94922],
               ...,
               [ 0.94922,  0.94922,  ...,  0.94922,  0.94922],
               [ 0.94922,  0.94922,  ...,  0.94922,  0.94922]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)

tt_output_tensor_on_device [ttnn.Tensor([[[[ 2.96875,  2.96875,  ...,  2.96875,  2.96875],
               [ 2.96875,  2.96875,  ...,  2.96875,  2.96875],
               ...,
               [ 2.96875,  2.96875,  ...,  2.96875,  2.96875],
               [ 2.96875,  2.96875,  ...,  2.96875,  2.96875]]]], shape=Shape([1, 1, 32, 32]), dtype=DataType::BFLOAT16, layout=Layout::TILE)]
golden_tensor [tensor([[[[0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492],
          [0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492],
          [0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492],
          ...,
          [0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492],
          [0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492],
          [0.9492, 0.9492, 0.9492,  ..., 0.9492, 0.9492, 0.9492]]]],
       dtype=torch.bfloat16)]

Expected behavior

asin_bw should return the correct gradient, matching the PyTorch golden tensor (0.9492 everywhere) instead of 2.96875.
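For reference, the required math is one multiply by a reciprocal square root; a minimal plain-PyTorch sketch of the expected formula (asin_bw_reference is a hypothetical helper for illustration, not the tt_lib kernel):

import torch

def asin_bw_reference(grad: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # d/dx asin(x) = 1 / sqrt(1 - x^2); inf at |x| = 1 and NaN outside [-1, 1]
    return grad * torch.rsqrt(1.0 - x * x)

With the test's inputs (x = 0, grad = 0.95), this returns 0.95 everywhere, matching the golden tensor.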


umadevimcw (Contributor) commented:

Merged to main
