
empty tensor moving to default device #2948

Merged 2 commits from torchTRT_empty_decomp_device_fix into main on Jun 28, 2024

Conversation

@apbose apbose (Collaborator) commented Jun 24, 2024

No description provided.

@apbose apbose self-assigned this Jun 24, 2024
@apbose apbose marked this pull request as draft June 24, 2024 21:23
@github-actions github-actions bot added the component: lowering, component: api [Python], and component: dynamo labels Jun 24, 2024
@github-actions github-actions bot requested a review from peri044 June 24, 2024 21:23
@github-actions github-actions bot left a comment:

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/_decompositions.py	2024-06-24 21:23:46.440916+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/_decompositions.py	2024-06-24 21:25:39.065443+00:00
@@ -170,11 +170,11 @@
    empty_size = args[0]
    empty_permute = args[1]
    perm = [0] * len(empty_size)
    for permute_index, permute_element in enumerate(empty_permute):
        perm[permute_element] = permute_index
-    default_device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+    default_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    kwargs["device"] = default_device
    return torch.empty([empty_size[l] for l in empty_permute], **kwargs).permute(perm)


@register_torch_trt_decomposition(
@@ -233,12 +233,14 @@
    torch.ops.aten.empty_strided.default, registry=TORCH_TRT_DECOMPOSITIONS
)
def empty_strided_decomposition(*args, **kwargs) -> torch.Tensor:
    empty_size = args[0]
    empty_stride = args[1]
-    default_device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-    return torch.as_strided(torch.empty(empty_size, device = default_device), empty_size, empty_stride)
+    default_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+    return torch.as_strided(
+        torch.empty(empty_size, device=default_device), empty_size, empty_stride
+    )


def get_decompositions(
    enable_experimental_decompositions: bool = False,
) -> Dict[OpOverload, Callable[[Any], Any]]:

@apbose apbose force-pushed the torchTRT_empty_decomp_device_fix branch from 5431f29 to 7f8bb4f on June 24, 2024 22:58
@apbose apbose marked this pull request as ready for review June 24, 2024 23:14
@apbose apbose force-pushed the torchTRT_empty_decomp_device_fix branch from 7f8bb4f to c67cad5 on June 24, 2024 23:21
@@ -172,6 +172,8 @@ def empty_permuted_decomposition(*args, **kwargs) -> torch.Tensor:
     perm = [0] * len(empty_size)
     for permute_index, permute_element in enumerate(empty_permute):
         perm[permute_element] = permute_index
+    default_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Collaborator commented:

Use the default device defined in our _defaults.py (https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/_defaults.py#L37) and convert it into a torch device via _enums.to().
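
A minimal sketch of what that suggestion might look like, assuming the default device in _defaults.py is exposed as a default_device() helper and that torch_tensorrt.dynamo.utils provides a to_torch_device() converter; both names are assumptions based on the reviewer's pointer, not confirmed by this thread:

import torch

from torch_tensorrt.dynamo._defaults import default_device  # assumed helper location
from torch_tensorrt.dynamo.utils import to_torch_device     # assumed converter


def empty_permuted_decomposition(*args, **kwargs) -> torch.Tensor:
    empty_size = args[0]
    empty_permute = args[1]
    perm = [0] * len(empty_size)
    for permute_index, permute_element in enumerate(empty_permute):
        perm[permute_element] = permute_index
    # Route the empty tensor to the project-wide default device instead of
    # querying torch.cuda.is_available() inside the decomposition.
    kwargs["device"] = to_torch_device(default_device())
    return torch.empty([empty_size[l] for l in empty_permute], **kwargs).permute(perm)

Using the shared default keeps the device choice consistent with the rest of the dynamo path rather than re-deriving it per decomposition.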

@@ -233,7 +235,10 @@ def select_scatter_decomposition(
 def empty_strided_decomposition(*args, **kwargs) -> torch.Tensor:
     empty_size = args[0]
     empty_stride = args[1]
-    return torch.as_strided(torch.empty(empty_size), empty_size, empty_stride)
+    default_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Collaborator commented:

same comment as above
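
Applied to the second decomposition, the same assumed helpers (default_device, to_torch_device; see the caveats on the sketch above) would give roughly:

def empty_strided_decomposition(*args, **kwargs) -> torch.Tensor:
    empty_size = args[0]
    empty_stride = args[1]
    # Build the backing empty tensor on the shared default device, then apply the strides.
    return torch.as_strided(
        torch.empty(empty_size, device=to_torch_device(default_device())),
        empty_size,
        empty_stride,
    )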

@apbose apbose requested a review from peri044 June 25, 2024 00:07

@peri044 peri044 (Collaborator) left a comment:


LGTM

@peri044 peri044 merged commit 3323156 into main Jun 28, 2024
58 of 64 checks passed
Labels
cla signed, component: api [Python], component: dynamo, component: lowering, needs-release-cherrypick
4 participants