
feat: dynamic shape support for pad ops #3045

Merged
merged 3 commits into main on Aug 13, 2024

Conversation

chohk88
Collaborator

@chohk88 chohk88 commented Jul 26, 2024

Description

Added dynamic shape support for the following padding operations:

  • aten.constant_pad_nd
  • aten.reflection_pad1d
  • aten.reflection_pad2d
  • aten.reflection_pad3d
  • aten.replication_pad1d
  • aten.replication_pad2d
  • aten.replication_pad3d
  • aten._pad_circular
  • aten.pad

A shared helper function, get_padded_shape_tensors, has been introduced to compute the padded dimensions of the input tensor for the various padding modes. It centralizes the dynamic-shape handling across the different padding operations.
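To illustrate the shape arithmetic the helper performs, here is a small pure-Python sketch (the function name `padded_shape` is hypothetical and this is not the Torch-TensorRT implementation, which does the analogous arithmetic with TensorRT shape tensors so it also works when dimensions are only known at runtime):

```python
def padded_shape(input_shape, pad):
    """Compute the output shape after padding, following the ATen pad-list
    convention: [last_dim_before, last_dim_after, second_to_last_before, ...],
    i.e. pad pairs apply to the trailing dimensions, last dimension first."""
    out = list(input_shape)
    for i in range(len(pad) // 2):
        dim = len(input_shape) - 1 - i
        out[dim] += pad[2 * i] + pad[2 * i + 1]
    return out

# e.g. a (1, 3, 4, 5) tensor padded with (1, 2) on the last dim
# becomes (1, 3, 4, 8)
```

In the actual converter, each `out[dim] += ...` step corresponds to an elementwise add on shape tensors (as in the `impl.elementwise.add` call visible in the review snippets below), rather than Python integer arithmetic.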

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@chohk88 chohk88 requested a review from peri044 August 6, 2024 09:03

@peri044 peri044 left a comment


posted minor comments - overall the changes LGTM

set_layer_name,
)
from torch_tensorrt.dynamo.conversion.impl.shape import get_shape_with_dynamic_shape
from torch_tensorrt.fx.types import TRTTensor
Collaborator


Could you import from torch_tensorrt.dynamo ?

Collaborator Author


Thank you for your comment! I've updated the TRTTensor import to come from torch_tensorrt.dynamo as suggested.

new_dim_shape = impl.elementwise.add(
ctx, target, source_ir, f"{name}_shape_dim_{i}", dim_shape, pad_sum
)
start_list[dim_index] = -pad_before
Collaborator


Curious : Why are the values negative here and what is the expectation of TensorRT when you do layer.set_input(1, start_indices_tensor) later ?

Collaborator Author


The negative values in start_list[dim_index] = -pad_before tell TensorRT's ISliceLayer where slicing begins relative to the input: a start of -pad_before means the layer reads pad_before elements "before" index 0, and the slice's padding mode supplies the values for those out-of-bounds reads. When layer.set_input(1, start_indices_tensor) is called later, TensorRT consumes these negative start indices at runtime to offset the input tensor appropriately before applying the padding.
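To make the negative-start semantics concrete, here is a small pure-Python sketch (the function `slice_with_clamp` is illustrative, not TensorRT code; clamping out-of-bounds indices models the slice padding mode used for replication padding):

```python
def slice_with_clamp(data, start, size):
    """Read `size` elements beginning at `start`, which may be negative.
    Out-of-bounds indices are clamped into [0, n-1], so reads before the
    first element repeat it -- analogous to replication padding."""
    n = len(data)
    return [data[min(max(start + i, 0), n - 1)] for i in range(size)]

# replication-pad [1, 2, 3] by 2 before and 1 after:
# start = -pad_before = -2, size = 3 + 2 + 1 = 6
# -> [1, 1, 1, 2, 3, 3]
```

Other padding modes differ only in what the out-of-bounds reads return (a constant fill value for constant padding, mirrored indices for reflection padding); the negative start and enlarged size play the same role in each case.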

@chohk88 chohk88 merged commit f32c7a9 into main Aug 13, 2024
13 checks passed
Labels
  • cla signed
  • component: api [Python] (Issues re: Python API)
  • component: conversion (Issues re: Conversion stage)
  • component: converters (Issues re: Specific op converters)
  • component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths)
  • component: tests (Issues re: Tests)