
✨[Feature] Support ShapeTensor I/O #2300

Closed
Tracked by #2274
peri044 opened this issue Sep 7, 2023 · 0 comments
Assignees
Labels
feature request New feature or request

Comments

@peri044 (Collaborator) commented Sep 7, 2023

Is your feature request related to a problem? Please describe.
Currently, if a network takes an input tensor that is used as a shape for other layers in the graph, a shape optimization profile has to be set for this input. For example, in the case of dynamic shapes + fallback, sym_int layers fall back to PyTorch and their outputs are passed as inputs to other subgraphs. The min, opt, and max values for these sym_int inputs need to be set at compile time for TensorRT compilation to succeed.
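For reference, a minimal sketch of how such a shape tensor input is bounded today with the TensorRT Python API is shown below. The input name `shape_input` and the min/opt/max values are hypothetical placeholders, and the network construction itself is omitted.

```python
import tensorrt as trt

# Minimal sketch (assumptions: a network with a shape tensor input
# named "shape_input", e.g. the output of a sym_int op that fell back
# to PyTorch and feeds a reshape/slice in another TensorRT subgraph).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# ... network construction omitted ...

config = builder.create_builder_config()
profile = builder.create_optimization_profile()

# For a shape tensor input, the *values* of the tensor (not its
# dimensions) must be bounded at build time via set_shape_input.
# The values below are hypothetical.
profile.set_shape_input("shape_input",
                        [1, 3, 224, 224],    # min
                        [8, 3, 224, 224],    # opt
                        [16, 3, 224, 224])   # max
config.add_optimization_profile(profile)
```

This is the constraint the feature request targets: without such a profile for the shape tensor input, the engine cannot be built.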

Describe the solution you'd like

Describe alternatives you've considered

Additional context

@peri044 peri044 added the feature request New feature or request label Sep 7, 2023
@peri044 peri044 assigned peri044 and unassigned narendasan Dec 5, 2023
@narendasan narendasan added the Blocked [TensorRT] Issue is blocked by some limitation of TensorRT label Dec 19, 2023
@narendasan narendasan removed the Blocked [TensorRT] Issue is blocked by some limitation of TensorRT label Jan 16, 2024