Bug Description
I would expect torch to raise an exception when inference fails for any reason, such as a wrong input tensor shape or a wrong dtype. Instead, a warning is printed to the console and the program continues as if it had succeeded. This can have serious implications in production environments.
To Reproduce
I have a model compiled with float16 that accepts a static input shape of (1, 3, 538, 538).
This is what happens if I pass a wrong shape:

This is what happens if I pass a wrong dtype:
Expected behavior
An exception should be raised whenever the TensorRT engine returns an error.
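Until the engine propagates errors as exceptions, a guard at the call site can enforce the expected contract before the engine is invoked. The sketch below is plain Python and purely illustrative: the names (validate_input, EXPECTED_SHAPE, EXPECTED_DTYPE) are not part of any Torch-TensorRT API, and a real guard would wrap the compiled module and check the incoming tensor's .shape and .dtype.

```python
# Minimal sketch of a call-site guard that raises instead of warning.
# All names and constants here are illustrative, not a Torch-TensorRT API.

EXPECTED_SHAPE = (1, 3, 538, 538)   # static shape the engine was built for
EXPECTED_DTYPE = "float16"          # dtype the engine was compiled with

def validate_input(shape, dtype,
                   expected_shape=EXPECTED_SHAPE,
                   expected_dtype=EXPECTED_DTYPE):
    """Raise ValueError on any mismatch instead of continuing silently."""
    if tuple(shape) != tuple(expected_shape):
        raise ValueError(
            f"input shape {tuple(shape)} does not match the engine's "
            f"static shape {tuple(expected_shape)}"
        )
    if dtype != expected_dtype:
        raise ValueError(
            f"input dtype {dtype} does not match the engine's "
            f"compiled dtype {expected_dtype}"
        )

# Example: a wrong shape now fails loudly before the engine runs.
try:
    validate_input((1, 3, 512, 512), "float16")
except ValueError as err:
    print("rejected:", err)
```

With such a wrapper, the mismatches described above surface as ordinary Python exceptions that production code can catch, rather than console warnings that are easy to miss.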
Environment
How did you install (conda, pip, libtorch, source): pip (custom whl)