myelin::ir::tensor_t*& myelin::ir::operand_t::tensor() #1541
Comments
The PointPillars scatter op caused this problem. Has anyone else tried this? |
Facing the same error. I used index_select and index_put in my PyTorch code, which translated to ONNX successfully. However, it gave me the following error when building the TensorRT engine:
Any help is appreciated. My environment: |
Hello @quintetoy @yasserkhalil93, this sounds like a bug in TensorRT. Could you share the ONNX model with us to debug? Thanks! |
@ttyio @yasserkhalil93 hello, any update? Did it succeed? |
Hello @Keysmis, we did not get a repro to debug the issue. Could you try the 8.2 release and send us a repro if it still fails? Thanks! |
I also get this error when using TensorRT 8.2.
For my model, it works well in TensorRT 8.0. |
I am having the same issue with version 8.2.2.1. |
@KeyKy @aeoleader, could you provide us a repro to debug the issue? Thanks! |
@ttyio Repro steps:
import torch

def test_func(x):
    t = torch.zeros(x.shape)
    # is_tensor error: the slice assignment below is exported as a scatter op
    t[0] = x[0]
    return t

class MyModule(torch.nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()

    def forward(self, x):
        return test_func(x)

if __name__ == '__main__':
    model = MyModule()
    input_shape = (1, 16, 4, 224, 224)
    model.cpu().eval()
    model.to('cuda:0')
    input_tensor = torch.randn(input_shape).to('cuda:0')
    output_file = 'debug.onnx'
    torch.onnx.export(
        model,
        input_tensor,
        output_file,
        export_params=True,
        keep_initializers_as_inputs=True,
        verbose=False,
        opset_version=11)
trtexec --onnx=debug.onnx --saveEngine=debug.trt --best --buildOnly --workspace=4096 --verbose |
Thank you @aeoleader for the repro. I was able to reproduce the failure and have created an internal bug to track it. |
Hi @aeoleader , this |
Is it fixed now? Which TRT version should I use? |
I met the error: |
This will be fixed in 8.4 GA. Thanks all. |
Just my two cents: if you have to use TensorRT 8.2, you can replace your "slice assignment" operations (which are implemented with a scatter) with a concat, as sketched below. This is a workaround until 8.4 is out. |
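For reference, a minimal sketch of the concat rewrite described in the previous comment, applied to the repro above (the function name is illustrative, not from the thread):

import torch

def fill_first_row_with_concat(x):
    # Instead of writing into a pre-allocated tensor, which typically exports
    # as ScatterND in opset 11:
    #     t = torch.zeros(x.shape)
    #     t[0] = x[0]
    # build the result by concatenating slices, which exports as Concat.
    first = x[0:1]                  # the row we want to keep
    rest = torch.zeros_like(x[1:])  # zeros for the remaining rows
    return torch.cat([first, rest], dim=0)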
In TensorRT 8.2 I met the same error, but in TensorRT 8.4 it works well. |
Why NOT fix it in 8.2??? |
I am facing the same error when trying to convert the torch.index_add_() operator. Is there a workaround for this with v8.2? |
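One possible scatter-free alternative (a hedged sketch, not an answer from the thread; the helper name is hypothetical and it assumes dim=0 with static shapes): express index_add_ through a comparison-based one-hot matrix and a matmul, so the exported graph uses Equal/Cast/MatMul instead of scatter ops.

import torch

def index_add_dim0_without_scatter(out, index, source):
    # Scatter-free equivalent of out.index_add_(0, index, source)
    # (returns a new tensor rather than updating `out` in place).
    # Shapes: out (N, C), index (M,), source (M, C).
    num_rows = out.shape[0]
    # One-hot matrix built with a comparison instead of scatter/one_hot ops.
    one_hot = (index.unsqueeze(1) ==
               torch.arange(num_rows, device=out.device)).to(source.dtype)  # (M, N)
    # Rows of `source` sharing an index are summed by the matmul,
    # matching index_add_ semantics.
    return out + one_hot.transpose(0, 1) @ source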
We don't have the bandwidth to back-integrate all bug fixes into older releases because we are busy getting other new features (like ND shape tensor support, better dynamic shape support, more ONNX op support, etc.) and new bug fixes into our latest TRT release as soon as possible. TRT 8.4.1 (8.4 GA) has just been released. Closing this issue for now; please feel free to reopen if the issue still exists. Thanks. |
I have a problem like this: I was running the PointPillars model (PillarScatter), and this error occurred when I converted the ONNX model to TensorRT:
python: /root/gpgpu/MachineLearning/myelin/src/compiler/./ir/operand.h:166: myelin::ir::tensor_t*& myelin::ir::operand_t::tensor(): Assertion `is_tensor()' failed.
Can anyone tell me how to solve this problem?
My environment:
cuda: 10.2
tensorrt: 8.2