📖 [Story] Windows Support #2495
I have managed to build the C++ and Python API with Bazel on Windows. Firstly, I tried to run the below script:

```python
import torch
import torch_tensorrt


class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.m = torch.nn.LeakyReLU()

    def forward(self, x):
        return self.m(x)


model = MyModule().eval().cuda()
inputs = [torch.randn((1, 3, 8, 8), dtype=torch.float, device="cuda")]
optimized_model = torch_tensorrt.compile(
    model,
    ir="ts",
    inputs=inputs,
    enabled_precisions={torch.float},
    debug=True,
    min_block_size=1,
)
print(optimized_model(*inputs))
torch._dynamo.reset()
```

Then, I ran the above script again. The changes I have made are at https://github.com/HolyWu/TensorRT/tree/windows. Here is the Python wheel if anyone wants to try it.
I think I have found the culprit of the crash. The crash upon inference is caused by the call to
Here is the Python wheel built with TensorRT 9.2.0.5.
@HolyWu - thank you very much for this information - this is very helpful. I will take a look at your branch as well.
TL;DR
First iteration of Windows support task tracking.
Tasks