Is your feature request related to a problem? Please describe.
I am new to machine learning, and my current project involves running inference with a large CNN on Windows. As is, the model is too large to fit in VRAM, so I need to quantize it; hence my need for Torch-TensorRT.
However, while I am trying my best to follow #856 and #960, their instructions are hard to follow since I have no experience with Docker or Bazel.
Describe the solution you'd like
I would like official releases of Windows binaries (.lib and .dll files).
Describe alternatives you've considered
I briefly considered compiling the TorchScript modules in Docker first, but then I learned they cannot run without libtorchtrt_runtime.dll. I also need this to run on many different hardware setups, which, as I understand it, means Torch-TensorRT must compile the AOT module anew for each one. So the only viable solution seems to be a fully compiled Torch-TensorRT for Windows.
I am also considering using an older version of TRTorch, for which Windows binaries were easier to compile, at least to verify that it does what I expect and is worth pursuing. However, I expect the older versions don't work with CUDA 11, which I want to deploy this project with, and they are probably less performant and more buggy, so that is not an ideal solution either.
Additional context
I am happy to do as much legwork as I can to help get this finished, both for myself and for others who could benefit from precompiled binaries in the future. However, as I have so little experience in this area, I expect other members of the dev team could probably get it done in much less time than I could.