TRTorch on Jetson Nano #70
Try using the local-dependencies path to pull in cuDNN and TensorRT from your system install: https://nvidia.github.io/TRTorch/tutorials/installation.html#building-using-locally-installed-cudnn-tensorrt. If you have a libtorch build for Jetson, replace the CXX11 ABI target in the WORKSPACE with the path to your library, like this:
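A minimal WORKSPACE sketch of what that replacement can look like, assuming a stock JetPack image; every path below is an assumption (and the `build_file` labels follow the local-dependency layout in the linked docs), so adjust them to your system:

```python
# WORKSPACE (sketch) -- point Bazel at locally installed dependencies
# instead of the default x86 archive downloads. All paths are
# assumptions for a typical JetPack install; adjust to your system.

new_local_repository(
    name = "cudnn",
    path = "/usr/",  # headers in /usr/include, libs in /usr/lib/aarch64-linux-gnu
    build_file = "@//third_party/cudnn/local:BUILD",
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD",
)

# Replace the default libtorch archive with the PyTorch build already
# installed on the Jetson (hypothetical wheel location shown).
new_local_repository(
    name = "libtorch",
    path = "/usr/local/lib/python3.6/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)
```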
We are working on a better workflow, hopefully including cross-compilation support, but it is lower priority than other tasks right now.
Ah, I finally got it working. I made those changes and then had to edit some of the BUILD files for the third-party libraries so they would be found correctly. I have all of those edits on a branch in my fork in case anyone else would like to use them until cross compilation is supported in a cleaner way: https://github.com/BytesRobotics/TRTorch/tree/jetson-nano
Hi, did you have to switch to TensorRT 8.0 on Jetson in order to install TRTorch?
Your TensorRT version is determined by your JetPack version. By passing the JetPack version as a flag you can build against the previous version (4.5) if you are still on it: https://nvidia.github.io/TRTorch/tutorials/installation.html#building-natively-on-aarch64-jetson
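As a sketch, selecting the JetPack version at build time looks something like the following; the platform label and target name are assumptions based on the linked installation page, so check the `toolchains/` directory in your checkout for the exact names:

```sh
# Build natively on the Jetson against JetPack 4.5's TensorRT/cuDNN.
bazel build //:libtrtorch --platforms //toolchains:jetpack_4.5
```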
Summary: Pull Request resolved: https://github.com/pytorch/fx2trt/pull/70

Enable explicit batch dim support (with limited support for dynamic shape) for acc_op getitem and chunk. Right now the converter can only process one dynamic shape dim.

Reviewed By: frank-wei, 842974287
Differential Revision: D34454742
fbshipit-source-id: f1bf643ca94b268be7193d332a5819e6bc8d876d
# Description

Use float types for compile-time calculations around batch_norm. Improves fp16 accuracy relative to PyTorch.

Fixes # (issue)

## Type of change

Please delete options that are not relevant and/or add your own.

- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update

# Checklist:

- [ ] My code follows the style guidelines of this project (You can use the linters)
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas and hacks
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests to verify my fix or my feature
- [ ] New and existing unit tests pass locally with my changes
- [ ] I have added the relevant labels to my PR so that relevant reviewers are notified
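To illustrate the idea in that description: batch-norm parameters get folded into per-channel scale/bias constants at compile time, and doing that arithmetic in fp16 loses precision in `1/sqrt(var + eps)`. Below is a minimal Python sketch of such a folding step, assuming this setup; the function name and signature are illustrative, not the PR's actual code:

```python
import torch

def fold_bn_params(gamma, beta, mean, var, eps=1e-5, out_dtype=torch.float16):
    """Fold batch-norm parameters into per-channel scale/bias constants.

    The intermediate math runs in fp32 and only the final constants are
    cast to the engine dtype, which is the accuracy fix described above.
    """
    gamma32, beta32 = gamma.float(), beta.float()
    mean32, var32 = mean.float(), var.float()
    scale = gamma32 / torch.sqrt(var32 + eps)  # per-channel scale
    bias = beta32 - mean32 * scale             # per-channel bias
    return scale.to(out_dtype), bias.to(out_dtype)
```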
Is there a build process outlined for getting this library built on a Jetson Nano or another ARM64 platform (TX2, Xavier, etc.)? The Bazel WORKSPACE seems rather dependent on x86 binaries, and since I am new to Bazel I am having a hard time figuring out how to get it to build on a Jetson Nano that already has TensorRT, CUDA, cuDNN, and libtorch/PyTorch installed. Thanks!