
TRTorch on Jetson Nano #70

Closed
Michael-Equi opened this issue May 24, 2020 · 4 comments
@Michael-Equi

Is there a build process outlined for getting this library built on a Jetson Nano or another ARM64 platform (TX2, Xavier, etc.)? The Bazel WORKSPACE seems rather dependent on x86 binaries, and since I am new to Bazel I am having a hard time figuring out how to get it to build on a Jetson Nano that already has TensorRT, CUDA, cuDNN, and libtorch/PyTorch installed. Thanks

@narendasan
Collaborator

Try using the local-dependency path to bring in cuDNN and TensorRT from your system installation: https://nvidia.github.io/TRTorch/tutorials/installation.html#building-using-locally-installed-cudnn-tensorrt
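For reference, on a standard JetPack install the headers and libraries sit under the system paths, so the local targets in the WORKSPACE look roughly like this (target and build-file names as in the TRTorch WORKSPACE of that era; the `/usr/` path is an assumption for a default JetPack install and may need adjusting):

```starlark
# Replace the http_archive targets for cuDNN and TensorRT with local ones.
new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD",
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD",
)
```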

If you have a libtorch build for Jetson, you can replace the http_archive with a similar local_repository target. If you are building for Python, or are using the PyTorch distribution of libtorch (the one that comes with the wheel file), replace the pre_cxx11_abi libtorch target with something like this:

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = ".../site-packages/torch",
    build_file = "@//third_party/libtorch:BUILD",
)

Otherwise, replace the CXX11 ABI target with the path to your library, like this:

new_local_repository(
    name = "libtorch",
    path = "<PATH TO LIBTORCH ROOT>",
    build_file = "@//third_party/libtorch:BUILD",
)

We are working on a better workflow, including (hopefully) cross-compilation support, but it is lower priority than other tasks right now.

@Michael-Equi
Author

Ah, finally got it working. I made those changes and then had to change some of the BUILD files for the third-party libraries so it would find them correctly. I have all those edits on a branch in my fork in case anyone else would like to use them until cross compilation is supported in a cleaner way: https://github.com/BytesRobotics/TRTorch/tree/jetson-nano

@suyashhchougule

> Ah, finally got it working. I made those changes and then had to change some of the BUILD files for the third-party libraries so it would find them correctly. I have all those edits on a branch in my fork in case anyone else would like to use them until cross compilation is supported in a cleaner way: https://github.com/BytesRobotics/TRTorch/tree/jetson-nano

Hi, did you have to switch to TensorRT 8.0 on the Jetson in order to install TRTorch?

@narendasan
Collaborator

Your TensorRT version should be determined by your JetPack version. By adding the JetPack version as a flag, you can use the previous version (JetPack 4.5) if you are still on it: https://nvidia.github.io/TRTorch/tutorials/installation.html#building-natively-on-aarch64-jetson
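For example (the target and platform-flag names here follow the linked installation guide for this era of TRTorch; check the toolchains/ directory of your checkout for the exact JetPack platforms available):

```shell
# Build natively on the Jetson, selecting the JetPack version you are running
bazel build //:libtrtorch --platforms //toolchains:jetpack_4.5
```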

frank-wei pushed a commit that referenced this issue Jun 4, 2022
Summary:
Pull Request resolved: https://github.com/pytorch/fx2trt/pull/70

Enable explicit batch dim support (with limited support for dynamic shape) for acc_op getitem and chunk. Right now the converter can only handle one dynamic shape dim.

Reviewed By: frank-wei, 842974287

Differential Revision: D34454742

fbshipit-source-id: f1bf643ca94b268be7193d332a5819e6bc8d876d
mfeliz-cruise added a commit to mfeliz-cruise/Torch-TensorRT that referenced this issue Nov 15, 2022
# Description

Use float types for compile-time calculations around batch_norm. Improves fp16 accuracy relative to PyTorch.
Fixes # (issue)
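The precision effect this commit targets can be illustrated with a small sketch. This is a hypothetical NumPy illustration, not the actual Torch-TensorRT code: it folds batch-norm parameters into a per-channel scale/bias at "compile time", and shows that doing the intermediate math in float32 (casting only the final constants to fp16) avoids the extra per-operation rounding you get from computing each step in fp16.

```python
import numpy as np

def fold_batch_norm(gamma, beta, mean, var, eps=1e-5, compute_dtype=np.float32):
    """Fold batch-norm params into scale/bias constants (hypothetical helper).

    compute_dtype controls the precision of the intermediate arithmetic;
    the resulting constants are stored as fp16 either way.
    """
    g, b = gamma.astype(compute_dtype), beta.astype(compute_dtype)
    m, v = mean.astype(compute_dtype), var.astype(compute_dtype)
    scale = g / np.sqrt(v + compute_dtype(eps))
    bias = b - m * scale
    # The engine ultimately stores fp16 constants in both cases.
    return scale.astype(np.float16), bias.astype(np.float16)
```

Comparing both paths against a float64 reference, the float32 intermediates land at (or very near) the correctly rounded fp16 value, while fp16 intermediates accumulate an extra rounding error per operation.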

## Type of change

Please delete options that are not relevant and/or add your own.

- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update

# Checklist:

- [ ] My code follows the style guidelines of this project (You can use the linters)
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas and hacks
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests to verify my fix or my feature
- [ ] New and existing unit tests pass locally with my changes
- [ ] I have added the relevant labels to my PR so that relevant reviewers are notified
mfeliz-cruise added a commit to mfeliz-cruise/Torch-TensorRT that referenced this issue Jan 3, 2023
mfeliz-cruise added a commit to mfeliz-cruise/Torch-TensorRT that referenced this issue Jan 9, 2023

No branches or pull requests

3 participants