Use Compilation Groups to indicate what operations a TRT engine executes #47
Comments
This issue has not seen activity for 30 days. Remove the stale label or comment, or this will be closed in 5 days.
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
Summary: Pull Request resolved: https://github.com/pytorch/fx2trt/pull/47
- Make the lowering log less verbose
- Support different dtypes for cumsum

Although I think cumsum lowering is making things *slower*! We should just make a PyTorch plugin for it.

Reviewed By: wushirong
Differential Revision: D35458649
fbshipit-source-id: 502d09ab98f6a4a288931d59670c01fe39a2e153
To help with debugging, it would be great to use compilation groups to indicate the work a TRT engine is doing. (This is a good tutorial on the idea: https://jott.live/markdown/Writing%20a%20Toy%20Backend%20Compiler%20for%20PyTorch.) A sketch of what this could look like follows.
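A minimal sketch of the idea, assuming `torch.fx` as the graph representation: partition a traced model into compilation groups based on which ops a TRT converter could handle, then print each group's graph to show exactly what the engine would execute. The `Model`, the `SUPPORTED` set, and the choice of supported ops below are illustrative placeholders, not fx2trt's actual converter registry.

```python
import torch
import torch.fx
from torch.fx.passes.split_module import split_module

# Hypothetical set of TRT-convertible ops, for illustration only.
SUPPORTED = {torch.add, torch.relu}

class Model(torch.nn.Module):
    def forward(self, x, y):
        a = torch.add(x, y)            # supported -> TRT group
        b = torch.relu(a)              # supported -> TRT group
        return torch.cumsum(b, dim=0)  # unsupported -> fallback group

m = Model()
traced = torch.fx.symbolic_trace(m)

def group_for(node: torch.fx.Node) -> int:
    # 0 = group we would hand to TRT, 1 = PyTorch fallback group.
    # (split_module only asks about non-placeholder, non-output nodes.)
    if node.op == "call_function" and node.target in SUPPORTED:
        return 0
    return 1

split = split_module(traced, m, group_for)

# Each child GraphModule is one compilation group; its graph is a
# human-readable record of the ops the corresponding engine runs.
for name, submod in split.named_children():
    print(f"--- compilation group {name} ---")
    print(submod.graph)
```

Running this should print two graphs: one group containing `add` and `relu` (the work the TRT engine would do) and one containing `cumsum` (left to PyTorch), which is exactly the kind of at-a-glance breakdown this issue asks for.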