RFC: initial stab at TorchScript fallback #7401
Conversation
I'm curious how it integrates with the PyTorch frontend. Do we convert every op that is not supported to `torchop`?
This is how I'd like it to work out. I've been thinking about what the best "level" is, and while the operator level might seem attractive, I'm not sure there is an easy way to run individual operators.
I have the same question as masahi. IIUC, after this PR, the PyTorch frontend has the capability to convert all unsupported ops to `torchop`. cc @zhiics
Yeah, the general idea is to use this as the fallback. I can add the fallback generation here in the PR if that is better.
Hey @t-vi, the idea of a fallback for unsupported TorchScript ops is great. I am currently pursuing a similar approach for unsupported (and custom) TFLite ops. I also stumbled over the issue that …
I'm all for it, but I wouldn't know how to add tests in the absence of something using it. If you or @masahi has any opinion on this, I'd be glad to split it out...
Just a quick note that when I tried to revive this back in the summer, it got a bit stalled around pytorch/pytorch#65047.
@t-vi Is this still WIP? I'm happy to merge whenever you think it is ready. |
@masahi So I had hoped to get the dlpack header version in PyTorch bumped (see the linked bug), but Facebook has internal uses that make it insist on the old one.
Does this mean the same thing as pytorch/pytorch#65047 (comment)? (which sounds good to me). Anyway, it seems we cannot count on the PyTorch side to change, so I'd say anything that can unblock you from our side should be a good plan. cc @tqchen @junrushao1994
As commented by the author of PyTorchTVM, apache/tvm-rfcs#25 (comment), many people are interested in this feature. Also, people are actively talking about deeper integration of TVM into the PyTorch workflow. So we should definitely land this. As for me personally, I want to try out this feature to import models with FFT-related ops (specifically, this cool model: https://github.com/raoyongming/GFNet). FFT ops won't come to TVM any time soon.
So I thought I could wait it out, but I'll look into working around the version discrepancy in the next few weeks.
Another interesting use case this fallback could enable is mmdetection (https://github.com/open-mmlab/mmdetection). It has a lot of cool detection models but relies on many custom ops that cannot be converted to Relay.
M. Ruberry of the PyTorch team re-landed the dlpack.h update in PyTorch. If this still holds next week, it'll be exciting to bring this up to date. :)
Hi, so I rebased this finally and it all compiles and runs one test against a current PyTorch master, so I think I'm back in business with this PR (unless it has been obsoleted, but from what I understand, the bridge is in the other direction). |
Great! Can you remove …?
Are you on the tvm discord or so to quickly discuss? |
yeah I've just logged in. |
Thank you @masahi ! |
* initial stab at TorchScript fallback
* make types more flexible
* Easy review bits. Thank you @masahi
Hi, @t-vi. Thank you for the great contribution! This seems very interesting. For more detail: I installed the PyTorch package with conda and downloaded the prebuilt binary by following the instructions.
The fundamental problem is that (pre-compiled) PyTorch Python modules use the pre-C++11 string ABI, to better blend into the Python ecosystem or so. TVM does not, so it needs to link against LibTorch built with the "new" C++ string ABI. One option is to use a self-compiled PyTorch (which is what I do, and what got us into this mess...). I've been trying on and off to look into linking LibTorch such that all symbols are hidden (so you can load PyTorch with whatever C++ ABI later), but I have not managed yet.
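For reference, a quick way to check which string ABI a given PyTorch build was compiled with (the prebuilt conda/pip packages typically report False, i.e. the old ABI):

```python
import torch

# True if this PyTorch build uses the C++11 string ABI
# (-D_GLIBCXX_USE_CXX11_ABI=1); prebuilt wheels are typically False.
print(torch.compiled_with_cxx11_abi())
```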
@t-vi, thank you for the quick reply!
This patch adds support for calling TorchScript. This can be used as a fallback for when Torch operators are not yet implemented, or if one wants to incorporate bespoke PyTorch custom ops into TVM with ease.
It adds an operator `torchop` that takes a variable number of inputs and executes a provided TorchScript (aka PyTorch JIT) function.
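For illustration, a minimal sketch of the kind of function `torchop` would execute (plain PyTorch, nothing TVM-specific; `fallback_fn` is just a made-up example, assuming a recent PyTorch where the `torch.fft` ops are scriptable):

```python
import torch

# A TorchScript (PyTorch JIT) function of the kind torchop executes.
# torch.jit.script compiles the Python function to TorchScript.
@torch.jit.script
def fallback_fn(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Anything TorchScript can express works, e.g. FFT ops that Relay lacks.
    return torch.real(torch.fft.fft2(x)) + y

print(fallback_fn.graph)  # the TorchScript IR that would be serialized and handed to TVM
```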
The key addition to the Relay infrastructure is that while leaving `num_inputs == -1` on operator registration is documented to indicate a variable number of inputs, the type inference pass is not prepared to deal with it and instead requires the number of arguments provided to match the number of arguments declared with `add_argument` on operator registration. The proposed change to the type inference is then to match the declared arguments but to allow additional arguments if `num_inputs` is `-1`.

The other detail is that it uses a string attribute in the call node's attributes to carry the serialized TorchScript. This is a bit fishy, as the serialized representation is binary. I used serialization to get TorchScript into TVM at the C++ level because it is tricky to interoperate between PyBind-wrapped objects (TorchScript in PyTorch) and the TVM FFI, but we might pass things around as handles later (though I'm not sure whether that works well with attributes). I would be glad to have your advice on this.
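To make the binary-attribute point concrete, a minimal sketch using only standard PyTorch serialization (the `Fallback` module is a stand-in):

```python
import io
import torch

class Fallback(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

scripted = torch.jit.script(Fallback())
buf = io.BytesIO()
torch.jit.save(scripted, buf)   # serializes to an in-memory zip archive
blob = buf.getvalue()           # raw bytes, not valid UTF-8
print(len(blob), blob[:4])      # starts with b'PK\x03\x04' (zip magic)
```

Stashing bytes like these in a string attribute works, but it is exactly the fishy part mentioned above.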
Currently I only support one output tensor and FP32, but that is straightforward to make flexible, and I would do this in parallel to the more fundamental discussion, e.g. around the things above.
Even though this is in draft state, I'm opening this PR to make discussions concrete in terms of code.
I also posted a discussion to the forum.