[RFC] PyTorchTVM #25
Conversation
cc @t-vi
Thanks for the nice RFC @Meteorix! It is definitely a super important direction to embed TVM into PyTorch. I will read it carefully later this week.
Is this supposed to be useful for accelerating PyTorch training?
From my perspective, we are a little far from supporting PyTorch training. I can think of two possible ways:
It seems both ways are at an early stage. Maybe we can discuss the possibility further, but that is outside the scope of this RFC.
I wonder whether this would make the torch fallback op (apache/tvm#7401) more or less useful (it would depend on what you (plan to) do with unsupported ops). I am still pondering whether to close it or dust it off. I should note that, as far as I know, NVIDIA has a TensorRT front end serving a similar goal, and there is also one for ONNXRuntime-as-a-module (recently featured in the PyTorch blog). There may be useful design insights in how they work (or maybe they're too different from what you have in mind).
> * ONNX cannot cover all models with dynamic control flow (e.g. for loops)
> * TensorRT can only accelerate some standard networks
>
> So we hope to use TVM to accelerate PyTorch model inference.
We should discuss the benefits of PyTorchTVM in more detail here.
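For context on the motivation quoted above: TVM's existing PyTorch frontend already converts TorchScript to Relay, and the gap this RFC fills is feeding the compiled result back into PyTorch. A minimal conversion sketch (toy model; the input name `input0` and shapes are illustrative):

```python
import torch
import tvm
from tvm import relay

# A toy model, traced to TorchScript.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
scripted = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

# from_pytorch takes the scripted module and a list of (input_name, shape) pairs.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3, 224, 224))])

# Compile with TVM; target "llvm" (CPU) keeps the sketch hardware-agnostic.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```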
```python
from typing import List

import torch
from torchvision.io import read_image  # assumed source of read_image

images: List[torch.Tensor] = []
for path in image_path:
    img = read_image(path)
    images.append(img)
x = torch.stack(images).cuda().half()  # batch, move to GPU, cast to fp16
```
Shall we condense these lines into an abstract data loader, to be consistent with our description? E.g., self.load_data()
Maybe just keep the class simple?
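A minimal sketch of the refactor suggested above, assuming `read_image` comes from `torchvision.io`; the class and method names here are hypothetical, not part of the RFC:

```python
from typing import List

import torch
from torchvision.io import read_image  # assumption: source of read_image


class Predictor:  # hypothetical stand-in for the class under review
    def load_data(self, image_paths: List[str]) -> torch.Tensor:
        # Fold the per-image reading loop behind one method, as suggested above.
        images = [read_image(p) for p in image_paths]
        return torch.stack(images).cuda().half()
```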
rfcs/0025-add-pytorch-tvm.md
> # Prior art
> [prior-art]: #prior-art
>
> Our implementation is inspired by this RFC: https://discuss.tvm.apache.org/t/rfc-add-tensorflow-custom-op-to-embed-tvm-runtime-in-tensorflow-graph-and-session/4601
Suggested change:

> Our implementation is inspired by this RFC, which allows TVM operators to be embedded into TensorFlow: https://discuss.tvm.apache.org/t/rfc-add-tensorflow-custom-op-to-embed-tvm-runtime-in-tensorflow-graph-and-session/4601. Beyond a single operator, our RFC further extends the scope to embedding arbitrary TVM graphs into TorchScript, and enables smooth integration and serving with both `torch.jit.trace` and `torch.jit.script` APIs.
To clarify: a major difference is that the TF RFC only supports embedding a TVM op, not a TVM graph. We also have an internal implementation that embeds a TVM graph into TF.
Updated with clarification
Thanks again for the contribution! I have no doubt that it is going to be a hugely important piece of work. I just made some suggestions in terms of wording.
@t-vi It will help a lot. If we have torch fallback op support, users will be less likely to get stuck in the frontend conversion phase. We are looking forward to this feature; could you complete the PR?
And thanks for this information. Actually, we were also inspired by another NVIDIA work: TRTorch.
Given a TorchScript module, can this method split the graph automatically, converting the subgraphs that TVM's Relay IR can support, or where TVM outperforms PyTorch?
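As a point of reference for the question above (this is TVM's existing BYOC machinery, not something this RFC adds): Relay already ships passes that annotate the operators a backend supports, merge adjacent supported regions, and split them out automatically. A sketch, assuming `mod` is a Relay module obtained from `relay.frontend.from_pytorch`:

```python
import tvm
from tvm import relay

# Annotate ops that a registered codegen (here "tensorrt", as an example)
# claims to support, merge adjacent annotated regions, and partition them
# into separate functions; unsupported ops stay on the default TVM path.
seq = tvm.transform.Sequential([
    relay.transform.AnnotateTarget("tensorrt"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
partitioned_mod = seq(mod)
```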
LGTM
In general I support this effort. I'd like to see stronger integrations of TVM into adjacent projects like PyTorch. This looks like a good start.
One minor spelling point, but otherwise this looks good to me.
Great work!
Thanks for the great work. A minor note on rendering: it would be great to add the code block language to the fences so we get syntax highlighting.
@hogepodge @tqchen thanks for your advice. I have updated the PR.
It seems we have all reached consensus and no new comments have been raised for several days; let's get this RFC merged! 🔥 🎉
This RFC adds a `PyTorchTVM` module to compile TorchScript to TVM and use the accelerated module in PyTorch.

Initial PR: apache/tvm#8777
Discussion: https://discuss.tvm.apache.org/t/rfc-pytorchtvm-compile-torchscript-to-tvm-and-use-accelerated-module-in-pytorch/10873
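For readers arriving from the summary, a rough usage sketch of the workflow the RFC describes. The import path and option keys below are illustrative assumptions, not the merged API; see apache/tvm#8777 and the discuss thread for the actual interface:

```python
import torch
# Assumption: the initial PR adds a tvm.contrib.torch module; treat this
# import and the names below as illustrative, not the final API.
from tvm.contrib.torch import compile as tvm_compile


class MyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)


scripted = torch.jit.script(MyModel().cuda().half().eval())
option = {
    "input_infos": [("x", (1, 3, 224, 224))],  # hypothetical option keys
    "target": "cuda",
}
tvm_mod = tvm_compile(scripted, option)  # returns a TorchScript-compatible module
torch.jit.save(tvm_mod, "model_tvm.pt")  # deployable without a Python TVM dependency
```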