[Relay][Transform] Support Dumping IR to help debugging #3493
Conversation
I am not sure if it is the best approach users would like. For example, if I were to construct the pipeline programmatically, I might prefer:

```python
seq = _transform.Sequential([
    relay.transform.InferType(),
    relay.transform.FoldConstant(),
    relay.transform.DebugPrint(),
    relay.transform.DeadCodeElimination()
])
```
@tqchen yeah, I also thought about the approach you mentioned; PyTorch does provide something similar. I thought the compiler-dumping style was more favorable and decided to implement it this way because, IMHO, the major compilation pipeline is usually invisible to users. For example, they will more frequently drive compilation through the opt level.
The reason that the major compiler pipeline is invisible to users is that compiler pipelines themselves are not composed programmatically, but pre-configured. The only option to tweak major compiler pipelines used to be the CLI, where it makes sense to use string names. When we are using a Python API, it is good to use the programmatic approach as PyTorch did, rather than limiting our programming model to a CLI style. We could also provide a snippet of the default pipeline so a user can modify it programmatically in Python.
I think the fundamental question here is whether or not we want users to programmatically customize optimization pipelines. One could argue that, as in traditional compilers, most pipelines are pre-configured officially and do not need to be tweaked. On the other hand, we could also say that, given the different kinds of optimizations we want to perform, there is greater demand to construct pipelines programmatically. It will become more and more important to make it easier for users to do so (just as deep learning frameworks made it easier to build new models). In that case, we might want to go beyond the API offered by traditional compilers.

This is just like the evolution of deep learning framework configuration: the first stage (Caffe1) used config files, much like the options in traditional compilers; then TF/MXNet/PyTorch offered more programmatic construction. Would the same thing happen in our case? Should our pipeline construction be treated as static stages as in traditional compilers, or should it be like model construction, which benefits from greater diversity and ease of exploration? I do not know, but at least I personally think we should strive to enable that possibility, and learn more from deep learning frameworks like PyTorch.
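The programmatic style argued for above can be illustrated with a minimal, self-contained sketch. The `Sequential` and `DebugPrint` classes below are hypothetical toy stand-ins, not TVM's real API, and the "module" is just a string standing in for a real IR module.

```python
# Hypothetical sketch (not TVM's real API): a pass pipeline composed
# programmatically, with a debug-printing pass inserted between passes.
# The "module" here is just a string standing in for a real IR module.

class Sequential:
    """Apply a list of passes in order, threading the module through."""
    def __init__(self, passes):
        self.passes = passes

    def __call__(self, module):
        for p in self.passes:
            module = p(module)
        return module

class DebugPrint:
    """Identity pass: print the current module and return it unchanged."""
    def __call__(self, module):
        print("IR so far:", module)
        return module

# Toy "optimization passes" over the toy string module.
fold_constant = lambda m: m.replace("1+1", "2")
dead_code_elim = lambda m: m.replace("; nop", "")

seq = Sequential([fold_constant, DebugPrint(), dead_code_elim])
result = seq("x = 1+1; nop")  # prints the module after constant folding
# result == "x = 2"
```

The point of the design is that a debug pass is just another pass: users insert it wherever they want visibility, instead of the driver hard-coding where printing happens.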
Hooking something in to print every time a pass is run might be too cumbersome to use in some situations, such as when …
@tqchen I think it's helpful to encourage users to at least be able to customize (for now, just turn on/off?) certain optimization-related passes, such as CombineParallelConv2d. The usefulness of these passes can be highly dependent on the input model, and some experimentation might be beneficial.
Thanks for all the discussion and feedback :) I totally agree that it is fundamentally a question of whether the programmatic or the default compilation path is more commonly used. This is something I am not sure about either, so I am very glad to hear the voice of the community. If we go the programmatic way, I think we probably need to bring the Python pipeline back, because otherwise users may need to intrusively insert the debug pass themselves.

Another thing we probably need to be aware of is possible duplication between a Python pipeline and the C++ one. Or should we enable both? That may seem like overkill. Any suggestions/ideas about this?
Re the question of whether pipeline constructions should be done in Python (and whether there will be duplication if we do things both in Python and C++): I think it boils down to whether we treat compilation pipelines like models in deep learning frameworks. Models are implemented in many languages; you see resnet pipelines implemented in the official Python part, in C++, or in other languages. While it is important to have a default one, if that default is concise enough, I think it makes sense to bring in a Python version of pipeline construction.
Okay, I will use that approach. Should we allow users to dump all passes, instead of inserting the debug pass manually?
Let us not enable global printing for now and think about the need later.
Thanks all. I updated the PR. |
Glad to see the test of the printing. I hope we will have tests for everything that produces error messages or text (perhaps down the line).
Thanks @slyubomirsky @zhiics @eqy @jroesch, this PR is now merged.
* [Relay][Transform] Support Dumping IR to help debugging
* debugprint->printir
This PR enables users to specify the passes of interest for debugging during optimization. For example, users can provide a certain pass name for dumping; the module IR will be dumped after that pass is applied.
An example could be like the following:
The output:
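The original snippet and its output were not preserved in this thread. As a stand-in, here is a minimal, self-contained sketch of the idea (the names and driver function below are made up, not TVM's actual API): the driver prints the module after every pass whose name appears in a user-supplied set.

```python
# Hypothetical sketch (names are made up, not TVM's actual API) of
# name-based IR dumping: the driver prints the module after every pass
# whose name appears in a user-supplied set. The "module" is a string.

def run_pipeline(module, passes, dump_after=()):
    """passes is a list of (name, fn); dump the IR after listed names."""
    for name, fn in passes:
        module = fn(module)
        if name in dump_after:
            print(f"IR after {name}:\n{module}")
    return module

module = run_pipeline(
    "x = 1+1; nop",
    [("FoldConstant", lambda m: m.replace("1+1", "2")),
     ("DeadCodeElimination", lambda m: m.replace("; nop", ""))],
    dump_after={"FoldConstant"},
)
# module == "x = 2"
```

Compared with inserting a debug pass by hand, the name-based variant leaves the pipeline definition untouched, which matches the PR's goal of debugging pre-configured pipelines.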
cc @jroesch @tqchen @icemelon9 @wweic @slyubomirsky @MarisaKirisame