
[MetaSchedule] Use current pass context in compile_relay, extract_tasks #13470

Closed
wants to merge 2 commits

Conversation

tkonolige
Contributor

Adds the pass config information necessary for tuning and compiling relay with metaschedule to the existing pass context instead of overriding it. This allows users to pass in their own pass instruments, required passes, and disabled passes. It also keeps the same API used to compile relay with autotvm and auto_scheduler.
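As a rough sketch of what this enables (hedged: mod is assumed to be an existing Relay module, the call mirrors the ms.compile_relay(mod) form used later in this thread, and FoldConstant is just an arbitrary example pass):

from tvm import meta_schedule as ms, transform
from tvm.ir.instrument import PassTimingInstrument

timing = PassTimingInstrument()
with transform.PassContext(
  opt_level=3,
  instruments=[timing],            # user-supplied pass instrument
  disabled_pass=["FoldConstant"],  # user-controlled disabled pass
):
  # compile_relay reads this ambient context instead of installing its own.
  lib = ms.compile_relay(mod)
  print(timing.render())  # render before the context exits; timings are cleared on exit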

@tvm-bot
Collaborator

tvm-bot commented Nov 22, 2022

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@tkonolige
Contributor Author

@junrushao @masahi This is a change to the API for compiling relay with metaschedule tuning. I believe it is an improvement, as it uses the same PassContext API that is used in the rest of the codebase.

@tkonolige force-pushed the pass_context_ms_compile branch from a50e95c to b26d443 on January 3, 2023
@tkonolige
Contributor Author

@junrushao @masahi any thoughts on this? The change is from

ms.compile_relay(mod, opt_level=3)

to

with transform.PassContext(opt_level=3):
  ms.compile_relay(mod)

Although more verbose, this mirrors the compilation process for non-metascheduled relay:

with transform.PassContext(opt_level=3):
  vm.compile(mod)
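For comparison, a tuned autotvm build already follows exactly this pattern (standard TVM usage; "tuning.log" is a placeholder):

with autotvm.apply_history_best("tuning.log"):
  with transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)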

@junrushao
Member

They are intentionally designed this way to keep the primary APIs as simple as possible. For advanced use cases like the one you mention in this PR, please do not use compile_relay and extract_tasks directly; assemble those API calls yourself instead. That being said, I am not in favor of this change.

@masahi
Member

masahi commented Jan 3, 2023

@junrushao Isn't it actually simplifying the API? Also, considering that this change makes the MS API consistent with the other tuner APIs, I think it is a good change.

@masahi
Member

masahi commented Jan 3, 2023

This also makes PRs like #13688 and #13659 unnecessary.

If we want to make the tuning API as simple as possible, we should also remove all those default arguments (builder / runner / db, etc., and even random seeds) that never get used in practice.

mod, target, params, executor = _normalize_params(mod, target, params, executor)
pass_ctx = transform.PassContext.current()
pass_config = dict(pass_ctx.config)
Member
To prevent breaking users' scripts after this change, how about setting a default ctx with opt_level = 3 if there is no current ctx?
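A hypothetical sketch of that fallback (heuristic only; as the reply below notes, there is no reliable way to tell the implicit default context apart from a user-created one):

pass_ctx = transform.PassContext.current()
if pass_ctx.opt_level == 2 and len(pass_ctx.config) == 0:
  # Probably the implicit default context: keep the old opt_level=3 behavior.
  pass_ctx = transform.PassContext(opt_level=3)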

@tkonolige
Contributor Author

tkonolige commented Jan 3, 2023

Right now there is no way to determine whether the current pass context is the default one. It might be possible to add that, though.

Defaulting opt_level to 3 in meta schedule would not be consistent with the rest of TVM; everywhere else defaults to 2. This does mean that we might break some user scripts.
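For reference, the TVM-wide default is observable directly:

from tvm import transform
assert transform.PassContext.current().opt_level == 2  # implicit default context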

@tqchen tqchen closed this Sep 6, 2024