[MetaSchedule][Refactor] Introduce TuneConfig #10986
Conversation
Force-pushed from 4d6b4ab to e492de2
@masahi @zxybazh @michalpiszczek @mbrookhart I would love to ask you to review this PR, given that it is going to break the existing tune_relay interface.
Force-pushed from e492de2 to b1c1800
Force-pushed from b1c1800 to b9edb3d
@masahi @zxybazh Comments addressed :-) To be extremely careful, please verify every test. @michalpiszczek This API-breaking change precisely follows our offload discussion on Tuesday. I have left it open for a few days and will continue to keep you updated 👍
Force-pushed from b9edb3d to 2d4f74d
Thanks @junrushao1994, this LGTM!
Force-pushed from 2d4f74d to 43167a9
```diff
     num_trials_per_iter=32,
-    max_trials_per_task=32,
+    max_trials_per_task=20000,
```
In general, do you recommend setting `max_trials_per_task` to the same value as `max_trials_global`, or to some arbitrarily large number? This change makes me think that the value of `max_trials_per_task` doesn't matter much as long as it is sufficiently big.
I see now that the value of `max_trials_per_task` is optional: `max_trials_per_task: Optional[int] = None`. How about removing the explicit specification of `max_trials_per_task` here and elsewhere in tests, if it's not important?
In this case, I think `max_trials_global` serves as the default for `max_trials_per_task`, except when we want to limit the trials for each task. If the search space is very small, the evolutionary search strategy does an early stop; when we only want to run a test, we can limit `max_trials_global` to a small number close to `num_trials_per_iter * task_num`. Therefore, I think it would be a good move to use `max_trials_global` as the default for `max_trials_per_task` unless otherwise specified.
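The defaulting behavior described above can be sketched as follows. This is a hypothetical, simplified illustration (not the actual TVM implementation); the field names mirror the ones discussed in this thread:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TuneConfig:
    """Sketch of a tuning config where the per-task budget
    falls back to the global budget when left unspecified."""

    max_trials_global: int
    num_trials_per_iter: int = 64
    # None means "use max_trials_global as the per-task limit"
    max_trials_per_task: Optional[int] = None

    def __post_init__(self) -> None:
        if self.max_trials_per_task is None:
            self.max_trials_per_task = self.max_trials_global


# Unspecified per-task budget falls back to the global budget.
cfg = TuneConfig(max_trials_global=20000)
print(cfg.max_trials_per_task)  # → 20000

# An explicit per-task budget is kept as-is.
small = TuneConfig(max_trials_global=20000, max_trials_per_task=32)
print(small.max_trials_per_task)  # → 32
```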
Yeah, I mean, if this value is optional, at least one test should exercise that scenario / code path.
Hi Masa, this is actually a great point! Generally speaking, I would recommend `max_trials_per_task == max_trials_global` by default, and leave it to the task scheduler to decide how to allocate those trials.
Let me improve this API by marking `max_trials_per_task` as optional.
Oh, actually I just found that I had already implemented this logic in `create_strategy`... Let me update the docs to reflect this.
IMHO we should still set `max_trials_global` to something like 2000 here, and add a comment beside it suggesting 20k, or 1k * number of subgraphs, because this is still a functionality test. I will send a PR for a meta schedule tutorial later this week, as Masa suggested, and that would be a better example to showcase our interface usage.
Well, yes and no. There is no definitively correct suggestion for how many trials should be allocated globally or per task...
Thanks @masahi @zxybazh @michalpiszczek for the careful review and discussion!
This PR unifies the existing `EvolutionarySearchConfig`, `ReplayFuncConfig` and `ReplayTraceConfig` into `TuneConfig`, and refactors the logic in `meta_schedule/tune.py`.
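The unification the PR describes can be sketched as follows. This is a hypothetical, simplified illustration of the design (not the real TVM API): a single config class carries a strategy name and the shared trial budgets, so one class replaces the three per-strategy config classes:

```python
from dataclasses import dataclass

# Strategy names assumed for illustration, mirroring the three
# config classes mentioned in the PR description.
KNOWN_STRATEGIES = ("evolutionary", "replay_func", "replay_trace")


@dataclass
class TuneConfig:
    """One config for all search strategies, replacing
    EvolutionarySearchConfig / ReplayFuncConfig / ReplayTraceConfig."""

    strategy: str
    max_trials_global: int
    num_trials_per_iter: int = 64

    def create_strategy(self) -> str:
        # Dispatch on the strategy name instead of on three config types.
        # A placeholder string stands in for the real strategy object.
        if self.strategy not in KNOWN_STRATEGIES:
            raise ValueError(f"Unknown strategy: {self.strategy}")
        return self.strategy


cfg = TuneConfig(strategy="evolutionary", max_trials_global=20000)
print(cfg.create_strategy())  # → evolutionary
```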