Proposal: Volcano job support scale up and down #782
Conversation
@hzxuzhonghu: GitHub didn't allow me to request PR reviews from the following users: zrss. Note that only volcano-sh members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
However, the initialization is run only when the job has not yet started.

So we need a way to know whether this round of sync was triggered by a scale up/down event.

The way I propose is to add a new event `JobUpdatedEvent` to indicate that the job has been updated (here we only care about scale up/down).
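For illustration, a minimal sketch of how such an event constant could sit next to the existing job events; the surrounding names are simplified assumptions for this sketch, not the actual Volcano source:

```go
package state

// Event drives the Volcano job state machine (names simplified for this sketch).
type Event string

const (
	// Existing events, abbreviated.
	OutOfSyncEvent     Event = "OutOfSync"
	CommandIssuedEvent Event = "CommandIssued"

	// JobUpdatedEvent is the proposed new event: it indicates the Job spec was
	// updated on the fly, and in this proposal it is raised only for scale
	// up/down (replica) changes.
	JobUpdatedEvent Event = "JobUpdated"
)
```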
When a Pod is created/deleted, how and when is the ConfigMap handled? It's better to highlight the time sequence for the user.
I would add a new `OnJobUpdate` method to the plugin.
> I would add a new `OnJobUpdate` method to the plugin

So?

We need to highlight when pods are created, when the ConfigMap is updated, and which pods will be deleted when scaling down.
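A hedged sketch of what the suggested plugin hook could look like; the interface shape and the `Job` stand-in type below are assumptions for illustration, not the real Volcano plugin API:

```go
package plugins

import corev1 "k8s.io/api/core/v1"

// Job is a simplified stand-in for the Volcano Job object handed to plugins.
type Job struct {
	Name  string
	Hosts []string // host names of the job's pods, e.g. for the svc/ssh plugins
}

// PluginInterface sketches the hooks a job plugin implements.
type PluginInterface interface {
	Name() string
	OnPodCreate(pod *corev1.Pod, job *Job) error
	OnJobAdd(job *Job) error
	OnJobDelete(job *Job) error

	// OnJobUpdate is the proposed hook: called after a scale up/down so that
	// plugins can regenerate artifacts such as the hosts ConfigMap before
	// newly created pods start (scale up) and after surplus pods are deleted
	// (scale down).
	OnJobUpdate(job *Job) error
}
```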
docs/design/job-scale-up-down.md
### Admission webhook

The admission webhook should prevent invalid mutation of the Job spec on the fly. In this proposal, only a replicas update is allowed; any other spec change will be prohibited.
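As a rough illustration of that rule, the following sketch (simplified stand-in types, not the real Volcano webhook code) rejects any update that touches something other than a task's replica count:

```go
package webhook

import (
	"fmt"
	"reflect"
)

// TaskSpec and JobSpec are simplified stand-ins for the Volcano API types.
type TaskSpec struct {
	Name     string
	Replicas int32
	Template string // pod template, collapsed to a string for this sketch
}

type JobSpec struct {
	MinAvailable int32
	Tasks        []TaskSpec
}

// validateJobUpdate allows only replica changes; any other difference between
// the old and updated spec is rejected, so the API update call fails.
func validateJobUpdate(old, updated JobSpec) error {
	if len(old.Tasks) != len(updated.Tasks) {
		return fmt.Errorf("adding or removing tasks is not allowed")
	}
	for i := range old.Tasks {
		o, n := old.Tasks[i], updated.Tasks[i]
		// Ignore Replicas and compare the rest; any difference is an invalid update.
		o.Replicas, n.Replicas = 0, 0
		if !reflect.DeepEqual(o, n) {
			return fmt.Errorf("only replicas of task %q may be updated", old.Tasks[i].Name)
		}
	}
	return nil
}
```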
What's our expected behaviour if minMember and replicas do not match?
That would be an invalid update; the API call will fail.
Let's document it, if that's the case.
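For the record, a tiny illustrative check of the rule discussed above; the function and parameter names are assumptions for this sketch, not the actual webhook code:

```go
package webhook

import "fmt"

// validateMinAvailable enforces the rule discussed above: minAvailable
// (the PodGroup's minMember) must not exceed the total replicas after the
// update, otherwise the update is invalid and the API call fails.
func validateMinAvailable(minAvailable int32, taskReplicas map[string]int32) error {
	var total int32
	for _, r := range taskReplicas {
		total += r
	}
	if minAvailable > total {
		return fmt.Errorf("minAvailable %d must not be greater than total replicas %d", minAvailable, total)
	}
	return nil
}
```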
docs/design/job-scale-up-down.md
## Motivation

Currently, Volcano does not support Job update; it is not allowed to update the `Job.Spec` on the fly. However, users like ModelArts want to dynamically adjust a Job's replicas according to the cluster's idle resources.
That's not only for ModelArts; AFAIK, several ML frameworks already support elastic training, e.g. https://github.com/pytorch/elastic
Will read about that for more context
/cc @Jeffwan
LGTM overall :) @zrss, please help confirm whether that behaviour covers your cases :)
2. create pods when scaling up

3. delete pods when scaling down
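A hedged sketch of steps 2 and 3 (simplified names, not the actual controller code): reconcile one task's pods against the desired replica count, creating the missing indices on scale up and removing the surplus ones on scale down:

```go
package controller

import "fmt"

// reconcileTaskPods returns the pod names to create and to delete for one
// task, given the pods that currently exist. Pod names are assumed to follow
// a "<job>-<task>-<index>" convention for this sketch.
func reconcileTaskPods(job, task string, desired int, existing map[string]bool) (create, remove []string) {
	want := make(map[string]bool, desired)
	for i := 0; i < desired; i++ {
		name := fmt.Sprintf("%s-%s-%d", job, task, i)
		want[name] = true
		if !existing[name] {
			create = append(create, name) // scale up: this pod is missing
		}
	}
	for name := range existing {
		if !want[name] {
			remove = append(remove, name) // scale down: this pod is surplus
		}
	}
	return create, remove
}
```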
Does a pod of a Volcano Job have a corresponding headless service?
Yes, it is similar to StatefulSet pods: we cannot delete the headless service when deleting the pods, but we will update the accessible host file.
Just curious, is there any reason the headless service cannot be deleted?
Here we just scale down; some tasks, e.g. the ps tasks of a TensorFlow job, may still exist. BTW, the headless service is deleted when the job completes or is deleted.
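To make the "update the accessible host file" point concrete, a minimal sketch (an assumed helper, not the real svc plugin code) of rebuilding the host list written into the plugin's ConfigMap after a scale event, while the headless service itself stays until the job finishes:

```go
package plugins

import (
	"fmt"
	"sort"
	"strings"
)

// buildHostFile returns the hosts file content for a job after scaling:
// one line per remaining pod, using the "<pod>.<job>" DNS name that the
// headless service resolves. The exact format is an assumption for this sketch.
func buildHostFile(jobName string, podNames []string) string {
	hosts := make([]string, 0, len(podNames))
	for _, p := range podNames {
		hosts = append(hosts, fmt.Sprintf("%s.%s", p, jobName))
	}
	sort.Strings(hosts)
	return strings.Join(hosts, "\n") + "\n"
}
```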
And accordingly add a new action `UpdateJobAction` to run the `UpdateJob` function. The overall workflow is:

![workflow](images/Job-scale-up-down.PNG)

To scale up/down on the fly, Volcano should be responsible for notifying the original pods of the current status, including the hosts of all the pods.
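A hedged sketch of that workflow wiring (the state-machine types here are simplified assumptions): while the job is Running, the new `JobUpdatedEvent` maps to the new `UpdateJobAction`, which creates/deletes pods and then lets plugins refresh the hosts ConfigMap so existing pods learn the new member list:

```go
package state

// Simplified stand-ins for the job state machine in this sketch.
type Event string
type Action string

const (
	JobUpdatedEvent Event  = "JobUpdated"
	SyncJobAction   Action = "SyncJob"
	UpdateJobAction Action = "UpdateJob"
)

type runningState struct{}

// Execute maps an event to the controller action while the job is Running:
// only the scale up/down event triggers UpdateJob; everything else falls
// back to the normal sync.
func (runningState) Execute(event Event) Action {
	if event == JobUpdatedEvent {
		return UpdateJobAction
	}
	return SyncJobAction
}
```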
I think we probably need more update categories here:

- pod template change
- job replica change
- job manifest change, like annotations

I assume only specific changes will trigger `UpdateJobAction`.
Those kinds of updates, other than the replica change, are prohibited by the admission webhook.
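To illustrate the distinction (the helper below is an assumption for this sketch, not the actual controller code): only a pure replica change on an existing task counts as a scale event and raises `JobUpdatedEvent`; pod template and other manifest changes are rejected by the webhook and never reach this point:

```go
package controller

// isScaleEvent reports whether the difference between the old and new
// per-task replica counts is a pure scale up/down: same tasks, at least one
// count changed.
func isScaleEvent(oldReplicas, newReplicas map[string]int32) bool {
	if len(oldReplicas) != len(newReplicas) {
		return false // tasks added or removed: not a plain scale event
	}
	changed := false
	for task, old := range oldReplicas {
		cur, ok := newReplicas[task]
		if !ok {
			return false // task renamed or removed: not a plain scale event
		}
		if cur != old {
			changed = true
		}
	}
	return changed
}
```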
@@ -0,0 +1,103 @@
# Volcano Job scale up and down
I assume the scope of this proposal is to make sure the controller can respond to job scale up and down; no scale up/down decision needs to be made here.
Yeah, correct.
LGTM, sorry for the long delay ... @hzxuzhonghu @k82cn
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hzxuzhonghu, Jeffwan, k82cn

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
…2-origin-release-0.4 Automated cherry pick of #782: Support scale up and down
/cc @k82cn @zrss