run scheduler by wiring up to a command #16015
Conversation
pkg/cmd/server/start/start_master.go
Outdated
@@ -472,6 +480,10 @@ func (m *Master) Start() error {
	go func() {
		controllerPlug.WaitForStart()

		// this does a second leader election, but doing the second leader election
		// will allow us to move our process out in 3.8 if we so choose.
		go schedulerapp.Run(scheduler)
Doing this will run a second leader election for the scheduler, but doing it like this will allow us to split into a separate binary/unit file in 3.8 without worrying about ending up with dueling schedulers since the second leader election will gate the separate binary.
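For readers following along, here is a minimal sketch of how a second leader election gates a scheduler instance, written against the modern client-go leader election helpers. The lock setup, timings, and function names are illustrative assumptions, not this PR's code; the point is that only the lease holder ever schedules, whether it runs in-process (3.7) or as a separate binary (3.8).

package main

import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runGatedScheduler blocks on a leader election before scheduling begins, so
// an in-process scheduler and a future separate binary can coexist without
// dueling: whichever instance holds the lease is the only one scheduling.
func runGatedScheduler(lock resourcelock.Interface, schedule func(context.Context)) {
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: schedule, // won the lease: start scheduling
			OnStoppedLeading: func() {
				panic("lost lease; exit so we never run two active schedulers")
			},
		},
	})
}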
@mfojtik looks like I just forgot to initialize a map somewhere. Look good otherwise?
changes LGTM @sjenning FYI since scheduler is your baby :-)
Force-pushed from f98eaa9 to dabf403
Force-pushed from dabf403 to 3783e1b
@aveshagarwal can you take a look? @ravisantoshgudimetla too.
/retest
@sjenning @aveshagarwal @ravisantoshgudimetla this formalizes previous attempts to stop wiring the config of the upstream scheduler and start using the bare upstream command. Are you planning to review, hold, or defer? This is one of the two remaining pieces needed to stop wiring kube controllers specially; it brings us closer into line with other deployments and has a clear migration path towards a "normal" deployment. I'd rather not let it linger.
/lgtm. Changes LGTM @deads2k. Thanks
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: deads2k, ravisantoshgudimetla. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing /approve in a comment.
/retest
scheduler, err := newScheduler(kubeconfigFile, schedulerConfigFile, cmdLineArgs)
if err != nil {
	glog.Error(err)
	continue
It seems to me that if there is any parsing error or anything, it will keep filling the logs. How is continuing here helpful?
It keeps the entire process (including the apiserver) from dying.
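To make that tradeoff concrete, here is the shape of the loop being discussed as a self-contained sketch. The helper signature and the backoff interval are stand-ins, not the PR's exact code: a bad scheduler config is logged and retried rather than crashing the shared master process that also hosts the apiserver.

package main

import (
	"time"

	"github.com/golang/glog"
)

// runSchedulerLoop keeps retrying scheduler construction on error. The cost
// is repeated log entries on a persistently bad config (the review concern);
// the benefit is that the shared process, apiserver included, stays up.
func runSchedulerLoop(newScheduler func() (func(), error)) {
	for {
		run, err := newScheduler()
		if err != nil {
			glog.Error(err)              // this is the log spam being flagged
			time.Sleep(10 * time.Second) // assumed backoff to slow the spam
			continue                     // ...but the process survives
		}
		run()
		return
	}
}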
/retest
ret := &KubeControllerConfig{}

kubeExternal, _, err := configapi.GetExternalKubeClient(options.MasterClients.OpenShiftLoopbackKubeConfig, options.MasterClients.OpenShiftLoopbackClientConnectionOverrides)
I don't see us making sure that connection overrides still apply. Since the client will now be created in the scheduler itself, are we OK with not having this?
// applyClientConnectionOverrides updates a kubeConfig with the overrides from the config.
func applyClientConnectionOverrides(overrides *ClientConnectionOverrides, kubeConfig *restclient.Config) {
	if overrides == nil {
		return
	}
	kubeConfig.QPS = overrides.QPS
	kubeConfig.Burst = int(overrides.Burst)
	kubeConfig.ContentConfig.AcceptContentTypes = overrides.AcceptContentTypes
	kubeConfig.ContentConfig.ContentType = overrides.ContentType
}
Yes. The path forward will be to configure the differences we need via the scheduler CLI. If the scheduler doesn't support a feature we need, we'll pursue it upstream and perhaps pick it back.
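As a sketch of what "configure the differences via the scheduler CLI" could look like: --kube-api-qps, --kube-api-burst, and --kube-api-content-type are real kube-scheduler flags, but the map shape (flag name to values, as mentioned earlier in this PR) and the fallback values below are assumptions for illustration only.

// schedulerArgs fills in fallback client tuning for the upstream scheduler
// command. Explicit caller-supplied flags win; these are only our defaults.
func schedulerArgs(cmdLineArgs map[string][]string) map[string][]string {
	if cmdLineArgs == nil {
		cmdLineArgs = map[string][]string{}
	}
	setIfUnset := func(flag string, values ...string) {
		if _, ok := cmdLineArgs[flag]; !ok {
			cmdLineArgs[flag] = values
		}
	}
	setIfUnset("kube-api-qps", "50")    // example values, not recommendations
	setIfUnset("kube-api-burst", "100")
	setIfUnset("kube-api-content-type", "application/vnd.kubernetes.protobuf")
	return cmdLineArgs
}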
That's OK, I agree with this. But I am concerned that someone may have supplied QPS/Burst/ContentType in OpenShift without realizing that these values won't be passed to the scheduler any more. Wouldn't that create regressions?
Are the defaults bad? If the upstream defaults are good, I'm probably willing to crack some eggs. If they're bad, we'll do some shenanigans to override. The overrides in OpenShift are pretty coarse-grained and targeted at loopback (which we'll probably be disabling) and controllers (which we have to keep).
Just remember that everything we plumb through specially is another thing you'll be describing to ansible to special-case during the (hopefully) 3.8 migration.
I am not really sure whether the upstream defaults are good or bad; let's say they are good. And I agree with your ansible point.
But it is getting a bit confusing now, as there are four ways to set QPS/Burst etc.:
- Via kubeconfig
- Via overrides
- Via the scheduler's defaults (https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/apis/componentconfig/v1alpha1/defaults.go#L139)
- Via the scheduler server's flags
Let's say overrides are not in the picture any more; I think I still see an issue in the createClient function: https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/configurator.go#L57
Even if a user does not set any scheduler flags for QPS/Burst etc., the kubeconfig's QPS/Burst values are overwritten by the defaults, as there is no way to differentiate whether they were set by the user or are just defaults.
I think we will have to decide whether we are OK with kubeconfig-supplied values being overwritten by defaults. This is something I will have to look into upstream in kube too.
One thing I am still wondering: in practice, what is the preferred way to configure QPS/Burst, via kubeconfig or via the scheduler's flags?
/retest
/retest
/retest
/retest
/retest
/test integration
seems like a quay.io infra problem
Automatic merge from submit-queue |
Wires the scheduler into the controllers by running the command with flags. This prevents nearly all of our rebase pain in the scheduler wiring area.