Update scheduler initialization in predicates.go #1794
Conversation
/assign @aleksandra-malinowska
force-pushed from 4c8a7c6 to 2076ab0
With the previous code and recent changes to the scheduler codebase, we did not initialize the informer hooks that update the scheduler cache used by some of the predicate functions (especially MatchInterPodAffinity). This change replicates how the scheduler is initialized now, i.e. we actually create a Scheduler object and explicitly call AddAllEventHandlers. As a follow-up, a discussion will be started with sig-scheduling to make this initialization in CA simpler, with less copy-pasting of code from the scheduler codebase.
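For context, here is a minimal sketch (not the PR diff itself) of the pattern being adopted: given an already-constructed Scheduler and a shared informer factory over the same client, the event handlers that keep the scheduler cache current must be registered explicitly. The `AddAllEventHandlers` signature shown is an assumption based on the ~1.13-era scheduler packages and may differ in the vendored version:

```go
// Sketch, assuming `sched` was built via scheduler.New(...) and
// `informerFactory` is a shared informer factory over the same client.
// Without this explicit registration the scheduler cache is never
// updated, so cache-backed predicates such as MatchInterPodAffinity
// evaluate against stale (empty) state.
scheduler.AddAllEventHandlers(
	sched,
	apiv1.DefaultSchedulerName, // "default-scheduler"
	informerFactory.Core().V1().Nodes(),
	informerFactory.Core().V1().Pods(),
	informerFactory.Core().V1().PersistentVolumes(),
	informerFactory.Core().V1().PersistentVolumeClaims(),
	informerFactory.Core().V1().Services(),
	informerFactory.Storage().V1().StorageClasses(),
)

// Start the informers so the registered handlers begin receiving events.
informerFactory.Start(stop)
```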
force-pushed from 2076ab0 to 5059184
/lgtm

@bsalamat this change highlights how entangled the predicate code is with discovering cluster state via informers. Is there any plan to improve separation of concerns and make it less fragile to use? Can we help make it happen?
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aleksandra-malinowska

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Oops, sorry about that. The requirement for an explicit call to AddAllEventHandlers …
@aleksandra-malinowska Your help on this effort would be greatly appreciated. The scheduler cleanup effort (kubernetes/kubernetes#68951) is meant to at least partially address this issue. I don't think the current plan gets us all the way there - that could be "phase 3". Design help on that phase would also be useful.
QQ - how are your vendored deps being tracked? I see k8s.io/kubernetes being added to your vendor, but no go.mod or Gopkg.toml.
@misterikkit we are vendoring only(*) k8s.io/kubernetes, and all other packages are transitive dependencies. We are not using go dep nor go modules, but the old and ugly godeps (I think we have no other option as long as k8s.io/kubernetes uses that, right?). As if that were not enough, we are following the awkward protocol for updating dependencies described here: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-update-ca-dependencies-particularly-k8siokubernetes. That is because k8s.io/kubernetes is not designed to be vendored as a whole. Some of the dependencies referenced from k8s.io/kubernetes/Godeps.json are not aligned with the code in that repo. Actually, they are not used for compiling k8s.io/kubernetes at all; instead the build scripts use the code residing in k8s.io/kubernetes/staging. But godeps does not know that when we vendor k8s.io/kubernetes into CA. Hence the manual distributing of the contents of the staging directory around GOPATH before calling godep save. All in all, it is a mess, and we would love to clean this up. We just do not have a concrete plan for how to approach that yet.

(*) a few extra dependencies are added on top of that, as required by some external cloud providers (e.g. Azure)

cc: @MaciekPytel who may know more about the staging hell :)
@misterikkit Actually, the approach with the explicit call was the second one. First I tried to use … If you are interested in the previous approach, here is the code:

```go
// Assumed imports for this snippet; paths follow the ~1.13-era vendored
// scheduler packages and may differ in other kubernetes versions:
import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	kube_client "k8s.io/client-go/kubernetes"
	"k8s.io/klog"
	"k8s.io/kubernetes/pkg/scheduler"
	"k8s.io/kubernetes/pkg/scheduler/algorithm/predicates"
	"k8s.io/kubernetes/pkg/scheduler/apis/config"
	"k8s.io/kubernetes/pkg/scheduler/factory"
)

func NewPredicateCheckerNewNoCopy(kubeClient kube_client.Interface, stop <-chan struct{}) (*PredicateChecker, error) {
	informerFactory := informers.NewSharedInformerFactory(kubeClient, 0)
	recorder := NoOpEventRecorder{}
	algorithmProvider := factory.DefaultProvider
	algorithmSource := config.SchedulerAlgorithmSource{
		Provider: &algorithmProvider,
	}
	sched, err := scheduler.New(
		kubeClient,
		informerFactory.Core().V1().Nodes(),
		informerFactory.Core().V1().Pods(),
		informerFactory.Core().V1().PersistentVolumes(),
		informerFactory.Core().V1().PersistentVolumeClaims(),
		informerFactory.Core().V1().ReplicationControllers(),
		informerFactory.Apps().V1().ReplicaSets(),
		informerFactory.Apps().V1().StatefulSets(),
		informerFactory.Core().V1().Services(),
		informerFactory.Policy().V1beta1().PodDisruptionBudgets(),
		informerFactory.Storage().V1().StorageClasses(),
		recorder,
		algorithmSource,
		stop,
		scheduler.WithHardPodAffinitySymmetricWeight(apiv1.DefaultHardPodAffinitySymmetricWeight))
	if err != nil {
		return nil, fmt.Errorf("could not create scheduler: %s", err.Error())
	}
	// Copy the predicates registered by the scheduler's default algorithm
	// provider into our own map.
	predicateMap := map[string]predicates.FitPredicate{}
	for predicateName, predicateFunc := range sched.Config().Algorithm.Predicates() {
		predicateMap[predicateName] = predicateFunc
	}
	predicateMap["ready"] = isNodeReadyAndSchedulablePredicate
	// We always want to have PodFitsResources as the first predicate we run,
	// as it is cheap to check and should be enough to fail predicates
	// in most of our simulations (especially binpacking).
	if _, found := predicateMap["PodFitsResources"]; !found {
		predicateMap["PodFitsResources"] = predicates.PodFitsResources
	}
	predicateList := make([]predicateInfo, 0)
	// priorityPredicates is a package-level list (defined elsewhere in CA's
	// simulator package) of predicates that should run first.
	for _, predicateName := range priorityPredicates {
		if predicate, found := predicateMap[predicateName]; found {
			predicateList = append(predicateList, predicateInfo{name: predicateName, predicate: predicate})
			delete(predicateMap, predicateName)
		}
	}
	for predicateName, predicate := range predicateMap {
		predicateList = append(predicateList, predicateInfo{name: predicateName, predicate: predicate})
	}
	for _, predInfo := range predicateList {
		klog.V(1).Infof("Using predicate %s", predInfo.name)
	}
	informerFactory.Start(stop)
	return &PredicateChecker{
		predicates: predicateList,
		//predicateMetadataProducer: metadataProducer, // TODO we have a problem here
		enableAffinityPredicate: true,
	}, nil
}
```
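For reference, a hypothetical call site for the constructor above (the `kubeClient` setup is assumed, and this approach was ultimately not the one merged):

```go
// Hypothetical usage of the abandoned constructor; assumes kubeClient is
// an already-configured kube_client.Interface.
stop := make(chan struct{})
defer close(stop)

checker, err := NewPredicateCheckerNewNoCopy(kubeClient, stop)
if err != nil {
	klog.Fatalf("failed to create predicate checker: %v", err)
}
_ = checker // would be handed to CA's simulation / scale-up logic
```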
@aleksandra-malinowska Thanks for your offer. We know that the scheduler initialization is convoluted. Jonathan has made a lot of progress on that recently, but there is work left to be done.
I think that assumption might be worth exploring. It is true that k8s.io/kubernetes does not play nice with vendor folders, but you may still be able to use dep instead of Godeps.
Update scheduler initialization in predicates.go
With the previous code and recent changes to the scheduler codebase we did not
initialize the informer hooks that update the scheduler cache used by some of
the predicate functions (especially MatchInterPodAffinity).

This change replicates how the scheduler is initialized now, i.e. we
actually create a Scheduler object and explicitly call AddAllEventHandlers.

As a follow-up, a discussion will be started with sig-scheduling to make
this initialization in CA simpler, with less copy-pasting of code
from the scheduler codebase.
cc: @bsalamat, @misterikkit