CA fails to schedule nodes onto spot ASGs with zero instances, producing "node(s) didn't match node selector" on EKS #4010
Hi,
Cluster autoscaler was deployed with Helm using the following values.yaml. I don't see that option in https://github.com/kubernetes/autoscaler/blob/master/charts/cluster-autoscaler/values.yaml.
Node labels were provided as "--node-labels=node.kubernetes.io/lifecycle=spot,group-name=spot,instance=m5.xlarge" in the spot launch template. I didn't know that the node labels had to follow any specific nomenclature to be targeted.
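For a group that has to scale up from zero, kubelet --node-labels alone are invisible to the autoscaler, because there is no running node to inspect; the CA AWS docs have the ASG itself carry matching k8s.io/cluster-autoscaler/node-template/label/* tags so CA can build a node template for the empty group. A minimal sketch of tagging such a group, assuming a hypothetical ASG named my-spot-asg:

```bash
# Sketch: mirror the kubelet --node-labels as node-template tags on the ASG,
# so the autoscaler can predict the labels of nodes it has never seen.
# "my-spot-asg" is a placeholder; substitute the real group name.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-spot-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/lifecycle,Value=spot,PropagateAtLaunch=true" \
  "ResourceId=my-spot-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/node-template/label/group-name,Value=spot,PropagateAtLaunch=true"
```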
must be replaced with
and the deployment.yaml should target the nodeSelector using a fully qualified label name with k8s..., i.e.?
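If that's the question, then no: per the CA AWS README, the k8s.io/cluster-autoscaler/node-template/label/ prefix belongs only on the ASG tag key; the node registers with the bare label, so the pod spec keeps the short name. A minimal sketch of a Deployment that would match, using placeholder names (spot-workload, an nginx image):

```bash
# Sketch: a workload pinned to the spot group via the bare node label.
# The node-template/ prefix stays on the ASG tag, not in the pod spec.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spot-workload        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spot-workload
  template:
    metadata:
      labels:
        app: spot-workload
    spec:
      nodeSelector:
        node.kubernetes.io/lifecycle: spot
      containers:
        - name: app
          image: nginx:stable   # placeholder image
EOF
```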
I see that when the nodes do come up, the labels are present.
@bpineau I tried adding these labels to the ASG yesterday as you mentioned, and to the kubelet extra args --node-labels, i.e. "--node-labels=node.kubernetes.io/lifecycle=spot,group-name=spot,instance=m5a.large". I did a deployment with both selectors and still got the same results.
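One way to narrow this down is to compare what a live node actually registers against what the empty ASG advertises; a quick check, again assuming the hypothetical group name my-spot-asg:

```bash
# Labels the kubelet actually registered on running nodes
kubectl get nodes -L node.kubernetes.io/lifecycle -L group-name

# Tags the autoscaler reads when the group sits at zero instances
# ("my-spot-asg" is a placeholder for the real group name)
aws autoscaling describe-tags \
  --filters "Name=auto-scaling-group-name,Values=my-spot-asg"
```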
I0421 15:32:06.329470 1 flags.go:52] FLAG: --add-dir-header="false"
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Having the same problem. ASGs labeled with the node-template tags.
^ Bump. Having the same issue as well. ASGs labeled with the node-template tags; scale up from zero is enabled on the autoscaler deployment.
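For context, a minimal sketch of the Helm deployment this thread assumes, using the chart's documented autoDiscovery values with a placeholder cluster name and region:

```bash
# Sketch: cluster-autoscaler via the official chart with ASG auto-discovery.
# "my-cluster" and "us-east-1" are placeholders; auto-discovery also expects
# the k8s.io/cluster-autoscaler/enabled and k8s.io/cluster-autoscaler/my-cluster
# tags on each ASG the autoscaler should manage.
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=us-east-1
```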
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
My nodes:
My ASG with instance=m5.xlarge:
Scheduling this deployment:
CA logs:
Related to #4002?