Cluster Autoscaler for Exoscale - FailedScheduling #4782
Comments
@jbartosik any update on the issue?
Hi, I work on VPA. I also label issues to make it easier to find the ones I want to check. I think @MaciekPytel is the point of contact for CA.
Hello. In my view, the deployment provided as an example is missing a proper toleration field. If you are deploying your cluster with
Please note that
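For reference, here is a minimal sketch of tolerations that could be added to the cluster-autoscaler Deployment's pod spec. It assumes the standard control-plane taint keys; verify the actual taints on your control-plane nodes before applying.

```yaml
# Sketch only: tolerations for the standard control-plane taints.
# Check the real taint keys/effects on your nodes first.
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master   # older clusters commonly use this key
          operator: Exists
          effect: NoSchedule
```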
@PhilippeChepy thank you. I redeployed with the updated toleration. However, the pod deployed on the control plane is stuck in Pending status. So I tried the other manifest, "cluster-autoscaler.yaml", to deploy it on a worker node, but that pod is in CrashLoopBackOff status. See pod details:
See logs below:
The relevant part of your log is:
From your Pod description:
The error is coming from this portion of code. What is the content of the secret? This secret can be set by the
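For illustration, a minimal sketch of what such a secret could look like. The name, namespace, and key names below are assumptions for the example, not values taken from the Exoscale manifests; check the provider README for the exact environment variable names the autoscaler expects (e.g. EXOSCALE_API_KEY / EXOSCALE_API_SECRET).

```yaml
# Sketch only: secret name, namespace, and key names are illustrative assumptions;
# match them to whatever your cluster-autoscaler Deployment actually references.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-autoscaler-exoscale   # hypothetical name
  namespace: kube-system
type: Opaque
stringData:
  api-key: "EXO..."      # Exoscale API key
  api-secret: "..."      # Exoscale API secret
# The container would then surface these as environment variables, e.g.:
#   env:
#     - name: EXOSCALE_API_KEY
#       valueFrom: {secretKeyRef: {name: cluster-autoscaler-exoscale, key: api-key}}
#     - name: EXOSCALE_API_SECRET
#       valueFrom: {secretKeyRef: {name: cluster-autoscaler-exoscale, key: api-secret}}
```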
Which component are you using?: cluster-autoscaler
What version of the component are you using?: 9.16.2
Component version: latest
What k8s version are you using (kubectl version)?: 1.23
What environment is this in?: exoscale
What did you expect to happen?: to deploy cluster-autoscaler
What happened instead?: the pod got stuck in Pending status.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
I was expecting it to tolerate the taints on the control plane, but I don't think that is the case here.
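For context, a Pending pod usually means the scheduler found no node whose taints the pod tolerates. A sketch of what the control-plane taint typically looks like in the Node object (on 1.23 the key may still be node-role.kubernetes.io/master), which the pod's tolerations need to match:

```yaml
# Sketch: a typical control-plane taint as it appears in the Node spec.
spec:
  taints:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
```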