Scale from 0, unwanted nodes #2165
Comments
Relates to #2008.
What expander strategy are you using?
Sorry for the late response, I just came back from vacation :D
I'm using the default expander (i.e. random).
What settings are you using when running the autoscaler (the flags)? And which version? It could be something in the configuration causing this.
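For reference, a hypothetical start-up invocation showing the flags that typically matter for this kind of scale-up/scale-down loop on AWS; the binary path, ASG names, and min:max bounds below are placeholders, not taken from this issue:

```shell
# Sketch of a cluster-autoscaler invocation on AWS. Names and bounds are
# examples only; --nodes takes min:max:asg-name per node group.
./cluster-autoscaler \
  --cloud-provider=aws \
  --expander=random \
  --nodes=1:10:my-spot-asg \
  --nodes=0:3:my-tainted-asg \
  --scale-down-unneeded-time=10m \
  --v=4
```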
Maybe something to do with
My guess would be the scale-from-0 logic incorrectly guessing what the node will look like. CA sees a template node that would help the pending pods, so it scales up. Once the node is created, it turns out it looks different than expected and doesn't actually fit the pods, so CA deletes it. Once there are 0 nodes, it goes back to using the scale-from-0 template and the situation repeats.
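If that guess is right, one common remedy on AWS is to describe the expected node to CA via the node-template ASG tags, so the simulated scale-from-0 node matches the real one. A sketch using the documented tag keys; the ASG name, label key, and values are placeholders:

```shell
# Advertise the labels/resources a node from this ASG will have once it boots,
# so the scale-from-0 template node matches the real node. Values are examples.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/label/workload,Value=batch" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/resources/ephemeral-storage,Value=100Gi"
```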
I don't like the message: It seems something is going wrong.
/area provider/aws
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
It seems it got fixed.
Hello. I have three ASGs:
main [min: 1, max: 1]
spots [min: 1, max: 10]
test-asg [min: 0, max: 0, tainted]
The taint is specified in both the ASG tags and the instance tags.
CA always creates a new node in the test-asg group, even though no pods are being scheduled on the test-asg nodes. Then it deletes the node (after the unneeded period) and creates it again, in a loop. How can I fix this?
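If the taint is only applied to the instances after they boot, CA's scale-from-0 simulation may not know about it and will keep assuming pending pods fit the group. Advertising the taint on the ASG with the documented node-template taint tag is one way to close that gap; a sketch, where the taint key and value are placeholders for the actual taint:

```shell
# Make CA's template node for test-asg carry the taint, so CA stops scaling
# the group up for pods that do not tolerate it. "dedicated=true:NoSchedule"
# is an example taint, not taken from this issue.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=test-asg,ResourceType=auto-scaling-group,PropagateAtLaunch=true,Key=k8s.io/cluster-autoscaler/node-template/taint/dedicated,Value=true:NoSchedule"
```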