
AWSMachinePool minSize should be allowed to go to 0 #3242

Closed
mweibel opened this issue Feb 22, 2022 · 5 comments · Fixed by #3468
Labels

  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@mweibel (Contributor) commented Feb 22, 2022

/kind bug

What steps did you take and what happened:
I tried to set the AWSMachinePool minSize to 0, but applying the change failed.

The AWSMachinePoolSpec.minSize field currently carries a validation rule of Minimum=1. An AWS Auto Scaling group (ASG) has no such lower bound and therefore allows setting the minimum size to zero.
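For illustration, a minimal sketch of what the fix could look like in the CRD types, assuming the field lives on AWSMachinePoolSpec with a kubebuilder validation marker (the surrounding fields and defaults shown here are assumptions, not verbatim project source):

```go
package v1beta1

// AWSMachinePoolSpec (excerpt): relaxing the kubebuilder marker from
// Minimum=1 to Minimum=0 makes the generated CRD schema accept minSize: 0,
// matching what the AWS ASG API itself allows.
type AWSMachinePoolSpec struct {
	// MinSize defines the minimum size of the group.
	// +kubebuilder:default=1
	// +kubebuilder:validation:Minimum=0
	MinSize int32 `json:"minSize"`

	// MaxSize defines the maximum size of the group.
	// +kubebuilder:default=1
	// +kubebuilder:validation:Minimum=1
	MaxSize int32 `json:"maxSize"`
}
```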

Depending on the implementation, correcting this validation might also matter for the autoscaler scale-to-zero proposal.

What did you expect to happen:
Setting minSize: 0 works and updates the autoscaling group accordingly.

Anything else you would like to add:
Happy to provide a PR for this, if that sounds like something you'd accept :) Not sure what kind of tests would be needed to do so; I'd need some guidance there.
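As a rough idea of the kind of unit test that could accompany the change (the validateMinSize helper below is hypothetical; in the real project the bound is enforced by the generated CRD schema and the validation webhook, so those tests would be the actual reference point):

```go
package v1beta1

import (
	"fmt"
	"testing"
)

// validateMinSize is a hypothetical stand-in for the project's real
// validation; it encodes the relaxed bound of minSize >= 0.
func validateMinSize(minSize int32) error {
	if minSize < 0 {
		return fmt.Errorf("minSize must be >= 0, got %d", minSize)
	}
	return nil
}

func TestMinSizeZeroAllowed(t *testing.T) {
	tests := []struct {
		name    string
		minSize int32
		wantErr bool
	}{
		{name: "zero becomes valid", minSize: 0, wantErr: false},
		{name: "one stays valid", minSize: 1, wantErr: false},
		{name: "negative stays invalid", minSize: -1, wantErr: true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if err := validateMinSize(tt.minSize); (err != nil) != tt.wantErr {
				t.Errorf("validateMinSize(%d) error = %v, wantErr %v", tt.minSize, err, tt.wantErr)
			}
		})
	}
}
```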

Environment:

  • Cluster-api-provider-aws version: v1.3.0
  • Kubernetes version: 1.23
k8s-ci-robot added the kind/bug, needs-priority, and needs-triage labels on Feb 22, 2022
@richardcase (Member):

/triage accepted
/priority important-longterm
/help

@k8s-ci-robot (Contributor):

@richardcase:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/triage accepted
/priority important-longterm
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the triage/accepted, help wanted, and priority/important-longterm labels and removed the needs-triage and needs-priority labels on Feb 22, 2022
@richardcase (Member):

When doing this, we should also update the e2e tests to cover scaling to 0.
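A minimal sketch of what such a check could do, assuming a controller-runtime client and the cluster-api exp MachinePool API; the surrounding e2e framework plumbing (cluster setup, namespaces, assertions) is omitted:

```go
package e2e

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleMachinePoolToZero patches a MachinePool down to zero replicas and
// waits for the controller to report that no replicas remain.
func scaleMachinePoolToZero(ctx context.Context, c client.Client, key types.NamespacedName) error {
	mp := &expv1.MachinePool{}
	if err := c.Get(ctx, key, mp); err != nil {
		return err
	}
	patch := client.MergeFrom(mp.DeepCopy())
	zero := int32(0)
	mp.Spec.Replicas = &zero
	if err := c.Patch(ctx, mp, patch); err != nil {
		return err
	}
	// Poll until the MachinePool status converges to zero replicas.
	return wait.PollImmediate(15*time.Second, 10*time.Minute, func() (bool, error) {
		if err := c.Get(ctx, key, mp); err != nil {
			return false, err
		}
		return mp.Status.Replicas == 0, nil
	})
}
```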

@k8s-triage-robot commented May 23, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 23, 2022
@mweibel (Contributor, Author) commented May 23, 2022

/remove-lifecycle stale
