Ensure that Standard LBs have configurable number of frontend IPs #541
Comments
> Issues go stale after 90d of inactivity. If this issue is safe to close now please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

> Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle rotten

/help

@CecileRobertMichon: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/assign
/kind feature
Describe the solution you'd like
Because Kubernetes-on-Azure implementations share a common Standard LoadBalancer resource to route outbound traffic across all node pools, it's important to be able to scale up the number of public IP addresses shared by the backend pool members for outbound access, because outbound SNAT port allocation is capped per IP address.

This is a feature request to ensure the Standard LoadBalancer implementation is not hard-coded to a single public IP address for outbound SNAT. If it is, clusters at scale will break due to SNAT port exhaustion.
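To illustrate why this matters, here is a rough sketch of the arithmetic (the ~64,000 SNAT ports per frontend IP figure and the even-split allocation are simplifying assumptions; actual Azure allocation happens in fixed tiers based on backend pool size):

```python
def snat_ports_per_node(num_frontend_ips: int, num_nodes: int,
                        ports_per_ip: int = 64_000) -> int:
    """Approximate outbound SNAT ports available to each backend node.

    Assumes (for illustration only) that each frontend public IP
    contributes ~64,000 SNAT ports, split evenly across the pool.
    """
    return (num_frontend_ips * ports_per_ip) // num_nodes


# With a single outbound IP, a 100-node cluster gets only ~640 ports
# per node, which chatty workloads can exhaust quickly; adding frontend
# IPs scales the per-node budget linearly.
print(snat_ports_per_node(1, 100))   # -> 640
print(snat_ports_per_node(16, 100))  # -> 10240
```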
This PR implements this for aks-engine and can be used as a reference:
Azure/aks-engine#3085
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):