✨ Enable Worker Nodes to Associate with Floating IPs #1725
Conversation
Welcome @mikaelgron!
Hi @mikaelgron. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mikaelgron. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
✅ Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!
/ok-to-test
}

if fp.PortID != "" {
	scope.Logger().Info("Floating IP already associated to a port", "id", fp.ID, "fixedIP", fp.FixedIP, "portID", port.ID)
This feels like something that happens all the time; it should have a lower log level.
Agreed. It should perhaps be reduced in the APIServerLoadBalancer flow on line 426 as well.
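For reference, a minimal sketch of what the lower log level could look like, assuming the scope logger is logr-based (as in CAPO); the V(4) verbosity and the stdr backend here are illustrative choices, not taken from this PR.

package main

import (
	"log"
	"os"

	"github.com/go-logr/stdr"
)

func main() {
	// stdr only emits V(n) messages when its verbosity is raised to at least n.
	stdr.SetVerbosity(4)
	logger := stdr.New(log.New(os.Stderr, "", log.LstdFlags))

	// Same message as the diff above, but at a higher verbosity (lower
	// importance) so it does not flood the default log output.
	logger.V(4).Info("Floating IP already associated to a port",
		"id", "fip-uuid", "fixedIP", "10.0.0.5", "portID", "port-uuid")
}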
			return ctrl.Result{}, fmt.Errorf("associate floating IP %q to worker node with port %q: %w", fp.FloatingIP, port.ID, err)
		}
	}
}
We should log the situation where a node hasn't got any FIP because the pool is exhausted.
if !util.IsControlPlaneMachine(machine) && openStackCluster.Spec.WorkerFloatingIPConfig.Enabled {
	scope.Logger().Info("Processing worker floating IPs")
	for _, floatingIP := range openStackCluster.Spec.WorkerFloatingIPConfig.IPAddresses {
		fp, err := networkingService.GetOrCreateFloatingIP(openStackMachine, openStackCluster, clusterName, floatingIP)
If the FIP actually got created, where do we remove it on cluster deletion?
For the use case where you have to create IPs first to get the actual IP address strings to put in the spec, they would need to be removed manually anyway. It would be better to use a getter instead of a get-or-create in this case.
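For illustration, a sketch of that getter idea using gophercloud directly; the getFloatingIP helper, its package, and the error wording are assumptions for this example, not code from this PR or from CAPO's networking service.

package networking

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
)

// getFloatingIP looks up a pre-allocated floating IP by address and returns
// an error if it does not exist; unlike get-or-create, it never creates one.
func getFloatingIP(client *gophercloud.ServiceClient, ip string) (*floatingips.FloatingIP, error) {
	pages, err := floatingips.List(client, floatingips.ListOpts{FloatingIP: ip}).AllPages()
	if err != nil {
		return nil, err
	}
	fips, err := floatingips.ExtractFloatingIPs(pages)
	if err != nil {
		return nil, err
	}
	if len(fips) == 0 {
		return nil, fmt.Errorf("floating IP %q not found; it must be pre-allocated", ip)
	}
	return &fips[0], nil
}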
@@ -29,6 +29,11 @@ type ExternalRouterIPParam struct {
	Subnet SubnetFilter `json:"subnet"`
}

type WorkerFloatingIPConfig struct {
	Enabled bool `json:"enabled,omitempty"`
Do we even need this flag? Could a nil or empty list work as a disabled state?
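A small sketch of that alternative: treat a nil config or an empty IPAddresses list as "disabled", so the separate flag becomes unnecessary. The pointer field and the json tag are assumptions for this example.

package main

import "fmt"

// WorkerFloatingIPConfig mirrors the type from this PR minus the Enabled flag.
type WorkerFloatingIPConfig struct {
	IPAddresses []string `json:"ipAddresses,omitempty"`
}

// workerFloatingIPsEnabled reports whether worker floating IPs should be
// processed; a nil config or an empty list both mean "disabled".
func workerFloatingIPsEnabled(cfg *WorkerFloatingIPConfig) bool {
	return cfg != nil && len(cfg.IPAddresses) > 0
}

func main() {
	fmt.Println(workerFloatingIPsEnabled(nil))                                                           // false
	fmt.Println(workerFloatingIPsEnabled(&WorkerFloatingIPConfig{}))                                     // false
	fmt.Println(workerFloatingIPsEnabled(&WorkerFloatingIPConfig{IPAddresses: []string{"203.0.113.7"}})) // true
}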
Let me start with a negative: I'm not a fan of the design here, particularly that this would apply to all workers without any obvious mechanism for restricting it. It also touches floating IPs in the machine controller, which is already an almighty mess. I actually opened #1674 to track removing floatingIP from the machine spec because it can't be used correctly.
However, the use case you describe sounds interesting. I'd like to help you implement it.
This sounds like it could be an IPAM controller? I believe this was first implemented by metal3, so @lentzi90 may know more about it. How about we create an OpenStackFloatingIPPool? It could contain your pre-populated list of FIPs, or create them on demand. We could add floatingIP to PortOpts, except that instead of being an IP it would be a reference to an IPAM provider (in this case our OpenStackFloatingIPPool). The machine controller would create an IPClaim for the port and wait for an IPAddress to be allocated, at which point it could create the port.
The metal3 docs are here: https://book.metal3.io/ipam/introduction.html. There was talk of CAPI adopting it, and I understand CAPV are also using it.
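To make the proposal concrete, here is a rough sketch of what such a pool type could look like; every name and field below is an assumption illustrating the idea, not the OpenStackFloatingIPPool API that was eventually implemented.

package v1alpha1

// OpenStackFloatingIPPoolSpec describes a pool that hands out floating IPs
// via the IPAM contract sketched above (IPClaim / IPAddress).
type OpenStackFloatingIPPoolSpec struct {
	// PreAllocatedFloatingIPs is an optional list of existing floating IPs
	// the pool allocates from before creating new ones on demand.
	PreAllocatedFloatingIPs []string `json:"preAllocatedFloatingIPs,omitempty"`

	// FloatingIPNetworkID is the external network used when a new floating
	// IP has to be created because the pre-allocated list is exhausted.
	FloatingIPNetworkID string `json:"floatingIPNetworkID,omitempty"`
}

// OpenStackFloatingIPPoolStatus tracks which IPs are claimed and which are free.
type OpenStackFloatingIPPoolStatus struct {
	// ClaimedIPs maps claim names to the floating IPs allocated to them.
	ClaimedIPs map[string]string `json:"claimedIPs,omitempty"`

	// AvailableIPs lists floating IPs that are currently unassigned.
	AvailableIPs []string `json:"availableIPs,omitempty"`
}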
9d64ad3 to 8da55ac
/test pull-cluster-api-provider-openstack-test
@mikaelgron: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Looks like this is being implemented by #1763?
Closed in favour of #1762
What this PR does / why we need it:
This PR aims to enable worker nodes to associate with pre-allocated Floating IPs in OpenStack. The use case is primarily for clients who want to whitelist their worker nodes in external services. By specifying a set of Floating IPs during cluster setup, worker nodes can be configured to reuse those IPs, avoiding the need to track newly created floating IPs.
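For illustration, a minimal sketch of how the proposed config could be populated; the IP addresses are placeholders and the ipAddresses json tag is an assumption based on the field used in the controller code.

package main

import "fmt"

// WorkerFloatingIPConfig mirrors the type added in this PR.
type WorkerFloatingIPConfig struct {
	Enabled     bool     `json:"enabled,omitempty"`
	IPAddresses []string `json:"ipAddresses,omitempty"`
}

func main() {
	// Hypothetical pre-allocated floating IPs that external services have
	// already whitelisted.
	cfg := WorkerFloatingIPConfig{
		Enabled:     true,
		IPAddresses: []string{"198.51.100.10", "198.51.100.11"},
	}
	fmt.Printf("worker floating IP config: %+v\n", cfg)
}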
Future Steps
We're already contemplating further enhancements to make this feature more dynamic. One idea is to let the cluster CRD create Floating IPs on demand, based on the number of machine deployments plus some headroom. While this isn't part of the current PR, it is a direction we're interested in exploring.
Feedback Requested
We are actively looking for feedback on this feature to understand its viability and possible improvements. Feel free to comment with your suggestions, questions, or concerns.
Special notes for your reviewer:
TODOs:
/hold