OPNET-197: Extend logic for detecting Node IP #218
Conversation
@mkowalski: This pull request references OPNET-197 which is a valid jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test ?
@mkowalski: The following commands are available to trigger required jobs:
The following commands are available to trigger optional jobs:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from c9dde99 to 661f51e
Force-pushed from 661f51e to 9d8f8ce
I need to come back and look at this when it's not the end of the day and I'm a little more lucid, but the broad strokes look good to me.
pkg/utils/network.go
Outdated
	return strings.Contains(ip, ".") && net.ParseIP(ip) != nil
}

func IsIPv6Addr(ip string) bool {
Note that we already have an IsIPv6 function:
baremetal-runtimecfg/pkg/utils/addresses.go (line 131 in 2905c04):
func IsIPv6(ip net.IP) bool {
LOL, of course I see it now. I will clean & fix this.
The current implementation,
func IsIPv6(ip net.IP) bool {
	return ip.To4() == nil
}
is prone to give wrong results when run against malformed IPs. I will take the liberty of changing it while keeping the same signature.
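(A minimal sketch of the stricter check being discussed, keeping the same signature; this is an illustration only, not necessarily the exact change that landed in the PR:)
// Sketch: refuse to classify a nil/unparseable net.IP as IPv6.
// ip.To4() == nil alone is also true for malformed input, which is the
// problem described above.
func IsIPv6(ip net.IP) bool {
	return ip.To16() != nil && ip.To4() == nil
}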
pkg/config/node.go
Outdated
// by detecting which of the local interfaces belongs to the same subnet as requested VIP.
// This interface can be used to detect what was the original machine network as it contains
// the subnet mask that we need.
machineNetwork, err := utils.GetLocalCIDRByIP(apiVip.String())
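(For readers unfamiliar with the helper, here is a rough sketch of what a GetLocalCIDRByIP-style function can do, using only the standard library net and fmt packages; this is an assumption for illustration, not the repository's actual implementation:)
// Hypothetical sketch: find a local interface address whose subnet contains ip
// and return that address in CIDR notation.
func GetLocalCIDRByIP(ip string) (string, error) {
	target := net.ParseIP(ip)
	if target == nil {
		return "", fmt.Errorf("%s is not a valid IP", ip)
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ipNet, ok := a.(*net.IPNet); ok && ipNet.Contains(target) {
			return ipNet.String(), nil
		}
	}
	return "", fmt.Errorf("no local subnet contains %s", ip)
}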
It makes me a little twitchy that we use machineNetwork later even if this returns an error. Maybe we just check for "" like you did elsewhere (since I think that's what machineNetwork will be in the error case)?
I see two options to solve this, a hard one and a graceful one; let's see which seems wiser here:
- If we failed to get the machine network, throw an error. This feels okay now because I tend to say that for any given API VIP your node must have at least one subnet shared with the VIP subnet. On the other hand, when Control Plane across multiple subnets is introduced, this statement may not hold true any more; I think in that scenario the logic below should not be executed as-is anyway, but I can see some potential for it to fail (which may also be good because it will quickly show us what needs changing in runtimecfg to support this feature):
machineNetwork, err := utils.GetLocalCIDRByIP(apiVip.String())
if err != nil {
return []Backend{}, fmt.Errorf("Could not retrieve subnet for IP %s", apiVip.String())
}
- Inside the function doing the real calculation, just check whether any machine network was provided. If not, it makes no sense to use this logic, so just skip the part of the code that uses it:
func getNodeIpForRequestedIpStack(
[...]
if addr == "" && machineNetwork != "" {
log.Debugf("For node %s can't find address using NodeInternalIP. Fallback to OVN annotation.", node.Name)
[...]
Any preference?
I'm not sure this code will ever get called in a multiple subnet control plane (shorter name needed for this feature ;-). The internal loadbalancer will be disabled so we won't be building peer lists at all. The only question would be whether this gets called at some point by the DNS code since we're planning to keep that part. I don't think it does because the only references I can find to it are in the loadbalancer monitors.
So I think I'm good with making this an error. Keepalived can't run on a node that isn't on the VIP subnet so I can't see where we would need to handle that.
Force-pushed from e844f3b to 5edf305
What are the cases where the IP won't be part of node.Status.Addresses? Would it be if the intended IP is on a second NIC that is not the same NIC as what was given to kubelet?
Dual-stack on vSphere (or any other cloud provider, but we target only vSphere now). This is because kubelet cannot take a dual-stack --node-ip when a cloud provider is in use. But... our aim was to explicitly state that until kubernetes/enhancements#3706 is implemented, topologies where dual-stack is spanned across multiple NICs are not supported, because in such a scenario I am not sure if we are able to correctly unwrap & wrap traffic coming to the loadbalancer. I somehow have a feeling that multi-NIC could just work (because internally every node knows how to talk to the others and the loadbalancer can use IPv4 internally even for incoming IPv6 traffic), but it's probably too cumbersome to test unless we have a customer with this specific topology.
Hmm, that's a good point. I was thinking kubelet would use its own internal logic to select IPv6, but since it doesn't get a v6 address at all in this scenario, that might not actually matter. Probably still safest to just not support that, though, since I'm not positive what the behavior will be.
Force-pushed from 5edf305 to e7b10e3
One minor question about the logging inline, but otherwise lgtm.
pkg/config/node.go
Outdated
		}
	}

	return ingressConfig, nil
}

func getNodeIpForRequestedIpStack(node v1.Node, filterIps []string, machineNetwork string) (string, error) {
	log.SetLevel(logrus.DebugLevel)
If we're forcing this on, should these log messages just be Info level?
Sure, just changed. I kind of thought about a future where we'd want an env variable to control the log level to decrease verbosity, but it will be simple to lower the level from info to debug then.
Force-pushed from e7b10e3 to 61a14ac
@mkowalski: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: cybertron, mkowalski
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
When generating keepalived.conf we rely on logic that gathers the IPs of all cluster nodes for the IP stack used by the specific VIP. This logic currently relies only on the addresses reported as part of Node.Status.Addresses.
In some scenarios the node may not report all of its IPs via kubelet yet still have them available. If we detect such a scenario (e.g. kubelet reporting only IPv4, but the VIP being IPv6), we check the Node annotations created by OVN, as those use a different source of truth, so kubelet not reporting IPs does not affect them.
The newly introduced behaviour is just a fallback for the case where Node.Status.Addresses does not contain an IP of the requested stack, so it does not change the behaviour of currently working scenarios.
Contributes-to: OPNET-197
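(To make the described fallback concrete, below is a minimal sketch of the idea. The function name, the annotation key, and the JSON layout are assumptions for illustration and may differ from the actual PR code: prefer Node.Status.Addresses, and only when it holds no address of the requested family fall back to an OVN-written node annotation, filtered by the machine network.)
package utils

import (
	"encoding/json"
	"fmt"
	"net"

	v1 "k8s.io/api/core/v1"
)

// getNodeIPForStack is a hypothetical, simplified sketch of the fallback
// described in the commit message; it is not the actual PR implementation.
func getNodeIPForStack(node v1.Node, wantIPv6 bool, machineNetwork string) (string, error) {
	// First pass: addresses reported by kubelet in Node.Status.Addresses.
	for _, addr := range node.Status.Addresses {
		if addr.Type != v1.NodeInternalIP {
			continue
		}
		if ip := net.ParseIP(addr.Address); ip != nil && (ip.To4() == nil) == wantIPv6 {
			return addr.Address, nil
		}
	}
	// Fallback: an OVN node annotation (key assumed here), but only when the
	// machine network is known so we can pick an address that belongs to it.
	if machineNetwork == "" {
		return "", fmt.Errorf("no address of the requested stack found for node %s", node.Name)
	}
	_, subnet, err := net.ParseCIDR(machineNetwork)
	if err != nil {
		return "", err
	}
	raw, ok := node.Annotations["k8s.ovn.org/host-addresses"] // assumed annotation key
	if !ok {
		return "", fmt.Errorf("no OVN annotation found on node %s", node.Name)
	}
	var hostAddrs []string
	if err := json.Unmarshal([]byte(raw), &hostAddrs); err != nil {
		return "", err
	}
	for _, a := range hostAddrs {
		if ip := net.ParseIP(a); ip != nil && subnet.Contains(ip) && (ip.To4() == nil) == wantIPv6 {
			return a, nil
		}
	}
	return "", fmt.Errorf("no suitable address found for node %s", node.Name)
}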