
OPNET-197: Extend logic for detecting Node IP #218

Merged: 1 commit merged into openshift:master from detect-ips-better on Feb 22, 2023

Conversation

@mkowalski (Contributor) commented Jan 30, 2023

When generating keepalived.conf we rely on logic that gathers the IPs of all cluster nodes for the IP stack used by the specific VIP. This logic currently relies only on the addresses reported as part of Node.Status.Addresses.

In some scenarios the node may not report all of its IPs via kubelet while still having them available. If we detect such a scenario (e.g. kubelet reporting only IPv4, but the VIP being IPv6), we check the Node annotations created by OVN, as those use a different source of truth and are therefore not affected by kubelet under-reporting IPs.

The newly introduced behaviour is only a fallback for the case where Node.Status.Addresses does not contain an IP of the requested stack, so it does not change the behaviour of currently working scenarios.

Contributes-to: OPNET-197
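
For illustration only, a minimal sketch of the fallback described above; this is not the code merged in this PR. The annotation key k8s.ovn.org/host-addresses, the function name nodeIPForStack and the package name are assumptions made for this sketch. The preferred source is Node.Status.Addresses; the OVN-published addresses are consulted only when no address of the requested stack is found there.

// Hypothetical sketch only, not the exact code from this PR.
package config

import (
	"net"

	v1 "k8s.io/api/core/v1"
)

// nodeIPForStack returns the first NodeInternalIP of the requested stack from
// Node.Status.Addresses. If none is found, it falls back to the addresses that
// OVN publishes for the node (assumed here to have been parsed already from
// the "k8s.ovn.org/host-addresses" annotation).
func nodeIPForStack(node v1.Node, wantIPv6 bool, ovnHostAddresses []string) string {
	matches := func(s string) bool {
		ip := net.ParseIP(s)
		return ip != nil && (ip.To4() == nil) == wantIPv6
	}
	// Preferred source of truth: addresses reported by kubelet.
	for _, addr := range node.Status.Addresses {
		if addr.Type == v1.NodeInternalIP && matches(addr.Address) {
			return addr.Address
		}
	}
	// Fallback: OVN's view of the node addresses, which is not affected by
	// kubelet under-reporting.
	for _, a := range ovnHostAddresses {
		if matches(a) {
			return a
		}
	}
	return ""
}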

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Jan 30, 2023
@openshift-ci-robot (Contributor) commented Jan 30, 2023

@mkowalski: This pull request references OPNET-197 which is a valid jira issue.



@openshift-ci openshift-ci bot requested review from cybertron and dougsland January 30, 2023 11:03
@openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jan 30, 2023
@mkowalski (Contributor, Author) commented:

/test ?

openshift-ci bot commented Jan 30, 2023

@mkowalski: The following commands are available to trigger required jobs:

  • /test e2e-metal-ipi-ovn-ipv6
  • /test gofmt
  • /test govet
  • /test images
  • /test unit

The following commands are available to trigger optional jobs:

  • /test e2e-metal-ipi
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-openstack

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-baremetal-runtimecfg-master-e2e-metal-ipi
  • pull-ci-openshift-baremetal-runtimecfg-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-baremetal-runtimecfg-master-gofmt
  • pull-ci-openshift-baremetal-runtimecfg-master-govet
  • pull-ci-openshift-baremetal-runtimecfg-master-images
  • pull-ci-openshift-baremetal-runtimecfg-master-unit


@mkowalski force-pushed the detect-ips-better branch 12 times, most recently from c9dde99 to 661f51e on January 31, 2023 19:33
@mkowalski mkowalski marked this pull request as draft January 31, 2023 19:47
@openshift-ci bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Jan 31, 2023
@cybertron (Member) left a comment:
I need to come back and look at this when it's not the end of the day and I'm a little more lucid, but the broad strokes look good to me.

pkg/config/node.go (outdated, resolved review thread):
return strings.Contains(ip, ".") && net.ParseIP(ip) != nil
}

func IsIPv6Addr(ip string) bool {
Member:
Note that we already have an IsIPv6 function:

func IsIPv6(ip net.IP) bool {

Contributor (Author):
LOL, of course, I see it now. I will clean this up and fix it.

Contributor (Author):
The current implementation,

func IsIPv6(ip net.IP) bool {
	return ip.To4() == nil
}

is prone to giving wrong results when run against malformed IPs. I will take the liberty of changing it while keeping the same signature.
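
For illustration, one minimal way to harden the check while keeping the same signature (a sketch, not necessarily the exact change that was merged): a nil or malformed net.IP returns nil from both To4() and To16(), so it is no longer misreported as IPv6.

func IsIPv6(ip net.IP) bool {
	// Sketch only: require a valid IP (To16 != nil) that has no IPv4 form.
	return ip.To4() == nil && ip.To16() != nil
}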

// by detecting which of the local interfaces belongs to the same subnet as requested VIP.
// This interface can be used to detect what was the original machine network as it contains
// the subnet mask that we need.
machineNetwork, err := utils.GetLocalCIDRByIP(apiVip.String())
Member:
It makes me a little twitchy that we use machineNetwork later even if this returns an error. Maybe we just check for "" like you did elsewhere (since I think that's what machineNetwork will be in the error case)?

Contributor (Author):
I see two options to solve this, a hard one and a graceful one; let's see which seems wiser here.

1. If we fail to get the machine network, throw an error. This feels okay now, because I tend to say that for any given API VIP your node must have at least one subnet shared with the VIP subnet. On the other hand, when Control Plane across multiple subnets is introduced, this statement may no longer hold; I think in that scenario the logic below should not be executed as-is anyway, but I can see some potential for it to fail (which may also be good, because it would quickly show us what needs changing in runtimecfg to support this feature):

   machineNetwork, err := utils.GetLocalCIDRByIP(apiVip.String())
   if err != nil {
   	return []Backend{}, fmt.Errorf("Could not retrieve subnet for IP %s", apiVip.String())
   }

2. Inside the function doing the real calculation, just check whether any machine network was provided. If not, it makes no sense to use this logic, so skip the part of the code that uses it:

   func getNodeIpForRequestedIpStack(
   [...]
   	if addr == "" && machineNetwork != "" {
   		log.Debugf("For node %s can't find address using NodeInternalIP. Fallback to OVN annotation.", node.Name)
   [...]

Any preference?

Member:
I'm not sure this code will ever get called in a multiple subnet control plane (shorter name needed for this feature ;-). The internal loadbalancer will be disabled so we won't be building peer lists at all. The only question would be whether this gets called at some point by the DNS code since we're planning to keep that part. I don't think it does because the only references I can find to it are in the loadbalancer monitors.

So I think I'm good with making this an error. Keepalived can't run on a node that isn't on the VIP subnet so I can't see where we would need to handle that.
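
As an aside, a hypothetical sketch of what the OVN-annotation fallback discussed in this thread could look like; it is not the merged implementation. It assumes the k8s.ovn.org/host-addresses annotation holds a JSON array of IP strings and that machineNetwork is a CIDR string, both of which are assumptions made for illustration.

// Hypothetical sketch, not the merged code.
package config

import (
	"encoding/json"
	"fmt"
	"net"

	v1 "k8s.io/api/core/v1"
)

// addressFromOVNAnnotation returns the first OVN-published node address that
// falls inside machineNetwork (a CIDR such as "192.0.2.0/24").
func addressFromOVNAnnotation(node v1.Node, machineNetwork string) (string, error) {
	raw, ok := node.Annotations["k8s.ovn.org/host-addresses"] // assumed annotation key
	if !ok {
		return "", fmt.Errorf("node %s has no OVN host-addresses annotation", node.Name)
	}
	var addrs []string
	if err := json.Unmarshal([]byte(raw), &addrs); err != nil {
		return "", err
	}
	_, subnet, err := net.ParseCIDR(machineNetwork)
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && subnet.Contains(ip) {
			return a, nil
		}
	}
	return "", fmt.Errorf("no address of node %s is within %s", node.Name, machineNetwork)
}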

@mkowalski force-pushed the detect-ips-better branch 6 times, most recently from e844f3b to 5edf305 on February 3, 2023 16:19
@dcbw commented Feb 3, 2023

What are the cases where the IP won't be part of node.Status.Addresses? Would it be if the intended IP is on a second NIC that is not the same NIC as what was given to kubelet?

@mkowalski (Contributor, Author):

> What are the cases where the IP won't be part of node.Status.Addresses? Would it be if the intended IP is on a second NIC that is not the same NIC as what was given to kubelet?

Dual-stack on vSphere (or any other cloud provider, but we target only vSphere now). This is because kubelet cannot take --cloud-provider together with --node-ip when the latter contains dual-stack IPs. So in such a scenario --node-ip gets the IPv4 address, --cloud-provider gets vSphere and, as a result, Node.Status.Addresses contains only IPv4. Nothing to do really with multi-NIC topology at this moment.

But... our aim was to explicitly state that until kubernetes/enhancements#3706 is implemented, topologies where dual-stack is spread across multiple NICs are not supported, because in such a scenario I am not sure we are able to correctly unwrap & wrap traffic coming to the loadbalancer.

I somehow have a feeling that multi-NIC could just work (because internally every node knows how to talk to the others and the loadbalancer can internally use IPv4 even for incoming IPv6 traffic), but it's probably too cumbersome to test unless we have a customer with this specific topology.

@cybertron (Member):

> I somehow have a feeling that multi-NIC could just work (because internally every node knows how to talk to the others and the loadbalancer can internally use IPv4 even for incoming IPv6 traffic), but it's probably too cumbersome to test unless we have a customer with this specific topology.

Hmm, that's a good point. I was thinking kubelet would use its own internal logic to select IPv6, but since it doesn't get a v6 address at all in this scenario, that might not actually matter. Probably still safest not to support that, though, since I'm not positive what the behavior will be.

@mkowalski mkowalski marked this pull request as ready for review February 19, 2023 15:18
@openshift-ci bot removed the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Feb 19, 2023
@openshift-ci openshift-ci bot requested a review from cybertron February 19, 2023 15:18
@cybertron (Member) left a comment:
One minor question about the logging inline, but otherwise lgtm.

}
}

return ingressConfig, nil
}

func getNodeIpForRequestedIpStack(node v1.Node, filterIps []string, machineNetwork string) (string, error) {
log.SetLevel(logrus.DebugLevel)
Member:
If we're forcing this on, should these log messages just be Info level?

Contributor (Author):
Sure, just changed. I was thinking of a future where we want an env variable to control the log level to decrease verbosity, but it will be simple to lower the level from info to debug then.
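
As a side note, a hypothetical sketch of the env-variable-driven log level mentioned here; neither the variable name RUNTIMECFG_LOG_LEVEL nor this mechanism is part of the PR.

package config

import (
	"os"

	"github.com/sirupsen/logrus"
)

// configureLogLevel reads a (hypothetical) environment variable and falls back
// to Info when it is unset or not a valid logrus level.
func configureLogLevel() {
	level, err := logrus.ParseLevel(os.Getenv("RUNTIMECFG_LOG_LEVEL"))
	if err != nil {
		level = logrus.InfoLevel
	}
	logrus.SetLevel(level)
}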

openshift-ci bot commented Feb 22, 2023

@mkowalski: all tests passed!


@cybertron (Member) left a comment:
/lgtm

@openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Feb 22, 2023
openshift-ci bot commented Feb 22, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cybertron, mkowalski


Needs approval from an approver in each of these files:
  • OWNERS [cybertron,mkowalski]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 5847e89 into openshift:master Feb 22, 2023
@mkowalski mkowalski deleted the detect-ips-better branch February 23, 2023 10:59