The servicelb DaemonSet should support setting hostNetwork #7798
If you need the original host address, set externalTrafficPolicy: Local on the Service.
Have you considered having servicelb solve this at the TPROXY or TOA (tcp_option_address) level?
I'm not sure why that would be necessary? Just use the externalTrafficPolicy option as described in the docs.
The "externalTrafficPolicy" set to "local" prevents the load balancer from being used across multiple nodes, which defeats the purpose of having a load balancer. If we can combine it with TPROXY or TOA to address this issue, I feel that it would make the solution even more perfect. |
It does not prevent it from being used across multiple nodes, it just requires you to run a pod on each of the nodes that you want to be able to expose the service on. This is the approach that is recommended by Kubernetes, as per the document I linked above. ServiceLB is very simple and uses nothing more than a few iptables rules; we are not planning on adding complexity by way of enabling any experimental TCP options that are not widely deployed or supported. I'm not even sure how we would make use of either of those options via iptables alone.
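For reference, a minimal sketch of a Service configured the way described above; the name, selector, and ports are hypothetical placeholders, and only the type and externalTrafficPolicy fields matter here:

```yaml
# Sketch of a LoadBalancer Service using externalTrafficPolicy: Local.
# With this policy, traffic is only accepted on nodes that run a ready
# backend pod, which is what preserves the original client source IP.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress              # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserve the client source IP
  selector:
    app: my-ingress             # hypothetical selector
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
```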
I noticed that if you set externalTrafficPolicy to Local, you will only be able to access the LB using the main interface IP on the node. If you ssh to the node and curl against all node interface IPs, you will find curl only works against the main NIC IP; all the rest just hang. Changing externalTrafficPolicy to Cluster makes all host IPs work.
Looks like what I observed is a known issue: #7637, but that only mentioned loopback, so I am not sure if that fix also covers other non-primary interfaces.
OK, I tried manually bumping klipper-lb from v0.4.3 to v0.4.4, and now externalTrafficPolicy: Local no longer hangs my curl with non-primary IPs, so that issue seems to be fixed. BUT the true client IP only shows up in the ingress logs if inbound requests use the primary IP of the host. I have an OpenVPN interface on the host, and if I curl the LB using the OpenVPN interface IP, even though externalTrafficPolicy is set to Local, the ingress log still shows the servicelb pod IP, the same behavior as if I set externalTrafficPolicy to Cluster.
This configuration makes the service inaccessible; you must remove the externalTrafficPolicy: Local setting for it to work. I recommend performing a fresh installation of k3s to test it.
It would be perfect if klipper-lb could support tcp_option_address; most IDCs in China support this mode. You can take a look at this project: https://github.com/Huawei/TCP_option_address.
This bug is already fixed in #7561
klipper-lb is just L4. I haven't heard of "tcp_option_address", but I guess it is the same as the TCP proxy protocol. You can enable this in your ingress; e.g., here is the doc for traefik. You can enable it there if your IDC supports it. Here are some more discussions. @brandond Since k3s svclb/klipper-lb is the entrypoint for the inbound packet and knows the true client IP, is it possible to configure it to inject the proxy protocol header for the downstream ingress to consume?
During my testing I encountered an issue where all source IPs come from the address of the cni0 interface (RemoteAddr: 10.42.0.1). Even if I modify svclb to support tcp_option_address, it seems it would have no effect at this layer.
Klipper-lb does not actually terminate the connection, so it cannot do things like support proxy protocol or adding headers. It just uses iptables to redirect packets to a service or pod. The tcp option address thing is sketchy for several reasons, and is not supported by any CNI or application that I'm aware of, and also requires use of a custom kernel module that doesn't appear to be used outside a handful of Chinese VPS providers. ServiceLB is supposed to be a very simple no-frills loadbalancer service controller. It won't do everything for everyone. There are going to be many cases where you want something fancier like kube-vip or metallb.
Thanks @brandond I will wait for the next backport release.
Were you able to reproduce this?
I'm pretty sure this is because kubernetes only allows a single private IP per address family per node. If traffic comes in to another address (one that is not the address shown as the node's INTERNAL-IP), it won't be forwarded properly. I don't believe this is something that we can fix in servicelb, as the kubelet, cni, and servicelb rules all rely on this single-valued node IP field to properly forward packets.
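As a hedged aside (not stated in this thread): if you want a particular interface's address to be the one registered as the node's INTERNAL-IP, k3s lets you choose it explicitly, for example via its config file. A minimal sketch, assuming 10.8.0.5 is a placeholder for the address of the interface you want Kubernetes to treat as the node IP:

```yaml
# /etc/rancher/k3s/config.yaml — sketch only; the addresses are placeholders.
# node-ip selects which local address is registered as the node's INTERNAL-IP,
# the single per-family address that kubelet, the CNI, and servicelb rely on.
node-ip: "10.8.0.5"
# Optionally register a separate public address as the node's EXTERNAL-IP:
# node-external-ip: "203.0.113.10"
```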
@brandond There is still a bug in the service part: whether or not I add externalTrafficPolicy: Local, RemoteAddr is 10.42.0.1, and everything is forwarded through cni0.
only one node
The servicelb DaemonSet should support setting hostNetwork. Currently the load balancer sees RemoteAddr: 10.42.0.1, which is the CNI's IP address, making it impossible to obtain the real client IP address.
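To make the request concrete, here is a rough sketch of a DaemonSet running the klipper-lb image with hostNetwork enabled. This is only an illustration of the requested setting, not the manifest k3s actually generates for svclb; the name, labels, and port are placeholders, and real svclb pods also carry per-service port and environment configuration:

```yaml
# Illustration only: a DaemonSet whose pods join the host network namespace,
# so inbound connections would be seen with the real client address rather
# than the CNI bridge address (10.42.0.1).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: svclb-example            # hypothetical name
spec:
  selector:
    matchLabels:
      app: svclb-example
  template:
    metadata:
      labels:
        app: svclb-example
    spec:
      hostNetwork: true          # the setting this issue asks servicelb to support
      containers:
        - name: lb
          image: rancher/klipper-lb:v0.4.4
          ports:
            - containerPort: 80
              protocol: TCP
```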