
The servicelb DaemonSet should support setting hostNetwork #7798

Closed
w7team opened this issue Jun 18, 2023 · 18 comments

@w7team commented Jun 18, 2023

The servicelb DaemonSet should support setting hostNetwork. Today the load balancer sees RemoteAddr: 10.42.0.1, which is the CNI's IP address, so the real client IP address cannot be obtained.

@brandond (Member) commented Jun 19, 2023

If you need the original client address, set externalTrafficPolicy on the service to Local. This will ensure that traffic only goes to pods local to the node, which prevents the original source address from being obscured by SNAT.

Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
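
For example, a minimal sketch of such a Service (the name, selector, and port are hypothetical; externalTrafficPolicy: Local is the relevant setting):

apiVersion: v1
kind: Service
metadata:
  name: my-app                    # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP
  selector:
    app: my-app                   # hypothetical selector
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80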

@w7team (Author) commented Jun 19, 2023

If you need the original client address, set externalTrafficPolicy on the service to Local. This will ensure that traffic only goes to pods local to the node, which prevents the original source address from being obscured by SNAT.

Have you considered having servicelb solve this at the TPROXY or TOA (tcp_option_address) level?

@brandond (Member) commented Jun 19, 2023

I'm not sure why that would be necessary. Just use the externalTrafficPolicy option as described in the docs.

@w7team (Author) commented Jun 19, 2023

The "externalTrafficPolicy" set to "local" prevents the load balancer from being used across multiple nodes, which defeats the purpose of having a load balancer. If we can combine it with TPROXY or TOA to address this issue, I feel that it would make the solution even more perfect.

@brandond (Member) commented Jun 19, 2023

It does not prevent the service from being used across multiple nodes; it just requires you to run a pod on each of the nodes that you want to expose the service on. This is the approach recommended by Kubernetes, per the document I linked above.

ServiceLB is very simple and uses nothing more than a few iptables rules; we are not planning to add complexity by enabling experimental TCP options that are not widely deployed or supported. I'm not even sure how we would make use of either of those options via iptables alone.

@gfrankliu commented Jun 19, 2023

I noticed that if you set externalTrafficPolicy to Local, you can only access the LB using the main interface IP on the node. If you ssh to the node and curl against all of the node's interface IPs, curl only works against the main NIC IP; all the rest just hang. Changing externalTrafficPolicy to Cluster makes all host IPs work.

  • k3s version v1.26.5+k3s1 (7cefebea)

@gfrankliu commented Jun 19, 2023

Looks like what I observed is a known issue: #7637, but that one only mentions loopback, so I am not sure whether the fix will also cover other non-primary interfaces.

@gfrankliu

OK, I tried manually bumping klipper-lb from v0.4.3 to v0.4.4, and externalTrafficPolicy: Local no longer hangs my curl on non-primary IPs, so that issue seems to be fixed. BUT the true client IP only shows in the ingress logs if inbound requests use the primary IP of the host. I have an OpenVPN interface on the host, and if I curl the LB using the OpenVPN interface IP, even though externalTrafficPolicy is set to Local, the ingress log still shows the servicelb pod IP, the same behavior as if I had set externalTrafficPolicy to Cluster.

@w7team (Author) commented Jun 20, 2023

spec:
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: web
    nodePort: 30019
  - name: websecure
    protocol: TCP
    port: 443
    targetPort: websecure
    nodePort: 30062
  selector:
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/name: traefik
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local

With this configuration the service is unreachable; it only works after removing externalTrafficPolicy: Local. You can test this with a fresh installation of k3s.

@w7team (Author) commented Jun 20, 2023

It would be perfect if klipper-lb could support tcp_option_address; most IDCs in China support this mode. You can take a look at this project: https://github.com/Huawei/TCP_option_address

@gfrankliu

With this configuration the service is unreachable; it only works after removing externalTrafficPolicy: Local. You can test this with a fresh installation of k3s.

This bug is already fixed in #7561
@brandond Can you provide a release with the fix?

It would be perfect if klipper-lb could support tcp_option_address; most IDCs in China support this mode. You can take a look at this project: https://github.com/Huawei/TCP_option_address

klipper-lb is just L4. I haven't heard of "tcp_option_address", but I guess it is the same as the TCP proxy protocol. You can enable this in your ingress; e.g., here is the doc for Traefik. You can enable it there if your IDC supports it (see the sketch at the end of this comment). Here are some more discussions.

@brandond Since k3s svclb/klipper-lb is the entrypoint for the inbound packet and knows the true client IP, is it possible to configure it to inject the tcp proxy header for the downstream ingress to consume?
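
For reference, a minimal sketch of turning on proxy protocol for the k3s-packaged Traefik via a HelmChartConfig. This only helps if something upstream actually sends proxy-protocol headers; the additionalArguments field comes from the upstream Traefik chart, and the trusted CIDR below is an assumption:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      # Trust proxy-protocol headers only from this range (assumed cluster CIDR)
      - "--entryPoints.web.proxyProtocol.trustedIPs=10.42.0.0/16"
      - "--entryPoints.websecure.proxyProtocol.trustedIPs=10.42.0.0/16"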

@w7team (Author) commented Jun 20, 2023

During my testing I found an issue: all source IPs come from the cni0 address, RemoteAddr: 10.42.0.1. At this layer, even if I modify svclb to support tcp_option_address, it seems it would have no effect.

Hostname: whoami-app-c2bnuz51vz-654784b8fc-bblkm
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.13
IP: fe80::7c5a:63ff:fec5:f6a7
RemoteAddr: 10.42.0.1:42098
GET / HTTP/1.1
Host: whoami
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.0.1
X-Forwarded-Host: whoami
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-7f57b8d797-p9bg6
X-Real-Ip: 10.42.0.1

@brandond (Member) commented Jun 20, 2023

Klipper-lb does not actually terminate the connection, so it cannot do things like support proxy protocol or add headers. It just uses iptables to redirect packets to a service or pod.

The TCP option address approach is sketchy for several reasons: it is not supported by any CNI or application that I'm aware of, and it requires a custom kernel module that doesn't appear to be used outside a handful of Chinese VPS providers.

ServiceLB is supposed to be a very simple, no-frills load balancer service controller. It won't do everything for everyone. There are going to be many cases where you want something fancier like kube-vip or MetalLB.

@brandond (Member)

This bug is already fixed in #7561
@brandond Can you provide a release with the fix?

See the milestone on the issue you linked, or on any of the backport issues that are cross-linked lower down.

@gfrankliu

Thanks @brandond, I will wait for the next backport release.

  • OK, I tried manually bumping klipper-lb from v0.4.3 to v0.4.4, and externalTrafficPolicy: Local no longer hangs my curl on non-primary IPs, so that issue seems to be fixed. BUT the true client IP only shows in the ingress logs if inbound requests use the primary IP of the host. I have an OpenVPN interface on the host, and if I curl the LB using the OpenVPN interface IP, even though externalTrafficPolicy is set to Local, the ingress log still shows the servicelb pod IP, the same behavior as if I had set externalTrafficPolicy to Cluster.

Were you able to reproduce this?

@brandond (Member)

true client IP only shows in ingress logs if inbound requests use the primary IP of the host

I'm pretty sure this is because Kubernetes only allows a single private IP per address family per node. If traffic comes in on another address (one that is not shown as the INTERNAL-IP in kubectl get node -o yaml), it is handled differently by the rules.

I don't believe this is something that we can fix in servicelb, as the kubelet, cni, and servicelb rules all rely on this single-valued node IP field to properly forward packets.
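
For context, the node IP that those rules key on can be set explicitly at install time. A minimal sketch using the k3s config file; the addresses are placeholders:

# /etc/rancher/k3s/config.yaml
node-ip: 192.0.2.10              # becomes the node's INTERNAL-IP (placeholder)
node-external-ip: 203.0.113.10   # optional EXTERNAL-IP (placeholder)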

@w7team (Author) commented Jun 21, 2023

@brandond There is still a bug in the service part. Whether or not I add externalTrafficPolicy: Local, RemoteAddr is always 10.42.0.1; everything is forwarded through cni0.

@w7team (Author) commented Jun 21, 2023

There is only one node:

---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  uid: ff94b426-357c-45e6-b787-8c88f405f715
  resourceVersion: "4941"
  creationTimestamp: "2023-06-20T08:58:50Z"
  labels:
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-21.2.1_up21.2.0
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
spec:
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: web
    nodePort: 31591
  - name: websecure
    protocol: TCP
    port: 443
    targetPort: websecure
    nodePort: 31680
  selector:
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/name: traefik
  clusterIP: 10.43.216.47
  clusterIPs:
  - 10.43.216.47
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local
  healthCheckNodePort: 30199
  ipFamilies:
  - IPv4
  ipFamilyPolicy: PreferDualStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
status:
  loadBalancer:
    ingress:
    - ip: 118.xx.xxx.170
...
