
NAT issue with pod running in public-subnet with hostnetwork true #519

Closed
kishorekumark opened this issue Jul 1, 2019 · 3 comments
Comments

@kishorekumark

Hi,
Environment:

  • K8s cluster created with kops in AWS (not EKS), private topology.
  • CNI configured to amazonvpc via kops.
  • A few worker instance groups are in a public subnet.
  • On a public-subnet node, one pod runs with hostNetwork: true and a daemonset runs without hostNetwork.

Issue:

  • With the default settings, the daemonset can reach the internet, but the host-network pod's traffic gets SNATed, which breaks our application's communication path. Our application requires no NAT.
  • With AWS_VPC_K8S_CNI_EXTERNALSNAT=true, the pod with hostNetwork can reach the internet and no NAT is performed, but the daemonset can no longer reach the internet.
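The two outcomes above follow from the toggle being node-wide. A minimal sketch of that behavior (an assumption on my part, consistent with the "single binary environment variable" phrasing later in this thread: the flag is read once per node and applied to all outbound traffic, so host-network pods and regular pods cannot be treated differently):

```python
def traffic_is_snated(external_snat: bool) -> bool:
    # Default (False): the CNI installs an iptables SNAT rule that rewrites
    # all traffic leaving the node for non-VPC destinations.
    # True: the rule is removed for everything, relying on an external NAT.
    return not external_snat

# Default: the host-network pod's source port gets rewritten (broken),
# while the daemonset is SNATed to the node IP and reaches the internet.
default = traffic_is_snated(external_snat=False)

# EXTERNALSNAT=true: no rewrite for the host-network pod (works), but the
# daemonset's pod IP leaves un-SNATed and cannot reach the internet.
external = traffic_is_snated(external_snat=True)

print(default, external)  # True False
```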

Details:

  • The application with hostNetwork: true binds to port 60000 and sends a packet to a remote host on UDP port 19302. It expects the return packet on port 60000, but the source port is SNATed to 24343.
    tcpdump output:

tcpdump -i any -n udp port 19302

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
17:45:13.238527 IP 172.20.0.247.24343 > 74.125.196.127.19302: UDP, length 20
17:45:13.248664 IP 172.20.0.247.24344 > 74.125.196.127.19302: UDP, length 20
17:45:13.295939 IP 74.125.196.127.19302 > 172.20.0.247.24343: UDP, length 32

conntrack output:

conntrack -L | grep -i "24343"

udp 17 171 src=172.20.0.247 dst=74.125.196.127 sport=60000 dport=19302 src=74.125.196.127 dst=172.20.0.247 sport=19302 dport=24343 [ASSURED] mark=128 use=1
conntrack v1.4.4 (conntrack-tools): 370 flow entries have been shown.
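The conntrack entry above can be picked apart programmatically. A small Python sketch, using only the line shown, that extracts the original and reply tuples and confirms the source port 60000 was rewritten to 24343 on the way out:

```python
import re

# The conntrack entry shown above, verbatim.
line = ("udp 17 171 src=172.20.0.247 dst=74.125.196.127 sport=60000 dport=19302 "
        "src=74.125.196.127 dst=172.20.0.247 sport=19302 dport=24343 "
        "[ASSURED] mark=128 use=1")

# key=value pairs appear in two groups: first the original direction,
# then the reply direction as seen from the remote peer.
pairs = re.findall(r"(\w+)=([\w.]+)", line)
orig = dict(pairs[:4])    # src, dst, sport, dport as sent by the pod
reply = dict(pairs[4:8])  # src, dst, sport, dport of the reply

# The pod sent from port 60000, but the peer replies to 24343:
# the SNAT rule rewrote the source port on the way out.
print(orig["sport"], "->", reply["dport"])  # 60000 -> 24343
```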

Need info:
Is there a workaround or setting that gives no NAT for pods with hostNetwork: true, keeps NAT for pods without hostNetwork, and routes packets via the internet gateway when nodes are in a public subnet?

@mogren
Contributor

mogren commented Jul 2, 2019

Hi @kishorekumark, this sounds a lot like #508. Thanks for raising the question, we are aware of this issue.

@mogren mogren added the question label Jul 2, 2019
@jaypipes
Contributor

jaypipes commented Nov 6, 2019

Hi @kishorekumark, apologies for the long delay in responding to you here. We're trying to determine precisely what the request is. Are you asking for a feature where, instead of a single binary environment variable controlling whether the CNI plugin does SNAT for the whole node, you could enable SNAT for only some pods on the node and not others?

@mogren
Contributor

mogren commented Mar 11, 2020

@kishorekumark Is this an issue with peered VPCs? Would it be solved by using AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS that is available in v1.6.0?
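For reference, the semantics suggested for AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS (to my understanding: destinations inside any listed CIDR are exempted from SNAT, while everything else is still SNATed to the node's primary IP) can be sketched with the stdlib ipaddress module. The CIDR values here are illustrative only:

```python
import ipaddress

def snat_skipped(dst_ip, exclude_cidrs):
    # Hedged sketch: traffic to a destination inside any excluded CIDR
    # leaves with the pod IP unchanged (no SNAT); everything else is
    # still SNATed.
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in exclude_cidrs)

# A destination in a hypothetical peered VPC range is exempt from SNAT:
print(snat_skipped("10.1.2.3", ["10.1.0.0/16"]))        # True
# An internet destination (from the tcpdump above) is still SNATed:
print(snat_skipped("74.125.196.127", ["10.1.0.0/16"]))  # False
```

Note this is per-destination, not per-pod, so it would not by itself exempt a host-network pod's internet traffic as asked for in this issue.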

@mogren mogren closed this as completed Apr 22, 2020

3 participants