Environment:
K8s cluster created with kops on AWS (not EKS), in a private topology.
CNI configured to amazon-vpc-cni via kops.
A few worker instance groups are in public subnets.
On a public-subnet node, one pod runs with hostNetwork: true and one DaemonSet pod runs without hostNetwork.
Issue:
With the default settings, the DaemonSet pod can reach the internet, but the hostNetwork pod's traffic gets SNATed, which breaks our application's communication path. Our application requires no NAT.
With AWS_VPC_K8S_CNI_EXTERNALSNAT=true, the pod with hostNetwork: true can reach the internet with no NAT performed, but the DaemonSet pod can no longer reach the internet.
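For reference, AWS_VPC_K8S_CNI_EXTERNALSNAT is toggled as an environment variable on the aws-node DaemonSet in kube-system; a minimal sketch of the relevant manifest fragment (variable name from the report, manifest shape assumed):

```yaml
# Fragment of the aws-node DaemonSet container spec (kube-system namespace).
env:
  - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
    value: "true"   # node-wide: disables the CNI's SNAT iptables rules
```

Note that the setting is per node, not per pod, which is why flipping it changes behavior for both the hostNetwork pod and the DaemonSet pod at once.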
Logs:
Details:
The application with hostNetwork: true binds to port 60000 and sends a packet to a remote host on UDP port 19302. The application expects the return packet on port 60000, but the source port is SNATed to 24343.
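The application depends on symmetric UDP: the reply must come back to the same source port it sent from. A minimal sketch of that expectation, demonstrated on loopback (the real remote listens on UDP 19302; names and ports other than 60000/19302 are illustrative):

```python
import socket

# "Server" stands in for the remote host (in the report: UDP port 19302).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)
server_addr = server.getsockname()

# "Client" models the hostNetwork pod: it binds a fixed source port
# (60000 in the report) and expects the reply to land on that same port.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 60000))
client.settimeout(5)
client.sendto(b"binding-request", server_addr)

# Without SNAT in the path, the server sees source port 60000 and replies
# to it. With SNAT, it would instead see a rewritten port (24343 in the
# tcpdump above), and the reply would miss the application's socket.
data, peer = server.recvfrom(1024)
server.sendto(b"binding-response", peer)
reply, _ = client.recvfrom(1024)
```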
tcpdump output:
tcpdump -i any -n udp port 19302
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
17:45:13.238527 IP 172.20.0.247.24343 > 74.125.196.127.19302: UDP, length 20
17:45:13.248664 IP 172.20.0.247.24344 > 74.125.196.127.19302: UDP, length 20
17:45:13.295939 IP 74.125.196.127.19302 > 172.20.0.247.24343: UDP, length 32
Need info:
Is there a workaround or setting so that pods with hostNetwork: true get no NAT, pods without hostNetwork get NAT, and packets are routed via the internet gateway when nodes are in a public subnet?
Hi @kishorekumark, apologies for the long delay in responding to you here. We're actually trying to determine precisely what you are requesting. Are you asking for a feature where, instead of a single binary environment variable controlling whether the CNI plugin on a node performs SNAT, you could select SNAT for only some pods on the node and not others?
conntrack output for the flow, showing the original tuple leaving with sport=60000 while the reply is addressed to dport=24343:
conntrack -L | grep -i "24343"
udp 17 171 src=172.20.0.247 dst=74.125.196.127 sport=60000 dport=19302 src=74.125.196.127 dst=172.20.0.247 sport=19302 dport=24343 [ASSURED] mark=128 use=1
conntrack v1.4.4 (conntrack-tools): 370 flow entries have been shown.
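The SNAT is visible by comparing the two direction tuples in that conntrack entry; a small sketch pulling them apart (entry string copied from the output above):

```python
import re

# The conntrack line quoted above: first src/dst/sport/dport group is the
# original direction, the second is the reply direction.
entry = ("udp 17 171 src=172.20.0.247 dst=74.125.196.127 sport=60000 dport=19302 "
         "src=74.125.196.127 dst=172.20.0.247 sport=19302 dport=24343 "
         "[ASSURED] mark=128 use=1")

sports = [int(p) for p in re.findall(r"sport=(\d+)", entry)]
dports = [int(p) for p in re.findall(r"dport=(\d+)", entry)]

orig_sport = sports[0]   # port the application actually bound (60000)
reply_dport = dports[1]  # port the remote host replies to (24343)
snat_applied = orig_sport != reply_dport
```

If no NAT were in the path, the reply-direction dport would equal the original sport; here they differ, confirming the source-port rewrite.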