Connection tracking of VXLAN UDP packets must be disabled #2015
yoheiueda added a commit to yoheiueda/cloud-api-adaptor that referenced this issue on Aug 30, 2024
This patch disables connection tracking of VXLAN UDP packets to prevent the conntrack table from filling up. The Linux connection tracking system (conntrack) tracks UDP packets as connections by default. This works for ordinary UDP communication, but not for VXLAN UDP packets, since the destination port of each VXLAN UDP packet is fixed (4789) in both directions while the source port is chosen randomly. Such a packet flow never forms a tracked connection, so every UDP packet of a VXLAN tunnel is treated as the first packet of a new UDP stream. If the conntrack table fills up, it may affect other network connections that need NAT. This patch alleviates such problems. Fixes confidential-containers#2015 Signed-off-by: Yohei Ueda <[email protected]>
yoheiueda added a commit to yoheiueda/cloud-api-adaptor that referenced this issue on Sep 6, 2024
This patch disables connection tracking of VXLAN UDP packets to prevent the conntrack table from filling up on both the worker node and the per-pod VMs. The Linux connection tracking system (conntrack) tracks UDP packets as connections by default. This works for ordinary UDP communication, but not for VXLAN UDP packets, since the destination port of each VXLAN UDP packet is fixed (4789) in both directions while the source port is chosen randomly. Such a packet flow never forms a tracked connection, so every UDP packet of a VXLAN tunnel is treated as the first packet of a new UDP stream. If the conntrack table fills up, it may affect other network connections that need NAT. This patch alleviates such problems. Fixes confidential-containers#2015 Signed-off-by: Yohei Ueda <[email protected]>
bpradipt pushed a commit that referenced this issue on Sep 6, 2024
This patch disables connection tracking of VXLAN UDP packets to prevent the conntrack table from filling up on both the worker node and the per-pod VMs. The Linux connection tracking system (conntrack) tracks UDP packets as connections by default. This works for ordinary UDP communication, but not for VXLAN UDP packets, since the destination port of each VXLAN UDP packet is fixed (4789) in both directions while the source port is chosen randomly. Such a packet flow never forms a tracked connection, so every UDP packet of a VXLAN tunnel is treated as the first packet of a new UDP stream. If the conntrack table fills up, it may affect other network connections that need NAT. This patch alleviates such problems. Fixes #2015 Signed-off-by: Yohei Ueda <[email protected]>
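A minimal sketch of the general approach, assuming plain iptables raw-table rules on UDP port 4789; the actual patch may scope its rules differently (e.g. per interface or address), so this only illustrates the mechanism:

```
# Sketch: skip connection tracking for VXLAN UDP (port 4789) in both directions.
# NOTRACK in the raw table prevents conntrack entries from ever being created.
iptables -t raw -A PREROUTING -p udp --dport 4789 -j NOTRACK   # packets arriving at the node
iptables -t raw -A OUTPUT -p udp --dport 4789 -j NOTRACK       # packets generated locally
```

With rules like these in place, VXLAN packets no longer consume conntrack table entries, so tunnel traffic alone cannot exhaust the table.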
The Linux connection tracking system (conntrack) tracks UDP packets as connections by default. This works for ordinary UDP communication, but not for VXLAN UDP packets, since the destination port of each VXLAN UDP packet is fixed (4789) in both directions while the source port is chosen randomly. Such a packet flow never forms a tracked connection, so every UDP packet of a VXLAN tunnel is treated as the first packet of a new UDP stream.
We can confirm this behavior with the conntrack command, which shows a large number of "UNREPLIED" conntrack entries like the following. When the conntrack table fills up, it may affect other network connections that need NAT.
This issue is reported in detail in projectcalico/calico#8934, and Calico has already implemented a solution for it.