UDP connections from pods to daemonset are lost when daemonset is replaced #373
Comments
When we observed similar behavior, it wasn't the fault of the CNI. In our case, the client was doing two unusual things:
The effect of [1] was to cause the kernel to "pin" the UDP flow: it only went through the iptables rules once, when the flow was first established. Unfortunately, I don't recall how we solved it: UDP has no in-band way to signal that the receiver has gone away and the client should try reconnecting.
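The lack of an in-band failure signal is easy to see with a small sketch using plain Python sockets (the port and statsd-style payload are illustrative): a connect()ed UDP client keeps sending successfully after the receiver disappears, which is exactly why a pinned conntrack entry can blackhole traffic indefinitely.

```python
import socket

# A UDP "server" standing in for the daemonset pod, on an ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# A connect()ed UDP client: on a real node, the first datagram of this
# flow traverses iptables once, and conntrack remembers the verdict.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.connect(("127.0.0.1", port))

cli.send(b"metric:1|c")
data, _ = srv.recvfrom(64)
print(data)  # b'metric:1|c'

# The receiver disappears, as when the daemonset is replaced.
srv.close()

# No in-band error reaches the client: this send() still succeeds. At
# best the kernel may surface an ICMP port-unreachable as an error on a
# *later* operation, and only because the socket is connected.
try:
    cli.send(b"metric:2|c")
except OSError:
    pass
cli.close()
```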
In our case, we're sending UDP flows to the host's IP, not the server's name, so we think DNS is irrelevant to our issue.
That makes sense. From #153, it seems the hostPort handling is delegated to the upstream portmap plugin. That's billed as:
Which suggests to me that the behavior is "expected," or at least an issue more fixable in one of the upstream bits.
@sethp-nr Thanks for suggesting the workaround :)
For now, I think we can make one of these choices:
As I'm not familiar with either portmap or vpc-cni, I'm not sure which to fix. Which would be the best option?
Well, since this CNI delegates to portmap's implementation for host ports, it seems to me that the right place would be the upstream project. In fact, it looks like there's already an issue about this case: containernetworking/plugins#123
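For reference, a common manual mitigation for stale UDP conntrack entries in this class of bug (this is an assumption on my part, not necessarily the workaround discussed above) is to delete the matching entries with conntrack-tools so the next datagram re-traverses the DNAT rules; the port 8125 below is this issue's hostPort:

```shell
# Requires conntrack-tools and root; deletes conntrack entries for UDP
# destination port 8125 so the next packet re-evaluates iptables DNAT.
# Guarded so it is a harmless no-op on machines without the tool.
if command -v conntrack >/dev/null 2>&1; then
  conntrack -D -p udp --dport 8125 || true
else
  echo "conntrack not installed; skipping"
fi
```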
Seems I should go there to discuss :) Closing this issue since it seems portmap plugin is responsible for this. Thanks a lot @sethp-nr :) |
Background
Problem
We deployed a daemonset that accepts UDP packets through hostPort 8125. Initially, other pods correctly delivered packets to the daemonset's pods: each pod sends UDP packets to its host's IP, and the host redirects them to the daemonset pod running there.
Then we replaced and redeployed the daemonset using a YAML file identical to the previous daemonset's. After redeploying, the replaced daemonset no longer receives packets from the other pods. The pods keep sending, but the packets are never delivered to the daemonset.
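For context, the host-side redirect described above is implemented as a DNAT rule; a sketch of the kind of nat-table entry the portmap plugin installs (the chain name and pod IP here are assumptions for illustration, not taken from this cluster):

```
# hypothetical nat-table entry; 10.0.1.5 stands in for a daemonset pod IP
-A CNI-HOSTPORT-DNAT -p udp --dport 8125 -j DNAT --to-destination 10.0.1.5:8125
```

When the pod is replaced, the rule is rewritten with the new pod's IP, but an existing conntrack entry for the UDP flow can still point at the old destination.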
How to Reproduce
kubectl replace --force -f daemonset.yml
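A minimal daemonset.yml sketch of the setup (the names, labels, and image are hypothetical stand-ins; the original manifest is not shown here — only the UDP hostPort 8125 comes from this issue):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: udp-receiver              # hypothetical name
spec:
  selector:
    matchLabels:
      app: udp-receiver
  template:
    metadata:
      labels:
        app: udp-receiver
    spec:
      containers:
        - name: receiver
          image: example/udp-receiver:latest   # hypothetical image
          ports:
            - containerPort: 8125
              hostPort: 8125                   # the hostPort from this issue
              protocol: UDP
```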
Expected Behavior
The replaced daemonset should also accept the packets. In other words, the CNI must reroute the packets to the newly deployed daemonset pods.
Trivia
Should you need more information, please let me know by mentioning me.
Thanks in advance.