ipv6 port-forward has a consistent 36s delay on each request #1560
@aojea maybe something with the portmap plugin? you somewhat recently PRed changes to this upstream right? |
... seems odd that the port forward logs are referencing ipv4 here |
... also bringing me back to this issue again containerd/cri#730 |
/assign |
@BenTheElder you are right, socat is executing a tcp4 listen https://github.com/containerd/cri/blob/master/pkg/server/sandbox_portforward_unix.go#L71. The only reason this is working is because socat can bridge ipv4 to ipv6. Also, the delay is really a containerd thing I think, check: |
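For illustration only, here is a minimal Go sketch of the difference being discussed; this is not the actual containerd/cri code, and the helper name and port are made up:

```go
package main

import "fmt"

// buildSocatArgs returns the argv for a socat process that bridges the
// port-forward stream (stdin/stdout) to a local port in the sandbox netns.
func buildSocatArgs(port int32, ipv4Only bool) []string {
	family := "TCP" // let socat resolve localhost as either v4 or v6
	if ipv4Only {
		// Pinning to TCP4 is what the linked line does; per the comment
		// above, on an IPv6 cluster this only works because socat can
		// bridge the address families.
		family = "TCP4"
	}
	return []string{"socat", "-", fmt.Sprintf("%s:localhost:%d", family, port)}
}

func main() {
	fmt.Println(buildSocatArgs(8080, true))  // [socat - TCP4:localhost:8080]
	fmt.Println(buildSocatArgs(8080, false)) // [socat - TCP:localhost:8080]
}
```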
I can't reproduce this, @howardjohn. My observation is that the time between the events
and
is the time that the port-forward is being used. You can reproduce it with iperf, for example, since you can control how long the data is going through the network pipe.
and we can see the following events in the containerd log on the node that has the iperf pod
if we run it for 17 seconds:
the event gap is 17 seconds
so, 36 seconds should be something that is using the pipe. Just curious, what are you using to check the port-forward functionality? However, there are several things wrong that we should fix in containerd/cri, like socat and TCP4, thanks for pointing them out here |
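As a rough stand-in for the iperf check (assumptions: a kubectl port-forward is already exposing the pod on localhost:5201, and both that port and the 17s duration are arbitrary example values), one can hold a single connection through the forward open for a known time and compare it with the gap between the containerd events:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the locally forwarded port, which starts a port-forward
	// stream to the pod.
	conn, err := net.Dial("tcp", "127.0.0.1:5201")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	// Keep the stream in use for a fixed, known duration; the containerd
	// event gap on the node should roughly match this.
	hold := 17 * time.Second
	time.Sleep(hold)
	conn.Close()
	fmt.Printf("held the forwarded connection open for %s\n", hold)
}
```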
I tried 3 different HTTP-based applications: 2 Istio debug endpoints and httpbin. All 3 are in different languages, etc., but took the same 36s. I'll see if I can repro on another machine and with iperf |
I can reproduce on another machine, but that isn't saying much as both my machines are similar (gLinux). With the iperf setup above:
^ exits in 8 seconds; containerd logs show a 36s gap |
/lifecycle active |
/kind external |
containerd/cri#1470 just merged, but it will be a bit before this is in containerd I think. Once it's in containerd we can pull new binaries into kind and see how that goes. |
containerd/containerd@65df60b this is now in containerd @ HEAD, will see about pulling it in. |
triggered another nightly build, pulling it in via #1599 |
I think this should be fixed if using a node image built with HEAD |
we had to roll back containerd due to #1634 |
blocked on #1637, we're making progress on the issue. |
The first issue in #1637 was fixed, but then we discovered that the containerd upgrade was also the cause of flakiness in our CI. We think we've tracked that down to an upstream timeout change, with replication in non-kind-based CI, but there's no ETA on getting it fixed. Once that's done we can look at rolling forward again to pick up the fix for this issue. Until then we're stuck at <= 1.3.4 (we're rather certain the bug was introduced shortly after that, and the flake is due to errors deleting containers ...) |
this should be fixed now. |
Is this fix in the latest v0.11.1? Or is it still pending release? |
When deploying an ipv6 kind cluster, all requests through kubectl port-forward are delayed by exactly 36s. This can be reproduced with a variety of backend pods.
containerd logs during request:
kubelet seems to have no relevant logs
api-server seems to have no relevant logs
Pod <-> Pod traffic and calling the /proxy endpoint are all fast - just port-forward is slow
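For reference, the 36s above is plain end-to-end request latency through the forward. A minimal sketch of that measurement (assuming the port-forward is already running and exposing an HTTP backend on localhost:8080; both are example values, not taken from this report):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	start := time.Now()
	// One request through the locally forwarded port.
	resp, err := http.Get("http://127.0.0.1:8080/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain the body so the full transfer is timed
	fmt.Printf("%s in %s\n", resp.Status, time.Since(start))
}
```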