Service port forwarding recovery on restarted pods #686
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@jjfmarket: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I also see this behavior: my port forwards start failing after I restart the pod that was being forwarded to.
/reopen
@jjfmarket: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Isn't there any suggested approach for implementing this automatic recovery?
/reopen
@brianpursley: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I was taking a look at this a little bit today and I think this is a legitimate issue. The problem seems to be that port forwarding enters some sort of unrecoverable state after it is no longer able to communicate with the pod it was connected to, and yet it does not fail with an exit code either. Here are my steps to reproduce (use two terminals):
Terminal 1: create a pod to forward to.
Terminal 2: start kubectl port-forward against that pod.
Open a browser or curl to make some requests to http://localhost:8080 and verify that port forwarding is working.
Terminal 1: delete the pod (and recreate it).
Open a browser or curl to make some requests to http://localhost:8080 and verify that port forwarding is no longer working.
Terminal 2: the kubectl port-forward process is still running, but it no longer forwards anything.
The problem is that kubectl port-forward stays in this broken state without terminating, so there is no exit code to react to.
NOTE: My example above is for a single pod, but you can port-forward to a service or deployment, in which case it will select a single pod within the deployment and forward to that pod only. You can follow similar steps to reproduce the issue with a deployment, but you have to find the pod it is connected to and delete that pod to see the effect.
Ideas on possible solutions
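A minimal command sketch of the reproduction steps above, assuming an nginx pod as the target; the pod name, image, and ports are placeholders rather than the original commands from this comment:

# Terminal 1: create a pod to forward to.
kubectl run web --image=nginx --port=80

# Terminal 2: forward local port 8080 to the pod.
kubectl port-forward pod/web 8080:80

# Verify the forward works (browser or curl).
curl -i http://localhost:8080

# Terminal 1: delete the pod and create it again.
kubectl delete pod web
kubectl run web --image=nginx --port=80

# The same request now fails, yet the kubectl port-forward process in
# terminal 2 keeps running instead of exiting with an error.
curl -i http://localhost:8080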
/remove-lifecycle rotten
Let me try to reproduce this report and work on it.
/assign
Hey @soltysh, I am wondering if we can discuss this one in the SIG meeting. Would os.Exit(1) be enough for this one? I just tested a local patch and it works.
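As a rough illustration of what a consistent non-zero exit would enable for callers (this is not a patch to kubectl itself, and the service name and ports are placeholders): a plain retry loop becomes enough to recover.

# Keep retrying as long as kubectl port-forward exits with a non-zero code.
until kubectl port-forward svc/myapp 8080:80; do
  echo "port-forward exited with an error; retrying in 1s..." >&2
  sleep 1
done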
/priority backlog
@dougsland just open a PR and please ping me on Slack with it, I'll review.
/priority backlog
I still have not found the best solution for this problem.
@rthamrin upgrade |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after a further 30d, lifecycle/rotten is applied; after a further 30d, the issue is closed.
You can: mark this issue or PR as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after a further 30d, lifecycle/rotten is applied; after a further 30d, the issue is closed.
You can: mark this issue or PR as fresh with /remove-lifecycle rotten, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Still does not work properly: the forward never recovers. I'd expect it not to kill the current port-forward process, but to restart it / re-establish the connection if possible.
Those error messages are OK, assuming it eventually recovers. Otherwise, returning a consistent error code would be helpful.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after a further 30d, lifecycle/rotten is applied; after a further 30d, the issue is closed.
You can: reopen this issue or PR with /reopen, or mark it as fresh with /remove-lifecycle rotten.
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@grumpyoldman-io: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@aojea: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after a further 30d, lifecycle/rotten is applied; after a further 30d, the issue is closed.
You can: reopen this issue with /reopen, or mark it as fresh with /remove-lifecycle rotten.
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is still a problem. If adding resilient port-forwarding to
I ran into this issue and I would also like the kubectl port-forward command to have a retry option; however, to work around the issue, I put kubectl port-forward in a Bash loop like the following procedure illustrates. Gotcha: this workaround isn't perfect, since the first request after a redeploy fails. Edit: alternatively, skip the gnarly Bash loop and install and run the excellent knight42/krelay kubectl plugin, as the following working session illustrates.
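A minimal sketch of such a restart loop, with placeholder service name, ports, and namespace rather than the original values:

while true; do
  # kubectl exits once it notices the connection is gone (often only after the
  # next request fails, which is the gotcha mentioned above); the loop then
  # establishes a fresh forward to the new pod.
  kubectl port-forward svc/myapp 8080:80 -n default
  echo "port-forward exited; restarting in 1s..." >&2
  sleep 1
done

And a sketch of the krelay alternative; the krew plugin name (relay) and the command below are assumptions based on the plugin's documentation, not output captured from this thread:

kubectl krew install relay
kubectl relay svc/myapp 8080:80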
@mbigras Hi, you might be interested in krelay, which behaves similarly to kubectl port-forward, except that
the port forwarding still works even after you update the deployment.
It would be nice if the native portforward adopted the same solution as krelay in this case :) |
I ran into the same issue. Here is the workaround script I use:

#!/usr/bin/env bash
# Requires SVC_NAME, NAMESPACE, HOST_PORT, and REMOTE_PORT to be set in the environment.
PID=""

# Pass SIGTERM/SIGINT on to the kubectl child so the loop shuts down cleanly.
exit_handler() {
    echo "Received SIGTERM or SIGINT. Shutting down..."
    if [ -n "$PID" ]; then
        kill -TERM "$PID"
        wait "$PID"
    fi
    exit 0
}
trap exit_handler SIGTERM SIGINT

echo "Starting port-forwarding for $SVC_NAME in namespace $NAMESPACE"
# Restart kubectl port-forward whenever it exits (for example after the target pod is replaced).
while true
do
    kubectl port-forward svc/"$SVC_NAME" "$HOST_PORT":"$REMOTE_PORT" -n "$NAMESPACE" &
    PID=$!
    wait "$PID"
done
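For example, saved as port-forward-loop.sh (the filename is arbitrary), it could be invoked with the service and ports from the original report; the namespace here is an assumption:

SVC_NAME=leeroy-app HOST_PORT=50053 REMOTE_PORT=50051 NAMESPACE=default ./port-forward-loop.sh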
When I start
kubectl port-forward svc/leeroy-app 50053:50051
it works the first time. If I kill the pod behind the service, Kubernetes restarts the pod, and then the port forwarding starts failing.
If I manually kill the kubectl port-forward process and restart it, it works.
I would love to see the recovery happen automatically instead of having to parse the output and restart manually.
We are building port forwarding into our application through kubectl, and this would help a lot with the integration.