gRPC "socket closed" service exception still occurring #5918
Comments
To clarify, the issue I described is happening in the gRPC client. Thanks!
@meredithslota this issue is still ongoing, and it's important to let others follow the original thread here: #2427
@lukasgit Apologies for the confusion; the linked issue #2427 was marked closed by @saranshdhingra back in April 2022 and I didn't want the last comment to be lost. It's cross-referenced so that previous repro information can still be useful, but if you have current repro steps, please include them here. Thanks!
@saranshdhingra could you help triage this issue? It is out of SLO.
@bcoe @meredithslota I will try to look into this issue this week. It has been a bit difficult to replicate. Thanks a lot for your patience on this, @lukasgit.
Reduced priority since it doesn't block shipping releases.
@saranshdhingra - Did we find anything on this issue?
I got both issues, "socket closed" and "GOAWAY", on the Python gRPC server side. It seems like nothing has been resolved lately.
So, I am not really sure how I can proceed with this issue, as I have not been able to replicate it. Given that it's a response returned by the backend APIs, all we can do on the client library's side is retry it, which we are doing.

If someone can find an instance where the library fails to retry when it should, please share it. Anyone who wants to control the retry logic can use the retry settings exposed by the client libraries. But if there is something at the gRPC extension/product API level, I am afraid the conversation needs to happen in the respective repository.

After all this while, not being able to give a concrete answer is not what I had hoped for, but it is the best I can say when I haven't been able to replicate the problem. I will be closing this for now, but if anyone encounters an instance where the library doesn't retry in a case where it is supposed to, please reopen this and I will certainly look into it.
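As a hedged illustration of the retry settings referred to above, the sketch below shows per-call retry configuration in a google-cloud-php generated (GAPIC) client. The Pub/Sub `PublisherClient` and the topic name are only stand-ins for whichever service is in use, and the option keys should be verified against the `Google\ApiCore\RetrySettings` documentation for your library version:

```php
<?php
// Sketch only: PublisherClient / getTopic stand in for any generated client
// and RPC; swap in your own service, project, and resource names.
require 'vendor/autoload.php';

use Google\Cloud\PubSub\V1\PublisherClient;

$client = new PublisherClient();

// Per-call retry settings, merged over the method's defaults.
// 'UNAVAILABLE' is the status that transient "socket closed" errors are
// typically surfaced as, so keep it in the retryable codes.
$retrySettings = [
    'retriesEnabled'          => true,
    'initialRetryDelayMillis' => 100,
    'retryDelayMultiplier'    => 1.3,
    'maxRetryDelayMillis'     => 60000,
    'totalTimeoutMillis'      => 600000,
    'retryableCodes'          => ['UNAVAILABLE', 'DEADLINE_EXCEEDED'],
];

$topic = PublisherClient::topicName('my-project', 'my-topic');
$response = $client->getTopic($topic, ['retrySettings' => $retrySettings]);
```

Passing a `Google\ApiCore\RetrySettings` instance instead of the associative array should work the same way.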
A similar issue sporadically occurs when using the Bigtable client, @saranshdhingra.
I'm not entirely sure whether this is expected behavior, but I suspect the problem could be addressed by adding the same retry mechanism to this piece of configuration.
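In the meantime, one possible application-level workaround is a retry helper that backs off on the transient statuses discussed in this thread. This is only a sketch: the backoff values are arbitrary, and the commented Bigtable call at the end is illustrative rather than an exact API reference.

```php
<?php
// Application-level retry sketch around an arbitrary client call.
// Only ApiException and Google\Rpc\Code come from published packages;
// callWithRetry() and its backoff policy are illustrative.
require 'vendor/autoload.php';

use Google\ApiCore\ApiException;
use Google\Rpc\Code;

function callWithRetry(callable $rpc, int $maxAttempts = 5)
{
    $delayMs = 100;
    for ($attempt = 1; ; $attempt++) {
        try {
            return $rpc();
        } catch (ApiException $e) {
            // Retry only transient failures such as "socket closed" (UNAVAILABLE).
            $retryable = in_array($e->getCode(), [Code::UNAVAILABLE, Code::ABORTED], true);
            if (!$retryable || $attempt >= $maxAttempts) {
                throw $e;
            }
            usleep($delayMs * 1000);
            $delayMs = min($delayMs * 2, 10000); // exponential backoff, capped at 10s
        }
    }
}

// Hypothetical usage with a Bigtable Table object named $table:
// $rows = callWithRetry(fn () => iterator_to_array($table->readRows()));
```

This does not address the underlying disconnects; it only keeps a single transient UNAVAILABLE from failing an otherwise healthy job.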
We are running on Ubuntu 18.04 with gRPC v1.28.1 and see this issue with some frequency (on the order of every few tens of minutes during a long-lived bidirectional streaming connection).
I'm curious whether this issue was actually solved, or whether it simply did not reproduce in your environment and was therefore closed. Thanks!
Originally posted by @jaryder in #2427 (comment)