Error: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: read ECONNRESET #1769
What version of the library are you using?
Would you mind running your client with the environment variables `GRPC_VERBOSITY=DEBUG` and `GRPC_TRACE=all` and sharing the resulting log output?
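For context, grpc-js debug logging is controlled by the `GRPC_VERBOSITY` and `GRPC_TRACE` environment variables. A minimal sketch, assuming a Node entry point named `client.js` (the file name and the in-process approach are only illustrations):

```js
// Minimal sketch: enable grpc-js debug logging. These variables are read when the
// library is loaded, so the usual approach is to set them on the command line:
//   GRPC_VERBOSITY=DEBUG GRPC_TRACE=all node client.js
// Setting them in-process also works, as long as it happens before the module is required.
process.env.GRPC_VERBOSITY = 'DEBUG';
process.env.GRPC_TRACE = 'all';

const grpc = require('@grpc/grpc-js');
```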
I restart the server, and then the first time I use gRPC afterwards it sometimes throws this error.
Thank you, that information is very helpful.
In the latest version, those errors should now be reported with more detail about what triggered them.
I think I'm having the same issue. Everything works OK as long as the connection is active. But after the connection's been idle for a while, all new requests hang for a long time (10+ minutes). Eventually this error is thrown:
Using @grpc/grpc-js version 1.3.5. Does anyone know which older version (if any) behaves correctly without hanging for many minutes?
That hanging behavior seems different from other reports of similar problems, and the trace logs would help narrow down what is going on.
Here is the detailed log.
First, I want to note that the log you shared appears to come from an older grpc-js version than the one you mentioned. Are you running that code on a serverless platform? That would explain this behavior, but it will make it tricky to address the problem. If you're not on serverless, you could also enable keepalives. You can do that by setting the channel option `grpc.keepalive_time_ms`.
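A minimal sketch of what those keepalive channel options look like when a client is constructed directly; `MyServiceClient` and the target address are placeholders, and the values are only illustrative:

```js
const grpc = require('@grpc/grpc-js');

// MyServiceClient is a placeholder for a client generated from your .proto files.
const client = new MyServiceClient('example.com:443', grpc.credentials.createSsl(), {
  // Ping the server every 5 minutes so a silently dropped connection is detected
  // proactively instead of only when the next request hangs.
  'grpc.keepalive_time_ms': 5 * 60 * 1000,
  // Treat the connection as dead if a ping is not acknowledged within 20 seconds.
  'grpc.keepalive_timeout_ms': 20 * 1000,
  // By default keepalive pings are only sent while calls are in flight; this allows
  // them on an idle connection as well.
  'grpc.keepalive_permit_without_calls': 1,
});
```

How aggressive these values can be depends on the server: servers can reject clients that ping too often.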
Thanks for the update. Yes, after I posted the log I realized I had already downgraded to grpc-js 1.2.12. I have been trying different versions from 1.2.1 through 1.3.5 and the behavior is exactly the same in all of them. This is a Node.js application using the @domino/domino-db package by HCL to connect to a Domino database; grpc-js is a dependency of that package. Can I enable the keepalives via NODE_OPTIONS? Perhaps I need to contact the HCL team directly.
I'm not surprised that downgrading the dependency didn't help. I don't see any reason to think that this is a regression. This is just a fundamental problem where the client thinks the connection is still good until it tries to make a request and doesn't get a response. These options are per-channel, so you can't set them with an environment variable. You probably will have to contact that other team so that they can allow you to set that option.
We have been facing the same issue quite often while using Firestore, and the severity is very high, as it ends up causing data loss or unacceptable latency most of the time. You can find detailed logs here if they'd help: googleapis/nodejs-firestore#1023 (comment)
So I was also getting this issue, and mine seemed to be a potential memory leak. I optimised my code so less stuff is stored in memory, and the errors seem to stay away for now.
@Pixelatex Thanks for sharing. Did you get any error messages or any other indication that the root cause was memory-related when you were hitting this issue?
Hi @murgatroid99, does the change in 1.3.7 fix this issue, or at least the latency we are seeing?
The change I made in 1.3.7 just changes the error code that that particular error is surfaced with. It won't change the latency you're seeing because that's a much lower-level issue, where the client just isn't getting responses from the remote end, and it's waiting for the TCP timeout.
@murgatroid99 Thanks for your comment. As I mentioned in my previous comment, we're getting this error on Firestore & Firebase Cloud Functions. Do you think the root cause is something Firebase should fix, is it a grpc-js issue that might be fixed in one of the next versions, or is it something we should fix in our client / cloud functions code?
@murgatroid99 what do you recommend setting here? This same issue is happening with us running Node v12.14 and @grpc/grpc-js 1.3.7 :/
@pROFESOR11 This problem is likely a bad interaction between how gRPC handles connections and how Firebase Cloud Functions networking works. I don't expect to have a fix in the near future, but I can recommend the following to mitigate the problem:
It seems that people commonly set …
I had the exact same issue.
I didn't think the contents of the …
😱 These env-vars certainly shouldn't have been deployed and the error makes sense. That's why the Functions timed out when trying to connect with Firestore in my case. It is interesting that it times out rather than promptly failing.
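As an aside, a purely illustrative startup check can catch this kind of accidental deployment; the variable list below is an assumption, since the exact variables involved in the comment above aren't shown:

```js
// Illustrative: warn at startup if gRPC-related debug or proxy environment variables
// are set in an environment where they are not expected (e.g. production).
const suspectVars = ['GRPC_TRACE', 'GRPC_VERBOSITY', 'grpc_proxy', 'https_proxy', 'http_proxy'];
for (const name of suspectVars) {
  if (process.env[name] !== undefined) {
    console.warn(`gRPC-related environment variable ${name} is set to "${process.env[name]}"`);
  }
}
```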
I had the same issue as @hfossli-agens. In my Google Cloud Run service, …
Same thing here...
I reuse one connection for all requests. Sometimes, it seems, the connection gets disconnected after being idle for a long time (client error: read ECONNRESET), but the second attempt works fine.
How do I configure grpc-js to reconnect automatically so I can avoid this error?
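grpc-js generally re-establishes the connection on its own when a new request starts, but the request that first hits a dead idle connection can still fail with this error. A common workaround, sketched here under the assumption that your calls can be wrapped in a promise-returning function (`doUnaryCall` is a placeholder), is to retry once on the transient status codes:

```js
const grpc = require('@grpc/grpc-js');

// Sketch: retry a call when the failure looks like a dropped idle connection.
// INTERNAL is included because that is the code this issue reports; newer versions
// may surface the same condition as UNAVAILABLE.
async function callWithRetry(doUnaryCall, attempts = 2) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await doUnaryCall();
    } catch (err) {
      lastError = err;
      const transient = err.code === grpc.status.UNAVAILABLE || err.code === grpc.status.INTERNAL;
      if (!transient) {
        throw err; // not a connection-level error, do not retry
      }
    }
  }
  throw lastError;
}
```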