RST_STREAM error keeps showing up #1023
Comments
@jakeleventhal Thanks for filing this. Do you know at all what type of request/API call this is related to? |
The only place I use gRPC is via Firestore, so it must be something from there. I don't really have more info, unfortunately. |
I also got this one with {"error":"Error: 13 INTERNAL: Received RST_STREAM with code 2
at Object.callErrorFromStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Object.onReceiveStatus (/usr/src/app/node_modules/@grpc/grpc-js/build/src/client.js:175:52)
at Object.onReceiveStatus ... |
This is a weird issue to pin down, thrown during |
All document modifications (e.g. |
Hi @schmidt-sebastian. |
After a while, errors like the ones @jakeleventhal mentioned are thrown, it's no longer possible to persist data in Firestore, and an instance restart is required. That's a real problem, folks 😄 |
@Skonx To rule out that this is a problem with our backend, can you send me your project ID and the time at which you saw these repeated failures? My email is
If it is not a backend issue, we likely need gRPC logs to further diagnose these failures. These can be obtained by setting two environment variables: GRPC_TRACE=all, GRPC_VERBOSITY=DEBUG. Note that this creates a lot of logs, so hopefully this is not something we have to look at. |
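For anyone wanting to capture those logs, here is a minimal sketch (not from the thread) of wiring up the two variables in Node.js. They must be set before the gRPC library loads, so either export them in the shell or assign them at the very top of the entry file:

// Set the variables before any Firestore import, so @grpc/grpc-js
// picks them up at load time.
process.env.GRPC_TRACE = 'all';
process.env.GRPC_VERBOSITY = 'DEBUG';

const { Firestore } = require('@google-cloud/firestore');
const firestore = new Firestore(); // subsequent RPCs now emit verbose gRPC logs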
Yep. I've replied to your google.com email address. |
Hello! We also experience the same error, usually coupled with
We noticed that it usually fails all at once and then recovers very quickly. |
We are also affected by this error. A month ago it was sporadic, maybe once a week; now we are seeing it many times per day. |
I've been experiencing the issue for over a month now, with a couple hundred to a few thousand errors per day:
@schmidt-sebastian I opened a support case if you need a project ID or other information: 23689958. |
Same problem here: Received RST_STREAM with code 2 |
We were able to resolve the
After the update, the various other DB-related errors have also gone away, including
We currently use Electron in a Google App Engine project, in the standard runtime. Google has not updated its images to include some libs that the new version of Electron needs. We were able to work around this because Google supports these libs for Puppeteer, so we installed Puppeteer and sent a deploy off to our production server (on a no-promote flag). After doing so, our best guess is that Google rebuilt our server images with the libs needed to run Puppeteer, which in turn allowed us to run the new version of Electron.
I hope this information helps! We spent a lot of time diving into this, so if you have any questions, feel free to respond below. |
Our backend team has looked at the projects that were sent to us. There are no indications that your errors correlate with errors on the backend, which makes it likely that this is a client-side issue. I will continue to investigate possible root causes. |
As of v3.8.5, we are now retrying RunQuery requests that fail with RST_STREAM. If this error shows up again, we can evaluate expanding our retry to other SDK methods as well. |
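Until that retry expansion lands, a common stopgap is an application-level wrapper. A minimal sketch, assuming nothing about the SDK beyond gRPC status code 13 (INTERNAL) being how RST_STREAM surfaces to callers; the attempt count and backoff are illustrative:

// Retry a Firestore call on INTERNAL (code 13) with exponential backoff.
async function withRetry(fn, attempts = 3) {
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await fn();
    } catch (err) {
      if (err.code !== 13 || i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000)); // 1s, 2s, 4s...
    }
  }
}

// Usage, e.g. for a write path the SDK does not retry:
// await withRetry(() => docRef.set(data));

Note that blindly retrying a non-idempotent commit can double-apply writes if the first attempt actually reached the server, which is presumably the pushback mentioned below.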
@schmidt-sebastian I've been running on v3.8.5 for the entire day and I still see RST_STREAM. I checked my code to see for which usage patterns it occurs:
Can we please reopen this issue for visibility? |
@merlinnot We continue to receive pushback on retrying RST_STREAM/INTERNAL for Commit RPCs. I hope your comment is enough to convince the respective folks that we should retry. |
I just finished my backend local development and everything worked well. Then I deployed, and none of my functions work. I'm getting the same error. My code is pretty simple:

this.#log.i(`create license`, JSON.stringify({license: license, id: id}));
const licenseDoc = this.ref.doc(id);
await licenseDoc.set(license); // <===== This is where the error originates from
return licenseDoc.id;

Using Node.js 16. I don't get it once or twice; it literally happens every time. My endpoint doesn't work at all. This is a brand-new project on Firestore and my first deployment. Package.json dependencies:

"dependencies": {
  "firebase-admin": "^11.10.1",
  "firebase-backend": "^0.2.5",
  "firebase-functions": "^4.4.1",
  "uuid": "^9.0.1"
},

Is there anything I can try to get past this? We need to launch our product and I'd hate to need another week to rewrite all these endpoints on a different platform. |
We've been facing this error in our production environment for the past 3 days and it's occurred roughly 10,600 times:
The error is triggered when executing the following code:
Do we have any updates or workarounds for this? It's affecting our users and we'd appreciate your guidance. Note: our Users collection has a very large number of documents. Could the volume of documents be a contributing factor to this issue? |
Have you found a solution to this? Because I am facing the exact same error. |
Hello @CollinsVizion35, We haven't found a solution to this issue yet. We've attempted several methods, but none have resolved the problem:
Interestingly, everything operates flawlessly in our development project. The only difference is that the development project has a smaller User collection. I'm starting to suspect that this might be related to some undocumented limitation in Firestore... I'll keep you posted on updates! |
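If the size of the collection is indeed the trigger, one mitigation worth sketching is to keep each query short-lived by paginating with cursors instead of streaming the whole collection in one RunQuery. This is an assumption about the failing code (which isn't shown above); the collection name and page size are illustrative:

// Iterate a large collection in small pages so no single RunQuery
// stream stays open long enough to be reset mid-flight.
async function forEachUser(firestore, handler, pageSize = 500) {
  let last = null;
  for (;;) {
    let query = firestore.collection('Users').orderBy('__name__').limit(pageSize);
    if (last) query = query.startAfter(last);
    const snapshot = await query.get();
    if (snapshot.empty) break;
    for (const doc of snapshot.docs) await handler(doc);
    last = snapshot.docs[snapshot.docs.length - 1]; // cursor for the next page
  }
}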
Okay, thank you. I tried using batch commit and it still didn't work. |
I've got a workaround/solution in my situation |
Hey @maylorsan, I think I have found a solution from @edmilsonss. I think it works with these changes.
Former code: // Create a Firestore instance
New code: // Create a Firestore instance |
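The actual code in that comment was lost in extraction. It is only an assumption that it matches the workaround discussed elsewhere in this thread, i.e. steering the client away from gRPC streams; for @google-cloud/firestore, that option is the preferRest setting:

const { Firestore } = require('@google-cloud/firestore');

// Create a Firestore instance
// (preferRest routes unary calls over REST instead of gRPC)
const firestore = new Firestore({ preferRest: true });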
@CollinsVizion35 Indeed, we haven't experimented with that solution just yet. As I mentioned in this comment, our primary approach was to optimize our algorithm logic between Firebase calls. Thankfully, this seems to have resolved the issue for now. It's certainly unusual behavior 😄 |
Any update on this issue? It happens sporadically for us in production, using
This seems to occur when instantiating too many transactions simultaneously; perhaps it initiates too many gRPC connections to the Google API? |
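If the too-many-simultaneous-transactions hypothesis holds, capping concurrency is a cheap experiment. A self-contained sketch (the limit of 10 is arbitrary, and createLimiter is a hypothetical helper, not an SDK API):

// Run at most `max` async tasks at once, queueing the rest.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active += 1;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => { active -= 1; next(); });
  };
  return (fn) => new Promise((resolve, reject) => { queue.push({ fn, resolve, reject }); next(); });
}

// Usage:
// const limit = createLimiter(10);
// await Promise.all(items.map((item) =>
//   limit(() => firestore.runTransaction(async (t) => { /* ... */ }))));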
@maylorsan Setting |
@sammyKhan, my apologies for the delayed reply! I wanted to clarify that we don't employ
In our case, it appears that the issue arises due to a significant delay between the |
Hi @maylorsan Since your case is different from what has been reported in this issue, could you please open a new ticket and describe your problem in detail? |
Hi @cherylEnkidu, Thanks for the advice. I'll open a new ticket with all relevant details to address our specific Firestore issue. |
We still see this intermittently, no real pattern that I can see. The Firestore code that triggers it is simple and runs successfully 99% of the time:

await ref.set(updateData, {
  merge: true,
});

But every so often we'll see the error. I've been progressively adding more debug logging to the function to see if I can work out what may be causing it, but there is nothing of note that I can see. Using up-to-date dependencies and Node version:

~/ $ node --version
v18.18.2

package.json dependencies:

"dependencies": {
  "@google-cloud/firestore": "^7.3.0",
  "@google-cloud/logging": "^11.0.0",
  "firebase-admin": "^12.0.0",
  "firebase-functions": "^4.7.0",
  "googleapis": "^132.0.0",
  ...
},

Stack trace:

Error: 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error)
    at callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:19)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:192:76)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:360:141)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:323:181)
    at /workspace/node_modules/@grpc/grpc-js/build/src/resolving-call.js:99:78
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11)
for call at
    at ServiceClientImpl.makeUnaryRequest (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:160:32)
    at ServiceClientImpl.<anonymous> (/workspace/node_modules/@grpc/grpc-js/build/src/make-client.js:105:19)
    at /workspace/node_modules/@google-cloud/firestore/build/src/v1/firestore_client.js:231:29
    at /workspace/node_modules/google-gax/build/src/normalCalls/timeout.js:44:16
    at repeat (/workspace/node_modules/google-gax/build/src/normalCalls/retries.js:80:25)
    at /workspace/node_modules/google-gax/build/src/normalCalls/retries.js:118:13
    at OngoingCallPromise.call (/workspace/node_modules/google-gax/build/src/call.js:67:27)
    at NormalApiCaller.call (/workspace/node_modules/google-gax/build/src/normalCalls/normalApiCaller.js:34:19)
    at /workspace/node_modules/google-gax/build/src/createApiCall.js:84:30
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Caused by: Error
    at WriteBatch.commit (/workspace/node_modules/@google-cloud/firestore/build/src/write-batch.js:432:23)
    at DocumentReference.set (/workspace/node_modules/@google-cloud/firestore/build/src/reference.js:398:27)
    at /workspace/lib/auth.js:201:19
    at Generator.next (<anonymous>)
    at fulfilled (/workspace/lib/auth.js:5:58)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 13,
  details: 'Received RST_STREAM with code 2 (Internal server error)',
  metadata: Metadata { internalRepr: Map(0) {}, options: {} },
  note: 'Exception occurred in retry method that was not classified as transient'
}
    at console.error (/workspace/node_modules/firebase-functions/lib/logger/compat.js:31:23)
    at /workspace/lib/auth.js:207:17
    at Generator.throw (<anonymous>)
    at rejected (/workspace/lib/auth.js:6:65)
|
I think it's an issue with the gRPC library. We used to get this error sometimes, or a timeout, and all our subsequent Firestore requests would fail, causing endless chaos. See googleapis/nodejs-datastore#1176 — the recommended solution for that issue is the below; it falls back to REST whenever gRPC is glitching:

const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore({ fallback: 'rest' });
|
Any news on this issue? Two years with the same problem here at the company. We tried
but it's too slow. |
I think I have some information here that could help find the reason behind this error. This is the error I'm getting:
First, some context: I have a website that tracks a "photo was opened" event. What I'm doing here is just to do
So, the backend has some really weird and ugly code like this:

newData[`${date}.events`] = FieldValue.increment(1);
if (type === 'profile-view') {
  newData[`${date}.profile.views`] = FieldValue.increment(1);
} else if (type === 'spot-open') {
  newData[`${date}.spots.${spotId}.opens`] = FieldValue.increment(1);
}
try {
  const response = await firestore.collection('userStats').doc(userId).update(newData); // <<<<< HERE IS WHERE IT FAILS
} catch (error) {
  errors.report(
    new Error(
      `EDIT ERROR ${error.message}. userId:${userId}. type:${type}. spotid:${spotId}. ip:${ip}`
    )
  );
  errors.report(error);
}

Now, the possible root of the problem: have you seen that
Pay attention to the IP:

Error: EDIT ERROR 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error). userId:eb91c63b-92ef-4a02-b2ba-03a69d298392. type:spot-open. spotid:4a5bcd81-fafb-412b-b0a5-663fdacbe3c4. ip:66.249.66.203

That's a Googlebot IP. I remember having this problem around 2 years ago, where I was calling the
Anyways... I hope it helps. |
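If crawler traffic really is correlated with the failures, the cheapest experiment is to skip the tracking write for known bots before it ever reaches Firestore. A hypothetical guard (not from the thread), assuming an Express-style handler; real bot detection (reverse DNS of the IP, etc.) is more involved:

// Crude user-agent check; 66.249.64.0/19 reverse-resolves to googlebot.com.
function isLikelyBot(req) {
  const ua = req.headers['user-agent'] || '';
  return /Googlebot|bingbot|AdsBot|crawler|spider/i.test(ua);
}

// trackEvent is a hypothetical handler standing in for the code above.
async function trackEvent(req, res) {
  if (isLikelyBot(req)) return res.status(204).end(); // don't count bot traffic
  // ...the FieldValue.increment() update from above goes here...
  return res.status(204).end();
}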
I have a question for everyone having this same issue: are you all getting this error while calling the
I am getting this error ONLY in a |
Yes, that's where I'm seeing it: update() with a single admin.firestore.FieldValue.increment(1) (along with a few other numbers and strings in the same update). I've been seeing this for years now, and when it happens my app performs poorly as a result. So hopefully this will help lead to a solution! |
I'm not using |
In my case I only see it on functions using Node.js 18 or 20. It works perfectly on Node.js 16. |
We had millions of these errors last year; they were all caused by snapshot listeners. We solved it by moving off Firestore, lol. |
Hello, any news on this issue? |
Hello. I'm having this issue only in production. It works fine with the emulators locally.
And when retrieving the data I do:
Error stack trace:
|
FIXED: While I was looking at the files, I noticed the env variable NODE_ENV was unset, so I was trying to reach the emulator instead of the production DB. If you're having this issue, check what you're connecting to. |
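A quick sanity check along those lines, as a sketch: FIRESTORE_EMULATOR_HOST is the variable the client libraries actually honor for routing traffic to the emulator, while NODE_ENV is app-specific convention:

// Log where Firestore traffic will go before serving requests.
console.log('NODE_ENV:', process.env.NODE_ENV);
console.log('FIRESTORE_EMULATOR_HOST:',
  process.env.FIRESTORE_EMULATOR_HOST || '(unset -> production)');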
Is there any update or progress regarding this issue? In my case, I encounter this problem frequently (~30 times per day) across various parts of my project where the
Below are logs from a single day, each occurring at different points in the code:
|
I never got this issue again. Are you sure you're on the latest version of all the dependencies? |
Yes, I updated all the dependencies and deployed the functions after I wrote the reply, but the errors still occurred. Environment details:
|
Running into this as well when doing a rather heavy migration. |
Environment details
@google-cloud/firestore version: 3.7.4
Steps to reproduce
This error keeps appearing over and over in my logs (not regularly reproducible):