Deadline Exceeded error! #349
I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.
Hi @warlockdn! Sorry to hear that you are having connection issues to Firestore. Note that the initial connection attempt can sometimes be slow, as there is a lot of overhead (channel establishment, SSL handshakes, authentication). Once you have a connection to Cloud Firestore, the underlying channel will get re-used for further operations. You may be able to work around your issue if you start by issuing a DocumentReference.get() call. We retry those automatically if the connection establishment takes a bit longer than it should.
Hi @schmidt-sebastian, sorry, I don't quite understand. On application restart I have already created a connection to Firebase. Anyway, can you guide me to a resource where I can see some demo or sample code showing why and how this can be used?
As said, this is not a solution, but rather a workaround:
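A minimal sketch of this warm-up read (not the original snippet, which was lost from the thread); it assumes an already-initialized Admin SDK, and the `users/warmup` path is hypothetical:

```ts
// Minimal sketch: issue a retried read first so the channel, SSL, and auth
// handshakes happen before any latency-sensitive writes.
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function warmUpFirestore(): Promise<void> {
  // DocumentReference.get() is retried automatically by the client library,
  // so a slow first connection is absorbed here instead of surfacing as
  // DEADLINE_EXCEEDED on a later write.
  await db.collection('users').doc('warmup').get();
}
```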
Do you have a permanent solution to this? I don't know if I can go into production with this, if I have to wait and retry on things. The point of being realtime will be of no use if I receive these errors under load. Or maybe I need to follow some other process?
If you continue to run into these issues, then we should look at your network stack and how you connect to our data centers. If we do need to go down that route, we can let you know some environment variables that will help us debug.
Yeah, you can help me. Let me know what needs to be done so that I don't end up embarrassing my product, haha. Thanks.
Please set the following environment variables: GRPC_TRACE=all and GRPC_VERBOSITY=DEBUG
So this is the error I finally received:
So do I need to initialise the Firebase connection on each function call, or does it work once on initialising the server?
?? |
Sorry, I am just catching up on my emails. You only need to initialize the client connection once, and the network channel should stay open.
So what's the issue with the timeout then? My internet connectivity is pretty good.
Just to confirm: you are getting most of these errors on the initial document write, before you have used any other Firestore operations? I am suggesting that your initial operation should be a document read, which we will retry. The "time to first byte" from Firestore is not necessarily related to your network connection.
I get this as well. I have a collection with about 300k documents in it, and simply calling .get() on that collection will result in the timeout. It's a Firebase problem, not your local system. I've seen this happen in hosted Cloud Functions, and the fix is to increase the timeout limit in the function details (the max is 540 seconds; the default is 60 seconds). However, in the Node SDK I can't figure out how to extend the default past the 60 second limit.
I'm dealing with the same issue.
I am having the same issue: when I try to read my collection, which has 100k docs, it gives me a DEADLINE_EXCEEDED error message. How do I solve this? Even a simple read is this troublesome. Is there a way to export the data from the DB without hitting this error?
My workaround is to find a size that doesn't hit the DEADLINE_EXCEEDED error, like 10k, then just loop in 10k increments and read from the database until you have read all your docs. You can do something like this:
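A minimal sketch of that loop (not the original snippet, which did not survive in the thread), assuming the Node Admin SDK and paginating with query cursors; the collection path and batch size are placeholders:

```ts
// Page through a large collection with query cursors instead of a single
// get() that would exceed the 60s deadline.
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function readAllDocs(collectionPath: string, batchSize = 10000) {
  const allDocs: admin.firestore.QueryDocumentSnapshot[] = [];
  let cursor: admin.firestore.QueryDocumentSnapshot | null = null;
  for (;;) {
    let query = db
      .collection(collectionPath)
      .orderBy(admin.firestore.FieldPath.documentId())
      .limit(batchSize);
    if (cursor) query = query.startAfter(cursor);
    const snapshot = await query.get(); // each page stays under the deadline
    if (snapshot.empty) break;
    allDocs.push(...snapshot.docs);
    cursor = snapshot.docs[snapshot.docs.length - 1];
  }
  return allDocs;
}
```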
@CapitanRedBeard I ended up doing something similar. But thanks for the reply. If the Firebase team could add an export option to the Firestore DB like RTDB has, it would help some folks here.
@CapitanRedBeard & @farhankhwaja, while I don't know if this will help you, I was running into the same Deadline exceeded errors with fewer than 2k records! I have implemented this in all of my cloud functions; hopefully it will help you too:
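The snippet itself did not survive in this thread. Given the Cloud Functions timeout discussion above, a plausible, purely hypothetical reconstruction is raising the runtime options via runWith (firebase-functions v1 API):

```ts
// Hypothetical reconstruction (the original snippet was lost): raise the
// Cloud Functions timeout and memory through runtime options.
import * as functions from 'firebase-functions';

export const myHandler = functions
  .runWith({ timeoutSeconds: 540, memory: '1GB' }) // defaults: 60s / 256MB
  .https.onRequest(async (req, res) => {
    // ...function body...
    res.status(200).end();
  });
```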
Sadly, I can't remember where I found that snippet, but as I said, every little bit helps! Jon
So why is this happening? What is the limit that is being exceeded here? We are on the Blaze plan, so this should not happen; instead, Firebase should let us query as much as we want.
We currently configure a 60 second deadline for the RunQuery call (which backs both get() and stream()). While somewhat convoluted, you can configure a different deadline for your client:
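The configuration example was lost from this thread. As a sketch of the kind of override being described, assuming the Firestore settings object forwards a clientConfig to the underlying GAX transport (the exact shape may vary by library version):

```ts
// Sketch only: per-method deadline override via clientConfig. Note the
// later comment in this thread: the backend caps deadlines at 60 seconds,
// so values above 60000 ms will not take effect server-side.
import { Firestore } from '@google-cloud/firestore';

const firestore = new Firestore({
  clientConfig: {
    interfaces: {
      'google.firestore.v1.Firestore': {
        methods: {
          RunQuery: { timeout_millis: 60000 },
        },
      },
    },
  },
});
```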
I'm getting these DEADLINE_EXCEEDED errors when running tests using only very small sets of data (with "firebase-admin": "^7.0.0" / Node v8.14.0). I'm also using the Blaze plan, so I don't think it's billing related. I'm not running inside a cloud function; it's just a regular Node app. It doesn't happen consistently either: I run small tests, and it feels like 50% of the time my query times out and runs into this error. The query itself returns a single record and should run in a fraction of a second. Could it be a contention issue? I have a number of parts of my app that may well be issuing queries and updates at the same time. It feels like something is deadlocked. An example query that gets stuck in this way is:
It returns between zero and maybe 4 records only.
Can you do two things for me:
Thanks!
I would like to clarify something that I personally wasn't aware of: increasing the request timeouts well past 60 seconds will not actually have the desired effect. 60 seconds is currently the maximum timeout supported by the backend, so the only other solution is to reduce your response size.
@pkwiatkowski0, I made this error go away. After a lot of testing and educated guesses about the code, here is what fixed it. When the CF is triggered, we were making a few async calls to perform some business operations. In one of the asynchronous calls we had to fetch a few hundred records, and that's the place where we were always getting DEADLINE_EXCEEDED errors. After putting some console.time() around the document fetch, it was seen that this block of code was taking hundreds of seconds to fetch a mere 200 records from the collection, though the function execution time in total was <100ms. Strange. Strange. Strange... We made the calls sequential with async/await and waited for the entire execution to complete before exiting the function. This greatly reduced the time it took to fetch 200 records, from hundreds of seconds to <1000ms. Since this change we have not seen DEADLINE_EXCEEDED anymore and the code is stable. Hope this helps. I guess the fire-and-forget nature of the code blocks in the CF puts the entire function in some throttle mode and degrades performance.
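As an illustration of that fix (a hedged sketch, not the poster's actual code; the trigger and collection names are invented), the key change is awaiting all Firestore work before the function returns:

```ts
// Hedged sketch: keep the function alive until all Firestore work completes.
// 'orders' and 'records' are invented names.
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

export const onOrderCreate = functions.firestore
  .document('orders/{orderId}')
  .onCreate(async (snap, context) => {
    // A fire-and-forget chain (get().then(...)) would let the function return
    // while the fetch is still in flight; the runtime then throttles the
    // instance and the pending call can surface as DEADLINE_EXCEEDED.
    const records = await db.collection('records').limit(200).get();
    for (const doc of records.docs) {
      await doc.ref.update({ processed: true }); // sequential, fully awaited
    }
  });
```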
I had the same error where I was only reading and writing small numbers of records (not even a few hundred) per Firestore get()/set(). I discovered that for me the issue was too many operations happening in parallel, as I was performing these operations nested within a for loop. Increasing the timeout and memory did not help. To resolve the error (similar to @ashking's resolution), I updated my code to handle the Firestore database operation callbacks using async/await. I effectively removed any parallel Firestore operations. This is working really well with no more errors, and things happen quickly, which is good; I was concerned I would hit the timeout limit if things happened one at a time, but it seems faster, if anything. Hope this helps anyone stuck on a similar issue.
It's not working, even though I have tried multiple times.
@Yogeshkumar9291, can you isolate and share a sample snippet of your cloud function?
Hi, here is the code, as per your suggestion. I am still facing the same error:
@Yogeshkumar9291 this is the exact issue I had. Using .then, I could not control the exit of the function before the promise completed successfully. I moved it to an async/await structure to make sure that my functions don't exit before promises are resolved, and that solved my issue. I suggest you give it a try.
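A hedged before/after sketch of the .then to async/await change being described (the snippet originally attached here was lost; endpoint and collection names are invented):

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

// Before (sketch): the response is sent while the write is still in flight,
// so the instance can be throttled or reclaimed with the promise pending.
export const saveItemBefore = functions.https.onRequest((req, res) => {
  db.collection('items').add(req.body); // fire-and-forget: nothing awaits this
  res.status(200).send('ok');
});

// After (sketch): async/await keeps the function alive until the promise
// resolves, so the write completes before the response is sent.
export const saveItemAfter = functions.https.onRequest(async (req, res) => {
  const ref = await db.collection('items').add(req.body);
  res.status(200).send(ref.id);
});
```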
I have extended my timeout to 540 sec, but I still hit the deadline exceeded issue about a minute into the program run. Here is my CF snippet:
Error message in the Chrome console:
Well, at least I can fix your deployment error:
It isn't working.
Doug, who is from Firebase, shared that I should use onCall instead of onRequest to make it a callable function. But it still doesn't resolve my issue. Strange...
Same here, August 2020, and I'm getting the DEADLINE_EXCEEDED error.
Same error when saving a batch of 100 writes.
I have the same error when using firebase-admin in an Electron app + Vue 2. My Firestore test collection has only 1 document with 1 field. The same code running in a standalone Node.js app runs fine, so in my case it probably has to do with Electron.
@schmidt-sebastian Could I issue a |
Would you have an example of your code? Right now I am randomly having this issue where the DB won't save the subcollection items (here, the Assignments). In this case we have a one-year date lock, so it never gets past 400 documents, yet it is triggering the error. Sometimes the DB will retry and create the documents minutes later; sometimes it won't.
Chiming in as we're also having a lot of these DEADLINE_EXCEEDED errors. We are getting this error when fetching a document that isn't particularly large (below 1KB):

```ts
// this.firestoreClient is a Firestore() instance
const users = await this.firestoreClient.collection(collections.USERS).doc(userId).get()
```

It is happening very consistently, but in an unpredictable way, and is wreaking total havoc in production. We see it most often in parts of the code where we are writing/reading in parallel or within a very close timespan from Firestore.

Is there any solution available to date, or should we just migrate to another DB that can support our load (which isn't really high; we never do more than ~100 reads/writes per second)? Seeing that the issue was opened a couple of years ago and appears to still be happening, we're not very confident about getting this resolved at all with Firestore. Moving everything to another database is a significant amount of work that we didn't plan for, so some honest transparency from the team working on this would help a lot in making sure we make the right decision.

We are also facing very similar errors with Cloud Tasks, which uses the same gRPC library, and we were wondering if the issues are related somehow. Even though you have pointed out several times that the gRPC lib is only doing its job, it seems evident that something wrong is happening on that layer (or in how both clients -- Firestore and Cloud Tasks -- are using it).

PS: increasing the DEADLINE is not an option; the code is already running slow and failing, and making it even slower is not a viable workaround for production environments. Our users notice and complain about performance all the time already.
@superzadeh are you sure there isn't anything else around that fetch query that might be looping and increasing the function execution time? The issues I had were solved by splitting a few iteration-heavy functions into smaller ones that run after trigger events.
Thanks for the suggestion @MorenoMdz. Our call site is only making calls to Firestore with async/await, "sequentially" (no Promise.all or loops). This is an endpoint called by a CRON job, working on our users collection one by one: each user has a CRON creating a Cloud Task for them, and our service processes them one by one. We have 5 instances running, and the queue is capped at 200 max concurrent jobs, which more or less means about 40 simultaneous jobs per service instance (running on App Engine Flex Environment, 2 CPUs and 6GB RAM). This is by far nothing amazing or pushing the limits in terms of scale, but enough to break Firestore?

Here's the code from one of our functions calling the `get()`:

```ts
// FirestoreService.ts
async updateUserTimeline(userId: string): Promise<void> {
  // DEADLINE_EXCEEDED is thrown by this call 👇
  const userData = await this.getUserData(userId)
  const tipOfTheDay = userData.tipOfTheDay
  //...
}

async getUserData(userId: string): Promise<firestore.DocumentData> {
  const user = await this.firestoreClient.collection(collections.USERS).doc(userId).get()
  return user.data()
}
```

This code itself is called directly from an Express controller handling the route for the CRON job:

```ts
// CronController.ts
public timelineUpdateForUser = async (
  req: Request,
  res: Response,
  next: NextFunction,
) => {
  try {
    const userId = req.params.id
    await this.firestoreService.updateUserTimeline(userId)
    res.status(200).end()
  } catch (error) {
    this.loggingService.error(error)
    next(error)
  }
}
```

Results in this:
Any updates or workarounds we could try? We are still having this issue happen frequently (every hour at least) in production. I'm also wondering whether the issue should be re-opened (or I'm happy to open a new one, too).
The workaround I used was to convert it to a batch commit rather than writing one document at a time, limited to 450 records in each batch. Never had the issue again.
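A hedged sketch of that chunked batch commit, assuming the Node Admin SDK (collection name and item shape are invented; Firestore batches allow at most 500 operations, hence chunks of 450):

```ts
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function writeInChunks(
  items: Array<{ id: string; data: admin.firestore.DocumentData }>,
): Promise<void> {
  for (let i = 0; i < items.length; i += 450) {
    const batch = db.batch();
    for (const item of items.slice(i, i + 450)) {
      batch.set(db.collection('records').doc(item.id), item.data);
    }
    await batch.commit(); // finish each chunk before starting the next
  }
}
```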
Thanks @ramzauchenna, are you referring to batched writes or something else?
@superzadeh Are you only reading a single document? Are you otherwise able to use Firestore?
No, we are reading multiple documents (a different one per HTTP request that comes in). The DEADLINE_EXCEEDED errors only happen during peak traffic; otherwise, Firestore works as expected. We were wondering if the soft limit of 1 write per second could be causing this issue? We are only writing to different documents in parallel with these reads; however, each document is within the subcollection of a document. Ex:
We write only once in document 1 and document 2; however, since they are both in a subcollection of the
I solved this by logging in to the Google Cloud console and choosing my project.
As the front end I am using a Flutter app, so I increased the timeout there too.
July 2023, same issue.
Hi @Dave181295, thank you for reporting this issue. If the problem is showing up consistently and you are able to reproduce it, please create a new issue ticket with a minimal reproducible app, and we can start looking into it. |
SAME ISSUE TODAY |
[REQUIRED] Step 2: Describe your environment
[REQUIRED] Step 3: Describe the problem
I am doing a basic add to Firestore through the Node Admin SDK. I just add an object to a collection and then wait for the document ID to return; after that, I send it to the front end to be subscribed for realtime updates. But sometimes I receive a Deadline Exceeded error while adding a document, and also while updating.
Steps to reproduce:
The code that I have added to my backend:
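The original backend snippet was not preserved in this thread. A hypothetical reconstruction of the kind of code described (a basic add() whose document ID is returned to the front end, with errors surfacing in the catch):

```ts
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function addDocument(payload: admin.firestore.DocumentData): Promise<string> {
  try {
    const ref = await db.collection('orders').add(payload); // 'orders' is an invented name
    return ref.id; // sent to the front end for the realtime subscription
  } catch (error) {
    // This is where the Deadline Exceeded error shows up
    console.error('Failed to add document:', error);
    throw error;
  }
}
```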
There you go; I receive errors at the catch.
I also checked my account and I don't see any high usage, since it is in the development phase; we hardly add 10-20 docs on a normal day.
Can you please help here? Thanks.