This repository has been archived by the owner on Jul 13, 2023. It is now read-only.

Multiple requests at the same time always throw this error - "4 DEADLINE_EXCEEDED: Deadline exceeded" #397

Closed
xesunny opened this issue Apr 8, 2020 · 83 comments
Assignees
Labels
api: cloudtasks Issues related to the googleapis/nodejs-tasks API. external This issue is blocked on a bug with the actual product. priority: p2 Moderately-important priority. Fix may not be included in next release. type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns.

Comments

@xesunny

xesunny commented Apr 8, 2020

Environment details

  • OS: Mac
  • Node.js version: 10.x
  • @google-cloud/tasks version: 1.7.2

Steps to reproduce

  1. I create a Cloud Task when a user performs a certain activity on my app.
  2. If I get 20-30 users doing the activity at the same time, I get a "DEADLINE_EXCEEDED" error.
if (!client) {
  let { v2beta3 } = require('@google-cloud/tasks');
  client = new v2beta3.CloudTasksClient();
  parent = client.queuePath(process.env.GOOGLE_CLOUD_PROJECT, process.env.QUEUE_LOCATION, task_queue_name);
}
client.createTask({ parent, task });

Thanks!

@product-auto-label product-auto-label bot added the api: cloudtasks Issues related to the googleapis/nodejs-tasks API. label Apr 8, 2020
@bcoe bcoe added type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns. priority: p2 Moderately-important priority. Fix may not be included in next release. needs more info This issue needs more information from the customer to proceed. labels Apr 8, 2020
@bcoe
Contributor

bcoe commented Apr 8, 2020

@xesunny is this in a cloud function? I think you need to await the call instead:

await client.createTask({parent, task});

Otherwise the cloud function reclaims resources once the request is served, and you may get odd behavior.
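For illustration, a minimal sketch of awaiting the call inside an HTTP function before responding (the handler shape and names here are illustrative, not from this issue; in a real function, `client` and `parent` would be created once at module load):

```javascript
// Sketch: await task creation before sending the response, so the
// function's resources are not reclaimed while the gRPC call is
// still in flight. `client` and `parent` are passed in for clarity;
// in a real function they would be module-level singletons.
async function handleRequest(req, res, client, parent) {
  try {
    const [task] = await client.createTask({ parent, task: req.body.task });
    res.status(200).send(`Created ${task.name}`);
  } catch (err) {
    res.status(500).send(String(err));
  }
}
```

Without the await, the function may return before createTask completes, and the runtime can freeze or reclaim the instance mid-request.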

@xesunny
Author

xesunny commented Apr 9, 2020

Here is my exact source code: [screenshot]
Here is the error: [screenshot]

@xesunny
Author

xesunny commented Apr 9, 2020

@xesunny is this in a cloud function? I think you need to await the call instead:

await client.createTask({parent, task});

Otherwise the cloud function reclaims resources once the request is served, and you may get odd behavior.

I'm using this code in both (A) a Cloud Function and (B) a traditional server-side application.

I face this issue on both sides. My temporary solution is to submit tasks sequentially.
For example, if I have 10 tasks to submit, I do them one by one.

The average time to submit one task is 200ms to 500ms. This causes an unnecessary delay of ~5 seconds.

Because of this single issue, I can't use Cloud Tasks much, though I love the product.

@isaiah-coleman

isaiah-coleman commented Apr 9, 2020

I love tasks, but having to create tasks sequentially makes it unusable for me. I need to schedule too many tasks. I've been batching promises and waiting for them to resolve, but I still get the error. Perhaps I can tweak the batch size some, but if it becomes too slow I'll be forced to look for another solution :(

@yossi-eynav

@bcoe - We're getting this error too.
Is there a limit on dispatching multiple tasks concurrently?

@bcoe
Contributor

bcoe commented Apr 13, 2020

@yossi-eynav @isaiah-coleman I'm not sure what the hard limit is on concurrently creating tasks, but I wouldn't be shocked if one exists. If it's not reasonable to enqueue them sequentially, my recommendation would be to do something like this:

const WORK_SIZE = 16; // or 32, etc.
while (work.length) {
  const chunkOfWork = work.splice(0, WORK_SIZE);
  await Promise.all(chunkOfWork.map((chunk) => {
    return enqueueTask(chunk);
  }));
}

☝️ this allows you to manage the number of tasks enqueued concurrently, without doing so completely sequentially.
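For reference, this batching pattern can be wrapped in a small reusable helper; this is a sketch, where `enqueueOne` is a stand-in for whatever function actually calls `createTask`:

```javascript
// Process items in fixed-size chunks, awaiting each chunk before
// starting the next, so at most `chunkSize` requests are in flight.
async function enqueueInChunks(items, enqueueOne, chunkSize = 16) {
  const queue = items.slice(); // copy so the caller's array isn't mutated
  const results = [];
  while (queue.length > 0) {
    const chunk = queue.splice(0, chunkSize);
    results.push(...(await Promise.all(chunk.map((item) => enqueueOne(item)))));
  }
  return results;
}
```

A failure in any chunk rejects the whole helper, so the caller can catch once and decide whether to retry.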

@ChrisBeeson

I'm getting this error dispatching a single task.

@bcoe
Contributor

bcoe commented Apr 14, 2020

@ChrisBeeson are you in a similar position as @xesunny describes, in which you enqueue a single task, but there may be multiple users triggering this action concurrently?

@ChrisBeeson

@bcoe no, there are no other users.

@bcoe
Contributor

bcoe commented Apr 14, 2020

@ChrisBeeson to me this sounds like a different issue than @xesunny and others are describing, which seems to be related to concurrency limits. I think it might be worth opening a new issue, so that I can help debug (I believe something else is happening to you).

@yossi-eynav

@bcoe thanks!
I'm getting this error even after I changed the code to be sequential.
Where can I get more information about this error?

@bcoe
Contributor

bcoe commented Apr 17, 2020

@yossi-eynav could you please share an example of the code you are running, and let me know what environment you're running in, i.e., Cloud Functions?

@xesunny reading your original message, the error is happening on your local OSX machine?

@yossi-eynav

@bcoe
my code is similar to this sample -
https://github.com/googleapis/nodejs-tasks/blob/master/samples/createHttpTask.js.

I'm running on GKE

@ChrisBeeson

I had the same problem, I think it's because httpRequest.body is expecting bytes, not a string. Try changing:
task.httpRequest.body = Buffer.from(payload).toString('base64');
to
task.httpRequest.body = Buffer.from(payload);

@yossi-eynav

@bcoe
it's related to @grpc/grpc-js

{ Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
    at Object.callErrorFromStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
    at Object.onReceiveStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/client.js:174:52)
    at Object.onReceiveStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:340:141)
    at Object.onReceiveStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:303:181)
    at Http2CallStream.outputStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/call-stream.js:114:27)
    at Http2CallStream.maybeOutputStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/call-stream.js:153:22)
    at Http2CallStream.endCall (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/call-stream.js:140:18)
    at Http2CallStream.cancelWithStatus (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/call-stream.js:441:14)
    at Timeout.DeadlineFilter.timer.setTimeout (/Volumes/dev/playground/cloud-task-test/node_modules/@grpc/grpc-js/build/src/deadline-filter.js:59:28)
    at ontimeout (timers.js:436:11)

@isaiah-coleman

isaiah-coleman commented Apr 23, 2020

@yossi-eynav if you look, a lot of @google-cloud libraries with a @grpc/grpc-js dependency have an open issue related to concurrency. The only thing I've read that helps people seems to be upgrading your Node version to v13. However, that hasn't fixed the issue for me.

see: grpc/grpc-node#1158 it might help

@yossi-eynav

@bcoe @isaiah-coleman
I've set the fallback flag to true and now I have 0 errors creating 1000 tasks concurrently.

It seems that the problem is with the gax-nodejs library. Can you share the differences between the regular flow and the fallback flow, since I'm not familiar with this library at all?

@isaiah-coleman

@yossi-eynav I'm not sure I can help you. I'm not a contributor on the library, merely sharing what seems to have helped others. Can you explain how you used the fallback flag to fix the errors? It might be a solution I can use.

@alexander-fenster
Contributor

@yossi-eynav Setting fallback to true enables a different transport (the one originally intended for browsers): instead of using gRPC, it serializes your requests and sends them over a regular HTTP/1 connection with node-fetch to a different endpoint. When you enable fallback, you don't make any gRPC requests at all; it uses a totally different stack. That might serve as a good workaround, but we'd like to investigate what's going on.

Would it be possible for you to measure the time: how long does it take to fail with DEADLINE_EXCEEDED?

@xesunny
Author

xesunny commented Apr 25, 2020

@yossi-eynav could you please share an example of the code you are running, and let me know what environment you're running in, i.e., Cloud Functions?

@xesunny reading your original message, the error is happening on your local OSX machine?

Yes, It's happening on my Mac OSX machine but I have got this error on "Google Cloud instance" & "Google Cloud run" as well.

@xesunny
Author

xesunny commented Apr 25, 2020

@yossi-eynav Setting fallback to true enables a different transport (the one originally intended for browsers): instead of using gRPC, it serializes your requests and sends them over a regular HTTP/1 connection with node-fetch to a different endpoint. When you enable fallback, you don't make any gRPC requests at all; it uses a totally different stack. That might serve as a good workaround, but we'd like to investigate what's going on.

Would it be possible for you to measure the time: how long does it take to fail with DEADLINE_EXCEEDED?

Can you give an example of how to use "fallback: true"? I don't have much experience with gRPC.

@yossi-eynav

@alexander-fenster
~120s (created 1000 tasks in parallel)

@yossi-eynav

@xesunny

const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient({ fallback: true })

@isaiah-coleman

Are the requests over HTTP/1 regional? Electing for { fallback: true } gives me the following error:

{"code":3,"message":"Location 'us-east1' is not a valid location. Use ListLocations to list valid locations.","details":[]}

I know this is the correct region; when I switch out the region I receive the following:

{"code":3,"message":"Location must equal us-east1 because the App Engine app that is associated with this project is located in us-east1","details":[]}

For due diligence, here is me checking ListLocations:

$ gcloud tasks locations list
NAME      FULL_NAME
us-east1  projects/[project-name]/locations/us-east1

I'm grateful for any help that can be provided.

@yossi-eynav

@isaiah-coleman I'm on us-central1

@grangerjs

Watching here as I am seeing the exact same issue that @liorschwimmer saw with @google-cloud/secret-manager on a Fastify app in Cloud Run. It only seems to fail every couple of days or so. I can't reproduce it locally.

Cloud Run log: [screenshot]

Implementation: [screenshot]

Could also be user error... haven't ruled that out yet.

@PrajapatiDhara1510

Has anyone solved this issue? I'm facing the same thing. I need to create 500-600 tasks when a user is added. I am using a loop to create them, but I'm getting this error.

@sofisl
Contributor

sofisl commented Apr 28, 2021

Internal tracking number: 186681285

@superzadeh

superzadeh commented May 10, 2021

We had the same issue come up in the past 2-4 days. Any updates on this?

This is affecting us in production, and we can't deliver on our SLOs because of it. If there's no resolution (this issue has been open for over a year now), then the only option would be to move to a different infrastructure/cloud provider; any recommendations?

@shayne-lp

Not sure how the internal tracking number helps us sadly, @sofisl. Are there any details from that you can share publicly?

To echo the comment above, at a certain point one has to wonder just how important customers facing issues with core libs on GCP are to Google (this being unsolved and sparsely updated for over a year now), and whether this would happen on AWS (I seriously doubt it). I get that you are all busy with competing priorities and this may not be a quick fix, but in the meantime we're held hostage and holding the bag. Blunt updates welcomed!

@bcoe
Contributor

bcoe commented May 12, 2021

@shayne-lp @superzadeh, since @sofisl opened the ticket, it has been assigned to an engineer and is being investigated.

I'm sorry about how long this issue has been affecting folks, and will keep sharing information with the product team.

@superzadeh

superzadeh commented May 17, 2021

Any updates @bcoe or @sofisl? Or a workaround maybe? This is really starting to become a problem in our production environment, and I'm also starting to see this in Firestore (via the NPM package "firebase-admin": "9.6.0"):

Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
    at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/src/call.ts:81:24)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client.ts:334:36)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client-interceptors.ts:426:34)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client-interceptors.ts:389:48)
    at /app/node_modules/@grpc/grpc-js/src/call-stream.ts:249:24
    at processTicksAndRejections (internal/process/task_queues.js:79:11)
    at runNextTicks (internal/process/task_queues.js:66:3)
    at listOnTimeout (internal/timers.js:523:9)
    at processTimers (internal/timers.js:497:7)
Caused by: Error: 
    at WriteBatch.commit (/app/node_modules/@google-cloud/firestore/build/src/write-batch.js:413:23)
    at DocumentReference.set (/app/node_modules/@google-cloud/firestore/build/src/reference.js:350:14)

This all makes it look like I'm running code that is not production-ready. Maybe there's an earlier version of these NPM packages that is more stable/battle-tested?

@nikkanetiya

nikkanetiya commented May 19, 2021

@bcoe We'd appreciate any quick help on this one.

We are using GCP heavily for our infrastructure. Currently, in a new microservice hosted on GKE, we are getting this error many times. Earlier it was coming up for Firestore, Cloud Storage, and Cloud Tasks requests; we tried everything, updating all library versions and the Node version to 14.x.

But currently we are seeing this error for Cloud Tasks requests. Here are the numbers for this service:

Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
    at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/src/call.ts:81:24)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client.ts:334:36)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client-interceptors.ts:426:34)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/src/client-interceptors.ts:389:48)
    at /app/node_modules/@grpc/grpc-js/src/call-stream.ts:276:24
    at processTicksAndRejections (internal/process/task_queues.js:75:11)

Tasks created successfully: 131,148
Tasks that got an error: 3,504

In case you need anything from our side, let me know; I can provide many examples, traces, and the project ID if required. This is the only error preventing us from enabling this feature for everyone, and it is seriously affecting us in production.

We have P1 support as well, but I'm posting here in case I can provide a quick trace or detail if you need anything from us while checking this.

[screenshot]

@superzadeh

superzadeh commented May 19, 2021

Very similar symptoms to yours, @nikkanetiya: unreliability in most services we use on GCP that rely on gRPC (so, most of them actually).

We also ran into https://issuetracker.google.com/issues/158014637?pli=1 last year (which is still unresolved!) and already spent an unnecessary amount of time migrating our code from Cloud Functions to App Engine (which was the only option we had in the time available). I have the impression that the gRPC/core libs used by GCP services are fundamentally broken.

We have already spent too much time trying to find workarounds, and the fact that these issues stay open for over a year made us decide to move everything back to AWS (we use it for most of our products and tried GCP/Firebase for a new mobile product, which we'll now migrate to AWS). It is a significant time investment (again), coming at the worst possible timing (we're fundraising in 3 months), and I would have really preferred to find another resolution.

This stuff just doesn't happen on AWS (not affiliated with them; can't speak for Azure or other providers, but we're having a much better experience with AWS), and even if it does, you get daily updates on the status of the issue/investigation, and resolution happens much faster.

I am mostly sharing this here for transparency towards the community who might have to make hard decisions in order to deliver the bare minimum of UX to their users by providing a reliable infrastructure for their services.

@GlennChon

GlennChon commented May 20, 2021

See other replies below for better solutions

Might be helpful for some.
My setup: cloud function as a webhook to schedule multiple tasks that call a new cloud function webhook with dynamic parameters

I had this issue with creating multiple tasks in the queue and it looks like I wasn't handling the promises properly.

Solution:

import { createTask } from 'wherever/your/cloud-task-function/is';

let arr = [item];
let promises = [];
arr.forEach((item) => {
  const { prop1, prop2 /* , ...other props */ } = item;
  promises.push(createTask({ prop1, prop2 }));
});
return Promise.all(promises).then(() => {
  return res.status(201).send('Task Generator Initialized');
}).catch((err) => {
  return res.status(500).send(err.message);
});

Returning Promise.all lets me schedule all of them instead of getting the deadline exceeded error. I've also set the function timeout to the 9-minute max, since my needs require an unknown number of task creations.

@bcoe
Contributor

bcoe commented May 21, 2021

@nikkanetiya @superzadeh are you scheduling many tasks concurrently? If you perform Promise.all with a large number of tasks, you can run into contention on resources, which can lead to issues like DEADLINE_EXCEEDED.

If you have many tasks to enqueue in one job, an approach I take is as follows:

const tasksToCreate = [...]; // an array with a large number of tasks to enqueue.
const WORK_SIZE = 32; // some size of work to perform.
while (tasksToCreate.length > 0) {
  const work = tasksToCreate.splice(0, WORK_SIZE).map(() => {
    return createTaskPromise();
  });
  await Promise.all(work);
}

@bcoe
Contributor

bcoe commented May 21, 2021

@xesunny I believe this approach, in my last comment, of chunking up work will likely work for your use case too.

You do not need to perform one task at a time, but you should pick some upper bound on the amount of work processed concurrently, and you should await completion; otherwise you can end up with an unhandled rejection.

@superzadeh

superzadeh commented May 21, 2021

@nikkanetiya @superzadeh are you scheduling many tasks concurrently? If you perform Promise.all with a large number of tasks, you can run into contention on resources, which can lead to issues like DEADLINE_EXCEEDED.

If you have many tasks to enqueue in one job, an approach I take is as follows:

const tasksToCreate = [...]; // an array with a large number of tasks to enqueue.
const WORK_SIZE = 32; // some size of work to perform.
while (tasksToCreate.length > 0) {
  const work = tasksToCreate.splice(0, WORK_SIZE).map(() => {
    return createTaskPromise();
  });
  await Promise.all(work);
}

Thanks for the reply 🙏 That is what we're doing, except we use batch sizes of 100. I'll try tuning it down to 32, but overall this feels pretty limited if even a batch size of 100 causes contention on the system. What I've typically found with similar queueing services is the ability to queue batches: the client sends X tasks to queue, and the contention/batching is managed by the system itself (in this case, Cloud Tasks). This feels like a critical missing feature whose burden is now being pushed onto users.

How do you recommend scaling this approach? We have larger batches (currently up to 20k, but we plan to scale at least 10x, maybe 50x, towards December this year), and queueing them "32 by 32" is going to be too slow. We currently queue the tasks through a fan-out Cloud Scheduler job calling App Engine, which creates the tasks. I fear that with a batch size of 32, the App Engine request will time out before it finishes creating all the tasks.

Any ideas?

@nikkanetiya

nikkanetiya commented May 21, 2021

@nikkanetiya @superzadeh are you scheduling many tasks concurrently? If you perform Promise.all with a large number of tasks, you can run into contention on resources, which can lead to issues like DEADLINE_EXCEEDED.

If you have many tasks to enqueue in one job, an approach I take is as follows:

const tasksToCreate = [...]; // an array with a large number of tasks to enqueue.
const WORK_SIZE = 32; // some size of work to perform.
while (tasksToCreate.length > 0) {
  const work = tasksToCreate.splice(0, WORK_SIZE).map(() => {
    return createTaskPromise();
  });
  await Promise.all(work);
}

@bcoe Thanks for your reply.

We are using this in a worker listening for messages from a Pub/Sub queue, so we are not always adding hundreds of tasks at the same time and won't be able to chunk the work; honestly, I think a single pod currently processes only 100 messages at a time (and not every message results in a createTask call).

And just to give you some more context: if we always got a DEADLINE_EXCEEDED error on failures, we could add some backoff logic to retry. But the real problem is that many tasks are actually created in Cloud Tasks even though we got DEADLINE_EXCEEDED, so we have no way of knowing on our end whether a given task was created and will be executed.

@bcoe
Contributor

bcoe commented May 21, 2021

@nikkanetiya @superzadeh have you tried tuning timeout in the call options:

const call_options = {
    timeout: 200000 // millis
}
const response = await client.createTask(request, call_options);

I believe this can be overridden on a request by request basis, and combined with chunking up your work, you may find that you avoid the DEADLINE_EXCEEDED issue.

@smaeda-ks

https://github.com/googleapis/nodejs-tasks/blob/v2.3.1/src/v2beta2/cloud_tasks_client.ts#L3006-L3022

I've been calling this method explicitly after task creation (e.g., using Promise.all()), and so far I am not seeing the DEADLINE_EXCEEDED error as I used to.

@superzadeh

superzadeh commented May 24, 2021

@nikkanetiya @superzadeh have you tried tuning timeout in call options;

const call_options = {
    timeout: 200000 // millis
}
const response = await client.createTask(request, call_options);

I believe this can be overridden on a request by request basis, and combined with chunking up your work, you may find that you avoid the DEADLINE_EXCEEDED issue.

I will give this a try @bcoe, but this raises even more concerns regarding scalability: if we are only creating tasks 32 at a time, with a timeout of up to 200000 ms, how are we supposed to scale this approach? We currently have batches of 20k tasks, and plan to grow that to at least 100k within a few months. What is your recommendation here?

Also, if you have any status updates on the internal tracking number 186681285, it would go a long way to create transparency and help build trust in GCP's ability to deliver a reliable platform (is it sitting somewhere in a backlog, in progress, stuck without being able to reproduce, found the issue and working on a fix?)

@superzadeh

superzadeh commented May 29, 2021

@nikkanetiya @superzadeh have you tried tuning timeout in call options;

const call_options = {
    timeout: 200000 // millis
}
const response = await client.createTask(request, call_options);

I believe this can be overridden on a request by request basis, and combined with chunking up your work, you may find that you avoid the DEADLINE_EXCEEDED issue.

We gave this a shot @bcoe, and when we add the call_options as you suggested, all the tasks that we create turn into POST instead of GET 🤷‍♂️.
I assume that passing callOptions somehow overrides the method we define on the task (appEngineHttpRequest.httpMethod). I tried to find documentation for the call options, but couldn't find anything in the TS definition that should override the HTTP method. Because of this, we couldn't try it in an environment with enough scale to see the DEADLINE_EXCEEDED issue.

Here's the TS definition I have for call options (in gax.d.ts)

export interface CallOptions {
    timeout?: number;
    retry?: Partial<RetryOptions> | null;
    autoPaginate?: boolean;
    pageToken?: string;
    pageSize?: number;
    maxResults?: number;
    maxRetries?: number;
    otherArgs?: {
        [index: string]: any;
    };
    bundleOptions?: BundleOptions | null;
    isBundling?: boolean;
    longrunning?: BackoffSettings;
    apiName?: string;
    retryRequestOptions?: RetryRequestOptions;
}

Here's how we create the tasks; do you see any reason why they end up being POST instead of GET?

private taskClient: CloudTasksClient;

constructor() {
  // We only create the task client once
  this.taskClient = new CloudTasksClient();
}

public async queueTask(uri: string, queueName: string) {
  const project = process.env.GCLOUD_PROJECT!;
  const location = process.env.QUEUE_LOCATION!;
  // we reuse the task client
  const parent = this.taskClient.queuePath(project, location, queueName);

  const task: protos.google.cloud.tasks.v2.ITask = {
    appEngineHttpRequest: {
      httpMethod: "GET",
      relativeUri: uri,
    },
  };

  // Send create task request.
  const request: protos.google.cloud.tasks.v2.ICreateTaskRequest = {
    parent,
    task,
  };

  // Creating the task like this results in the task sent to our AppEngine as POST ❌
  const callOptions = {
    timeout: 200000, // millis
  };
  const [response] = await this.taskClient.createTask(request, callOptions);

  // Creating the task like this results in the task sent to our AppEngine as GET ✅
  // const [response] = await this.taskClient.createTask(request);

  //...
}

@superzadeh

superzadeh commented Jun 3, 2021

I'm really sorry if I come off as pushy, but we're still having this error in production. Any updates you can share? Our leadership team will kick off a migration to AWS if I come back to them saying there are still no updates on this.

@bcoe
Contributor

bcoe commented Jun 7, 2021

@superzadeh call_options should have no effect on whether the event dispatched to you by Cloud Tasks is ultimately a POST or GET; could you please open a separate issue describing this?

EDIT: by having this separate ticket to point people towards, I can more easily draw other people's attention to it internally.

@bcoe
Contributor

bcoe commented Jul 16, 2021

@xesunny I'm doing some cleanup of issues, and am closing this as I haven't heard back from you in a while.

If folks are continuing to bump into problems, please don't hesitate to open a new issue, including a code snippet, and your gcloud project ID.

@bcoe bcoe closed this as completed Jul 16, 2021
@bcoe
Contributor

bcoe commented Jul 29, 2021

@shayne-lp if you open a new issue with a code snippet that demonstrates the issue you're bumping into, along with your project ID, we can share it with the internal engineering team.

If you don't feel comfortable sharing a project ID on GitHub, you can send an email to bencoe [at] google.com.

@jorgemgr94

jorgemgr94 commented Oct 9, 2022

People who are still running into this issue, consider the workaround mentioned by @yossi-eynav.

@xesunny

const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient({ fallback: true })

This is what you will be enabling: #397 (comment)

Currently, we're enqueuing batches of 500 tasks concurrently without any problem 👍
