bigquery job.promise() is not documented? and always polling at a fixed interval of 500ms? #2710

Closed
c0b opened this issue Oct 31, 2017 · 8 comments
Labels: api: bigquery, type: question

@c0b (Contributor) commented Oct 31, 2017

From the BigQuery job API docs I was only aware of the `complete` event for getting a callback when a job finishes; only recently did I find, from some shared gist code, that a job.promise() method is also available. Our application uses node v6 and recently upgraded to v8, so the promise API fits the code better and works with the async/await model. Shouldn't it at least be documented?
https://googlecloudplatform.github.io/google-cloud-node/#/docs/bigquery/0.9.6/bigquery/job
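
For reference, a minimal sketch (my own, assuming the 0.x @google-cloud/bigquery API; runQuery is just an illustrative wrapper) of how job.promise() composes with async/await:

const BigQuery = require('@google-cloud/bigquery');
const bigquery = BigQuery();

// minimal sketch: job.promise() resolves once the job completes,
// so it composes naturally with async/await
async function runQuery(query) {
  const [ job ] = await bigquery.startQuery({ query });
  await job.promise(); // resolves when the job is DONE, rejects on job error
  const [ rows ] = await job.getQueryResults();
  return rows;
}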

On the other hand, I spent some time figuring out how this default job.promise() works. I traced the call down to the Operation's setTimeout(self.startPolling, ...) of every 500ms, so it's polling at a hardcoded interval of 500ms? Meanwhile, the best practices for many gcloud products prefer a backoff strategy when retrying:
https://github.com/GoogleCloudPlatform/google-cloud-node/blob/master/packages/common/src/operation.js#L184
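
In spirit, the linked loop looks like this (my paraphrase, not the verbatim source; operation and operationIsDone are stand-ins for the actual objects):

// paraphrase of the linked code: the Operation re-polls its metadata
// on a fixed 500ms timer, with no growth between attempts
function startPolling() {
  operation.getMetadata(function (err, metadata) {
    if (err) return operation.emit('error', err);
    if (operationIsDone(metadata)) return operation.emit('complete', metadata);
    setTimeout(startPolling, 500); // hardcoded interval
  });
}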

This behavior of polling every 500ms may be acceptable (or even wanted) in some cases, but for our ETL scripts, which run hundreds of query jobs concurrently in BATCH mode, it is just not efficient. For this ETL purpose I have a piece of code that has been in production for a long while and implements the backoff strategy; it supports an optional options object with parameters waitbegin (defaults to 500ms) and waitmax (defaults to 10s):

// loop on a BigQuery job until it's 'DONE' or errored,
//   using a backoff strategy: waiting starts at 500ms,
//     then increases by half each time, up to a max of 10s
function waitJobFinish(job, {waitbegin=500, waitmax=10000, initialState='UNKNOWN'} = {}) {
  return new Promise((fulfilled, rejected) =>
    function loop(retries=0, wait=waitbegin, state=initialState) {
      job.getMetadata()
        .then(([ metadata, apiResponse ]) => {
          if (metadata.status.state !== state) {
            console.log(`Job ${metadata.id} state transitioned from ${state} to ${metadata.status.state}, at ${(new Date).toJSON()} after ${retries} status checks.`);
            state = metadata.status.state;
          }

          if (metadata.status.errorResult)
            return rejected(metadata.status.errorResult);

          if (metadata.status.state === 'DONE')
            return fulfilled([ metadata, apiResponse, retries ]);

          // re-check after `wait` ms, growing the wait by half up to waitmax
          setTimeout(loop, wait, retries+1, Math.min(waitmax, (wait+=wait/2)), state);
        })
        .catch(rejected); // reject on getMetadata() failure too
    }() // kick off immediately with the defaults (0, waitbegin, initialState)
  );
}

With this API we can write code like the following, similar to job.promise(), but internally it retries the metadata retrieval with a backoff strategy:

  bigquery.startQuery({
    query: '...',
    // more options
  })
  .then(([ job ]) => waitJobFinish(job))
  .then(([ metadata, apiResponse, retries ]) => { ... })

or with async/await:

  // in an async function
  const [ job ] = await bigquery.startQuery(...);
  const [ metadata, apiResponse, retries ] = await waitJobFinish(job);
  // ...

The console.log lines give us visibility into how healthily each job runs, as its state transitions from 'PENDING' to 'RUNNING' to 'DONE'.

I'm not sure whether this strategy can go into the Operation for all the @google-cloud/... packages, but it at least works for BigQuery jobs; let me know if you like the code.

@callmehiphop added the api: bigquery and type: question labels on Nov 1, 2017
@callmehiphop (Contributor)

@stephenplusplus are there any reasons why we hardcode the polling interval and/or don't use a backoff in operations?

@stephenplusplus (Contributor)

I think that makes sense, although I can't find any official recommendations in the Google API docs. Do you know of any, @c0b, or of best practices in general for what our default should be?

Related: This was brought up in #1886, and discarded because GAPIC APIs already do backoff and allow configuration of retry parameters. Maybe we should look at what they do and copy it here.

@c0b (Contributor, Author) commented Nov 3, 2017

I did a search for "best practices backing off strategy site:cloud.google.com" and got this doc on storage; indeed, more places (like the error troubleshooting guides) mention the exponential backoff strategy:

https://cloud.google.com/storage/docs/exponential-backoff

If you get an error, use exponential backoff to avoid problems due to large traffic bursts.

Another reason we need this is that the BigQuery API request limit of 100 requests/s isn't that big. My client-side ETL script constantly runs some hundreds of BATCH mode queries; polling each of them at a fixed 500ms interval is already several hundred requests per second, so with job.promise() that could fill up the 100/s quota very quickly and trigger rateLimitExceeded errors.

https://cloud.google.com/bigquery/quotas

API requests per second, per user: If you make more than 100 requests per second, throttling might occur.

@c0b (Contributor, Author) commented Nov 3, 2017

The (wait+=wait/2) isn't exactly the exponential backoff described in the storage/docs/exponential-backoff link above; it is effectively (wait*=1.5), but it's easy to change to Math.min(waitmax, wait*=2). The two schedules are compared below.
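
For illustration, here are the first few delays each growth factor produces, starting at 500ms and capped at 10s:

// illustrative only: print the delay schedule for each growth factor
for (const mult of [1.5, 2]) {
  const delays = [];
  for (let wait = 500; delays.length < 8; wait = Math.min(10000, wait * mult)) {
    delays.push(Math.round(wait));
  }
  console.log(`x${mult}: ${delays.join(', ')}ms`);
}
// x1.5: 500, 750, 1125, 1688, 2531, 3797, 5695, 8543ms
// x2:   500, 1000, 2000, 4000, 8000, 10000, 10000, 10000ms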

@stephenplusplus (Contributor)

The only thing I'm stuck on is that we are talking about polling, as opposed to error handling. We aren't retrying in the traditional sense, where the first request failed and so we must try again; we are only making a follow-up request because the job hasn't completed yet. That's why I'm drawn to the constant interval, although I'm completely fine with going with a value other than 500ms, and/or allowing it to be configurable.

I'm also fine with introducing the backoff strategy that GAPIC uses, since it's probably safe to assume if they're doing it, it's the right thing for us to do. These are the options (source):

| Name | Type | Description |
| --- | --- | --- |
| initialRetryDelayMillis | number | the initial delay time, in milliseconds, between the completion of the first failed request and the initiation of the first retrying request. |
| retryDelayMultiplier | number | the multiplier by which to increase the delay time between the completion of failed requests, and the initiation of the subsequent retrying request. |
| maxRetryDelayMillis | number | the maximum delay time, in milliseconds, between requests. When this value is reached, retryDelayMultiplier will no longer be used to increase delay time. |
| initialRpcTimeoutMillis | number | the initial timeout parameter to the request. |
| rpcTimeoutMultiplier | number | the multiplier by which to increase the timeout parameter between failed requests. |
| maxRpcTimeoutMillis | number | the maximum timeout parameter, in milliseconds, for a request. When this value is reached, rpcTimeoutMultiplier will no longer be used to increase the timeout. |
| totalTimeoutMillis | number | the total time, in milliseconds, starting from when the initial request is sent, after which an error will be returned, regardless of the retrying attempts made meanwhile. |

These are the default settings for longrunning operations (source):

{
  initialRetryDelayMillis: 100,
  retryDelayMultiplier: 1.3,
  maxRetryDelayMillis: 60000,
  initialRpcTimeoutMillis: null,
  rpcTimeoutMultiplier: null,
  maxRpcTimeoutMillis: null,
  totalTimeoutMillis: null
}
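
For illustration, a minimal sketch of how a poller could consume these settings (the poll wrapper and checkDone callback are hypothetical, not GAPIC's actual implementation):

// hypothetical sketch: polling with GAPIC-style backoff settings;
// checkDone is a stand-in that resolves to true once the job is DONE
function poll(checkDone, settings) {
  const start = Date.now();
  let delay = settings.initialRetryDelayMillis;
  return new Promise((resolve, reject) => {
    (function tick() {
      if (settings.totalTimeoutMillis !== null &&
          Date.now() - start > settings.totalTimeoutMillis) {
        return reject(new Error('total timeout exceeded'));
      }
      checkDone().then(done => {
        if (done) return resolve();
        setTimeout(tick, delay);
        // grow the delay until maxRetryDelayMillis is reached
        delay = Math.min(delay * settings.retryDelayMultiplier,
                         settings.maxRetryDelayMillis);
      }, reject);
    })();
  });
}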

@callmehiphop WDYT?

@callmehiphop (Contributor)

I think that's fine. Should we make a setter of some kind to allow users to configure these options?

operation
  .setPollingOptions({ retryDelayMultiplier: 2 })
  .on('error', console.error)
  .on('complete', () => console.log('yay'));

?

@stephenplusplus (Contributor)

That sounds nice. How would we handle the methods that handle job polling internally, e.g. https://github.com/GoogleCloudPlatform/google-cloud-node/blob/7160d17a3b3c0f3af4bff900cc9ac64831564498/packages/bigquery/src/table.js#L377? Accept a new option, like metadata.backoffSettings?

@stephenplusplus (Contributor)

This issue was moved to googleapis/nodejs-bigquery#13
