Always set destination table in BigQuery query config in Feast Batch Serving so it can handle large results #392
Conversation
…n work with large results. Refer to: https://cloud.google.com/bigquery/quotas#query_jobs, the maximum response-size bullet point.
gRPC clients such as the Feast Python SDK will usually show only the error description, not the underlying error cause.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: davidheryanto. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Are you sure you shouldn't configure the …
Output rows may not have the same order as the requested entity rows.
We only need to set …
/lgtm
* Implement project namespacing (without auth)
* Update Protos, Java SDK, Golang SDK to support namespacing
* Fixed Python SDK to support project namespacing protos
* Add integration with projects, update code to be compliant with new protos
* Move name, version and project back to spec
* Update Feast Core and Feast Ingestion to support project namespacing
* Update Core and Ingestion based on refactored FeatureSet proto
* Remove entity dataset validation
* Register feature sets first to speed up tests
* Apply PR #392
* Apply spotless
* Order test output

Co-authored-by: Chen Zhiling <[email protected]>
This pull request updates the query configuration in Feast Serving for the BigQuery store so that query results are always written explicitly to a destination table.
BigQuery enforces a default maximum response size of 10 GB (compressed) when query results are written to a temporary table managed by BigQuery. Providing an explicit destination table removes this limit.
These destination tables are only intermediate tables used by Feast Batch Serving to build the final feature output, so Feast sets them to expire after 1 day by default (BigQuery automatically deletes tables once they expire).
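As a rough sketch of the idea described above (not Feast's actual implementation, which lives in the Java serving module), the job configuration might look like the following, assuming the BigQuery REST API's `configuration.query` JSON shape. The `make_query_config` and `expiration_ms` helpers, the table-naming scheme, and the project/dataset names are all illustrative:

```python
from datetime import datetime, timedelta, timezone

def make_query_config(sql, project, dataset, table_prefix="temp_"):
    """Build a BigQuery job configuration (REST API JSON shape) that
    writes query results to an explicit destination table, so results
    larger than the ~10 GB temporary-table response cap can be handled."""
    # Unique table name per query; the naming scheme is illustrative.
    suffix = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")
    return {
        "query": {
            "query": sql,
            "useLegacySql": False,
            # Explicit destination table lifts the response-size cap.
            "destinationTable": {
                "projectId": project,
                "datasetId": dataset,
                "tableId": f"{table_prefix}{suffix}",
            },
            "writeDisposition": "WRITE_TRUNCATE",
        }
    }

def expiration_ms(days=1):
    """Expiry timestamp (ms since epoch) to set on the intermediate
    table so BigQuery auto-deletes it; 1 day by default, matching the
    behaviour described in this PR."""
    expiry = datetime.now(timezone.utc) + timedelta(days=days)
    return int(expiry.timestamp() * 1000)
```

The expiry would then be applied to the destination table (for example via a table update after the query job is created), leaving cleanup to BigQuery rather than to Feast itself.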