google_bigquery_table cannot create a table with num_hash partition start range of 0 #6525
Comments
@amerenda can you share your
@amerenda is this still an issue for you?
The start field appears to be dropped from the request payload if its value is set to 0. It is a required field.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Community Note
If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
Terraform Version
Terraform v0.12.24
Affected Resource(s)
google_bigquery_table
Terraform Configuration Files
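The original configuration was not captured in this extract. A minimal sketch that should reproduce the issue (the dataset, table, schema, and the range end/interval values are assumptions; only the num_hash field and start = 0 come from the report):

```hcl
resource "google_bigquery_dataset" "example" {
  dataset_id = "example_dataset"
  location   = "US"
}

resource "google_bigquery_table" "example" {
  dataset_id = google_bigquery_dataset.example.dataset_id
  table_id   = "example_table"

  # Single integer column used as the range-partitioning field.
  schema = <<EOF
[
  {
    "name": "num_hash",
    "type": "INTEGER",
    "mode": "REQUIRED"
  }
]
EOF

  range_partitioning {
    field = "num_hash"

    range {
      start    = 0   # a start of 0 triggers the reported failure
      end      = 100
      interval = 10
    }
  }
}
```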
Debug Output
https://gist.github.com/amerenda/26cbb92d0727d38188872284f80a9c81
Panic Output
N/A
Expected Behavior
Terraform should create a range-partitioned BigQuery table in the provided dataset, with num_hash as the partitioning field and a range start of 0.
Actual Behavior
Terraform fails to create the table (see the debug output above for the error). The error does not occur, and the apply succeeds, if the range start is set to 1.
Steps to Reproduce
terraform apply
Important Factoids
If the table is created manually with a range start of 0, this works as expected.
Workaround
Add range_partitioning to ignore_changes.
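A sketch of that workaround, assuming the resource from the configuration above; it tells Terraform to ignore drift on the range_partitioning block so a table created manually with start = 0 is not re-created:

```hcl
resource "google_bigquery_table" "example" {
  # ... same arguments as in the configuration above ...

  lifecycle {
    # Ignore changes to range_partitioning so Terraform does not try to
    # reconcile the start = 0 setting on a manually created table.
    ignore_changes = [range_partitioning]
  }
}
```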