
Error setting autoscaling_config on bigtable_instance #11988

Closed
caieo opened this issue Jun 29, 2022 · 2 comments
caieo commented Jun 29, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Affected Resource(s)

  • google_bigtable_instance

Expected Behavior

Users should be able to set `autoscalingConfig` without issue.

Actual Behavior

When setting both `autoscalingConfig` and `numNodes`, this underlying API error is returned:

`Error creating instance. rpc error: code = Canceled desc = Operation successfully rolled back : Both manual scaling (serve_nodes) and autoscaling (cluster_autoscaling_config) enabled. Exactly one must be set for CreateInstance/CreateCluster`

(`numNodes` looks like it maps to `serve_nodes`.)

However, if you remove `numNodes`, we run into the validation function that requires `numNodes` to have a value of at least one:

`Error: cluster.numNodes cannot be less than 1`
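For illustration, a minimal config that reproduces the first error would look something like the sketch below (instance and cluster names are placeholders; the snake_case field names follow the `google_bigtable_instance` schema, which the camelCase `autoscalingConfig`/`numNodes` above appear to correspond to):

```hcl
resource "google_bigtable_instance" "example" {
  name = "example-instance"

  cluster {
    cluster_id   = "example-cluster"
    zone         = "us-central1-b"
    storage_type = "SSD"

    # Setting both of these triggers the API error: the service requires
    # exactly one of manual scaling (serve_nodes) or autoscaling.
    num_nodes = 3

    autoscaling_config {
      min_nodes  = 1
      max_nodes  = 5
      cpu_target = 50
    }
  }
}
```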

Have other TF customers run into this?

References

@caieo caieo added the bug label Jun 29, 2022
@rileykarson rileykarson self-assigned this Jun 29, 2022
rileykarson (Collaborator) commented:
I took a look at this and don't believe it can be replicated when using the provider directly, but may be an issue for indirect users of the provider. We have a few tests that exercise various behaviours on the field, all of which succeed:

> When setting both `autoscalingConfig` and `numNodes`, this underlying API error is returned

I'm not sure why this isn't a `ConflictsWith` here, honestly! Specifying both in config would be unusual: you'd be telling Terraform to set an autoscaling policy, but also to force the cluster to exactly `num_nodes` nodes (from which it could then scale). In theory this is what happens on updates anyway (depending on the implementation of the client library we're using), but for most plans it will just set the same value that is already true.

> However, if you remove `numNodes`, we run into the validation function that requires `numNodes` to have a value of at least one: `Error: cluster.numNodes cannot be less than 1`

I think the `ValidateFunc` is a positive here, as it stops zero or negative values, both of which are invalid. However, values in Terraform can be a little unusual: we can't tell in the provider's implementation whether a value is zero or unset, but Terraform Core, which runs the `ValidateFunc`, can. In practice that means that if a user has not defined a value in config, validation will not run even if the value is in state.

These are a fiddly set of fields and can cause issues for indirect users. If you're able to, I'd recommend working on a patched version of the provider that removes this validation. It's an improvement for direct users but has the negative impact you've seen when Terraform's rules about emptiness/nilness aren't followed exactly (and those rules aren't formally recorded anywhere I'm aware of, particularly due to the number of shims between Terraform Core and the SDK).
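Putting that together, a config that should work when using the provider directly is a cluster with `autoscaling_config` and `num_nodes` omitted entirely, since the `ValidateFunc` only runs on values actually set in config. A sketch (names are placeholders; field names follow the `google_bigtable_instance` schema):

```hcl
resource "google_bigtable_instance" "example" {
  name = "example-instance"

  cluster {
    cluster_id = "example-cluster"
    zone       = "us-central1-b"

    # num_nodes is omitted: leaving it unset in config means the
    # "cannot be less than 1" validation never runs against it.
    autoscaling_config {
      min_nodes  = 1
      max_nodes  = 5
      cpu_target = 60
    }
  }
}
```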


github-actions bot commented Aug 1, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 1, 2022