google_vpc_access_connector getting redeployed on every terraform apply without changes in configuration #9228
Comments
For minThroughput and maxThroughput, the API takes either both or neither, not just one of them. If neither is sent, the API adds defaults in the response.
Thank you for looking into this. I think the GCP REST API itself is behaving correctly for what I have been describing; the issue seems to be in the Terraform resource. Please find additional details below.

Terraform resource behavior

Block added to the TF file; network and subnet were already present and previously created with TF ("project" modified after pasting).

Up to this point the behavior was as expected.

The plan states max_throughput = 300 -> 1000 # forces replacement even though no value was specified for max_throughput in the TF file; Terraform seems to assume the default value of 1000. I would expect Terraform not to compare max_throughput as part of the state if it is not specified. According to the API documentation, either throughput or number of instances should be specified for scaling.

Workaround

Alternatively, defining the resource using only throughput also works fine.
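The throughput-only variant mentioned above might look like the following sketch (all names, region, and CIDR values are illustrative placeholders, not taken from the original configuration):

```hcl
resource "google_vpc_access_connector" "connector" {
  name          = "example-connector"
  region        = "europe-west1"
  network       = "example-network"
  ip_cidr_range = "10.8.0.0/28"

  # Specifying only throughput-based scaling avoids the spurious
  # max_throughput = 300 -> 1000 diff described above.
  min_throughput = 200
  max_throughput = 300
}
```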
VPC REST API behavior (v1beta1)

The API behavior you describe deviates in one point from what I see. The request only seems to fail with the message you saw if a connector is already present in the specified subnet. Sending

leads to
The API returns 300 for maxThroughput in that case.
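To illustrate the defaulting behavior discussed in this exchange: a connector created with only instance-based scaling appears to come back from the API with the throughput fields filled in. The following response body is a hypothetical sketch (names and values are assumptions, based on the apparent mapping of 100 throughput units per instance, and are not copied from the issue):

```json
{
  "name": "projects/example-project/locations/europe-west1/connectors/example-connector",
  "network": "example-network",
  "ipCidrRange": "10.8.0.0/28",
  "minInstances": 2,
  "maxInstances": 3,
  "minThroughput": 200,
  "maxThroughput": 300,
  "state": "READY"
}
```

Terraform then compares the API-populated maxThroughput (300) against the provider-side default (1000), which is what produces the permanent diff.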
Community Note

If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version
Terraform v0.15.3
on linux_amd64
Affected Resource(s)

google_vpc_access_connector
Terraform Configuration Files
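The original configuration was not captured in this report. A minimal instance-based connector that should reproduce the perma-diff might look like the following sketch (resource name, project, region, network, and CIDR are placeholders, not from the original issue):

```hcl
# Hypothetical repro configuration (all names and values are placeholders).
resource "google_vpc_access_connector" "connector" {
  name          = "example-connector"
  project       = "example-project"
  region        = "europe-west1"
  network       = "example-network"
  ip_cidr_range = "10.8.0.0/28"

  # Only instance-based scaling is specified; no *_throughput values.
  # With max_instances < 10, every plan shows a forced replacement.
  min_instances = 2
  max_instances = 3
}
```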
Debug Output
Panic Output
Expected Behavior
The google_vpc_access_connector resource should work fine when "min_instances" and "max_instances" are provided and "min_throughput" / "max_throughput" are not. As an alternative to the parameter set "min_instances" / "max_instances", it should be possible to use "min_throughput" / "max_throughput".
According to the documentation
https://cloud.google.com/sdk/gcloud/reference/beta/compute/networks/vpc-access/connectors/create
At most one of these may be specified:
Actual Behavior
When only the parameters "min_instances" / "max_instances" are specified in the Terraform configuration, Terraform seems to assume a default value of 1000 for "max_throughput" and therefore always sees a deviation in state whenever "max_instances" < 10.
The default "max_throughput" value of 1000 corresponds to a "max_instances" value of 10. Whenever "max_instances" has a value other than 10, the serverless VPC access connector gets redeployed on every "terraform apply".
When only the parameters "min_throughput" / "max_throughput" are used instead of "min_instances" / "max_instances", everything works as expected and no redeployment happens.
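A possible mitigation for users who want to keep instance-based scaling, not mentioned in the original report and therefore an untested assumption, is a lifecycle ignore_changes block telling Terraform to disregard the server-populated throughput fields (names and values below are placeholders):

```hcl
resource "google_vpc_access_connector" "connector" {
  name          = "example-connector"
  region        = "europe-west1"
  network       = "example-network"
  ip_cidr_range = "10.8.0.0/28"

  min_instances = 2
  max_instances = 3

  # Assumption: ignoring the API-populated throughput defaults may
  # suppress the spurious replacement; verify against your provider version.
  lifecycle {
    ignore_changes = [min_throughput, max_throughput]
  }
}
```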
Steps to Reproduce
Terraform apply needs to be executed twice with the above configuration.
terraform apply
terraform apply
Important Factoids
References