
google_vpc_access_connector getting redeployed on every terraform apply without changes in configuration #9228

Closed
fhaubner opened this issue May 25, 2021 · 4 comments · Fixed by GoogleCloudPlatform/magic-modules#4823, #9282 or hashicorp/terraform-provider-google-beta#3294


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.15.3
on linux_amd64

  • provider registry.terraform.io/hashicorp/google v3.69.0
  • provider registry.terraform.io/hashicorp/google-beta v3.69.0

Affected Resource(s)

  • google_vpc_access_connector

Terraform Configuration Files

resource "google_compute_subnetwork" "custom_test" {
  provider      = google-beta
  name          = "vpc-con"
  ip_cidr_range = "10.2.0.0/28"
  region        = "europe-west3"
  network       = google_compute_network.custom_test.id
}

resource "google_compute_network" "custom_test" {
  provider                = google-beta
  name                    = "vpc-con"
  auto_create_subnetworks = false
}

resource "google_vpc_access_connector" "test-vpc-connector" {
  provider      = google-beta
  name          = "test-vpc-connector"
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"
  min_instances = 2
  max_instances = 3
  project = "xyz"
  region  = "europe-west3"
}

Debug Output

Panic Output

Expected Behavior

The google_vpc_access_connector resource should work fine when "min_instances" and "max_instances" are provided and "min_throughput" and "max_throughput" are not. As an alternative to the parameter set "min_instances" / "max_instances", it should be possible to use "min_throughput" / "max_throughput".

According to the documentation
https://cloud.google.com/sdk/gcloud/reference/beta/compute/networks/vpc-access/connectors/create
At most one of these may be specified:

  • Scaling settings of a VPC Access Connector can be specified in terms of number of Google Compute Engine VM instances underlying the connector autoscaling group.
  • Scaling settings of a VPC Access Connector can be specified in terms of throughput.

Actual Behavior

When only the parameters "min_instances" / "max_instances" are specified in the Terraform configuration, Terraform seems to assume a default value of 1000 for "max_throughput" and thus always sees a deviation in state whenever "max_instances" < 10.
The default "max_throughput" of 1000 corresponds to a "max_instances" of 10. Whenever "max_instances" has a value other than 10, the Serverless VPC Access connector gets redeployed on every "terraform apply".

When using only the parameters "min_throughput" / "max_throughput" instead of "min_instances" / "max_instances", it works as expected and no redeployment happens.
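A possible mitigation until the provider default is corrected (not part of the original report; a minimal sketch assuming Terraform's lifecycle ignore_changes meta-argument is acceptable for this resource) is to tell Terraform to ignore the throughput values that the provider and API fill in:

resource "google_vpc_access_connector" "test-vpc-connector" {
  provider      = google-beta
  name          = "test-vpc-connector"
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type  = "e2-standard-4"
  min_instances = 2
  max_instances = 3
  project       = "xyz"
  region        = "europe-west3"

  # Hypothetical workaround: ignore the server-populated throughput values
  # so the defaulted max_throughput does not force a replacement on re-apply.
  lifecycle {
    ignore_changes = [min_throughput, max_throughput]
  }
}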

Steps to Reproduce

Terraform apply needs to be executed twice with the above configuration.

  1. terraform apply
  2. terraform apply

Important Factoids

References

  • #0000
fhaubner added the bug label May 25, 2021
edwardmedia self-assigned this May 25, 2021
edwardmedia (Contributor) commented May 25, 2021

For minThroughput and maxThroughput, the API takes either both or neither, not just one of them. If neither is sent, the API adds defaults in the response.

fhaubner (Author) commented

Thank you for looking into this. I think the GCP REST API itself is behaving correctly for what I have been describing; the issue seems to be in the Terraform resource. Please find additional details below.

Terraform resource behavior

Block added to the TF file; the network and subnet were already present and previously created with TF; "project" was modified after pasting.

resource "google_vpc_access_connector" "test-vpc-connector" {
  provider      = google-beta
  name          = "test-vpc-connector"
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"
  min_instances = 2
  max_instances = 3
  project = "xyz"
  region  = "europe-west3"
}

terraform apply

  # google_vpc_access_connector.test-vpc-connector will be created
  + resource "google_vpc_access_connector" "test-vpc-connector" {
      + id             = (known after apply)
      + machine_type   = "e2-standard-4"
      + max_instances  = 3
      + max_throughput = 1000
      + min_instances  = 2
      + min_throughput = 200
      + name           = "test-vpc-connector"
      + project        = "xyz"
      + region         = "europe-west3"
      + self_link      = (known after apply)
      + state          = (known after apply)
      + subnet {
          + name       = "vpc-con"
          + project_id = (known after apply)
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Up to this point the behavior was as expected.

terraform apply

Terraform will perform the following actions:
  # google_vpc_access_connector.test-vpc-connector must be replaced
-/+ resource "google_vpc_access_connector" "test-vpc-connector" {
      ~ id             = "projects/xyz/locations/europe-west3/connectors/test-vpc-connector" -> (known after apply)
      ~ max_throughput = 300 -> 1000 # forces replacement
        name           = "test-vpc-connector"
      ~ self_link      = "projects/xyz/locations/europe-west3/connectors/test-vpc-connector" -> (known after apply)
      ~ state          = "READY" -> (known after apply)
        # (6 unchanged attributes hidden)
      ~ subnet {
            name       = "vpc-con"
          ~ project_id = "xyz" -> (known after apply)
        }
    }
Plan: 1 to add, 0 to change, 1 to destroy.

It states max_throughput = 300 -> 1000 # forces replacement even though no value has been specified for max_throughput in the TF file. Terraform seems to assume the default value of 1000. I would expect TF not to compare max_throughput as part of the state if it is not specified.

According to the API documentation, either throughput or the number of instances should be specified for scaling:
https://cloud.google.com/sdk/gcloud/reference/beta/compute/networks/vpc-access/connectors/create

Workaround
When a line with max_throughput set to 100 * max_instances is added to the TF file, TF no longer sees a deviation in state and does not need to redeploy (see the sketch below):
max_throughput = 300
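Applied to the configuration above (max_instances = 3, hence max_throughput = 300), the workaround amounts to the following (a sketch based on the values from this report):

resource "google_vpc_access_connector" "test-vpc-connector" {
  provider       = google-beta
  name           = "test-vpc-connector"
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type   = "e2-standard-4"
  min_instances  = 2
  max_instances  = 3
  max_throughput = 300 # 100 * max_instances, matching the value the API reports back
  project        = "xyz"
  region         = "europe-west3"
}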

Alternatively, defining the resource using only the throughput parameters also works fine:

resource "google_vpc_access_connector" "test-vpc-connector" {
  provider      = google-beta
  name          = "test-vpc-connector"
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"
  min_throughput = 300
  max_throughput = 400
  project = "xyz"
  region  = "europe-west3"
}

terraform apply
...
terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

VPC REST API behavior (v1beta1)

The API behavior you describe differs in one respect from what I see. The request only seems to fail with the message you saw if there is already a connector present in the specified subnet.

Sending

{
  "machineType": "e2-standard-4",
  "maxInstances": 3,
  "minInstances": 2,
  "subnet": {
    "name": "vpc-subnet-1"
  }
}

Leads to

{
  "name": "projects/xyz/locations/europe-west3/operations/xyz",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.vpcaccess.v1beta1.OperationMetadataV1Beta1",
    "method": "google.cloud.vpcaccess.v1beta1.VpcAccessService.CreateConnector",
    "createTime": "2021-05-26T07:45:56.375154Z",
    "target": "projects/xyz/locations/europe-west3/connectors/my-test-connector-vpc5"
  }
}

edwardmedia (Contributor) commented May 29, 2021

The API returns 300 for max_throughput if it is not set. Change its default to 300. min_throughput remains unchanged at 200.

github-actions bot commented Jul 3, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Jul 3, 2021