
google_vpc_access_connector forces recreate with default max_throughput #10244

Open

bharathkkb opened this issue Oct 4, 2021 · 5 comments

bharathkkb commented Oct 4, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v1.0.6
on darwin_amd64

  • provider registry.terraform.io/hashicorp/google-beta v3.86.0

Affected Resource(s)

  • google_vpc_access_connector

Terraform Configuration Files

resource "google_vpc_access_connector" "connector" {
  provider      = google-beta
  project       = local.project_id
  name          = "vpc-con"
  region        = "us-central1"
  max_instances = 7
  min_instances = 2
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"
}

resource "google_compute_subnetwork" "custom_test" {
  provider      = google-beta
  project       = local.project_id
  name          = "vpc-con"
  ip_cidr_range = "10.2.0.0/28"
  region        = "us-central1"
  network       = google_compute_network.custom_test.id
}

resource "google_compute_network" "custom_test" {
  project                 = local.project_id
  provider                = google-beta
  name                    = "vpc-con"
  auto_create_subnetworks = false
}

Expected Behavior

No diff after apply.

Actual Behavior

Diff with force recreate.

Initial apply

  # google_vpc_access_connector.connector will be created
  + resource "google_vpc_access_connector" "connector" {
      + id             = (known after apply)
      + machine_type   = "e2-standard-4"
      + max_instances  = 7
      + max_throughput = 300
      + min_instances  = 2
      + min_throughput = 200
      + name           = "vpc-con"
      + project        = "...."
      + region         = "us-central1"
      + self_link      = (known after apply)
      + state          = (known after apply)

      + subnet {
          + name       = "vpc-con"
          + project_id = (known after apply)
        }
    }

Subsequent apply

Terraform will perform the following actions:

  # google_vpc_access_connector.connector must be replaced
-/+ resource "google_vpc_access_connector" "connector" {
      ~ id             = "projects/...../locations/us-central1/connectors/vpc-con" -> (known after apply)
      ~ max_throughput = 700 -> 300 # forces replacement
        name           = "vpc-con"
      ~ self_link      = "projects/..../locations/us-central1/connectors/vpc-con" -> (known after apply)
      ~ state          = "READY" -> (known after apply)
        # (6 unchanged attributes hidden)

      ~ subnet {
            name       = "vpc-con"
          ~ project_id = "...." -> (known after apply)
        }
    }

Steps to Reproduce

  1. terraform apply

Important Factoids

This might be an API limitation and seems to happen when I specify an explicit max_instances value.
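
Until the provider addresses this, one possible mitigation (an untested sketch, not something confirmed in this thread) is to tell Terraform to ignore the server-computed max_throughput so the provider default (300) never diffs against the API-derived value (here 700 = 100 × max_instances). Whether this actually suppresses the diff depends on how the provider injects its default during planning, so treat it as a sketch only:

```hcl
# Hypothetical workaround sketch: same config as above, plus a
# lifecycle block asking Terraform to ignore drift on the
# server-computed max_throughput attribute.
resource "google_vpc_access_connector" "connector" {
  provider      = google-beta
  project       = local.project_id
  name          = "vpc-con"
  region        = "us-central1"
  max_instances = 7
  min_instances = 2
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"

  lifecycle {
    ignore_changes = [max_throughput]
  }
}
```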

References

A similar issue was reported in GoogleCloudPlatform/magic-modules#4823.

b/308570051

bharathkkb added the bug label Oct 4, 2021

bharathkkb (Author) commented:
Seems similar to the observation in #9228 (comment)

nat-henderson (Contributor) commented:

Hmm, suspicious... I wonder if the API automatically sets max_throughput to 100 × max_instances, given this pair of examples (here, max_instances = 7 comes back as max_throughput = 700).

nat-henderson (Contributor) commented:

Yes, it looks like we should probably prohibit providing both max_instances and max_throughput, according to the API.

At most one of these may be specified:

Scaling settings of a VPC Access Connector can be specified in terms of number of Google Compute Engine VM instances underlying the connector autoscaling group.
Scaling settings of a VPC Access Connector can be specified in terms of throughput.

Funny, the solution in #9228 worked for literally the one user who filed the bug - but will stop working as soon as they change their number of instances. Okay, sure - might be okay to just remove the default and add an AtMostOneOf.
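
Given that API constraint, a configuration that stays within a single scaling mode might look like the following (a sketch only, assuming throughput-based scaling is what's wanted; the min/max_throughput values are illustrative, not from the thread):

```hcl
# Sketch: express scaling only in throughput terms and leave the
# instance-based fields unset, so the config never conflicts with
# the API's derived values. Throughput values are illustrative.
resource "google_vpc_access_connector" "connector" {
  provider       = google-beta
  project        = local.project_id
  name           = "vpc-con"
  region         = "us-central1"
  min_throughput = 200
  max_throughput = 700
  subnet {
    name = google_compute_subnetwork.custom_test.name
  }
  machine_type = "e2-standard-4"
}
```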

nat-henderson self-assigned this Oct 7, 2021
nat-henderson (Contributor) commented:

Yeah, okay, this needs a bigger change with more conflicts specified. Working on that.

estensen commented:
I got a bit confused because this resource is GA, yet updating a connector in GCP is still in preview.
A temporary workaround is to create the connector via click-ops and use that. We're using max_instances = 3, which has been the default for us since the resource was created a while ago.
