Requires replacement on google_container_cluster for non-existent attribute path #8038

Closed
Overbryd opened this issue Dec 17, 2020 · 4 comments · Fixed by GoogleCloudPlatform/magic-modules#4345, #8066 or hashicorp/terraform-provider-google-beta#2811

@Overbryd

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

$ terraform -v
Terraform v0.14.2

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "this" {
  provider           = google-beta
  name               = local.cluster_name
  location           = local.location
  node_locations     = var.region != "" ? sort(random_shuffle.available_zones.result) : []
  min_master_version = local.version
  network            = var.network
  subnetwork         = var.subnetwork

  cluster_autoscaling {
    enabled = var.cluster_autoscaling_enabled

    resource_limits {
      resource_type = "cpu"
      minimum       = 0
      maximum       = var.cluster_autoscaling_max_cpu
    }

    resource_limits {
      resource_type = "memory"
      minimum       = 0
      maximum       = var.cluster_autoscaling_max_memory
    }

    auto_provisioning_defaults {
      oauth_scopes = local.node_oauth_scopes
    }
  }

  lifecycle {
    ignore_changes = [
      node_pool,
      initial_node_count
    ]
  }

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    # Note: Basic auth is deprecated.
    #       Setting an empty username and password explicitly disables basic auth.
    username = ""
    password = ""
    # Note: Client certificate configuration is deprecated.
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  master_authorized_networks_config {
    dynamic "cidr_blocks" {
      for_each = var.master_trusted_cidr_blocks
      content {
        display_name = cidr_blocks.key
        cidr_block   = cidr_blocks.value
      }
    }
  }

  maintenance_policy {
    daily_maintenance_window {
      start_time = "11:00"
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = var.cluster_secondary_range_name
    services_secondary_range_name = var.services_secondary_range_name
  }

  private_cluster_config {
    enable_private_endpoint = false
    enable_private_nodes    = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}

resource "google_container_node_pool" "pools" {
  count    = length(var.node_pools)
  name     = lookup(var.node_pools[count.index], "name", "pool-${count.index}")
  cluster  = google_container_cluster.this.name
  location = var.location
  version  = lookup(var.node_pools[count.index], "auto_upgrade", false) ? "" : local.version

  management {
    auto_upgrade = lookup(var.node_pools[count.index], "auto_upgrade", false)
    auto_repair  = true
  }

  autoscaling {
    min_node_count = lookup(var.node_pools[count.index], "min_node_count", 0)
    max_node_count = lookup(var.node_pools[count.index], "max_node_count", 100)
  }

  initial_node_count = lookup(
    var.node_pools[count.index],
    "initial_node_count",
    lookup(var.node_pools[count.index], "min_node_count", 0)
  )

  node_config {
    image_type   = lookup(var.node_pools[count.index], "image_type", "COS")
    machine_type = lookup(var.node_pools[count.index], "machine_type", "n1-standard-2")
    disk_size_gb = lookup(var.node_pools[count.index], "disk_size_gb", 100)
    disk_type    = lookup(var.node_pools[count.index], "disk_type", "pd-standard")

    oauth_scopes = local.node_oauth_scopes

    labels = {
      "cluster_name" = local.cluster_name
      "node_pool"    = lookup(var.node_pools[count.index], "name", "pool-${count.index}")
    }

    dynamic "taint" {
      for_each = lookup(var.node_pools[count.index], "dedicated", "") != "" ? [var.node_pools[count.index]["dedicated"]] : []

      content {
        effect = "NO_SCHEDULE"
        key    = "dedicated"
        value  = taint.value
      }
    }
  }

  lifecycle {
    ignore_changes = [
      initial_node_count
    ]
  }
}

Debug Output

Error: Provider produced invalid plan

Provider "registry.terraform.io/hashicorp/google-beta" has indicated "requires
replacement" on module.airflow_cluster.google_container_cluster.this for a
non-existent attribute path
cty.Path{cty.GetAttrStep{Name:"private_cluster_config"},
cty.IndexStep{Key:cty.NumberIntVal(0)},
cty.GetAttrStep{Name:"master_ipv4_cidr_block"}}.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Expected Behavior

  • Non-changes on the cluster should be idempotent and should not block the whole terraform run

Actual Behavior

  • The terraform run is blocked by the error message shown above

Steps to Reproduce

  1. terraform apply
@ghost ghost added bug labels Dec 17, 2020
@Overbryd
Author

Overbryd commented Dec 17, 2020

Here is the workaround for anybody who is interested (I still think this is a bug, because the cluster should not end up in this abysmal state):

private_cluster_config {
  enable_private_endpoint = false
  enable_private_nodes    = false
  # When enable_private_nodes is false, you can create a cluster with a master_ipv4_cidr_block.
  # But any subsequent plan/apply on the cluster will fail with the error above.
  # In theory you should not do that, but it might happen.
  # So when enable_private_nodes is false, you must set master_ipv4_cidr_block to an empty string "".
  master_ipv4_cidr_block = ""
}

I am using this resource as part of a larger module, which is how this combination of parameters came up.
I advise users who set these values from module inputs to make master_ipv4_cidr_block conditional on enable_private_nodes.

For example:

private_cluster_config {
  enable_private_endpoint = false
  enable_private_nodes    = var.some_private_node_config
  # When enable_private_nodes is false, you can create a cluster with a master_ipv4_cidr_block.
  # But any subsequent plan/apply on the cluster will fail with the error above.
  # In theory you should not do that, but it might happen.
  # So when enable_private_nodes is false, you must set master_ipv4_cidr_block to an empty string "".
  master_ipv4_cidr_block = var.some_private_node_config ? var.some_master_ipv4_cidr_block : ""
}
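
For reference, here is a minimal sketch of how the hypothetical module inputs used in that example could be declared (the names, types, and defaults are illustrative only, not taken from the actual module):

variable "some_private_node_config" {
  description = "Whether the cluster should use private nodes (hypothetical module input)."
  type        = bool
  default     = false
}

variable "some_master_ipv4_cidr_block" {
  description = "Master IPv4 CIDR block; only meaningful when private nodes are enabled (hypothetical module input)."
  type        = string
  default     = ""
}

With these defaults, master_ipv4_cidr_block resolves to an empty string unless private nodes are explicitly enabled.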

@edwardmedia edwardmedia self-assigned this Dec 17, 2020
@edwardmedia
Contributor

edwardmedia commented Dec 17, 2020

@Overbryd here is what the documentation specifies about master_ipv4_cidr_block: "This field only applies to private clusters, when enable_private_nodes is true." That seems to match what you have found. Is there any issue left that needs to be addressed?
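
For reference, the combination the documentation describes as valid would look like this (a minimal sketch based on the private_cluster_config block from the original configuration, with private nodes switched on):

private_cluster_config {
  enable_private_endpoint = false
  enable_private_nodes    = true
  master_ipv4_cidr_block  = "172.16.0.0/28"
}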

@Overbryd
Author

@edwardmedia I think it should fail at an earlier stage, rather than 1) allowing the cluster to be created with the wrong settings and 2) marking the cluster for replacement on a subsequent apply.
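
One way to surface this earlier on the module side (a sketch only, assuming Terraform 1.2+ for custom condition checks and the hypothetical variable names from the earlier example) would be a precondition merged into the cluster resource's existing lifecycle block:

lifecycle {
  ignore_changes = [
    node_pool,
    initial_node_count
  ]

  # Sketch: fail at plan time if a master CIDR is set without private nodes.
  precondition {
    condition     = var.some_private_node_config || var.some_master_ipv4_cidr_block == ""
    error_message = "Leave master_ipv4_cidr_block empty unless enable_private_nodes is true."
  }
}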

@ghost

ghost commented Jan 22, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Jan 22, 2021