
Order of google_cloud_run_v2_service => volumes constantly need update #2698

Closed
lyricnz opened this issue Nov 16, 2024 · 11 comments · Fixed by #2700


lyricnz commented Nov 16, 2024

Describe the bug
When applying the same Terraform code repeatedly, the order of the two volumes/volume_mounts flips on every run, producing unnecessary changes on each "apply".

Environment

❯ terraform -version
OpenTofu v1.8.5
on darwin_amd64
+ provider registry.opentofu.org/hashicorp/google v6.11.1
+ provider registry.opentofu.org/hashicorp/google-beta v6.11.1
❯ git rev-parse --short HEAD
e0d6f0ea

To Reproduce
Create a resource with two volumes/volume mounts and apply it, e.g.:

module "cloud_run" {
  source     = "../../../modules/cloud-run-v2"
  project_id = module.project.project_id
  name       = var.run_svc_name
  region     = var.region

  revision = {
    gen2_execution_environment = true
  }

  containers = {
    default = {
      image = var.image
      volume_mounts = {
        website = "/var/www/html/sites"
        cloudsql = "/cloudsql"
      }
    }
  }
  volumes = {
    website = {
      gcs = {
        bucket       = module.bucket.id
        is_read_only = false
      }
    }
    "cloudsql" = {
      cloud_sql_instances = ["myproject:australia-southeast2:mydb-db"]  # TODO: var
    }
  }
}

Expected behavior
On a subsequent run, there should be no change/diff.

Result
Immediately after applying, running terraform apply again shows:

OpenTofu will perform the following actions:

  # module.cloud_run.google_cloud_run_v2_service.service[0] will be updated in-place
  ~ resource "google_cloud_run_v2_service" "service" {
        id                      = "projects/myproject/locations/australia-southeast2/services/thingy"
        name                    = "thingy"
        # (28 unchanged attributes hidden)

      ~ template {
            # (7 unchanged attributes hidden)

          ~ containers {
                name       = "default"
                # (4 unchanged attributes hidden)

              ~ volume_mounts {
                  ~ mount_path = "/var/www/html/sites" -> "/cloudsql"
                  ~ name       = "website" -> "cloudsql"
                }
              ~ volume_mounts {
                  ~ mount_path = "/cloudsql" -> "/var/www/html/sites"
                  ~ name       = "cloudsql" -> "website"
                }

                # (3 unchanged blocks hidden)
            }

          ~ volumes {
              ~ name = "website" -> "cloudsql"

              + cloud_sql_instance {
                  + instances = [
                      + "myproject:australia-southeast2:my-db",
                    ]
                }

              - gcs {
                  - bucket        = "myname-thingy-website" -> null
                  - mount_options = [] -> null
                  - read_only     = false -> null
                }
            }
          ~ volumes {
              ~ name = "cloudsql" -> "website"

              - cloud_sql_instance {
                  - instances = [
                      - "myproject:australia-southeast2:my-db",
                    ] -> null
                }

              + gcs {
                  + bucket    = "myname-thingy-website"
                  + read_only = false
                }
            }

            # (2 unchanged blocks hidden)
        }

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Additional context
Is this an issue with the underlying module, or with our use of it?


lyricnz commented Nov 16, 2024

This seems to be a fairly common problem with Terraform and complex attributes...

In this case, is it because there's some "magic" involved in the cloudsql mapping? If I do a gcloud run services describe myservice I don't see the cloudsql as a volume-map at all...

@lyricnz lyricnz changed the title Order of google_cloud_run_v2_service => volumes constantly swapping back and forth Order of google_cloud_run_v2_service => volumes constantly need update Nov 16, 2024

lyricnz commented Nov 16, 2024

from debug output:

2024-11-16T14:38:22.245+1100 [WARN]  Provider "provider[\"registry.opentofu.org/hashicorp/google-beta\"]" produced an unexpected new value for module.cloud_run.google_cloud_run_v2_service.service[0], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .template[0].volumes[0].name: was cty.StringVal("cloudsql"), but now cty.StringVal("website")
      - .template[0].volumes[0].cloud_sql_instance: block count changed from 1 to 0
      - .template[0].volumes[0].gcs: block count changed from 0 to 1
      - .template[0].volumes[1].name: was cty.StringVal("website"), but now cty.StringVal("cloudsql")
      - .template[0].volumes[1].cloud_sql_instance: block count changed from 0 to 1
      - .template[0].volumes[1].gcs: block count changed from 1 to 0
      - .template[0].containers[0].volume_mounts[0].mount_path: was cty.StringVal("/cloudsql"), but now cty.StringVal("/var/www/html/sites")
      - .template[0].containers[0].volume_mounts[0].name: was cty.StringVal("cloudsql"), but now cty.StringVal("website")
      - .template[0].containers[0].volume_mounts[1].mount_path: was cty.StringVal("/var/www/html/sites"), but now cty.StringVal("/cloudsql")
      - .template[0].containers[0].volume_mounts[1].name: was cty.StringVal("website"), but now cty.StringVal("cloudsql")


lyricnz commented Nov 16, 2024

I think the upstream module has an issue here, and we're triggering it. If we use the following with google_cloud_run_v2_service:

    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello"
      volume_mounts {
        name       = "cloudsql"
        mount_path = "/cloudsql"
      }
      volume_mounts {
        name       = "bucket"
        mount_path = "/var/www"
      }
    }

then plan/apply shows a delta every time. If we swap the order of the volume mounts, it shows no changes. So I assume Google returns these in a particular order, not the order in which they were added. Interestingly, gcloud run services describe doesn't show the SQL mount at all.

The challenge is that (I think) we are reordering any volume_mounts passed to the cloud-run-v2 module, since for_each over a map iterates its keys in lexical order:

        dynamic "volume_mounts" {
          for_each = coalesce(containers.value.volume_mounts, tomap({}))
          content {
            name       = volume_mounts.key
            mount_path = volume_mounts.value
          }
        }

If the cloudsql mounts come last, this seems to work (at least for this case). I don't know Terraform well enough to write a filter and a separate dynamic block.

There's probably something similar for the volumes block
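A possible shape for the split, sketched under the assumption that mounts can be partitioned by name (filtering on the literal key "cloudsql" here for simplicity; a real fix would more likely key off whether the matching volume defines cloud_sql_instances):

```hcl
# Emit all non-cloudsql mounts first, preserving the map's lexical key order.
dynamic "volume_mounts" {
  for_each = {
    for k, v in coalesce(containers.value.volume_mounts, tomap({}))
    : k => v if k != "cloudsql"
  }
  content {
    name       = volume_mounts.key
    mount_path = volume_mounts.value
  }
}

# Emit the cloudsql mount last, matching the order the API appears to return.
dynamic "volume_mounts" {
  for_each = {
    for k, v in coalesce(containers.value.volume_mounts, tomap({}))
    : k => v if k == "cloudsql"
  }
  content {
    name       = volume_mounts.key
    mount_path = volume_mounts.value
  }
}
```

Because each for_each iterates its map in deterministic key order, the two blocks together pin the cloudsql mount to the end while leaving the rest stable.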


lyricnz commented Nov 16, 2024

TL;DR: for now, we seem to need to provide the cloudsql volume/volume_mount last when calling the upstream module.


lyricnz commented Nov 16, 2024

Raised a bug on the provider, hashicorp/terraform-provider-google#20360, but... pragmatically, maybe we should work around it here?


wiktorn commented Nov 16, 2024

I'm able to reproduce the issue with the volume name website, but not with the volume name bucket.

So there is no good solution within module as of now 🫤


ludoo commented Nov 16, 2024

@wiktorn would it be enough to split the dynamic block into multiple dynamic blocks, and pin the order? It's quite a bit of work as we would need to correlate volumes and mounts, but doable.
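Correlating the two halves for the volumes block might look something like this sketch, assuming a var.volumes map shaped like the module input in the report (the gcs and cloud_sql_instances attribute names come from that example; try() guards against entries that don't define one of them):

```hcl
# Non-Cloud SQL volumes first.
dynamic "volumes" {
  for_each = {
    for k, v in var.volumes : k => v
    if try(v.cloud_sql_instances, null) == null
  }
  content {
    name = volumes.key
    dynamic "gcs" {
      for_each = try(volumes.value.gcs, null) == null ? [] : [volumes.value.gcs]
      content {
        bucket    = gcs.value.bucket
        read_only = gcs.value.is_read_only
      }
    }
  }
}

# Cloud SQL volumes last, matching the ordering the API appears to return.
dynamic "volumes" {
  for_each = {
    for k, v in var.volumes : k => v
    if try(v.cloud_sql_instances, null) != null
  }
  content {
    name = volumes.key
    cloud_sql_instance {
      instances = volumes.value.cloud_sql_instances
    }
  }
}
```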


wiktorn commented Nov 16, 2024

@wiktorn would it be enough to split the dynamic block into multiple dynamic blocks, and pin the order?

Let me test that out.

@wiktorn wiktorn self-assigned this Nov 16, 2024

lyricnz commented Nov 16, 2024

I was experimenting with splitting the volume_mount loop (but ran out of time). IIRC, just reordering the volumes stanza fixed that half.


wiktorn commented Nov 16, 2024

I have a working fix for Cloud SQL; it's needed in both the volume definitions and the mounts. I'm checking all the other mount types while I'm at it.


lyricnz commented Nov 17, 2024

Works for me, in my limited case.

No changes. Your infrastructure matches the configuration.
