
Internal neg annotation on service is not ignored #445

Closed
viktorvoltaire opened this issue May 14, 2019 · 17 comments

@viktorvoltaire

Terraform Version

Terraform v0.11.13
provider.kubernetes v1.6.2

Affected Resource(s)

  • kubernetes_service

Terraform Configuration Files

resource "kubernetes_service" "example" {
  metadata {
    name = "example"

    labels {
      "name" = "example"
    }

    annotations {
      "cloud.google.com/neg" = "{\"ingress\": true}"
    }
  }

  spec {
    type             = "NodePort"
    session_affinity = "None"

    port {
      port        = 80
      target_port = 6000
      protocol    = "TCP"
    }

    selector {
      "name" = "example"
    }
  }
}

Expected Behavior

After applying this resource and running another plan without any changes, the expected behavior is: "No changes. Infrastructure is up-to-date."

Actual Behavior

  ~ kubernetes_service.example
      metadata.0.annotations.%:                           "2" => "1"
      metadata.0.annotations.cloud.google.com/neg-status: "{\"network_endpoint_groups\":{\"80\":\"k8s1-71f753e6-default-example-80-9b3d0709\"},\"zones\":[\"europe-west1-b\",\"europe-west1-c\",\"europe-west1-d\"]}" => "

Steps to Reproduce

Set up a service with the "cloud.google.com/neg" = "{\"ingress\": true}" annotation

Important Factoids

Running on GKE using https://cloud.google.com/load-balancing/docs/negs/

@alexsomesan
Member

Please update the provider to version 1.7.0 because 1.6.2 doesn't contain the fix for internal annotation handling.
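For reference, a minimal sketch of pinning the provider version in a Terraform 0.11-era configuration (the exact constraint value here is illustrative):

provider "kubernetes" {
  # Pin to a release that includes the internal-annotation handling fix.
  version = "~> 1.7"
}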

@viktorvoltaire
Author

viktorvoltaire commented May 23, 2019

After updating to Terraform v0.12 and updating the provider to 1.7.0, it's still failing; this shows up after every plan/apply. See the output below:

Terraform v0.12.0
+ provider.kubernetes v1.7.0
  ~ resource "kubernetes_service" "example" {
        id                    = "default/example"
        load_balancer_ingress = []

      ~ metadata {
          ~ annotations      = {
                "cloud.google.com/neg"        = jsonencode(
                    {
                        ingress = true
                    }
                )
              - "cloud.google.com/neg-status" = jsonencode(
                    {
                      - network_endpoint_groups = {
                          - 80 = "k8s1-71f753e6-default-example-80-9b3d0709"
                        }
                      - zones                   = [
                          - "europe-west1-b",
                          - "europe-west1-c",
                          - "europe-west1-d",
                        ]
                    }
                ) -> null
            }

@ghost ghost removed the waiting-response label May 23, 2019
@alexsomesan
Member

Yes, this is expected, since "cloud.google.com" is not considered an "internal" annotation namespace.
Can you please share the entire resource template?

@viktorvoltaire
Author

resource "kubernetes_service" "example" {

  metadata {
    name = "${var.service-name}"

    labels = {
      name = "${var.service-name}"
    }

    annotations = {
      "cloud.google.com/neg" = "{\"ingress\": true}"
    }
  }

  spec {
    type             = "NodePort"
    session_affinity = "None"

    port {
      port        = 80
      target_port = "${var.container-port}"
      protocol    = "TCP"
    }

    selector = {
      name = "${var.service-name}"
    }
  }
}

@mastercactapus

Any progress/thoughts on this? Still "broken" in latest:

$ terraform version
Terraform v0.12.10
+ provider.azurerm v1.35.0
+ provider.google v2.16.0
+ provider.kubernetes v1.9.0

I think that specific annotation will just need to be ignored because of how it's used by GCP/GKE. It feels dirty, but I'm not sure if there's a better option.

As a workaround, you can ignore all annotation changes for now by adding ignore_changes under lifecycle:

resource "kubernetes_service" "example" {
  lifecycle {
    ignore_changes = [
      metadata[0].annotations,
      metadata[0].annotations["cloud.google.com/neg-status"]
    ]
  }
  metadata {
    name = "${local.name}"
    annotations = {
      // NOTE: comment out the `ignore_changes` above if adding/removing values here
      "cloud.google.com/neg" = "{\"ingress\": true}"
    }
  }
  // ...
}

@dominik-lekse

I confirm the issue in the following scenario.

Terraform v0.12.12
+ provider.google v2.17.0
+ provider.google-beta v2.17.0
+ provider.kubernetes v1.9.0
+ provider.random v2.2.0

The affected kubernetes_service resource

resource "kubernetes_service" "app" {
  count = var.cluster_enabled ? 1 : 0

  metadata {
    name      = var.app_name
    namespace = local.kubernetes_namespace

    labels = {
      "app" = var.app_name
    }

    annotations = {
      "cloud.google.com/neg" = "{\"exposed_ports\": {\"80\":{}}}"
    }
  }

  spec {
    type = "ClusterIP"

    selector = {
      "app" = var.app_name
    }

    port {
      port        = 80
      protocol    = "TCP"
      target_port = local.app_pod_port
    }
  }

  lifecycle {
    ignore_changes = [
      // TODO: Ignore all annotations, since ignoring specific annotations is currently not supported by the provider (https://github.com/terraform-providers/terraform-provider-kubernetes/issues/445)
      //metadata[0].annotations,
      metadata[0].annotations["cloud.google.com/neg-status"]
    ]
  }
}

yields the following plan when not ignoring all annotations:

Terraform will perform the following actions:

  # module.gcp_app.kubernetes_service.app[0] will be updated in-place
  ~ resource "kubernetes_service" "app" {
        id                    = "default/app"
        load_balancer_ingress = []

      ~ metadata {
          ~ annotations      = {
                "cloud.google.com/neg"        = jsonencode(
                    {
                        exposed_ports = {
                            80 = {}
                        }
                    }
                )
              - "cloud.google.com/neg-status" = jsonencode(
                    {
                      - network_endpoint_groups = {
                          - 80 = "k8s1-146f1e62-default-app-80-accc08ed"
                        }
                      - zones                   = [
                          - "europe-west3-c",
                        ]
                    }
                ) -> null
            }
            generation       = 0
            labels           = {
                "app" = "app"
            }
            name             = "app"
            namespace        = "default"
            resource_version = "4167803"
            self_link        = "/api/v1/namespaces/default/services/app"
            uid              = "146f1e62-fee4-11e9-875e-42010a9c0054"
        }

        spec {
            cluster_ip                  = "10.32.15.10"
            external_ips                = []
            load_balancer_source_ranges = []
            publish_not_ready_addresses = false
            selector                    = {
                "app" = "app"
            }
            session_affinity            = "None"
            type                        = "ClusterIP"

            port {
                node_port   = 0
                port        = 80
                protocol    = "TCP"
                target_port = "8080"
            }
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

@emmaLP

emmaLP commented Mar 19, 2020

Can this issue be made a priority? I can't ignore ALL changes to annotations, since annotations are likely to change between configurations.

This issue still affects version 1.11.1.

@JDavis10213

JDavis10213 commented Apr 7, 2020

I have run into this issue trying to filter certain labels and annotations.

resource "kubernetes_namespace" "flux-namespace" {
  count = data.external.existing_namespace.result.namespace == "null" ? 1 : 0
  lifecycle {
    ignore_changes = [
      metadata[0].annotations["fluxcd.io/sync-checksum"],
      metadata[0].labels["fluxcd.io/sync-gc-mark"],
    ]
  }
  metadata {
    name = var.env-namespace
    labels = {
      terraform_managed = "dont_change_label"
    }
  }
}

Terraform will perform the following actions:

  # kubernetes_namespace.flux-namespace[0] will be updated in-place
  ~ resource "kubernetes_namespace" "flux-namespace" {
        id = "flux-demo-int"

      ~ metadata {
          ~ annotations      = {
              - "fluxcd.io/sync-checksum" = "1dbcf2bcd328bec3b53cad899fad48d1cfd75689" -> null
            }
            generation       = 0
          ~ labels           = {
              - "fluxcd.io/sync-gc-mark" = "sha256.D7LkpjEVfdGhi-rp3xVPB3gsTCtWS0i7d-DNuK0QeVA" -> null
                "terraform_managed"      = "dont_change_label"
            }
            name             = "flux-demo-int"
            resource_version = "130041024"
            self_link        = "/api/v1/namespaces/flux-demo-int"
            uid              = "1ec642e2-7902-11ea-8ba6-0a6f322bffe6"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

A fix for this would be great.

@asychev

asychev commented Apr 9, 2020

Same problem

@davidlukac

Same issue with

$ terraform version
Terraform v0.12.24
+ provider.external v1.2.0
+ provider.http v1.2.0
+ provider.kubernetes v1.11.1

@essjayhch

essjayhch commented Apr 21, 2020

Although changing the neg-status annotation is non-destructive, it is very annoying that Terraform reports 'updates' for things that are internal.

Moreover, in apps like ours, where we have 10 or 12 of these services from different modules, the volume of "diffs" this causes is large enough to make a plan unreadable. That makes debugging a plan very difficult, because it is full of noise that doesn't represent the change actually being applied.

For that reason alone, imo, this should be addressed fairly soon.

For clarity, there are two issues from my perspective: 1) GKE creating a specific annotation by default effectively makes it internal, and it should be treated as such; 2) not having the ability to ignore specific annotations within the lifecycle model is a problem (which may be out of scope for this team, as I suspect it has more to do with the internals of Terraform than the Kubernetes provider, but it's worth noting).

@jrhouston jrhouston added acknowledged Issue has undergone initial review and is in our work queue. needs investigation labels May 20, 2020
@jrhouston
Collaborator

You should be able to ignore this annotation using the lifecycle feature; check out this example: #741 (comment)

You need to create an empty value for the annotation initially so the lifecycle feature is able to pick up the annotation.
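A minimal sketch of that approach (the resource shape mirrors the earlier examples in this thread and is illustrative):

resource "kubernetes_service" "example" {
  metadata {
    name = "example"

    annotations = {
      "cloud.google.com/neg"        = "{\"ingress\": true}"
      // Seed the key with an empty value so ignore_changes can match it.
      "cloud.google.com/neg-status" = ""
    }
  }

  lifecycle {
    ignore_changes = [
      metadata[0].annotations["cloud.google.com/neg-status"],
    ]
  }

  // ...
}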

@aareet aareet added waiting-response and removed acknowledged Issue has undergone initial review and is in our work queue. labels May 27, 2020
@aareet
Contributor

aareet commented Jul 2, 2020

Closing based on @jrhouston's response. Let me know if this is still an issue.

@aareet aareet closed this as completed Jul 2, 2020
@dtoddonx

This has become an issue with GKE 1.17. Setting "neg-status" to a blank value and ignoring it is no longer an option, because GKE removes the neg-status annotation when there is no ingress defined. Also in 1.17, NEGs are created for all services, with or without an ingress, so "cloud.google.com/neg" is created anyway. That's controllable; neg-status, however, isn't created, and it is automatically removed if there's no ingress. You only need the ingress if you want an external load balancer instead of an internal cluster service.

On 1.16 and lower the workaround works, but on 1.17 it still results in an update to the service on every run, because GKE strips the annotation off and Terraform wants to re-add it so that it can ignore it.
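On 1.17, the only remaining option appears to be ignoring the entire annotations map, as suggested earlier in this thread (a sketch, not a recommendation; it also ignores any annotations you do manage):

lifecycle {
  ignore_changes = [
    metadata[0].annotations,
  ]
}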

@viceice

viceice commented Jul 31, 2020

This is pretty annoying. I'm using Rancher, which adds a lot of annotations, so I have to configure every possible one to exclude. 😢

@aareet
Contributor

aareet commented Aug 7, 2020

We are tracking the overall changes necessary to close this in #746. Closing in favour of the more holistic solution being planned in #746.

@aareet aareet closed this as completed Aug 7, 2020
@ghost

ghost commented Sep 7, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Sep 7, 2020