
creation of new objects, when using templates, instead of updating existing ones #50

Closed
ottes opened this issue Sep 20, 2023 · 4 comments
Labels
question Further information is requested

Comments

@ottes

ottes commented Sep 20, 2023

Okay, well, I hope I am able to explain this behavior and how to solve it.

Consider the following config. When changing something in one of the YAML files inside the manifests directory, the plan wants to destroy & add instead of updating the existing entity.

data "kubectl_path_documents" "manifests-directory-yaml" {
  pattern = "./manifest/*.yaml"
  vars = {
    namespace = var.namespace
    stackName = var.stackName
    defaultReceiver = local.defaultAlertReceiver
    receivers = jsonencode(local.receivers)
  }
}

resource "kubectl_manifest" "directory-yaml" {
  for_each  = toset(data.kubectl_path_documents.manifests-directory-yaml.documents)
  yaml_body = each.value
}

My colleague posted the following to me: because of the for_each, the YAML of the entire document lands in the Terraform state. An unwanted side effect of that is that when you change a single letter in the manifest, it wants to destroy and recreate it instead of just updating it in place.
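To make that point concrete, here is a sketch (with hypothetical file contents) of what the instance addresses look like under this keying scheme:

```hcl
# With for_each = toset(data.kubectl_path_documents...documents), the
# instance key IS the full rendered YAML body, e.g.:
#
#   kubectl_manifest.directory-yaml["apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: app\n..."]
#
# Change a single character in the file and the key changes, so Terraform
# sees the old address disappear (destroy) and a new address appear (create),
# rather than an in-place update of one stable address.
```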

We would also like to keep multiple objects in a single YAML file. I could bring some example code I was provided with, but it only works for a single file inside a directory at the moment, and I guess it would be more confusing to post it here directly.

@alekc
Owner

alekc commented Sep 20, 2023

Yeah, I am far from happy with kubectl_path_documents, which was inherited from the original fork.

The issue, as you said, is that by using toset you put the content of your YAML files in as keys, which triggers destruction and recreation every time the contents change.

If you had individual manifests in your YAML files, you could do this in a much cleaner way, e.g.:

resource "kubectl_manifest" "example" {
  for_each         = fileset("./test", "*.yaml")
  yaml_body        = file("./test/${each.value}")
  wait             = false
  wait_for_rollout = false
}

Filenames would be used as keys, and you would get the contents via file() (or templatefile()).
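For completeness, a sketch of the same pattern using templatefile() to render the variables from the original config (resource name and paths here are hypothetical, and it still assumes one manifest per file):

```hcl
resource "kubectl_manifest" "templated" {
  # Filenames stay the for_each keys, so edits to file contents
  # produce in-place updates rather than destroy/create.
  for_each = fileset("./manifests", "*.yaml")

  yaml_body = templatefile("./manifests/${each.value}", {
    namespace = var.namespace
    stackName = var.stackName
  })

  wait             = false
  wait_for_rollout = false
}
```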

What should ideally happen with kubectl_path_documents is that it reads the files, splits the YAML inside every single file if necessary, and builds a hashmap whose keys are defined as kind-ns-nameofobject.

While doable, it would be a breaking change with potential for disaster for anyone using this data source (due to the change of the keys).

The only acceptable solution (in terms of breaking changes) is to explicitly deprecate kubectl_path_documents and create a brand-new resource.

I will put this on the back burner for now (since it can mostly be solved as per my example above, with individual manifests per file).
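In the meantime, a possible user-side workaround is to build that kind-ns-name key map yourself in a locals block. This is an untested sketch (resource and local names are hypothetical) that assumes every document parses with yamldecode() and has kind and metadata.name:

```hcl
locals {
  # Key each split document by kind-namespace-name instead of by its body,
  # so edits to a manifest keep the same for_each key.
  manifests_by_key = {
    for doc in data.kubectl_path_documents.manifests-directory-yaml.documents :
    format(
      "%s-%s-%s",
      yamldecode(doc).kind,
      try(yamldecode(doc).metadata.namespace, "default"),
      yamldecode(doc).metadata.name,
    ) => doc
  }
}

resource "kubectl_manifest" "by-stable-key" {
  for_each  = local.manifests_by_key
  yaml_body = each.value
}
```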

@ottes
Author

ottes commented Sep 20, 2023

Okay, nice. I think I can work with your example; my colleague already came up with something similar. We have to discuss this on our side too. Thanks for your feedback anyway!

@alekc
Owner

alekc commented Sep 20, 2023

@ottes so, I've dug a bit deeper... Turns out someone really messed up the documentation...

What you want is

data "kubectl_path_documents" "manifests-directory-yaml" {
  pattern = "./manifests/*.yaml"
}
resource "kubectl_manifest" "directory-yaml" {
  for_each  = data.kubectl_path_documents.manifests-directory-yaml.manifests
  yaml_body = each.value
}

and it will generate a nice plan:


  # kubectl_manifest.directory-yaml["/apis/apiextensions.k8s.io/v1/customresourcedefinitions/name-here-crontabs.stable.example.com"] will be created
  + resource "kubectl_manifest" "directory-yaml" {
      + api_version             = "apiextensions.k8s.io/v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "CustomResourceDefinition"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "name-here-crontabs.stable.example.com"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: apiextensions.k8s.io/v1
            kind: CustomResourceDefinition
            metadata:
              name: name-here-crontabs.stable.example.com
            spec:
              conversion:
                strategy: None
              group: stable.example.com
              names:
                kind: CronTab
                plural: name-here-crontabs
                shortNames:
                - ct
                singular: crontab
              scope: Namespaced
              version: v1
              versions:
              - name: v1
                served: true
                storage: true
        EOT
      + yaml_incluster          = (sensitive value)
    }

  # kubectl_manifest.directory-yaml["/apis/networking.k8s.io/v1/namespaces/xxx/ingresses/name-here"] will be created
  + resource "kubectl_manifest" "directory-yaml" {
      + api_version             = "networking.k8s.io/v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "Ingress"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "name-here"
      + namespace               = "xxx"
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: networking.k8s.io/v1
            kind: Ingress
            metadata:
              annotations:
                azure/frontdoor: enabled
                azure/sensitive: this is a big secret
                nginx.ingress.kubernetes.io/rewrite-target: /
              name: name-here
              namespace: xxx
            spec:
              ingressClassName: nginx
              rules:
              - host: '*.example.com'
                http:
                  paths:
                  - backend:
                      service:
                        name: test
                        port:
                          number: 80
                    path: /testpath
                    pathType: Prefix
        EOT
      + yaml_incluster          = (sensitive value)
    }

  # kubectl_manifest.directory-yaml["/apis/stable.example.com/v1/crontabs/name-here-crd"] will be created
  + resource "kubectl_manifest" "directory-yaml" {
      + api_version             = "stable.example.com/v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "CronTab"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "name-here-crd"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: stable.example.com/v1
            kind: CronTab
            metadata:
              name: name-here-crd
            spec:
              cronSpec: '* * * * /5'
              image: my-awesome-cron-image
        EOT
      + yaml_incluster          = (sensitive value)
    }

  # kubectl_manifest.directory-yaml["/apis/stable.example.com/v1/crontabs/name-here-crd-single"] will be created
  + resource "kubectl_manifest" "directory-yaml" {
      + api_version             = "stable.example.com/v1"
      + apply_only              = false
      + field_manager           = "kubectl"
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "CronTab"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "name-here-crd-single"
      + namespace               = (known after apply)
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: stable.example.com/v1
            kind: CronTab
            metadata:
              name: name-here-crd-single
            spec:
              cronSpec: '* * * * /5'
              image: my-awesome-cron-image
        EOT
      + yaml_incluster          = (sensitive value)
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Notice how the key of each resource is now a proper unique attribute?

I will update the documentation of the related data source.

@alekc alekc added the question Further information is requested label Sep 20, 2023
@ottes
Author

ottes commented Sep 21, 2023

Yeah, I tried it and it's working in 100% of the cases. Thanks for your help. I see the other issue for changing the docs, so I will close this one. Thank you! :)
