add missing automount_service_account_token to the pod spec #261

Merged

Conversation

jhoblitt
Contributor

The same as #251 with the code formatting issue fixed.

closes #251

@alexsomesan
Member

Thanks. FYI, you can also push commits to the same PR. No need to re-open.

Let me run the acceptance tests on this.

@alexsomesan
Member

Also, I think this new attribute should at least be added to one of the ACC test configurations, to be exercised by the tests.

@jhoblitt
Contributor Author

@alexsomesan I can't push to the original PR opener's fork/branch so it had to be a new PR...

@jhoblitt
Contributor Author

@alexsomesan I'm a provider n00b. I see an automount_service_account_token test for kubernetes_service_account. Is that a pattern you'd like copied-ish for resources that use a pod spec, or is there a better example you could point me at?
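
For reference, the kubernetes_service_account pattern I'm referring to looks roughly like this (a sketch with an illustrative name, not copied verbatim from the test suite):

resource "kubernetes_service_account" "test" {
  metadata {
    name = "tf-acc-test-sa"
  }

  # the attribute the existing acceptance test exercises
  automount_service_account_token = true
}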

@ghost ghost added size/M and removed size/XS labels Dec 26, 2018
@jhoblitt jhoblitt force-pushed the pod-spec-automount-service-account-token branch from b2a338f to 9203ad7 Compare December 26, 2018 22:03
@jhoblitt
Contributor Author

@alexsomesan After banging my head against the wall figuring out how to run the acc tests without the framework trying to start up > 1K pods all at once (a testacc README blurb would be greatly appreciated), I've taken a stab at adding a simple acceptance test... which has turned up two obvious problems:

  • an idempotency issue with the resource trying to remove the service account secret volume
  • an ordering problem with the service_account type.

The volume will probably require heuristically predicting the name. The latter I don't know how to fix and would appreciate any guidance you might have.

$ make testacc TEST=./kubernetes TESTARGS='-run=TestAccKubernetesPod_config_with_automount_service_account_token'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./kubernetes -v -run=TestAccKubernetesPod_config_with_automount_service_account_token -timeout 120m
=== RUN   TestAccKubernetesPod_config_with_automount_service_account_token
--- FAIL: TestAccKubernetesPod_config_with_automount_service_account_token (7.49s)
	testing.go:518: Step 0 error: After applying this step, the plan was not empty:
		
		DIFF:
		
		DESTROY/CREATE: kubernetes_pod.test
		  metadata.#:                                   "1" => "1"
		  metadata.0.generation:                        "0" => "<computed>"
		  metadata.0.labels.%:                          "1" => "1"
		  metadata.0.labels.app:                        "pod_label" => "pod_label"
		  metadata.0.name:                              "tf-acc-test-jnsnythdb6" => "tf-acc-test-jnsnythdb6"
		  metadata.0.namespace:                         "default" => "default"
		  metadata.0.resource_version:                  "12968" => "<computed>"
		  metadata.0.self_link:                         "/api/v1/namespaces/default/pods/tf-acc-test-jnsnythdb6" => "<computed>"
		  metadata.0.uid:                               "c7fdcebb-0954-11e9-bbed-42010a8001e6" => "<computed>"
		  spec.#:                                       "1" => "1"
		  spec.0.automount_service_account_token:       "true" => "true"
		  spec.0.container.#:                           "1" => "1"
		  spec.0.container.0.image:                     "nginx:1.7.9" => "nginx:1.7.9"
		  spec.0.container.0.image_pull_policy:         "IfNotPresent" => "<computed>"
		  spec.0.container.0.name:                      "containername" => "containername"
		  spec.0.container.0.resources.#:               "1" => "<computed>"
		  spec.0.container.0.stdin:                     "false" => "false"
		  spec.0.container.0.stdin_once:                "false" => "false"
		  spec.0.container.0.termination_message_path:  "/dev/termination-log" => "/dev/termination-log"
		  spec.0.container.0.tty:                       "false" => "false"
		  spec.0.container.0.volume_mount.#:            "1" => "0" (forces new resource)
		  spec.0.container.0.volume_mount.0.mount_path: "/var/run/secrets/kubernetes.io/serviceaccount" => ""
		  spec.0.container.0.volume_mount.0.name:       "tf-acc-test-bnq812kuef-token-zjwwn" => ""
		  spec.0.container.0.volume_mount.0.read_only:  "true" => "false"
		  spec.0.dns_policy:                            "ClusterFirst" => "ClusterFirst"
		  spec.0.host_ipc:                              "false" => "false"
		  spec.0.host_network:                          "false" => "false"
		  spec.0.host_pid:                              "false" => "false"
		  spec.0.hostname:                              "" => "<computed>"
		  spec.0.image_pull_secrets.#:                  "0" => "<computed>"
		  spec.0.node_name:                             "gke-jhoblitt-test-tf-pro-default-pool-75cefd47-v6f8" => "<computed>"
		  spec.0.restart_policy:                        "Always" => "Always"
		  spec.0.service_account_name:                  "tf-acc-test-bnq812kuef" => "tf-acc-test-bnq812kuef"
		  spec.0.termination_grace_period_seconds:      "30" => "30"
		  spec.0.volume.#:                              "1" => "0" (forces new resource)
		  spec.0.volume.0.name:                         "tf-acc-test-bnq812kuef-token-zjwwn" => ""
		  spec.0.volume.0.secret.#:                     "1" => "0"
		  spec.0.volume.0.secret.0.secret_name:         "tf-acc-test-bnq812kuef-token-zjwwn" => ""
		
		STATE:
		
		kubernetes_pod.test:
		  ID = default/tf-acc-test-jnsnythdb6
		  provider = provider.kubernetes
		  metadata.# = 1
		  metadata.0.annotations.% = 0
		  metadata.0.generate_name = 
		  metadata.0.generation = 0
		  metadata.0.labels.% = 1
		  metadata.0.labels.app = pod_label
		  metadata.0.name = tf-acc-test-jnsnythdb6
		  metadata.0.namespace = default
		  metadata.0.resource_version = 12968
		  metadata.0.self_link = /api/v1/namespaces/default/pods/tf-acc-test-jnsnythdb6
		  metadata.0.uid = c7fdcebb-0954-11e9-bbed-42010a8001e6
		  spec.# = 1
		  spec.0.active_deadline_seconds = 0
		  spec.0.automount_service_account_token = true
		  spec.0.container.# = 1
		  spec.0.container.0.args.# = 0
		  spec.0.container.0.command.# = 0
		  spec.0.container.0.env.# = 0
		  spec.0.container.0.env_from.# = 0
		  spec.0.container.0.image = nginx:1.7.9
		  spec.0.container.0.image_pull_policy = IfNotPresent
		  spec.0.container.0.lifecycle.# = 0
		  spec.0.container.0.liveness_probe.# = 0
		  spec.0.container.0.name = containername
		  spec.0.container.0.port.# = 0
		  spec.0.container.0.readiness_probe.# = 0
		  spec.0.container.0.resources.# = 1
		  spec.0.container.0.resources.0.limits.# = 0
		  spec.0.container.0.resources.0.requests.# = 1
		  spec.0.container.0.resources.0.requests.0.cpu = 100m
		  spec.0.container.0.resources.0.requests.0.memory = 
		  spec.0.container.0.security_context.# = 0
		  spec.0.container.0.stdin = false
		  spec.0.container.0.stdin_once = false
		  spec.0.container.0.termination_message_path = /dev/termination-log
		  spec.0.container.0.tty = false
		  spec.0.container.0.volume_mount.# = 1
		  spec.0.container.0.volume_mount.0.mount_path = /var/run/secrets/kubernetes.io/serviceaccount
		  spec.0.container.0.volume_mount.0.name = tf-acc-test-bnq812kuef-token-zjwwn
		  spec.0.container.0.volume_mount.0.read_only = true
		  spec.0.container.0.volume_mount.0.sub_path = 
		  spec.0.container.0.working_dir = 
		  spec.0.dns_policy = ClusterFirst
		  spec.0.host_ipc = false
		  spec.0.host_network = false
		  spec.0.host_pid = false
		  spec.0.hostname = 
		  spec.0.image_pull_secrets.# = 0
		  spec.0.init_container.# = 0
		  spec.0.node_name = gke-jhoblitt-test-tf-pro-default-pool-75cefd47-v6f8
		  spec.0.node_selector.% = 0
		  spec.0.restart_policy = Always
		  spec.0.security_context.# = 0
		  spec.0.service_account_name = tf-acc-test-bnq812kuef
		  spec.0.subdomain = 
		  spec.0.termination_grace_period_seconds = 30
		  spec.0.volume.# = 1
		  spec.0.volume.0.aws_elastic_block_store.# = 0
		  spec.0.volume.0.azure_disk.# = 0
		  spec.0.volume.0.azure_file.# = 0
		  spec.0.volume.0.ceph_fs.# = 0
		  spec.0.volume.0.cinder.# = 0
		  spec.0.volume.0.config_map.# = 0
		  spec.0.volume.0.downward_api.# = 0
		  spec.0.volume.0.empty_dir.# = 0
		  spec.0.volume.0.fc.# = 0
		  spec.0.volume.0.flex_volume.# = 0
		  spec.0.volume.0.flocker.# = 0
		  spec.0.volume.0.gce_persistent_disk.# = 0
		  spec.0.volume.0.git_repo.# = 0
		  spec.0.volume.0.glusterfs.# = 0
		  spec.0.volume.0.host_path.# = 0
		  spec.0.volume.0.iscsi.# = 0
		  spec.0.volume.0.name = tf-acc-test-bnq812kuef-token-zjwwn
		  spec.0.volume.0.nfs.# = 0
		  spec.0.volume.0.persistent_volume_claim.# = 0
		  spec.0.volume.0.photon_persistent_disk.# = 0
		  spec.0.volume.0.quobyte.# = 0
		  spec.0.volume.0.rbd.# = 0
		  spec.0.volume.0.secret.# = 1
		  spec.0.volume.0.secret.0.default_mode = 420
		  spec.0.volume.0.secret.0.items.# = 0
		  spec.0.volume.0.secret.0.optional = false
		  spec.0.volume.0.secret.0.secret_name = tf-acc-test-bnq812kuef-token-zjwwn
		  spec.0.volume.0.vsphere_volume.# = 0
		
		  Dependencies:
		    kubernetes_service_account.test
		kubernetes_service_account.test:
		  ID = default/tf-acc-test-bnq812kuef
		  provider = provider.kubernetes
		  automount_service_account_token = false
		  default_secret_name = tf-acc-test-bnq812kuef-token-zjwwn
		  image_pull_secret.# = 0
		  metadata.# = 1
		  metadata.0.annotations.% = 0
		  metadata.0.generate_name = 
		  metadata.0.generation = 0
		  metadata.0.labels.% = 0
		  metadata.0.name = tf-acc-test-bnq812kuef
		  metadata.0.namespace = default
		  metadata.0.resource_version = 12954
		  metadata.0.self_link = /api/v1/namespaces/default/serviceaccounts/tf-acc-test-bnq812kuef
		  metadata.0.uid = c7d0a07a-0954-11e9-bbed-42010a8001e6
		  secret.# = 0
FAIL
FAIL	github.com/terraform-providers/terraform-provider-kubernetes/kubernetes	7.526s
make: *** [GNUmakefile:17: testacc] Error 1

@jhoblitt jhoblitt force-pushed the pod-spec-automount-service-account-token branch from 9203ad7 to 1e1fcdc Compare January 10, 2019 15:49
@jhoblitt
Contributor Author

I've removed the depends_on and use of template to make the acc test more consistent with the existing tests.
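
For anyone following along: referencing an attribute of the service account gives Terraform the same ordering guarantee implicitly, so the explicit depends_on was redundant. A minimal sketch with illustrative names:

resource "kubernetes_pod" "test" {
  metadata {
    name = "tf-acc-test-pod"
  }

  spec {
    # Interpolating the service account's name creates an implicit
    # dependency; no depends_on is needed.
    service_account_name            = "${kubernetes_service_account.test.metadata.0.name}"
    automount_service_account_token = true

    container {
      image = "nginx:1.7.9"
      name  = "containername"
    }
  }
}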

@uschtwill

uschtwill commented Jan 15, 2019

We actually fell on our face because of the (unchangeable) default on this. Would very much like to see this merged. :)

@MarvelOrange
Contributor

How are we looking on this? I need the attribute to be configurable in order to release 0.1.1 of my shared module that deploys the K8s dashboard. https://github.com/MarvelOrange/terraform-kubernetes-dashboard

@pdecat
Contributor

pdecat commented Feb 1, 2019

I've taken a stab at adding a simple acceptance test... which has turned up two obvious problems:

* an idempotency issue with the resource trying to remove the service account secret volume

* an ordering problem with the service_account type.

The volume will probably require heuristically predicting the name. The latter I don't know how to fix and would appreciate any guidance you might have.

I reckon that's what Radek meant in his comments when he said he wanted to avoid spurious diffs.

@StephenWithPH

Adding a workaround for those that are (also) stubbing their toes on this...

resource "kubernetes_service_account" "workaround" {
  metadata {
    name      = "workaround"
  }

  automount_service_account_token = "true"
}

Provided you provision the service account with automount_service_account_token = "true", pods which use this service account will have the token auto-mounted. This removes the need to specify that value on the pod.

See the service account docs compared to the pod docs.
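
A usage sketch (names illustrative): a pod that only references the service account and sets nothing on the pod itself:

resource "kubernetes_pod" "example" {
  metadata {
    name = "uses-workaround"
  }

  spec {
    # No pod-level automount_service_account_token here; the idea is that
    # the service-account-level setting above gets the token mounted (see
    # the follow-up discussion below about the pod-level default).
    service_account_name = "${kubernetes_service_account.workaround.metadata.0.name}"

    container {
      image = "nginx:1.7.9"
      name  = "example"
    }
  }
}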

@ghost ghost removed the waiting-response label Apr 10, 2019
@pdecat
Contributor

pdecat commented Apr 10, 2019

Indeed, that's what I used yesterday as a work-around too!

It's worth noting that updating that attribute on an existing service account does not work yet, but it will in the next release; cf. #377 (comment)

@jpreese

jpreese commented Apr 11, 2019

@StephenWithPH I'm not quite sure I understand. The documentation says that the flag can be overridden at the Pod level. The problem is that Pods are hard-coded to false. So even if the ServiceAccount is associated with the Pod, wouldn't the hard-coded false value override it?

Regardless, I'm super excited for this to get merged in.

@jhoblitt
Contributor Author

I don't believe this PR is in a mergeable state, as the provider tries to remove the secret volume that gets mounted.

@jhoblitt jhoblitt force-pushed the pod-spec-automount-service-account-token branch 2 times, most recently from f2404cb to 2eb3ea8 Compare April 11, 2019 20:38
@jhoblitt
Contributor Author

jhoblitt commented Apr 12, 2019

To add a bit more detail, the provider needs to be taught to ignore the volume that is auto-magically mounted at /var/run/secrets/kubernetes.io/serviceaccount. I am willing to try to implement this, but I'm not sure how to go about it. Does anyone have design advice, or know of an existing provider that implements this pattern I could use as a reference?
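
Not the provider-side fix I'm asking about, but as a user-side stopgap it may be possible to suppress the diff with ignore_changes. A sketch, assuming Terraform 0.11 string-path syntax and this PR's new attribute; untested against this provider:

resource "kubernetes_pod" "example" {
  metadata {
    name = "example"
  }

  spec {
    automount_service_account_token = true

    container {
      image = "nginx:1.7.9"
      name  = "example"
    }
  }

  lifecycle {
    # Ignore the token volume and mount that Kubernetes injects server-side.
    ignore_changes = [
      "spec.0.volume",
      "spec.0.container.0.volume_mount",
    ]
  }
}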

@jgwmaxwell
Contributor

@jhoblitt am I right in thinking that the ignoring is only required for Pod, and not for Deployment or ReplicationController, which use templating? If so, maybe this could be split into two PRs: one that's ready to go and targets the higher-level constructs, and another that deals with the issues surrounding Pod state?

Also, if I'm not mistaken, DaemonSet suffers from the same disease too, and would fit into the simpler category to fix.

@StephenWithPH

the provider needs to be taught to ignore the volume that is auto-magically mounted

@jhoblitt ... here is some information that may be useful. I happened to get bitten by this in a very different context. It sounds like you may see the same thing. If so, it's really a larger question for terraform-provider-kubernetes.

kubectl (as a K8s client) does some fairly sophisticated things to ensure that it will ...

apply the changes you’ve made, without overwriting any automated changes to properties you haven’t specified

https://github.com/kubernetes/kubernetes/pull/44121/files#diff-9ce7ea8441086bf1902b4f936f4601d0R40097 and https://github.com/kubernetes/kubernetes/pull/44121/files#diff-9ce7ea8441086bf1902b4f936f4601d0R37920 show annotations on k8s' OpenAPI spec (this is the easiest place I could find to link to) which indicate that volumes and volume mounts need to be patch merged.

I believe this is why the automagically-mounted secret volume is tripping you up.

I'm not savvy enough with Terraform plugins to quickly grok whether this plugin already tries to handle "upsert-ish" behavior... especially because that seems like it might conflict with Terraform's declarative design goals.

@alexpekurovsky

alexpekurovsky commented May 1, 2019

Having spent several hours reading issues like this one, I want to share a workaround that works today. Instead of trusting the pod to mount the secret automatically, you just declare the mount yourself in the pod's spec:

resource "kubernetes_service_account" "kubernetes_dashboard" {
    metadata {
        name      = "kubernetes-dashboard"
        namespace = "kube-system"

        labels {
            k8s-app = "kubernetes-dashboard"
        }
    }
}

resource "kubernetes_deployment" "kubernetes_dashboard" {
    ...
    spec {
        ...
        template {
            ...
            spec {
                volume {
                    name = "${kubernetes_service_account.kubernetes_dashboard.default_secret_name}"
                    secret {
                        secret_name = "${kubernetes_service_account.kubernetes_dashboard.default_secret_name}"
                    }
                }
                ...
                container {
                    ...
                    volume_mount {
                        name       = "${kubernetes_service_account.kubernetes_dashboard.default_secret_name}"
                        mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
                        read_only  = true
                    }
                }
            }
        }
    }
}

@bkarakashev

Why has this not been merged yet?

@jhoblitt
Contributor Author

I'm happy to do what work is needed to get this merged but need guidance...

@alexsomesan
Member

@jhoblitt I've done the rebase, but haven't pushed yet. I'm changing the tests slightly so they don't run into race conditions with the volume mounting. Once the tests are stable we can merge. It's faster if I do it because I need to try out a few different approaches. I'm aiming to include this in the next release.

@alexsomesan
Member

alexsomesan commented Jun 15, 2019

So, solving the problem of the volume showing up in the Pod's diff isn't quite trivial without making some unsubstantiated assumptions about the names of the volume and service account.

While we find a way to reliably filter the automounted token volume out of Pods, I propose we move forward with the fix only for Deployments, DaemonSets, and the rest of the managed workload resources, and leave Pods as they are for now. I expect this to be less disruptive while still solving this issue for the majority of folks.

Please +1 the comment if you think this will work well for your use case.
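
If we go that route, usage on a managed workload might look roughly like this (a sketch; names illustrative, and the attribute sits on the pod template):

resource "kubernetes_deployment" "example" {
  metadata {
    name = "example"
  }

  spec {
    selector {
      match_labels {
        app = "example"
      }
    }

    template {
      metadata {
        labels {
          app = "example"
        }
      }

      spec {
        # Set on the template spec; bare Pods would stay as they are for now.
        automount_service_account_token = true

        container {
          image = "nginx:1.7.9"
          name  = "example"
        }
      }
    }
  }
}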

@alexsomesan alexsomesan force-pushed the pod-spec-automount-service-account-token branch from cdf0ec8 to 2ccb673 Compare June 27, 2019 22:24
@alexsomesan alexsomesan left a comment
Member

Looks good IMO. CI passes.

@davisford

davisford commented Jul 18, 2019

Adding a workaround for those that are (also) stubbing their toes on this...

resource "kubernetes_service_account" "workaround" {
  metadata {
    name      = "workaround"
  }

  automount_service_account_token = "true"
}

Provided you provision the service account with automount_service_account_token = "true", pods which use this service account will have the token auto-mounted. This removes the need to specify that value on the pod.

See the service account docs compared to the pod docs.

I cannot get this to work with:

$ terraform version
Terraform v0.12.5
+ provider.k8sraw v0.2.0
+ provider.kubernetes v1.8.0
+ provider.null v2.1.2
+ provider.template v2.1.2
resource "kubernetes_service_account" "operator" {
  metadata {
    name = "zalando-postgres-operator"
    namespace = "default"
  }

  # workaround for https://github.com/terraform-providers/terraform-provider-kubernetes/pull/261#issuecomment-481872856
  automount_service_account_token = true
}

When I spawn an operator that needs the service account, it still fails. The pod is still spec'd with automountServiceAccountToken: false:

spec:
  automountServiceAccountToken: false
  containers:
  serviceAccount: zalando-postgres-operator
  serviceAccountName: zalando-postgres-operator

I have to manually add the volume/volume mount to make it work:

spec {
  service_account_name = kubernetes_service_account.operator.metadata.0.name

  volume {
    name = "service-account-token"
    secret {
      secret_name = kubernetes_service_account.operator.default_secret_name
    }
  }

  container {
    # (other container settings elided)

    volume_mount {
      mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
      name       = "service-account-token"
      read_only  = true
    }
  }
}
This isn't working b/c the operator pod itself spawns additional pods from a CRD. Those pods also seem to inherit automountServiceAccountToken: false when they get spawned from the operator, and I cannot figure out how to fix it.

Should I be using the master branch or a commit past release v1.8.0?

EDIT -- ok, it does work when the operator spawns new pods -- those pods are now auto-mounting the service token. The operator itself, however, still appears to need to manually mount the volume in the deployment spec.

// Setting this default to false prevents a perpetual diff caused by volume_mounts
// being mutated on the server side as Kubernetes automatically adds a mount
// for the service account token
podSpecFields["automount_service_account_token"].Default = false
Contributor

FWIW - I resolved this issue in my fork of the provider:
sl1pm4t@e0e953c#diff-7424630baa3c87d787d88925b5f81cac

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020