Do not hardcode AutomountServiceAccountToken #38

Closed
mingfang opened this issue Jul 27, 2017 · 22 comments
@mingfang

Currently AutomountServiceAccountToken is hardcoded to false
https://github.com/terraform-providers/terraform-provider-kubernetes/search?utf8=✓&q=AutomountServiceAccountToken&type=

According to the Kubernetes documentation https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/, this value should either be computed or set by each pod.
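For reference, the field in question lives on the pod spec in the Kubernetes API. A minimal manifest that opts out of the token mount - i.e. what the provider's hardcoded value amounts to - looks roughly like this (illustrative only, not provider syntax):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  # The value the provider currently hardcodes; per the docs linked above,
  # it defaults to true and can be overridden per pod or per service account.
  automountServiceAccountToken: false
  containers:
    - name: example
      image: nginx:1.7.9
```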

@radeksimko
Member

Hi @mingfang
this was intentional when implementing both Pod and RC, to avoid spurious diffs.
Unfortunately there's no way to identify the default service account which gets created automatically as part of this process, which makes diffing (between what the user has defined and what actually exists on the server) very tricky. That is why we decided to disable this functionality.

In fact I wish there was a similar way to disable creation of default secrets in the service account. I had to work around the problem there in a less-than-ideal way:
https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/kubernetes/resource_kubernetes_service_account.go#L85-L106

From the user's point of view I don't believe there's much downside to AutomountServiceAccountToken being set to false - the user just has to be explicit. Maybe the only downside is the expectation of default behaviour when coming from kubectl and/or other tools.

Is there any particular reason you need the ability to automount the default service account token?
If not - do you mind me closing this issue with the explanation above?

@mingfang
Author

mingfang commented Jul 27, 2017

The automountServiceAccountToken flag is independent of the service account, default or explicit.
This simply tells Kubernetes to mount the token in a tmpfs inside the pod.
Without it I don't see how any pod can use service accounts.
The only reason to set it to false would be if you are really paranoid and don't want a particular pod to receive any secret tokens at all.

@radeksimko
Member

> The automountServiceAccountToken flag is independent of the service account, default or explicit.

I was just mentioning the service account because it has a very similar problem which we had to work around, unlike in Pod and RC, where we were able to just disable this behaviour.

> Without it I don't see how any pod can use service accounts.

There's certainly a way to specify a service account when scheduling a pod or RC - see related docs at
https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/website/docs/r/pod.html.markdown#spec - specifically service_account_name.

> The only reason to set it to false would be if you are really paranoid and don't want a particular pod to receive any secret tokens at all.

I personally think that no access by default (and explicitness) is generally a good security practice, but I fully understand some other people may have different opinions. Either way, the main motivation here was to avoid spurious diffs as mentioned; the security is rather a (IMO positive) side effect.

> This was what I did to enable it on my fork.

Have you tried running plan when using this patched version? I believe you'll keep getting diffs in volumes attached to the pod.

@mingfang
Author

I'm a bit confused since you seem to talk about the automountServiceAccountToken and the service_account_name at the same time.
I believe there might be a problem with the default service_account_name. But that has nothing to do with the automountServiceAccountToken flag.

Yes, I've been using my fork with automountServiceAccountToken = true and no problems on my local kubernetes cluster.

@mingfang
Author

Btw you can play with my fork easily here https://github.com/mingfang/docker-terraform

@radeksimko
Member

> I'm a bit confused since you seem to talk about the automountServiceAccountToken and the service_account_name at the same time.

Right, it is a bit confusing because there are a couple of actions enabled by that field, as documented at https://kubernetes.io/docs/admin/service-accounts-admin/#service-account-admission-controller - both service account association and the volume mount are enabled by this single option.

I have not inspected the container or tried accessing the API, but here's a config I was able to successfully apply which mimics the behaviour of automountServiceAccountToken:

resource "kubernetes_pod" "test" {
  metadata {
    name = "terraform-example"
  }

  spec {
    service_account_name = "${kubernetes_service_account.example.metadata.0.name}"
    container {
      image = "nginx:1.7.9"
      name  = "example"
      volume_mount {
        mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
        name = "${kubernetes_service_account.example.default_secret_name}"
        read_only = true
      }
    }
    volume {
      name = "${kubernetes_service_account.example.default_secret_name}"
      secret {
        secret_name = "${kubernetes_service_account.example.default_secret_name}"
      }
    }
  }
}

resource "kubernetes_service_account" "example" {
  metadata {
    name = "terraform-example"
  }
}

> Yes, I've been using my fork with automountServiceAccountToken = true and no problems on my local kubernetes cluster.

I built the provider from your branch and used the following config to demonstrate the problem:

resource "kubernetes_pod" "test" {
  metadata {
    name = "terraform-example"
  }

  spec {
    automount_service_account_token = true
    container {
      image = "nginx:1.7.9"
      name  = "example"
    }
  }
}

and here's what happens when you apply it:

$ terraform apply
kubernetes_pod.test: Creating...
  metadata.#:                                  "" => "1"
  metadata.0.generation:                       "" => "<computed>"
  metadata.0.name:                             "" => "terraform-example"
  metadata.0.namespace:                        "" => "default"
  metadata.0.resource_version:                 "" => "<computed>"
  metadata.0.self_link:                        "" => "<computed>"
  metadata.0.uid:                              "" => "<computed>"
  spec.#:                                      "" => "1"
  spec.0.automount_service_account_token:      "" => "true"
  spec.0.container.#:                          "" => "1"
  spec.0.container.0.image:                    "" => "nginx:1.7.9"
  spec.0.container.0.image_pull_policy:        "" => "<computed>"
  spec.0.container.0.name:                     "" => "example"
  spec.0.container.0.resources.#:              "" => "<computed>"
  spec.0.container.0.stdin:                    "" => "false"
  spec.0.container.0.stdin_once:               "" => "false"
  spec.0.container.0.termination_message_path: "" => "/dev/termination-log"
  spec.0.container.0.tty:                      "" => "false"
  spec.0.dns_policy:                           "" => "ClusterFirst"
  spec.0.host_ipc:                             "" => "false"
  spec.0.host_network:                         "" => "false"
  spec.0.host_pid:                             "" => "false"
  spec.0.hostname:                             "" => "<computed>"
  spec.0.image_pull_secrets.#:                 "" => "<computed>"
  spec.0.node_name:                            "" => "<computed>"
  spec.0.restart_policy:                       "" => "Always"
  spec.0.service_account_name:                 "" => "<computed>"
  spec.0.termination_grace_period_seconds:     "" => "30"
kubernetes_pod.test: Creation complete (ID: default/terraform-example)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

kubernetes_pod.test: Refreshing state... (ID: default/terraform-example)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

  ~ kubernetes_pod.test
      spec.0.container.0.volume_mount.#:            "1" => "0"
      spec.0.container.0.volume_mount.0.mount_path: "/var/run/secrets/kubernetes.io/serviceaccount" => ""
      spec.0.container.0.volume_mount.0.name:       "default-token-4pbsm" => ""
      spec.0.container.0.volume_mount.0.read_only:  "true" => "false"
      spec.0.volume.#:                              "1" => "0"
      spec.0.volume.0.name:                         "default-token-4pbsm" => ""
      spec.0.volume.0.secret.#:                     "1" => "0"
      spec.0.volume.0.secret.0.secret_name:         "default-token-4pbsm" => ""


Plan: 0 to add, 1 to change, 0 to destroy.

Let me know if that clears up some of the confusion and whether we can close this issue now.
Thanks.

@mingfang
Author

I see the problem with the pod resource you described above, but I was testing with the replication controller resource and am not seeing any mounts at all.
Actually that might be a bug in the replication controller resource.

@radeksimko
Member

My point remains - it is possible to mimic the behaviour of AutomountServiceAccountToken by specifying the volume + mount, so I don't see it as a "must have" feature when there's potential for trouble as described above. I'm happy to revisit this decision if/when the K8S API allows us to identify the default service account + volume + mounts which get generated via this option.

In that context do you mind me closing this issue?

> Actually that might be a bug in the replication controller resource.

If there is a bug I'd be more than happy to hear about it and fix it, if we have a way to reproduce it.

@gzunino

gzunino commented Apr 26, 2018

I don't see how to make this work; I'm following comment #38 (comment). The service account is created and mounted in the pod, but when trying to access the Kubernetes API from the pod I got

Unknown user "system:serviceaccount:dev:service-account"

I named the service account "service-account" and I'm using namespace "dev". I can see that the service account exists by using kubectl from my machine, but from the pod - using curl with the correctly mounted token, or kubectl - I keep getting "Unknown user".

@ramyala

ramyala commented Jun 15, 2018

Could you allow AutomountServiceAccountToken to be set explicitly while creating a service account?
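For context, the Kubernetes API does expose this toggle on the ServiceAccount object itself (per the admission-controller docs linked earlier in this thread); a raw manifest sketch, not provider syntax:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example
# Pods using this service account inherit this setting
# unless their own pod spec overrides it.
automountServiceAccountToken: false
```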

@alkar

alkar commented Nov 1, 2018

@radeksimko correct me if I'm wrong, but here's what I'm thinking: given the serviceAccountName attribute of the spec (which defaults to default), you can discover the Secret name by looking at the ServiceAccount resource. Then on Read, we could remove the volume & volume mount so that it wouldn't confuse the diff. This is obviously a couple more API calls and perhaps a bit messy code-wise.

Additionally, I first noticed this behaviour with the new Deployment resource. However, it looks like Deployments / DaemonSets (and even ReplicaSets) don't hold any of the token volume information in their spec; it's just at the Pod level that it gets injected. Wouldn't that make it possible to at least change this behaviour for higher-level resources like Deployments?
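The Read-time suppression idea above can be sketched as a small helper in Go (the provider's implementation language). This is a hypothetical sketch, not the provider's actual code: the function names and the "<service-account>-token-" prefix heuristic are assumptions based on the plan output earlier in this thread (default-token-4pbsm).

```go
package main

import (
	"fmt"
	"strings"
)

// isServiceAccountVolume reports whether a volume name looks like the token
// volume injected by the service account admission controller, e.g.
// "default-token-4pbsm". The naming convention is an assumption inferred
// from the diff shown earlier in this thread.
func isServiceAccountVolume(volumeName, serviceAccountName string) bool {
	return strings.HasPrefix(volumeName, serviceAccountName+"-token-")
}

// filterInjectedVolumes drops injected token volumes from the list read back
// from the API, so they never reach the Terraform state and cannot produce
// a spurious diff against the user's config.
func filterInjectedVolumes(volumes []string, serviceAccountName string) []string {
	kept := make([]string, 0, len(volumes))
	for _, v := range volumes {
		if !isServiceAccountVolume(v, serviceAccountName) {
			kept = append(kept, v)
		}
	}
	return kept
}

func main() {
	// A pod read back from the API: one user-defined volume plus the
	// injected token volume.
	volumes := []string{"config", "default-token-4pbsm"}
	fmt.Println(filterInjectedVolumes(volumes, "default"))
}
```

A name-prefix match is of course weaker than resolving the actual Secret name from the ServiceAccount object, which is what the comment above proposes; that would cost an extra API call per Read but would be exact.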

@mr-rodgers

mr-rodgers commented Jan 24, 2019

@radeksimko the approach given here doesn't seem to include the ca.crt file associated with the service account. This issue is crippling and frustrating and I don't understand why I have to deal with it.

@uhari03

uhari03 commented Feb 14, 2019

I am also running into this issue at the Deployment resource level. My concern is that mounting the secret as a workaround is fiddling with implementation details, and we'll need to update our workaround as Kubernetes evolves toward handling service accounts / service account tokens in a different way.

@ukstv

ukstv commented Mar 9, 2019

@radeksimko At least, you could document the behaviour and your proposed solution in the corresponding sections of Terraform docs about kubernetes_service_account, kubernetes_deployment, and kubernetes_pod.

@onorua

onorua commented Mar 27, 2019

Not really sure why the Terraform folks decided to change the default behavior of kubectl and the rest of the Kubernetes ecosystem without even documenting how to make it work.
I've spent several hours debugging what was wrong, tried the "workaround" for a problem which should not exist in the first place, and still could not get it working.
Is there any way to get it working as expected?

@mingfang
Author

@onorua To address this issue and many others, I've created my own Kubernetes provider here: https://github.com/mingfang/terraform-provider-k8s. For this case it will work as expected and no workarounds are needed.

@onorua

onorua commented Mar 27, 2019

@mingfang thank you very much for your help. Unfortunately that would require distributing your provider to all my colleagues, which may be inconvenient.
It would be much better if the Terraform folks fixed it upstream; considering the age of the ticket, as well as the "question" tag, this was not treated as a bug (which it is, if you ask me). Most probably we should avoid using Terraform for Kubernetes management in our environment.

@tdmalone
Contributor

@onorua Unfortunately the official Terraform Kubernetes provider is currently plagued by some questionable design decisions that aren't a priority to fix at the moment due to Terraform 0.12's imminent arrival (and delays).

@rayterrill

Echoing some of the other comments above: I spent hours trying to figure out why a service account and deployment weren't working in Terraform but worked with no issues in kubectl - it was AutomountServiceAccountToken being hardcoded to false in the deployment resource.

At a minimum this should be documented in the Terraform docs for the resource, with a note that the resource does not behave the way kubectl does.

I was able to use code like that provided by @radeksimko to eventually get it working.

@Tawmu

Tawmu commented Jun 27, 2019

I just ran into this problem today when writing the first of our Terraform modules for our infrastructure. It's disappointing that this hasn't been fixed two years on, when it actively breaks default Kubernetes behaviour.

Thankfully, @radeksimko's volumeMount workaround worked for us.

@tdmalone
Contributor

In case anyone missed it, this is really close to being resolved here: #261

@ghost locked and limited conversation to collaborators Apr 21, 2020