Generate EKS Config #380

Closed
wants to merge 3 commits into from
Conversation

vickleford

When working with terraform-provider-kubernetes, we at our company are usually deploying something to some EKS cluster. This creates an expectation that the kubeconfig is managed (1) consistently anywhere terraform will execute, and (2) in a way that is configured appropriately for every possible EKS deployment target. We address that problem by generating the kubeconfig for an EKS cluster while respecting (not modifying or overwriting) ~/.kube/config.

Without this, we could still generate a kubeconfig for EKS, but only by convoluting our terraform across N modules aimed at EKS deployments with local provisioners, which means a lot of complication and a lot of duplication (a sketch of that workaround follows below). Even if we centralize the terraform configuration in a module, we still have to make sure the kubeconfig is generated and ready before any of the kubernetes resources run to provision things into EKS. On top of that sits the complication of depending on modules.
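
For illustration, here is a minimal sketch of that local-provisioner workaround; the resource and variable names are ours and purely illustrative, not part of this PR:

resource "null_resource" "eks_kubeconfig" {
  # Illustrative only: shells out to the AWS CLI to write a kubeconfig
  # somewhere the kubernetes provider can find it.
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name ${var.eks_cluster_id} --kubeconfig ${path.module}/kubeconfig"
  }
}

Every kubernetes resource then has to be ordered after this resource, directly or through module dependencies, before the provider can do anything.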

Stepping back and thinking of this problem as code, we are prompted to question whether configuring the provider's prerequisites belongs in our terraform files at all, and we believe the answer is a solid no. Consider terraform-provider-aws, which lets us pass configuration options into the provider instead of forcing us to manage a local ~/.aws/credentials with terraform. Consider terraform-provider-kubernetes, which lets us specify which kubeconfig to use. In the same way, authenticating to an EKS cluster should not be managed by terraform resources outside the chosen provider. Since AWS lets us generate a kubeconfig sans secrets, it makes sense to simply tell the provider which cluster to deploy to, in the same way we tell the AWS provider which region to deploy resources into.
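
Purely to illustrate the shape of the interface we have in mind, a provider block could look something like the sketch below; the eks_cluster_name attribute is hypothetical and is not necessarily the schema this PR implements:

provider "kubernetes" {
  # Hypothetical attribute, for illustration only, not the provider's
  # actual schema. The idea is to point the provider at an EKS cluster
  # the same way the aws provider is pointed at a region.
  eks_cluster_name = "${var.eks_cluster_id}"
}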

Thanks for reviewing my PR! Please be direct in your critique.

@ghost ghost added the size/L label Mar 29, 2019
@mbarrien
Contributor

mbarrien commented Mar 30, 2019

This puts a very AWS-specific setting into a generic Kubernetes provider (and gives that generic provider a non-test dependency on the vendor-specific AWS library); if this belongs anywhere, it belongs in the aws provider. And in some sense it already does, because this pattern works:

data "aws_eks_cluster" "cluster" {
  name = "${var.eks_cluster_id}"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "${var.eks_cluster_id}"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster.token}"
  load_config_file       = false
}

That said, the above solution does suffer from putting the authentication token generated by aws_eks_cluster_auth into the statefile, while your code prevents that from leaking.

If you want to generate the kubeconfig for use outside of Terraform, aws eks update-kubeconfig --name foobar works.

@vickleford
Author

@mbarrien cool, thanks. This looks like a better alternative. I'm not following how I could put this into the aws provider instead, though. So that I can weigh the two options (putting this in the AWS provider vs. using the example resources above in the right module), can you help me understand more about your aws provider proposal? Sorry, this is my third week with Terraform ^_^

@vickleford
Author

  • Agree this is way too specific for this provider as-is
  • Agree the provided pattern is good enough for now
    • Thanks for the heads-up that it puts token in the statefile
  • Have some ideas cooking up about feeding in a config file as a string from a new param
    • From some homebrewed output on the cluster
    • From a new export that can be added to data.aws_eks_cluster
  • Will close this for now and open a new PR later

Thanks @mbarrien for providing some good direction!

@vickleford vickleford closed this Apr 1, 2019
@ghost ghost locked and limited conversation to collaborators Apr 21, 2020