When working with terraform-provider-kubernetes, we are usually deploying something to an EKS cluster. That creates an expectation that the kubeconfig is managed (1) consistently anywhere terraform will execute and (2) in a way that is configured appropriately for every possible EKS deployment target. We seek to address that problem by generating the kubeconfig for an EKS cluster while respecting (not modifying or overwriting) ~/.kube/config.
Without this, we can still generate a kubeconfig for EKS, but only by spreading local provisioners across the N modules aimed at EKS deployments, with plenty of complication and plenty of duplication. Even if we centralize that terraform in a single module, we still have to guarantee the kubeconfig is generated and ready before any kubernetes resources are applied into EKS, and on top of that sits the awkwardness of depending on modules. A rough sketch of the workaround we want to avoid is below.
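To make the pain concrete, here is a minimal sketch of that workaround, assuming the common null_resource/local-exec pattern and a hypothetical cluster named `my-cluster` (this is not code from this PR, just an illustration):

```hcl
# Illustrative only: the local-exec workaround we want to avoid.
# "my-cluster" and the kubeconfig path are placeholders.
resource "null_resource" "kubeconfig" {
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name my-cluster --kubeconfig ${path.module}/kubeconfig"
  }
}

provider "kubernetes" {
  config_path = "${path.module}/kubeconfig"
  # Nothing here guarantees null_resource.kubeconfig has run before the
  # provider is configured, so every module has to force that ordering by hand.
}
```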
Sitting back and thinking about this problem as code, we have to ask whether configuring the provider belongs in our terraform resources at all, and we believe the answer is a firm no. Consider terraform-provider-aws, which lets us pass configuration options into the provider instead of forcing us to manage a local ~/.aws/credentials with terraform. Consider terraform-provider-kubernetes, which lets us specify which kubeconfig to use. In the same way, authentication to an EKS cluster should not be managed by terraform resources outside the chosen provider. Since AWS lets us generate a kubeconfig without embedding secrets, it makes sense to simply tell the provider which cluster to deploy to, just as we tell another provider which region to deploy AWS resources into.
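For comparison, this is roughly the shape we are after, sketched here with the existing aws_eks_cluster and aws_eks_cluster_auth data sources; the cluster name is a placeholder and the sketch is meant to illustrate the idea rather than the exact interface:

```hcl
# Sketch of the shape we are after: point the provider at the cluster and let
# it authenticate, with nothing written to disk and ~/.kube/config untouched.
# "my-cluster" is a placeholder.
data "aws_eks_cluster" "target" {
  name = "my-cluster"
}

data "aws_eks_cluster_auth" "target" {
  # Issues a short-lived token via the AWS API, so no secrets are stored.
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.target.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.target.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.target.token
}
```

The point is that the connection to the cluster is configured on the provider itself, not assembled out of band by other terraform resources.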
Thanks for reviewing my PR! Please be direct in your critique.