
kube_config parameter for data.aws_eks_cluster #8149

Closed
wants to merge 3 commits into from

Conversation

@vickleford commented Apr 1, 2019

Add a kube_config parameter for data.aws_eks_cluster so that it can be consumed in the manner proposed in hashicorp/terraform-provider-kubernetes#383.
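For illustration, here is a rough sketch of how the proposed attribute might be wired into the Kubernetes provider. This is hypothetical: the kube_config attribute is what this PR proposes, and the config_content argument name below merely stands in for whatever hashicorp/terraform-provider-kubernetes#383 ends up shipping.

# Hypothetical wiring; the config_content argument name is illustrative only.
data "aws_eks_cluster" "example" {
  name = "example"
}

provider "kubernetes" {
  config_content   = "${data.aws_eks_cluster.example.kube_config}"
  load_config_file = false
}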

Community Note

  • Please vote on this pull request by adding a 👍 reaction to the original pull request comment to help the community and maintainers prioritize this request
  • Please do not leave "+1" comments, they generate extra noise for pull request followers and do not help prioritize the request

Changes proposed in this pull request:

  • Add kube_config to data.aws_eks_cluster

Output from acceptance testing:

003041-vwatkins terraform-provider-aws$ make testacc TESTARGS='-run=TestAccAWSEksClusterDataSource_basic' | tee acctest.out2
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./... -v -parallel 20 -run=TestAccAWSEksClusterDataSource_basic -timeout 120m
?   	github.com/terraform-providers/terraform-provider-aws	[no test files]
=== RUN   TestAccAWSEksClusterDataSource_basic
=== PAUSE TestAccAWSEksClusterDataSource_basic
=== CONT  TestAccAWSEksClusterDataSource_basic
--- PASS: TestAccAWSEksClusterDataSource_basic (1317.00s)
PASS
ok  	github.com/terraform-providers/terraform-provider-aws/aws	1319.116s

@ghost added the size/M, documentation, service/eks, and tests labels on Apr 1, 2019
@vickleford (Author) commented

I will add acceptance test output tomorrow, after unsetting TF_LOG and teeing so it doesn't spill over my console buffer size ;)

@bflad added the enhancement label on Apr 5, 2019
@bflad (Contributor) commented Apr 5, 2019

Hi @vickleford 👋 Thanks for submitting this enhancement. If the Kubernetes provider accepts loading the configuration content directly via a string, I can certainly see how this would seem like a potential way to ease connecting the two providers.

I do have some reservations about hardcoding kubeconfig generation functionality into the AWS provider though:

  • Differing environments will have differing requirements when it comes to properly authenticating to their EKS cluster servers. This kubeconfig template as written currently does not accept configuration for assuming roles (via the aws-iam-authenticator -r argument), enabling the --forward-session-name argument, or adding AWS_PROFILE environment variable information. Supporting these various configurations could make the resource and template logic complex.
  • The kubeconfig file format or aws-iam-authenticator options may change over time, rendering the attribute out of date or obsolete unless the maintainers keep parity in this project (whose expertise and focus is generally on AWS functionality, not necessarily Kubernetes).
  • The functionality of generating the file contents is already possible via locals or the template_file data source. It's not necessarily simple, but it does achieve the same result with full customization, e.g. from the AWS EKS Introduction Guide:
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.demo.endpoint}
    certificate-authority-data: ${aws_eks_cluster.demo.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.demo.name}"
KUBECONFIG
}

output "kubeconfig" {
  value = "${local.kubeconfig}"
}
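(As an aside, if a file on disk is wanted rather than a Terraform output, the rendered local can also be written out with the local_file resource from the local provider; a minimal sketch, with an illustrative filename:)

resource "local_file" "kubeconfig" {
  # Writes the kubeconfig rendered in the locals block above to disk.
  content  = "${local.kubeconfig}"
  filename = "${path.module}/kubeconfig_demo"
}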
data "aws_eks_cluster" "example" {
  name = "example"
}

data "aws_eks_cluster_auth" "example" {
  name = "example"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.example.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.example.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.example.token}"
  load_config_file       = false
}

An alternative option here might be creating a kubernetes_kubeconfig data source in the Kubernetes provider that accepts HCL for some of the configuration parameters and converts it to the proper YAML (similar to how we handle the aws_iam_policy_document data source in this provider to create IAM-compatible JSON). Taking that one step further, a more specialized data source tailored for exec authentication could potentially be created. Either of these would allow keeping the kubeconfig and Kubernetes domain expertise within the Kubernetes provider.
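Purely to illustrate that idea (nothing like this exists today; the data source name, block layout, and rendered attribute below are all invented for the sketch), it might look roughly like:

# Hypothetical kubernetes_kubeconfig data source; schema and attribute names
# are invented for illustration only.
data "kubernetes_kubeconfig" "eks" {
  cluster {
    name                       = "kubernetes"
    server                     = "${data.aws_eks_cluster.example.endpoint}"
    certificate_authority_data = "${data.aws_eks_cluster.example.certificate_authority.0.data}"
  }

  user {
    name = "aws"

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws-iam-authenticator"
      args        = ["token", "-i", "${data.aws_eks_cluster.example.name}"]
    }
  }
}

output "kubeconfig_yaml" {
  # A rendered attribute would expose the generated YAML, similar to how
  # aws_iam_policy_document exposes json.
  value = "${data.kubernetes_kubeconfig.eks.rendered}"
}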

Another alternative would be implementing the second half of hashicorp/terraform-provider-kubernetes#161, which would configure exec authentication information directly in the Kubernetes provider configuration rather than dealing with an entire file/template. I have not spent time on a design sketch of what that functionality may look like from the operator perspective (been too busy on the AWS front 😅), but it essentially would accept the exec portions of the kubeconfig, e.g.

# Very quick design sketch with example arguments
provider "kubernetes" {
  host                   = "${aws_eks_cluster.example.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.example.certificate_authority.0.data)}"
  load_config_file       = false

  exec {
    command = "aws-iam-authenticator token -i ${aws_eks_cluster.example.name}"

    environment_variables = {
      AWS_PROFILE = "production"
    }
  }
}

This would be generic to accept any exec authentication, not just aws-iam-authenticator.
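For instance, the same block shape could drive any other credential helper; a sketch with a placeholder command (not a real tool):

# Illustrative only; "example-credential-helper" stands in for any
# exec-based credential plugin.
provider "kubernetes" {
  host             = "https://k8s.example.com"
  load_config_file = false

  exec {
    command = "example-credential-helper get-token --cluster k8s.example.com"

    environment_variables = {
      EXAMPLE_PROFILE = "production"
    }
  }
}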

What do you think? 😄

@bflad added the waiting-response label on Apr 5, 2019
@vickleford (Author) commented

Hello @bflad 👋 and well met! Thanks for a detailed and well-presented train of thought.

If I am understanding you correctly, your hesitation is 1) the limited solution space of the template as written, and 2) that broadening that solution adds unwanted complexity to the AWS provider. If that is on track, then stepping back and considering the provider at large, I have to agree with your critique, as evidenced by my own surprise that template generation was absent from aws-sdk and by my asking "where should this code live in this provider?"

The alternatives proposed are quite workable. In fact, we're already generating the config as an output using locals, and this isn't terrible given that it is centralized and not repeated.

Do we find agreement that this attribute is both valuable and at home in this provider, if the right implementation can be found? For example, what if template generation were implemented in aws-sdk and then used here, instead of bogging this provider down with template generation details? If so, is there a chance for this to be accepted in the meantime as a minimal implementation, to open a door to finding more concrete use cases?

It has been a long week, and I think I'm missing the elegance of your proposal about exec authentication on the kubernetes provider. This is exhaustion on my part; your presentation was great. I need some time to take a breath and then come back to digest it. I ask your forgiveness.

Thanks again for your fair and just feedback, attention to detail, great communication, and all of your work for the Terraform providers. I'll update you again soon.

@ghost removed the waiting-response label on Apr 6, 2019
@bflad (Contributor) commented Apr 24, 2019

Hi again @vickleford 👋 It looks like work has started on implementing exec-based credential plugins in the Terraform Kubernetes Provider, e.g. hashicorp/terraform-provider-kubernetes#161 (comment)

# Functionality under development - details may change during implementation
provider "kubernetes" {
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.cluster.certificate_authority.0.data)}"
  host                   = "${aws_eks_cluster.cluster.endpoint}"
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["token", "-i", "${aws_eks_cluster.cluster.name}"]
    command     = "aws-iam-authenticator"
  }
}

This would provide another option that completely bypasses Kubernetes configuration file handling when supplying Terraform Kubernetes Provider compatible information, removing the need to template it within the Terraform AWS Provider as a resource attribute.

Given we have various existing solutions to this problem and work in progress for another option, I'm going to close this out now due to the complexity and maintenance burden it would introduce within this provider. If there are use cases I am missing which cannot be solved with the existing provider attributes, please do reach out, or preferably submit a feature request GitHub issue with additional use case details. Thanks.

@bflad closed this on Apr 24, 2019
@ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost locked and limited conversation to collaborators on Mar 30, 2020