Feature Request: Support exec Authentication (client-go Credential Plugins) #161
EDIT (Mar 2019) -- this hack is unnecessary now due to the eks_cluster_auth data source (see bflad's comment)!

I've gotten around this in a rather hacky way to get EKS initialized with workers:

```hcl
resource "aws_eks_cluster" "default" {
  name = "my-cluster"
  # ...
}

data "external" "heptio_authenticator_aws" {
  program = ["bash", "${path.module}/authenticator.sh"]

  query {
    cluster_name = "my-cluster"
  }
}

provider "kubernetes" {
  host                   = "${aws_eks_cluster.default.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.default.certificate_authority.0.data)}"
  token                  = "${data.external.heptio_authenticator_aws.result.token}"
  load_config_file       = false
}
```

and the authenticator script:

```bash
#!/bin/bash
set -e

# Extract cluster name from STDIN
eval "$(jq -r '@sh "CLUSTER_NAME=\(.cluster_name)"')"

# Retrieve token with Heptio Authenticator
TOKEN=$(heptio-authenticator-aws token -i "$CLUSTER_NAME" | jq -r .status.token)

# Output token as JSON
jq -n --arg token "$TOKEN" '{"token": $token}'
```

With this method I was able to use the `kubernetes_config_map` resource:

```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data {
    mapRoles = <<YAML
- rolearn: ${module.iam.node_role_arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
YAML
  }
}
```

so that my workers come online with the EKS cluster in a single apply. |
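It may help to spell out the `external` data source protocol the script above relies on: Terraform writes the `query` map as a JSON object on the program's stdin and expects a single JSON object back on stdout. Here is a minimal sketch of that round trip, with the `heptio-authenticator-aws` call replaced by a fake token so it runs without AWS credentials (requires jq; the token value is made up):

```shell
#!/bin/bash
set -e

# Stand-in for authenticator.sh: same stdin parsing and stdout shape,
# but the real heptio-authenticator-aws call is replaced by a fake token.
fake_authenticator() {
  # Extract cluster name from the JSON Terraform sends on stdin
  eval "$(jq -r '@sh "CLUSTER_NAME=\(.cluster_name)"')"
  TOKEN="dummy-token-for-$CLUSTER_NAME"
  # Emit the single JSON object Terraform expects on stdout
  jq -nc --arg token "$TOKEN" '{"token": $token}'
}

# Simulate Terraform invoking the program with the query map on stdin
RESULT=$(echo '{"cluster_name":"my-cluster"}' | fake_authenticator)
echo "$RESULT"  # prints {"token":"dummy-token-for-my-cluster"}
```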
@tristanpemble Whenever I run your example, I get an unauthorized error. Any idea what might be the cause of that? I am not creating a config map in my example, though; I have already created the config map manually. If I do something like this, it works, and Terraform works: https://blog.stigok.com/2018/06/25/aws-eks-kubernetes-terraform-system-anonymous-service-account.html I would rather use Heptio for authentication than a hard-coded token. |
I already had my workers coming up with the Terraform instructions, but I can't quite figure out how to actually use Terraform to control Kubernetes yet. TBH, I am trying to learn both of these things at once and I don't understand the k8s auth structure yet, so this may not be the right place to ask. |
Tip for those having trouble: try running the authenticator manually. Temporarily hard-code that token inside your Terraform config. Once you verify that Heptio can give you a working token, then you can figure out the `data "external"` trick.

I was originally encountering some problems until I realized that, because I use a non-default AWS profile from my ~/.aws/credentials, I needed to set the AWS_PROFILE environment variable for all runs of Terraform associated with that cluster. (I forgot that Terraform is going to invoke heptio-authenticator as a subprocess, and that subprocess needs to be set up with AWS credentials to be able to pull a token.)

I did get @tristanpemble's example working for my own case. Thanks! Although, I'm curious why there isn't a dependency ordering issue, since |
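The subprocess point above is easy to verify in isolation: any exported variable (such as AWS_PROFILE) is inherited by child processes, which is how the authenticator picks up its credentials configuration when Terraform invokes it. A trivial sketch, where the profile name is a placeholder and a child shell stands in for the authenticator:

```shell
# Exported env vars are inherited by subprocesses, which is why setting
# AWS_PROFILE before running terraform also configures the authenticator.
export AWS_PROFILE=my-cluster-profile  # placeholder, not a real profile

# A child shell stands in for the heptio-authenticator subprocess here
CHILD_SEES=$(bash -c 'echo "$AWS_PROFILE"')
echo "$CHILD_SEES"  # prints my-cluster-profile
```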
The |
Aha, makes sense, thanks. So the authenticator still gets you a token even if the named cluster doesn't actually exist. |
FYI: The Terraform tutorial tells you to use |
I tried this myself and got a token, but any attempt to create a k8s resource results in an 'unauthorized' error.
The external script:
Output from terraform apply:
|
I am not entirely sure whether extracting the EKS endpoint and cert auth from the data resource works like that. I provisioned my EKS cluster directly using the AWS provider. I would also check whether the correct IAM role policy attachments are in place. How I did it: https://github.com/moosahmed/Stateful_Symphony/blob/master/terraform/modules/eks/cluster/cluster.tf; this works fine for me. Good luck! Also check `rolearn: ${data.terraform_remote_state.eks.node_iam_role}` to make sure you are feeding in the correct ARN. |
@moosahmed I've already provisioned the EKS cluster resource and nodes. Now I'm trying to provision services inside k8s, such as service accounts. Using kubectl works, but it's not idempotent, so I'm trying to use Terraform instead. No matter what I try the Terraform provider fails to authenticate to k8s. |
@neilhwatson I had to create a token for terraform to use. My provider looks like this
|
Thanks for the hard work. My issue was gone after upgrading to v1.3.0. |
To modify @neilhwatson's response, here is a workaround that avoids checking in an external script (but still requires jq to be installed):
My org also sticks in |
@mbarrien - What's the query for? Why not just do:
|
For Windows users; requires that you have jq installed. |
@aaronmell - would be interested to see what the code looks like for |
@ostmike I created this in Kubernetes: `apiVersion: v1 ...`
|
FYI, if you're using EKS, there is hashicorp/terraform-provider-aws#4904, which is much more seamless and the "Terraform way". |
Related to the above comment with EKS, hashicorp/terraform-provider-aws#7438 (a continuation of hashicorp/terraform-provider-aws#4904) has been merged and will release in version 1.58.0 of the Terraform AWS provider, which contains a new `aws_eks_cluster_auth` data source:

```hcl
data "aws_eks_cluster" "example" {
  name = "example"
}

data "aws_eks_cluster_auth" "example" {
  name = "${data.aws_eks_cluster.example.name}"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.example.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.example.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.example.token}"
  load_config_file       = false
}
```
|
@bflad Thank you. Worked a charm! |
A work in progress pull request for the last part of this feature request can be found here: #396

It should enable something like the following for AWS EKS:

```hcl
# Functionality under development - details may change during implementation
provider "kubernetes" {
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["token", "-i", "${data.aws_eks_cluster.cluster.name}"]
    command     = "aws-iam-authenticator"
  }
}
```
|
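For comparison, the same exec configuration expressed in an on-disk kubeconfig file (the setup this provider feature makes optional) looks roughly like the following. This is a sketch based on the client-go credential plugin documentation; `eks-user` and the cluster name `example` are placeholder values:

```yaml
# kubeconfig user entry using a client-go credential plugin
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "example"
```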
@alexsomesan Can you release a new version that includes this PR, please? |
Kubernetes implemented support for an `exec` authentication provider, where the client can generically reach out to another binary to retrieve a valid authentication token. This is used for supporting authentication models such as LDAP credentials or AWS IAM credentials. This feature is alpha in Kubernetes 1.10; however, it is the required model for working with AWS EKS.

Documentation on client-go credential plugins can be found here: https://kubernetes.io/docs/admin/authentication/#client-go-credential-plugins

Presumably a few things will need to happen to land this support:

Optionally:

- Support `exec` authentication instead of requiring an on-disk kubeconfig for this setup
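Concretely, a client-go credential plugin communicates by printing an `ExecCredential` object on stdout, and the jq one-liners earlier in this thread extract `.status.token` from exactly that shape. A small sketch with a placeholder token (requires jq):

```shell
# Shape of the ExecCredential object an exec plugin prints on stdout.
# The token value below is a placeholder, not a real credential.
EXEC_CREDENTIAL='{
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "kind": "ExecCredential",
  "status": { "token": "k8s-aws-v1.EXAMPLE" }
}'

# Clients (and the scripts in this thread) read .status.token
TOKEN=$(echo "$EXEC_CREDENTIAL" | jq -r .status.token)
echo "$TOKEN"  # prints k8s-aws-v1.EXAMPLE
```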