Feature Request: Export AKS clusterAdmin credentials #2421

Closed
haodeon opened this issue Nov 30, 2018 · 2 comments
haodeon commented Nov 30, 2018

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Add clusterAdmin exported attributes for azurerm_kubernetes_cluster.

Currently the Terraform AKS resource only exports the kube_config for the clusterUser role.

After creating an Azure AD enabled RBAC cluster, one must run "az aks get-credentials --admin" in order to get the clusterAdmin credentials to set up RBAC.

If azurerm_kubernetes_cluster exported clusterAdmin credentials, those attributes could be used as inputs to the Terraform kubernetes provider, and the "kubernetes_cluster_role_binding" resource could then be used to set up RBAC.

New or Affected Resource(s)

azurerm_kubernetes_cluster

Potential Terraform Configuration

output "kube_config_admin" {
  value = "${azurerm_kubernetes_cluster.k8s.kube_config_admin_raw}"
}
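
If the resource also exported a structured admin config block, it could feed the kubernetes provider directly. A sketch of what that could look like, assuming a hypothetical kube_admin_config exported block whose sub-attributes mirror the existing kube_config block, and a placeholder AAD group object ID:

```hcl
# Sketch only: kube_admin_config does not exist yet; attribute names
# mirror the existing kube_config exported block.
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.k8s.kube_admin_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_admin_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_admin_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_admin_config.0.cluster_ca_certificate)}"
}

resource "kubernetes_cluster_role_binding" "admins" {
  metadata {
    name = "aad-cluster-admins"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "00000000-0000-0000-0000-000000000000" # placeholder AAD group object ID
  }
}
```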

References


neumanndaniel commented Dec 5, 2018

This is especially important when working with RBAC and AAD enabled on AKS and you want to provision ClusterRoleBindings for your users.

The current workaround is to integrate the following into your TF deployment for AKS. Otherwise, once the AKS deployment process finishes, you have no way to run additional configuration tasks against the cluster itself.

resource "null_resource" "k8s" {
  provisioner "local-exec" {
    command = "az aks get-credentials --resource-group ${azurerm_resource_group.k8s.name} --name ${var.cluster_name} --admin"
  }
}
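
Once that local-exec has merged the admin context into the local kubeconfig, the kubernetes provider can consume it by context name. A minimal sketch, assuming the default kubeconfig path and the `<cluster name>-admin` context that `az aks get-credentials --admin` writes:

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "${var.cluster_name}-admin" # context merged by `az aks get-credentials --admin`
}
```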

Since TF currently only supports AKS without RBAC, or AKS with RBAC & AAD, and not the option of AKS with RBAC alone, it is important to have this capability.


ghost commented Mar 5, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 5, 2019