Kubernetes Provider tries to reach localhost:80/api when targeting azurerm resources #405
Could you share your provider configuration? Provider initialization is done at startup; that may explain what's going on.
```hcl
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.cluster.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.cluster.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}
```
As the cluster is in the same Terraform stack, these values are not available during initialization; a second apply should work in that case. Personally, with GKE, I do not manage the cluster and the Kubernetes configuration in the same stack, but use remote states or datasources to get the credentials from my GKE stack in the k8s stack. Some people have had success by generating a kubeconfig file instead. Note: using datasources instead of remote states to pass the credentials is the preferred choice when available, as it avoids storing the Kubernetes credentials in the Terraform state.
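A minimal sketch of the datasource approach, assuming the AKS cluster was created in a separate stack; the `name` and `resource_group_name` values here are hypothetical placeholders, and the `username`/`password` attributes are left out since the certificate credentials suffice:

```hcl
# Look up the existing cluster instead of referencing the resource that
# creates it, so the credentials are known before this stack initializes.
data "azurerm_kubernetes_cluster" "cluster" {
  name                = "my-aks-cluster"    # placeholder
  resource_group_name = "my-resource-group" # placeholder
}

provider "kubernetes" {
  host                   = "${data.azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  client_certificate     = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}
```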
Ok, then I was confused by the documentation on terraform.io as - as I renember it - says that you can use this configuration. But how? Would be pretty cool for us, so that we don't produce more blocker traffic on our agent cluster... |
To generate kubeconfig for AKS, have a look at https://github.com/terraform-providers/terraform-provider-kubernetes/blob/c9a99e9351709871cfff8652adf9ff939ac4613d/kubernetes/test-infra/aks/main.tf#L91
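The essence of that approach is writing out the raw kubeconfig AKS already returns; a hedged sketch using the `local_file` resource from the `local` provider (the output path is an assumption, adjust to taste):

```hcl
# Write the raw kubeconfig returned by AKS to disk so kubectl, or a
# kubernetes provider configured with a config path, can pick it up.
resource "local_file" "kubeconfig" {
  content  = "${azurerm_kubernetes_cluster.cluster.kube_config_raw}"
  filename = "${path.module}/kubeconfig" # assumed location
}
```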
Regarding the usage of datasources with separate stacks: #161 (comment) (that's for EKS, not AKS)
Thank you! I think I will be able to go on from here. I just have to find out if it is possible to do with azurerm.
Side note worth mentioning: here's the option I'm using with GKE and Google OAuth2 access tokens, cf.:
Not sure if the equivalent is available for AKS or EKS.
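A hedged sketch of what that GKE token approach typically looks like; the `google_container_cluster.cluster` reference is a hypothetical name for an existing cluster resource, and the point is that the short-lived OAuth2 token avoids static credentials:

```hcl
# The google provider exposes the current credentials' OAuth2 access token.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "${google_container_cluster.cluster.endpoint}"
  token                  = "${data.google_client_config.default.access_token}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)}"
  # Prevent the provider from falling back to a local kubeconfig.
  load_config_file = false
}
```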
Apparently this issue doesn't happen if you specify the provider version "1.10.0" for kubernetes. My code is very similar to this, except for the use of username and password:
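A pin along these lines (Terraform 0.11 syntax) reflects that workaround; the connection attributes shown are the certificate-based ones from the configuration earlier in the thread, minus username/password:

```hcl
provider "kubernetes" {
  # Pin the provider to the version reported to avoid the localhost fallback.
  version = "1.10.0"

  host                   = "${azurerm_kubernetes_cluster.cluster.kube_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)}"
}
```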
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Terraform Version
0.11.13 (Azure DevOps Extension)
Affected Resource(s)
Every Kubernetes resource I try to use.
Terraform Configuration Files
Debug Output
https://gist.github.com/damnedOperator/f7aa5fcffb49ed12cd24d5fa58f362c1
Expected Behavior
Terraform apply should complete without errors, and the Kubernetes provider should configure the Kubernetes cluster created on AKS.
Actual Behavior
Terraform apply fails, and the provider tries to connect to either localhost or ".visualstudio.com".
Steps to Reproduce
Important Factoids
The Terraform job runs in a release pipeline on Azure DevOps Server 2019. What puzzles me is that the azurerm provider does not run into errors.
We deploy the tfstate file from source control, so Terraform always knows its state.
References
#382 reports similar behaviour, but in our case, the provider is configured with attributes from the AKS creation