Get an error when using dynamic credentials #129
Comments
Getting into the same bug... Setup:

provider "kubernetes-alpha" {
  server_side_planning = true
}

resource "kubernetes_manifest" "cert_manager_cluster_issuer_prd" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-prd"
    }
    spec = {
      acme = {
        # https://letsencrypt.org/docs/acme-protocol-updates/
        server = "https://acme-v02.api.letsencrypt.org/directory"
        # Email for the cert contact
        email = "contact@${var.domain}"
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef = {
          name = "${var.domain}-private-key-secret"
        }
        # Zone resolvers by Route53 DNS01 challenges
        solvers = [{
          selector = {
            dnsZones = [var.domain]
          }
          dns01 = {
            route53 = {
              region = var.aws_region
              # https://stackoverflow.com/questions/63402926/fetch-zone-id-of-hosted-domain-on-route53-using-terraform/63403290#63403290
              hostedZoneID = data.aws_route53_zone.domain_hosted_zone.zone_id
            }
          }
        }]
      }
    }
  }
}

Logs
With this provider you cannot supply credentials from a resource that is created in the same apply. Is that what your example is doing?
Yes, the cluster is created in the same apply. According to issue GH-82, it's not possible right now?
In order for things to improve in these situations where you create the cluster in the same apply, some changes are required in Terraform itself. The issue is tracked upstream here: hashicorp/terraform#4149
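
To make the failure concrete, here is a minimal sketch of the pattern being described. It assumes an eks module and that the kubernetes-alpha provider accepts the same host/cluster_ca_certificate/token arguments as the standard kubernetes provider; both are assumptions, not taken from the reporter's configuration. Because the cluster is created in the same apply, none of these values are known at plan time, so the provider ends up with no client configuration:

# Hypothetical sketch: provider credentials derived from a cluster created in this same apply.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id # module.eks does not exist yet at plan time
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes-alpha" {
  server_side_planning   = true
  # Auth arguments assumed to mirror the official kubernetes provider.
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}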
Ok, but I'm using resources from the Kubernetes provider in the same way for creating namespaces, and it's working.
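
For contrast, the kind of resource that reportedly keeps working on the official kubernetes provider looks roughly like this (a sketch; the namespace name is illustrative). Unlike kubernetes_manifest with server_side_planning, it does not need to reach the cluster during plan:

# Sketch: a namespace managed with the official kubernetes provider (name is illustrative).
resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = "cert-manager"
  }
}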
Hey @lperrin-obs, solved my problem in a different way: using data sources fed by the eks module's outputs:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  version                = ">= 1.13.2"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

terraform {
  required_version = ">= 0.12"

  backend "s3" {}

  required_providers {
    aws    = ">= 3.0, < 4.0"
    random = "~> 3.0.0"
    k8s = {
      version = "0.8.2"
      source  = "banzaicloud/k8s"
    }
  }
}

# Configure the provider with the credentials from the eks module
provider "k8s" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
@marcellodesales, but the problem is not with the kubernetes provider; it's with the kubernetes-alpha provider, which fails at plan time.
It appears that until Terraform supports "partial apply" via #4149, this cannot really be solved without splitting the terraform configuration in two (one apply for the cluster, a second one for the Kubernetes resources).
Since #4149 was opened in 2015, I believe it's very unlikely that we will get the partial apply functionality anytime soon.
@ecerulm Yeah, I ended up using the split approach, with a separate configuration per stage.
That way we have a pipeline... For instance, creating AWS certs for ALBs would need the ARN of the declared ALBs... So far that's where I'm going...
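
A rough sketch of what such a split can look like with terraform_remote_state; the directory layout, backend settings, and output names below are assumptions for illustration, not taken from this thread. The first configuration creates the EKS cluster and exports its id; the second one is planned and applied afterwards, so the cluster already exists by the time kubernetes-alpha needs a client:

# Stage 1 (e.g. ./cluster): creates the EKS cluster and exports what stage 2 needs.
# output "cluster_id" {
#   value = module.eks.cluster_id
# }

# Stage 2 (e.g. ./k8s): applied only after stage 1 has been applied.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # assumed backend settings
    key    = "cluster/terraform.tfstate"
    region = "eu-west-1"
  }
}

data "aws_eks_cluster" "cluster" {
  name = data.terraform_remote_state.cluster.outputs.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.terraform_remote_state.cluster.outputs.cluster_id
}

provider "kubernetes-alpha" {
  server_side_planning   = true
  # Auth arguments assumed to mirror the official kubernetes provider.
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

The trade-off is exactly the pipeline described above: two applies ordered by CI instead of one apply ordered by Terraform's dependency graph.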
This is a significant blocker to automating a K8s cluster, as there are always manifest changes required during provisioning. Is there a reason why, if the provider credentials are empty, the plan can't simply mark the resources as undefined?
Terraform Version and Provider Version
Terraform v0.13.5
Kubernetes Version
1.19.2
Affected Resource(s)
kubernetes_manifest
Terraform Configuration Files
Debug Output
https://gist.github.com/lperrin-obs/e42a62c29e37f3c37483d41dc54625ed
Expected Behavior
The K8S manifest will be applied.
Actual Behavior
I get an error
Error: rpc error: code = Unknown desc = no client configuration
when running the plan and the K8S cluster is not yet created.
Steps to Reproduce
terraform plan
References