This repository has been archived by the owner on Aug 11, 2021. It is now read-only.

Get an error when using dynamic credentials #129

Open
lperrin-obs opened this issue Oct 23, 2020 · 9 comments
Labels: bug (Something isn't working), upstream-terraform

Comments


lperrin-obs commented Oct 23, 2020

Terraform Version and Provider Version

Terraform v0.13.5

  • provider registry.terraform.io/hashicorp/helm v1.3.2
  • provider registry.terraform.io/hashicorp/kubernetes v1.13.2
  • provider registry.terraform.io/hashicorp/kubernetes-alpha v0.2.1
  • provider registry.terraform.io/hashicorp/null v3.0.0
  • provider registry.terraform.io/terraform-providers/gitlab v3.1.0
  • provider registry.terraform.io/terraform-providers/ovh v0.9.1
  • provider registry.terraform.io/terraform-providers/scaleway v1.17.0

Kubernetes Version

1.19.2

Affected Resource(s)

  • kubernetes_manifest


Terraform Configuration Files

resource "scaleway_k8s_cluster_beta" "chewsk8s" {
  name = "chewsk8s"
  version = "1.19.2"
  cni = "calico"
  ingress = "nginx"
}
resource "scaleway_k8s_pool_beta" "chewsk8s_pool" {
  cluster_id = scaleway_k8s_cluster_beta.chewsk8s.id
  name = "chewsk8s_pool"
  node_type = "DEV1-M"
  size = 1
  wait_for_pool_ready = true
}
resource "null_resource" "kubeconfig" {
    depends_on = [scaleway_k8s_pool_beta.chewsk8s_pool]
    triggers = {
         host = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].host
         token = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].token
         cluster_ca_certificate = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].cluster_ca_certificate
    }
}

provider "kubernetes-alpha" {
  host             = null_resource.kubeconfig.triggers.host
  token            = null_resource.kubeconfig.triggers.token
  cluster_ca_certificate = base64decode(
     null_resource.kubeconfig.triggers.cluster_ca_certificate
  )
}

resource "kubernetes_manifest" "cluster-issuer" {
  provider = kubernetes-alpha

  manifest = {
      apiVersion = "cert-manager.io/v1"
      kind       = "ClusterIssuer"
      metadata   = {
          name = "letsencrypt-prod"
      }
      spec = {
          acme = {
              email = "#############"
              server = "https://acme-v02.api.letsencrypt.org/directory"
              privateKeySecretRef = {
                name = "letsencrypt-prod"
              }
              solvers = [{
                http01 = {
                  ingress = {
                    class = "nginx"
                  }
                }
              }]
          }
      }
  }
}

Debug Output

https://gist.github.com/lperrin-obs/e42a62c29e37f3c37483d41dc54625ed

Expected Behavior

The K8s manifest is planned and applied.

Actual Behavior

I get the error Error: rpc error: code = Unknown desc = no client configuration when running the plan, while the K8s cluster has not yet been created.

Steps to Reproduce

  1. terraform plan

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
lperrin-obs added the bug (Something isn't working) label on Oct 23, 2020
@marcellodesales

Running into the same bug...

Setup

provider "kubernetes-alpha" {
  server_side_planning = true
}

resource "kubernetes_manifest" "cert_manager_cluster_issuer_prd" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-prd"
    }
    spec = {
      acme = {
        # https://letsencrypt.org/docs/acme-protocol-updates/
        server = "https://acme-v02.api.letsencrypt.org/directory"

        # Email for the cert contact
        email = "contact@${var.domain}"

        # Name of a secret used to store the ACME account private key
        privateKeySecretRef = {
          name = "${var.domain}-private-key-secret"
        }

        # Zone resolvers by Route53 DNS01 challenges
        solvers = [{
          selector = {
            dnsZones = [var.domain]
          }
          dns01 = {
            route53 = {
              region = var.aws_region
              # https://stackoverflow.com/questions/63402926/fetch-zone-id-of-hosted-domain-on-route53-using-terraform/63403290#63403290
              hostedZoneID = data.aws_route53_zone.domain_hosted_zone.zone_id
            }
          }
        }]
      }
    }
  }
}

Logs

2020-10-27T02:37:44.385Z [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/kubernetes-alpha/0.2.1/linux_amd64/terraform-provider-kubernetes-alpha_v0.2.1_x5 pid=257
2020-10-27T02:37:44.385Z [DEBUG] plugin: plugin exited
Error: rpc error: code = Unknown desc = no client configuration

[Screenshot: Screen Shot 2020-10-26 at 11.44.20 PM]

@alexsomesan (Member)

With this provider you cannot supply credentials from a resource that is created in the same apply operation.

Is that what your example is doing?
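
In practice, that means the credential values have to be known before the plan starts, for example passed in as input variables from an earlier run or a separate configuration. A minimal sketch (the variable names here are only illustrative):

# Credentials supplied as plain input variables, so they are known at plan time.
variable "kube_host" {
  type = string
}

variable "kube_token" {
  type = string
}

variable "kube_ca_certificate" {
  type = string
}

provider "kubernetes-alpha" {
  host  = var.kube_host
  token = var.kube_token
  # Assumes the CA certificate is passed in base64-encoded, as in the examples above.
  cluster_ca_certificate = base64decode(var.kube_ca_certificate)
}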

lperrin-obs (Author) commented Oct 28, 2020

Yes, the cluster is created in the same apply.

According to issue GH-82, it's not possible right now?

@alexsomesan (Member)

In order for things to improve in these situations where you create the cluster in the same apply, some changes are required in Terraform itself.

The issue is tracked upstream here: hashicorp/terraform#4149

@lperrin-obs (Author)

OK, but I'm using resources from the Kubernetes provider in the same way to create namespaces, and it works.

provider "kubernetes" {
  load_config_file = "false"

  host             = null_resource.kubeconfig.triggers.host
  token            = null_resource.kubeconfig.triggers.token
  cluster_ca_certificate = base64decode(
     null_resource.kubeconfig.triggers.cluster_ca_certificate
  )
}

resource "kubernetes_namespace" "cert-manager" {
  metadata {
    name = "cert-manager"
  }
}


marcellodesales commented Oct 31, 2020

Hey @lperrin-obs, I solved my problem in a different way: using data sources fed from the EKS module itself...

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
  • Setting up the kubernetes provider
provider "kubernetes" {
  version = ">= 1.13.2"

  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
  • Other modules/providers that depend on the cluster credentials can use the same approach
terraform {
  required_version = ">= 0.12"

  backend "s3" {}

  required_providers {
    aws    = ">= 3.0, < 4.0"
    random = "~> 3.0.0"
    k8s = {
      version = "0.8.2"
      source  = "banzaicloud/k8s"
    }
  }
}

# Configured the provider with the credentials from the eks module
provider "k8s" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}


ecerulm commented Nov 3, 2020

@marcellodesales , but the problem is not with provider "kubernetes" but with provider "kubernetes-alpha". Using your approach still gives Error: rpc error: code = Unknown desc = no client configuration

data "aws_eks_cluster" "main" {
  name = module.terraform-aws-modules-eks.cluster_id
}

data "aws_eks_cluster_auth" "main" {
  name = module.terraform-aws-modules-eks.cluster_id
}

provider "kubernetes" {
  load_config_file = false
  host = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token = data.aws_eks_cluster_auth.main.token
  version = "~> 1.9"
}

provider "kubernetes-alpha" {
  host = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token = data.aws_eks_cluster_auth.main.token
  server_side_planning = true
}

module "terraform-aws-modules-eks" {
  source = "terraform-aws-modules/eks/aws"

It appears that until Terraform supports partial apply (#4149), this cannot really be solved without splitting the Terraform configuration into:

  • a configuration that creates the aws_eks_cluster
  • a configuration that uses the kubernetes provider

Since #4149 was opened in 2015, I believe it's very unlikely that we will get partial apply functionality anytime soon.
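
For reference, a minimal sketch of what the second configuration could look like, assuming the cluster configuration stores its state in S3 and exposes cluster_id as an output (the bucket, key, and region below are placeholders):

# Second, separate configuration: the EKS cluster already exists when this plan runs.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # placeholder
    key    = "eks/terraform.tfstate" # placeholder
    region = "eu-west-1"             # placeholder
  }
}

data "aws_eks_cluster" "main" {
  name = data.terraform_remote_state.cluster.outputs.cluster_id
}

data "aws_eks_cluster_auth" "main" {
  name = data.terraform_remote_state.cluster.outputs.cluster_id
}

# The provider credentials are now resolvable at plan time.
provider "kubernetes-alpha" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
  server_side_planning   = true
}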

@marcellodesales

@ecerulm Yeah, I ended up using the k8s provider to apply manifests with this approach because I couldn't find another way either... Sorry about that... However, I'm leaning towards a GitOps approach:

  • Terraform builds the cluster
  • Install ArgoCD to sync the Kubernetes apps (a rough sketch is at the end of this comment)
    • You can use ArgoCD to manage itself
  • Terraform generates cluster-level properties and commits those values to the specific Kustomize repos that ArgoCD maintains, so they can be shared where needed

That way we have a pipeline... For instance, creating AWS certs for ALBs would need the ARNs of the declared ALBs... So far that's where I'm going...
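
Roughly, the Terraform side of that pipeline only has to go as far as installing ArgoCD. A sketch using the helm provider (the chart version and namespace are placeholders, and the helm/kubernetes providers are assumed to be configured against the already-created cluster):

resource "kubernetes_namespace" "argocd" {
  metadata {
    name = "argocd"
  }
}

# Install ArgoCD from its community Helm chart; ArgoCD then syncs everything else.
resource "helm_release" "argocd" {
  name       = "argocd"
  namespace  = kubernetes_namespace.argocd.metadata[0].name
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  version    = "2.9.3" # placeholder chart version
}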

@stevehipwell

This is a significant blocker to automating a K8s cluster, as there are always manifest changes required during provisioning. Is there a reason why, if the provider credentials are empty, the plan can't just mark the resources as undefined?
