Helm dependency for CRD manifest #72
Comments
Here is the terraform graph: https://ibb.co/WtpPnBS
This happens because a custom resource definition (ClusterIssuer) takes some time to register properly in the API server. If the ClusterIssuer CRD has just been created and one immediately tries to create a custom resource from it, this error will occur.
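For the apply-time half of that race, here is a minimal sketch of one way to block until the API server reports the CRD as established. It assumes kubectl is available and configured for the target cluster, and that something like a helm_release installs cert-manager; it does not help with the plan-time lookup discussed further down the thread.

```hcl
# Rough sketch: pause dependents until the ClusterIssuer CRD is Established.
# helm_release.cert_manager is a placeholder for whatever installs the CRDs.
resource "null_resource" "wait_for_clusterissuer_crd" {
  depends_on = [helm_release.cert_manager]

  provisioner "local-exec" {
    command = "kubectl wait --for=condition=established --timeout=120s crd/clusterissuers.cert-manager.io"
  }
}
```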
I have the exact same problem: cert-manager via a helm_release, and the following error while planning. In my case I'm applying the custom resource in a second run, after the CRDs are already installed.
EDIT: I've discovered that my issue was because of a dynamically generated connection to Kubernetes; when I switched to a static kubeconfig file it worked. I don't understand why, since dynamic configuration is documented as supported, but I understand that's a topic for another issue. I've switched to a nightly build instead of v0.1.0 but it still doesn't work. Given the following log, do you think it's conflicting with another provider? It doesn't seem to be related to Helm, because any resource manifest gives the same error. PS: I'm also using GCS as state storage, if that helps.
During the plan stage it isn't actually applying the helm chart, so the resource will never be there. I'm seeing the exact same problem, but with a different helm chart/CRD combination. My guess is that when it checks with Kubernetes for the current state of the custom resource, it gets an error back (because the resource isn't there). I would assume in such cases (where a dependency is known) that we shouldn't be erroring out, but simply reporting that we will be adding that object, with a predicted resultant manifest equivalent to what we are applying.
Happening to me on Terraform 0.13 as well
Very important for me also....
I'm facing this exact same issue with the following configuration:

```hcl
provider "kubernetes-alpha" {
  config_path          = "~/.kube/config"
  server_side_planning = false
}

resource "helm_release" "cert_manager" {
  count      = local.enable_cert_manager ? 1 : 0
  atomic     = true
  chart      = "cert-manager"
  name       = "cert-manager"
  namespace  = "kube-system"
  repository = "https://charts.jetstack.io"
  version    = local.cert_manager_version

  values = [
    yamlencode(
      {
        installCRDs = true
      }
    )
  ]
}

resource "kubernetes_secret" "route53_cert_manager_credentials" {
  count = local.enable_cert_manager ? 1 : 0

  metadata {
    name      = "route53-cert-manager-credentials"
    namespace = "kube-system"
  }

  data = {
    secret_key = var.cert_manager_secret_key
  }
}

resource "kubernetes_manifest" "cluster_issuer" {
  depends_on = [helm_release.cert_manager, kubernetes_secret.route53_cert_manager_credentials]
  count      = local.enable_cert_manager ? 1 : 0
  provider   = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1alpha2"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt"
    }
    spec = {
      acme = {
        email  = var.acme_email
        server = local.acme_server
        privateKeySecretRef = {
          name = "acme-cluster-issuer"
        }
        solvers = [
          {
            dns01 = {
              route53 = {
                hostedZoneID = var.zone_id
                region       = var.cert_manager_aws_region
                accessKeyID  = var.cert_manager_access_key
                secretAccessKeySecretRef = {
                  name = local.cert_manager_secret_name
                  key  = "secret_key"
                }
              }
            }
            selector = {
              dnsZones = [
                var.dns_zone
              ]
            }
          }
        ]
      }
    }
  }
}

resource "kubernetes_manifest" "default_cert" {
  depends_on = [helm_release.cert_manager, kubernetes_secret.route53_cert_manager_credentials, kubernetes_manifest.cluster_issuer]
  count      = local.enable_cert_manager ? 1 : 0
  provider   = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1alpha2"
    kind       = "Certificate"
    metadata = {
      name      = "default-cert"
      namespace = "kube-system"
    }
    spec = {
      secretName  = "default-cert"
      duration    = "2160h"
      renewBefore = "360h"
      issuerRef = {
        name = "letsencrypt"
        kind = "ClusterIssuer"
      }
      dnsNames = [
        local.dns_name
      ]
    }
  }

  wait_for = {
    fields = {
      "status.conditions[0].status" = "True",
    }
  }
}
```

```
$ terraform apply

Error: rpc error: code = Unknown desc = failed to determine resource GVR: no matches for cert-manager.io/v1alpha2, Resource=ClusterIssuer
```
@dbalymvz can you try adding the wait attribute to your config to see if that helps with this issue?
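For reference, a sketch of what that might look like; the comment doesn't say which resource the attribute belongs to, so this assumes the helm provider's wait/timeout arguments on the helm_release (the timeout value is an assumption):

```hcl
resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "kube-system"

  # Block until the released resources (including the CRDs shipped via
  # installCRDs) report ready before dependents are applied.
  wait    = true
  timeout = 600 # seconds; value is an assumption

  values = [yamlencode({ installCRDs = true })]
}
```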
I have the same issue using a helm chart (which provides a custom resource definition) and a kubernetes-alpha manifest consuming the custom resource that would be provided by the helm chart. Similar to @TrevorPace, I get the error during the terraform plan step, as the kubernetes-alpha provider seems to validate the custom resource before it can actually exist (i.e. before the helm chart has been applied). As a workaround it is possible to pre-create the helm chart (by commenting out the kubernetes-alpha manifest), run terraform apply, and then add the kubernetes-alpha manifest back in a second run of terraform apply. However, this has to be used with caution, as it will cause a similar error during terraform destroy. I tried adding a wait time as well as an explicit depends_on between the resources, but as expected it still fails directly during plan.
Same issue when using a CRD with kubernetes-alpha. A workaround is to first create the CRD resources, run terraform apply, and then add the code related to the kubernetes-alpha provider and run terraform apply again. Since the error is raised at the plan step, there is no way to use dependencies between the two resources; we need to run two separate plans.
I use a bash script with -target=module.<name> for each module in main.tf, and it works :)
I get a similar error trying to apply an Istio CRD.
A major use case for kubernetes_manifest is CRDs, and if we cannot use it for CRDs, then what is the point? Can we add a flag so that it does not try to validate the kind in the manifest during terraform plan? We need this provider to be more dynamic. The major issue I am seeing is that when terraform plans or applies, it cannot validate the manifest because the CRD is not installed yet. We need HashiCorp to provide a fix for this. I hate using local-exec, and kubernetes_manifest plus tfk8s are perfect for this; fixing it would solve a lot of issues when building Kubernetes clusters. Please fix this ASAP, whether with a flag or some other solution. Having to fall back to local-exec or a script really hurts when kubernetes_manifest was created to solve this exact problem. This needs some resolution; it has been open since June, and the community really needs this functionality.
Same problem with a Traefik IngressRoute. The resource:

```hcl
resource "kubernetes_manifest" "traefik_dashboard" {
  depends_on = [helm_release.traefik]
  provider   = kubernetes-alpha

  manifest = {
    "apiVersion" = "traefik.containo.us/v1alpha1"
    "kind"       = "IngressRoute"
    "metadata" = {
      "name"      = "dashboard"
      "namespace" = "traefik"
    }
    "spec" = {
      "entryPoints" = ["websecure"]
      "routes" = [
        {
          "kind"  = "Rule"
          "match" = "Host(`traefik.mydomain.com`)"
          "services" = [
            {
              "kind" = "TraefikService"
              "name" = "api@internal"
            },
          ]
        },
      ]
      "tls" = {
        "certResolver" = "le"
      }
    }
  }
}
```

As it is, we can't plug this into our CI given the need for multiple runs. It doesn't stop me playing around with it, but it's definitely a blocker for any future production usage. As such, I hope this gets prioritised for future releases...
Running into this as well with cert-manager. Any chance we could get an update on where a fix for this sits in the priorities, at least? Thanks!
From the issue in the upstream kubernetes provider, it seems like something is in the works: hashicorp/terraform-provider-kubernetes#215 (comment)
For anyone looking for a temporary fix that allows them to deploy CRDs within a Terraform module that another helm chart relies on, I would recommend using the helm provider and setting the chart to a local relative path (leave "repository" undefined). Then just define the CRDs in there and they will be installed (you can naturally use the helm chart to fill in variables). It's not great, but it saves having to deploy a helm chart to a private repo. Then you can switch back to this provider once this issue has been addressed; a sketch of the setup follows.
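A minimal sketch of that local-chart workaround; the chart path and release name are placeholders, not values from this thread:

```hcl
# "repository" is left unset so the helm provider treats "chart" as a local
# relative path; the chart at this path contains only the CRD manifests.
resource "helm_release" "crds_only" {
  name  = "crds-only"
  chart = "${path.module}/charts/crds"
}
```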
I fear that the issue mentioned by the reporter is still not fixed. Even with the newest version, it fails. (The config file is nearly exactly the same as the reporter's, only we don't include the YAML from a file and some names differ.) What I would like is something similar to some AWS resources, where the plan shows "known after apply", or "could not fetch CRD ClusterIssuer, doing my best to apply manifest cluster_issuer_letsencrypt_prod, but it may fail", if that's possible.
@SimonDreher I have the same issue. Did you try to make a check with raw kubectl?

```
cat <<EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: istio-system
spec:
  acme:
    email: [email protected]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: example-issuer-account-key
    http01: {}
---
EOF
error: unable to recognize "STDIN": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
```

But it seems that I do have cert-manager installed. I tried both approaches: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml and

```hcl
data "kubectl_file_documents" "cert_manager" {
  content = file("${path.root}/src/cert-manager/v1.2.0/cert-manager.yml")
}

resource "kubectl_manifest" "cert_manager" {
  count     = length(data.kubectl_file_documents.cert_manager.documents)
  yaml_body = element(data.kubectl_file_documents.cert_manager.documents, count.index)
}

resource "kubernetes_secret" "cert_manager_issuer" {
  metadata {
    name      = "example-issuer-account-key"
    namespace = "istio-system"
  }
}
```
@a0s I think your problem is that the API group is not correct. cert-manager has used "cert-manager.io/v1" since version 1.0; "certmanager.k8s.io/v1alpha1" was removed with 0.10/0.11: https://cert-manager.io/docs/release-notes/release-notes-0.11/ I've now come to the following solution, using the kubectl provider and a somewhat ugly workaround with a provisioner, which gives Kubernetes enough time to recognize the CRD and start up cert-manager's validating webhook. If someone has a better idea than the local-exec, I would appreciate it very much.
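Roughly, the shape of that kind of kubectl-provider-plus-provisioner workaround looks like the sketch below; the sleep duration and file path are assumptions, not the exact values used:

```hcl
# Pause after installing cert-manager so the CRDs register and the validating
# webhook comes up, then apply the issuer. kubectl_manifest.cert_manager refers
# to whatever installs the cert-manager manifests (see the previous comment).
resource "null_resource" "wait_for_cert_manager_webhook" {
  depends_on = [kubectl_manifest.cert_manager]

  provisioner "local-exec" {
    command = "sleep 90"
  }
}

resource "kubectl_manifest" "cluster_issuer" {
  depends_on = [null_resource.wait_for_cert_manager_webhook]
  yaml_body  = file("${path.root}/src/cert-manager/cluster-issuer.yml")
}
```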
Maybe the I haven't tested it myself though :)
Terraform Version and Provider Version
Terraform v0.12.26
Affected Resource(s)
kubernetes_manifest
Terraform Configuration Files
Panic Output
Error: rpc error: code = Unknown desc = no matches for cert-manager.io/v1alpha2, Resource=ClusterIssuer
Expected Behavior
Terraform schedules kubernetes_manifest.cert-manager-cluster-issuer deployment after helm_release.cert-manager is ready
Actual Behavior
Terraform throws an error that the CRD is not available at the 'plan' step
Steps to Reproduce
terraform plan