In `terraform/modules/kubernetes/kubernetes.tf`, we create a couple of `kubernetes_manifest` resources. Those manifests get deployed into a Kubernetes namespace. Running `terraform plan` against those resources evidently entails (the equivalent of) `kubectl apply --dry-run`: if the namespace in question doesn't exist yet, the dry run fails because the Kubernetes cluster doesn't know about the namespace we propose to deploy the manifest into.
This breaks deploying new localities, because that operation involves creating a namespace and creating a `kubernetes_manifest` in a single apply. Curiously, `make apply-bootstrap` doesn't work around this: despite `apply-bootstrap` threatening to create namespaces during its planning phase, it doesn't actually do so. Presumably this is because of the surprising behavior around `-target` that `terraform` warns us about.
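For illustration, a minimal sketch of the failing pattern, with hypothetical resource names and a ConfigMap standing in for the real manifests:

```hcl
# Namespace that does not yet exist at plan time.
resource "kubernetes_namespace" "locality" {
  metadata {
    name = "example-locality" # hypothetical name
  }
}

# kubernetes_manifest performs a server-side dry run during plan,
# which fails because "example-locality" is unknown to the cluster.
resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = kubernetes_namespace.locality.metadata[0].name
    }
    data = {
      key = "value"
    }
  }
}
```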
Possibly relates to #1046, in that we might want to consider formalizing the notion of multi-stage Terraform application, so that we don't run afoul of surprising behavior in TF providers or around the `-target` option.
The `kubernetes_manifest` resource in provider `hashicorp/kubernetes`
has a known issue[1] where resources created in a manifest can't depend
on other resources that don't exist yet. To work around this, we instead
use `gavinbunney/kubectl`'s `kubectl_manifest` resource, which does not
have this problem because it uses a different mechanism for planning.
[1] hashicorp/terraform-provider-kubernetes#1380

Resolves #1088
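For reference, a sketch of the workaround using the same hypothetical ConfigMap as above. `kubectl_manifest` takes raw YAML and validates it locally at plan time, deferring server-side application to apply, so the namespace doesn't have to exist yet:

```hcl
# gavinbunney/kubectl's kubectl_manifest plans from the YAML alone,
# so the target namespace need not exist until apply time.
resource "kubectl_manifest" "example" {
  yaml_body = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = kubernetes_namespace.locality.metadata[0].name
    }
    data = {
      key = "value"
    }
  })
}
```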
I wound up resolving this by adopting the `gavinbunney/kubectl` provider just for manifests; it doesn't have the same problematic behavior as the `hashicorp/kubernetes` provider.

One final note here: `terraform apply`ing this change will destroy the existing ExternalMetrics for SQS queue depth and replace them with new, identical ones. This should cause nothing more than a momentary interruption of pod autoscaling while the corresponding HPAs rediscover the ExternalMetric, after which normal operation will resume, so I didn't think it was worth the hassle of doing `terraform import`s across all the EKS environments we have.
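For anyone following along, adopting the provider alongside `hashicorp/kubernetes` looks roughly like this (the version constraint is illustrative, not necessarily the one we pinned):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    # Used only for kubectl_manifest resources.
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14" # illustrative constraint
    }
  }
}
```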