
Can't plan or apply kubernetes_manifest resources unless Kubernetes namespace already exists #1088

Closed
tgeoghegan opened this issue Nov 4, 2021 · 3 comments · Fixed by #1262

Comments

@tgeoghegan
Contributor

In terraform/modules/kubernetes/kubernetes.tf, we create a couple of kubernetes_manifest resources, which get deployed into a Kubernetes namespace. Running terraform plan on the creation of those resources evidently entails (the equivalent of) kubectl apply --dry-run. If the namespace in question doesn't exist yet, the dry run of the manifest application fails, because the Kubernetes cluster doesn't know about the namespace we propose to deploy the manifests into.
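
To illustrate the failing pattern, here is a minimal sketch (resource names and the ConfigMap example are made up for illustration; this is not the actual contents of kubernetes.tf):

```hcl
# The manifest targets a namespace created in the same configuration. During
# `terraform plan`, the hashicorp/kubernetes provider asks the API server to
# dry-run the manifest, which fails because the namespace doesn't exist yet.
resource "kubernetes_namespace" "locality" {
  metadata {
    name = "example-locality"
  }
}

resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = kubernetes_namespace.locality.metadata[0].name
    }
    data = {
      key = "value"
    }
  }
}
```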

This breaks deploying new localities, because that operation involves creating a namespace and creating a kubernetes_manifest in a single apply. Curiously, make apply-bootstrap doesn't work around this: even though apply-bootstrap claims during its planning phase that it will create the namespaces, it never actually does. Presumably this is because of the surprising behavior around -target that Terraform warns us about.

@tgeoghegan
Contributor Author

The workaround is to comment out the kubernetes_manifest resources, do make apply, then uncomment the manifests and apply again.
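
Concretely (using the same made-up names as above), the first pass looks something like this; once the namespace exists, remove the comment markers and apply again:

```hcl
# First pass: manifest disabled so that `make apply` can create the namespace
# without the provider trying to dry-run an object into it.
#
# resource "kubernetes_manifest" "example" {
#   manifest = {
#     apiVersion = "v1"
#     kind       = "ConfigMap"
#     metadata = {
#       name      = "example"
#       namespace = kubernetes_namespace.locality.metadata[0].name
#     }
#   }
# }
```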

@tgeoghegan
Contributor Author

This possibly relates to #1046, in that we might want to formalize the notion of multi-stage Terraform application so that we don't run afoul of surprising behavior in Terraform providers or around the -target option.

@tgeoghegan tgeoghegan added this to the Winter 2021-2022 stability milestone Dec 1, 2021
@tgeoghegan tgeoghegan self-assigned this Dec 1, 2021
tgeoghegan added a commit that referenced this issue Jan 5, 2022
The `kubernetes_manifest` resource in provider `hashicorp/kubernetes`
has a known issue[1] where resources created in a manifest can't depend
on other resources that don't exist yet. To work around this, we instead
use `gavinbunney/kubectl`'s `kubectl_manifest` resource, which does not
have this problem because it uses a different mechanism for planning.

[1] hashicorp/terraform-provider-kubernetes#1380

Resolves #1088
@tgeoghegan
Contributor Author

I wound up resolving this by adopting the gavinbunney/kubectl provider just for manifests; it doesn't have the same problematic planning behavior as the hashicorp/kubernetes provider. One final note: applying this change with Terraform will destroy the existing ExternalMetrics for SQS queue depth and replace them with new, identical ones. That should cause nothing more than a momentary interruption of pod autoscaling while the corresponding HPAs rediscover the ExternalMetric, after which normal operation will resume, so I didn't think it was worth the hassle of doing terraform imports across all the EKS environments we have.
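
For anyone landing here later, a minimal sketch of the kubectl_manifest pattern (names, the ConfigMap example, and the version constraint are illustrative, not the actual kubernetes.tf contents):

```hcl
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.13" # illustrative constraint
    }
  }
}

# Hypothetical namespace, standing in for the one created in kubernetes.tf.
resource "kubernetes_namespace" "locality" {
  metadata {
    name = "example-locality"
  }
}

# Unlike kubernetes_manifest, kubectl_manifest doesn't dry-run the object
# against the API server at plan time, so it can target a namespace that is
# created during the same apply.
resource "kubectl_manifest" "example" {
  yaml_body = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = kubernetes_namespace.locality.metadata[0].name
    }
    data = {
      key = "value"
    }
  })
}
```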
