Resources defined with kubectl_path_documents not picked up by documents #167
I'm having the same issue with `kubectl_path_documents`. I tried what @lifelofranco suggested, but `terraform plan` does not show the resources:

```hcl
data "kubectl_path_documents" "manifests" {
}

resource "kubectl_manifest" "crd" {
}
```

When I don't use `data "kubectl_path_documents" {}` and instead paste the contents of my YAML file inline into `resource "kubectl_manifest" "crd"` as below, it works, and I can see the resource to be created in the plan:

```hcl
resource "kubectl_manifest" "provisioner_with_csg_launch_template" {
}
```
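A minimal sketch of the inline workaround described above, pasting the YAML into `yaml_body` via a heredoc (the ConfigMap content here is illustrative, not from the original comment):

```hcl
resource "kubectl_manifest" "crd" {
  # Inline YAML avoids the kubectl_path_documents data source entirely.
  yaml_body = <<-YAML
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-config
      namespace: default
    data:
      key: value
  YAML
}
```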
@b2jude I am going to assume that this provider is no longer being properly supported. I only found out while attempting to create our 'staging' environment that this provider does not function: the resources aren't being picked up by Terraform, despite being defined properly in my module. You could attempt to use the official Kubernetes provider here (https://github.com/hashicorp/terraform-provider-kubernetes). As a last attempt, I just created a Anyways, that's just my two cents.
Facing the same issue; this code doesn't work.
Do you have any updates on this problem? Are there any replacement ideas? This was working well for me, but it just stopped working in a new setup; it still works when you are updating an existing deployment.
Do not use `count`. There was an error in the documentation on the gavinbunney version. Long story short: if you use `count` and then remove one of the documents, it will likely cause a cascade of delete/recreate operations, because your documents are indexed by position. If you use `for_each`, the documents are indexed by filename, so removing a file removes only that manifest. See the alekc/terraform-provider-kubectl#50 discussion.

TL;DR:

```hcl
data "kubectl_path_documents" "manifests-directory-yaml" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "directory-yaml" {
  for_each  = data.kubectl_path_documents.manifests-directory-yaml.manifests
  yaml_body = each.value
}
```
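For contrast, a sketch of the `count`-based pattern the comment above warns against (names are illustrative; `documents` is the provider's list attribute, as opposed to the `manifests` map used with `for_each`):

```hcl
data "kubectl_path_documents" "docs" {
  pattern = "./manifests/*.yaml"
}

# Anti-pattern: manifests are addressed as docs[0], docs[1], ... in state.
# Deleting one YAML file shifts every later index, so Terraform plans a
# delete/recreate for all manifests after the removed one.
resource "kubectl_manifest" "docs" {
  count     = length(data.kubectl_path_documents.docs.documents)
  yaml_body = element(data.kubectl_path_documents.docs.documents, count.index)
}
```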
It looks like my resources defined with `kubectl_path_documents` are not being picked up when called through a module.
Running `terraform apply` from the module directory creates the correct number of resources.
Running `terraform apply` from the root project directory that sources the module ignores all resources created from `path_documents`.
I am not sure if this is intended behavior or a bug.
on v1.14.0
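One plausible explanation, sketched under the assumption of a typical module layout (paths and names here are illustrative): a relative `pattern` is resolved against the directory `terraform` is run from, so a glob that matches files when run inside the module can match nothing when the module is called from the root. Anchoring the pattern with `path.module` is a common workaround:

```hcl
# modules/manifests/main.tf
data "kubectl_path_documents" "manifests" {
  # path.module makes the glob relative to this module's directory,
  # not to the directory terraform is invoked from.
  pattern = "${path.module}/manifests/*.yaml"
}

resource "kubectl_manifest" "all" {
  for_each  = data.kubectl_path_documents.manifests.manifests
  yaml_body = each.value
}
```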