Terraform removes non-managed project metadata #7636
Comments
I did a bit of further testing on this one and came up with some interesting results. If you create two different metadata resources, they conflict and overwrite each other on the initial apply:

```hcl
variable "region" {
  default = "us-east1"
}

variable "project_name" {
  description = "The ID of the Google Cloud project"
}

provider "google" {
  region      = "${var.region}"
  project     = "${var.project_name}"
  credentials = ""
}

resource "google_compute_project_metadata" "tf-testing-metadata" {
  metadata {
    tf-testing-1 = "test metadata variable 1 created by TF"
    tf-testing-2 = "test metadata variable 2 created by TF"
  }
}

resource "google_compute_project_metadata" "tf-testing-metadata-2" {
  metadata {
    tf-testing-3 = "test metadata variable 3 created by TF"
    tf-testing-4 = "test metadata variable 4 created by TF"
  }
}
```

Then if you try to delete one, it again triggers the deletion of the entire project metadata, including the other managed resource. It seems the MetadataDelete function was written this way on purpose, and even contains a comment to that effect on L190. In contrast, the MetadataUpdate functions (here and here) read in all the project metadata and then modify or delete individual items from that state. It looks possible to modify MetadataDelete to take the same approach.
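For illustration only, a per-key delete that preserves unmanaged metadata could follow the same read-modify-write pattern the update path uses. This is a minimal sketch against the google.golang.org/api/compute/v1 client, not the provider's actual code; the package and function names are hypothetical, and a real implementation would also wait on the returned operation.

```go
// Minimal sketch, not the provider's implementation: remove a single
// project metadata key by reading the current metadata, filtering out
// only that key, and writing the result back.
package metadatasketch

import (
	compute "google.golang.org/api/compute/v1"
)

// deleteProjectMetadataKey is a hypothetical helper name.
func deleteProjectMetadataKey(svc *compute.Service, project, key string) error {
	// Read the current project-wide metadata, which includes keys
	// Terraform does not manage (e.g. sshKeys added by gcloud or GKE).
	proj, err := svc.Projects.Get(project).Do()
	if err != nil {
		return err
	}
	md := proj.CommonInstanceMetadata
	if md == nil {
		return nil // nothing to delete
	}

	// Keep every item except the one key this resource owns.
	kept := md.Items[:0]
	for _, item := range md.Items {
		if item.Key != key {
			kept = append(kept, item)
		}
	}
	md.Items = kept

	// Writing back with the original fingerprint lets the API detect
	// concurrent modifications (optimistic locking).
	_, err = svc.Projects.SetCommonInstanceMetadata(project, md).Do()
	return err
}
```

Reusing the fingerprint from the read also avoids the lost-update problem when something else modifies project metadata at the same time.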
@cblecker - What if there was another resource called `google_compute_project_metadata_key`?

```hcl
resource "google_compute_project_metadata_key" "md-1" {
  key   = "tf-testing-1"
  value = "test metadata variable 1 created by TF"
}

resource "google_compute_project_metadata_key" "md-2" {
  key   = "tf-testing-2"
  value = "test metadata variable 2 created by TF"
}
```

Perhaps a resource like this would make it easier to reason about the expected behaviour?
@sl1pm4t - I like that. It would shrink the scope of what each resource controls, and remove the need for more complicated logic around preventing duplicate resources.
@evandbrown --
Weighing in on this: Terraform, in general, acts as a sledgehammer... and that's by design. A large part of Terraform's value comes from knowing that your infrastructure is reproducible, which means that what is in the configs is what your infrastructure should look like. Having Terraform operate like a scalpel means we lose the Infrastructure as Code portion of the tool, because the infrastructure is no longer accurately reflected by our code.

In this case, though, I can see use cases where adhering to that purity isn't the best option. I think the right answer here is a metadata key resource that owns just a single key. That way we maintain the "Terraform owns this resource" property but limit its scope to a single key, which means Kubernetes and project-wide SSH keys and the like should still work, even if it bends the intent of Terraform a little.
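To make the shape of that concrete, here is a rough sketch of what the schema for such a per-key resource might look like, using the helper/schema package the provider was built on at the time. The resource and function names are illustrative, not the names the provider actually ended up using, and the CRUD bodies are stubbed out.

```go
// Rough sketch of a per-key metadata resource schema; names and wiring
// are illustrative, not the provider's actual code.
package metadatasketch

import (
	"github.com/hashicorp/terraform/helper/schema"
)

func resourceComputeProjectMetadataKey() *schema.Resource {
	return &schema.Resource{
		Create: resourceComputeProjectMetadataKeyCreate,
		Read:   resourceComputeProjectMetadataKeyRead,
		Delete: resourceComputeProjectMetadataKeyDelete,

		Schema: map[string]*schema.Schema{
			// A different key is a different metadata entry, so changing
			// it should replace the resource rather than update in place.
			"key": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
			// ForceNew here is a simplification; a real resource could
			// support in-place updates of the value instead.
			"value": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}

// Each function would read-modify-write only its own key, leaving all
// other entries (sshKeys, GKE-managed keys, and so on) untouched.
func resourceComputeProjectMetadataKeyCreate(d *schema.ResourceData, meta interface{}) error { return nil }
func resourceComputeProjectMetadataKeyRead(d *schema.ResourceData, meta interface{}) error   { return nil }
func resourceComputeProjectMetadataKeyDelete(d *schema.ResourceData, meta interface{}) error { return nil }
```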
I've relabeled this appropriately, as a request for a new per-key resource. I can't promise when that will get a PR, but it's at least on the radar now, and we'll get to it as soon as we can. Thanks for bringing this up, @cblecker.
Thanks @paddyforan! I appreciate you looking into this, and I think that's a sane path forward.
Not saying I like this about Terraform, but this is congruent with how Terraform works in every other way. It's a declarative language, not a procedural one. You don't tell Terraform what to do; you describe the state of the system, and Terraform makes sure that is the state. It's really infrastructure as data, not infrastructure as code. Maybe you want something like google_compute_metadata_has or google_compute_metadata_includes resources to indicate that you are simply describing that the metadata has to contain this stuff, with no care about anything else in it.
@xcalibre0 While that makes sense from a Terraform point of view, it doesn't from the perspective of the actual provider. When you perform certain actions in the project, such as creating a GKE cluster -- even when Terraform is the tool performing that action -- the metadata is modified by the Google Compute API outside of Terraform. This creates a disparity between what Terraform knows as the state and what the actual state of the GCP project is. This is where @paddyforan's suggestion above comes in -- Terraform owns keys within the metadata, but not the entire metadata structure. That would allow Terraform to track and change what it knows about, while not touching the things it has no way of knowing about.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
Terraform v0.6.16
Affected Resource(s)
google_compute_project_metadata
Terraform Configuration Files
Debug Output
https://gist.github.com/cblecker/881acb52b2a3af9b194973ddd673fc08
Panic Output
Not applicable
Expected Behaviour
Any project metadata created outside of Terraform, including project-wide SSH keys, should remain part of the project when the Terraform-managed infrastructure is modified or destroyed.
Actual Behaviour
The API call that Terraform uses to remove its metadata from the project appears to clear all metadata from the project, including the critical sshKeys metadata key, which can result in loss of access to instances in the project that may not be managed by Terraform.
Steps to Reproduce
```shell
gcloud compute project-info add-metadata --metadata testing-nontf-1=testing
gcloud compute project-info describe
terraform apply
gcloud compute project-info describe
terraform destroy
gcloud compute project-info describe
```
Important Factoids
Not applicable
References
Not applicable