Cannot delete instance group because it's being used by a backend service #6376
Unfortunately, this is an upstream Terraform issue. The provider doesn't have access to the update/destroy order. This is similar to the scenario outlined here: #3008
Multiple applies don't fix the issue here. You have to change the config, apply, then change again, apply.
Sorry, that's what I meant. We don't have access to enable a solution for just one apply.
Hi, here's somewhat of a workaround for this specific use-case using an intermediate data source (it needs two applies):

```hcl
provider "google" {
  version = "3.22.0"
  region  = "europe-west1"
  project = "myproject"
}

locals {
  #zones = []
  zones = ["europe-west1-b"]
}

data "google_compute_network" "network" {
  name = "default"
}

data "google_compute_instance_group" "s1" {
  for_each = toset(local.zones)

  name = format("s1-%s", each.key)
  zone = each.key
}

resource "google_compute_region_backend_service" "s1" {
  name = "s1"

  dynamic "backend" {
    for_each = [for group in data.google_compute_instance_group.s1 : group.self_link if group.self_link != null]
    content {
      group = backend.value
    }
  }

  health_checks = [
    google_compute_health_check.default.self_link,
  ]
}

resource "google_compute_health_check" "default" {
  name = "s1"

  tcp_health_check {
    port = "80"
  }
}

resource "google_compute_instance_group" "s1" {
  for_each = toset(local.zones)

  name    = format("s1-%s", each.key)
  zone    = each.key
  network = data.google_compute_network.network.self_link
}
```
@pdecat your suggestion removes the dependency between the backend service and the instance groups.
I can confirm it does. But at least it does not need manual intervention out of band to fix the situation.
Maybe something the google provider could do to fix this situation would be to manage backends of a backend service as individual resources:

```hcl
# NOT A WORKING EXAMPLE

locals {
  project         = "<project-id>"
  network         = "<vpc-name>"
  network_project = "<vpc-project>"
  zones           = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
  s1_count        = 3
}

provider "google" {
  project = local.project
  version = "~> 3.0"
}

data "google_compute_network" "network" {
  name    = local.network
  project = local.network_project
}

resource "google_compute_region_backend_service" "s1" {
  name = "s1"

  health_checks = [
    google_compute_health_check.default.self_link,
  ]
}

# WARNING: this resource type does not exist
resource "google_compute_region_backend_service_backend" "s1" {
  for_each = google_compute_instance_group.s1

  backend_service = google_compute_region_backend_service.s1.self_link
  group           = each.value.self_link
}

resource "google_compute_health_check" "default" {
  name = "s1"

  tcp_health_check {
    port = "80"
  }
}

resource "google_compute_instance_group" "s1" {
  count = local.s1_count

  name    = format("s1-%02d", count.index + 1)
  zone    = element(local.zones, count.index)
  network = data.google_compute_network.network.self_link
}
```

As a side note, I feel like hashicorp/terraform#8099 is not really about the same issue. It is about replacing or updating a resource when another resource it depends on changes (and is not being destroyed).
I added a comment on the Terraform core issue (hashicorp/terraform#25010 (comment)). Based on that comment ("If ..."), wouldn't that have the same effect as my manual proposal above?
@pdecat that should work, and requires implementing a new fine-grained resource. Reopening the issue since a solution is possible, and this will be tracked similarly to other feature requests.
The lack of pretty essential features, plus bugs like this, makes me very disappointed with Terraform and GCP.
The question is when :)
This issue is actually quite problematic. I get these errors when trying to destroy the whole module; it requires multiple targeted terraform destroys to complete.
I actually just ran into this issue a couple of days ago, and I was able to resolve it by appending a random string to the end of the group manager's name and using the `create_before_destroy` lifecycle argument.
This has been driving me nuts for months. Once all this config/infra is in place, the service / backend service cannot be deleted even if the URL map is removed in the same change. It becomes a two-step process of removing the URL map, then removing the service and backend service. In an enterprise setting with ~10 environments, each receiving different releases on different schedules, having repeat CI pipelines is not okay and is basically unmanageable.
I can relate to this; GCP doesn't update the URL map before destroying backend services. Very frustrating.
Can confirm that this is the case with a manual global load balancing setup on the Google provider as well. Definitely super annoying that we need to intervene manually.
This means anytime we turn down a region, some administrator is going to have to do this instead of simply relying on CI/CD. What's worse is that it makes proving certain security/compliance certifications harder, as our CI/CD + pull request process is audited and logged, but random CLI commands from an administrator's shell environment are harder to track (i.e. we need to involve GCP Audit Logging in the business justifications). Looking forward to an elegant solution by the provider here.
I had the same problem. My workaround was to run the following command (WARNING: IT PROVOKES UNAVAILABILITY):

```shell
# This will delete the URL map, then the backend service, and finally create them again
terraform apply -replace="google_compute_region_url_map.name_of_your_url_map"
```

Hope it helps.
I think it's fundamentally a Terraform core issue, but it could be fixed in the provider if there were a standalone resource to manage a backend of a backend service. In that case, the deletion of the instance group/NEG/whatever would naturally involve the deletion of the backend resource, and deletes would be properly ordered. Of course, the same should then be done for all analogous cases, which is a hassle and spans most Terraform providers (and may even be impossible in some cases), but these would provide extra flexibility as well, on top of being a workaround for this issue.
Keeping fingers crossed for somebody to solve this issue. Today I faced it when trying to increase the number of instance groups.
Hi, could you paste an example of what you did with `create_before_destroy`?
Disappointing that this has existed for 2+ years with still no fix. How come Terraform doesn't understand that it can't delete a managed instance group without first removing the load balancer (i.e. backend) that depends on it? It seems a pretty simple idea, which for some reason isn't implemented.
I'm having the same issue. I tried fixing it by adding a manual dependency using `lifecycle.replace_triggered_by`, but you have to do this on every single dependent resource; otherwise I keep getting the 'resource already used by' error.
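(For reference, a minimal sketch of the `replace_triggered_by` pattern that comment describes, requiring Terraform >= 1.2; the resource names and values here are illustrative assumptions, not taken from the commenter's config.)

```hcl
resource "google_compute_health_check" "example" {
  name = "example-hc"

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_instance_group" "example" {
  name = "example-group"
  zone = "europe-west1-b"
}

resource "google_compute_region_backend_service" "example" {
  name          = "example-backend"
  health_checks = [google_compute_health_check.example.self_link]

  backend {
    group = google_compute_instance_group.example.self_link
  }

  lifecycle {
    # Replace this backend service whenever the instance group is replaced,
    # instead of leaving it holding a reference to a group Terraform is
    # about to delete. Every resource depending on the group needs this.
    replace_triggered_by = [google_compute_instance_group.example]
  }
}
```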
So... this is a top-11 issue by likes, and 4 years later we still have to do painful workarounds.
This seems to work reasonably well as a workaround:

```hcl
resource "random_id" "group-manager-suffix" {
  byte_length = 4
}

resource "google_compute_instance_group_manager" "my-group" {
  name = "my-instance-group-manager-${random_id.group-manager-suffix.hex}"
  # ...

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_backend_service" "my-backend" {
  # ...

  backend {
    group = google_compute_instance_group_manager.my-group.instance_group
    # ...
  }
}
```

By randomizing the name, it's possible to create the replacement group manager before destroying the old one, so the backend service is updated to point at the new group before the old one is deleted.
Hello folks, I started working on adding this new resource.
Community Note
If an issue is assigned to the `modular-magician` user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to `hashibot`, a community member has claimed the issue already.

Terraform Version
Terraform v0.12.24
Affected Resource(s)
- `google_compute_instance_group`
- `google_compute_region_backend_service`
Terraform Configuration Files
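(The original configuration block was not preserved in this extract; below is a reconstructed sketch of the kind of config the report describes, with resource names and values assumed from the surrounding discussion.)

```hcl
# Reconstructed sketch, not the author's original configuration.
locals {
  zones    = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
  s1_count = 3
}

resource "google_compute_instance_group" "s1" {
  count = local.s1_count

  name = format("s1-%02d", count.index + 1)
  zone = element(local.zones, count.index)
}

resource "google_compute_health_check" "default" {
  name = "s1"

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "s1" {
  name          = "s1"
  health_checks = [google_compute_health_check.default.self_link]

  # One backend per instance group; lowering s1_count shrinks this list.
  dynamic "backend" {
    for_each = google_compute_instance_group.s1[*].self_link
    content {
      group = backend.value
    }
  }
}
```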
I'm not sure if this is a general TF problem or a Google provider problem, but here it goes.

Currently it's not possible to lower the number of `google_compute_instance_group` resources that are used in a `google_compute_region_backend_service`. In the code above, if we lower the number of `google_compute_instance_group` resources and try to apply the configuration, TF will first try to delete the no-longer-needed instance groups and then update the backend configuration. That order doesn't work, because you cannot delete an instance group that is used by the backend service; the order should be the other way around.

So to sum it up, when I lower the number of instance group resources, TF does this:
1. Delete `google_compute_instance_group` -> this fails
2. Update `google_compute_region_backend_service`

It should do this the other way around:
1. Update `google_compute_region_backend_service`
2. Delete `google_compute_instance_group`

Here is the output it generates:
Expected Behavior
TF should first update the `google_compute_region_backend_service`, then delete the instance group.

Actual Behavior
TF tried to delete the instance group first, which resulted in an error.
Steps to Reproduce
1. `terraform apply`
2. Set `s1_count = 2`
3. `terraform apply`
Important Factoids
It's not a simple task to fix this. One "workaround" is to change the dynamic `for_each` to use a `slice()` function, like this:
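(The original snippet was not preserved in this extract; the following is a reconstructed sketch of the `slice()` trick, reusing the names from the configuration sketch above.)

```hcl
resource "google_compute_region_backend_service" "s1" {
  name          = "s1"
  health_checks = [google_compute_health_check.default.self_link]

  dynamic "backend" {
    # Keep only the first two groups attached, so the backend service is
    # updated on the first apply, before the extra groups are destroyed
    # on the second apply.
    for_each = slice(google_compute_instance_group.s1[*].self_link, 0, 2)
    content {
      group = backend.value
    }
  }
}
```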
So you first set the second argument of `slice()` to the new number of instance groups and run apply, then lower `s1_count` to that same number and run apply again, but that's just too complicated for a simple task like this.

b/308569276