
aws_appautoscaling_target update detaches any aws_appautoscaling_policy's #240

Closed
hashibot opened this issue Jun 13, 2017 · 16 comments · Fixed by #2950
Labels
bug Addresses a defect in current functionality.

Comments

@hashibot

This issue was originally opened by @charlesbjohnson as hashicorp/terraform#8484. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Updating an aws_appautoscaling_target (which always forces a new resource) does not trigger dependent aws_appautoscaling_policy resources to be recreated. As a result, updating min_capacity or max_capacity causes all dependent policies to be deleted/detached, requiring a subsequent terraform apply to reattach them.

Terraform Version

v0.7.0+

Affected Resource(s)

  • aws_appautoscaling_target
  • aws_appautoscaling_policy

Terraform Configuration Files

provider "aws" {
  region = "us-west-2"
}

resource "aws_ecs_cluster" "cluster" {
  name = "demo-85e6a168597c3fe593b335df4c11496afe5dea31"
}

resource "aws_ecs_service" "service" {
  name = "${aws_ecs_cluster.cluster.name}"
  cluster = "${aws_ecs_cluster.cluster.id}"
  task_definition = "${aws_ecs_task_definition.task_definition.arn}"
  desired_count = 1
}

resource "aws_ecs_task_definition" "task_definition" {
  family = "nginx"
  container_definitions = <<EOF
[
  {
    "name": "nginx",
    "image": "nginx:latest",
    "cpu": 10,
    "memory": 500,
    "essential": true
  }
]
EOF
}

resource "aws_appautoscaling_target" "target" {
  name = "${aws_ecs_cluster.cluster.name}"
  service_namespace = "ecs"
  resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  role_arn = "arn:aws:iam::428324370204:role/ecsAutoscaleRole"
  min_capacity = 1
  max_capacity = 2
}

resource "aws_appautoscaling_policy" "policy" {
  name = "${aws_appautoscaling_target.target.name}"
  resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  metric_aggregation_type = "Average"
  step_adjustment {
    metric_interval_lower_bound = 0
    scaling_adjustment = 1
  }
}

Expected Behavior

The aws_appautoscaling_target would be updated and the aws_appautoscaling_policy would still be attached.

Actual Behavior

The aws_appautoscaling_target was deleted and re-created, but the aws_appautoscaling_policy was lost in the process. A subsequent terraform apply will add it back, but it either should not have been removed or it should also have been recreated along with the aws_appautoscaling_target.

Steps to Reproduce


  1. terraform apply
  2. change aws_appautoscaling_target.target.max_capacity to 3
  3. terraform apply (causes the policy detachment)
  4. terraform apply (reattaches the policy)

Important Factoids

According to the AWS docs, the aws_appautoscaling_target can be created as well as updated via RegisterScalableTarget. Perhaps this could be used instead of recreating the aws_appautoscaling_target on update.

@hashibot hashibot added the bug Addresses a defect in current functionality. label Jun 13, 2017
@fabienrenaud

Still an issue with terraform 0.9.8.
A lot of people are having this issue.

@ckyoog

ckyoog commented Jul 8, 2017

I have the same problem. I originally posted my case in issue terraform#8099.

I'm posting it here again in the hope that it helps you reproduce the problem.

resource "aws_appautoscaling_target" "ecs_target" {
  max_capacity       = "${var.max_capacity}"
  min_capacity       = "${var.min_capacity}"
  role_arn           = "${var.global_vars["ecs_as_arn"]}"

  resource_id        = "service/${var.global_vars["ecs_cluster_name"]}/${var.ecs_service_name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "ecs_cpu_scale_in" {
  adjustment_type         = "${var.adjustment_type}"
  cooldown                = "${var.cooldown}"
  metric_aggregation_type = "${var.metric_aggregation_type}"

  name                    = "${var.global_vars["ecs_cluster_name"]}-${var.ecs_service_name}-cpu-scale-in"
  resource_id             = "service/${var.global_vars["ecs_cluster_name"]}/${var.ecs_service_name}"
  scalable_dimension      = "ecs:service:DesiredCount"
  service_namespace       = "ecs"

  step_adjustment {
    metric_interval_upper_bound = "${var.scale_in_cpu_upper_bound}"
    scaling_adjustment          = "${var.scale_in_adjustment}"
  }

  depends_on = ["aws_appautoscaling_target.ecs_target"]
}

The resource aws_appautoscaling_policy.ecs_cpu_scale_in (call it the autoscaling policy) depends on aws_appautoscaling_target.ecs_target (call it the autoscaling target).

When I change the value of max_capacity and then run terraform plan, it shows that the autoscaling target forces a new resource (it will be destroyed and re-created), but nothing happens to the autoscaling policy, which should be destroyed and re-created as well.

Why should it? Because in practice, after terraform apply succeeds (destroying and re-creating the autoscaling target), the autoscaling policy is silently removed (log in to the AWS console and you can see it's gone), so I have to run terraform apply a second time, and only then is the autoscaling policy added back.

@alexcallihan

Glad I found this. Noticed it today when dropping the min_capacity on our target. Checked the console later and noticed the policies were gone. Ran another terraform apply and saw them re-created.

@christianclarke

We are also observing the same problem.

@alexcallihan

A small workaround that appears to be working correctly for my situation is to interpolate the target's id into the name argument of the policy:

name = "${aws_appautoscaling_target.xxxxx.id}-scale-down"

This seems to force a destroy and rebuild.
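
A minimal sketch of that workaround, reusing the target from the earlier repro config (the policy name and adjustment values here are illustrative, not from the original configs):

# Hypothetical policy illustrating the workaround: the policy name
# interpolates the target's id, so when the target is replaced the
# policy is destroyed and rebuilt along with it.
resource "aws_appautoscaling_policy" "scale_down" {
  name                    = "${aws_appautoscaling_target.target.id}-scale-down"
  resource_id             = "${aws_appautoscaling_target.target.resource_id}"
  scalable_dimension      = "${aws_appautoscaling_target.target.scalable_dimension}"
  service_namespace       = "${aws_appautoscaling_target.target.service_namespace}"
  adjustment_type         = "ChangeInCapacity"
  cooldown                = 300
  metric_aggregation_type = "Average"

  step_adjustment {
    metric_interval_upper_bound = 0
    scaling_adjustment          = -1
  }
}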

@christianclarke

@alexcallihan Awesome. Thank you very much. Can confirm that work-around works as well.

@WolverineFan
Contributor

I believe #968 is a duplicate of this issue.

@mmmorris1975

Verified that @alexcallihan's workaround makes apply work as expected when autoscaling target properties are adjusted.

@bflad
Contributor

bflad commented Jan 22, 2018

We just merged a change to the aws_appautoscaling_target resource in master (releasing in v1.8.0 of the provider) that now supports updating the min/max attributes instead of recreating the target. This should help a lot with this situation!

@bflad
Contributor

bflad commented Jan 29, 2018

This has been released in terraform-provider-aws version 1.8.0. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
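
As a rough example of pinning the provider so you pick up that release (the constraint value here is just illustrative):

# Constrain the AWS provider to 1.8.x or newer so the in-place
# min/max capacity update is available.
provider "aws" {
  region  = "us-west-2"
  version = "~> 1.8"
}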

@christianclarke

Using v1.8.0 of the AWS provider plugin, we noticed that our ECS service would consistently end up with the AWS service-linked IAM role AWSServiceRoleForApplicationAutoScaling_ECSService regardless of the custom role we gave it in the aws_appautoscaling_target resource. As a result it wasn't idempotent: Terraform would always try to change the IAM role in aws_appautoscaling_target.

I could have used a lifecycle ignore_changes block so Terraform wouldn't manage the IAM role in the aws_appautoscaling_target, but rather than ignore it, this is what I came up with to fix the idempotency. Are there any gotchas anyone can think of?

data "aws_iam_role" "ecs_service_autoscaling" {
name = "AWSServiceRoleForApplicationAutoScaling_ECSService"
}

resource "aws_appautoscaling_target" "service" {
max_capacity = "${var.max_instances}"
min_capacity = "${var.min_instances}"
resource_id = "service/myecscluster/${aws_ecs_service.service.name}"
role_arn = "${data.aws_iam_role.ecs_service_autoscaling.arn}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
}
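
For comparison, the lifecycle ignore_changes alternative mentioned above would look roughly like this (a sketch only; var.custom_autoscaling_role_arn is a hypothetical variable holding the custom role ARN):

# Alternative sketch: keep the custom role in config but tell Terraform
# not to reconcile the role_arn that AWS swaps for the service-linked role.
resource "aws_appautoscaling_target" "service" {
  max_capacity       = "${var.max_instances}"
  min_capacity       = "${var.min_instances}"
  resource_id        = "service/myecscluster/${aws_ecs_service.service.name}"
  role_arn           = "${var.custom_autoscaling_role_arn}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"

  lifecycle {
    ignore_changes = ["role_arn"]
  }
}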

@dendrochronology

dendrochronology commented Feb 8, 2018

I'm seeing the same thing; every apply tries to update the role_arn. Have you opened another issue, @christianclarke, or should this be re-opened, @bflad?

I can confirm your workaround solved it, but I'm not yet sure if it's side effect free 😄

@christianclarke

@dendrochronology

I think this has already been raised. The role in question is a service-linked role which AWS insists your autoscaling service adopts:

#921

@ophintor

Hi,

This is still not working with Terraform 0.12, and unfortunately the above workaround from @alexcallihan doesn't seem to work anymore :(

In a nutshell, I have an ECS service with an autoscaling target and a couple of policies. Let's say I make a change that forces the recreation of the service/target: the autoscaling policy is still in the state file, but it doesn't get recreated after recreating the ECS service and target. When I look in AWS, the service is there but the autoscaling policies and the target are just gone.

When I run terraform a second time, then they get recreated:

...
  # module.user-mgmt.aws_appautoscaling_policy.ecs_policy_down_cpu will be created
  + resource "aws_appautoscaling_policy" "ecs_policy_down_cpu" {
      + arn                = (known after apply)
      + id                 = (known after apply)
      + name               = "cpu-scale-down"
      + policy_type        = "StepScaling"
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"

      + step_scaling_policy_configuration {
          + adjustment_type         = "ChangeInCapacity"
          + cooldown                = 300
          + metric_aggregation_type = "Average"

          + step_adjustment {
              + metric_interval_upper_bound = "0"
              + scaling_adjustment          = -1
            }
        }
    }

  # module.user-mgmt.aws_appautoscaling_policy.ecs_policy_up_cpu will be created
  + resource "aws_appautoscaling_policy" "ecs_policy_up_cpu" {
      + arn                = (known after apply)
      + id                 = (known after apply)
      + name               = "cpu-scale-up"
      + policy_type        = "StepScaling"
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"

      + step_scaling_policy_configuration {
          + adjustment_type         = "ChangeInCapacity"
          + cooldown                = 60
          + metric_aggregation_type = "Average"

          + step_adjustment {
              + metric_interval_lower_bound = "0"
              + scaling_adjustment          = 1
            }
        }
    }

  # module.user-mgmt.aws_appautoscaling_target.ecs_target will be created
  + resource "aws_appautoscaling_target" "ecs_target" {
      + id                 = (known after apply)
      + max_capacity       = 20
      + min_capacity       = 2
      + resource_id        = "service/Service_Finder_dptest/service-finder_dptest_user-mgmt"
      + role_arn           = (known after apply)
      + scalable_dimension = "ecs:service:DesiredCount"
      + service_namespace  = "ecs"
    }
...

Any chance this could be looked into? Thanks!

My version:

$ terraform version
Terraform v0.12.4
+ provider.archive v1.2.2
+ provider.aws v2.19.0
+ provider.random v2.1.2
+ provider.template v2.1.2
+ provider.tls v2.0.1

@bflad
Contributor

bflad commented Jul 23, 2019

Hi folks 👋 If you would like to report potentially lingering issues, please file a new bug report following the issue template.

@ghost

ghost commented Nov 2, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 2, 2019