
aws_cloudwatch_metric_alarm: diffs didn't match during apply #968

Closed
WolverineFan opened this issue Jun 26, 2017 · 5 comments
Labels
bug Addresses a defect in current functionality. service/cloudwatch Issues and PRs that pertain to the cloudwatch service.

Comments

@WolverineFan
Contributor

The problem appears to be related to aws_appautoscaling_policy not properly depending on aws_appautoscaling_target in the graph.

I've noticed that any time I update the min_capacity or max_capacity of the aws_appautoscaling_target, it breaks the link to the policy: Terraform creates a new aws_appautoscaling_target but doesn't then update the policy. I can usually fix the problem by running apply a second time. In this case, however, I was updating both the min/max and some policy and alarm settings, which caused the apply to blow up.

Terraform Version

Terraform Version: 0.9.8

Affected Resource(s)


  • aws_cloudwatch_metric_alarm
  • aws_appautoscaling_policy
  • aws_appautoscaling_target

Terraform Configuration Files

# This isn't a working config, just a sketch

resource "aws_spot_fleet_request" "spot" {
  iam_fleet_role                      = "arn:aws:iam::xxxx:role/aws-ec2-spot-fleet-role"
  replace_unhealthy_instances         = true
  spot_price                          = "0.199"
  target_capacity                     = "${var.target_capacity}"
  allocation_strategy                 = "lowestPrice"
  valid_until                         = "2019-11-04T20:44:20Z"
  terminate_instances_with_expiration = true

  lifecycle {
    ignore_changes = [
      "target_capacity",
    ]
  }

  launch_specification {
    instance_type               = "c3.xlarge"
    ami                         = "ami-12345"
    key_name                    = "xxx"
    associate_public_ip_address = "true"
    monitoring                  = "true"
    vpc_security_group_ids      = ["sg-abcd"]
    subnet_id                   = "sub-12345"
  }
}

resource "aws_appautoscaling_target" "spot" {
  max_capacity       = "${var.max_capacity}"
  min_capacity       = "${var.min_capacity}"
  resource_id        = "spot-fleet-request/${aws_spot_fleet_request.spot.id}"
  role_arn           = "arn:aws:iam::xxx:role/aws-ec2-spot-fleet-autoscale-role"
  scalable_dimension = "ec2:spot-fleet-request:TargetCapacity"
  service_namespace  = "ec2"
}

# Create an Increasing autoscaling policy
resource "aws_appautoscaling_policy" "add_capacity" {
  adjustment_type         = "PercentChangeInCapacity"
  cooldown                = "${var.cooldown}"
  metric_aggregation_type = "Average"
  name                    = "spot-${aws_spot_fleet_request.spot.id}_upscaling"
  resource_id             = "spot-fleet-request/${aws_spot_fleet_request.spot.id}"
  scalable_dimension      = "ec2:spot-fleet-request:TargetCapacity"
  service_namespace       = "ec2"

  step_adjustment {
    metric_interval_lower_bound = 0
    scaling_adjustment          = "${var.scaling_up_adjustment}"
  }

  depends_on = ["aws_appautoscaling_target.spot"]
}

resource "aws_cloudwatch_metric_alarm" "add_capacity" {
  alarm_name          = "spot-${aws_spot_fleet_request.spot.id}_upscaling"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 5
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2Spot"
  period              = 60
  statistic           = "Average"
  threshold           = "${var.scale_up_alarm_threshold}"
  treat_missing_data  = "missing"

  dimensions {
    FleetRequestId = "${aws_spot_fleet_request.spot.id}"
  }

  alarm_description = "Autoscaling CPUUtilization too high"
  alarm_actions     = ["${aws_appautoscaling_policy.add_capacity.arn}"]
}

Actual Behavior

* module.spot_fleet_a.aws_cloudwatch_metric_alarm.add_capacity: aws_cloudwatch_metric_alarm.add_capacity: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Please include the following information in your report:

    Terraform Version: 0.9.8
    Resource ID: aws_cloudwatch_metric_alarm.add_capacity
    Mismatch reason: extra attributes: alarm_actions.3717967580, alarm_actions.1002408260
    Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"threshold":*terraform.ResourceAttrDiff{Old:"75", New:"85", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "treat_missing_data":*terraform.ResourceAttrDiff{Old:"", New:"missing", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
    Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"treat_missing_data":*terraform.ResourceAttrDiff{Old:"", New:"missing", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "threshold":*terraform.ResourceAttrDiff{Old:"75", New:"85", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "alarm_actions.3717967580":*terraform.ResourceAttrDiff{Old:"arn:aws:autoscaling:us-west-1:205799058367:scalingPolicy:ca30b432-401a-4316-a7ae-ddddd1fcdfab:resource/ec2/spot-fleet-request/sfr-53f4158c-743b-4266-ab6b-3a46d6fdd373:policyName/spot-sfr-53f4158c-743b-4266-ab6b-3a46d6fdd373_upscaling", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "alarm_actions.1002408260":*terraform.ResourceAttrDiff{Old:"", New:"arn:aws:autoscaling:us-west-1:205799058367:scalingPolicy:2c7f57fa-8d27-4b85-a7b4-98d73450a1ff:resource/ec2/spot-fleet-request/sfr-53f4158c-743b-4266-ab6b-3a46d6fdd373:policyName/spot-sfr-53f4158c-743b-4266-ab6b-3a46d6fdd373_upscaling", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}

Steps to Reproduce


  1. terraform apply
  2. Update max_capacity, scale_up_alarm_threshold, and cooldown
  3. terraform apply
@bflad
Contributor

bflad commented Jan 22, 2018

The aws_appautoscaling_target part of this issue duplicates #240, which was just closed by merging an enhancement that updates the min/max attributes in place instead of recreating the resource each time. This will be released in v1.8.0 of the provider.

@bflad bflad added the service/cloudwatch Issues and PRs that pertain to the cloudwatch service. label Jan 22, 2018
@peteroruba

Unfortunately I cannot confirm that the workaround solves this issue. I'm using TF 0.8.8.

@four43

four43 commented Oct 31, 2018

I am having a similar issue. My aws_cloudwatch_metric_alarms, which are used to trigger autoscaling on an EC2 Spot Fleet, are getting lost. I think the core issue is that Terraform assumes it can change the resource_id of an aws_appautoscaling_policy in place, when in actuality that seems to break the link with the aws_cloudwatch_metric_alarm.

As a test, if I taint the aws_appautoscaling_policy all works as expected. I'm guessing for my case that update in place is just broken.
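The taint workaround above can be run from the CLI; a sketch using the 0.11-era syntax seen elsewhere in this thread (the module and resource names are taken from the reporter's config and error output, so adjust them to your own):

```shell
# Force the scaling policy to be destroyed and recreated on the next apply,
# which re-establishes its link to the alarm and the scalable target.
terraform taint -module=spot_fleet_a aws_appautoscaling_policy.add_capacity
terraform apply
```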

@bflad
Contributor

bflad commented Jul 5, 2019

Hi folks 👋 The fix for the aws_appautoscaling_policy resource not properly recreating on resource_id updates was previously released in version 2.3.0 of the Terraform AWS provider and has been available in all releases since. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
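One way to pick up the fixed provider is a version constraint in the provider block, sketched here in the pre-0.12 syntax used elsewhere in this thread (the region and the exact constraint are illustrative; any constraint allowing 2.3.0 or later works):

```hcl
provider "aws" {
  region = "us-west-1"

  # Require the release containing the aws_appautoscaling_policy fix
  version = ">= 2.3.0"
}
```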

For further bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@bflad bflad closed this as completed Jul 5, 2019
@ghost

ghost commented Nov 2, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 2, 2019