
aws_kinesis_firehose_delivery_stream can become permanently dirty #6053

Closed
nhnicwaller opened this issue Oct 2, 2018 · 7 comments · Fixed by #9103
Labels
bug Addresses a defect in current functionality. service/firehose Issues and PRs that pertain to the firehose service.
Comments

@nhnicwaller

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.11.7
+ provider.aws v1.34.0
+ provider.template v1.0.0

Affected Resource(s)

  • aws_kinesis_firehose_delivery_stream

Terraform Configuration Files

resource "aws_kinesis_firehose_delivery_stream" "logs" {
  name        = "${aws_s3_bucket.logs.bucket}-firehose"
  destination = "extended_s3"

  kinesis_source_configuration {
    kinesis_stream_arn = "${aws_kinesis_stream.cloudwatch-logs.arn}"
    role_arn           = "${aws_iam_role.firehose_role.arn}"
  }

  extended_s3_configuration {
    role_arn           = "${aws_iam_role.firehose_role.arn}"
    bucket_arn         = "${aws_s3_bucket.logs.arn}"
    buffer_size        = 30
    buffer_interval    = 900 # we prefer a high interval (fewer files); 900 seconds is the max allowed
    compression_format = "GZIP"

    processing_configuration = [
      {
        enabled = "true"
        processors = [
          {
            type = "Lambda"
            parameters = [
              {
                parameter_name  = "LambdaArn"
                parameter_value = "${aws_lambda_function.kinesis-cloudwatch-newline.arn}"
              }
            ]
          }
        ]
      }
    ]

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = "${aws_cloudwatch_log_group.logs-firehose-logs.name}"
      log_stream_name = "S3Delivery"
    }
  }
}

Debug Output

Too long to anonymize. Will try to provide if absolutely required.

Expected Behavior

Terraform should converge the stack so that running plan immediately after apply reports no changes.

Actual Behavior

Once the Firehose delivery stream is "dirty" there is no way to mark it as clean again in Terraform, short of actually deleting and re-creating the Firehose delivery stream. This is what it looks like when dirty:

  ~ aws_kinesis_firehose_delivery_stream.logs
      extended_s3_configuration.0.data_format_conversion_configuration.#:         "1" => "0"
      extended_s3_configuration.0.data_format_conversion_configuration.0.enabled: "false" => "true"

Steps to Reproduce

  1. Define a new aws_kinesis_firehose_delivery_stream in Terraform
  2. terraform apply
  3. Change one of the firehose settings in AWS Console (eg. set buffer time to 90 seconds)
  4. [optional] Use AWS Console to revert settings. (doesn't matter either way)
  5. terraform apply
  6. terraform plan will be dirty, despite the fact that apply was just run
@bflad bflad added bug Addresses a defect in current functionality. service/firehose Issues and PRs that pertain to the firehose service. labels Oct 3, 2018
@bflad
Contributor

bflad commented Oct 3, 2018

Hi @nhnicwaller 👋 Sorry for the odd behavior! I imagine in this case the AWS console is applying a "default" data format conversion configuration (set to false, matching reality) to the actual resource. We'll likely need to teach the Terraform resource to ignore this difference automatically.

You have two options to work around this in the meantime. 😅

Add a disabled data format conversion configuration in Terraform:

resource "aws_kinesis_firehose_delivery_stream" "logs" {
  # ... other configuration ...

  extended_s3_configuration {
    # ... other configuration ...

    data_format_conversion_configuration {
      enabled = false
    }
  }
}

Although I believe in that case the Terraform resource will also require input_format_configuration/output_format_configuration to be added to the configuration, which is less than ideal.
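For concreteness, a disabled block with those sub-blocks filled in might look roughly like the following. This is an untested sketch: the Glue database and table names are placeholders, and the exact set of required sub-blocks (e.g. schema_configuration) may vary by provider version.

```hcl
resource "aws_kinesis_firehose_delivery_stream" "logs" {
  # ... other configuration ...

  extended_s3_configuration {
    # ... other configuration ...

    data_format_conversion_configuration {
      enabled = false

      input_format_configuration {
        deserializer {
          open_x_json_ser_de {}
        }
      }

      output_format_configuration {
        serializer {
          parquet_ser_de {}
        }
      }

      schema_configuration {
        database_name = "placeholder_db"  # hypothetical Glue database name
        table_name    = "placeholder_tbl" # hypothetical Glue table name
        role_arn      = "${aws_iam_role.firehose_role.arn}"
      }
    }
  }
}
```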

Instead, you can tell Terraform to always ignore the problematic attributes using ignore_changes:

resource "aws_kinesis_firehose_delivery_stream" "logs" {
  # ... other configuration ...

  lifecycle {
    ignore_changes = [
      "extended_s3_configuration.0.data_format_conversion_configuration",
      "extended_s3_configuration.0.data_format_conversion_configuration.0.enabled",
    ]
  }
}

Hope this helps in the meantime.

@nhnicwaller
Author

Amazing response, thank you very much! Both techniques for silencing this warning are very helpful to know until the root cause is fixed.

@maxmanders

I've just been bitten by this same problem. We're typically using 2.6.0, so I tried with 2.14.0 in case it had been fixed. The lifecycle workaround works, though addressing this in a future release would be awesome.

@tmccombs
Contributor

With Terraform 0.12.6, neither of the workarounds works. The lifecycle trick still results in a broken state after refresh (Error: insufficient items for attribute "input_format_configuration"; must have at least 1), and having a block with enabled set to false still requires creating a conversion configuration.
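For reference, the Terraform 0.12 spelling of the same ignore_changes workaround (which, per the comment above, still hit the refresh error) would be roughly:

```hcl
resource "aws_kinesis_firehose_delivery_stream" "logs" {
  # ... other configuration ...

  lifecycle {
    # 0.12 uses attribute references rather than quoted strings
    ignore_changes = [
      extended_s3_configuration[0].data_format_conversion_configuration,
    ]
  }
}
```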

@bflad bflad added this to the v2.25.0 milestone Aug 20, 2019
@bflad
Contributor

bflad commented Aug 20, 2019

The fix for the original issue has been merged and will release with version 2.25.0 of the Terraform AWS Provider, later this week.

@ghost

ghost commented Aug 23, 2019

This has been released in version 2.25.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
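For anyone pinning provider versions, a minimal constraint that picks up the fix (0.11/0.12-era syntax; a sketch, not the only way to pin) would be:

```hcl
provider "aws" {
  version = "~> 2.25"
}
```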

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@ghost

ghost commented Nov 1, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 1, 2019