
aws_msk_cluster not using the latest revision of aws_msk_configuration #17484

Closed
gneveu opened this issue Feb 5, 2021 · 7 comments · Fixed by #23662
Labels
bug Addresses a defect in current functionality. service/kafka Issues and PRs that pertain to the kafka service.
Comments

@gneveu

gneveu commented Feb 5, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v0.13.5
+ provider.aws v3.27.0

Affected Resource(s)

  • aws_msk_configuration
  • aws_msk_cluster

Terraform Configuration Files

resource "aws_msk_cluster" "nxmsk-terraform" {
  cluster_name           = local.msk_real_cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.nbBrokers

  broker_node_group_info {
    instance_type   = var.instance_type
    ebs_volume_size = 1000
    client_subnets  = [for snet in data.aws_subnet.msk_subnets : snet.id]
    security_groups = [data.aws_security_group.kafka_nsg.id]
  }

  configuration_info {
    arn      = aws_msk_configuration.msk_conf.arn
    revision = aws_msk_configuration.msk_conf.latest_revision
  }
...
}


resource "aws_msk_configuration" "msk_conf" {
  kafka_versions = [var.kafka_version]
  name           = "nx-msk-conf-${local.msk_real_cluster_name}"

  server_properties = <<PROPERTIES
auto.create.topics.enable = ${var.allow_topic_autocreation}
default.replication.factor = 3
log.cleanup.policy = compact
min.insync.replicas = 2
num.partitions = 1
num.replica.fetchers = 1
unclean.leader.election.enable = false
PROPERTIES
}

Debug Output

16:07:21  Terraform will perform the following actions:
16:07:21  
16:07:21    # module.msk.aws_msk_configuration.nx_msk_conf will be updated in-place
16:07:21    ~ resource "aws_msk_configuration" "nx_msk_conf" {
16:07:21          arn               = "arn:aws:kafka:eu-west-1:377041471446:configuration/nx-msk-conf-nxmsk-eu-west-1-main/7cc9b685-ee09-43d7-8162-a48c63bdd307-4"
16:07:21          id                = "arn:aws:kafka:eu-west-1:377041471446:configuration/nx-msk-conf-nxmsk-eu-west-1-main/7cc9b685-ee09-43d7-8162-a48c63bdd307-4"
16:07:21          kafka_versions    = [
16:07:21              "2.4.1.1",
16:07:21          ]
16:07:21          latest_revision   = 4
16:07:21          name              = "nx-msk-conf-nxmsk-eu-west-1-main"
16:07:21        ~ server_properties = <<~EOT
16:07:21            - auto.create.topics.enable = false
16:07:21            + auto.create.topics.enable = true
16:07:21              default.replication.factor = 3
16:07:21              log.cleanup.policy = compact
16:07:21              min.insync.replicas = 2
16:07:21              num.partitions = 1
16:07:21              num.replica.fetchers = 1
16:07:21              unclean.leader.election.enable = false
16:07:21          EOT
16:07:21      }
16:07:21  
16:07:21  Plan: 0 to add, 1 to change, 0 to destroy.

Expected Behavior

When a value in the aws_msk_configuration resource is updated, the aws_msk_cluster pointing to this configuration should pick up the latest revision of the configuration

Actual Behavior

The aws_msk_configuration resource is correctly updated with the new configuration values and a new revision of the configuration is created; however, MSK keeps using revision N-1, and a second terraform apply is needed to make MSK point to the latest revision

Steps to Reproduce

  1. terraform init
  2. terraform plan
  3. terraform apply

Important Factoids

It is worth noting that the MSK configuration and MSK cluster resources are defined in the same terraform module.

@ghost ghost added the service/kafka Issues and PRs that pertain to the kafka service. label Feb 5, 2021
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Feb 5, 2021
@anGie44 anGie44 added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Feb 9, 2021
@dlp1154

dlp1154 commented Feb 18, 2021

We are observing the same issue

@jamestait

For reference, adding a depends_on doesn't help, instead resulting in:

Error: Provider produced inconsistent final plan

When expanding the plan for aws_msk_cluster.default to include new values
learned so far during apply, provider "registry.terraform.io/-/aws" produced
an invalid new value for .configuration_info[0].revision: was
cty.NumberIntVal(2), but now cty.NumberIntVal(3).

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

At least it makes the bug explicit. The only workaround I know of so far is to apply the terraform configuration twice -- once to create the new configuration revision, and a second time to get the cluster to use it.
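
For clarity, the depends_on variant being described presumably looks something like the following sketch, based on the reporter's configuration above rather than the commenter's exact code:

resource "aws_msk_cluster" "nxmsk-terraform" {
  # ... other arguments as in the original configuration ...

  configuration_info {
    arn      = aws_msk_configuration.msk_conf.arn
    revision = aws_msk_configuration.msk_conf.latest_revision
  }

  # Explicit dependency on the configuration resource; per the comment above,
  # this does not fix the issue and instead surfaces the
  # "inconsistent final plan" error during apply.
  depends_on = [aws_msk_configuration.msk_conf]
}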

@marcincuber

marcincuber commented Oct 18, 2021

Although I am seeing the same behaviour as described in the issue, I would like to present a solution that actually works with a single terraform apply.

resource "aws_msk_configuration" "cluster" {
  kafka_versions = ["2.6.2"]
  name           = "${local.name_prefix}-main-configuration"

  server_properties = file("${path.module}/kafka_properties.txt")
}

data "aws_msk_configuration" "cluster" {
  name = "${local.name_prefix}-main-configuration"

  depends_on = [aws_msk_configuration.cluster]
}

resource "aws_msk_cluster" "cluster" {
  count = 1

  cluster_name           = "${local.name_prefix}-msk"
  kafka_version          = "2.6.2"
  number_of_broker_nodes = 3

  configuration_info {
    arn      = data.aws_msk_configuration.cluster.arn
    revision = data.aws_msk_configuration.cluster.latest_revision
  }

  # ... remaining required arguments (e.g. broker_node_group_info) omitted ...
}

Using a data source, terraform correctly updates the cluster with the latest revision of the cluster configuration. Hope that will help some of you! :)
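
The kafka_properties.txt file referenced by the workaround is not shown in the comment; presumably it holds standard broker settings along the lines of those in the original report, for example:

auto.create.topics.enable = false
default.replication.factor = 3
log.cleanup.policy = compact
min.insync.replicas = 2
num.partitions = 1
num.replica.fetchers = 1
unclean.leader.election.enable = false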

@danielmotaleite

I can confirm that the @marcincuber workaround does work!

Thanks for that little gem!

@jamestait

I think the workaround depends on changes to dependency handling introduced in Terraform 0.13.0 (I know, I know). It doesn't work with 0.12.
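
If the workaround is used, that minimum Terraform version can be made explicit in the module (a minimal sketch, not part of the original comment):

terraform {
  required_version = ">= 0.13"
}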

@github-actions

This functionality has been released in v4.6.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
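
To pick up the fix, the provider version constraint can be raised accordingly (a minimal sketch, assuming the standard hashicorp/aws source):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.6.0"
    }
  }
}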

@github-actions

github-actions bot commented May 7, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 7, 2022