
Running a terraform plan wants to change a ton of resources #1098

Closed · 1 of 4 tasks
ado120 opened this issue Nov 12, 2020 · 5 comments

ado120 commented Nov 12, 2020

I have issues

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

I haven't done much of anything on my cluster except change a values.yml file that I want applied to a helm release on the cluster. However, when running a terraform plan against my folder, I am getting a BUNCH of changes. This happens all the time, and makes me very wary of using the eks module in production. Is it best practice to version-lock which eks module version is being used?

Here are the changes it wants to make when all I want is a simple values.yml change:

module.eks.random_pet.workers[1] must be replaced
module.eks.random_pet.workers[0] must be replaced
module.eks.kubernetes_config_map.aws_auth[0]
module.eks.aws_launch_configuration.workers[1]
module.eks.aws_launch_configuration.workers[0]
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
module.eks.aws_iam_role_policy.cluster_elb_sl_role_creation[0] will be destroyed
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_autoscaling_group.workers[1] will be updated in-place
module.eks.aws_autoscaling_group.workers[0] will be updated in-place

Why in the world does it want to change so much? We use this cluster in production now, so this is very scary. Is this just a side effect of spinning up the cluster on an older eks module version and then running a plan against the latest eks module version? (I forgot to version-lock.) At this point I think it might be better not to have terraform manage the cluster at all and to manage it manually instead. Any input would be greatly appreciated!!

If this is a bug, how to reproduce? Please include a code sample if relevant.

What's the expected behavior?

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version:
  • OS:
  • Terraform version:

Any other relevant info

barryib (Member) commented Nov 12, 2020

Which version of this module are you using? Is it an upgrade? Please provide your plan output.

We can't really help you if you don't provide this information in your issue.

Environment details

  • Affected module version:
  • OS:
  • Terraform version:

FWIW, the following changes are normal during a module upgrade. Please read the changelog for more info.

module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
module.eks.aws_iam_role_policy.cluster_elb_sl_role_creation[0] will be destroyed
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created

ado120 (Author) commented Nov 12, 2020

@barryib How can I find out what version of the eks module I'm using? Here is the plan output:

TF version: 0.13.4

Plan output attached. Basically I provisioned this cluster maybe 2-3 months ago and haven't really changed anything except the values.yml file for a helm chart I'm also installing with terraform. So from an eks side, nothing has changed; I'm not sure why it wants to change the aws_auth configmap as well.
plan.txt
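
For reference, the values change goes through a resource along these lines (a minimal sketch assuming the hashicorp/helm provider; the release, repository, and chart names are hypothetical):

resource "helm_release" "app" {
  name       = "my-app"                     # hypothetical release name
  repository = "https://charts.example.com" # hypothetical chart repository
  chart      = "my-app"

  # values.yml is the only file that was actually edited
  values = [file("${path.module}/values.yml")]
}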

barryib (Member) commented Nov 12, 2020

If you don't know, you're probably using the latest version of this module. That is quite dangerous for a production environment. Please pin the eks module version in your code, and read the changelog before upgrading. See https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest for available versions (the latest one is v13.2.1).

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"
  # insert the 7 required variables here
}
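
Once pinned like this, terraform init resolves the module to exactly version 13.2.1, so a plain terraform plan can no longer pick up newer module code by surprise; upstream changes only arrive when you bump the constraint and run terraform init again.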

As I mentioned, there have been several changes in this module since the last time you ran it. See the changelog for more details.

Reading your plan, it sounds like your launch configurations are being updated because the AMI has changed: you are using the latest available AMI, and AWS builds new ones regularly. Pin the AMI if you want to avoid this.
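
For illustration, a minimal sketch of what pinning might look like (the AMI name filter and the worker_groups shape are assumptions; check the inputs of the module version you pin):

# Look up one specific EKS-optimized AMI instead of whatever is newest.
data "aws_ami" "eks_worker" {
  owners = ["602401143452"] # Amazon's EKS AMI account (most regions)

  filter {
    name   = "name"
    values = ["amazon-eks-node-1.18-v20201007"] # hypothetical fixed AMI name
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"

  # ... other required variables ...

  worker_groups = [
    {
      name   = "workers"
      ami_id = data.aws_ami.eks_worker.id # fixed AMI, so the launch config stops drifting
    },
  ]
}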

There is no issue with the random pets; they can safely be recreated as long as you don't set asg_recreate_on_change=true. Their recreation comes from the TF 0.13 upgrade (#940, #1043 and #976). The other changes are about tag specifications (#1095 and #1092), aws-auth configmap labels (#989), and an IAM policy change from an inline policy to a self-managed policy (#1039). Please read the changelog.

barryib (Member) commented Nov 14, 2020

Closing. Feel free to re-open if needed.

barryib closed this as completed Nov 14, 2020

github-actions (bot) commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 23, 2022