Running a terraform plan wants to change a ton of resources #1098
Which version of this module are you using? Is it an upgrade? Please provide your plan output. We can't really help you if you don't provide this information in your issue.
FWIW, the following changes are normal during a module upgrade (see the sketch after this list for what changed). Please read the changelog for more info.
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
module.eks.aws_iam_role_policy.cluster_elb_sl_role_creation[0] will be destroyed
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
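For context, what happened here is that an inline aws_iam_role_policy was replaced by a standalone aws_iam_policy attached through an aws_iam_role_policy_attachment, which Terraform can only express as one destroy plus two creates. A minimal sketch of that before/after pattern follows; the resource names and the policy document reference are illustrative, not the module's actual code:

# Old shape: policy inlined on the cluster role (illustrative names).
resource "aws_iam_role_policy" "elb_sl_role_creation" {
  name   = "elb-sl-role-creation"
  role   = aws_iam_role.cluster.name
  policy = data.aws_iam_policy_document.elb_sl_role_creation.json
}

# New shape: a standalone managed policy plus an attachment to the same role.
# Moving from the old shape to this one shows up in a plan as one destroy and two creates.
resource "aws_iam_policy" "elb_sl_role_creation" {
  name   = "elb-sl-role-creation"
  policy = data.aws_iam_policy_document.elb_sl_role_creation.json
}

resource "aws_iam_role_policy_attachment" "elb_sl_role_creation" {
  role       = aws_iam_role.cluster.name
  policy_arn = aws_iam_policy.elb_sl_role_creation.arn
}

The net effect on the role's permissions is the same; only the resource types used to express them change.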
@barryib How can I find out what version of the eks module I'm using? Here is the plan output: TF version: 13.4. Plan output attached. Basically I provisioned this cluster 2-3 months ago and haven't changed much since, except for the values.yml file for a Helm chart I'm also installing with Terraform. So from an EKS perspective nothing has changed, and I'm not sure why it wants to change the aws_auth config map as well.
If you don't know, you're probably using the latest version of this module, which is quite dangerous for a production environment. Please pin the eks module version in your code, and read the changelog before upgrading. See https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest for available versions (the latest one is v13.2.1).

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"

  # insert the 7 required variables here
}

As I mentioned, there have been several changes in this module since the last time you ran it; see the changelog for more details. Reading your plan, it sounds like your launch configurations are being updated because the AMI has changed (you are using the latest AMI available, and AWS builds new ones regularly; pin it if you want, see the sketch below). There is no issue with the random pets: they can be recreated as long as you don't set …
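If you want the launch configurations to stop churning on every new AMI release, one option is to pin the worker AMI yourself instead of letting the module look up the latest EKS-optimized image. A minimal sketch, assuming a v13.x setup where worker_groups entries accept an ami_id key; the cluster values and the AMI ID are placeholders:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "13.2.1"

  cluster_name    = "my-cluster"   # placeholder
  cluster_version = "1.18"         # placeholder
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      name                 = "workers"
      instance_type        = "m5.large"
      asg_desired_capacity = 2
      # Pin a specific EKS-optimized AMI (placeholder ID) so plans stay stable
      # until you deliberately bump it.
      ami_id               = "ami-0123456789abcdef0"
    },
  ]
}

With the AMI pinned, a new upstream AMI release no longer forces a new launch configuration, and from the plan output the random_pet replacements appear to follow from that same launch configuration change.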
Closing. Feel free to re-open if needed.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm submitting a...
What is the current behavior?
I haven't done much of anything on my cluster except changing a values.yml file that I want applied to a Helm release on the cluster. However, when running a terraform plan against my folder, I am getting a BUNCH of changes. This happens all the time, and makes me very wary of using the eks module in production. Is it best practice to version-lock which eks module version is being used?
Here are the changes it wants to make when all I want is a simple values.yml change:
module.eks.random_pet.workers[1] must be replaced
module.eks.random_pet.workers[0] must be replaced
module.eks.kubernetes_config_map.aws_auth[0]
module.eks.aws_launch_configuration.workers[1]
module.eks.aws_launch_configuration.workers[0]
module.eks.aws_iam_role_policy_attachment.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0] will be created
module.eks.aws_iam_role_policy.cluster_elb_sl_role_creation[0] will be destroyed
module.eks.aws_iam_policy.cluster_elb_sl_role_creation[0] will be created
module.eks.aws_autoscaling_group.workers[1] will be updated in-place
module.eks.aws_autoscaling_group.workers[0] will be updated in-place
Why in the world does it want to change so much stuff? We use this cluster in production now, so this is very scary. Is this just a side effect of spinning up the cluster on an older eks module version and now running a plan on the latest one? (I forgot to version-lock.) At this point I think it might be better not to have Terraform manage the cluster at all and to manage it manually instead. Any input would be greatly appreciated!
If this is a bug, how to reproduce? Please include a code sample if relevant.
What's the expected behavior?
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info