Adding a New CloudFront Origin Causes All Other Origins to be Redeployed #12065
Comments
Diffing the output of my plan showed that for each existing origin,
That worked perfectly! Thank you! 😁 It's a shame it doesn't show that field being changed in the diff, but perhaps I should have specified that field instead of leaving it to be defaulted.
We are still seeing this issue with Terraform
We are seeing this issue with Terraform v0.12.21 and provider.aws v2.54.0, and it shows changes for the origins no matter what, redeploying on every apply. We have done a plan, apply, plan, and the final plan still shows the origins changing even though the apply successfully applied the origin changes.
I'm seeing this with Terraform v0.12.9 and provider.aws v2.70.0. We imported CloudFront distributions and recreated the resources exactly, but planning them with the exact same configurations results in destroying and recreating them. We've done the same with multiple other distributions, but only the one with two origins forces a delete and recreate.
^ Disregard, one of the origins had a header typo. Bless up @wamonite 🙏
We are in the same situation as @jwwerpy. The changes appear every time, not only when a new origin is added; every apply recreates the origins. Tested with Terraform 0.11, 0.12, 0.13 and 0.14 with providers 2.16, 2.70 and 3.22.
You rock 🚀
Hey @wamonite, thank you so much for this interesting observation! I opened a PR for the Terraform CloudFront module based on your suggestion (cited you there) to have the default value be `""`, and it was approved :)
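The workaround discussed above — explicitly writing out the field's default value instead of leaving it unset — can be sketched roughly as follows. This is an illustrative guess, not the reporter's actual configuration: the resource names are invented, and the assumption that the silently-defaulted field is `origin_path` is mine, inferred from the `""` default mentioned in the comment above.

```hcl
# Sketch of the workaround (assumption: the defaulted field is
# origin_path; all names here are hypothetical, not from the issue).
resource "aws_cloudfront_distribution" "example" {
  # ... other distribution arguments elided ...

  origin {
    domain_name = "example-bucket.s3.amazonaws.com"
    origin_id   = "s3-example"

    # Setting the default explicitly, rather than leaving the attribute
    # unset, is reported to stop the provider from flagging the origin
    # as changed on every plan.
    origin_path = ""
  }
}
```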
@imaginarynik nice one! |
@wamonite Thank you! That fixed the problem :)
👋 It looks like this was fixed in AWS Provider version
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
Community Note
Terraform Version 0.12.16
AWS Provider Version 2.49
Affected Resource(s)
Terraform Configuration Files
Debug Output
Panic Output
N/A
Expected Behavior
The CloudFront distribution should update in place and not cause changes to other resources which have not changed. A single origin should be added and no other origins should be modified.
Actual Behavior
All existing origins on the CloudFront distribution are flagged to be destroyed and re-created with the exact same values. Terraform reports that the origins will be recreated, but in practice no downtime is observed. The problem persists even after applying the changes manually and then running `terraform state rm` followed by `terraform import`: Terraform still wants to apply changes to the origins.

Steps to Reproduce

The issue is caused by adding an origin to a pre-existing CloudFront distribution. Through the AWS console this does not affect any other origins (although the distribution as a whole needs to redeploy). Through Terraform, however, all of the existing origins are destroyed. Follow these steps using the configuration above to reproduce:

1. `terraform apply` with the above configuration
2. Add the extra origin to the configuration
3. `terraform apply` the new configuration

This is the extra origin to be added to the initial configuration above to trigger the issue:
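The issue's attached configuration files did not survive extraction, so as a hedged illustration only, the "extra origin" might look like the following block added inside the existing `aws_cloudfront_distribution` resource (all domain names and IDs here are hypothetical, not from the original report):

```hcl
# Hypothetical second origin; adding a block like this to an existing
# distribution made the plan show every pre-existing origin as
# destroyed and re-created, even though none of them changed.
origin {
  domain_name = "api.example.com"
  origin_id   = "api-example"

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}
```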
Important Factoids
N/A
References