
Azure Storage dns_endpoint_type -> forces replacement #25424

Closed

sai-gunaranjan opened this issue Mar 27, 2024 · 11 comments

@sai-gunaranjan
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.3.9

AzureRM Provider Version

3.97.1

Affected Resource(s)/Data Source(s)

azurerm_storage_account

Terraform Configuration Files

On an AzureRM storage account where `dns_endpoint_type` is left unset (null), the provider is forcing a replacement, resulting in the storage accounts being deleted and recreated.
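
For illustration, a minimal configuration of the kind affected (names below are placeholders, not the actual config; the key point is that `dns_endpoint_type` is never set):

resource "azurerm_storage_account" "sa" {
  name                     = "examplestorageacct"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # dns_endpoint_type is deliberately omitted; existing accounts report "Standard" from the API
}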

Debug Output/Panic Output

resource "azurerm_storage_account" "sa" {
      + dns_endpoint_type                  = "Standard" # forces replacement
      ~ id                                 = "/subscriptions/1234455/resourceGroups/somergname/providers/Microsoft.Storage/storageAccounts/accountname" -> (known after apply)
      + large_file_share_enabled           = (known after apply)
        name                               = "accountname"

Expected Behaviour

No change expected, as that field was not defined in the configuration.

Actual Behaviour

Terraform deleted and recreated the accounts during the terraform apply stage. However, running terraform plan did not show that these accounts were going to be recreated.

Steps to Reproduce

Run terraform apply using Terraform 1.3.9 and AzureRM provider 3.97.1.

Important Factoids

No response

References

No response

@sai-gunaranjan
Contributor Author

#25367

@magodo
Collaborator

magodo commented Mar 27, 2024

@sai-gunaranjan Can you please elaborate on the reproduction steps? I assume this issue has already been resolved by #25367.

@manicminer
Contributor

@sai-gunaranjan I've done some testing with v3.96.0, v3.97.0 and v3.97.1, and have been unable to reproduce this issue using v3.97.1. I've tried upgrading in various steps, with configs having dns_endpoint_type either absent, explicitly null (effectively absent), and Standard, but using the patch release v3.97.1, the storage account was never proposed for replacement.
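
As a sketch of those three variants (only the relevant attribute inside an otherwise identical `azurerm_storage_account` block; pick one per config):

  # Variant A: attribute absent — no dns_endpoint_type line at all

  # Variant B: explicitly null, which the provider treats the same as absent
  dns_endpoint_type = null

  # Variant C: explicitly set to the value the API reports by default
  dns_endpoint_type = "Standard"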

If you are experiencing this issue with v3.97.1, do you have specific steps we can take, with full configs, to replicate the issue? Thanks!

@sai-gunaranjan
Contributor Author

sai-gunaranjan commented Mar 28, 2024

@manicminer @magodo We are also having trouble reproducing the issue now in our environments.
These were very old storage accounts created back in 2020 (provider version 2.x). Terraform only deleted storage accounts with account_kind = "StorageV2" and account_tier = "Standard". I can share screenshots privately if needed. I will try a few more combinations and let you know.
cc: @aharden

@robbert-nlo

robbert-nlo commented Apr 2, 2024

Hi!

I'm seeing this behaviour too when attempting to upgrade from 3.94.0 to 3.97.1 (never used/tried 3.97.0). I'm on Terraform 1.7.5.

Plan snippet:

  # module.hub["westeurope"].azurerm_storage_account.nw must be replaced
-/+ resource "azurerm_storage_account" "nw" {
      ~ access_tier                        = "Hot" -> (known after apply)
      + dns_endpoint_type                  = "Standard" # forces replacement
      ~ id                                 = "/subscriptions/11111111-900a-46fa-a0d9-111111111111/resourceGroups/rg-app-hub-stg-we/providers/Microsoft.Storage/storageAccounts/stappnwstgwe" -> (known after apply)
      + large_file_share_enabled           = (known after apply)
        name                               = "stappnwstgwe"

(...)

Code snippets:

module "hub" {
  source = "../modules/hub"

  for_each = { for cfg in module.var.conf_by_app.app : cfg.location => cfg }

  application     = each.value.app
  environment     = each.value.environment
  location        = each.key
  address_space   = each.value.address_space
  resource_groups = module.sb.azurerm_resource_group

(...)

../modules/hub:

resource "azurerm_storage_account" "nw" {
  name                = "st${var.application}nw${var.environment}${local.location_table[var.location]}"
  location            = var.location
  resource_group_name = var.resource_groups["${local.module_name}-${local.location_table[var.location]}"].name

  account_tier                     = "Standard"
  account_replication_type         = "LRS"
  allow_nested_items_to_be_public  = false
  cross_tenant_replication_enabled = false
  min_tls_version                  = "TLS1_2"

  lifecycle { ignore_changes = [tags] }
}

@tombuildsstuff
Contributor

@robbert-nlo @sai-gunaranjan out of interest are you running terraform plan / terraform apply with -refresh=false by any chance? If so, can you try running a terraform plan without -refresh=false and let us know if that's any different? If not, would you mind providing the output from terraform version?

Thanks!

@robbert-nlo

robbert-nlo commented Apr 2, 2024

@tombuildsstuff Ah yes, that's it. The pipeline this runs in uses terraform plan -refresh=false. When using -refresh=true, there is no diff for the storage account.

@sai-gunaranjan
Contributor Author

@tombuildsstuff Yes, in this instance we did run terraform apply with -refresh=false.
The Terraform version when the incident occurred was 1.3.9.

@sai-gunaranjan
Contributor Author

sai-gunaranjan commented Apr 2, 2024

I am able to reproduce the issue now; earlier, in our test environment, we did not use -refresh=false.

Steps used:

  1. Deploy a storage account using an earlier AzureRM provider version.
  2. Update the provider version to 3.97.1 (no resource change) and re-run terraform apply with -refresh=false (see the command sketch below).
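
Roughly, as commands (a sketch; the provider versions are the ones named above, everything else is illustrative):

# 1. Deploy the storage account on an earlier provider version (e.g. 3.9x)
terraform init
terraform apply

# 2. Bump the azurerm provider constraint to 3.97.1, then apply without refreshing state
terraform init -upgrade
terraform apply -refresh=false   # the unexpected "forces replacement" on dns_endpoint_type shows up here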

(screenshot attached)

@manicminer @tombuildsstuff @aharden

@tombuildsstuff
Contributor

@sai-gunaranjan per the Terraform docs:

Disables the default behavior of synchronizing the Terraform state with remote objects before checking for configuration changes. This can make the planning operation faster by reducing the number of remote API requests. However, setting refresh=false causes Terraform to ignore external changes, which could result in an incomplete or incorrect plan.

Which is what's happening here: you'll need to run terraform apply without -refresh=false in order to pick up this change. As a general rule, -refresh=false isn't intended for day-to-day usage, since it doesn't track changes made outside of Terraform (such as this one, where the API has introduced a new value for this field), and as such bug fixes that work around incorrect API responses (as is happening here) won't be picked up with -refresh=false set.
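
For example (a sketch of two ways to let the state catch up; the comments describe the expected outcome rather than guaranteed output):

# Plan/apply with the default refresh, so state is synchronized with Azure first
terraform plan
terraform apply

# Or reconcile the state explicitly without proposing any resource changes
terraform plan  -refresh-only
terraform apply -refresh-only

# After either path, a later plan with -refresh=false should no longer propose the replacement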

Since this issue is specific to using -refresh=false, and removing that flag resolves it, I'm going to go ahead and close this one out as resolved.

Thanks!


github-actions bot commented May 4, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators May 4, 2024