
Azure FrontDoor : Caching issue - Unexpected resource modification #4461

Closed
Lidestyle opened this issue Sep 30, 2019 · 20 comments · Fixed by #4601 or #5358

Comments

@Lidestyle

Terraform (and AzureRM Provider) Version

terraform -v
Terraform v0.12.9
+ provider.azurerm v1.34.0

Affected Resource(s)

  • azurerm_frontdoor

Terraform Configuration Files

module/frontdoor/main.tf:

// Manage Azure FrontDoor
resource "azurerm_frontdoor" "traffic_gateway" {
  name                                         = local.frontdoor_name
  friendly_name                                = local.frontdoor_name
  location                                     = var.location
  resource_group_name                          = var.resource_group
  enforce_backend_pools_certificate_name_check = var.backend_certificate_check # false

  routing_rule {
    name               = var.routing_rule_name
    accepted_protocols = ["Https"]
    patterns_to_match  = ["/*"]
    frontend_endpoints = [var.frontend_endpoint_name]

    forwarding_configuration {
      forwarding_protocol = var.forwarding_protocol
      backend_pool_name   = var.backend_pool_name
    }
  }

  backend_pool_load_balancing {
    name                            = var.balancing_name
    sample_size                     = local.backend_balancer.sample_size
    successful_samples_required     = local.backend_balancer.successful_samples_required
    additional_latency_milliseconds = local.backend_balancer.additional_latency_milliseconds
  }

  backend_pool_health_probe {
    name                  = var.liveness_probe_name
    path                  = var.healthcheck_api_path
    interval_in_seconds   = 255
    protocol              = "Https"
  }

  backend_pool {
    name = var.backend_pool_name

    backend {
      host_header = "${var.backend_name}.azure-api.net"
      address     = "${var.backend_name}.azure-api.net"
      http_port   = 80
      https_port  = 443
    }

    load_balancing_name = var.balancing_name
    health_probe_name   = var.liveness_probe_name
  }

  frontend_endpoint {
    name                                      = var.frontend_endpoint_name
    host_name                                 = var.backend_host_name
    web_application_firewall_policy_link_id   = var.waf_enabled != false ? azurerm_frontdoor_firewall_policy.waf_assigned_policy[0].id : null
    custom_https_provisioning_enabled         = false
  }

  tags = {
    environment = var.environment_prefix
  }
}

// Manage FrontDoor firewall policy
resource "azurerm_frontdoor_firewall_policy" "waf_assigned_policy" {
  # ...
}

Expected Behavior

terraform plan 
OR 
terraform apply

Terraform should report any changes it is going to make.

Actual Behavior

terraform plan
OR
terraform apply
  • Operation result: "Apply complete! Resources: 0 added, 0 changed, 0 destroyed."
  • Unexpected action: Terraform silently updated the Front Door resource and enabled caching.

Steps to Reproduce

  1. terraform import azurerm_frontdoor.traffic_gateway /subscriptions/...

  2. terraform apply

  3. Check the Caching option on the Front Door service [Azure Portal]

Important Factoids

References

https://docs.microsoft.com/en-us/azure/frontdoor/front-door-caching

@dantape

dantape commented Oct 3, 2019

I am having the same issue.

@NillsF
Contributor

NillsF commented Oct 11, 2019

Also having this issue.

I had a look at the GO SDK and the Terraform implementation, and noticed the following in the Go SDK

// CacheConfiguration caching settings for a caching-type route. To disable caching, do not provide a
// cacheConfiguration object.
type CacheConfiguration struct {
	// QueryParameterStripDirective - Treatment of URL query terms when forming the cache key. Possible values include: 'StripNone', 'StripAll'
	QueryParameterStripDirective Query `json:"queryParameterStripDirective,omitempty"`
	// DynamicCompression - Whether to use dynamic compression for cached content. Possible values include: 'DynamicCompressionEnabledEnabled', 'DynamicCompressionEnabledDisabled'
	DynamicCompression DynamicCompressionEnabled `json:"dynamicCompression,omitempty"`
}

While the terraform provider still creates a CacheConfiguration item

else {
    // Set Defaults
    c["cache_query_parameter_strip_directive"] = string(frontdoor.StripNone)
    c["cache_use_dynamic_compression"] = false
}

The solution might be to have that else branch not set those two values. Not sure. Any ideas?
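As a rough sketch of that idea (the type and function names below are illustrative stand-ins, not the provider's actual code): when the API returns no CacheConfiguration, the flatten step could simply leave the cache keys out of the map instead of writing defaults back to state:

```go
package main

import "fmt"

// Illustrative stand-in for the SDK type; not the real frontdoor package.
type CacheConfiguration struct {
	QueryParameterStripDirective string
	DynamicCompression           string
}

// flattenCacheConfig only emits cache keys when the API actually returned a
// cacheConfiguration object. A nil input (caching disabled) yields no keys,
// so no defaults are written back to state and caching stays off.
func flattenCacheConfig(cache *CacheConfiguration) map[string]interface{} {
	c := map[string]interface{}{}
	if cache != nil {
		c["cache_query_parameter_strip_directive"] = cache.QueryParameterStripDirective
		c["cache_use_dynamic_compression"] = cache.DynamicCompression == "Enabled"
	}
	return c
}

func main() {
	fmt.Println(len(flattenCacheConfig(nil))) // caching disabled: no cache keys emitted
	fmt.Println(flattenCacheConfig(&CacheConfiguration{"StripAll", "Enabled"}))
}
```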

@WodansSon WodansSon self-assigned this Oct 11, 2019
@WodansSon
Collaborator

@NillsF That is exactly what is happening, I will sneak a fix in for this with my PR to fix the Front Door documentation.

@WodansSon WodansSon added this to the v1.36.0 milestone Oct 11, 2019
WodansSon added a commit to naikajah/terraform-provider-azurerm that referenced this issue Oct 11, 2019
WodansSon pushed a commit that referenced this issue Oct 11, 2019
…t_configuration` to documentation (#4601)

* CustomHost to be Optional in Redirect Configuration

* Fix for #4461 Front door caching issue
WodansSon added a commit that referenced this issue Oct 11, 2019
WodansSon added a commit that referenced this issue Oct 15, 2019
@WodansSon WodansSon reopened this Oct 16, 2019
@WodansSon
Collaborator

Looking closer at this, I see what the real issue is. I have opened the above PR to fix this issue completely.

@tombuildsstuff tombuildsstuff modified the milestones: v1.36.0, v1.37.0 Oct 24, 2019
@Lidestyle
Author

Lidestyle commented Nov 20, 2019

JFYI

You can fix this issue by adding the following lifecycle policy for your FrontDoor resource:

lifecycle {
  ignore_changes = [
    routing_rule
  ]
}

Tested on 1.35.0 - 1.36.1 providers.
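For context, this block sits inside the azurerm_frontdoor resource itself; a minimal sketch (the resource name here is a placeholder):

```hcl
resource "azurerm_frontdoor" "traffic_gateway" {
  # ... existing Front Door arguments ...

  lifecycle {
    ignore_changes = [
      routing_rule,
    ]
  }
}
```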

@tombuildsstuff tombuildsstuff removed this from the v1.37.0 milestone Nov 21, 2019
@bradhannah

I believe I am experiencing the same issue - can we confirm that it was not in fact fully solved in #4618?

@william-li-ry

william-li-ry commented Nov 27, 2019

For those who have the same issue, this works for me and stops the cache from being set to Enabled.

lifecycle {
  ignore_changes = [
    "routing_rule.0.forwarding_configuration.0.cache_query_parameter_strip_directive",
    "routing_rule.1.forwarding_configuration.0.cache_query_parameter_strip_directive",
  ]
}

@devblackops

Ignoring changes on the routing rule isn't working for me. It still enables caching. This is my config:

resource "azurerm_frontdoor" "fd" {
  name                = "${var.prefix}-fd1"
  friendly_name       = "${var.prefix}-fd1"
  location            = var.location
  resource_group_name = var.resource_group_name
  tags                = var.tags

  routing_rule {
    name               = "routing-rule"
    accepted_protocols = ["Http", "Https"]
    patterns_to_match  = ["/*"]
    frontend_endpoints = [
      "custom-name"
    ]

    forwarding_configuration {
      forwarding_protocol = "MatchRequest"
      backend_pool_name   = "backend"
    }
  }

  enforce_backend_pools_certificate_name_check = true

  backend_pool_load_balancing {
    name = "loadbalancing-settings"
  }

  backend_pool_health_probe {
    name                = "health-probe"
    path                = "/_azure/hadr/ping?key=xxxxxx"
    protocol            = "Https"
    interval_in_seconds = 30
  }

  backend_pool {
    name = "backend"
    backend {
      host_header = var.site_fqdn
      address     = var.appgw_fqdn
      http_port   = 80
      https_port  = 443
    }

    load_balancing_name = "loadbalancing-settings"
    health_probe_name   = "health-probe"
  }

  frontend_endpoint {
    name                              = "custom-name"
    host_name                         = var.site_fqdn
    custom_https_provisioning_enabled = true

    custom_https_configuration {
      certificate_source                         = "AzureKeyVault"
      azure_key_vault_certificate_vault_id       = var.certificate_keyvault_id
      azure_key_vault_certificate_secret_name    = var.certificate_keyvault_secret_name
      azure_key_vault_certificate_secret_version = var.certificate_keyvault_secret_version
    }
  }

  lifecycle {
    ignore_changes = [
      "routing_rule[0].forwarding_configuration[0].cache_query_parameter_strip_directive"
    ]
  }
}

@vivaladan

vivaladan commented Dec 5, 2019

I can't get this to work either. I've tried variations of ignore_changes, but each time Terraform runs it re-enables caching, which is the only change I make over in Azure. I've included the plan of what Terraform says the changes will be, but as I said, the only actual change is that caching gets turned back on after being manually turned off.

At the very least the default should be cache off, as it is when creating through the portal. Better yet would be the ability to control whether caching is on or off, rather than just its settings.

I'm running azurerm 1.37.0

   ~ routing_rule {
            accepted_protocols = [
                "Https",
            ]
            enabled            = true
          ~ frontend_endpoints = [
                "azureEndpoint",
              - "",
              + "customEndpoint",
            ]
            id                 = "/subscriptions/.../RoutingRules/defaultForwardingRoute"
            name               = "defaultForwardingRoute"
            patterns_to_match  = [
                "/*",
            ]

            forwarding_configuration {
                backend_pool_name                     = "appService"
                cache_query_parameter_strip_directive = "StripAll"
                cache_use_dynamic_compression         = true
                forwarding_protocol                   = "HttpsOnly"
            }
        }

@devblackops

I agree. We should be able to control caching directly. I think it would make sense to have a cache_enabled property. If that was set to true, then cache_query_parameter_strip_directive and cache_use_dynamic_compression would be optional. If cache_enabled is false, the other properties become invalid.

forwarding_configuration {
  forwarding_protocol = "MatchRequest"
  backend_pool_name   = "backend"

  cache_enabled                         = true
  cache_query_parameter_strip_directive = "StripAll"
  cache_use_dynamic_compression         = true  
}

@Lidestyle
Author

Ignoring changes on the routing rule isn't working for me. It still enables caching. [...]

That's the worst part. As far as I understand, the resource isn't in the state file yet, which is why the lifecycle rule isn't working.

In this case, there are a few workarounds:

  1. Add a lifecycle rule to the Front Door configuration:

lifecycle {
  ignore_changes = [
    routing_rule
  ]
}

Then run terraform plan again; in my experience this solves the issue about half the time. If not:

  2. Apply your configuration first, then disable caching manually in the Azure Portal. Terraform shouldn't modify the resource after that.

  3. Import the resource into your state file and try terraform plan.

I also want to note that once the Front Door exists in the state file and the caching block is ignored, Terraform shouldn't re-enable caching even if you change any other Front Door configuration setting.

@devblackops

When I add the ignore_changes section I receive the same error as in issue #4748 when running the plan.

lifecycle {
  ignore_changes = [
    routing_rule
  ]
}
Error: Error creating Front Door "XXXXX" (Resource Group "XXXXX"): "routing_rule":"routing-rule" "frontend_endpoints":"" was not found in the configuration file. verify you have the "frontend_endpoint":"" defined in the configuration file

  on ..\modules\frontdoor\main.tf line 9, in resource "azurerm_frontdoor" "fd":
   9: resource "azurerm_frontdoor" "fd" {

@Blankf
Contributor

Blankf commented Jan 2, 2020

Using the latest 1.39 with Terraform 0.12, but unfortunately no change yet.

Was anyone able to make ignore_changes work? I have the same issue as @devblackops regarding ignore_changes.

@rhollins

rhollins commented Jan 8, 2020

I'm also waiting for the fix. So far the workaround has been to use the following lifecycle settings, but it's problematic: it needs to be removed each time we make an actual change to the backend or routing rules, and then caching must be disabled manually.

  lifecycle {
    ignore_changes = [
      "routing_rule",
      "backend_pool",
    ]
  }

@WodansSon
Collaborator

@devblackops, looking at this closer, I am going to implement your suggestion of exposing a cache_enabled field. This issue is caused by the API overloading the cache_use_dynamic_compression field to be both a value and a conditional: if cache_use_dynamic_compression is set to false, the REST API expects the caller not to pass any of the cache values, but if it is set to true, the other cache values are required. Implementing your suggestion makes the fix easier and will not introduce a breaking change.

forwarding_configuration {
  forwarding_protocol = "MatchRequest"
  backend_pool_name   = "backend"

  cache_enabled                         = true
  cache_query_parameter_strip_directive = "StripAll"
  cache_use_dynamic_compression         = true  
}
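Sketching that behaviour (the function below is hypothetical, not the provider's actual source): with a dedicated cache_enabled flag, the expand step can return nil when caching is off, so no cacheConfiguration object is sent to the API at all:

```go
package main

import "fmt"

// Illustrative stand-in for the SDK type; not the provider's real code.
type CacheConfiguration struct {
	QueryParameterStripDirective string
	DynamicCompression           string
}

// expandCacheConfiguration sketches the fix described above: when the new
// cache_enabled flag is false, return nil so that no cacheConfiguration
// object is sent to the REST API, which it interprets as caching disabled.
func expandCacheConfiguration(cacheEnabled bool, stripDirective string, dynamicCompression bool) *CacheConfiguration {
	if !cacheEnabled {
		return nil // omit the object entirely; caching stays off
	}
	compression := "Disabled"
	if dynamicCompression {
		compression = "Enabled"
	}
	return &CacheConfiguration{
		QueryParameterStripDirective: stripDirective,
		DynamicCompression:           compression,
	}
}

func main() {
	fmt.Println(expandCacheConfiguration(false, "StripAll", true)) // nil: nothing sent to the API
	fmt.Println(expandCacheConfiguration(true, "StripAll", true))
}
```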

@devblackops

Excellent news @WodansSon! Thank you. Would this be in an upcoming minor release then?

@bhp15973

Excellent news @WodansSon! Thank you. Would this be in an upcoming minor release then?

for me it is very important fix as well

@JonKragh

Also looking forward to the ability to disable caching. Thanks!

@WodansSon WodansSon added this to the v1.42.0 milestone Jan 24, 2020
@ghost

ghost commented Jan 27, 2020

This has been released in version 1.42.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.42.0"
}
# ... other configuration ...

@ghost

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 28, 2020