
azurerm_monitor_diagnostic_setting continuously overwriting same values for diagnostics #19258

Closed
1 task done
Stanislava27 opened this issue Nov 11, 2022 · 3 comments

Comments

@Stanislava27
Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

1.2.7

AzureRM Provider Version

3.30.0

Affected Resource(s)/Data Source(s)

azurerm_monitor_diagnostic_setting

Terraform Configuration Files

resource "azurerm_monitor_diagnostic_setting" "fw_diagnostics" {
  name                           = "FirewallDiagnostics"
  log_analytics_destination_type = "AzureDiagnostics"
  target_resource_id             = azurerm_firewall.firewall[0].id
  log_analytics_workspace_id     = azurerm_log_analytics_workspace.sentinel.id

  dynamic "log" {
    for_each = var.fw_diagnostic_settings
    content {
      category = log.value
      enabled  = true

      retention_policy {
        days    = 7
        enabled = true
      }
    }
  }
}

The variable is:

variable "fw_diagnostic_settings" {
  description = "Log categories used in diagnostics settings for firewalls"
  type        = list(string)
  default     = ["AzureFirewallApplicationRule", "AzureFirewallDnsProxy", "AzureFirewallNetworkRule"]
}
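
A quick way to see which categories the target resource actually exposes (and which of them a config like the one above leaves unspecified) is to query the azurerm_monitor_diagnostic_categories data source. A minimal sketch, reusing the resource names from the config above:

```hcl
# Sketch: enumerate every diagnostic log category Azure reports for the
# firewall. Resource names reuse those from the config above.
data "azurerm_monitor_diagnostic_categories" "fw" {
  resource_id = azurerm_firewall.firewall[0].id
}

output "fw_log_categories" {
  value = data.azurerm_monitor_diagnostic_categories.fw.log_category_types
}
```

Running terraform plan/apply and comparing the output list against var.fw_diagnostic_settings shows which categories the provider fills in with defaults behind the scenes.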

Debug Output/Panic Output

- log {
          - category = "AzureFirewallNetworkRule" -> null
          - enabled  = true -> null

          - retention_policy {
              - days    = 7 -> null
              - enabled = true -> null
            }
        }
      + log {
          + category = "AzureFirewallNetworkRule"
          + enabled  = true

          + retention_policy {
              + days    = 7
              + enabled = true
            }
        }

Expected Behaviour

If values have not changed, they should not be replaced on every run. This adds noise to the terraform plan output and makes it harder to see the real changes.

Actual Behaviour

The values for each log block of the diagnostic setting are replaced with the same values on every plan.

Steps to Reproduce

1. Create a variable with a few types of logs collected for diagnostics.
2. Add an azurerm_monitor_diagnostic_setting resource for a resource (in my example, a firewall).
3. Run terraform plan.

Important Factoids

No response

References

No response

@github-actions github-actions bot removed the bug label Nov 11, 2022
@teowa
Contributor

teowa commented Nov 14, 2022

Hi @Stanislava27 , thanks for submitting this issue!
This issue seems related to #17172, #7235, and #10388, and these cases should be noted in the resource docs.

In short, all possible fields should be explicitly set. Based on #10388 (comment), the config below, using dynamic blocks and the azurerm_monitor_diagnostic_categories data source, should produce no diff:

Config Detail
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_log_analytics_workspace" "example" {
  name                = "acctest-01"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}

resource "azurerm_virtual_network" "example" {
  name                = "testvnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "AzureFirewallSubnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "example" {
  name                = "testpip"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_firewall" "example" {
  name                = "testfirewall"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku_name            = "AZFW_VNet"
  sku_tier            = "Standard"

  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.example.id
    public_ip_address_id = azurerm_public_ip.example.id
  }
}

data "azurerm_monitor_diagnostic_categories" "example" {
  resource_id = azurerm_firewall.example.id
}

variable "fw_diagnostic_settings_log" {
  description = "Log categories used in diagnostics settings for firewalls"
  type        = list(string)
  default     = ["AzureFirewallApplicationRule", "AzureFirewallDnsProxy", "AzureFirewallNetworkRule"]
}

variable "fw_diagnostic_settings_metric" {
  description = "Metric categories used in diagnostics settings for firewalls"
  type        = list(string)
  default     = []
}

resource "azurerm_monitor_diagnostic_setting" "fw_diagnostics" {
  name                           = "FirewallDiagnostics"
  log_analytics_destination_type = "AzureDiagnostics"
  target_resource_id             = azurerm_firewall.example.id
  log_analytics_workspace_id     = azurerm_log_analytics_workspace.example.id
  dynamic "log" {
    iterator = log_category
    for_each = data.azurerm_monitor_diagnostic_categories.example.log_category_types

    content {
      enabled  = contains(var.fw_diagnostic_settings_log, log_category.value) ? true : false
      category = log_category.value
      retention_policy {
        days    = contains(var.fw_diagnostic_settings_log, log_category.value) ? 7 : 0
        enabled = contains(var.fw_diagnostic_settings_log, log_category.value) ? true : false

      }
    }
  }

  dynamic "metric" {
    iterator = metric_category
    for_each = data.azurerm_monitor_diagnostic_categories.example.metrics

    content {
      enabled  = contains(var.fw_diagnostic_settings_metric, metric_category.value) ? true : false
      category = metric_category.value
      retention_policy {
        days    = contains(var.fw_diagnostic_settings_metric, metric_category.value) ? 7 : 0
        enabled = contains(var.fw_diagnostic_settings_metric, metric_category.value) ? true : false
      }
    }
  }
}
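
A small style note on the config above: since contains() already returns a bool, the `? true : false` conditionals are redundant. An equivalent, slightly shorter sketch of the log block (same names and logic as above):

```hcl
# Equivalent form of the dynamic "log" block above; contains() already
# yields a bool, so the ternary is only needed where the branches differ
# (the retention days).
dynamic "log" {
  iterator = log_category
  for_each = data.azurerm_monitor_diagnostic_categories.example.log_category_types

  content {
    category = log_category.value
    enabled  = contains(var.fw_diagnostic_settings_log, log_category.value)

    retention_policy {
      days    = contains(var.fw_diagnostic_settings_log, log_category.value) ? 7 : 0
      enabled = contains(var.fw_diagnostic_settings_log, log_category.value)
    }
  }
}
```

The metric block can be simplified the same way.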

@Stanislava27
Author

Thank you, @teowa. #10388 is the same as my issue. I have added my vote there and will close this issue as a duplicate.

@Stanislava27 Stanislava27 closed this as not planned (duplicate) Nov 14, 2022
@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 15, 2022
4 participants