azurerm_monitor_diagnostic_setting re-applies changes/updates every time running plan or apply... #10388
Comments
I did a bit of investigation: as soon as one diagnostic setting category (regardless of whether it is a "log" or a "metric") is enabled for a target resource, the API returns all available categories for that target resource with default values (enabled: false and retention: 0):
{
"id": "/subscriptions/****/resourcegroups/test-for-github-10388/providers/microsoft.keyvault/vaults/test-for-github-10388/providers/microsoft.insights/diagnosticSettings/github-issue-10388",
"identity": null,
"kind": null,
"location": null,
"name": "github-issue-10388",
"properties": {
"eventHubAuthorizationRuleId": null,
"eventHubName": null,
"logAnalyticsDestinationType": null,
"logs": [
{
"category": "AuditEvent",
"categoryGroup": null,
"enabled": false,
"retentionPolicy": {
"days": 0,
"enabled": false
}
},
{
"category": "AzurePolicyEvaluationDetails",
"categoryGroup": null,
"enabled": false,
"retentionPolicy": {
"days": 0,
"enabled": false
}
}
],
"metrics": [
{
"category": "AllMetrics",
"enabled": true,
"retentionPolicy": {
"days": 0,
"enabled": false
}
}
],
"serviceBusRuleId": null,
"storageAccountId": null,
"workspaceId": "/subscriptions/****/resourceGroups/oasis-tst-loganalytics/providers/Microsoft.OperationalInsights/workspaces/oasis-tst-analytics"
},
"resourceGroup": "test-for-github-10388",
"tags": null,
"type": "Microsoft.Insights/diagnosticSettings"
}

Terraform then interprets the presence of these defaulted categories as a change from null to false and wants to change them back - which does not work. A possible fix would be to keep only the enabled entries when flattening the API response:

if *v.Enabled {
	results = append(results, output)
}

This has some side effects, though, when "enabled: false" is specified explicitly, as is done in the tests (e.g. https://github.com/hashicorp/terraform-provider-azurerm/blob/main/internal/services/monitor/monitor_diagnostic_setting_resource_test.go#L277).
I would like to see the proposed solution implemented. The ignore_changes lifecycle argument doesn't work in this scenario. For us, the workaround is to specify the automatic policy in the Terraform code as if the audit log had been set up by Terraform. The proposed "explicitly specify enabled=true/false to manage" behaviour would be perfect for us. I can propose a PR with the changes if that's OK with you.
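A minimal sketch of that workaround for the Key Vault example above (the resource references are assumptions; the categories are the ones listed in the API response):

resource "azurerm_monitor_diagnostic_setting" "example" {
  name                       = "github-issue-10388"
  target_resource_id         = azurerm_key_vault.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  # Only AllMetrics is actually wanted, but every category the API reports
  # back is declared explicitly so that subsequent plans see no difference.
  log {
    category = "AuditEvent"
    enabled  = false

    retention_policy {
      days    = 0
      enabled = false
    }
  }

  log {
    category = "AzurePolicyEvaluationDetails"
    enabled  = false

    retention_policy {
      days    = 0
      enabled = false
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}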
I am seeing a similar problem for AKS diagnostics, where I split the log categories across two destinations (some stored in Log Analytics and some in a Storage Account). Below is my Terraform configuration:

resource "azurerm_monitor_diagnostic_setting" "aks_audit" {
lifecycle {
ignore_changes = [target_resource_id]
}
name = local.diag_name
log_analytics_workspace_id = azurerm_log_analytics_workspace.workspace.id
target_resource_id = data.azurerm_kubernetes_cluster.aks.id
log {
category = "cloud-controller-manager"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "cluster-autoscaler"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "kube-apiserver"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "kube-controller-manager"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "kube-scheduler"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "guard"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
metric {
category = "AllMetrics"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
}
resource "azurerm_monitor_diagnostic_setting" "aks_audit_sa" {
lifecycle {
ignore_changes = [target_resource_id]
}
name = local.sa_diag_name
storage_account_id = azurerm_storage_account.audit_sa.id
target_resource_id = data.azurerm_kubernetes_cluster.aks.id
log {
category = "kube-audit"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
log {
category = "kube-audit-admin"
enabled = true
retention_policy {
days = 90
enabled = true
}
}
}

The plan shows something like:

# azurerm_monitor_diagnostic_setting.aks_audit will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "aks_audit" {
id = <REDACTED>
name = <REDACTED>
# (2 unchanged attributes hidden)
- log {
- category = "csi-azuredisk-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "csi-azurefile-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "csi-snapshot-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "kube-audit" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "kube-audit-admin" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
# (7 unchanged blocks hidden)
}
# azurerm_monitor_diagnostic_setting.aks_audit_sa will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "aks_audit_sa" {
id = <REDACTED>
name = <REDACTED>
# (2 unchanged attributes hidden)
- log {
- category = "cloud-controller-manager" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "cluster-autoscaler" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "csi-azuredisk-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "csi-azurefile-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "csi-snapshot-controller" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "guard" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "kube-apiserver" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "kube-controller-manager" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- log {
- category = "kube-scheduler" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
- metric {
- category = "AllMetrics" -> null
- enabled = false -> null
- retention_policy {
- days = 0 -> null
- enabled = false -> null
}
}
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 2 to change, 0 to destroy.

This is the plan on every terraform plan execution, without any variable changes. I had to add the lifecycle meta-argument when this happened last time, but the mentioned request handling on the Azure API side forces the provider to detect a change every time. I had to explicitly specify all the categories (log as well as metric) that I don't need, as below, so that Terraform dodges the "undesired" changes. For example:

log {
category = "CATEGORY"
enabled = false
retention_policy {
days = 0
enabled = false
}
}

Until this issue is resolved, I need to keep adding such a block whenever Azure introduces and enforces a new diagnostic log category.
Hi all, below is my workaround, which resulted in no unneeded changes being detected.
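The snippet itself is not preserved above; one commonly used pattern along these lines (an assumption, not necessarily the same code) drives the log and metric blocks from the azurerm_monitor_diagnostic_categories data source so that every category the API reports is declared:

data "azurerm_monitor_diagnostic_categories" "aks" {
  resource_id = data.azurerm_kubernetes_cluster.aks.id
}

locals {
  # Categories that should actually be collected; everything else is
  # declared but disabled so the plan stays stable. Hypothetical list.
  enabled_log_categories = ["kube-audit", "kube-audit-admin"]
}

resource "azurerm_monitor_diagnostic_setting" "aks_audit" {
  name                       = local.diag_name
  target_resource_id         = data.azurerm_kubernetes_cluster.aks.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.workspace.id

  # Older provider versions expose the categories as "logs" rather than
  # "log_category_types".
  dynamic "log" {
    for_each = data.azurerm_monitor_diagnostic_categories.aks.log_category_types
    content {
      category = log.value
      enabled  = contains(local.enabled_log_categories, log.value)

      retention_policy {
        days    = 0
        enabled = false
      }
    }
  }

  dynamic "metric" {
    for_each = data.azurerm_monitor_diagnostic_categories.aks.metrics
    content {
      category = metric.value
      enabled  = false

      retention_policy {
        days    = 0
        enabled = false
      }
    }
  }
}

As the later comment about "the data call" not being evaluated during plan suggests, results with this approach may vary by provider and Terraform version.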
@murdibb worked for us - thanks!
It works for us as well. Tested it.
Unfortunately, even this isn't working anymore for me; the data call doesn't seem to be made during plan, which still triggers changes.
Version 3.39.0 of the azurerm provider introduced the new enabled_log block. In my case, I am transitioning existing log blocks to it.
Version 3.39.1 with enabled_log still has the same issue for me.
Version 3.39.0 of the azurerm provider introduced the new enabled_log block under the azurerm_monitor_diagnostic_setting resource, which fixed the issue for the log blocks. But the same issue exists for the metric block, and it tries to apply the same changes over and over on every Terraform plan/apply run.
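For reference, a minimal sketch of the enabled_log syntax referred to there (names are assumptions); at the time of that comment there was no enabled_metric counterpart, so the metric block still had to be declared explicitly to keep the plan quiet:

resource "azurerm_monitor_diagnostic_setting" "example" {
  name                       = "example-diag"
  target_resource_id         = azurerm_key_vault.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  # Provider >= 3.39.0: replaces the deprecated log block for enabled categories.
  enabled_log {
    category = "AuditEvent"
  }

  # Still a plain metric block; declared even when disabled so the value
  # returned by the API matches the configuration.
  metric {
    category = "AllMetrics"
    enabled  = false
  }
}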
Would having an enabled_metric block, analogous to enabled_log, solve this for metrics?
For me it keeps happening even when using enabled_log.
The same problem exists for the Synapse workspace. The diagnostics have no metrics configured, but an AllMetrics category is created automatically.
Terraform (and AzureRM Provider) Version
Affected Resource(s)
azurerm_monitor_diagnostic_setting when applied to an Azure SQL DB and a Synapse Pool
Expected Behaviour
After running terraform apply, running terraform plan or apply again should report no changes.
Actual Behaviour
After running terraform plan/apply, Terraform wants to apply the same changes again and again.
Below is a screenshot of what I see every time I re-run terraform plan, i.e. it applies those same changes every time, whereas Terraform should only update what has actually changed:
Steps to Reproduce
terraform plan
Code
The resource created and the azurerm_monitor_diagnostic_setting used to send resource logs out:
Azure SQL DB:
Azure Synapse Pool
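The configuration blocks themselves were not captured above; a minimal illustrative sketch for the Azure SQL DB case, with all names, the workspace reference, and the chosen categories being assumptions rather than the reporter's actual code:

resource "azurerm_monitor_diagnostic_setting" "sqldb" {
  name                       = "sqldb-diagnostics"
  target_resource_id         = azurerm_mssql_database.example.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id

  log {
    category = "SQLSecurityAuditEvents"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }

  metric {
    category = "Basic"
    enabled  = true

    retention_policy {
      days    = 0
      enabled = false
    }
  }
}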