

Applying azurerm_monitor_diagnostic_setting to Keyvault #8225

Closed
ghost opened this issue Aug 24, 2020 · 13 comments

Comments

@ghost

ghost commented Aug 24, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.28
provider.azurerm v2.24.0

Affected Resource(s)

  • azurerm_monitor_diagnostic_setting

Terraform Configuration Files

Defined in my Keyvault module against my Keyvault resource:

resource "azurerm_monitor_diagnostic_setting" "keyvault" {
  count                      = var.keyvault_enable_logging == true ? 1 : 0
  name                       = "kv-diag"
  target_resource_id         = azurerm_key_vault.keyvault.id
  log_analytics_workspace_id = var.keyvault_log_workspace_id

  log {
    category = "AuditEvent"
    enabled = true
    retention_policy {
      days    = 30
      enabled = false 
    }
  }
  
  metric {
    category = "AllMetrics"
    enabled  = "false"
    retention_policy {
      days    = 30
      enabled = false 
    }
  }
}

Debug Output

As the setting is currently configured in the state file:

            "metric": [
              {
                "category": "AllMetrics",
                "enabled": false,
                "retention_policy": [
                  {
                    "days": 30,
                    "enabled": false
                  }
                ]
              }
            ],

Expected Behavior

No changes should be identified when running a terraform plan

Actual Behavior

Changes are detected in the plan:

      - metric {
          - category = "AllMetrics" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }

      + metric {
          + category = "AllMetrics"
          + enabled  = false

          + retention_policy {
              + days    = 30
              + enabled = false
            }
        }
    }

Steps to Reproduce

  1. terraform plan

Important Factoids

I don't have this issue with my diagnostic settings for other resources. I am defining all categories in all of my diagnostic_setting resource blocks, regardless of whether I want to enable them.

References

I initially thought it was the same behaviour as the issue below, but after explicitly defining the metric block and setting it to enabled = false, my issue persisted.

@ghost ghost changed the title from "Keyvault diagnostic_settings" to "Applying azurerm_monitor_diagnostic_setting to Keyvault" on Aug 24, 2020
@magodo
Collaborator

magodo commented Aug 25, 2020

@DAL3001 Thank you for submitting this 👍

From the execution plan above, it seems that days is set to 0 on the remote resource. Can you run a terraform refresh and then check your state file to see whether days is 0?

If the issue remains, can you run terraform plan with TF_LOG=DEBUG and provide the debug log to help us investigate further?

Also, to which location are you provisioning your resources?

@ghost
Author

ghost commented Sep 6, 2020

Hi @magodo

Sorry for the delay!

In the end I couldn't replicate the issue on a fresh apply using a different Azure Subscription with the same Terraform code, so I figured it must be some sort of state mix-up.

I tried a refresh but still had the same issue. In the end, I removed the diagnostic setting manually in the portal and ran another apply. Everything was recreated correctly, and subsequent plan/apply runs no longer seem to be marking the diagnostic setting for changes, which is good. I'm not sure what the exact cause was in this case, but I guess this can be closed as it doesn't appear to be an actual issue.

Thanks :)

@ghost ghost closed this as completed Sep 6, 2020
@ghost
Author

ghost commented Sep 7, 2020

Actually, sorry, this appears to be happening again. It wasn't happening on Friday on the first apply after the diagnostic logging resource was created, but now it is...

@magodo - do you need a specific part of the DEBUG log? It seems quite revealing, so I would like to sanitise it a bit if you only need some specific parts.

Thanks

@ghost ghost reopened this Sep 7, 2020
@magodo
Collaborator

magodo commented Sep 7, 2020

@DAL3001

Yes, please provide the log with TF_LOG=DEBUG set for terraform refresh. I am mainly interested in the request/response of the first read of the diagnostic setting resource (during refresh), as I suspect the service somehow resets days to 0 when you disable it.

As a workaround, can you explicitly define the metric.retention_policy block with days set to 0 and try again?
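Keeping the names from the configuration at the top of this issue, a minimal sketch of that workaround would look like the following (only the metric block changes; days is set to 0 to match what the service appears to store for a disabled retention policy):

resource "azurerm_monitor_diagnostic_setting" "keyvault" {
  count                      = var.keyvault_enable_logging == true ? 1 : 0
  name                       = "kv-diag"
  target_resource_id         = azurerm_key_vault.keyvault.id
  log_analytics_workspace_id = var.keyvault_log_workspace_id

  log {
    category = "AuditEvent"
    enabled  = true
    retention_policy {
      days    = 30
      enabled = false
    }
  }

  # Disabled metric category: days = 0 so the configuration matches the
  # value the service appears to store for a disabled retention policy.
  metric {
    category = "AllMetrics"
    enabled  = false
    retention_policy {
      days    = 0
      enabled = false
    }
  }
}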

@etaham

etaham commented Sep 30, 2020

@magodo I just tested with your suggestion. Adding:

metric {
  category = "AllMetrics"
  enabled  = false
  retention_policy {
    enabled = false
    days    = 0
  }
}

works as a workaround.

@magodo
Collaborator

magodo commented Oct 12, 2020

Hey @etaham and @DAL3001, do you mind if I close this issue, as it is resolved by explicitly setting days to 0?

Though it might be tempting to set 0 as the default value for days, that is not always the correct value, so we cannot do it at the provider level.

@etaham

etaham commented Oct 12, 2020

@magodo,

We would need to add the entire metric block marked as disabled just to work around this; it's not just setting days to 0. Accidentally enabling the metric block would result in Azure charges.

Ideally, this should be fixed so people don't need to add extra code just to squelch Terraform.

@magodo
Collaborator

magodo commented Oct 13, 2020

@etaham Yes, it is a known issue that you have to declare all the supported metric/log categories; as you already noticed, it is tracked in #7235. For that issue, I have another PR (#7463) trying to fix it, but it is blocked by the plugin SDK.
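For reference, one pattern that avoids hard-coding every category is to enumerate them with the azurerm_monitor_diagnostic_categories data source and dynamic blocks. This is only a sketch, not a fix for #7235 itself (the count argument from the original module is omitted for brevity), and the logs/metrics attribute names are those exported by 2.x releases of that data source, so they may differ in other provider versions:

data "azurerm_monitor_diagnostic_categories" "keyvault" {
  resource_id = azurerm_key_vault.keyvault.id
}

resource "azurerm_monitor_diagnostic_setting" "keyvault" {
  name                       = "kv-diag"
  target_resource_id         = azurerm_key_vault.keyvault.id
  log_analytics_workspace_id = var.keyvault_log_workspace_id

  # Declare every supported log category, enabling only AuditEvent.
  dynamic "log" {
    for_each = data.azurerm_monitor_diagnostic_categories.keyvault.logs
    content {
      category = log.value
      enabled  = log.value == "AuditEvent"
      retention_policy {
        days    = log.value == "AuditEvent" ? 30 : 0
        enabled = false
      }
    }
  }

  # Declare every supported metric category as disabled, with days = 0.
  dynamic "metric" {
    for_each = data.azurerm_monitor_diagnostic_categories.keyvault.metrics
    content {
      category = metric.value
      enabled  = false
      retention_policy {
        days    = 0
        enabled = false
      }
    }
  }
}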

@etaham

etaham commented Oct 19, 2020

@magodo makes sense.

@magodo
Collaborator

magodo commented Oct 20, 2020

@etaham Do you mind if I close this issue for now in favor of #7235?

PS: there is a recent update in that issue which provides another workaround that you might want to try.

@etaham

etaham commented Oct 20, 2020

@magodo Sure

@tombuildsstuff
Contributor

Closing as mentioned above.

@ghost

ghost commented Nov 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Nov 19, 2020