
azurerm_role_assignment forces replacement after azurerm_storage_share resource_manager_id is corrected #21798

Closed
enorlando opened this issue May 16, 2023 · 9 comments · Fixed by #22271

Comments

@enorlando

enorlando commented May 16, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

1.4.6

AzureRM Provider Version

3.56

Affected Resource(s)/Data Source(s)

azurerm_role_assignment

Terraform Configuration Files

resource "azurerm_role_assignment" "xxx" {
  scope                = azurerm_storage_share.xxx.resource_manager_id
  role_definition_name = "Storage File Data SMB Share Contributor"
  principal_id         = "xxx"
}

Debug Output/Panic Output

# azurerm_role_assignment.xxx must be replaced
-/+ resource "azurerm_role_assignment" "xxx" {
      ~ id                               = "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/fileshares/shares/providers/Microsoft.Authorization/roleAssignments/xxx" -> (known after apply)
      ~ name                             = "xxx" -> (known after apply)
      ~ principal_type                   = "Group" -> (known after apply)
      ~ role_definition_id               = "/subscriptions/xxx/providers/Microsoft.Authorization/roleDefinitions/xxx" -> (known after apply)
      ~ scope                            = "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/fileshares/shares" -> "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/shares/shares" # forces replacement
      + skip_service_principal_aad_check = (known after apply)
        # (2 unchanged attributes hidden)
    }

Expected Behaviour

No changes. Your infrastructure matches the configuration.

Actual Behaviour

forces replacement

Steps to Reproduce

terraform plan

Important Factoids

After #21638 was merged and the provider was updated, we encountered this issue. We can no longer use the resource_manager_id of the azurerm_storage_share as the scope and have to use a hard-coded path like the one below to avoid recreation of the resource:

"/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/fileshares/shares"

References

No response

@magodo
Collaborator

magodo commented May 17, 2023

@enorlando Sorry for this unexpected breaking change. Alternatively, you can use ignore_changes = [scope] to suppress the diff.

That said, since what #21638 does is fix the incorrect storage share ID so that it matches the real Azure resource ID, I'm wondering whether your existing role assignment actually takes effect?
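For reference, a minimal sketch of the ignore_changes workaround applied to the configuration from the issue (resource names are the same placeholders used above):

resource "azurerm_role_assignment" "xxx" {
  scope                = azurerm_storage_share.xxx.resource_manager_id
  role_definition_name = "Storage File Data SMB Share Contributor"
  principal_id         = "xxx"

  lifecycle {
    # Suppress the permanent diff caused by the resource_manager_id format change.
    ignore_changes = [scope]
  }
}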

@enorlando
Author

@magodo That is what we have done in the interim; however, it is not the best approach.

The existing role assignment had indeed been working prior to this provider change. If I apply the diff, the state is updated with the corrected resource_manager_id, but the access granted through IAM is removed.

@magodo
Collaborator

magodo commented May 17, 2023

@enorlando That is because the portal is using the wrong resource id:

[screenshot: Azure Portal showing the role assignment scope]

(including the URL it uses to list the role assignments)

You can list the newly assigned roles against the correct ID with tools like the az CLI: az role assignment list --scope <id>

@enorlando
Author

@magodo so this is a bug in the UI of the Azure Portal? I assume that until it is fixed we need to use ignore_changes = [scope] to suppress the diff

@iwu-spd

iwu-spd commented May 25, 2023

I'm not sure it's a bug in the UI. What we noticed was that users lost access to the share when Terraform assigned the scope to /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/shares/shares

Now we're forced to use the hard-coded path above

@dvansteenburg
Contributor

I also opened a ticket with Microsoft when we ran into this exact same issue using the resource_manager_id with azurerm v3.56.0 (or newer), and our users also lost connectivity. Microsoft's own examples show that we should be using /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Storage/storageAccounts/xxx/fileServices/default/fileshares/shares; see the example they sent: https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions?tabs=azure-cli#share-level-permissions-for-specific-azure-ad-users-or-groups. When we changed back to an earlier version of azurerm, things started working again: the Azure Portal showed the RBAC permissions properly and users regained access to the file shares.

@enorlando
Author

@magodo could you shed some light on this please? Thanks

@magodo
Collaborator

magodo commented Jun 25, 2023

Thank you @dvansteenburg for providing more context, and sorry for the inconvenience! I've submitted a PR to revert #21645.

Since the file share is a data plane resource, its resource manager ID is to some extent defined by convention. For us, Swagger is always the source of truth, while Azure RBAC uses another form. Therefore, I've submitted an issue to the Swagger repo for this: Azure/azure-rest-api-specs#24568

@github-actions github-actions bot added this to the v3.63.0 milestone Jun 26, 2023

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 18, 2024