Updates to azurerm_kubernetes_cluster fail when cluster uses managed AAD integration #7325
Comments
A week late on this, but a colleague and I hit the same error yesterday. We noticed you could update the RBAC details via the CLI, so for anyone who wants a workaround while this is being looked at: we deleted the AKS cluster, set the role_based_access_control block accordingly,
then created a null resource where we update the managed AAD admin group ids.
However, you'll also need an ignore_changes on the AKS RBAC block (a sketch follows below).
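A minimal sketch of that workaround, in case it helps; the cluster name, resource group, and admin group variable are placeholders, and the az aks update --aad-admin-group-object-ids flag is assumed to be available in the Azure CLI version in use (this is not the commenter's exact code):

resource "azurerm_kubernetes_cluster" "example" {
  # ... other cluster configuration ...

  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed = true
    }
  }

  lifecycle {
    # avoid the resetAADProfile call on subsequent applies
    ignore_changes = [role_based_access_control]
  }
}

# update the managed AAD admin group ids out-of-band via the Azure CLI
resource "null_resource" "aks_aad_admins" {
  triggers = {
    admin_group_ids = join(",", var.aad_admin_group_object_ids)
  }

  provisioner "local-exec" {
    command = "az aks update -g ${azurerm_kubernetes_cluster.example.resource_group_name} -n ${azurerm_kubernetes_cluster.example.name} --aad-admin-group-object-ids ${join(",", var.aad_admin_group_object_ids)}"
  }
}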
az version: 2.8. EDIT: if tags change, it still raises the resetAADProfile error. You can add tags to the ignore_changes as well if that works for you, but obviously you then can't update tags (a big disadvantage). Unfortunately, there is no az aks update option for tags either. Still investigating.
I really don't like this approach, but it's working for us. We created two provisioners, one for the AAD admin group ids and one for updating the tags, plus the scripts they call (see the sketch below).
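A hedged sketch of the tags half of that setup (the admin-group provisioner mirrors the earlier sketch); the cluster reference and variables are placeholders, az aks update had no tags option at the time, and az resource tag is assumed to be available in the CLI version in use:

locals {
  # flatten the tags map into "key=value" arguments for the CLI
  tag_args = join(" ", [for k, v in var.tags : "${k}=${v}"])
}

# AKS tags update provisioner: re-applies tags whenever they change
resource "null_resource" "aks_tags" {
  triggers = {
    tags = jsonencode(var.tags)
  }

  provisioner "local-exec" {
    command = "az resource tag --ids ${azurerm_kubernetes_cluster.example.id} --tags ${local.tag_args}"
  }
}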
I also faced this issue. I got this error message when I tried to update the AAD settings manually through the API; however, I managed to update the settings with the Azure CLI.
This error also occurs when modifying other properties of the cluster, such as the max node count on a node pool.
Error:
Looking at the TF code, I am not sure why role_based_access_control is seen as a change when I update the max node count on a node pool. Looking at the resource schema, I wonder if this setting has a bearing on forcing an update:
Updating the Kubernetes version also caused this issue:
I can reproduce the same error while updating the autoscaler configuration (e.g. updating max_count 3 -> 4). Versions:
The short of it: AAD v2 is a preview feature and it was enabled in the provider, but resetAADProfile is not supported with AAD v2 clusters (on Microsoft's side). Therefore the call to reset it should be omitted when managed = true until Microsoft starts supporting the call.
resetAADProfile with API version 2020-06-01 seems to support this, so I guess this could be fixed by using the new API version.
Yeah, I am seeing this when amending the pool size, doing Kubernetes upgrades, or changing autoscale settings, so it's unusable currently. @patpicos I don't believe it is preview any more; all the preview markers have been removed from the docs and the old version is now referred to as legacy - https://docs.microsoft.com/en-us/azure/aks/managed-aad
@sam-cogan that is very interesting news. This commit to the documentation confirms what you are saying: MicrosoftDocs/azure-docs@96ab8c1#diff-90a9850acdb4834ff96cc6562e19144e I didn't see a notice in the AKS release notes; perhaps one is imminent. @PSanetra might be on the right path with updating the API version to see if it makes these errors go away.
Got confirmation from the PM that it is GA.
We are creating a new cluster today with AAD v2 support, will let you know how it goes! Will look into it if it is not working.
I created a new cluster yesterday and can confirm the issue is present. You do not see it at cluster creation (at least I didn't), but when you try to modify the cluster to change the number of nodes, update the version, etc., you will see the issue.
The feature is not GA anymore, due to a delayed rollout: Azure/AKS#1489 (comment). Also, when I deploy it with a custom build I first register the feature:

az feature register --name AAD-V2 --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService

I'm working on a PR at the moment; it seems to work, but acceptance testing takes a little while.
I've implemented a fix and added acceptance tests to cover the scenarios in this issue. If nothing goes wrong it will make the next release! 🎉
This has been released in version 2.21.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 2.21.0"
}
# ... other configuration ...
Upgrading the provider to version 2.21.0 works :)
With managed AAD, how do we attach an ACR instance to the AKS cluster? Before, with a manually set up service principal, you would just propagate the ACR role to that principal, but as far as I can see there is no way to get access to the underlying service principal that gets set up automatically.
@tkinz27 you're talking about two different things here. The managed AAD integration this issue refers to is about being able to log in to the cluster for admin work as an AAD user; it has nothing to do with the cluster's access to other resources. Using managed identity for the cluster identity creates a user-assigned managed identity, which you can retrieve via the user_assigned_identity_id of the kubelet_identity block. You would then grant this managed identity access to ACR.
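For reference, a minimal sketch of granting the kubelet identity pull access on ACR via a role assignment; azurerm_container_registry.example and azurerm_kubernetes_cluster.example are assumed to exist elsewhere in the configuration, and the kubelet identity's object_id is used as the role-assignment principal:

resource "azurerm_role_assignment" "acr_pull" {
  scope                = azurerm_container_registry.example.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.example.kubelet_identity[0].object_id
}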
Ohhh... sorry, I'm new to Azure (coming from AWS) and all the auth has definitely been confusing. Thank you for so quickly clearing that up for me.
EDIT: This is working fine now; it was my faulty configuration. Thanks aristosvo! I added the block to my main.tf as instructed and it worked! Original comment below.

I still get the error about ResetAADProfile although I used the v2.21.0 azurerm provider:

Error: updating Managed Kubernetes Cluster AAD Profile in cluster "sutinenseaks-aks" (Resource Group "sutinenseaks-rg"): containerservice.ManagedClustersClient#ResetAADProfile: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Operation 'resetAADProfile' is not allowed for managed AAD enabled cluster."

on main.tf line 45, in resource "azurerm_kubernetes_cluster" "demo"

I also upgraded the kubernetes provider 1.11.1 -> 1.12.0, still not working. terraform version:

My try was done according to that tutorial.
@sutinse1 Can you provide the configuration you are using?
What I see is a cluster set up with AAD-v1 integration. Apparently either the backward compatibility here is a problem or you're mixing things up in your setup. I think the first is the issue; I'll run a few tests when I have time at hand. For now I'd recommend restructuring/simplifying your terraform file for the AAD integration:

resource "azurerm_kubernetes_cluster" "demo" {
  ...
  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed = true
      // optional:
      // admin_group_object_ids = [<AAD group object ids which you want to make cluster admin via AAD login>]
    }
  }
  ...
}
@sutinse1 Can you explain in short what you did before you ended up with the mentioned error? What I think you did was as follows:

resource "azurerm_kubernetes_cluster" "demo" {
  ...
  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.aad_client_app_id
      server_app_id     = var.aad_server_app_id
      server_app_secret = var.aad_server_app_secret
      tenant_id         = var.aad_tenant_id
    }
  }
  ...
}

If not, I'm very curious how your configuration ended up in the state with the error 😄
@aristosvo I did just as you wrote: I upgraded to AAD-v2 by registering the feature.
// I registered the AAD-V2 feature
// Updated the cluster
I somehow thought that terraform could query whether AAD is used :) My mistake. My configuration now (I have to comment out the SPs):
role_based_access_control {
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks! |
Community Note
Terraform (and AzureRM Provider) Version
Terraform v0.12.26
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
Debug Output
Panic Output
Expected Behavior
Actual Behavior
Error: updating Managed Kubernetes Cluster AAD Profile in cluster "aks-service" (Resource Group "aks-service-rg"): containerservice.ManagedClustersClient#ResetAADProfile: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Operation 'resetAADProfile' is not allowed for managed AAD enabled cluster."
Steps to Reproduce
terraform plan
terraform apply
terraform plan
terraform apply
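For illustration only, a hypothetical change of the kind reported in this thread that triggers the error between the two applies (bumping max_count on an autoscaled default node pool); all names and values are placeholders:

resource "azurerm_kubernetes_cluster" "example" {
  # ... managed AAD cluster configuration as described above ...

  default_node_pool {
    name                = "default"
    vm_size             = "Standard_DS2_v2"
    enable_auto_scaling = true
    min_count           = 1
    max_count           = 3   # changing this to 4 on an existing cluster reproduces the resetAADProfile error
  }
}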
Important Factoids
References