[azurerm_kubernetes_cluster] kube_admin_config should be sanitized #4107

Closed
wants to merge 2 commits into from

Conversation

@nexxai (Contributor) commented Aug 16, 2019

Fixes #4105

@katbyte (Collaborator) left a comment

Hi @nexxai,

Thank you for the PR. However, I am not entirely sure why you wish to mark the entire block as sensitive, as the password & certificate keys are already marked as such. Could you share your reasoning?
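
For context, a minimal sketch of the two approaches under discussion, written against the Terraform helper/schema package. This is illustrative only, not the provider's actual schema; the attribute names mirror the kube_admin_config block shown later in this thread, and the import path may differ depending on the SDK version in use.

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// Current approach: only the secret attributes inside the block carry
// Sensitive: true, so just those values are masked in plan output.
func kubeAdminConfigSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Computed: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"host":                   {Type: schema.TypeString, Computed: true},
				"username":               {Type: schema.TypeString, Computed: true},
				"password":               {Type: schema.TypeString, Computed: true, Sensitive: true},
				"client_certificate":     {Type: schema.TypeString, Computed: true},
				"client_key":             {Type: schema.TypeString, Computed: true, Sensitive: true},
				"cluster_ca_certificate": {Type: schema.TypeString, Computed: true},
			},
		},
	}
}

// Proposed approach: mark the whole block as sensitive, matching how the
// single-string kube_admin_config_raw attribute is already handled.
func kubeAdminConfigSchemaFullySensitive() *schema.Schema {
	s := kubeAdminConfigSchema()
	s.Sensitive = true
	return s
}
```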

@nexxai (Contributor, Author) commented Aug 18, 2019

I don't have a particularly good reason, other than that kube_admin_config_raw has been marked fully Sensitive and the original request was to have kube_admin_config match it.

@rudolphjacksonm can you please explain in more detail? I don't use this resource so I don't have the output of kube_admin_config in front of me.

@ghost removed the waiting-response label Aug 18, 2019
@katbyte (Collaborator) commented Aug 18, 2019

Marking kube_admin_config_raw as sensitive does make sense, as it contains passwords and other sensitive information. However, I'm not sure which other attributes of kube_admin_config could potentially be sensitive.

@rudolphjacksonm (Contributor)

The password is exposed in kube_admin_config as well. In essence, if someone is granted standard user access to an AKS cluster and they copy the admin password into their kubeconfig, they can elevate themselves to cluster admin. This bypasses the ability to enforce access strictly via AAD. Mind you, users still have to be granted some form of access to the cluster in the first place.

@ghost removed the waiting-response label Aug 20, 2019
@tombuildsstuff (Contributor)

@rudolphjacksonm

The password is exposed in kube_admin_config as well. In essence, if someone is granted standard user access to an AKS cluster and they copy the admin password into their kubeconfig, they can elevate themselves to cluster admin. This bypasses the ability to enforce access strictly via AAD. Mind you, users still have to be granted some form of access to the cluster in the first place.

Taking a look into this: as @katbyte has mentioned, the sensitive fields within this block appear to be marked as Sensitive, so they should be masked (e.g. shown as "") in the plan output. Are you seeing different behaviour here? If so, would you be able to provide a sanitized output from terraform plan along with the versions of Terraform Core and the Azure Provider you're using?

Looking at this block specifically, the sensitive fields (password and client_key) are marked as sensitive, and the other fields are predictable (for example, if you know the cluster name, the cluster URL is predictable). As such, I'm wondering which specific fields could be sensitive here?

Thanks!

@rudolphjacksonm (Contributor) commented Aug 27, 2019

Hi @tombuildsstuff,

I'm not seeing that behavior on my end at all. Below is a heavily sanitized terraform plan. The client_key and password fields in the kube_admin_config block are shown as literal strings instead of being masked. We're running terraform version 0.12.6, and you can see the provider versions in the plan output below. Let me know if you need anything else!


```
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "external" (terraform-providers/external) 1.2.0...
- Downloading plugin for provider "template" (terraform-providers/template) 2.1.2...
- Downloading plugin for provider "azurerm" (terraform-providers/azurerm) 1.32.1...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.azurerm: version = "~> 1.32"
* provider.external: version = "~> 1.2"
* provider.template: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Success! The configuration is valid.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

azurerm_resource_group.sandboxuks1: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>].
module.aks-cluster.data.azurerm_client_config.current: Refreshing state...
module.aks-cluster.data.azurerm_subscription.current: Refreshing state...
module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>]
module.aks-cluster.azurerm_public_ip.aks_pip: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>_PIP/providers/Microsoft.Network/publicIPAddresses/<RESOURCEGROUP_NAME>_PIP]
module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_cluster_diagnostics[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>|aks-cluster-to-eventhub]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
<= read (data resources)

Terraform will perform the following actions:

# module.aks-cluster.data.external.aks_agentpool_name will be read during apply
# (config refers to values not yet known)
<= data "external" "aks_agentpool_name"  {
    + id      = (known after apply)
    + program = [
        + "bash",
        + ".terraform/modules/aks-cluster/scripts/agentpool.sh",
        ]
    + query   = {
        + "rg_name" = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth"
        }
    + result  = (known after apply)
    }

# module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0] must be replaced
-/+ resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
    - api_server_authorized_ip_ranges = [] -> null
        dns_prefix                      = "<DNS_PREFIX>"
    ~ fqdn                            = "<CLUSTER_FQDN>" -> (known after apply)
    ~ id                              = "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>" -> (known after apply)
    ~ kube_admin_config               = [
        - {
            - client_certificate     = "<CLIENT_CERT>"
            - client_key             = "LITERAL_CLIENT_KEY_STRING" <-- not obscured in any way; the client key is shown as a literal string
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = "LITERAL_PASSWORD_STRING" <-- not obscured in any way; this is the current password for the kube admin config
            - username               = "clusterAdmin_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_admin_config_raw           = (sensitive value)
    ~ kube_config                     = [
        - {
            - client_certificate     = ""
            - client_key             = ""
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = ""
            - username               = "clusterUser_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_config_raw                 = (sensitive value)
    ~ kubernetes_version              = "1.13.9" -> "1.13.7"
        location                        = "uksouth"
        name                            = "<RESOURCEGROUP_NAME>"
    ~ node_resource_group             = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth" -> (known after apply)
        resource_group_name             = "<RESOURCEGROUP_NAME>"
    ~ tags                            = {} -> (known after apply)

    ~ addon_profile {
        + aci_connector_linux {
            + enabled     = (known after apply)
            + subnet_name = (known after apply)
            }

        + http_application_routing {
            + enabled                            = (known after apply)
            + http_application_routing_zone_name = (known after apply)
            }

        + oms_agent {
            + enabled                    = (known after apply)
            + log_analytics_workspace_id = (known after apply)
            }
        }

    ~ agent_pool_profile {
        - availability_zones  = [] -> null
            count               = 2
        + dns_prefix          = (known after apply)
        - enable_auto_scaling = false -> null
        ~ fqdn                = "<AGENTPOOL_PROFILE_FQDN>" -> (known after apply)
        - max_count           = 0 -> null
            max_pods            = 80
        - min_count           = 0 -> null
            name                = "agentpool"
        - node_taints         = [] -> null
            os_disk_size_gb     = 64
            os_type             = "Linux"
            type                = "AvailabilitySet"
            vm_size             = "Standard_DS4_v2"
            vnet_subnet_id      = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.Network/virtualNetworks/<VNET_NAME>/subnets/<SUBNET_NAME>"
        }

        linux_profile {
            admin_username = "<ADMIN_USERNAME>"

            ssh_key {
                key_data = "<SSH_KEY>"
            }
        }

    ~ network_profile {
            dns_service_ip     = "10.254.0.10"
            docker_bridge_cidr = "172.17.0.1/16"
            load_balancer_sku  = "basic"
            network_plugin     = "azure"
        ~ network_policy     = "calico" -> "azure" # forces replacement
        + pod_cidr           = (known after apply)
            service_cidr       = "10.254.0.0/20"
        }

        role_based_access_control {
            enabled = true

            azure_active_directory {
                client_app_id     = "<AAD_CLIENT_ID>"
                server_app_id     = "<AAD_SERVER_ID>"
                server_app_secret = (sensitive value)
                tenant_id         = "*******"
            }
        }

    - service_principal {
        - client_id = "<SP_CLIENT_ID>" -> null
        }
    + service_principal {
        + client_id     = "<SP_CLIENT_ID>"
        + client_secret = (sensitive value)
        }
    }

# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_nsg_diagnostics[0] must be replaced
-/+ resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
        eventhub_authorization_rule_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.EventHub/namespaces/<NS_NAME>/AuthorizationRules/RootManageSharedAccessKey"
        eventhub_name                  = "aks-nsg-diagnostics"
    ~ id                             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply)
        name                           = "aks-nsg-to-eventhub"
    ~ target_resource_id             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply) # forces replacement

    - log {
        - category = "NetworkSecurityGroupEvent" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupEvent"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    - log {
        - category = "NetworkSecurityGroupRuleCounter" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupRuleCounter"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    }

Plan: 2 to add, 0 to change, 2 to destroy.

------------------------------------------------------------------------

This plan was saved to: terraform.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraform.tfplan"
```

@ghost removed the waiting-response label Aug 27, 2019
@TheMacStack

It is possible, and very simple, to construct the cluster-admin kubeconfig from the outputs in kube_admin_config, so redacting just kube_admin_config_raw is useless and actually very misleading due to the false sense of "security" it gives. The entire kube_admin_config and kube_config fields should be sensitive.
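
To make that concrete, here is a minimal sketch (an assumed example, not code from this thread) of how the individual kube_admin_config attributes are enough to build cluster-admin credentials with k8s.io/client-go. The placeholder values mirror the sanitized plan output above and would need to be replaced with the real attribute values read from a plan or state file.

```go
package main

import (
	"context"
	"encoding/base64"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// mustDecode decodes the base64-encoded PEM material exposed by the
// kube_admin_config attributes (client_certificate, client_key and
// cluster_ca_certificate are all base64-encoded in the provider output).
func mustDecode(s string) []byte {
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	// Placeholders; in practice these are lifted straight out of a plan or state file.
	host := "https://<CLUSTER_FQDN>:443"
	clientCert := "<CLIENT_CERT>"    // base64 PEM
	clientKey := "<CLIENT_KEY>"      // base64 PEM
	clusterCA := "<CLUSTER_CA_CERT>" // base64 PEM

	cfg := &rest.Config{
		Host: host,
		TLSClientConfig: rest.TLSClientConfig{
			CertData: mustDecode(clientCert),
			KeyData:  mustDecode(clientKey),
			CAData:   mustDecode(clusterCA),
		},
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Any cluster-admin operation now works, e.g. listing secrets in all namespaces.
	secrets, err := clientset.CoreV1().Secrets("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster-admin access confirmed: %d secrets visible\n", len(secrets.Items))
}
```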

@katbyte (Collaborator) commented Oct 18, 2019

Hi @nexxai,

We took a look into this and it seems to be a core issue. As such, I have opened https://github.com/hashicorp/terraform/issues/23118 to track it and am going to close this PR.

@ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost locked and limited conversation to collaborators Mar 29, 2020
Development

Successfully merging this pull request may close these issues.

Sanitize kube_admin_config in terraform plan/apply output
5 participants