Problems using outputs from azurerm_container_service due to extra hashes #467

Closed
rberlind opened this issue Oct 30, 2017 · 8 comments

@rberlind

I am having trouble using the exported attributes from the azurerm_container_service resource in Terraform outputs because of extra hashes that are added to the resource's state. In particular, the four profiles (master, agent_pool, linux, and diagnostics) all have a hash inserted between the profile name and its attributes, so we end up with paths like master_profile.33315859.fqdn, agent_pool_profile.1030309280.fqdn, and diagnostics_profile.734881840.storage_uri.

But according to the resource's documentation, the resource is supposed to export the following attributes: id, master_profile.fqdn, agent_pool_profile.fqdn, and diagnostics_profile.storage_uri. Not knowing the value of the hashes ahead of time makes it hard to define outputs that reference these attributes.

Looking at the resource code, I see where the hashes are being added, but it is unclear why.

Ideally, the hashes would not be added. If they have to be added in order to successfully interact with the Azure SDK, then some improved documentation on how to use the exported attributes in Terraform outputs would be useful.

For the Terraform configuration below, I was able to get outputs displayed for id and acs_agent_pool_fqdn (although the latter had no value), but not for acs_master_fqdn. I used the hash 1030309280 for acs_agent_pool_fqdn but did not use the hash 33315859 that I could have used for acs_master_fqdn.

Terraform Version

Terraform v0.10.7

Affected Resource(s)


  • azurerm_container_service

Terraform Configuration Files

outputs.tf

output "id" {
  value = "${azurerm_container_service.k8sexample.id}"
}

output "acs_master_fqdn" {
  value = "${azurerm_container_service.k8sexample.master_profile.fqdn}"
}

output "acs_agent_pool_fqdn" {
  value = "${azurerm_container_service.k8sexample.agent_pool_profile.1030309280.fqdn}"
}

variables.tf

variable "private_key_filename" {
  default     = "private_key.pem"
  description = "Name of the SSH private key in PEM format"
}

variable "public_key_openssh_filename" {
  default     = "public_key_openssh"
  description = "Name of the SSH public key in OpenSSH format"
}

variable "azure_subscription_id" {
  description = "Azure Subscription ID"
}

variable "azure_tenant_id" {
  description = "Azure Tenant ID"
}

variable "azure_client_id" {
  description = "Azure Client ID"
}

variable "azure_client_secret" {
  description = "Azure Client Secret"
}

variable "dns_master_prefix" {
  description = "DNS prefix for the master nodes of your cluster"
}

variable "dns_agent_pool_prefix" {
  description = "DNS prefix for the agent nodes of your cluster"
}

variable "azure_location" {
  description = "Azure Location, e.g. North Europe"
  default = "East US"
}

variable "resource_group_name" {
  description = "Azure Resource Group Name"
  default = "k8sexample-acs"
}

variable "master_vm_count" {
  description = "Number of master VMs to create"
  default = 1
}

variable "vm_size" {
  description = "Azure VM type"
  default = "Standard_A2"
}

variable "worker_vm_count" {
  description = "Number of worker VMs to initially create"
  default = 1
}

variable "admin_user" {
  description = "Administrative username for the VMs"
  default = "azureuser"
}

variable "cluster_name" {
  description = "Name of the K8s cluster"
  default = "k8sexample-cluster"
}

variable "agent_pool_name" {
  description = "Name of the K8s agent pool"
  default = "default"
}

variable "diagnostics_enabled" {
  description = "Boolean indicating whether to enable VM diagnostics"
  default = "false"
}

variable "environment" {
  description = "value passed to ACS Environment tag"
  default = "test"
}

main.tf

terraform {
  required_version = ">= 0.10.1"
}

module "ssh_key" {
  source = "github.com/hashicorp-modules/ssh-keypair-data.git"
  private_key_filename = "${var.private_key_filename}"
}

resource "null_resource" "save_ssh_keys" {
  provisioner "local-exec" {
    command = "echo \"${chomp(module.ssh_key.private_key_pem)}\" > ${var.private_key_filename}"
  }

  provisioner "local-exec" {
    command = "chmod 600 ${var.private_key_filename}"
  }

  provisioner "local-exec" {
    command = "echo \"${chomp(module.ssh_key.public_key_data)}\" > ${var.public_key_openssh_filename}"
  }

  provisioner "local-exec" {
    command = "chmod 600 ${var.public_key_openssh_filename}"
  }
}

provider "azurerm" {
  subscription_id = "${var.azure_subscription_id}"
  tenant_id       = "${var.azure_tenant_id}"
  client_id       = "${var.azure_client_id}"
  client_secret   = "${var.azure_client_secret}"
}

# Azure Resource Group
resource "azurerm_resource_group" "k8sexample" {
  name     = "${var.resource_group_name}"
  location = "${var.azure_location}"
}

# Azure Container Service with Kubernetes orchestrator
resource "azurerm_container_service" "k8sexample" {
  name                   = "${var.cluster_name}"
  location               = "${azurerm_resource_group.k8sexample.location}"
  resource_group_name    = "${azurerm_resource_group.k8sexample.name}"
  orchestration_platform = "Kubernetes"

  master_profile {
    count      = "${var.master_vm_count}"
    dns_prefix = "${var.dns_master_prefix}"
  }

  linux_profile {
    admin_username = "${var.admin_user}"
    ssh_key {
      key_data = "${chomp(module.ssh_key.public_key_data)}"
    }
  }

  agent_pool_profile {
    name       = "${var.agent_pool_name}"
    count      = "${var.worker_vm_count}"
    dns_prefix = "${var.dns_agent_pool_prefix}"
    vm_size    = "${var.vm_size}"
  }

  service_principal {
    client_id     = "${var.azure_client_id}"
    client_secret = "${var.azure_client_secret}"
  }

  diagnostics_profile {
    enabled = "${var.diagnostics_enabled}"
  }

  tags {
    Environment = "${var.environment}"
  }
}

k8s.tfvars

azure_subscription_id = "<YOUR-AZURE-SUBSCRIPTION-ID-FOR-TERRAFORM>"
azure_tenant_id       = "<YOUR-AZURE-TENANT-ID-FOR-TERRAFORM>"
azure_client_id       = "<YOUR-AZURE-CLIENT-ID-FOR-TERRAFORM>"
azure_client_secret   = "<YOUR-AZURE-CLIENT-SECRET-FOR-TERRAFORM>"

dns_master_prefix = "<YOUR-MASTER-DNS-PREFIX>"
dns_agent_pool_prefix = "<YOUR-AGENT-POOL-DNS-PREFIX>"

Expected Behavior

I should have seen all 3 outputs without having to specify hashes.

Actual Behavior

  • I saw id.
  • I saw acs_agent_pool_fqdn, but with a blank value.
  • I did not see acs_master_fqdn at all.

Steps to Reproduce


  1. terraform apply
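
Note: since k8s.tfvars is not auto-loaded by Terraform, the variable file presumably needs to be passed explicitly, along the lines of:

terraform init
terraform apply -var-file=k8s.tfvars
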
@hbuckle
Contributor

hbuckle commented Nov 2, 2017

It's a bit messy, but I think you should be able to do something like the following:

output "acs_agent_pool_fqdn" {
  value = "${lookup(azurerm_container_service.k8sexample.agent_pool_profile[0], "fqdn")}"
}

The profiles are actually single-item arrays containing a hash (map), so you can index them with [0] rather than needing to know the computed hash key.
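
The same pattern should work for the other profiles too, e.g. (untested; the acs_diagnostics_storage_uri output name is just for illustration):

output "acs_master_fqdn" {
  value = "${lookup(azurerm_container_service.k8sexample.master_profile[0], "fqdn")}"
}

output "acs_diagnostics_storage_uri" {
  value = "${lookup(azurerm_container_service.k8sexample.diagnostics_profile[0], "storage_uri")}"
}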

@rberlind
Author

rberlind commented Nov 2, 2017

Thanks @hbuckle, I'll try your suggestion. It's useful to know that they are single-item arrays.
Do you happen to know why that is the case? Was it somehow necessary for interacting properly with the Azure RM API?

@hbuckle
Contributor

hbuckle commented Nov 2, 2017

All the subresources seem to be implemented that way, but I have no idea why; one of the devs may be able to explain.

@paultyng
Contributor

paultyng commented Mar 5, 2018

Was this solved by #907?

@rberlind
Author

rberlind commented Mar 5, 2018

I have not tried the new version, but I don't see that the changes in that PR would completely address my issue, even if they do so for the FQDN. I think we would still have to refer to the other outputs with the trick Henry provided on 11/2/2017, which is what I ended up doing to work around the issue.

@achandmsft achandmsft modified the milestone: 1.4.0 Mar 8, 2018
@metacpp metacpp self-assigned this Apr 18, 2018
@metacpp metacpp modified the milestone: Soon Apr 18, 2018
@R0quef0rt

This issue still exists in Terraform 0.11.10. Just tested the @hbuckle solution, and it fixes the problem.

@tombuildsstuff
Contributor

Hi @rberlind, @hbuckle, @R0quef0rt,

Apologies for the delayed update here!

Microsoft has recently announced the deprecation of Azure Container Service (ACS) in favour of Azure (Managed) Kubernetes Service (AKS).

In preparation for this we're deprecating the azurerm_container_service resource in version 1.20 of the AzureRM Provider. When using a 1.x version of the AzureRM Provider, the azurerm_container_service resource will continue to function until the API is removed on 31st January 2020.

We plan to remove support for the azurerm_container_service resource in version 2.0 of the AzureRM Provider; should you wish to continue using this resource beyond that release, you'll need to ensure your Provider block is pinned to a 1.x version, for example:

provider "azurerm" {
  version = ">= 1.0.0, <= 2.0.0"
}

Alternatively, you can pin the Provider block to a specific version (more information about version pinning can be found on the Provider page):

provider "azurerm" {
  version = "=1.19.0"
}

If you're using ACS with Kubernetes, Microsoft's recommendation is to migrate to Azure (Managed) Kubernetes Service (AKS), which is available as the azurerm_kubernetes_cluster resource within Terraform.
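
For reference, a minimal azurerm_kubernetes_cluster configuration reusing the variables from the original config above might look roughly like this (an untested sketch, not a verified migration; reusing dns_master_prefix as the cluster's dns_prefix is illustrative):

resource "azurerm_kubernetes_cluster" "k8sexample" {
  name                = "${var.cluster_name}"
  location            = "${azurerm_resource_group.k8sexample.location}"
  resource_group_name = "${azurerm_resource_group.k8sexample.name}"
  dns_prefix          = "${var.dns_master_prefix}" # illustrative reuse

  linux_profile {
    admin_username = "${var.admin_user}"
    ssh_key {
      key_data = "${chomp(module.ssh_key.public_key_data)}"
    }
  }

  agent_pool_profile {
    name    = "${var.agent_pool_name}"
    count   = "${var.worker_vm_count}"
    vm_size = "${var.vm_size}"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "${var.azure_client_id}"
    client_secret = "${var.azure_client_secret}"
  }
}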

Since the azurerm_container_service resource is being deprecated, I'm going to close this issue for the moment.

Thanks!

@ghost

ghost commented Mar 5, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 5, 2019