
Error: string is required when updating from 1.10.0 to 1.10.1 #435

Closed
avthart opened this issue Aug 28, 2020 · 11 comments · Fixed by #456

@avthart

avthart commented Aug 28, 2020

Error: string is required with terraform plan when we update from 1.10.0 to 1.10.1

We are still using Terraform v0.12.x

Currently, I'm trying to find out which string is required.

@abhi1693

I have the same issue with terraform v0.13

@rawmind0
Contributor

This seems to be a terraform issue, hashicorp/terraform#25752. It should be addressed in tf v0.13.1. Could you please try upgrading to that tf version? If it's still not working with tf v0.13.1, could you please add more detailed data so we can reproduce the issue?

rawmind0 self-assigned this Aug 31, 2020
@abhi1693

I upgraded to v0.13.1 and it didn't work. I just deleted the state files and redid everything, after which it started working.

@nickvth

nickvth commented Aug 31, 2020

Tomorrow I will add some tf debug logs; deleting the state file is not an option.

@nickvth

nickvth commented Sep 1, 2020

@rawmind0 Debug log:

2020/09/01 14:11:50 [DEBUG] ReferenceTransformer: "rancher2_cluster.cluster" references: []
2020-09-01T14:11:50.252Z [DEBUG] plugin.terraform-provider-rancher2_v1.10.1: 2020/09/01 14:11:50 [WARN] unexpected type cty.String for map in json state
2020-09-01T14:11:50.252Z [DEBUG] plugin.terraform-provider-rancher2_v1.10.1: 2020/09/01 14:11:50 [WARN] unexpected type cty.String for map in json state
2020/09/01 14:11:50 [ERROR] <root>: eval: *terraform.EvalReadState, err: string is required
2020/09/01 14:11:50 [ERROR] <root>: eval: *terraform.EvalSequence, err: string is required
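(For context on what this log points at: the WARN lines, "unexpected type cty.String for map in json state", suggest a map attribute was serialized into state as a JSON-encoded string rather than as a map. Below is a minimal sketch of the kind of normalization a provider-side fix has to perform when reading such a value back; `normalizeMap` is a hypothetical helper for illustration, not code from the rancher2 provider or PR #456.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeMap accepts a value read from state and returns it as a map.
// If the value was flattened to a JSON-encoded string (the situation the
// "unexpected type cty.String for map in json state" warning points at),
// it is decoded back into a map.
func normalizeMap(v interface{}) (map[string]interface{}, error) {
	switch t := v.(type) {
	case map[string]interface{}:
		return t, nil // already the expected shape
	case string:
		out := map[string]interface{}{}
		if err := json.Unmarshal([]byte(t), &out); err != nil {
			return nil, err
		}
		return out, nil
	default:
		return nil, fmt.Errorf("unexpected type %T for map in state", v)
	}
}

func main() {
	m, err := normalizeMap(`{"grafana.persistence.enabled":"true"}`)
	fmt.Println(m, err)
}
```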

@rawmind0
Contributor

rawmind0 commented Sep 1, 2020

@nickvth, thanks for providing the debug log, but I need to reproduce the issue. I've run tf upgrades with a cluster data source and resource defined, but it's working fine for me. Could you please provide your rancher2_cluster.cluster tf definition so I can test with it?

@nickvth

nickvth commented Sep 1, 2020

> @nickvth, thanks for providing the debug log, but I need to reproduce the issue. I've run tf upgrades with a cluster data source and resource defined, but it's working fine for me. Could you please provide your rancher2_cluster.cluster tf definition so I can test with it?

Did you first create a cluster with < 1.10.1, then upgrade the plugin, and then rerun terraform plan?

@rawmind0
Contributor

rawmind0 commented Sep 1, 2020

> Did you first create a cluster with < 1.10.1, then upgrade the plugin, and then rerun terraform plan?

Obviously. I tested on tf 0.12 and 0.13, but I'm not getting any issue.

@fad3t

fad3t commented Sep 14, 2020

Hi, I'm having the same issue when upgrading the provider from 1.10.0 to 1.10.1 or 1.10.2. I tried multiple Terraform 0.13.x releases; same problem. Here's the corresponding cluster definition:

resource "rancher2_cluster" "rke-backend-01" {
  name = "rke-backend-01"
  cluster_auth_endpoint {
    fqdn = "api.rke-backend-01.domain.com:6443"
  }
  rke_config {
    authentication {
      strategy = "x509|webhook"
      sans = [
        "api.rke-backend-01.domain.com",
        "10.10.10.10"
      ]
    }
    private_registries {
      url        = "registry.domain.com"
      user       = "docker"
      password   = var.registry_password
      is_default = true
    }
    network {
      plugin = "calico"
    }
    services {
      kube_api {
        service_cluster_ip_range = "10.43.0.0/16"
        pod_security_policy      = true
        secrets_encryption_config {
          enabled = true
        }
      }
      kube_controller {
        cluster_cidr             = "10.42.0.0/16"
        service_cluster_ip_range = "10.43.0.0/16"
      }
      kubelet {
        cluster_domain     = "cluster.local"
        cluster_dns_server = "10.43.0.10"
      }
      etcd {
        backup_config {
          enabled = true
          s3_backup_config {
            endpoint    = "s3.domain.com"
            bucket_name = "bucket_rancher"
            access_key  = var.s3_access_key
            secret_key  = var.s3_secret_key
            custom_ca   = filebase64("ca.pem")
          }
        }
      }
    }
    ingress {
      extra_args = {
        "enable-ssl-passthrough" = ""
      }
    }
  }
  default_pod_security_policy_template_id = data.rancher2_pod_security_policy_template.unrestricted.id
  enable_cluster_monitoring               = true
  cluster_monitoring_input {
    answers = {
      "grafana.persistence.enabled"             = true
      "grafana.persistence.size"                = "10Gi"
      "grafana.persistence.storageClass"        = "csi-rbd-sc"
      "prometheus.persistence.enabled"          = "true"
      "prometheus.persistence.size"             = "30Gi"
      "prometheus.persistence.storageClass"     = "csi-rbd-sc"
      "prometheus.resources.core.limits.cpu"    = "1000m"
      "prometheus.resources.core.limits.memory" = "1500Mi"
    }
  }
}

@rawmind0
Contributor

@fad3t, thanks for your example. I could finally reproduce the issue.

Submitted PR #456 to fix it. Once it's merged, I'll cut a new 1.10.3 release.

@fad3t

fad3t commented Sep 15, 2020

Great, thanks a lot @rawmind0 !
