vApp may need powering off to avoid vApp network removal errors #473

Closed
Didainius opened this issue Mar 10, 2020 · 0 comments · Fixed by #489

@Didainius (Collaborator) commented:
With the HCL example at the bottom of this issue, an error like the following may sometimes occur during a destroy operation:

Error: error removing vApp network: operation error: task did not complete successfully: [400:BAD_REQUEST] - [ 86acce78-d07e-4a3e-bba3-8d663d1aff15 ] Invalid NAT rule containing vAppScopedVmId 97cc3db5-6c74-41f3-b12e-45601fd28dfd and nic index 0. Either the VM is not connected to the network or it is configured with DHCP addressing mode. NAT rule cannot be configured for nics with DHCP addressing mode. - task error: [400 - BAD_REQUEST] [ 86acce78-d07e-4a3e-bba3-8d663d1aff15 ] Invalid NAT rule containing vAppScopedVmId 97cc3db5-6c74-41f3-b12e-45601fd28dfd and nic index 0. Either the VM is not connected to the network or it is configured with DHCP addressing mode. NAT rule cannot be configured for nics with DHCP addressing mode.

During investigation we confirmed that deletion works fine if the vApp is actually powered off during removal (the UI also does not allow removing vApp networks while the vApp is running). We need to evaluate the proper way to handle vApp state during network removal: whether powering the vApp off during network removal makes sense, or whether this should be tackled differently.

variable "fencing" {
  type = bool
  default = true
}


resource "vcd_vapp" "test" {
  name = "test-vApp"

  power_on = false
}

# Attach existing Org networks
resource "vcd_vapp_org_network" "org-routed" {
  vapp_name = vcd_vapp.test.name
  org_network_name  = "my-vdc-int-net"

  is_fenced = var.fencing
}

resource "vcd_vapp_org_network" "org-isolated" {
  vapp_name = vcd_vapp.test.name
  org_network_name  = "isolated2"

  is_fenced = var.fencing
}

resource "vcd_vapp_org_network" "org-direct" {
  vapp_name = vcd_vapp.test.name
  org_network_name  = "direct-network"

  is_fenced = var.fencing
}

# Create vApp networks - 1 isolated, 3x attached
resource "vcd_vapp_network" "isolated" {
  name               = "isolated"
  vapp_name          = vcd_vapp.test.name
  gateway            = "192.168.1.1"
  netmask            = "255.255.255.0"
  dns1               = "192.168.1.1"
  dns2               = "192.168.1.2"
  dns_suffix         = "mybiz.biz"
  guest_vlan_allowed = true

  static_ip_pool {
    start_address = "192.168.1.51"
    end_address   = "192.168.1.100"
  }

  dhcp_pool {
    start_address = "192.168.1.2"
    end_address   = "192.168.1.50"
  }
}

resource "vcd_vapp_network" "attached-routed" {
  name      = "attached-routed"
  vapp_name = vcd_vapp.test.name
  gateway   = "192.168.3.1"
  netmask   = "255.255.255.0"

  org_network_name = "my-vdc-int-net"

  static_ip_pool {
    start_address = "192.168.3.51"
    end_address   = "192.168.3.100"
  }

  dhcp_pool {
    start_address = "192.168.3.2"
    end_address   = "192.168.3.50"
  }
}

resource "vcd_vapp_network" "attached-isolated" {
  name      = "attached-isolated"
  vapp_name = vcd_vapp.test.name
  gateway   = "192.168.4.1"
  netmask   = "255.255.255.0"

  org_network_name = "isolated_net"

  static_ip_pool {
    start_address = "192.168.4.51"
    end_address   = "192.168.4.100"
  }

  dhcp_pool {
    start_address = "192.168.4.2"
    end_address   = "192.168.4.50"
  }
}

resource "vcd_vapp_network" "attached-direct" {
  name      = "attached-direct"
  vapp_name = vcd_vapp.test.name
  gateway   = "192.168.5.1"
  netmask   = "255.255.255.0"

  org_network_name = "direct-network"

  static_ip_pool {
    start_address = "192.168.5.51"
    end_address   = "192.168.5.100"
  }

  dhcp_pool {
    start_address = "192.168.5.2"
    end_address   = "192.168.5.50"
  }
}


# VM with all types of networks
resource "vcd_vapp_vm" "web1" {
  vapp_name     = vcd_vapp.test.name
  name          = "web1"
  catalog_name  = "my-catalog"
  template_name = "photon-os"
  memory        = 1024
  cpus          = 2
  cpu_cores     = 1

  network {
    type               = "vapp"
    name               = vcd_vapp_network.attached-routed.name
    ip_allocation_mode = "POOL"
    is_primary         = true
  }

  network {
    type               = "vapp"
    name               = vcd_vapp_network.attached-isolated.name
    ip_allocation_mode = "POOL"
  }

  # No IPs in direct network
  network {
    type               = "vapp"
    name               = vcd_vapp_network.attached-direct.name
    ip_allocation_mode = "POOL"
  }

  network {
    type               = "vapp"
    name               = vcd_vapp_network.isolated.name
    ip_allocation_mode = "POOL"
  }

  # Attached org networks of all types
  network {
    type               = "org"
    name               = vcd_vapp_org_network.org-routed.org_network_name
    ip_allocation_mode = "POOL"
  }
  
  network {
    type               = "org"
    name               = vcd_vapp_org_network.org-isolated.org_network_name
    ip_allocation_mode = "POOL"
  }

  network {
    type               = "org"
    name               = vcd_vapp_org_network.org-direct.org_network_name
    ip_allocation_mode = "DHCP"
  }

}
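
Until the provider handles this itself, one possible mitigation from the configuration side is to make sure nothing keeps the vApp running before destroy. A minimal, untested sketch follows; it assumes the vcd_vapp_vm power_on argument, and that powering the VM off is enough to leave the vApp powered off:

# Hypothetical workaround sketch, not a confirmed fix: power the VM off
# first, so the vApp is not running when its networks are removed.
resource "vcd_vapp_vm" "web1" {
  # ... all other arguments exactly as in the example above ...

  power_on = false # apply this change first, then run `terraform destroy`
}

Per the investigation above, network removal succeeds when the vApp is powered off, so applying this change before running `terraform destroy` should avoid the NAT rule error.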
Didainius added a commit referencing this issue on Apr 10, 2020:

This PR bumps the govcd dependency to support vCD 10.1 and also:
* Adds a note that NSX-T is not supported yet
* Adds a support deprecation warning during the apply phase for vCD <= 9.1
* Adds a link to the GitHub changelog on the main documentation page (https://www.terraform.io/docs/providers/vcd/index.html)
* Removes 9.1 from the list of supported versions and adds 10.1 in https://www.terraform.io/docs/providers/vcd/index.html
* Uses Undeploy() instead of PowerOff() for VMs during the update phase
* Closes #473 by pulling in vmware/go-vcloud-director#299