This repository has been archived by the owner on Dec 5, 2020. It is now read-only.
When a new vsphere_virtual_machine.vm resource is created, the hostname changes, so the rancher_host resource should be replaced as well; the new host is registered by the VM's provisioner:
provisioner "remote-exec" {
  connection {
    type     = "ssh"
    user     = "root"
    password = "${var.root_password}"
  }

  inline = [
    # Add host to rancher
    "${rancher_registration_token.rancher-token.command}",
    # Wait for load balancer service to be up and running
    "sleep 20",
  ]
}
Actual Behavior
The rancher_host resource gets renamed instead, which results in two hosts in Rancher with the same name: one up and running, and another corresponding to the old VM that has been deleted.
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
terraform apply
edit the vsphere_virtual_machine.vm resource's template attribute, which forces replacement of the resource
terraform apply
Important Factoids
This behaviour should be configurable on the rancher_host resource, with something like the triggers argument on null_resource or keepers on the random provider.
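Until such an option exists, one possible workaround (a sketch only; the resource names and the default_ip_address attribute are assumptions based on typical configurations, not taken from this issue) is to move the registration provisioner onto a null_resource whose triggers map is keyed on the VM's ID, so that replacing the VM also re-runs the registration:

```hcl
# Hypothetical sketch: re-register the host whenever the VM is replaced.
resource "null_resource" "register_host" {
  # Changing the VM (and thus its ID) forces this resource to be re-created,
  # which re-runs the remote-exec provisioner below.
  triggers = {
    vm_id = "${vsphere_virtual_machine.vm.id}"
  }

  provisioner "remote-exec" {
    connection {
      type     = "ssh"
      host     = "${vsphere_virtual_machine.vm.default_ip_address}"
      user     = "root"
      password = "${var.root_password}"
    }

    inline = [
      # Add host to rancher
      "${rancher_registration_token.rancher-token.command}",
    ]
  }
}
```

This does not remove the stale rancher_host entry for the deleted VM, but it ensures the new VM is always registered after a replacement.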
@raphink I am not using the resource anymore, as I am interacting directly with Rancher's API. I will test it and report back here if it still causes the issue. Thanks!
Terraform Version
terraform 0.11.7
provider.rancher v1.2.1
Affected Resource(s)
Terraform Configuration Files
Debug Output
None.
Panic Output
None.
References