
[scaleway provider] scaleway_volume_attachment does not get destroyed and thus can't migrate from C2S to C2M #14686

Closed
polymeris opened this issue May 19, 2017 · 3 comments

Comments


polymeris commented May 19, 2017

Terraform Version

v0.9.5

Affected Resource(s)

  • scaleway_volume_attachment
  • scaleway_server

Terraform Configuration Files

C2S

{
	"provider": {
		"scaleway": {
			"region": "par1"
		}
	},
	"resource": {
		"scaleway_volume": {
			"data": {
				"name": "test-data",
				"size_in_gb": 25,
				"type": "l_ssd"
			}
		},
		"scaleway_volume_attachment": {
			"data": {
				"server": "${scaleway_server.host.id}",
				"volume": "${scaleway_volume.data.id}"
			}
		},
		"scaleway_server": {
			"host": {
				"name": "test",
				"type": "C2S",
				"image": "d0c50b37-f0f5-4924-97b0-1fd40c457efe"
			}
		},
		"scaleway_ip": {
			"ip": {
				"server": "${scaleway_server.host.id}"
			}
		}
	}
}

Steps to Reproduce

Steps required to reproduce the issue:

  1. Create the above file
  2. terraform apply
  3. Edit the file, change the type from C2S to C2M (see the sketch after this list)
  4. terraform apply
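
For step 3, the only change is the server type; a minimal sketch of the edited scaleway_server block (the rest of the file stays the same):

		"scaleway_server": {
			"host": {
				"name": "test",
				"type": "C2M",
				"image": "d0c50b37-f0f5-4924-97b0-1fd40c457efe"
			}
		}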

Expected Behavior

  • A new C2M server gets created
  • It is attached to the existing "test-data" volume
  • Reserved IP is associated with it
  • Both the original C2S server and its main volume get destroyed.

Actual Behavior

[...]
Error applying plan:

1 error(s) occurred:

* scaleway_volume_attachment.data (destroy): 1 error(s) occurred:

* scaleway_volume_attachment.data: timeout while waiting for state to become 'success' (timeout: 5m0s)
[...]

None of the resources (server, volumes, volume_attachments, ips) get destroyed.

I suspect the issue is with the server not being stopped before trying to detach the volume. (EDIT: maybe not, manually stopping the server does not help)

Edit: logs

https://gist.github.com/polymeris/4b74548ee854709ff12a2ea185c1fbce

@nicolai86
Contributor

Thank you for the bug report, @polymeris.

In this case the best we can probably do is to increase the timeout.
To change volumes on a Scaleway server, the server needs to be stopped, and stopping a server transfers all data off the attached volumes. There are cases in the Scaleway community forum where this apparently took more than 24 hours.

From your logs I can see that Terraform only waited 5 minutes, which is far too short. I'll look into this.
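
For reference, if scaleway_volume_attachment exposed Terraform's configurable timeouts block (an assumption; it may not support it today, and the real fix is probably a longer default inside the provider), a user-tunable delete timeout would look roughly like this in the JSON config:

		"scaleway_volume_attachment": {
			"data": {
				"server": "${scaleway_server.host.id}",
				"volume": "${scaleway_volume.data.id}",
				"timeouts": {
					"delete": "60m"
				}
			}
		}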

@polymeris
Author

Is that just for "baremetal" servers?

I tried stopping the server manually before terraform apply and still got the same issue, btw.


ghost commented Apr 9, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 9, 2020