
[scaleway provider] scaleway_volume_attachment does not get destroyed and thus can't migrate from C2S to C2M #3

Closed
hashibot opened this issue Jun 13, 2017 · 0 comments · Fixed by #13
This issue was originally opened by @polymeris as hashicorp/terraform#14686. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.9.5

Affected Resource(s)

  • scaleway_volume_attachment
  • scaleway_server

Terraform Configuration Files

C2S

{
	"provider": {
		"scaleway": {
			"region": "par1"
		}
	},
	"resource": {
		"scaleway_volume": {
			"data": {
				"name": "test-data",
				"size_in_gb": 25,
				"type": "l_ssd"
			}
		},
		"scaleway_volume_attachment": {
			"data": {
				"server": "${scaleway_server.host.id}",
				"volume": "${scaleway_volume.data.id}"
			}
		},
		"scaleway_server": {
			"host": {
				"name": "test",
				"type": "C2S",
				"image": "d0c50b37-f0f5-4924-97b0-1fd40c457efe"
			}
		},
		"scaleway_ip": {
			"ip": {
				"server": "${scaleway_server.host.id}"
			}
		}
	}
}

Steps to Reproduce


  1. Create the above file
  2. terraform apply
  3. Edit the file, change the type from C2S to C2M
  4. terraform apply
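For reference, step 3 amounts to changing only the server `type` in the configuration above; the rest of the file (name, image, attachments) stays the same:

```json
{
	"scaleway_server": {
		"host": {
			"name": "test",
			"type": "C2M",
			"image": "d0c50b37-f0f5-4924-97b0-1fd40c457efe"
		}
	}
}
```

Because `type` forces a new server, Terraform plans to destroy and recreate `scaleway_server.host`, which in turn requires destroying and recreating the dependent `scaleway_volume_attachment.data`.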

Expected Behavior

  • A new C2M server gets created
  • It is attached to the existing "test-data" volume
  • Reserved IP is associated with it
  • Both the original C2S server and its main volume get destroyed.

Actual Behavior

[...]
Error applying plan:

1 error(s) occurred:

* scaleway_volume_attachment.data (destroy): 1 error(s) occurred:

* scaleway_volume_attachment.data: timeout while waiting for state to become 'success' (timeout: 5m0s)
[...]

None of the resources (server, volumes, volume_attachments, ips) get destroyed.

I suspect the issue is that the server is not stopped before Terraform tries to detach the volume. (EDIT: maybe not; manually stopping the server does not help.)

Edit: logs

https://gist.github.com/polymeris/4b74548ee854709ff12a2ea185c1fbce

@hashibot added the bug label Jun 13, 2017
nicolai86 added a commit that referenced this issue Jun 30, 2017
4a153c4 introduced a deadlock caused by
duplicated calls to `mu.Lock()` in a function as well as inside a
`resource.Retry` closure.

fixes #3
jeansebastienh pushed a commit to jeansebastienh/terraform-provider-scaleway that referenced this issue May 4, 2021
docs(rdb): fix the meta information