dms-vpc-role is not configured properly when creating aws_dms_replication_instance #11025
Comments
I believe this is related to this issue. In my experience, what solves both issues is to manage the DMS roles in a separate Terraform job. In our case, we have a "top-level" Terraform job that sets up our basic infrastructure and exports key objects, and then we have other jobs that leverage remote state to integrate with those exported objects. When I moved the DMS role creation to the top-level job, both this issue and the issue linked above disappeared.
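A minimal sketch of that split, with hypothetical backend settings, state keys, and output names:

```hcl
# Top-level job: create the DMS roles once and export what downstream
# jobs need (role resources as shown elsewhere in this thread).
output "dms_vpc_role_arn" {
  value = aws_iam_role.dms-vpc-role.arn
}

# Downstream job: read the exported values via remote state.
# Backend type and settings here are placeholders.
data "terraform_remote_state" "core" {
  backend = "s3"

  config = {
    bucket = "example-terraform-state"
    key    = "core/terraform.tfstate"
    region = "us-east-1"
  }
}

# DMS resources in the downstream job can then rely on the role already
# being fully provisioned, referencing it as:
#   data.terraform_remote_state.core.outputs.dms_vpc_role_arn
```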
If I had to guess, I would say there is a missing dependency that both allows the DMS instance to start creating before the roles are fully provisioned, and allows the DMS roles to be deleted before the instance teardown has completed (which causes the ENI cleanup to fail). I did try adding an explicit dependency from the DMS instance to the roles, which did not help.
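For illustration, the explicit dependency attempt described above would look roughly like this (resource names hypothetical); as noted, it did not resolve the race:

```hcl
resource "aws_dms_replication_instance" "example" {
  # ... instance arguments elided ...

  # Explicit ordering against the IAM role and its policy attachment;
  # reportedly not sufficient to avoid the provisioning race.
  depends_on = [
    aws_iam_role.dms-vpc-role,
    aws_iam_role_policy_attachment.dms-vpc-role-AmazonDMSVPCManagementRole,
  ]
}
```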
Just for additional info, running with …
My workaround is to …
I can confirm that …
Just ran into this as well; depends_on (as stated in the documentation) is not adequate. The dirty sleep above seems to work for now. Until a cleaner fix can be implemented, a documentation update would be great!
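One way to express the dirty sleep is with the hashicorp/time provider; a minimal sketch, assuming the role and policy-attachment resources shown later in this thread, with a guessed duration:

```hcl
# Wait a fixed period after the policy attachment so IAM has time to
# propagate before the DMS resources are created.
resource "time_sleep" "wait_for_dms_vpc_role" {
  depends_on      = [aws_iam_role_policy_attachment.dms-vpc-role-AmazonDMSVPCManagementRole]
  create_duration = "30s"
}

resource "aws_dms_replication_subnet_group" "example" {
  depends_on = [time_sleep.wait_for_dms_vpc_role]

  replication_subnet_group_id          = "example"                  # placeholder
  replication_subnet_group_description = "Example DMS subnet group" # placeholder
  subnet_ids                           = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders
}
```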
I got the same error.
depends_on + sleep worked for me ;)
depends_on + sleep worked for me as well.
+1 pls fix 🥳
See this link. It might be worth following this advice on an AWS account that has never used DMS.
By the way, I found the same issue in CloudFormation as well.
I tried both of the options mentioned; however, I am still getting this error. Not sure how to proceed. Appreciate your help and suggestions. I tried depends_on. When I verify the roles created in the AWS Console, I see the required roles created with the appropriate policy. Even on a second attempt to apply the change, I still get the error. Thanks.
Hello all. Here is what worked for me.

Step 1: create the role with the standard DMS trust policy and attach the AWS-managed "AmazonDMSVPCManagementRole" policy (the policy-document body was collapsed in the original comment; this is the documented DMS assume-role policy):

```hcl
data "aws_iam_policy_document" "dms_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["dms.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "dms-vpc-role" {
  name               = "dms-vpc-role" # DMS looks for exactly this name
  assume_role_policy = data.aws_iam_policy_document.dms_assume_role.json
}

resource "aws_iam_role_policy_attachment" "dms-vpc-role-AmazonDMSVPCManagementRole" {
  role       = aws_iam_role.dms-vpc-role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole"
}
```

Once created, I made sure via the AWS Console that the "AmazonDMSVPCManagementRole" policy was attached to the role.

Step 2: make the subnet group depend on the policy attachment:

```hcl
resource "aws_dms_replication_subnet_group" "subnet-group" {
  depends_on = [
    aws_iam_role_policy_attachment.dms-vpc-role-AmazonDMSVPCManagementRole,
  ]
  # replication_subnet_group_id, description, and subnet_ids omitted here
}
```

And it looks like this resource specifically looks for a role named "dms-vpc-role", as defined in step 1. It looks like there is a bug in the provider, specifically in aws_dms_replication_subnet_group. Thanks.
A retry step has been added to the create function for aws_dms_replication_subnet_group. This resolves the error reported in this issue.
This functionality has been released in v4.31.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
Good morning, my first post here. I have the same problem using the DMS module: when I change the name "dms-vpc-role" to a custom name, I get the same error with the latest provider. Any ideas? https://github.com/terraform-aws-modules/terraform-aws-dms/blob/v1.5.3/main.tf#L88
@ffelipek07, this IAM role name needs to be hardcoded; AWS DMS expects a role named exactly "dms-vpc-role". Thanks.
@ffelipek07 Not sure how to go about it. Thanks.
@ffelipek07, if AWS expects this role for whatever reason, then we have to live with it. Thanks.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
This is a similar (or the same) issue as terraform-providers/terraform-provider-aws#7748, which was closed.
Community Note
Terraform Version
Affected Resource(s)
aws_dms_replication_instance
aws_dms_replication_subnet_group
Terraform Configuration Files
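A minimal configuration of the shape this thread describes, with hypothetical IDs and subnets:

```hcl
resource "aws_dms_replication_subnet_group" "replication_subnet" {
  replication_subnet_group_id          = "example-subnet-group"     # placeholder
  replication_subnet_group_description = "Example DMS subnet group" # placeholder
  subnet_ids                           = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholders
}

resource "aws_dms_replication_instance" "example" {
  replication_instance_id     = "example-instance" # placeholder
  replication_instance_class  = "dms.t3.micro"
  allocated_storage           = 20
  replication_subnet_group_id = aws_dms_replication_subnet_group.replication_subnet.id
}
```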
Debug Output
```
Error applying plan:
xxx: resource "aws_dms_replication_subnet_group" "replication_subnet" {
```
Expected Behavior

On first terraform apply: the DMS resources are created successfully.

Actual Behavior

On first terraform apply:

```
Error applying plan:
xxx: resource "aws_dms_replication_subnet_group" "replication_subnet" {
```

On second terraform apply: …

Steps to Reproduce

terraform apply