docdb.ElasticCluster resource is not behaving as expected #4273
Comments
It looks like this is just how long it takes to provision this resource. I also tried provisioning one in the AWS console manually and it took just as long.
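If the long provisioning time ever runs up against the default operation timeout, one possible mitigation is the custom timeouts resource option. A minimal sketch in Python, assuming a longer create window is wanted (the 45-minute value is an arbitrary example, not a recommendation from this thread):

import pulumi
import pulumi_aws as aws

# Sketch: extend the create timeout for a slow-to-provision Elastic Cluster.
# The timeout value here is illustrative only.
cluster = aws.docdb.ElasticCluster(
    "elastic-cluster",
    admin_user_name="elasticadmin",
    admin_user_password="password",
    auth_type="PLAIN_TEXT",
    shard_capacity=2,
    shard_count=2,
    opts=pulumi.ResourceOptions(
        custom_timeouts=pulumi.CustomTimeouts(create="45m"),
    ),
)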
This is the diff that I see (curious if you see anything different):

    [urn=urn:pulumi:dev::pulumi-typescript-app::pulumi:pulumi:Stack::pulumi-typescript-app-dev]
    +-aws:docdb/elasticCluster:ElasticCluster: (replace)
        [id=arn:aws:docdb-elastic:us-east-2:12345678910:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373]
        [urn=urn:pulumi:dev::pulumi-typescript-app::aws:docdb/elasticCluster:ElasticCluster::chall-cluster]
        [provider=urn:pulumi:dev::pulumi-typescript-app::pulumi:providers:aws::default_6_44_0::e8549a8a-5758-4aeb-baaf-a12fd2e2604d]
        adminUserName             : "chall"
        adminUserPassword         : [secret]
      ~ arn                       : "arn:aws:docdb-elastic:us-east-2:12345678910:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373" => output<string>
        authType                  : "PLAIN_TEXT"
      ~ endpoint                  : "chall-cluster-8ed8f01-12345678910.us-east-2.docdb-elastic.amazonaws.com" => output<string>
      ~ id                        : "arn:aws:docdb-elastic:us-east-2:616138583583:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373" => output<string>
      ~ kmsKeyId                  : "AWS_OWNED_KMS_KEY" => output<string>
      ~ name                      : "chall-cluster-8ed8f01" => "chall-cluster-f8c21fb"
      ~ preferredMaintenanceWindow: "Sun:04:05-Sun:04:35" => output<string>
        shardCapacity             : 2
        shardCount                : 2
        subnetIds                 : [
            [0]: "subnet-09f1542f52e34258d"
            [1]: "subnet-0f1175830383f6edb"
        ]
      - tagsAll                   : {}
      - vpcSecurityGroupIds       : [
      -     [0]: "sg-0e7b99ad3860e94db"
        ]
      + vpcSecurityGroupIds       : output<string>

Resources:
    +-1 to replace
    4 unchanged

Looking at the diff gRPC logs, it looks like the replace is due to the kmsKeyId property:

{
  "method": "/pulumirpc.ResourceProvider/Diff",
  "request": {
    "id": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
    "urn": "urn:pulumi:dev::pulumi-typescript-app::aws:docdb/elasticCluster:ElasticCluster::chall-cluster",
    "olds": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "arn": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
      "authType": "PLAIN_TEXT",
      "endpoint": "chall-cluster-8ed8f01-123456789123.us-east-2.docdb-elastic.amazonaws.com",
      "id": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
      "kmsKeyId": "AWS_OWNED_KMS_KEY",
      "name": "chall-cluster-8ed8f01",
      "preferredMaintenanceWindow": "Sun:04:05-Sun:04:35",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ],
      "tagsAll": {},
      "vpcSecurityGroupIds": [
        "sg-0e7b99ad3860e94db"
      ]
    },
    "news": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "authType": "PLAIN_TEXT",
      "name": "chall-cluster-8ed8f01",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ]
    },
    "oldInputs": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "authType": "PLAIN_TEXT",
      "name": "chall-cluster-8ed8f01",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ]
    }
  },
  "response": {
    "replaces": [
      "kmsKeyId"
    ],
    "changes": "DIFF_SOME",
    "diffs": [
      "tagsAll"
    ]
  },
  "metadata": {
    "kind": "resource",
    "mode": "client",
    "name": "aws"
  }
}

My hunch is that this one is caused by pulumi/pulumi-terraform-bridge#2171, because the kmsKeyId property triggers a replace even though it was never set in the program.
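(For reference, gRPC logs like the one above can be captured by pointing the PULUMI_DEBUG_GRPC environment variable at a file before running the operation, e.g. PULUMI_DEBUG_GRPC=$PWD/grpc.json pulumi preview.)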
I think this is expected behavior. The name (first argument) is how Pulumi identifies the cluster. If you change it, Pulumi thinks the old resource disappeared and a new one appeared, and doesn't know they are related. You can use the alias resource option to tell Pulumi that the new name should map to the old resource (see the sketch below).
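A minimal sketch of the alias option in Python, assuming the cluster was originally created under the logical name "chall-cluster" (the new name, subnet IDs, and credentials here are illustrative, taken from the diff above):

import pulumi
import pulumi_aws as aws

# Hypothetical rename: the cluster used to be registered as "chall-cluster".
# The alias tells Pulumi the new logical name refers to the same resource,
# so it updates in place instead of doing a delete-and-replace.
elastic_cluster = aws.docdb.ElasticCluster(
    "chall-cluster-v2",
    admin_user_name="chall",
    admin_user_password="password",
    auth_type="PLAIN_TEXT",
    shard_capacity=2,
    shard_count=2,
    subnet_ids=[
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb",
    ],
    opts=pulumi.ResourceOptions(aliases=[pulumi.Alias(name="chall-cluster")]),
)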
I have the same issue and wondered if there's a workaround for now. I was going to use … I'm assuming it's using this API, which requires an ARN: https://docs.aws.amazon.com/documentdb/latest/developerguide/API_elastic_GetCluster.html
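For context, one way to see what that API returns is to call it directly. A sketch using boto3, assuming the "docdb-elastic" client and its get_cluster call (boto3's names for the GetCluster API linked above); the ARN is the illustrative value from the logs:

import boto3

# The Elastic Cluster API lives on a separate "docdb-elastic" endpoint,
# and GetCluster is keyed by the full cluster ARN rather than a short name.
client = boto3.client("docdb-elastic", region_name="us-east-2")
resp = client.get_cluster(
    clusterArn="arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373"
)
print(resp["cluster"]["status"])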
@notjosse I just tried reproducing …
@corymhall Thanks so much for the support and for tackling this so quickly!
@corymhall wouldn't this be addressed now?
Describe what happened
The aws.docdb.ElasticCluster resource is exhibiting some unexpected/unwanted behavior. The issues are the following:
Sample program
"""An AWS Python Pulumi program"""
import pulumi
import pulumi_aws as aws
default_config = pulumi.Config()
aws_config = pulumi.Config("aws")
elastic_cluster = aws.docdb.ElasticCluster(
"elastic-cluster",
admin_user_name="elasticadmin",
admin_user_password="password",
auth_type="PLAIN_TEXT",
shard_capacity=2,
shard_count=2,
)
docdb = aws.docdb.Cluster("docdb",
cluster_identifier="my-docdb-cluster",
engine="docdb",
master_username="foo",
master_password="mustbeeightchars",
backup_retention_period=5,
preferred_backup_window="07:00-09:00",
skip_final_snapshot=True)
pulumi.export("elastic_cluster_arn", elastic_cluster.id)
pulumi.export("elastic_cluster_endpoint", elastic_cluster.endpoint)
pulumi.export("elastic_cluster_id", elastic_cluster.id.apply(lambda arn: arn.split("/")[-1]))
Log output
No response
Affected Resource(s)
aws.docdb.ElasticCluster
Output of pulumi about
CLI
Version 3.124.0
Go Version go1.22.5
Go Compiler gc
Plugins
KIND NAME VERSION
resource aws 6.43.0
language python unknown
resource random 4.16.3
Host
OS darwin
Version 14.5
Arch arm64
Dependencies:
NAME VERSION
pip 24.1.1
pulumi_aws 6.43.0
pulumi_random 4.16.3
python-dotenv 1.0.1
setuptools 70.2.0
wheel 0.43.0
Pulumi locates its logs in /var/folders/6_/j5ng6ypd5_96pdf4b849tc6c0000gp/T/ by default
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).