Add user_group_ids field to elasticache replication group #20406
Conversation
When I put this PR together, I had thought that a set of IDs could be accepted, but upon testing it turns out that the API accepts an array of strings yet will only accept one element. Should I leave this as it is? I'm thinking AWS would have made it this way for a reason, perhaps to support multiple IDs in the future.
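For context, the field under discussion would be used roughly like this. This is a hedged sketch: the attribute name user_group_ids comes from the PR title, while the resource arguments and names around it are illustrative assumptions, not code from this PR.

```hcl
resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "example-group"
  replication_group_description = "Replication group with RBAC user groups"
  node_type                     = "cache.t3.micro"
  engine                        = "redis"
  # RBAC user groups require in-transit encryption on the group.
  transit_encryption_enabled = true

  # The new attribute added by this PR: a list of ElastiCache user
  # group IDs. At the time of this discussion, the AWS API accepted
  # an array here but rejected more than one element.
  user_group_ids = [aws_elasticache_user_group.example.user_group_id]
}
```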
Welcome @jamesglennan 👋
It looks like this is your first Pull Request submission to the Terraform AWS Provider! If you haven't already done so, please make sure you have checked out our CONTRIBUTING guide and FAQ to make sure your contribution is adhering to best practice and has all the necessary elements in place for a successful approval.
Also take a look at our FAQ which details how we prioritize Pull Requests for inclusion.
Thanks again, and welcome to the community! 😃
Hey @jamesglennan, I needed multiple user groups attached to the replication group and am currently using a null_resource for this, so I ran into the same issue as you did. Take a look at this; it seems that accepting only a single-element array as input is an error in the documentation rather than an error in the implementation:
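The null_resource workaround mentioned above presumably looks something like the following. This is a hedged sketch, not code from this PR: the resource names and user group IDs are invented for illustration, and it assumes the ModifyReplicationGroup API's support for attaching user groups after creation.

```hcl
resource "null_resource" "attach_user_groups" {
  # Re-run the provisioner if the replication group is replaced.
  triggers = {
    replication_group_id = aws_elasticache_replication_group.example.id
  }

  provisioner "local-exec" {
    # Attach user groups out-of-band via the AWS CLI, since the
    # provider did not yet expose user_group_ids on the resource.
    command = "aws elasticache modify-replication-group --replication-group-id ${aws_elasticache_replication_group.example.id} --user-group-ids-to-add example-user-group-1 example-user-group-2"
  }
}
```

The obvious drawback of this approach is that Terraform does not track the attached user groups in state, which is why a first-class user_group_ids attribute is preferable.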
@hpdobrica That issue looks related to adding multiple users to a user group, not multiple user groups to a replication group, as far as I can tell?
Oh sorry, my bad. I just confirmed that the CLI indeed throws this error when I try to add more:
Yes, please! This would be a good addition now that we can define groups and users; it would be nice to associate them with the replication group itself.
Any news on this?
Hello, when are you planning to release this version?
Bump. I have run into a need for this and unfortunately will have to fall back to using Redis AUTH instead of RBAC until this is merged. Thanks for all your work, @jamesglennan!
Bump, I would also need this.
Pull request #21306 has significantly refactored the AWS Provider codebase. As a result, most PRs opened prior to the refactor now have merge conflicts that must be resolved before proceeding. Specifically, PR #21306 relocated the code for all AWS resources and data sources from a single

We recognize that many pull requests have been open for some time without yet being addressed by our maintainers. Therefore, we want to make it clear that resolving these conflicts in no way affects the prioritization of a particular pull request. Once a pull request has been prioritized for review, the necessary changes will be made by a maintainer, either directly or in collaboration with the pull request author.

For a more complete description of this refactor, including examples of how old filepaths and function names correspond to their new counterparts, please refer to issue #20000. For a quick guide on how to amend your pull request to resolve the merge conflicts resulting from this refactor and bring it in line with our new code patterns, please refer to our Service Package Refactor Pull Request Guide.
Hey @jamesglennan! Also, to answer your question: I believe there is a potential opportunity for multiple group IDs to be added to one replication_group, so I would keep it as an array/list. Everything I looked at seems to be working very smoothly in my initial code review. Thanks again for your commitment!
Apologies everyone, I have been a little busy. I will try to carve out some time later today or tomorrow to finish this up. Thanks for your patience!
I know the feeling! If you need any help, let the community know and we can pitch in!
Please release this ASAP!
Yeah, having
@ewbankkit @jamesglennan @breathingdust Could you give us some advice about this issue?
@zhelding Is there any chance I can get a review on this? It seems like a lot of folks are keen to get this in. Thanks in advance. I know I asked before, just wanted to see if there was any movement.
@jamesglennan Thank you for your work on this and for noticing something the community really wants! FYI, we do like to see acceptance tests run for PRs. (It's not required, since some contributors are unable to run them.) To run the relevant acceptance tests, use the command below. However, even without the results, I'm beginning a review and will let you know if we need anything else from you.
% make testacc TESTS=TestAccElastiCacheReplicationGroup_ PKG=elasticache
f56b3e1
to
9ccec1d
Compare
Looks great! 🎉
Output from acceptance tests (us-west-2):
% make testacc TESTS='TestAccElastiCacheReplicationGroup_[A-Z]' PKG=elasticache
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./internal/service/elasticache/... -v -count 1 -parallel 20 -run='TestAccElastiCacheReplicationGroup_[A-Z]' -timeout 180m
--- PASS: TestAccElastiCacheReplicationGroup_Validation_noNodeType (12.68s)
--- PASS: TestAccElastiCacheReplicationGroup_Validation_globalReplicationGroupIdAndNodeType (889.51s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterMode_singleNode (1129.69s)
--- PASS: TestAccElastiCacheReplicationGroup_GlobalReplicationGroupIDClusterModeValidation_numNodeGroupsOnSecondary (1246.18s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappears_noChange (1452.11s)
--- PASS: TestAccElastiCacheReplicationGroup_ValidationMultiAz_noAutomaticFailover (1.49s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappearsRemoveMemberCluster_atTargetSize (1469.56s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClusters_multiAZEnabled (1593.53s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersFailover_autoFailoverDisabled (1821.60s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterMode_updateReplicasPerNodeGroup (1889.17s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterMode_nonClusteredParameterGroup (786.58s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappears_addMemberCluster (2090.06s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterMode_basic (1035.07s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterModeUpdateNumNodeGroups_scaleUp (2275.45s)
--- PASS: TestAccElastiCacheReplicationGroup_GlobalReplicationGroupID_basic (2276.63s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClusters_basic (2491.00s)
--- PASS: TestAccElastiCacheReplicationGroup_GlobalReplicationGroupID_disappears (2493.97s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappearsRemoveMemberCluster_scaleDown (2609.05s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterModeUpdateNumNodeGroupsAndReplicasPerNodeGroup_scaleUp (2955.18s)
--- PASS: TestAccElastiCacheReplicationGroup_NumberCacheClustersFailover_autoFailoverEnabled (3006.77s)
--- PASS: TestAccElastiCacheReplicationGroup_GlobalReplicationGroupID_full (3442.08s)
--- PASS: TestAccElastiCacheReplicationGroup_EngineVersion_update (3452.59s)
--- PASS: TestAccElastiCacheReplicationGroup_GlobalReplicationGroupIDClusterMode_basic (3478.27s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterModeUpdateNumNodeGroupsAndReplicasPerNodeGroup_scaleDown (3528.34s)
--- PASS: TestAccElastiCacheReplicationGroup_ClusterModeUpdateNumNodeGroups_scaleDown (3031.39s)
PASS
ok github.com/hashicorp/terraform-provider-aws/internal/service/elasticache 3922.781s
% make testacc TESTS='TestAccElastiCacheReplicationGroup_[a-z]' PKG=elasticache
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./internal/service/elasticache/... -v -count 1 -parallel 20 -run='TestAccElastiCacheReplicationGroup_[a-z]' -timeout 180m
--- PASS: TestAccElastiCacheReplicationGroup_clusteringAndCacheNodesCausesError (3.30s)
--- PASS: TestAccElastiCacheReplicationGroup_disappears (638.38s)
--- PASS: TestAccElastiCacheReplicationGroup_basic (706.11s)
--- PASS: TestAccElastiCacheReplicationGroup_updateDescription (785.69s)
--- PASS: TestAccElastiCacheReplicationGroup_finalSnapshot (891.78s)
--- PASS: TestAccElastiCacheReplicationGroup_updateParameterGroup (908.46s)
--- PASS: TestAccElastiCacheReplicationGroup_dataTiering (1035.29s)
--- PASS: TestAccElastiCacheReplicationGroup_multiAzInVPC (1056.22s)
--- PASS: TestAccElastiCacheReplicationGroup_vpc (1097.61s)
--- PASS: TestAccElastiCacheReplicationGroup_redisClusterInVPC2 (1348.10s)
--- PASS: TestAccElastiCacheReplicationGroup_tags (1369.43s)
--- PASS: TestAccElastiCacheReplicationGroup_enableAtRestEncryption (1410.00s)
--- PASS: TestAccElastiCacheReplicationGroup_updateMaintenanceWindow (1412.73s)
--- PASS: TestAccElastiCacheReplicationGroup_useCMKKMSKeyID (1429.09s)
--- PASS: TestAccElastiCacheReplicationGroup_uppercase (1701.32s)
--- PASS: TestAccElastiCacheReplicationGroup_enableAuthTokenTransitEncryption (1764.73s)
--- PASS: TestAccElastiCacheReplicationGroup_enableSnapshotting (1766.84s)
--- PASS: TestAccElastiCacheReplicationGroup_updateAuthToken (1783.41s)
--- PASS: TestAccElastiCacheReplicationGroup_multiAzNotInVPC (2033.28s)
--- PASS: TestAccElastiCacheReplicationGroup_updateNodeSize (2238.57s)
PASS
ok github.com/hashicorp/terraform-provider-aws/internal/service/elasticache 2239.974s
This functionality has been released in v3.74.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning, or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Closes #20328