vpc_net check mode, IPV6 CIDR assoc/disassoc #631
Conversation
Build failed.
Build failed.
Build succeeded.
""" | ||
start_time = time() | ||
criteria_match = False | ||
while time() < start_time + 300: |
Just wondering if it would be feasible to define a waiter instead? @tremble
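A botocore waiter would encapsulate the same delay/max-attempts polling that the manual 300-second loop above performs by hand. As a reference point, here is a minimal pure-Python sketch of that pattern; the `wait_for` helper and its parameter names are illustrative, not part of botocore or of this module:

```python
import time

def wait_for(predicate, delay=15, max_attempts=20):
    """Poll `predicate` every `delay` seconds, up to `max_attempts` times,
    mirroring the Delay/MaxAttempts semantics of a botocore waiter."""
    for _ in range(max_attempts):
        if predicate():
            return True
        time.sleep(delay)
    return False

# Example: succeed once a counter reaches 3 (stands in for an API check).
state = {"calls": 0}

def criteria_met():
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_for(criteria_met, delay=0, max_attempts=5) is True
assert state["calls"] == 3
```

A real waiter would additionally come from `client.get_waiter(...)` or a custom botocore waiter model, which centralizes the polling configuration instead of repeating ad-hoc loops in each module.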
I'm trying to add support for the opensearch module. To support a custom endpoint, I discovered I need to fix issues in the aws_acm module to support tags (ansible-collections/community.aws#870), which depends on adding new IAM permissions (mattclay/aws-terminator#188). Along the way, I fixed some doc issues and uses of deprecated tasks. Then I found out aws_acm needs to support certificate requests (ansible-collections/community.aws#869) so I can issue a certificate for the custom opensearch endpoint. Certificate requests would also potentially require adding a new module for managing private CAs in AWS ACM; otherwise we cannot test signing certificate requests with private CAs. Then, because I'm testing opensearch with IPv6, I discovered I need to fix the IPv6 association in the ec2_vpc_net module (#631). ec2_vpc_net was making changes even in check mode. Also, ec2_route_table wasn't documented for IPv6, hence #634. While adding integration tests for #634, I found out community.aws relies on ansible.netcommon, but the ipsubnet filter has been moved to ansible.utils. That's one more problem to fix that I haven't done yet. Then it turns out both ansible.netcommon and ansible.utils have an issue that can cause up to 2^128 objects to be created: ansible-collections/ansible.utils#132. There is also a related 2^N loop in netaddr: netaddr/netaddr#241.
Along the way, I noticed several other problems (such as ec2_vpc_net not supporting user-defined IPv6 subnets, and missing docs). How about opening tracking issues for the issues being discovered along the way? Otherwise I'm concerned there will be no end in sight.
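The scale of that 2^N blow-up is easy to see with the standard library's ipaddress module: a /64 contains 2^56 distinct /120 subnets, so any filter that materializes the whole list will spin forever, while computing the count arithmetically or slicing the lazy generator stays cheap. A sketch of the arithmetic (not tied to the ipsubnet implementation):

```python
import ipaddress
from itertools import islice

net = ipaddress.ip_network("2001:db8::/64")

# Number of /120 subnets inside a /64: 2 ** (120 - 64), far too many to list.
count = 2 ** (120 - net.prefixlen)
assert count == 2 ** 56

# subnets() is a generator, so taking the first few is safe;
# list(net.subnets(new_prefix=120)) would try to build 2**56 objects.
first_three = list(islice(net.subnets(new_prefix=120), 3))
assert str(first_three[0]) == "2001:db8::/120"
assert str(first_three[1]) == "2001:db8::100/120"
```

This is why the integration tests below had to fall back to a /64 value: the cost of an eager enumeration grows exponentially with the gap between the parent and child prefix lengths.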
@sebastien-rosset Thank you for taking the time to work on this. Could you please add a changelog fragment?
Build failed.
Build failed.
Build succeeded.
@alinabuzachis, @jillr, IMO this is a severe bug. Dry-run aka check-mode should never modify anything.
@sebastien-rosset Thank you very much for your contributions overall. You're correct, there shouldn't be any change in check_mode. By the way, DryRun is not the same thing as check_mode: DryRun only checks permissions. I'm going to review this PR later on today and let you know. Sorry for the delay.
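The distinction matters for the fix: Ansible's check mode is a guard inside the module that must skip every mutating API call, whereas `DryRun=True` is a parameter sent to the AWS API itself that only validates permissions. A self-contained sketch of the guard pattern, using stub objects in place of AnsibleModule and the boto3 EC2 client (the function and class names here are illustrative, not the module's actual code):

```python
class StubModule:
    """Stands in for AnsibleModule; only the check_mode flag matters here."""
    def __init__(self, check_mode):
        self.check_mode = check_mode

class StubConnection:
    """Stands in for a boto3 EC2 client; records mutating calls."""
    def __init__(self):
        self.calls = []
    def associate_vpc_cidr_block(self, **kwargs):
        self.calls.append(kwargs)

def associate_ipv6_cidr(module, connection, vpc_id):
    # Report that a change would happen, but skip the API call in check mode.
    if module.check_mode:
        return True
    connection.associate_vpc_cidr_block(
        VpcId=vpc_id, AmazonProvidedIpv6CidrBlock=True
    )
    return True

conn = StubConnection()
changed = associate_ipv6_cidr(StubModule(check_mode=True), conn, "vpc-123")
assert changed is True and conn.calls == []  # no mutation in check mode

changed = associate_ipv6_cidr(StubModule(check_mode=False), conn, "vpc-123")
assert len(conn.calls) == 1  # a real run performs the association
```

The bug this PR fixes was effectively the missing `if module.check_mode` branch: the association and disassociation calls ran unconditionally.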
@sebastien-rosset Thank you for working on this. Please reference the PR in the changelog. Other than that LGTM!
integration tests are failing because of
recheck
Backport to stable-2: 💔 cherry-picking failed, conflicts found. ❌ Failed to cleanly apply 3e24a37 on top of patchback/backports/stable-2/3e24a37b3f0c6678c3936a8a6cf228e02c64068c/pr-631 while backporting merged PR #631.
🤖 @patchback
Backport to stable-3: 💚 backport PR created. ✅ Backported as #722.
🤖 @patchback
vpc_net check mode, IPV6 CIDR assoc/disassoc

SUMMARY
Implement check mode correctly for the ec2_vpc_net module. The module was incorrectly making actual changes when executed in check mode; in check mode, do not change the configuration. Previously the module was making VPC changes in the following scenarios:
- Association with an IPv4 or IPv6 CIDR.
- Disassociation from an IPv4 or IPv6 CIDR.

Also:
- Handle the case where the Amazon-provided IPv6 block is enabled, then disabled, then enabled again.
- Do not disable the IPv6 CIDR association (using the Amazon pool) if the ipv6_cidr property is not present in the task. If the VPC already exists and ipv6_cidr is not specified, retain the current config.

Add integration tests:
- Enable, disable, then re-enable the Amazon-provided IPv6 CIDR.
- When the VPC already exists and the ipv6_cidr property is not specified, validate this does not disable the IPv6 CIDR association.

ISSUE TYPE
Bugfix Pull Request

COMPONENT NAME
ec2_vpc_net

ADDITIONAL INFORMATION
Reviewed-by: Sebastien Rosset
Reviewed-by: Alina Buzachis
Reviewed-by: Joseph Torcasso
Reviewed-by: Markus Bergholz <[email protected]>
(cherry picked from commit 3e24a37)
[PR #631/3e24a37b backport][stable-3] vpc_net check mode, IPV6 CIDR assoc/disassoc. This is a backport of PR #631 as merged into main (3e24a37); see the PR description above.
Improve doc to show support for IPv6 CIDR block

SUMMARY
Improve the docs to show IPv6 CIDR blocks are supported:
- Add an example with an IPv6 CIDR block.
- Add a missing attribute to the return values.
- Remove duplicate assertions in the integration tests.
- Add tests for IPv6 subnets in the integration tests.

ISSUE TYPE
Bugfix Pull Request

COMPONENT NAME
ec2_vpc_route_table

ADDITIONAL INFORMATION
While testing IPv6 in this module, I discovered the ipsubnet filter does not work if the prefix length has a fairly high value such as /120. I was running an integration test with a /120 subnet in this PR, but the ipsubnet call never returns and uses 100% CPU. I changed the value to a /64 subnet so this PR can pass. The ipsubnet issue needs to be fixed in ansible.netcommon and the netaddr package. I've created a unit test to reproduce the problem: ansible-collections/ansible.netcommon#362. The integration tests in this PR depend on #631 for the VPC configuration.
Reviewed-by: Mark Chappell
Reviewed-by: Sebastien Rosset
Reviewed-by: Alina Buzachis
Reviewed-by: Jill R
[PR #634/c06d77c9 backport][stable-3] Improve doc to show support for IPv6 CIDR block (cherry picked from commit c06d77c, backported as ansible-collections#751); see the PR #634 description above.