ec2_instance: wait on instance state, rather than status check #481
Conversation
Force-pushed 50f7e36 to 557c840
Hello again, everyone! It appears that there are no blockers for this PR other than waiting on a major release because of the potential (but unlikely) breaking change. See ansible-collections/community.aws#461 (comment). @markuman also asked if this PR might address #469. I think changing the EC2 instance type is fraught with enough testing issues that I'd rather tackle it in a separate PR. If there's agreement that this PR is ready to merge, then I could stack another PR on it to address changing the instance type. Thanks @tremble, for lighting a fire under me by reopening this. That's great maintainership!
Hi @overhacked, when I checked, the integration tests weren't passing. I need to dig into why, but I haven't had much free time this week.
The `await_instances()` function was calling `module.client('ec2').get_waiter('instance_status_ok')` by default, which causes boto3 to wait on the `DescribeInstanceStatus` API to return "Ok". The status API is linked to [EC2's instance status checks](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html#reporting_status), which lag the instance's real state by 3-5 minutes.

It's unlikely that most Ansible users who set `wait: true` want their playbooks to delay until EC2's status checks catch up; the more efficient approach is to use `wait_for` or `wait_for_connection`. What is useful, however, is to wait the short time necessary for EC2 to move an instance from `pending` to `running`.

Before this change, the `ec2_instance` module would wait for an "Ok" status check for newly-created instances *before* waiting for any other state change. This change makes the default `state: present` wait for `instance_exists`, and `state: running` wait for `instance_running`. `state: started` (previously just an alias for `state: running`) now waits for `instance_status_ok` if `wait: true`.

The only, IMHO unlikely, scenario where this is a breaking change is if an Ansible user had combined `state: present/running/started` with `wait: true` and is actually depending on the play halting until EC2's status checks return OK. Based on my own informal survey of playbooks on GitHub, I did not see anyone using `ec2_instance` this way; most playbooks follow `ec2_instance` with `wait_for` or `wait_for_connection`.
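The new state-to-waiter mapping can be summarized in a small sketch. The helper and dictionary names here are hypothetical, not the module's actual internals; the waiter names themselves are real boto3 EC2 waiters:

```python
# Illustrative sketch of the state-to-waiter mapping this PR introduces.
# STATE_WAITERS and waiter_for_state are hypothetical names; the waiter
# strings are real boto3 EC2 waiter names.
STATE_WAITERS = {
    "present": "instance_exists",     # instance merely has to exist
    "running": "instance_running",    # pending -> running, usually seconds
    "started": "instance_status_ok",  # still waits on status checks (3-5 min)
}

def waiter_for_state(desired_state):
    """Map a desired module state to a boto3 EC2 waiter name."""
    return STATE_WAITERS.get(desired_state, "instance_exists")
```

Only `state: started` keeps the old slow behaviour, which is what makes the change mostly backward compatible.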
In some cases we need to branch based on the desired EC2 instance state; in others, on the desired module state. Some CI checks were failing because the wrong value was being checked. Renamed and/or assigned local variables in a few critical functions here to make clear which state is being referred to.
Added a `force_wait` parameter to `await_instances()`. It is used to await `instance_exists` when creating new instances before modifying them, because `diff_instance_and_params()` can fail if it runs before the EC2 API becomes consistent.
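The `force_wait` behaviour described above might be sketched like this. This is a simplified, hypothetical version; the real `await_instances()` takes the module object and has more parameters:

```python
def await_instances(client, instance_ids, waiter_name="instance_exists",
                    user_requested_wait=True, force_wait=False):
    """Sketch of the force_wait logic: skip waiting only when the user
    disabled wait AND the caller did not force it (e.g. a freshly created
    instance that must exist before diff_instance_and_params() runs)."""
    if not (user_requested_wait or force_wait):
        return
    # Real boto3 waiter API: client.get_waiter(name).wait(InstanceIds=[...])
    client.get_waiter(waiter_name).wait(InstanceIds=list(instance_ids))
```

This way a user's `wait: false` still suppresses the optional wait, but internal callers that need API consistency can force one.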
Force-pushed b680792 to cf206df
…o longer guarantees running)
Force-pushed cf206df to 3cda932
…nstance to exist rather than start
Force-pushed 3cda932 to 1e45fd8
recheck
LGTM with the changelog nit. This is a really nice improvement, thanks @tremble!
Thanks for the initial work on this @overhacked. If CI plays ball, hopefully we'll be able to get this in place for the 2.0.0 release (RealSoonNow...)
recheck
LGTM!
Wait for instance to start before attempting to attach a volume

SUMMARY
Following #481, the ec2_vol tests failed because the instances hadn't finished starting before the attach was attempted: `botocore.exceptions.ClientError: An error occurred (IncorrectState) when calling the AttachVolume operation: Instance 'i-0d740b81c23f11a4c' is not 'running'.`

ISSUE TYPE
Bugfix Pull Request

COMPONENT NAME
ec2_vol

ADDITIONAL INFORMATION
https://7860a9269d0955476c03-81788054a9528b940bcdbd064c8a3746.ssl.cf1.rackcdn.com/488/b02aebad5940f5500e4ccb93c093bc00339b2426/check/ansible-test-cloud-integration-aws-py36_1/110bed5/job-output.txt
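The fix amounts to running the `instance_running` waiter before calling `AttachVolume`. A hedged sketch (the helper name and device default are hypothetical; the waiter and `attach_volume` call are real boto3 APIs):

```python
def attach_volume_when_running(ec2, instance_id, volume_id, device="/dev/sdf"):
    """Wait until the instance reaches 'running' before calling AttachVolume,
    avoiding the IncorrectState error seen in the ec2_vol tests.
    (Hypothetical helper; parameter defaults are illustrative.)"""
    # Blocks until the instance state is 'running' (or the waiter times out).
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return ec2.attach_volume(InstanceId=instance_id, VolumeId=volume_id,
                             Device=device)
```

This is exactly the kind of ordering guarantee that #481's faster defaults no longer provide for free, so the attach path has to wait explicitly.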
ec2_asg: Add purge_tags to AutoScalingGroups.

SUMMARY
Add purge_tags to the ec2_asg module. Fixes ansible-collections#481.

ISSUE TYPE
Feature Pull Request

COMPONENT NAME
ec2_asg

ADDITIONAL INFORMATION
There was another PR (currently closed), ansible-collections#482, with similar functionality, but I'm not sure that having separate modules to handle tags for individual services is a good approach; it would certainly increase the number of modules. Hence I tried modifying the existing ec2_asg module to do this. It uses the underlying API calls:
https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DescribeTags.html
https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CreateOrUpdateTags.html
https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DeleteTags.html

Reviewed-by: Alina Buzachis
Reviewed-by: Jill R
Reviewed-by: Mandar Kulkarni <[email protected]>
Attempt to revive ansible-collections/community.aws#461
(Thanks @overhacked)

SUMMARY
The `await_instances()` function was calling `module.client('ec2').get_waiter('instance_status_ok')` by default, which causes boto3 to wait on the `DescribeInstanceStatus` API to return "Ok". The status API is linked to EC2's instance status checks, which lag the instance's real state by 3-5 minutes.

It's unlikely that most Ansible users who set `wait: true` want their playbooks to delay until EC2's status checks catch up. The more efficient approach is to use `wait_for` or `wait_for_connection`. What is useful, however, is to wait the short time necessary for EC2 to move an instance from `pending` to `running`.

Before this change, the `ec2_instance` module would wait for an "Ok" status check for newly-created instances before waiting for any other state change. This change makes the default `state: present` wait for `instance_exists`, and `state: running` wait for `instance_running`. `state: started` (previously just an alias for `state: running`) now waits for `instance_status_ok` if `wait: true`.

The only, IMHO unlikely, scenario where this is a breaking change is if an Ansible user had combined `state: present/running/started` with `wait: true` and is actually depending on the play halting until EC2's status checks return OK. Based on my own informal survey of playbooks on GitHub, I did not see anyone using `ec2_instance` this way. Most playbooks follow `ec2_instance` with `wait_for` or `wait_for_connection`.

ISSUE TYPE
COMPONENT NAME
ec2_instance
ADDITIONAL INFORMATION