[PR #1716/86c60b49 backport][stable-5] ecs: integration test and new purge parameters #1731

Merged
8 changes: 8 additions & 0 deletions changelogs/fragments/ecs_service_and_ecs_integration_test.yml
@@ -0,0 +1,8 @@
minor_changes:
- ecs_service - new parameter ``purge_placement_strategy`` to have the ability to remove the placement strategy of an ECS Service (https://github.com/ansible-collections/community.aws/pull/1716).
- ecs_service - new parameter ``purge_placement_constraints`` to have the ability to remove the placement constraints of an ECS Service (https://github.com/ansible-collections/community.aws/pull/1716).
trivial:
- ecs_cluster - rework and repair ecs_cluster integration test.
deprecated_features:
- ecs_service - In a release after 2024-06-01, the default value of ``purge_placement_strategy`` will be changed from ``false`` to ``true`` (https://github.com/ansible-collections/community.aws/pull/1716).
- ecs_service - In a release after 2024-06-01, the default value of ``purge_placement_constraints`` will be changed from ``false`` to ``true`` (https://github.com/ansible-collections/community.aws/pull/1716).
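
A minimal usage sketch of the two new parameters (the service, cluster and task definition names are hypothetical). Because both purge flags are set and no placement_strategy or placement_constraints are supplied, an update removes any placements currently configured on the service:

- name: remove placement strategy and constraints from an existing service
  community.aws.ecs_service:
    state: present
    name: example-service          # hypothetical service name
    cluster: example-cluster       # hypothetical cluster name
    task_definition: example-task  # hypothetical task definition
    desired_count: 1
    purge_placement_strategy: true
    purge_placement_constraints: true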
33 changes: 31 additions & 2 deletions plugins/modules/ecs_service.py
@@ -148,6 +148,14 @@
description: A cluster query language expression to apply to the constraint.
required: false
type: str
purge_placement_constraints:
version_added: 5.3.0
description:
- Toggle overwriting of existing placement constraints. This is needed for backwards compatibility.
- By default I(purge_placement_constraints=false). In a release after 2024-06-01 this will be changed to I(purge_placement_constraints=true).
required: false
type: bool
default: false
placement_strategy:
description:
- The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules per service.
@@ -162,6 +170,14 @@
field:
description: The field to apply the placement strategy against.
type: str
purge_placement_strategy:
version_added: 5.3.0
description:
- Toggle overwriting of existing placement strategy. This is needed for backwards compatibility.
- By default I(purge_placement_strategy=false). In a release after 2024-06-01 this will be changed to I(purge_placement_strategy=true).
required: false
type: bool
default: false
force_deletion:
description:
- Forcibly delete the service. Required when deleting a service with >0 scale, or no target group.
@@ -396,7 +412,9 @@
returned: always
type: int
loadBalancers:
description: A list of load balancer objects
description:
- A list of load balancer objects
- Updating the load balancer configuration of an existing service requires botocore>=1.24.14.
returned: always
type: complex
contains:
@@ -822,7 +840,8 @@ def create_service(self, service_name, cluster_name, task_definition, load_balan
def update_service(self, service_name, cluster_name, task_definition, desired_count,
deployment_configuration, placement_constraints, placement_strategy,
network_configuration, health_check_grace_period_seconds,
force_new_deployment, capacity_provider_strategy, load_balancers):
force_new_deployment, capacity_provider_strategy, load_balancers,
purge_placement_constraints, purge_placement_strategy):
params = dict(
cluster=cluster_name,
service=service_name,
@@ -834,9 +853,15 @@ def update_service(self, service_name, cluster_name, task_definition, desired_co
params['placementConstraints'] = [{key: value for key, value in constraint.items() if value is not None}
for constraint in placement_constraints]

if purge_placement_constraints and not placement_constraints:
params['placementConstraints'] = []

if placement_strategy:
params['placementStrategy'] = placement_strategy

if purge_placement_strategy and not placement_strategy:
params['placementStrategy'] = []

if network_configuration:
params['networkConfiguration'] = network_configuration
if force_new_deployment:
@@ -907,6 +932,7 @@ def main():
expression=dict(required=False, type='str')
)
),
purge_placement_constraints=dict(required=False, default=False, type='bool'),
placement_strategy=dict(
required=False,
default=[],
@@ -917,6 +943,7 @@
field=dict(type='str'),
)
),
purge_placement_strategy=dict(required=False, default=False, type='bool'),
health_check_grace_period_seconds=dict(required=False, type='int'),
network_configuration=dict(required=False, type='dict', options=dict(
subnets=dict(type='list', elements='str'),
@@ -1061,6 +1088,8 @@ def main():
module.params['force_new_deployment'],
capacityProviders,
updatedLoadBalancers,
module.params['purge_placement_constraints'],
module.params['purge_placement_strategy'],
)

else:
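The update path above only clears placements when the corresponding purge flag is set and no new placements are supplied; a non-empty list still overwrites the existing configuration exactly as before. A sketch under the same hypothetical names as the earlier example: the supplied placement_strategy replaces whatever is configured, while the constraints are left untouched because purge_placement_constraints stays at its default of false and no placement_constraints are given.

- name: replace the placement strategy, leave constraints untouched
  community.aws.ecs_service:
    state: present
    name: example-service          # hypothetical service name
    cluster: example-cluster       # hypothetical cluster name
    task_definition: example-task  # hypothetical task definition
    desired_count: 1
    placement_strategy:            # a supplied list overwrites regardless of the purge flag
      - type: binpack
        field: memory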
4 changes: 1 addition & 3 deletions tests/integration/targets/ecs_cluster/aliases
@@ -1,6 +1,4 @@
# reason: slow
# Tests take around 15 minutes to run
unsupported
time=20m

cloud/aws

2 changes: 2 additions & 0 deletions tests/integration/targets/ecs_cluster/defaults/main.yml
@@ -4,6 +4,8 @@ user_data: |
echo ECS_CLUSTER={{ ecs_cluster_name }} >> /etc/ecs/ecs.config

ecs_service_name: "{{ resource_prefix }}-service"
ecs_service_role_name: "ansible-test-ecsServiceRole-{{ tiny_prefix }}"
ecs_task_role_name: "ansible-test-ecsServiceRole-task-{{ tiny_prefix }}"
ecs_task_image_path: nginx
ecs_task_name: "{{ resource_prefix }}-task"
ecs_task_memory: 128
5 changes: 4 additions & 1 deletion tests/integration/targets/ecs_cluster/meta/main.yml
@@ -1 +1,4 @@
dependencies: []
dependencies:
- role: setup_botocore_pip
vars:
botocore_version: "1.24.14"
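
The new setup_botocore_pip dependency pins botocore 1.24.14, the minimum the module needs for updating the load balancer configuration of an existing service. A sketch of how a test task would opt into that pinned botocore; the botocore_virtualenv_interpreter variable and the load_balancers values shown here are assumptions for illustration, not part of this change.

- name: update the service's load balancer configuration (needs botocore>=1.24.14)
  vars:
    # assumed to be exported by the setup_botocore_pip role
    ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
  community.aws.ecs_service:
    state: present
    name: "{{ ecs_service_name }}"
    cluster: "{{ ecs_cluster_name }}"
    task_definition: "{{ ecs_task_name }}"
    desired_count: 1
    load_balancers:
      - targetGroupArn: "{{ elb_target_group_instance.target_group_arn }}"
        containerName: "{{ ecs_task_name }}"   # assumed container name
        containerPort: 8080                    # matches the 8080 target groups created in 01_create_requirements.yml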
147 changes: 147 additions & 0 deletions tests/integration/targets/ecs_cluster/tasks/01_create_requirements.yml
@@ -0,0 +1,147 @@
- name: ensure IAM service role exists
iam_role:
name: "{{ ecs_service_role_name }}"
assume_role_policy_document: "{{ lookup('file','ecs-trust-policy.json') }}"
state: present
create_instance_profile: yes
managed_policy:
- AmazonEC2ContainerServiceRole
wait: True

- name: ensure AWSServiceRoleForECS role exists
iam_role_info:
name: AWSServiceRoleForECS
register: iam_role_result

# # This should happen automatically with the right permissions...
#- name: fail if AWSServiceRoleForECS role does not exist
# fail:
# msg: >
# Run `aws iam create-service-linked-role --aws-service-name=ecs.amazonaws.com ` to create
# a linked role for AWS VPC load balancer management
# when: not iam_role_result.iam_roles

- name: create a VPC to work in
ec2_vpc_net:
cidr_block: 10.0.0.0/16
state: present
name: '{{ resource_prefix }}_ecs_cluster'
resource_tags:
Name: '{{ resource_prefix }}_ecs_cluster'
register: setup_vpc

- name: create a key pair to use for creating an ec2 instance
ec2_key:
name: '{{ resource_prefix }}_ecs_cluster'
state: present
when: ec2_keypair is not defined # allow override in cloud-config-aws.ini
register: setup_key

- name: create subnets
ec2_vpc_subnet:
az: '{{ aws_region }}{{ item.zone }}'
tags:
Name: '{{ resource_prefix }}_ecs_cluster-subnet-{{ item.zone }}'
vpc_id: '{{ setup_vpc.vpc.id }}'
cidr: "{{ item.cidr }}"
state: present
register: setup_subnet
with_items:
- zone: a
cidr: 10.0.1.0/24
- zone: b
cidr: 10.0.2.0/24

- name: create an internet gateway so that ECS agents can talk to ECS
ec2_vpc_igw:
vpc_id: '{{ setup_vpc.vpc.id }}'
state: present
register: igw

- name: create a security group to use for creating an ec2 instance
ec2_group:
name: '{{ resource_prefix }}_ecs_cluster-sg'
description: 'created by Ansible integration tests'
state: present
vpc_id: '{{ setup_vpc.vpc.id }}'
rules: # allow all ssh traffic but nothing else
- ports: 22
cidr_ip: 0.0.0.0/0
register: setup_sg

- set_fact:
# As a lookup plugin we don't have access to module_defaults
connection_args:
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
aws_security_token: "{{ security_token | default(omit) }}"
no_log: True

- name: set image id fact
set_fact:
ecs_image_id: "{{ lookup('aws_ssm', '/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id', **connection_args) }}"

- name: provision ec2 instance to create an image
ec2_instance:
key_name: '{{ ec2_keypair|default(setup_key.key.name) }}'
instance_type: t3.micro
state: present
image_id: '{{ ecs_image_id }}'
wait: yes
user_data: "{{ user_data }}"
instance_role: "{{ ecs_service_role_name }}"
tags:
Name: '{{ resource_prefix }}_ecs_agent'
security_group: '{{ setup_sg.group_id }}'
vpc_subnet_id: '{{ setup_subnet.results[0].subnet.id }}'
register: setup_instance

- name: create target group
elb_target_group:
name: "{{ ecs_target_group_name }}1"
state: present
protocol: HTTP
port: 8080
modify_targets: no
vpc_id: '{{ setup_vpc.vpc.id }}'
target_type: instance
health_check_interval: 5
health_check_timeout: 2
healthy_threshold_count: 2
unhealthy_threshold_count: 2
register: elb_target_group_instance

- name: create second target group to use ip target_type
elb_target_group:
name: "{{ ecs_target_group_name }}2"
state: present
protocol: HTTP
port: 8080
modify_targets: no
vpc_id: '{{ setup_vpc.vpc.id }}'
target_type: ip
health_check_interval: 5
health_check_timeout: 2
healthy_threshold_count: 2
unhealthy_threshold_count: 2
register: elb_target_group_ip

- name: create load balancer
elb_application_lb:
name: "{{ ecs_load_balancer_name }}"
state: present
scheme: internal
security_groups: '{{ setup_sg.group_id }}'
subnets: "{{ setup_subnet.results | map(attribute='subnet.id') | list }}"
listeners:
- Protocol: HTTP
Port: 80
DefaultActions:
- Type: forward
TargetGroupName: "{{ ecs_target_group_name }}1"
- Protocol: HTTP
Port: 81
DefaultActions:
- Type: forward
TargetGroupName: "{{ ecs_target_group_name }}2"
76 changes: 76 additions & 0 deletions tests/integration/targets/ecs_cluster/tasks/10_ecs_cluster.yml
@@ -0,0 +1,76 @@
# cluster "{{ ecs_cluster_name }}" is used for ecs_service tests
- name: create an ECS cluster
ecs_cluster:
name: "{{ ecs_cluster_name }}"
state: present
register: ecs_cluster

- name: check that ecs_cluster changed
assert:
that:
- ecs_cluster.changed

- name: immutable create same ECS cluster
ecs_cluster:
name: "{{ ecs_cluster_name }}"
state: present
register: ecs_cluster_again

- name: check that ecs_cluster did not change
assert:
that:
- not ecs_cluster_again.changed

- name: create an ECS cluster to test capacity provider strategy
ecs_cluster:
name: "{{ ecs_cluster_name }}-cps"
state: present
register: ecs_cluster

- name: add capacity providers and strategy
ecs_cluster:
name: "{{ ecs_cluster_name }}-cps"
state: present
purge_capacity_providers: True
capacity_providers:
- FARGATE
- FARGATE_SPOT
capacity_provider_strategy:
- capacity_provider: FARGATE
base: 1
weight: 1
- capacity_provider: FARGATE_SPOT
weight: 100
register: ecs_cluster_update

- name: check that ecs_cluster was correctly updated
assert:
that:
- ecs_cluster_update.changed
- ecs_cluster_update.cluster is defined
- ecs_cluster_update.cluster.capacityProviders is defined
- "'FARGATE' in ecs_cluster_update.cluster.capacityProviders"

- name: immutable add capacity providers and strategy
ecs_cluster:
name: "{{ ecs_cluster_name }}-cps"
state: present
purge_capacity_providers: True
capacity_providers:
- FARGATE
- FARGATE_SPOT
capacity_provider_strategy:
- capacity_provider: FARGATE
base: 1
weight: 1
- capacity_provider: FARGATE_SPOT
weight: 100
register: ecs_cluster_update

- name: check that ecs_cluster was correctly updated
assert:
that:
- not ecs_cluster_update.changed
- ecs_cluster_update.cluster is defined
- ecs_cluster_update.cluster.capacityProviders is defined
- "'FARGATE' in ecs_cluster_update.cluster.capacityProviders"