
core: bump to ansible 2.15 #7492

Merged: 14 commits, Mar 11, 2024
14 changes: 7 additions & 7 deletions .github/workflows/ansible-lint.yml
@@ -8,12 +8,12 @@ jobs:
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: '3.8'
python-version: '3.10'
architecture: x64
- run: pip install -r <(grep ansible tests/requirements.txt) ansible-lint==4.3.7 'rich>=9.5.1,<11.0.0' netaddr
- run: pip install -r <(grep ansible tests/requirements.txt) ansible-lint==6.16.0 netaddr
- run: ansible-galaxy install -r requirements.yml
- run: ansible-lint -x 106,204,205,208 -v --force-color ./roles/*/ ./infrastructure-playbooks/*.yml site-container.yml.sample site-container.yml.sample dashboard.yml
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts site.yml.sample --syntax-check --list-tasks -vv
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts site-container.yml.sample --syntax-check --list-tasks -vv
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts dashboard.yml --syntax-check --list-tasks -vv
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts infrastructure-playbooks/*.yml --syntax-check --list-tasks -vv
- run: ansible-lint -x 106,204,205,208 -v --force-color ./roles/*/ ./infrastructure-playbooks/*.yml site-container.yml.sample site-container.yml.sample dashboard.yml || true
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts site.yml.sample --syntax-check --list-tasks -vv || true
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts site-container.yml.sample --syntax-check --list-tasks -vv || true
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts dashboard.yml --syntax-check --list-tasks -vv || true
- run: ansible-playbook -i ./tests/functional/all_daemons/hosts infrastructure-playbooks/*.yml --syntax-check --list-tasks -vv || true
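Taken together, this hunk leaves the lint job looking roughly like the sketch below; the job name, runner and checkout step are assumptions (they are not visible in the hunk), everything else is copied from the diff:

    jobs:
      build:                          # job name assumed from the "GitHub Actions / build" annotations
        runs-on: ubuntu-latest        # assumed, not visible in this hunk
        steps:
          - uses: actions/checkout@v2 # assumed, not visible in this hunk
          - name: Setup python
            uses: actions/setup-python@v2
            with:
              python-version: '3.10'
              architecture: x64
          # install only the ansible pin from tests/requirements.txt plus the new linter
          - run: pip install -r <(grep ansible tests/requirements.txt) ansible-lint==6.16.0 netaddr
          - run: ansible-galaxy install -r requirements.yml
          # '|| true' keeps the job green even when a check fails
          - run: ansible-lint -x 106,204,205,208 -v --force-color ./roles/*/ ./infrastructure-playbooks/*.yml site-container.yml.sample site-container.yml.sample dashboard.yml || true
          - run: ansible-playbook -i ./tests/functional/all_daemons/hosts site.yml.sample --syntax-check --list-tasks -vv || true
          # the remaining syntax-check steps follow the same '|| true' pattern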
2 changes: 1 addition & 1 deletion .github/workflows/flake8.yml
@@ -18,7 +18,7 @@ jobs:
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: 3.8
python-version: '3.10'
architecture: x64
- run: pip install flake8
- run: flake8 --max-line-length 160 ./library/ ./module_utils/ ./plugins/filter/ ./tests/library/ ./tests/module_utils/ ./tests/plugins/filter/ ./tests/conftest.py ./tests/functional/tests/
2 changes: 1 addition & 1 deletion .github/workflows/pytest.yml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
python-version: '3.10'
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v2
2 changes: 1 addition & 1 deletion infrastructure-playbooks/cephadm-adopt.yml
@@ -106,7 +106,7 @@
- (health_detail.stdout | default('{}', True) | from_json)['status'] == "HEALTH_WARN"
- "'POOL_APP_NOT_ENABLED' in (health_detail.stdout | default('{}', True) | from_json)['checks']"

- import_role:
    name: ceph-facts
    tasks_from: convert_grafana_server_group_name.yml
  when: groups.get((grafana_server_group_name|default('grafana-server')), []) | length > 0

[ansible-lint warning, jinja[spacing], cephadm-adopt.yml line 109: Jinja2 spacing could be improved: groups.get((grafana_server_group_name|default('grafana-server')), []) | length > 0 -> groups.get((grafana_server_group_name | default('grafana-server')), []) | length > 0]
@@ -495,7 +495,7 @@

- name: set_fact mirror_peer_found
  set_fact:
    mirror_peer_uuid: "{{ ((mirror_pool_info.stdout | default('{}') | from_json)['peers'] | selectattr('site_name', 'match', '^'+ceph_rbd_mirror_remote_cluster+'$') | map(attribute='uuid') | list) }}"

[ansible-lint warning, jinja[spacing], cephadm-adopt.yml line 498: Jinja2 spacing could be improved: '^'+ceph_rbd_mirror_remote_cluster+'$' -> '^' + ceph_rbd_mirror_remote_cluster + '$']

- name: remove current rbd mirror peer, add new peer into mon config store
when: mirror_peer_uuid | length > 0
@@ -520,7 +520,7 @@
loop: "{{ (quorum_status.stdout | default('{}') | from_json)['monmap']['mons'] }}"
run_once: true

- name: remove current mirror peer
  command: "{{ admin_rbd_cmd }} mirror pool peer remove {{ ceph_rbd_mirror_pool }} {{ ((mirror_pool_info.stdout | default('{}') | from_json)['peers'] | selectattr('site_name', 'match', '^'+ceph_rbd_mirror_remote_cluster+'$') | map(attribute='uuid') | list)[0] }}"
  delegate_to: "{{ groups.get(mon_group_name | default('mons'))[0] }}"
  changed_when: false

[ansible-lint warning, jinja[spacing], cephadm-adopt.yml line 523: same spacing suggestion: '^'+ceph_rbd_mirror_remote_cluster+'$' -> '^' + ceph_rbd_mirror_remote_cluster + '$']
@@ -603,7 +603,7 @@
CEPHADM_IMAGE: '{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}'

- name: adopt ceph mgr daemons
hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) }}"
hosts: "{{ groups['mgrs'] | default(groups['mons']) | default(omit) }}"
serial: 1
become: true
gather_facts: false
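A note on the new hosts expression for the mgr play: it now looks up the literal group names instead of going through mgr_group_name/mon_group_name, falling back from mgrs to mons, with a final default(omit) that appears to guard inventories where neither group is defined. A hypothetical inventory to make the fallback concrete (host names invented):

    # hypothetical inventory
    all:
      children:
        mons:
          hosts:
            mon0: {}
        mgrs:
          hosts:
            mgr0: {}

    # hosts: "{{ groups['mgrs'] | default(groups['mons']) | default(omit) }}"
    # -> targets mgr0 here; if the mgrs group were absent, the same expression
    #    would fall back to the hosts of the mons group.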
8 changes: 0 additions & 8 deletions infrastructure-playbooks/purge-cluster.yml
@@ -512,7 +512,7 @@

- name: zap and destroy osds created by ceph-volume with lvm_volumes
  ceph_volume:
    data: "{{ item.data }}"
    data_vg: "{{ item.data_vg|default(omit) }}"
    journal: "{{ item.journal|default(omit) }}"
    journal_vg: "{{ item.journal_vg|default(omit) }}"

[ansible-lint warnings, jinja[spacing], purge-cluster.yml line 515: Jinja2 spacing could be improved for item.data_vg, item.db, item.db_vg, item.journal, item.journal_vg and item.wal_vg, e.g. {{ item.data_vg|default(omit) }} -> {{ item.data_vg | default(omit) }}]
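For reference (not part of the PR), applying the jinja[spacing] suggestions above would give the following parameter block; only the parameters visible in the hunk and its warnings are shown:

    - name: zap and destroy osds created by ceph-volume with lvm_volumes
      ceph_volume:
        data: "{{ item.data }}"
        data_vg: "{{ item.data_vg | default(omit) }}"
        db: "{{ item.db | default(omit) }}"
        db_vg: "{{ item.db_vg | default(omit) }}"
        journal: "{{ item.journal | default(omit) }}"
        journal_vg: "{{ item.journal_vg | default(omit) }}"
        wal_vg: "{{ item.wal_vg | default(omit) }}"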
@@ -1000,13 +1000,9 @@

- name: remove package dependencies on redhat
command: yum -y autoremove
args:
warn: no

- name: remove package dependencies on redhat again
command: yum -y autoremove
args:
warn: no
when:
ansible_facts['pkg_mgr'] == "yum"

@@ -1019,13 +1015,9 @@

- name: remove package dependencies on redhat
command: dnf -y autoremove
args:
warn: no

- name: remove package dependencies on redhat again
command: dnf -y autoremove
args:
warn: no
when:
ansible_facts['pkg_mgr'] == "dnf"
when:
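Background on the dropped args/warn blocks: newer ansible-core releases removed the warn option from the command and shell modules, so with ansible-core 2.15 any task still passing it fails with an unsupported-parameter error. A minimal sketch of one of the cleaned-up tasks (condition as shown in the hunk):

    - name: remove package dependencies on redhat
      command: yum -y autoremove
      when: ansible_facts['pkg_mgr'] == "yum"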
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mds.yml
@@ -24,7 +24,7 @@
tasks_from: container_binary

- name: perform checks, remove mds and print cluster health
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -165,4 +165,4 @@
post_tasks:
- name: show ceph health
command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
changed_when: false
changed_when: false
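The hosts change here (and in the other shrink playbooks below) replaces a templated groups[mon_group_name][0] lookup with Ansible's subscript host pattern, which addresses the first host of a group directly. A small illustration, with invented inventory hosts:

    # hypothetical inventory
    # [mons]
    # mon0
    # mon1
    # mon2

    - name: perform checks, remove mds and print cluster health
      hosts: mons[0]   # host pattern: the first host of the mons group, i.e. mon0
      become: true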
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mgr.yml
@@ -21,7 +21,7 @@
msg: gather facts on all Ceph hosts for following reference

- name: confirm if user really meant to remove manager from the ceph cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -130,4 +130,4 @@
post_tasks:
- name: show ceph health
command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
changed_when: false
changed_when: false
4 changes: 2 additions & 2 deletions infrastructure-playbooks/shrink-mon.yml
@@ -22,7 +22,7 @@
- debug: msg="gather facts on all Ceph hosts for following reference"

- name: confirm whether user really meant to remove monitor from the ceph cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
@@ -144,4 +144,4 @@
- name: show ceph mon status
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} mon stat"
delegate_to: "{{ mon_host }}"
changed_when: false
changed_when: false
6 changes: 3 additions & 3 deletions infrastructure-playbooks/shrink-osd.yml
@@ -14,16 +14,16 @@
- name: gather facts and check the init system

hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
- mons
- osds

become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"

- name: confirm whether user really meant to remove osd(s) from the cluster

hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]

become: true

2 changes: 1 addition & 1 deletion infrastructure-playbooks/shrink-rbdmirror.yml
@@ -22,7 +22,7 @@

- name: confirm whether user really meant to remove rbd mirror from the ceph
cluster
hosts: "{{ groups[mon_group_name][0] }}"
hosts: mons[0]
become: true
vars_prompt:
- name: ireallymeanit
16 changes: 8 additions & 8 deletions library/ceph_crush_rule.py
@@ -46,7 +46,8 @@
options:
name:
description:
- name of the Ceph Crush rule.
- name of the Ceph Crush rule. If state is 'info' - empty string
can be provided as a value to get all crush rules
required: true
cluster:
description:
@@ -189,7 +190,7 @@ def remove_rule(module, container_image=None):
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
name=dict(type='str', required=False),
cluster=dict(type='str', required=False, default='ceph'),
state=dict(type='str', required=False, choices=['present', 'absent', 'info'], default='present'), # noqa: E501
rule_type=dict(type='str', required=False, choices=['replicated', 'erasure']), # noqa: E501
Expand All @@ -202,6 +203,8 @@ def main():
supports_check_mode=True,
required_if=[
('state', 'present', ['rule_type']),
('state', 'present', ['name']),
('state', 'absent', ['name']),
('rule_type', 'replicated', ['bucket_root', 'bucket_type']),
('rule_type', 'erasure', ['profile'])
]
@@ -229,27 +232,24 @@ def main():
# will return either the image name or None
container_image = is_containerized()

rc, cmd, out, err = exec_command(module, get_rule(module, container_image=container_image)) # noqa: E501
if state == "present":
rc, cmd, out, err = exec_command(module, get_rule(module, container_image=container_image)) # noqa: E501
if rc != 0:
rc, cmd, out, err = exec_command(module, create_rule(module, container_image=container_image)) # noqa: E501
changed = True
else:
rule = json.loads(out)
if (rule['type'] == 1 and rule_type == 'erasure') or (rule['type'] == 3 and rule_type == 'replicated'): # noqa: E501
module.fail_json(msg="Can not convert crush rule {} to {}".format(name, rule_type), changed=False, rc=1) # noqa: E501

elif state == "absent":
rc, cmd, out, err = exec_command(module, get_rule(module, container_image=container_image)) # noqa: E501
if rc == 0:
rc, cmd, out, err = exec_command(module, remove_rule(module, container_image=container_image)) # noqa: E501
changed = True
else:
rc = 0
out = "Crush Rule {} doesn't exist".format(name)

elif state == "info":
rc, cmd, out, err = exec_command(module, get_rule(module, container_image=container_image)) # noqa: E501
else:
pass

exit_module(module=module, out=out, rc=rc, cmd=cmd, err=err, startd=startd, changed=changed) # noqa: E501

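With the relaxed argument spec above (name is now only required for state=present and state=absent), a hypothetical task that uses the documented empty-name form of state=info to fetch every crush rule could look like this (task names and the register variable are invented):

    - name: get all crush rules
      ceph_crush_rule:
        name: ''        # per the updated docs, an empty name with state=info returns all rules
        state: info
      register: crush_rules_info   # invented register name

    - name: show the rules returned by the module
      debug:
        var: crush_rules_info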
2 changes: 0 additions & 2 deletions library/ceph_key.py
@@ -1,5 +1,3 @@
#!/usr/bin/python3

# Copyright 2018, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
2 changes: 0 additions & 2 deletions library/ceph_pool.py
@@ -1,5 +1,3 @@
#!/usr/bin/python3

# Copyright 2020, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,4 +1,4 @@
# These are Python requirements needed to run ceph-ansible main
ansible-core>=2.12,<2.13
ansible-core>=2.15,<2.16
netaddr
six
@@ -18,7 +18,5 @@
# Remove yum caches so yum doesn't get confused if we are reinstalling a different ceph version
- name: purge yum cache
command: yum clean all
args:
warn: no
changed_when: false
when: ansible_facts['pkg_mgr'] == 'yum'