Ansible 7 compatibility fixes (#15)
+ Move from `ec2` to `amazon.aws.ec2_instance` module.
  + `volumes` structure changed; maintain old syntax in `cluster_defs`.
  + Spot instances no longer supported.
+ `ec2_instance` reports success while instances are still in the `pending` state, so include `pending` when checking instance state.
+ Fix for route53 rescue.
+ Use the older `instance_role` parameter name (rather than `iam_instance_profile`) for `ec2_instance`, as it is backwards-compatible.
+ Update minimum Ansible version to 5.6.0; update assertions.
+ Enable selecting Ansible version in Jenkinsfile_testsuite and Jenkinsfile_ops
  + Jenkinsfile_testsuite: remove `findAll()`, which can't escape the sandbox
+ Revert to community versions of `libvirt` where possible
+ Add retries to libvirt pool refresh (in case of concurrent background operations)
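To illustrate the first bullet: the old `ec2` module took flat per-volume options, whereas `amazon.aws.ec2_instance` nests the EBS options under an `ebs` sub-dict, so the role must translate the old `cluster_defs` syntax before passing it to the module. A minimal sketch (the variable names here are illustrative, not taken from the repo):

```yaml
# Old (ec2 module) volume syntax, as still written in cluster_defs:
old_style_volume:
  device_name: /dev/sdf
  volume_type: gp3
  volume_size: 1
  iops: 3000
  encrypted: true
  delete_on_termination: true

# New (amazon.aws.ec2_instance) syntax: EBS options move under an 'ebs' key:
new_style_volume:
  device_name: /dev/sdf
  ebs:
    volume_type: gp3
    volume_size: 1
    iops: 3000
    encrypted: true
    delete_on_termination: true
```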
dseeley authored Dec 31, 2022
1 parent 15dffde commit e832c5f
Showing 18 changed files with 110 additions and 104 deletions.
4 changes: 2 additions & 2 deletions EXAMPLE/Pipfile
Original file line number Diff line number Diff line change
@@ -9,7 +9,7 @@ boto3 = "*"
boto = "*"
botocore = "*"
requests = "*"
-ansible = ">=2.9"
+ansible = ">=5.6"
jmespath = "*"
dnspython = "*"
google-auth = "*"
@@ -24,4 +24,4 @@ PyVmomi = "*"
[dev-packages]

[requires]
-python_version = "3"
+python_version = "3.10"
6 changes: 3 additions & 3 deletions EXAMPLE/README.md
@@ -7,8 +7,8 @@ _**Please refer to the full [README.md](https://github.com/dseeley/clusterverse/
Contributions are welcome and encouraged. Please see [CONTRIBUTING.md](https://github.com/dseeley/clusterverse/blob/master/CONTRIBUTING.md) for details.

## Requirements
-+ Ansible >= 2.9
-+ Python >= 2.7
++ Ansible >= 5.6.0
++ Python >= 3.8


---
@@ -93,9 +93,9 @@ ansible-playbook redeploy.yml -e buildenv=sandbox -e cloud_type=azure -e region=
### Mandatory command-line variables:
+ `-e buildenv=<sandbox>` - The environment (dev, stage, etc), which must be an attribute of `cluster_vars` defined in `group_vars/<clusterid>/cluster_vars.yml`
+ `-e canary=['start', 'finish', 'filter', 'none', 'tidy']` - Specify whether to start, finish or filter a canary redeploy (or 'none', to redeploy the whole cluster in one command). See below (`-e canary_filter_regex`) for `canary=filter`.
-+ `-e redeploy_scheme=<subrole_name>` - The scheme corresponds to one defined in `roles/clusterverse/redeploy`

### Extra variables:
++ `-e redeploy_scheme=<subrole_name>` - The scheme corresponds to one defined in `roles/clusterverse/redeploy`
+ `-e canary_tidy_on_success=[true|false]` - Whether to run the tidy (remove the replaced VMs and DNS) on successful redeploy
+ `-e canary_filter_regex='^.*-test-sysdisks.*$'` - Sets the regex pattern used to filter the target hosts by their hostnames - mandatory when using `canary=filter`
+ `-e myhosttypes="master,slave"`- In redeployment you can define which host type you like to redeploy. If not defined it will redeploy all host types
@@ -44,7 +44,7 @@ cluster_vars:
auto_volumes:
- { device_name: "/dev/sda1", mountpoint: "/", fstype: "ext4", volume_type: "gp3", volume_size: 8, encrypted: True, delete_on_termination: true }
- { device_name: "/dev/sdf", mountpoint: "/media/mysvc", fstype: "ext4", volume_type: "gp3", volume_size: 1, encrypted: True, delete_on_termination: true }
-      - { device_name: "/dev/sdg", mountpoint: "/media/mysvc", fstype: "ext4", volume_type: "gp3", volume_size: 1, iops: 100, encrypted: True, delete_on_termination: true }
+      - { device_name: "/dev/sdg", mountpoint: "/media/mysvc", fstype: "ext4", volume_type: "gp3", volume_size: 1, iops: 3000, encrypted: True, delete_on_termination: true }
lvmparams: { vg_name: "vg0", lv_name: "lv0", lv_size: "100%VG" }
flavor: t4g.nano
# image: "ami-08ff82115239305ce" # eu-west-1 22.04 arm64 hvm-ssd 20220616. Ubuntu images can be located at https://cloud-images.ubuntu.com/locator/
@@ -62,7 +62,7 @@ cluster_vars:
auto_volumes:
- { device_name: "/dev/sda1", mountpoint: "/", fstype: "ext4", volume_type: "gp3", volume_size: 8, encrypted: True, delete_on_termination: true }
- { device_name: "/dev/sdf", mountpoint: "/media/mysvc", fstype: "ext4", volume_type: "gp3", volume_size: 1, encrypted: True, delete_on_termination: true, perms: { owner: "root", group: "root", mode: "775" } }
-      - { device_name: "/dev/sdg", mountpoint: "/media/mysvc2", fstype: "ext4", volume_type: "gp3", volume_size: 1, iops: 100, encrypted: True, delete_on_termination: true }
+      - { device_name: "/dev/sdg", mountpoint: "/media/mysvc2", fstype: "ext4", volume_type: "gp3", volume_size: 1, iops: 3000, encrypted: True, delete_on_termination: true }
flavor: t3a.nano
version: "{{sysdisks_version | default('')}}"
vms_by_az: { a: 1, b: 1, c: 0 }
4 changes: 2 additions & 2 deletions EXAMPLE/cluster_defs/azure/cluster_vars__cloud.yml
@@ -6,8 +6,8 @@ ssh_whitelist: ['10.0.0.0/8']
redeploy_schemes_supported: ['_scheme_addallnew_rmdisk_rollback', '_scheme_addnewvm_rmdisk_rollback', '_scheme_rmvm_rmdisk_only'] # TODO: support _scheme_rmvm_keepdisk_rollback

## Source images from which to clone. Set these as variables so they can be selected on command line (for automated testing).
-_ubuntu2204image: { "publisher": "canonical", "offer": "0001-com-ubuntu-server-jammy", "sku": "22_04-lts-gen2", "version": "latest" }
-_ubuntu2004image: { "publisher": "canonical", "offer": "0001-com-ubuntu-server-focal", "sku": "20_04-lts-gen2", "version": "latest" } # or specific: "version": "20.04.202107200"
+_ubuntu2204image: { "publisher": "canonical", "offer": "0001-com-ubuntu-server-jammy", "sku": "22_04-lts-gen2", "version": "latest" } # or specific version: "version": "22.04.202206220"
+_ubuntu2004image: { "publisher": "canonical", "offer": "0001-com-ubuntu-server-focal", "sku": "20_04-lts-gen2", "version": "latest" }
_ubuntu1804image: { "publisher": "canonical", "offer": "UbuntuServer", "sku": "18_04-lts-gen2", "version": "latest" }
_centos7image: { "publisher": "eurolinuxspzoo1620639373013", "offer": "centos-7-9-free", "sku": "centos-7-9-free", "version": "latest" }
_alma8image: { "publisher": "almalinux", "offer": "almalinux", "sku": "8_5-gen2", "version": "latest" }
19 changes: 11 additions & 8 deletions EXAMPLE/jenkinsfiles/Jenkinsfile_ops
@@ -8,7 +8,7 @@ def DEFAULT_CLUSTERVERSE_TESTSUITE_URL = "https://github.com/dseeley/clustervers
def DEFAULT_CLUSTERVERSE_TESTSUITE_BRANCH = "master"

//This allows us to create our own Docker image for this specific use-case. Once it is built, it will not be rebuilt, so only adds delay the first time we use it.
-def create_custom_image(image_name, params = "") {
+def create_custom_image(image_name, build_opts = "") {
// Create a lock to prevent building the same image in parallel
lock('IMAGEBUILDLOCK__' + image_name + '__' + env.NODE_NAME) {
def jenkins_username = sh(script: 'whoami', returnStdout: true).trim()
@@ -22,18 +22,20 @@ def create_custom_image(image_name, params = "") {
ENV HOME=${env.JENKINS_HOME}
ENV PIPENV_VENV_IN_PROJECT=true
ENV TZ=Europe/London
SHELL ["/bin/bash", "-c"]
RUN groupadd -g ${jenkins_gid} ${jenkins_username} && useradd -m -u ${jenkins_uid} -g ${jenkins_gid} -s /bin/bash ${jenkins_username}
### Note: use pip to install pipenv (not apt) to avoid pypa/pipenv#2196 (when using PIPENV_VENV_IN_PROJECT)
RUN apt-get update \
&& apt-get install -y git iproute2 \
python3-boto python3-boto3 python3-dev python3-distutils python3-docker python3-dnspython python3-google-auth python3-googleapi python3-jinja2 python3-jmespath python3-libcloud python3-libvirt python3-lxml python3-netaddr python3-paramiko python3-passlib python3-pip python3-pyvmomi python3-ruamel.yaml python3-setuptools python3-wheel python3-xmltodict \
&& pip3 install pycdlib pipenv ansible==5.9.0 \
`## uncomment if ansible=6.0.0 # && ansible-galaxy collection install azure.azcollection -p \$(pip3 show ansible | grep ^Location | sed 's/Location: \\(.*\\)/\\1/') --force` \
&& pip3 install -r \$(pip3 show ansible | grep ^Location | sed 's/Location: \\(.*\\)/\\1/')/ansible_collections/azure/azcollection/requirements-azure.txt
&& apt-get install -y git iproute2 python3-boto python3-boto3 python3-dev python3-distutils python3-docker python3-dnspython python3-google-auth python3-googleapi python3-jinja2 python3-jmespath python3-libcloud python3-libvirt python3-lxml python3-netaddr python3-paramiko python3-passlib python3-pip python3-pyvmomi python3-ruamel.yaml python3-setuptools python3-wheel python3-xmltodict \
&& pip3 install pycdlib pipenv ansible==${params.ANSIBLE_VERSION}
RUN if [ \$(echo -e "\$(pip3 show ansible | grep ^Version | sed -r 's/^Version: (.*)/\\1/')\\n6.4.0"|sort|head -1) != "6.4.0" ]; then ansible-galaxy collection install community.libvirt:==1.2.0 -p \$(pip3 show ansible | grep ^Location | sed 's/Location: \\(.*\\)/\\1/') --force; fi \
&& if [ \$(echo -e "\$(pip3 show ansible | grep ^Version | sed -r 's/^Version: (.*)/\\1/')\\n6.1.0"|sort|head -1) != "6.1.0" ]; then ansible-galaxy collection install azure.azcollection:==1.13.0 -p \$(pip3 show ansible | grep ^Location | sed 's/Location: \\(.*\\)/\\1/') --force; fi \
&& pip3 install -r \$(pip3 show ansible | grep ^Location | sed -r 's/^Location: (.*)/\\1/')/ansible_collections/azure/azcollection/requirements-azure.txt
""".stripIndent()

writeFile(file: "Dockerfile", text: dockerfile, encoding: "UTF-8")
-        custom_build = docker.build(image_name, params + "--network host .")
+        custom_build = docker.build(image_name, build_opts + "--network host .")

return (custom_build)
}
@@ -65,6 +67,7 @@ properties([
string(name: 'CV_GIT_BRANCH', defaultValue: DEFAULT_CLUSTERVERSE_BRANCH, description: "The clusterverse branch to test."),
credentials(name: 'CV_GIT_CREDS', credentialType: 'com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl', defaultValue: 'GITHUB_SVC_USER', description: 'Jenkins username/password credentials for GitHub', required: false),
string(name: 'USER_CMDLINE_VARS', defaultValue: '', description: "Any user-defined command-line parameters."),
+        string(name: 'ANSIBLE_VERSION', defaultValue: '7.1.0', description: "Ansible version."),
])
])

@@ -101,7 +104,7 @@ node {
// docker.build("cvops", "--build-arg JENKINS_USERNAME=${jenkins_username} --build-arg JENKINS_UID=${jenkins_uid} --build-arg JENKINS_GID=${jenkins_gid} ./jenkinsfiles").inside("${docker_parent_net_str} -e JENKINS_HOME=${env.JENKINS_HOME}") {

/*** Create a custom docker image within this Jenkinsfile ***/
-    create_custom_image("ubuntu_cvtest", "").inside("--init ${docker_parent_net_str}") {
+    create_custom_image("ubuntu_cvtest_${params.ANSIBLE_VERSION}", "").inside("--init ${docker_parent_net_str}") {
stage('Setup Environment') {
sh 'printenv | sort'
println("common_deploy_vars params:" + params)
4 changes: 2 additions & 2 deletions Pipfile
@@ -9,7 +9,7 @@ boto3 = "*"
boto = "*"
botocore = "*"
requests = "*"
-ansible = ">=2.9"
+ansible = ">=5.6"
jmespath = "*"
dnspython = "*"
google-auth = "*"
@@ -22,4 +22,4 @@ google-api-python-client = "*"
[dev-packages]

[requires]
-python_version = "3.7"
+python_version = "3.10"
10 changes: 2 additions & 8 deletions README.md
@@ -41,20 +41,14 @@ To active the pipenv:
### libvirt (Qemu)
+ It is non-trivial to set up username/password access to a remote libvirt host, so we use an ssh key instead.
+ Your ssh user should be a member of the `libvirt` and `kvm` groups.
-+ Store the config in
-```yaml
-cluster_vars:
-  libvirt_ip:
-  username:
-  private_key:
-  storage_pool:
-```
++ Store the config in `cluster_vars.libvirt`
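For context, a minimal sketch of the consolidated `cluster_vars.libvirt` structure that replaces the old flat keys. Only `hypervisor` and `username` are confirmed by the task URIs in this commit; the other key names (and all values) are assumptions carried over from the removed example:

```yaml
# Sketch only: 'hypervisor' and 'username' appear in the qemu+ssh URIs in this
# commit; 'private_key' and 'storage_pool' are assumed from the old flat config.
cluster_vars:
  libvirt:
    hypervisor: 192.168.1.10              # was the old top-level libvirt_ip
    username: libvirt_svc
    private_key: ~/.ssh/id_rsa__libvirt_svc
    storage_pool: default
```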

### ESXi (free)
+ Username & password for a privileged user on an ESXi host
+ SSH must be enabled on the host
+ Set the `Config.HostAgent.vmacore.soap.maxSessionCount` variable to 0 to allow many concurrent tests to run.
+ Set the `Security.SshSessionLimit` variable to max (100) to allow as many ssh sessions as possible.
++ Store the config in `cluster_vars.esxi`

### Azure
+ Create an Azure account.
16 changes: 6 additions & 10 deletions _dependencies/tasks/main.yml
@@ -36,20 +36,16 @@

- name: Preflight check
block:
- name: Ansible requirement
assert:
that: "ansible_version.full is version('2.12.4', '>=')"
fail_msg: "ansible-core 2.12.4 (pypi v5.6.0) is required."

- name: assertions based on required collections
block:
- assert:
that: "(ansible_version.full is version('2.9.6', '>=') and ansible_version.full is version('2.10.6', '<=')) or ('community.aws' in galaxy_collections and galaxy_collections['community.aws'].version is version('1.5.0', '>='))"
fail_msg: "If Ansible > 2.9.6 then community.aws > 1.5.0 is required for valid community.aws.route53 support (by default in Ansible v4)."

- name: azure collection requirements
block:
- assert: { that: "ansible_version.full is version_compare('2.10', '>=')", fail_msg: "ansible-core > 2.10 required for Azure support." }
- assert: { that: "'azure.azcollection' in galaxy_collections", fail_msg: "Please ensure the azure.azcollection collection is installed: ansible-galaxy collection install azure.azcollection (or ansible-galaxy collection install --ignore-errors -fr requirements.yml)" }
when: cluster_vars.type == "azure"

- name: libvirt collection requirements
block:
- assert: { that: "galaxy_collections['community.libvirt'].version is version('1.2.0', '>=')", fail_msg: "community.libvirt > 1.2.0 required for libvirt support (default in Ansible >= 6.3.0)." }
- assert: { that: "'dseeley.libvirt' in galaxy_collections", fail_msg: "Please ensure the dseeley.libvirt collection is installed: ansible-galaxy collection install git+https://github.com/dseeley/libvirt.git (or ansible-galaxy collection install --ignore-errors -fr requirements.yml)" }
- assert: { that: "'dseeley.inventory_lookup' in galaxy_collections", fail_msg: "Please ensure the dseeley.inventory_lookup collection is installed: ansible-galaxy collection install dseeley.inventory_lookup (or ansible-galaxy collection install --ignore-errors -fr requirements.yml)" }
when: cluster_vars.type == "libvirt"
6 changes: 3 additions & 3 deletions clean/tasks/aws.yml
@@ -3,17 +3,17 @@
- name: clean/aws | clean vms
block:
- name: clean/aws | Remove instances termination protection
-      ec2:
+      amazon.aws.ec2_instance:
aws_access_key: "{{cluster_vars[buildenv].aws_access_key}}"
aws_secret_key: "{{cluster_vars[buildenv].aws_secret_key}}"
region: "{{ cluster_vars.region }}"
state: "{{ item.instance_state }}"
-        termination_protection: "no"
+        termination_protection: false
instance_ids: ["{{ item.instance_id }}"]
with_items: "{{ hosts_to_clean | json_query(\"[].{instance_id:instance_id, instance_state: instance_state}\") | default([]) }}"

- name: clean/aws | Delete VMs
-      ec2:
+      amazon.aws.ec2_instance:
aws_access_key: "{{cluster_vars[buildenv].aws_access_key}}"
aws_secret_key: "{{cluster_vars[buildenv].aws_secret_key}}"
region: "{{ cluster_vars.region }}"
2 changes: 1 addition & 1 deletion clean/tasks/libvirt.yml
@@ -3,7 +3,7 @@
- name: clean/libvirt
block:
- name: clean/libvirt | 'destroy' (forcible shutdown) VM
-      dseeley.libvirt.virt:
+      community.libvirt.virt:
uri: 'qemu+ssh://{{ cluster_vars.libvirt.username }}@{{ cluster_vars.libvirt.hypervisor }}/system?keyfile=id_rsa__libvirt_svc&no_verify=1'
name: "{{item.name}}"
state: destroyed
2 changes: 1 addition & 1 deletion cluster_hosts/tasks/get_cluster_hosts_state_aws.yml
@@ -4,7 +4,7 @@
ec2_instance_info:
filters:
"tag:cluster_name": "{{cluster_name}}"
-      "instance-state-name": ["running", "stopped"]
+      "instance-state-name": ["running", "pending", "stopped"]
aws_access_key: "{{cluster_vars[buildenv].aws_access_key}}"
aws_secret_key: "{{cluster_vars[buildenv].aws_secret_key}}"
region: "{{cluster_vars.region}}"
4 changes: 2 additions & 2 deletions cluster_hosts/tasks/get_cluster_hosts_target_libvirt.yml
@@ -1,7 +1,7 @@
---

- name: get_cluster_hosts_target/libvirt | Get basic instance info of all vms - to get filtered images
-  dseeley.libvirt.virt:
+  community.libvirt.virt:
uri: 'qemu+ssh://{{ cluster_vars.libvirt.username }}@{{ cluster_vars.libvirt.hypervisor }}/system?keyfile=id_rsa__libvirt_svc&no_verify=1'
command: list_vms
delegate_to: localhost
@@ -13,7 +13,7 @@
debug: msg={{ latest_machine }}

- name: get_cluster_hosts_target/libvirt | get_xml of the latest image that matches cluster_vars.image
-  dseeley.libvirt.virt:
+  community.libvirt.virt:
uri: 'qemu+ssh://{{ cluster_vars.libvirt.username }}@{{ cluster_vars.libvirt.hypervisor }}/system?keyfile=id_rsa__libvirt_svc&no_verify=1'
command: get_xml
name: "{{ latest_machine }}"
8 changes: 4 additions & 4 deletions config/tasks/create_dns_a.yml
@@ -83,12 +83,12 @@
aws_access_key: "{{cluster_vars[buildenv].aws_access_key}}"
aws_secret_key: "{{cluster_vars[buildenv].aws_secret_key}}"
state: present
-        zone: "{{item.invocation.module_args.zone}}"
-        record: "{{item.invocation.module_args.record}}"
+        zone: "{{cluster_vars.dns_nameserver_zone}}"
+        record: "{{item.item.item.hostname}}.{{cluster_vars.dns_user_domain}}"
type: A
ttl: 60
-        value: "{{item.invocation.module_args.value}}"
-        private_zone: "{{item.invocation.module_args.private_zone}}"
+        value: "{{item.item.item.ipv4}}"
+        private_zone: "{{item.item.item.private_zone}}"
overwrite: true
wait: yes
become: false