[FEATURE] A refactoring of the rollback code... (#47)
* A refactoring of the rollback code, which also necessitated a refactor of some other variables.  Speed is considerably improved in all circumstances, and a couple of edge cases are fixed.  This breaks the rollback interface: `predeleterole` now receives a list of VMs to delete in 'hosts_to_remove', rather than a single VM (previously 'host_to_redeploy').  This has the potential to massively increase redeploy speed if clustering affinity is configured.

+ A new VM tag/label 'lifecycle_state' is created, describing the lifecycle state of the VM.  It is one of 'current', 'retiring' or 'redeployfail'.
  + A cluster's VMs will now always have the same epoch suffix (even when adding to the cluster)
  + The '-e clean' functionality has changed; you can now either clean hosts in every 'lifecycle_state' ('-e clean=_all_'), or just the VMs in one of the above states (e.g. '-e clean=retiring').
  + Redeploy will fail with an assertion if the cluster topology has changed (i.e. the number of VMs defined does not match the number actually deployed).

+ A new global fact 'cluster_hosts_state' is created that contains information on all running VMs matching the derived cluster_name; i.e. the _state_ of the cluster.
  + Variables in 'cluster_hosts_state' are used instead of constantly querying the infrastructure, especially during redeploy.
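
A rough illustration of how this fact can be consumed (a sketch, not code from this commit; it assumes the same 'tagslabels' structure that the json_query filter in `clusterverse_label_upgrade_v1-v2.yml` relies on, and uses the 'cluster_hosts_target' variable mentioned later in this message):
```
- name: Select the VMs that are being retired, without re-querying the cloud
  set_fact:
    retiring_hosts: "{{ cluster_hosts_state | json_query(\"[?tagslabels.lifecycle_state=='retiring']\") }}"

- name: Assert that the number of 'current' VMs matches the target topology
  assert:
    that: (cluster_hosts_state | json_query("[?tagslabels.lifecycle_state=='current']") | length) == (cluster_hosts_target | length)
```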

+ Alternate redeploy scheme: '_scheme_addallnew_rmdisk_rollback'.
  + A full mirror of the cluster is deployed.
  + If the process proceeds correctly:
    + `predeleterole` is called with a _list_ of the old VMs, in 'hosts_to_remove' (a sketch of a consumer's task follows this list).
    + The old VMs are stopped.
  + If the process fails for any reason, the old VMs are reinstated, and the new VMs stopped (rollback)
  + To delete the old VMs, either set '-e canary_tidy_on_success=true', or call redeploy.yml with '-e canary=tidy'
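
A consumer's `predeleterole` might then look something like this hedged sketch (the drain command is a placeholder, and each element is assumed to carry a 'name' field, as the cluster_hosts_state entries do):
```
- name: predeleterole | Drain each VM that is about to be removed from the cluster
  command: "/usr/local/bin/drain-node {{ item.name }}"    # placeholder for an application-specific drain/decommission step
  with_items: "{{ hosts_to_remove }}"
```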

+ The existing '_scheme_addnewvm_rmdisk_rollback' scheme is refactored to use the new variables. It is functionally similar to its previous behaviour, but does not terminate the old VMs on success.
  + For each node in the cluster:
    + Create a new VM
    + Run `predeleterole` with the previous node passed as a single-element _list_ in 'hosts_to_remove' (for interface compatibility).
    + Shut down the previous node.
  + If the process fails for any reason, the old VMs are reinstated, and any new VMs that were built are stopped (rollback)
  + To delete the old VMs, either set '-e canary_tidy_on_success=true', or call redeploy.yml with '-e canary=tidy'

Fixes #25

* Ensure that the 'release' tag/label is consistent within a cluster (e.g. during a scaling deploy); don't allow the user to set a different label, and if one is not specified on the command line, apply the existing label.

* Move location of release_version logic for redeploy

* Fix canary_tidy_on_success to apply only when canary is "none" or "finish"
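
A sketch of the intended guard (illustrative only; 'tidy.yml' is a hypothetical task-file name, not from this commit):
```
- name: Tidy the replaced VMs automatically after a successful redeploy
  include_tasks: tidy.yml    # hypothetical task-file name
  when: canary_tidy_on_success is defined and canary_tidy_on_success|bool and canary in ['none', 'finish']
```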

* + Add a short sleep to allow the DNS operation to complete.  The records are possibly not yet replicated when the Ansible module returns; without a small sleep, the dig command will sometimes fail and create a negative cache entry, which means the name won't resolve until the SOA TTL expires.
+ Remove `delegate_to: localhost` on the dig command, so that it works when we are running through a bastion host.
  + If the dig command needs to check an external IP, use 8.8.8.8; otherwise it will default to the cloud DNS resolver and return the internal VPC IP, which will not validate against ansible_host (see the sketch after this list).
+ Add some sequence diagrams showing the redeploy lifecycle_state transitions for _scheme_addallnew_rmdisk_rollback
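
A minimal sketch of the kind of check this implies (the fqdn construction from `cluster_vars.dns_zone_external` and the 10-second pause are assumptions; the commit only specifies the short sleep and the 8.8.8.8 resolver):
```
- name: Pause briefly so that the new DNS record can replicate before it is queried
  pause:
    seconds: 10

# No delegate_to localhost here, so the check also works when connecting via a bastion host
- name: Resolve the external fqdn via a public resolver and compare it with ansible_host
  command: "dig +short {{ inventory_hostname }}.{{ cluster_vars.dns_zone_external }} @8.8.8.8"
  register: dig_result
  until: (dig_result.stdout_lines | first | default('')) == ansible_host
  retries: 5
  delay: 5
```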

* + Enable redeploying to larger or smaller clusters.
+ Prevent redeploy from running on a cluster built with an older version of clusterverse.
  + Add a new playbook `clusterverse_label_upgrade_v1-v2.yml`, to add the necessary labels to an older cluster.
+ Add a skip_release_version_check option.
+ Make the external DNS resolver a variable.
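
Hedged extra-vars sketch of the new options (only skip_release_version_check is a name taken from this commit; the resolver variable name is hypothetical):
```
# e.g. supplied via --extra-vars "@overrides.yml"
skip_release_version_check: true      # bypass the release_version consistency check during redeploy
dns_resolver_external: "8.8.8.8"      # hypothetical name for the now-configurable external DNS resolver
```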

* + Rename cluster_hosts_flat to cluster_hosts_target.
+ Change the nested logging output to print a useful trace.

* Fix the DNS dig check in GCP: only add a '.' to the fqdn when there isn't already one at the end.
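
The guard amounts to something like this (a sketch; 'new_fqdn' and 'dig_fqdn' are illustrative variable names):
```
- name: Append a trailing '.' to the fqdn only if one is not already present
  set_fact:
    dig_fqdn: "{{ new_fqdn if new_fqdn.endswith('.') else new_fqdn + '.' }}"
```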

* Only allow canary=tidy to tidy (remove) powered-down VMs.  Tidy is meant to clean up after a successful redeploy; if there are non-current machines still powered up, something is wrong.
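
A hedged sketch of that guard (the 'instance_state' field name and 'stopped' value are assumptions about the cluster_hosts_state entries):
```
- name: canary=tidy | Refuse to remove any non-current VM that is still powered up
  assert:
    that: (cluster_hosts_state | json_query("[?tagslabels.lifecycle_state!='current' && instance_state!='stopped']") | length) == 0
    fail_msg: "Found powered-up non-current VMs; refusing to tidy - the redeploy has probably not completed cleanly."
```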

* Fix for canary_tidy_on_success

* Fix a merge error in the filebeat/metricbeat installation.

Co-authored-by: Dougal Seeley <[email protected]>
dseeley and Dougal Seeley authored Apr 14, 2020
1 parent aed55d0 commit 321486f
Showing 79 changed files with 1,571 additions and 2,014 deletions.
2 changes: 1 addition & 1 deletion EXAMPLE/.vaultpass-client.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
import os
import sys
# import argparse
21 changes: 11 additions & 10 deletions EXAMPLE/README.md
@@ -48,14 +48,14 @@ The `cluster.yml` sub-role immutably deploys a cluster from the config defined a
### AWS:
```
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected]
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] --tags=clusterverse_clean -e clean=true -e release_version=v1.0.1
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] -e clean=true -e release_version=v1.0.1
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_ -e release_version=v1.0.1
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] -e clean=_all_
```
### GCP:
```
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected]
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected] --tags=clusterverse_clean -e clean=true
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected] -e clean=true
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_ -e release_version=v1.0.
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected] -e clean=_all_
```

### Mandatory command-line variables:
@@ -67,7 +67,7 @@ ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster
+ `-e app_class=<proxy>` - Normally defined in `group_vars/<clusterid>/cluster_vars.yml`. The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn
+ `-e release_version=<v1.0.1>` - Identifies the application version that is being deployed.
+ `-e dns_tld_external=<test.example.com>` - Normally defined in `group_vars/<clusterid>/cluster_vars.yml`.
+ `-e clean=true` - Deletes all existing VMs and security groups before creating
+ `-e clean=[current|retiring|redeployfail|_all_]` - Deletes VMs in `lifecycle_state`, or `_all_`, as well as networking and security groups
+ `-e do_package_upgrade=true` - Upgrade the OS packages (not good for determinism)
+ `-e reboot_on_package_upgrade=true` - After updating packages, performs a reboot on all nodes.
+ `-e prometheus_node_exporter_install=false` - Does not install the prometheus node_exporter
@@ -76,7 +76,7 @@ ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster
+ `-e create_gce_network=true` - Create GCP network and subnetwork (probably needed if creating from scratch and using public network)

### Tags
+ `clusterverse_clean`: Deletes all VMs and security groups (also needs `-e clean=true` on command line)
+ `clusterverse_clean`: Deletes all VMs and security groups (also needs `-e clean=[current|retiring|redeployfail|_all_]` on command line)
+ `clusterverse_create`: Creates only EC2 VMs, based on the hosttype_vars values in group_vars/all/cluster.yml
+ `clusterverse_config`: Updates packages, sets hostname, adds hosts to DNS

@@ -88,18 +88,19 @@ The `redeploy.yml` sub-role will completely redeploy the cluster; this is useful

### AWS:
```
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected]
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] -e canary=none
```
### GCP:
```
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected]
ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_gce_euw1 [email protected] -e canary=none
```

### Mandatory command-line variables:
+ `-e clusterid=<vtp_aws_euw1>` - A directory named `clusterid` must be present in `group_vars`. Holds the parameters that define the cluster; enables a multi-tenanted repository.
+ `-e buildenv=<sandbox>` - The environment (dev, stage, etc), which must be an attribute of `cluster_vars` defined in `group_vars/<clusterid>/cluster_vars.yml`
+ `-e canary=['start', 'finish', 'none']` - Specify whether to start or finish a canary deploy, or 'none' deploy
+ `-e canary=['start', 'finish', 'none', 'tidy']` - Specify whether to start or finish a canary deploy, or 'none' deploy

### Extra variables:
+ `-e 'redeploy_scheme'=<subrole_name>` - The scheme corresponds to one defined in
+ `-e redeploy_scheme=<subrole_name>` - The scheme corresponds to one defined in `roles/clusterverse/redeploy`
+ `-e canary_tidy_on_success=[true|false]` - Whether to run the tidy (remove the replaced VMs and DNS) on successful redeploy
+ `-e myhosttypes="master,slave"`- In redeployment you can define which host type you like to redeploy. If not defined it will redeploy all host types
2 changes: 1 addition & 1 deletion EXAMPLE/cluster.yml
@@ -4,7 +4,7 @@
hosts: localhost
connection: local
roles:
- { role: clusterverse/clean, tags: [clusterverse_clean], when: clean is defined and clean|bool }
- { role: clusterverse/clean, tags: [clusterverse_clean], when: clean is defined }
- { role: clusterverse/create, tags: [clusterverse_create] }
- { role: clusterverse/dynamic_inventory, tags: [clusterverse_dynamic_inventory] }

42 changes: 42 additions & 0 deletions EXAMPLE/clusterverse_label_upgrade_v1-v2.yml
@@ -0,0 +1,42 @@
---

- name: Clusterverse label upgrade v1-v2
hosts: localhost
connection: local
gather_facts: true
tasks:
- import_role:
name: 'clusterverse/_dependencies'

- import_role:
name: 'clusterverse/cluster_hosts'
tasks_from: get_cluster_hosts_state.yml

- block:
- name: clusterverse_label_upgrade_v1-v2 | Add lifecycle_state and cluster_suffix label to AWS VM
ec2_tag:
aws_access_key: "{{cluster_vars[buildenv].aws_access_key}}"
aws_secret_key: "{{cluster_vars[buildenv].aws_secret_key}}"
region: "{{cluster_vars.region}}"
resource: "{{ item.instance_id }}"
tags:
lifecycle_state: "current"
cluster_suffix: "{{ item.name | regex_replace('^.*-(.*)$', '\\1') }}"
with_items: "{{ hosts_to_relabel }}"
when: cluster_vars.type == "aws"

- name: clusterverse_label_upgrade_v1-v2 | Add lifecycle_state and cluster_suffix label to GCE VM
gce_labels:
resource_name: "{{item.name}}"
project_id: "{{cluster_vars.project_id}}"
resource_location: "{{item.regionzone}}"
credentials_file: "{{gcp_credentials_file}}"
resource_type: instances
labels:
lifecycle_state: "current"
cluster_suffix: "{{ item.name | regex_replace('^.*-(.*)$', '\\1') }}"
state: present
with_items: "{{ hosts_to_relabel }}"
when: cluster_vars.type == "gce"
vars:
hosts_to_relabel: "{{ cluster_hosts_state | json_query(\"[?!(tagslabels.cluster_suffix) || !(tagslabels.lifecycle_state)]\") }}"
26 changes: 13 additions & 13 deletions EXAMPLE/group_vars/_skel/cluster_vars.yml
@@ -7,9 +7,9 @@ gcp_credentials_json: "{{ lookup('file', gcp_credentials_file) | default({'proje
app_name: "test" # The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name.
app_class: "test" # The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn

dns_tld_external: "" # Top-level domain for external access. Leave blank if no external DNS (use IPs only)
dns_tld_external: "" # Top-level domain for external access. gcloud dns needs a trailing '.'. Leave blank if no external DNS (use IPs only)

beats_target_hosts: [] # The destination hosts for e.g. filebeat/ metricbeat logs
beats_target_hosts: [] # The destination hosts for e.g. filebeat/ metricbeat logs

## Vulnerability scanners - Tenable and/ or Qualys cloud agents:
cloud_agent:
@@ -54,24 +54,24 @@ cluster_name: "{{app_name}}-{{buildenv}}" # Identifies the cluster within
# secgroup_new:
# - proto: "tcp"
# ports: ["22"]
# cidr_ip: 0.0.0.0/0
# cidr_ip: "0.0.0.0/0"
# rule_desc: "SSH Access"
# - proto: "tcp"
# ports: ["{{ prometheus_node_exporter_port | default(9100) }}"]
# group_name: ["{{buildenv}}-private-sg"]
# group_desc: "Access from all VMs attached to {{buildenv}}-private-sg"
# rule_desc: "Prometheus instances attached to {{buildenv}}-private-sg can access the exporter port(s)."
# - proto: all
# group_name: ["{{cluster_name}}-sg"]
# group_desc: "{{ cluster_name }} rules"
# rule_desc: "Access from all VMs attached to the {{ cluster_name }}-sg group"
# sandbox:
# hosttype_vars:
# sys: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3.micro, auto_volumes: []}
# #sys: {vms_by_az: {a: 1, b: 1, c: 1}, skip_beat_install:true, flavor: t3.micro, auto_volumes: []}
# #sysdisks: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3.micro, auto_volumes: [{"device_name": "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true, perms: {owner: "root", group: "sudo", mode: "775"} }, {"device_name": "/dev/sdc", mountpoint: "/var/log/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}, {"device_name": "/dev/sdd", mountpoint: "/var/log/mysvc3", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
# #sysnvme_multi: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc2", fstype: ext4, volume_size: 2500}]} } }
# #sysnvme_lvm: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}], lvmparams: {vg_name: "vg0", lv_name: "lv0", lv_size: "100%FREE"} } }
# sys: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3a.nano, auto_volumes: []}
# #sysnobeats: {vms_by_az: {a: 1, b: 1, c: 1}, skip_beat_install:true, flavor: t3a.nano, auto_volumes: []
# #sysdisks: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3a.nano, auto_volumes: [{"device_name": "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true, perms: {owner: "root", group: "sudo", mode: "775"} }, {"device_name": "/dev/sdc", mountpoint: "/var/log/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}, {"device_name": "/dev/sdd", mountpoint: "/var/log/mysvc3", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
# #hostnvme_multi: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc2", fstype: ext4, volume_size: 2500}]} } }
# #hostnvme_lvm: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}], lvmparams: {vg_name: "vg0", lv_name: "lv0", lv_size: "+100%FREE"} } }
# #hostssd: {vms_by_az: {a: 1, b: 1, c: 0}, flavor: c3.large, auto_volumes: [{device_name: "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
# #hosthdd: {vms_by_az: {a: 1, b: 1, c: 0}, flavor: h1.2xlarge, auto_volumes: [{device_name: "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
# aws_access_key: ""
# aws_secret_key: ""
# vpc_name: "test{{buildenv}}"
@@ -89,9 +89,9 @@ cluster_name: "{{app_name}}-{{buildenv}}" # Identifies the cluster within
# region: &region "europe-west1"
# dns_zone_internal: "c.{{gcp_credentials_json.project_id}}.internal"
# dns_zone_external: "{%- if dns_tld_external -%}{{_cloud_type}}-{{_region}}.{{app_class}}.{{buildenv}}.{{dns_tld_external}} {%- endif -%}"
# dns_server: "" # Specify DNS server. nsupdate, route53 or clouddns. If empty string is specified, no DNS will be added.
# dns_server: "" # Specify DNS server. nsupdate, route53 or clouddns. If empty string is specified, no DNS will be added.
# assign_public_ip: "yes"
# inventory_ip: "public" # 'public' or 'private', (private in case we're operating in a private LAN). If public, 'assign_public_ip' must be 'yes'
# inventory_ip: "public" # 'public' or 'private', (private in case we're operating in a private LAN). If public, 'assign_public_ip' must be 'yes'
# project_id: "{{gcp_credentials_json.project_id}}"
# ip_forward: "false"
# ssh_guard_whitelist: &ssh_guard_whitelist ['10.0.0.0/8'] # Put your public-facing IPs into this (if you're going to access it via public IP), to avoid rate-limiting.
@@ -115,7 +115,7 @@ cluster_name: "{{app_name}}-{{buildenv}}" # Identifies the cluster within
# #sysdisks: {vms_by_az: {b: 1, c: 1, d: 1}, flavor: f1-micro, rootvol_size: "10", auto_volumes: [{auto_delete: true, interface: "SCSI", volume_size: 2, mountpoint: "/var/log/mysvc", fstype: "ext4", perms: {owner: "root", group: "sudo", mode: "775"}}, {auto_delete: true, interface: "SCSI", volume_size: 2, mountpoint: "/var/log/mysvc2", fstype: "ext4"}, {auto_delete: true, interface: "SCSI", volume_size: 3, mountpoint: "/var/log/mysvc3", fstype: "ext4"}]}
# vpc_network_name: "test-{{buildenv}}"
# vpc_subnet_name: ""
# preemptible: "false"
# preemptible: "no"
# deletion_protection: "no"
#_cloud_type: *cloud_type
#_region: *region
10 changes: 4 additions & 6 deletions EXAMPLE/group_vars/test_aws_euw1/cluster_vars.yml
@@ -3,7 +3,7 @@
app_name: "test" # The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name.
app_class: "test" # The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn

dns_tld_external: "" # Top-level domain for external access. Leave blank if no external DNS (use IPs only)
dns_tld_external: "" # Top-level domain for external access. gcloud dns needs a trailing '.'. Leave blank if no external DNS (use IPs only)

## Vulnerability scanners - Tenable and/ or Qualys cloud agents:
cloud_agent:
@@ -47,21 +47,19 @@ cluster_vars:
secgroup_new:
- proto: "tcp"
ports: ["22"]
cidr_ip: 0.0.0.0/0
cidr_ip: "0.0.0.0/0"
rule_desc: "SSH Access"
- proto: "tcp"
ports: ["{{ prometheus_node_exporter_port | default(9100) }}"]
group_name: ["{{buildenv}}-private-sg"]
group_desc: "Access from all VMs attached to {{buildenv}}-private-sg"
rule_desc: "Prometheus instances attached to {{buildenv}}-private-sg can access the exporter port(s)."
- proto: all
group_name: ["{{cluster_name}}-sg"]
group_desc: "{{ cluster_name }} rules"
rule_desc: "Access from all VMs attached to the {{ cluster_name }}-sg group"
sandbox:
hosttype_vars:
sys: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3.nano, auto_volumes: []}
# sysdisks: {vms_by_az: {a: 1, b: 0, c: 0}, flavor: t3.nano, auto_volumes: [{"device_name": "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true, perms: {owner: "root", group: "sudo", mode: "775"} }, {"device_name": "/dev/sdc", mountpoint: "/var/log/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 3, ephemeral: False, encrypted: True, "delete_on_termination": true}, {"device_name": "/dev/sdd", mountpoint: "/var/log/mysvc3", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
sys: {vms_by_az: {a: 1, b: 1, c: 1}, flavor: t3a.nano, auto_volumes: []}
# sysdisks: {vms_by_az: {a: 1, b: 0, c: 0}, flavor: t3a.nano, auto_volumes: [{"device_name": "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true, perms: {owner: "root", group: "sudo", mode: "775"} }, {"device_name": "/dev/sdc", mountpoint: "/var/log/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 3, ephemeral: False, encrypted: True, "delete_on_termination": true}, {"device_name": "/dev/sdd", mountpoint: "/var/log/mysvc3", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}
# hostnvme_multi: {vms_by_az: {a: 1, b: 0, c: 0}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc2", fstype: ext4, volume_size: 2500}]} }
# hostnvme_lvm: {vms_by_az: {a: 1, b: 0, c: 0}, flavor: i3en.2xlarge, auto_volumes: [], nvme: {volumes: [{mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}, {mountpoint: "/var/log/mysvc", fstype: ext4, volume_size: 2500}], lvmparams: {vg_name: "vg0", lv_name: "lv0", lv_size: "+100%FREE"} } }
# hostssd: {vms_by_az: {a: 1, b: 0, c: 0}, flavor: c3.large, auto_volumes: [{device_name: "/dev/sdb", mountpoint: "/var/log/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 2, ephemeral: False, encrypted: True, "delete_on_termination": true}]}