enos: use on-demand targets (#21459) (#21464)
Add an updated `target_ec2_instances` module that is capable of
dynamically splitting target instances across the subnets and
availability zones that are compatible with the AMI architecture and
the instance type associated with that architecture. Use the
`target_ec2_instances` module where necessary. Ensure that `raft`
storage scenarios don't provision unnecessary backend infrastructure
by adding a new `target_ec2_shim` module.
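
For illustration, here is a minimal sketch of how that kind of
subnet/AZ selection can be done with plain ec2:RunInstances
resources. The variable names and structure are assumptions for the
example, not the module's actual contents: it filters the region's
availability zones down to those that actually offer the chosen
instance type, then spreads the instances round-robin over the
subnets in those zones.

    data "aws_ec2_instance_type_offerings" "in_region" {
      location_type = "availability-zone"

      filter {
        name   = "instance-type"
        values = [var.instance_type] # e.g. "t4g.medium" for an arm64 AMI
      }
    }

    data "aws_subnets" "compatible" {
      filter {
        name   = "vpc-id"
        values = [var.vpc_id]
      }

      filter {
        name   = "availability-zone"
        values = data.aws_ec2_instance_type_offerings.in_region.locations
      }
    }

    resource "aws_instance" "targets" {
      count         = var.instance_count
      ami           = var.ami_id
      instance_type = var.instance_type
      key_name      = var.ssh_keypair

      # element() wraps around, spreading targets round-robin over the
      # subnets/AZs that can actually host this instance type.
      subnet_id = element(tolist(data.aws_subnets.compatible.ids), count.index)

      tags = merge(var.common_tags, { Name = "${var.project_name}-target-${count.index}" })
    }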

After a lot of trial and error, the state of EC2 spot instance
capacity, its associated APIs, and the AWS Terraform provider's
current support for the different fleet types have proven to make
spot instances too unreliable for scenario targets.

The current state of each method:
* `target_ec2_fleet`: unusable because the `instant` fleet type does
  not guarantee fulfillment of either `spot` or `on-demand` instance
  requests. The module does support both `on-demand` and `spot`
  request types and is capable of bidding across a maximum of four
  availability zones, which would make it an attractive choice if the
  `instant` type always fulfilled requests (a minimal sketch follows
  this list). Perhaps a `request` type with a `wait_for_fulfillment`
  option, as `aws_spot_fleet_request` has, would make it viable for
  future consideration.
* `target_ec2_spot_fleet`: more reliable when bidding for target
  instances that have capacity in the chosen zone. Issues in the AWS
  provider prevent us from bidding across multiple zones successfully.
  Over the last 2-3 months, capacity for the instance types we'd
  prefer to use has dropped dramatically, and spot prices are near or
  at on-demand. That volatility for nearly no cost savings means we
  should shelve this option for now.
* `target_ec2_instances`: the most reliable method we've got. It is
  now capable of automatically determining which subnets and
  availability zones to provision targets in, and it has been updated
  to be usable for both Vault and Consul targets. By default we use
  the cheapest medium instance types that we've found to be reliable
  for testing Vault.
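
For context on the `instant` fleet type mentioned above, a minimal
`aws_ec2_fleet` configuration looks roughly like the following. The
launch template and variable names are illustrative, not the module's
actual configuration:

    resource "aws_launch_template" "target" {
      name_prefix   = "${var.project_name}-target-"
      image_id      = var.ami_id
      instance_type = var.instance_type
      key_name      = var.ssh_keypair
    }

    resource "aws_ec2_fleet" "targets" {
      # "instant" returns synchronously, but fulfillment of the requested
      # capacity is not guaranteed for either spot or on-demand.
      type = "instant"

      launch_template_config {
        launch_template_specification {
          launch_template_id = aws_launch_template.target.id
          version            = aws_launch_template.target.latest_version
        }
      }

      target_capacity_specification {
        default_target_capacity_type = var.capacity_type # "on-demand" or "spot"
        total_target_capacity        = var.instance_count
      }
    }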

* Update .gitignore
* enos/modules/create_vpc: create a subnet for every availability zone
* enos/modules/target_ec2_fleet: bid across the maximum of four
  availability zones for targets
* enos/modules/target_ec2_spot_fleet: attempt to make the spot fleet bid
  across more availability zones for targets
* enos/modules/target_ec2_instances: create module to use
  ec2:RunInstances for scenario targets
* enos/modules/target_ec2_shim: create shim module to satisfy the
  target module interface
* enos/scenarios: use target_ec2_shim for backend targets on raft
  storage scenarios (a sketch of the scenario wiring follows this
  list)
* enos/modules/az_finder: remove unused module
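
The shim is wired in at the scenario level by switching the backend
target step's module on the storage backend matrix variant, as in the
scenario diffs below:

    step "create_vault_cluster_backend_targets" {
      # Raft scenarios don't need real backend instances, so the shim
      # satisfies the target module interface without provisioning anything.
      module     = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
      depends_on = [step.create_vpc]

      providers = {
        enos = provider.enos.ubuntu
      }
    }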

Signed-off-by: Ryan Cragun <[email protected]>
Co-authored-by: Ryan Cragun <[email protected]>
1 parent 4b00b33 commit 9bd7bba
Showing 18 changed files with 511 additions and 117 deletions.
19 changes: 7 additions & 12 deletions .gitignore
@@ -60,18 +60,13 @@ Vagrantfile
!enos/**/*.hcl

# Enos
enos/.enos
enos/enos-local.vars.hcl
enos/support
# Enos local Terraform files
enos/.terraform/*
enos/.terraform.lock.hcl
enos/*.tfstate
enos/*.tfstate.*
enos/**/.terraform/*
enos/**/.terraform.lock.hcl
enos/**/*.tfstate
enos/**/*.tfstate.*
.enos
enos-local.vars.hcl
enos/**/support
enos/**/kubeconfig
.terraform
.terraform.lock.hcl
.tfstate.*

.DS_Store
.idea
39 changes: 26 additions & 13 deletions enos/enos-modules.hcl
@@ -61,27 +61,40 @@ module "shutdown_multiple_nodes" {
source = "./modules/shutdown_multiple_nodes"
}

# create target instances using ec2:CreateFleet
module "target_ec2_fleet" {
source = "./modules/target_ec2_fleet"

capacity_type = "on-demand" // or "spot", use on-demand until we can stabilize spot fleets
common_tags = var.tags
instance_mem_min = 4096
instance_cpu_min = 2
max_price = "0.1432" // On-demand cost for RHEL amd64 on t3.medium in us-east
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
common_tags = var.tags
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
}

# create target instances using ec2:RunInstances
module "target_ec2_instances" {
source = "./modules/target_ec2_instances"

common_tags = var.tags
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
}

# don't create instances but satisfy the module interface
module "target_ec2_shim" {
source = "./modules/target_ec2_shim"

common_tags = var.tags
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
}

# create target instances using ec2:RequestSpotFleet
module "target_ec2_spot_fleet" {
source = "./modules/target_ec2_spot_fleet"

common_tags = var.tags
instance_mem_min = 4096
instance_cpu_min = 2
max_price = "0.1432" // On-demand cost for RHEL amd64 on t3.medium in us-east
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
common_tags = var.tags
project_name = var.project_name
ssh_keypair = var.aws_ssh_keypair_name
}

module "vault_agent" {
2 changes: 1 addition & 1 deletion enos/enos-scenario-agent.hcl
@@ -93,7 +93,7 @@ scenario "agent" {
}

step "create_vault_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
4 changes: 2 additions & 2 deletions enos/enos-scenario-autopilot.hcl
@@ -104,7 +104,7 @@ scenario "autopilot" {
}

step "create_vault_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
@@ -197,7 +197,7 @@ scenario "autopilot" {
}

step "create_vault_cluster_upgrade_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
14 changes: 7 additions & 7 deletions enos/enos-scenario-replication.hcl
@@ -113,7 +113,7 @@ scenario "replication" {

# Create all of our instances for both primary and secondary clusters
step "create_primary_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [
step.create_vpc,
]
@@ -132,7 +132,7 @@ scenario "replication" {
}

step "create_primary_cluster_backend_targets" {
module = module.target_ec2_spot_fleet
module = matrix.primary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [
step.create_vpc,
]
@@ -142,7 +142,7 @@ scenario "replication" {
}

variables {
ami_id = step.ec2_info.ami_ids["amd64"]["ubuntu"]["22.04"]
ami_id = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
awskms_unseal_key_arn = step.create_vpc.kms_key_arn
cluster_tag_key = local.backend_tag_key
common_tags = local.tags
@@ -151,7 +151,7 @@ scenario "replication" {
}

step "create_primary_cluster_additional_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [
step.create_vpc,
step.create_primary_cluster_targets,
@@ -172,7 +172,7 @@ scenario "replication" {
}

step "create_secondary_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
@@ -189,15 +189,15 @@ scenario "replication" {
}

step "create_secondary_cluster_backend_targets" {
module = module.target_ec2_spot_fleet
module = matrix.secondary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]

providers = {
enos = provider.enos.ubuntu
}

variables {
ami_id = step.ec2_info.ami_ids["amd64"]["ubuntu"]["22.04"]
ami_id = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
awskms_unseal_key_arn = step.create_vpc.kms_key_arn
cluster_tag_key = local.backend_tag_key
common_tags = local.tags
6 changes: 3 additions & 3 deletions enos/enos-scenario-smoke.hcl
@@ -114,7 +114,7 @@ scenario "smoke" {
}

step "create_vault_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
@@ -131,15 +131,15 @@ scenario "smoke" {
}

step "create_vault_cluster_backend_targets" {
module = module.target_ec2_spot_fleet
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]

providers = {
enos = provider.enos.ubuntu
}

variables {
ami_id = step.ec2_info.ami_ids["amd64"]["ubuntu"]["22.04"]
ami_id = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
awskms_unseal_key_arn = step.create_vpc.kms_key_arn
cluster_tag_key = local.backend_tag_key
common_tags = local.tags
6 changes: 3 additions & 3 deletions enos/enos-scenario-ui.hcl
@@ -81,7 +81,7 @@ scenario "ui" {
}

step "create_vault_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
@@ -98,15 +98,15 @@ scenario "ui" {
}

step "create_vault_cluster_backend_targets" {
module = module.target_ec2_spot_fleet
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]

providers = {
enos = provider.enos.ubuntu
}

variables {
ami_id = step.ec2_info.ami_ids["amd64"]["ubuntu"]["22.04"]
ami_id = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
awskms_unseal_key_arn = step.create_vpc.kms_key_arn
cluster_tag_key = local.backend_tag_key
common_tags = local.tags
6 changes: 3 additions & 3 deletions enos/enos-scenario-upgrade.hcl
@@ -109,7 +109,7 @@ scenario "upgrade" {
}

step "create_vault_cluster_targets" {
module = module.target_ec2_spot_fleet
module = module.target_ec2_instances
depends_on = [step.create_vpc]

providers = {
@@ -126,15 +126,15 @@ scenario "upgrade" {
}

step "create_vault_cluster_backend_targets" {
module = module.target_ec2_spot_fleet
module = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
depends_on = [step.create_vpc]

providers = {
enos = provider.enos.ubuntu
}

variables {
ami_id = step.ec2_info.ami_ids["amd64"]["ubuntu"]["22.04"]
ami_id = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
awskms_unseal_key_arn = step.create_vpc.kms_key_arn
cluster_tag_key = local.backend_tag_key
common_tags = local.tags
28 changes: 0 additions & 28 deletions enos/modules/az_finder/main.tf

This file was deleted.

15 changes: 12 additions & 3 deletions enos/modules/create_vpc/main.tf
@@ -1,4 +1,11 @@
data "aws_region" "current" {}
data "aws_availability_zones" "available" {
state = "available"

filter {
name = "zone-name"
values = ["*"]
}
}

resource "random_string" "cluster_id" {
length = 8
@@ -34,14 +41,16 @@ resource "aws_vpc" "vpc" {
}

resource "aws_subnet" "subnet" {
count = length(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.vpc.id
cidr_block = var.cidr
cidr_block = cidrsubnet(var.cidr, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true

tags = merge(
var.common_tags,
{
"Name" = "${var.name}-subnet"
"Name" = "${var.name}-subnet-${data.aws_availability_zones.available.names[count.index]}"
},
)
}
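
For reference, the `cidrsubnet` call above extends the VPC prefix by
8 bits so that each availability zone gets its own smaller block,
e.g. with an illustrative /16 VPC CIDR:

    # Assuming var.cidr = "10.13.0.0/16" (illustrative value, not necessarily the default):
    # cidrsubnet("10.13.0.0/16", 8, 0) = "10.13.0.0/24"  # first AZ
    # cidrsubnet("10.13.0.0/16", 8, 1) = "10.13.1.0/24"  # second AZ
    # cidrsubnet("10.13.0.0/16", 8, 2) = "10.13.2.0/24"  # third AZ
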
5 changes: 0 additions & 5 deletions enos/modules/create_vpc/outputs.tf
@@ -1,8 +1,3 @@
output "aws_region" {
description = "AWS Region for resources"
value = data.aws_region.current.name
}

output "vpc_id" {
description = "Created VPC ID"
value = aws_vpc.vpc.id