Check for secrets
miguelhar committed Oct 16, 2024
1 parent 4107afb commit 086d491
Showing 5 changed files with 61 additions and 4 deletions.
2 changes: 1 addition & 1 deletion modules/infra/README.md
@@ -64,7 +64,7 @@
| <a name="input_network"></a> [network](#input\_network) | vpc = {<br> id = Existing vpc id, it will bypass creation by this module.<br> subnets = {<br> private = Existing private subnets.<br> public = Existing public subnets.<br> pod = Existing pod subnets.<br> }), {})<br> }), {})<br> network\_bits = {<br> public = Number of network bits to allocate to the public subnet. i.e /27 -> 32 IPs.<br> private = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br> pod = Number of network bits to allocate to the private subnet. i.e /19 -> 8,192 IPs.<br> }<br> cidrs = {<br> vpc = The IPv4 CIDR block for the VPC.<br> pod = The IPv4 CIDR block for the Pod subnets.<br> }<br> use\_pod\_cidr = Use additional pod CIDR range (ie 100.64.0.0/16) for pod networking. | <pre>object({<br> vpc = optional(object({<br> id = optional(string, null)<br> subnets = optional(object({<br> private = optional(list(string), [])<br> public = optional(list(string), [])<br> pod = optional(list(string), [])<br> }), {})<br> }), {})<br> network_bits = optional(object({<br> public = optional(number, 27)<br> private = optional(number, 19)<br> pod = optional(number, 19)<br> }<br> ), {})<br> cidrs = optional(object({<br> vpc = optional(string, "10.0.0.0/16")<br> pod = optional(string, "100.64.0.0/16")<br> }), {})<br> use_pod_cidr = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_region"></a> [region](#input\_region) | AWS region for the deployment | `string` | n/a | yes |
| <a name="input_ssh_pvt_key_path"></a> [ssh\_pvt\_key\_path](#input\_ssh\_pvt\_key\_path) | SSH private key filepath. | `string` | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> filesystem\_type = File system type(netapp\|efs)<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> netapp = {<br> migrate\_from\_efs = {<br> enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.<br> datasync = {<br> enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.<br> schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).<br> }<br> }<br> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br> storage\_capacity = Filesystem Storage capacity<br> throughput\_capacity = Filesystem throughput capacity<br> automatic\_backup\_retention\_days = How many days to keep backups<br> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br><br> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br> enabled = Enable automatic storage capacity increase.<br> threshold = Used storage capacity threshold.<br> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br> notification\_email\_address = The email address for alarm notification.<br> }))<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br> }<br> } | <pre>object({<br> filesystem_type = optional(string, "efs")<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> netapp = optional(object({<br> migrate_from_efs = optional(object({<br> enabled = optional(bool, false)<br> datasync = optional(object({<br> enabled = optional(bool, false)<br> schedule = optional(string, "cron(0 * * * ? 
*)")<br> }), {})<br> }), {})<br> deployment_type = optional(string, "SINGLE_AZ_1")<br> storage_capacity = optional(number, 1024)<br> throughput_capacity = optional(number, 128)<br> automatic_backup_retention_days = optional(number, 90)<br> daily_automatic_backup_start_time = optional(string, "00:00")<br> storage_capacity_autosizing = optional(object({<br> enabled = optional(bool, false)<br> threshold = optional(number, 70)<br> percent_capacity_increase = optional(number, 30)<br> notification_email_address = optional(string, "")<br> }), {})<br> volume = optional(object({<br> create = optional(bool, true)<br> name_suffix = optional(string, "domino_shared_storage")<br> storage_efficiency_enabled = optional(bool, true)<br> junction_path = optional(string, "/domino")<br> size_in_megabytes = optional(number, 1099511)<br> }), {})<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {}),<br> enable_remote_backup = optional(bool, false)<br> costs_enabled = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> filesystem\_type = File system type(netapp\|efs)<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> netapp = {<br> migrate\_from\_efs = {<br> enabled = When enabled, both EFS and NetApp resources will be provisioned simultaneously during the migration period.<br> datasync = {<br> enabled = Toggle to enable AWS DataSync for automated data transfer from EFS to NetApp FSx.<br> schedule = Cron-style schedule for the DataSync task, specifying how often the data transfer will occur (default: hourly).<br> }<br> }<br> deployment\_type = netapp ontap deployment type,('MULTI\_AZ\_1', 'MULTI\_AZ\_2', 'SINGLE\_AZ\_1', 'SINGLE\_AZ\_2')<br> storage\_capacity = Filesystem Storage capacity<br> throughput\_capacity = Filesystem throughput capacity<br> automatic\_backup\_retention\_days = How many days to keep backups<br> daily\_automatic\_backup\_start\_time = Start time in 'HH:MM' format to initiate backups<br><br> storage\_capacity\_autosizing = Options for the FXN automatic storage capacity increase, cloudformation template<br> enabled = Enable automatic storage capacity increase.<br> threshold = Used storage capacity threshold.<br> percent\_capacity\_increase = The percentage increase in storage capacity when used storage exceeds<br> LowFreeDataStorageCapacityThreshold. Minimum increase is 10 %.<br> notification\_email\_address = The email address for alarm notification.<br> }))<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br> }<br> } | <pre>object({<br> filesystem_type = optional(string, "efs")<br> efs = optional(object({<br> access_point_path = optional(string, "/domino")<br> backup_vault = optional(object({<br> create = optional(bool, true)<br> force_destroy = optional(bool, true)<br> backup = optional(object({<br> schedule = optional(string, "0 12 * * ? *")<br> cold_storage_after = optional(number, 35)<br> delete_after = optional(number, 125)<br> }), {})<br> }), {})<br> }), {})<br> netapp = optional(object({<br> migrate_from_efs = optional(object({<br> enabled = optional(bool, false)<br> datasync = optional(object({<br> enabled = optional(bool, false)<br> schedule = optional(string, "cron(0 * * * ? 
*)")<br> }), {})<br> }), {})<br> deployment_type = optional(string, "SINGLE_AZ_1")<br> storage_capacity = optional(number, 1024)<br> throughput_capacity = optional(number, 128)<br> automatic_backup_retention_days = optional(number, 90)<br> daily_automatic_backup_start_time = optional(string, "00:00")<br> storage_capacity_autosizing = optional(object({<br> enabled = optional(bool, false)<br> threshold = optional(number, 70)<br> percent_capacity_increase = optional(number, 30)<br> notification_email_address = optional(string, "")<br> }), {})<br> volume = optional(object({<br> create = optional(bool, true)<br> name_suffix = optional(string, "domino_shared_storage")<br> storage_efficiency_enabled = optional(bool, true)<br> junction_path = optional(string, "/domino")<br> size_in_megabytes = optional(number, 11094880)<br> }), {})<br> }), {})<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {})<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool, true)<br> }), {}),<br> enable_remote_backup = optional(bool, false)<br> costs_enabled = optional(bool, true)<br> })</pre> | `{}` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Deployment tags. | `map(string)` | `{}` | no |
| <a name="input_use_fips_endpoint"></a> [use\_fips\_endpoint](#input\_use\_fips\_endpoint) | Use aws FIPS endpoints | `bool` | `false` | no |
| <a name="input_vpn_connections"></a> [vpn\_connections](#input\_vpn\_connections) | create = Create a VPN connection.<br> connections = List of VPN connections, each with:<br> - name: Name for identification (optional).<br> - shared\_ip: Customer's shared IP Address (optional).<br> - cidr\_block: CIDR block for the customer's network (optional). | <pre>object({<br> create = optional(bool, false)<br> connections = optional(list(object({<br> name = optional(string, "")<br> shared_ip = optional(string, "")<br> cidr_blocks = optional(list(string), [])<br> })), [])<br> })</pre> | `{}` | no |
1 change: 1 addition & 0 deletions modules/infra/submodules/storage/README.md
@@ -69,6 +69,7 @@ No modules.
| [terraform_data.check_backup_role](https://registry.terraform.io/providers/hashicorp/terraform/latest/docs/resources/data) | resource |
| [terraform_data.pull_through_cache_deletion](https://registry.terraform.io/providers/hashicorp/terraform/latest/docs/resources/data) | resource |
| [terraform_data.set_monitoring_private_acl](https://registry.terraform.io/providers/hashicorp/terraform/latest/docs/resources/data) | resource |
| [terraform_data.wait_for_secrets](https://registry.terraform.io/providers/hashicorp/terraform/latest/docs/resources/data) | resource |
| [aws_caller_identity.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
| [aws_elb_service_account.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/elb_service_account) | data source |
| [aws_iam_policy.aws_backup_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy) | data source |
6 changes: 6 additions & 0 deletions modules/infra/submodules/storage/datasync.tf
@@ -92,6 +92,12 @@ resource "aws_datasync_task" "efs_to_netapp_migration" {
  source_location_arn      = aws_datasync_location_efs.this[0].arn
  destination_location_arn = aws_datasync_location_fsx_ontap_file_system.this[0].arn

  # Do not preserve POSIX permissions, UID, or GID on transferred files.
  options {
    posix_permissions = "NONE"
    gid               = "NONE"
    uid               = "NONE"
  }

  schedule {
    schedule_expression = var.storage.netapp.migrate_from_efs.datasync.schedule
  }
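The DataSync schedule above is driven entirely by the `storage` input documented in the README; as an illustrative sketch (values are not from this commit), a caller wanting a slower cadence than the hourly default could pass:

storage = {
  netapp = {
    migrate_from_efs = {
      enabled = true
      datasync = {
        enabled  = true
        schedule = "cron(0 */6 * * ? *)" # every 6 hours instead of hourly
      }
    }
  }
}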
54 changes: 52 additions & 2 deletions modules/infra/submodules/storage/netapp.tf
@@ -30,7 +30,7 @@ resource "aws_security_group_rule" "netapp_outbound" {

locals {
  netapp_ontap_components_user = local.deploy_netapp ? {
    filesystem = "netappadmin"
    filesystem = "fsxadmin" # fsxadmin is the built-in FSx for ONTAP file-system admin user; vsadmin administers the SVM
    svm        = "vsadmin"
  } : {}
}
@@ -61,10 +61,60 @@
}


locals {
  # e.g. for deploy_id = "mydeploy": ["mydeploy-netapp-ontap-filesystem", "mydeploy-netapp-ontap-svm"]
  required_secret_names = [for key in keys(local.netapp_ontap_components_user) : "${var.deploy_id}-netapp-ontap-${key}"]
}


## Mitigate Secrets Manager propagation delay: "Error: reading Secrets Manager Secret Version ... couldn't find resource"

resource "terraform_data" "wait_for_secrets" {
provisioner "local-exec" {
command = <<-EOF
set -x -o pipefail
sleep_duration=10
max_retries=30 # 5 minutes with 10-second intervals
required_secrets=(${join(" ", local.required_secret_names)})
check_secrets() {
secrets=$(aws secretsmanager list-secrets --region ${var.region} --query 'SecretList[?starts_with(Name, `${var.deploy_id}`)].Name' --output text)
for secret in "$${required_secrets[@]}"; do
if ! grep -q "$secret" <<< "$secrets"; then
return 1
fi
done
return 0
}
for i in $(seq 1 $max_retries); do
if check_secrets; then
exit 0
fi
echo "Waiting for secrets... attempt $i"
sleep "$sleep_duration"
done
echo "Timed out waiting for secrets."
exit 1
EOF
interpreter = ["bash", "-c"]
environment = {
AWS_USE_FIPS_ENDPOINT = tostring(var.use_fips_endpoint)
}
}

depends_on = [aws_secretsmanager_secret.netapp]
}
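# A blunter alternative (sketch only, not part of this module) is a fixed delay
# via the hashicorp/time provider; it is simpler but, unlike the polling loop
# above, cannot confirm that the secrets actually exist before proceeding:
#
# resource "time_sleep" "wait_for_secrets" {
#   create_duration = "60s"
#   depends_on      = [aws_secretsmanager_secret_version.netapp]
# }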


data "aws_secretsmanager_secret_version" "netapp_creds" {
for_each = local.netapp_ontap_components_user
secret_id = aws_secretsmanager_secret.netapp[each.key].id
depends_on = [aws_secretsmanager_secret_version.netapp]
depends_on = [terraform_data.wait_for_secrets]
}
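Downstream code would read the credentials through this data source. Assuming the stored secret_string is a JSON object with a password key (a hypothetical shape, not shown in this diff), consumption might look like:

locals {
  # "password" is a hypothetical key; adjust to the actual secret payload.
  fsx_admin_password = jsondecode(data.aws_secretsmanager_secret_version.netapp_creds["filesystem"].secret_string)["password"]
}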


2 changes: 1 addition & 1 deletion modules/infra/variables.tf
@@ -411,7 +411,7 @@ variable "storage" {
      name_suffix                = optional(string, "domino_shared_storage")
      storage_efficiency_enabled = optional(bool, true)
      junction_path              = optional(string, "/domino")
      size_in_megabytes          = optional(number, 1099511)
      size_in_megabytes          = optional(number, 11094880)
    }), {})
  }), {})
  s3 = optional(object({
Expand Down
