The securitymetrics.org infrastructure contains three components:
- The www.securitymetrics.org website is generated by the Octopress static site generator and hosted on a legacy web host. It contains 100% static HTML and does not need a web application server. That project is found in the securitymetrics repo.
- The archive.securitymetrics.org server hosts the mailing list's archive. The archive runs on Mailman 3's HyperKitty web application. It runs in an Amazon Web Services virtual private cloud (VPC).
- The mail.securitymetrics.org server hosts the Mailman 3 listserv. It runs on the same server as the archive, in an Amazon VPC.
Three tools work in tandem to build the environment. Packer builds the host images. Terraform creates the networking, storage, and host infrastructure from a templated 'blueprint.' Ansible configures new hosts after they are created. Terraform and Ansible share a common set of YAML-formatted configuration variables stored in `env_vars/`.
Note: the AWS instance of the securitymetrics archive is pre-production. A current "staging" version is running on my personal blog domain. It is not guaranteed to be stable.
Hosts are created in two ways depending on whether the host is used for development or production. Vagrant creates dev machines; Terraform creates production.
Vagrant provisions a local virtual machine running on the developer machine. The command `vagrant up` pulls a VirtualBox image running a current version of Alpine Linux. Vagrant configures a public network on the VM using the `en0: Wi-Fi (AirPort)` adapter in `bridge` mode. The dev machine's name is `devbox`. It is assigned to the Ansible host group `dev` as Vagrant provisions it. Details are contained in the top-level file `Vagrantfile`.
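A minimal `Vagrantfile` expressing this behavior might look like the sketch below. The box name is an assumption; the bridged interface, hostname, host group and playbook come from the description above.

```ruby
# Vagrantfile (sketch): box name "generic/alpine310" is an assumption
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine310"              # a current Alpine Linux box
  config.vm.hostname = "devbox"
  # Bridge onto the developer's Wi-Fi adapter as a public network
  config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.groups = { "dev" => ["default"] }      # place the VM in the dev host group
  end
end
```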
After initial provisioning is done, Vagrant runs the Ansible playbook `playbook.yml` to configure the host (see the Configuration section below).
The Terraform configuration file `main.tf` describes how to create production environments in Amazon Web Services. The Terraform execution plan will create a designated environment if it does not already exist.

The Alpine Amazon Machine Images (AMIs) used for EC2 nodes are based on a current version of Alpine Linux. Exactly one machine for each fully-qualified domain name is created. The AWS Virtual Private Cloud the node is placed into is configurable and is assumed to already exist; the VPC is not provisioned by Terraform. An Elastic IP is created if necessary and assigned to Internet-facing nodes, and matching DNS A records are placed into the top-level DNS zone (e.g., securitymetrics.org) managed by AWS Route 53.
The Terraform plan ensures that exactly one EC2 host with each of the names www._public_domain_ and mail._public_domain_ in a given environment is created, along with an associated security group, Elastic IP and DNS record. The AWS Name and Environment tags uniquely identify the single instance of each EC2 node, security group, Elastic IP address, DNS A record and DNS MX record. The plan is executed in the project root directory, with the environment variables passed as a parameter:
terraform apply
...where the current Terraform workspace furnishes the name of the environment. If the current environment is called `tf`, for example, the Environment tag for all resources will also be `tf`. The YAML file env_vars/_environment_/main.yml contains environment-specific settings such as server and domain names. The YAML file `env_vars/default/main.yml` contains default settings.
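For illustration, a defaults file might look like the sketch below. The variable names appear elsewhere in this document; the values are placeholders.

```yaml
# env_vars/default/main.yml (sketch): values are placeholders
public_domain: securitymetrics.org
ec2_env: default
ec2_instance_ami: ami-0123456789abcdef0   # Alpine AMI built with Packer (placeholder)
ec2_iam_role: cloudwatch-agent            # assumed role name
ec2_ssh_key: ~/.ssh/id_rsa.pub            # resides in the developer's home directory
aws_vpc_subnet_id: subnet-0123456789abcdef0
```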
The Terraform plan:
- Uploads the SSH key `ec2_ssh_key` to EC2. These keys reside outside the project, in the developer's home directory.
- Creates an Elastic File System (EFS) and mount point, in the subnet `aws_vpc_subnet_id`, using the creation token _environment_-nfs to ensure only one is created. It attaches an AWS security group named _environment_-nfs that allows inbound NFS traffic (port 2049) from any IP address in the private VPC subnet, and allows all outbound traffic to any IP address in the private VPC subnet. The EFS resource and security group are both tagged with the Name _environment_-nfs and the Environment _environment_.
- Creates an AWS security group, in the subnet `aws_vpc_subnet_id`, for each port opened to the Internet: 22 (SSH), 25 (SMTP), and 80/443 (HTTP/HTTPS). The groups are named _environment_-ssh, _environment_-smtp, and _environment_-https respectively. HTTP and HTTPS are combined in the same security group. Each security group allows traffic to or from any IP address. Each group is tagged with the Environment _environment_.
- Creates an EC2 node named www._public_domain_, in the VPC subnet `aws_vpc_subnet_id`, using the AMI `ec2_instance_ami` as the source. This AMI is a current version of Alpine Linux. The plan does not assign a public IP address, but enables monitoring, and sets the Name and Environment tags to www._public_domain_ and `ec2_env`, respectively. It assigns the IAM instance profile role `ec2_iam_role`.
- Creates an EC2 Elastic IP, if it does not already exist, setting the Name and Environment tags to www._public_domain_ and `ec2_env`, respectively.
- Associates the EC2 Elastic IP with the EC2 node.
- Registers a DNS A record with Amazon Route 53 in the `public_domain` hosted DNS zone, with the record's name set to www._public_domain_ and the value set to the Elastic IP address (the Elastic IP and A record are sketched after this list).
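A rough sketch of the Elastic IP and A-record portion of the plan follows; the resource and variable names are illustrative, not necessarily those used in the actual `main.tf`.

```hcl
# Sketch: Elastic IP plus Route 53 A record (resource names assumed)
data "aws_route53_zone" "public" {
  name = var.public_domain
}

resource "aws_eip" "www" {
  instance = aws_instance.www.id
  vpc      = true
  tags = {
    Name        = "www.${var.public_domain}"
    Environment = terraform.workspace        # e.g., "tf"
  }
}

resource "aws_route53_record" "www" {
  zone_id = data.aws_route53_zone.public.zone_id
  name    = "www.${var.public_domain}"
  type    = "A"
  ttl     = 300
  records = [aws_eip.www.public_ip]
}
```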
After Terraform creates the EC2 node, the Ansible playbook `playbook.yml` configures it (see the next section). Integration with Ansible is achieved as follows: the Terraform configuration declares a `local-exec` provisioner that runs the `ansible-playbook` command to execute a playbook (default: `playbook.yml`).
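A minimal sketch of this handoff, assuming a `null_resource` carries the provisioner:

```hcl
# Sketch: run Ansible against the new node once Terraform finishes
resource "null_resource" "configure" {
  depends_on = [aws_instance.www]
  provisioner "local-exec" {
    command = "ansible-playbook -i hosts_ec2.yml playbook.yml"
  }
}
```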
As part of the command options, Terraform specifies a dynamic inventory file (default: `hosts_ec2.yml`) that retrieves metadata about all EC2 instances from AWS, using the AWS Environment tag to group hosts. For example, if an EC2 instance has the Environment tag `staging`, it is grouped in Ansible into the `staging` group.
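A minimal `hosts_ec2.yml` that produces this grouping might look like the following; the plugin options shown are typical for Ansible's `aws_ec2` inventory plugin, though the project's actual file may differ.

```yaml
# hosts_ec2.yml (sketch): group EC2 instances by their Environment tag
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # An instance tagged Environment=staging lands in the "staging" group
  - key: tags.Environment
    separator: ""
```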
To narrow down the instances to configure, Ansible reads the contents of `.terraform/environment` to determine the current environment in use; it will configure only EC2 instances from this environment.
The production site is not running yet.
The Ansible playbook `playbook.yml` configures the dev, test and production machines in three steps. The playbook:
- Bootstraps Ansible by installing Python onto the machine. Because the machine runs Alpine Linux, Python is not installed by default. To do this, we suppress Ansible's initial fact-gathering and then run `apk` to add the `python3` package if it is not already installed. After Python is installed, Ansible collects its facts as usual. If the string `amazon` is found in the Ansible host fact `ansible_bios_version`, the variable `is_ec2_env` is set to `true` so that other tasks can use it (see the sketch after this list).
- Executes the `base`, `docker`, `keys`, `mailman`, and `import_archive` roles as required. In addition, for public mail servers, the playbook includes and runs the `update_dns` tasks from the `amazon` role to ensure that any MX, SPF or DKIM DNS records are updated. Details on each role follow in the next section.
- After installation, removes developer tools to harden the machine (slightly).
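The bootstrap step in the first bullet might look like this sketch; the task names and structure are illustrative.

```yaml
# playbook.yml (bootstrap excerpt, sketch)
- hosts: all
  become: true
  gather_facts: false               # Python isn't present yet, so skip fact-gathering
  pre_tasks:
    - name: Install Python so Ansible modules can run
      raw: apk add --no-cache python3
    - name: Gather facts now that Python exists
      setup:
    - name: Flag EC2 hosts for later tasks
      set_fact:
        is_ec2_env: "{{ 'amazon' in ansible_bios_version }}"
```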
All roles execute on target hosts with elevated (root) privileges. We use the strategy of importing roles (rather than including them) so that each role shares its variables with other roles. The `amazon` role is an exception: in this case, the playbook includes (rather than imports) its `update_dns` tasks so that they run with normal privileges on the local controller host.
The `base` role installs all of the essential services needed for a basic machine, other than its core applications. Essential services include logging, job scheduling (`cron`), network time protocol (NTP), and remote login (SSH). For hosts with names ending in `.local`, the `base` role also installs multicast DNS so that the host can be easily discovered and connected to on the local network. Specifically, the role:
- Using `apk`, verifies the presence of the `busybox-initscripts`, `audit`, `curl`, `git`, `net-tools`, and `logrotate` packages, installing them if they do not exist. The BusyBox init scripts include standard services for `cron`, `syslog`, and NTP.
- Configures SSH by copying a templated `sshd_config` to `/etc/ssh`. This version is identical to the stock version but disables remote `root` login and password-based authentication.
- Enables the NTP, cron and syslog services (`chronyd`, `crond` and `syslog`), requiring them to start at bootup. Note: the Amazon Machine Image configures the NTP daemon to use Amazon's NTP services; the Ansible playbook does not attempt to verify this setting (although it may in the future).
- For hosts with names ending in `.local`, configures the Avahi multicast DNS responder (mDNS) so that local VMs can be connected or browsed to by their local host names (for example, `devbox.local`). The templated `avahi-daemon.conf` is copied to `/etc/avahi`.
- Configures the host's kernel to use swap-memory limits, an important feature for Docker. As described in the Alpine wiki, the playbook adds `cgroup_enable=memory swapaccount=1` to the `extlinux` configuration (`/etc/update-extlinux.conf`), and notifies Ansible to reboot the host if the values changed (sketched after this list).
- Sets the hostname to www._public_domain_.
- Flushes any notified handlers, which may cause `sshd` or `avahi-daemon` to restart if their configurations changed. If swap memory support is newly enabled, the server reboots. If the server reboots, Ansible will wait for up to 5 minutes for the host to be up again before proceeding.
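The swap-accounting step might be sketched as follows; the exact contents of the kernel-options line in `/etc/update-extlinux.conf` are an assumption.

```yaml
# roles/base/tasks/main.yml (excerpt, sketch)
- name: Enable cgroup memory and swap accounting for Docker
  lineinfile:
    path: /etc/update-extlinux.conf
    regexp: '^default_kernel_opts='
    line: 'default_kernel_opts="quiet cgroup_enable=memory swapaccount=1"'
  notify: Reboot host
```

The matching `Reboot host` handler can use Ansible's `reboot` module with `reboot_timeout: 300` to implement the five-minute wait described above.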
The `docker` role installs Docker and Docker Compose. It also enables user-namespace mapping so that containers that normally run as `root` actually run as less-privileged users. The role:
- Installs the `docker` package.
- Installs dependencies needed by Docker Compose. These include `gcc`, `libc-dev`, `libffi-dev`, `make`, `openssl-dev`, and `python3-dev`.
- Installs the Python package Docker Compose (`docker-compose`).
- Creates a new user and group (default name: `dremap`), which Docker will use to configure user remapping.
- Creates the files `/etc/subuid` and `/etc/subgid`, which define the UID and GID ranges used by remapped users. By default these ranges start at 3000.
- Creates a new user and group (default name: `droot`) that represent the user and group that any containers running as `root` will use. The UID and GID default to 3000. Inside the containers, the UID and GID will appear as 0.
- Creates several new unprivileged users and groups (default prefix: `drun`) that represent typical users and groups that containers may create. The UIDs and GIDs default to 3100 through 3103. Inside the containers, they will appear as 100 through 103.
- Configures the Docker daemon to use user-namespace mapping, set the default log level to `info`, stop containers from asking for new privileges, remove inter-container communications, and enable live restores. The templated `daemon.json` is copied to `/etc/docker`.
- Enables the Docker service (`docker`) and requires it to start at bootup.
- Flushes any notified handlers, which may cause `docker` to restart if its configuration changed.
The configuration for Docker incorporates practices from the CIS Benchmark for Docker, with automated auditing provided by Docker Bench. It also includes several tips from the Alpine wiki pages on Docker support.
User-namespace support in Docker is incomplete, because per-container mappings are not possible as of this writing. Nonetheless, many useful articles describe how to use user-namespace support; the `docker` role incorporates many of their ideas.
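Assembled from the settings described above (including the properties listed as deferred near the end of this document), the templated `daemon.json` might resemble this sketch; it is not the project's actual template.

```json
{
  "log-level": "info",
  "live-restore": true,
  "icc": false,
  "no-new-privileges": true,
  "userns-remap": "dremap"
}
```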
The `keys` role generates a self-signed TLS certificate for use with mail or web servers. These certificates are not used in production, but they allow Nginx to bootstrap itself. Production certs are created using Let's Encrypt. For servers running a mail server, the role also creates DKIM keys. The role:
- Installs dependencies needed by Ansible's `openssl` modules. These include the `gcc`, `libc-dev`, `libffi-dev`, `openssl-dev`, and `python3-dev` packages, and the Python `pyopenssl` library.
- Adds a user and group, both named `certificate_user` and with the UID/GID `certificate_uid`, for handling certificates. The UID and GID default to 4000, which appear as 1000 inside containers.
- Creates the `tls_data` directory for storing certificates, and changes user and group ownership to `certificate_user`. The owner has read-write permissions; the group is read-only.
- Generates a TLS certificate private key `privkey.pem`, certificate signing request (CSR) `selfsigned-csr.pem`, and self-signed TLS certificate `fullchain.pem`, placing the results in `tls_data` (see the sketch after this list). The owner has read-write permissions; the group is read-only.
- Creates a directory for DKIM keys, and then creates a PEM-encoded 2048-bit public-private key pair to be stored in it. The files are called `{{ public_domain }}.private` and `{{ public_domain }}.public`, respectively. Owner and group have read-only permissions. Note: the Ansible DKIM creation tasks do not specify an owner or group for the directory, or for any of the files within it; these default to `root`. Later roles (e.g., the `mailman` role) set ownership. This strategy allows Ansible to ensure that the files exist, without setting ownership here that is later overridden by `mailman` (which would show as an undesirable "change").
- Creates the value of the DNS `TXT` DKIM record, which contains the public key. The content is derived from the DKIM public key file `{{ public_domain }}.public`, passed through a custom Ansible filter called `dkim_record`. This filter is stored in `mail_security/filter_plugins/dkim_record.py`. The output is stored in `{{ public_domain }}.txt`. As with the DKIM public and private keys, owner and group have read-only permissions but the names of owner and group are not specified; they default to `root`.
- If `letsencrypt_certificates` is true, installs the `openssl` package and `acme-tiny` for creating Let's Encrypt TLS certificates. The role also creates an account directory `letsencrypt_account_dir` and copies the Let's Encrypt account key to it, setting the permissions for the directory to read-write for the owner (`certificate_user`) and read-only for the group (also `certificate_user`). It sets the permissions for the account key to read-only for the owner and group, and copies a templated ACME renewal script `renew_certs.sh` to the account directory. It creates an empty ACME challenge directory `acme_challenge` and sets its owner and group to `certificate_user`, with read-write privileges for the owner and read for others. Finally, the role adds a `cron` job to run the renewal script on the first day of every month.
- If `letsencrypt_certificates` is true, and either the current TLS certificate is self-signed or it expires in less than 30 days, the role creates a certificate-signing request using the Let's Encrypt account key, configures and starts a temporary Nginx web server container, generates a new certificate using `acme-tiny`, and tears down the container when it's done.
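The self-signed bootstrap in particular might be sketched with Ansible's `openssl_*` modules; the file names come from the text above, while other parameters are assumed.

```yaml
# roles/keys/tasks/main.yml (excerpt, sketch)
- name: Generate the TLS private key
  openssl_privatekey:
    path: "{{ tls_data }}/privkey.pem"

- name: Generate a certificate signing request
  openssl_csr:
    path: "{{ tls_data }}/selfsigned-csr.pem"
    privatekey_path: "{{ tls_data }}/privkey.pem"
    common_name: "{{ public_domain }}"

- name: Self-sign the certificate so Nginx can bootstrap
  openssl_certificate:
    path: "{{ tls_data }}/fullchain.pem"
    csr_path: "{{ tls_data }}/selfsigned-csr.pem"
    privatekey_path: "{{ tls_data }}/privkey.pem"
    provider: selfsigned
```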
The `mailman` role configures the Mailman listserv software using five separate Docker containers:
- Mailman core, which provides the mailing list's back-end services
- Mailman web, which includes the Postorius administrative interface and the HyperKitty web archiver
- Postgres, which provides database storage for the Mailman application
- Nginx, which serves web pages for Mailman web
- Postfix, which is the MTA for the server as well as for the securitymetrics.org domain
The containers are all Alpine-based and are sourced from the Docker Hub registry. The containers are customized for the securitymetrics.org environment in three ways.
First, where possible, configuration and data directories are stored externally from the container on the main host. "Bind mounts" inject the configuration and data directories into the container. This strategy ensures that if a container is shut down or removed, its contents persist on the host machine. For example, the Postfix container uses any DKIM keys it finds in `/etc/opendkim/keys`, which are bind-mounted from the host directory `dkim_data`.
Second, where possible, environment variables are injected into the Docker containers at startup. The `postgres` container, for example, receives the Mailman database name, user, and password as environment variables. Mailman-web, Mailman-core and Postfix also accept various environment variables.
Third, in several cases the playbook takes advantage of container-specific customization capabilities. Mailman-core loads a customization file called `mailman-extra.cfg`. Mailman-web loads one called `settings_local.py`. And, at startup, Postfix parses any shell scripts injected into the `/docker-init.db/` directory; the playbook passes one in called `configure_postfix.sh`.
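An illustrative excerpt of what such a script can do appears below; the parameter values are assumptions based on the description of the `mailman` role, not the project's actual script.

```sh
# configure_postfix.sh (sketch)
# Route list traffic through Mailman's generated transport maps
postconf -e "transport_maps=hash:/var/data/mailman/postfix_lmtp"
postconf -e "relay_domains=hash:/var/data/mailman/postfix_domains"

# Point TLS at the certificates bind-mounted at /etc/tls
postconf -e "smtpd_tls_cert_file=/etc/tls/fullchain.pem"
postconf -e "smtpd_tls_key_file=/etc/tls/privkey.pem"

# Make life slightly harder for spammers
postconf -e "smtpd_helo_required=yes"
```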
The `mailman` role is complex. It performs the following tasks:
- Configures the PostgreSQL data directory `postgres_data`, which is bind-mounted into the container `postgres`. The user `dpostgres` (UID 3070), created by the role, owns the directory and can read and write to it. The group `dpostgres` (GID 3070) is assigned to it but has no read or write access. Inside the container, the user appears as `postgres` (UID 70), and the group appears as `postgres` (GID 70).
- Configures the Mailman Core data directory `mailman_core`, which is bind-mounted into the container `mailman-core`. The user (UID 5100), created by the role, owns the directory and can read and write to it. The root group (GID 5000) is assigned to it and has read-only access. Inside the container, the user appears as `mailman` (UID 100), and the group appears as `root` (GID 0). Note: a sub-directory `var/data` has the group ownership 5101 (in container: `mailman`, GID 101) with read-write access and the set-GID bit set. This allows Mailman to write its Postfix-specific LMTP address-mapping files, and ensures that they are group-owned by `mailman`.
- Copies a templated Mailman Core supplemental configuration file `mailman-extra.cfg` to `mailman_core`, which is bind-mounted into the `mailman-core` container. Its only function is to set the administrator's email to `mailman_email`.
- Configures the Mailman Web data directory `mailman_web_data`, which is bind-mounted into the container `mailman-web`. The user (UID 5100), created by the role, owns the directory and can read and write to it. The group (GID 5101) is assigned to it and has read-only access. Inside the container, the user appears as `mailman` (UID 100), and the group appears as `mailman` (GID 101).
- Copies a templated Mailman Web supplemental configuration file `settings_local.py` to `mailman_web_data`, which is bind-mounted into the `mailman-web` container. Its primary function is to disable the social-networking login functions of Mailman. Social logins are disabled by adding an `INSTALLED_APPS` section.
- Configures the Postfix directories used for initialization, data storage and logging, which are bind-mounted into the container `postfix`. The user (UID 5000) owns the directories and can read and write to them. The group (GID 5000) is assigned to them and has read-only access. Inside the container, the user appears as `root` (UID 0), and the group appears as `root` (GID 0).
- Copies a templated Postfix configuration script `configure_postfix.sh` to the `postfix_init` directory. The `postfix` container runs this script right after initialization. It sets up Mailman-specific transport maps, configures TLS support to use the certificates stored in `tls_data` (bind-mounted as `/etc/tls`), and tightens the configuration to make it slightly harder for spammers. The contents are based on several articles.
- Adds the users that the `postfix` and `nginx` containers run as at runtime to the group `certificate_user`. Postfix runs as UID 5100, which appears in-container as `postfix` (UID 100). Nginx runs as UID 5000, which appears in-container as `nginx` (UID 0). By adding these users to the group, the containers gain read access to the bind-mounted TLS certificate directory.
- Sets the DKIM directory `dkim_data` permissions to the user that the `postfix` container uses for its OpenDKIM daemon. The directory and all files within it are set to be owned by UID 5102, which appears in-container as `opendkim` (UID 102). The group owner is GID 5103, which appears in-container as `opendkim` (GID 103).
- Creates a minimal Nginx configuration directory `nginx_conf` and copies the files `nginx.conf`, `mime.types`, and `uwsgi_params` to it. It copies the website configuration file `mailman.conf` to the subdirectory `conf.d`. The owner and group (UID/GID 5000, which appear in-container as `root`, UID/GID 0) have read-only access to these files.
- Using Docker Compose, creates the Docker containers `postgres`, `postfix`, `mailman-core`, `mailman-web` and `nginx` (a structural sketch follows this list). In general, data, configuration and logging directories are bind-mounted read-write into each container, with "sidecar" containers that supply other services bind-mounted as read-only. The containers are all attached to the private `docker` virtual network on the host, each with an IP address in the 172.19.199.0/24 subnet.
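A structural sketch of the Compose file for two of the five services appears below, drawing its values from the `docker run` commands recorded later in this document; the real file defines all five services.

```yaml
# docker-compose.yml (sketch, two of five services)
version: "2"
services:
  postgres:
    image: postgres:9.6-alpine
    volumes:
      - /opt/postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: mailmandb
      POSTGRES_USER: mailman
      POSTGRES_PASSWORD: password
    networks:
      mailman:
        ipv4_address: 172.19.199.4
  nginx:
    image: nginx:mainline-alpine
    volumes:
      - /opt/nginx/conf:/etc/nginx:ro       # sidecar configuration, read-only
    ports:
      - "80:80"
      - "443:443"
    networks:
      mailman:
        ipv4_address: 172.19.199.6
networks:
  mailman:
    ipam:
      config:
        - subnet: 172.19.199.0/24
```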
For Amazon servers running a mail server, the `update_dns` tasks create a DKIM record in DNS, and create an SPF record whitelisting the server to send email. These tasks run only if the `is_ec2_env` variable evaluates to `true`.

For security reasons, the portions of this role that update Amazon Web Services do not execute on the remote host; these steps use the `local_action` idiom to execute on the Ansible controller workstation. The role:
- Registers a DNS `MX` record with Amazon Route 53 in the `public_domain` hosted DNS zone, with the record's name set to `public_domain` and the value set to the Elastic IP address.
- Registers a DNS `TXT` SPF record indicating that the host shown in the MX record is allowed to send mail on behalf of `public_domain`, with no other allowed sending IPs, and that any other host purporting to send from `public_domain` should be rejected. The resulting SPF syntax for the record is short and sweet: `"v=spf1 mx -all"` (a task sketch follows this list).
- Registers a DNS `TXT` record with Amazon Route 53 in the `public_domain` hosted DNS zone, with the record's name set to `public_domain` and the value set to the results of the previous step.
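The SPF step, for example, might be sketched with Ansible's `route53` module; the parameters shown are assumptions.

```yaml
# update_dns tasks (excerpt, sketch): runs on the controller, not the remote host
- name: Register the SPF TXT record
  local_action:
    module: route53
    state: present
    zone: "{{ public_domain }}"
    record: "{{ public_domain }}"
    type: TXT
    ttl: 300
    value: '"v=spf1 mx -all"'
    overwrite: true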
The `import_archive` role imports the legacy Mailman 2.1 `securitymetrics` mailing list into Mailman 3. This role runs only if the `import_archive` variable evaluates to `true`. It performs the following steps if the zero-byte file `archive.imported` is not found in the `mailman_core` directory:
- Copies the mailing list configuration (`config.pck`) from the project directory's `etc` directory to `mailman_core/` on the host.
- Copies the mailing list archives (`discuss.mbox`) from the project directory's `etc` directory to `mailman_core/` on the host.
- Creates a new mailing list `discuss` for the domain `public_domain`.
- Imports the mailing list configuration into Mailman 3, by executing the command `mailman import21` inside the `mailman-core` container (the import commands are shown after this list).
- Imports the mailing list archives into Mailman 3, by executing the command `python3 manage.py hyperkitty_import` inside the `mailman-core` container.
- Indexes the mailing list archives into Mailman 3, by executing the command `python3 manage.py update_index_one_list` inside the `mailman-core` container.
- Creates a zero-byte file `archive.imported` as a "memento" of the import so that it is not imported again.
After Mailman and Postfix are working, the following Mailman UI configuration appears to work:
- domain: markerbench.com
- web host: markerbench.com
However, emails that aren't addressed to one of the whitelisted Mailman addresses result in a bounce message:
<[email protected]>: mail for markerbench.com loops back to myself.
Note: this property in `daemon.json` can't be set at the moment:
"userland-proxy": false
Note that these properties are not set in `daemon.json`, but will be after ECS is verified as working:
"icc": false,
"no-new-privileges": true,
"userns-remap": "dremap"
The tools used to provision and configure securitymetrics.org include VirtualBox (virtual machine emulator), Vagrant (VM provisioning), and Ansible (configuration management). Git manages the versions of all project artifacts.
Set up the project directory, for example `~/workspace/securitymetrics`. Initialize Git:
git init
Update PIP3:
pip3 install --upgrade pip
Ansible provisions and configures the dev, testing and production machines for securitymetrics.org. Ansible allows a local virtual machine to be quickly spun up and configured with a single command. Ansible also provisions the testing and production machines in Amazon.
Install Ansible:
pip3 install --upgrade ansible
Create a one-line Ansible Vault password file at `~/.ansible_vault_pass_securitymetrics`.
Note: in the top-level project file `ansible.cfg`, the `vault_password_file` entry in the `[defaults]` section is set to read the password from this file by default to decrypt vaulted materials:
[defaults]
vault_password_file = ~/.ansible_vault_pass_securitymetrics
For development, [VirtualBox](https://www.virtualbox.org) is used for running local virtual machines. Download and install it.
HashiCorp's Vagrant tool creates and bootstraps the local VirtualBox machines used for testing. Download and install it.
After installation, also install the Alpine plugin, without which Alpine-based VMs can't be provisioned:
vagrant plugin install vagrant-alpine
Change to the project directory and test that Vagrant is working correctly by provisioning a local dev VM:
vagrant up
vagrant ssh
Vagrant will create the virtual machine based on the instructions in `Vagrantfile`, spin up the VM, and configure it by running the Ansible `playbook.yml` playbook.
Amazon Web Services houses the testing and production machines for securitymetrics.org. This section describes how to install the required Python packages Ansible needs, and how to verify that they are working properly.
- In the AWS web console, create an IAM user with permissions `AmazonEC2FullAccess` and `AmazonRoute53FullAccess`. Generate credentials and download the `.csv` file.
- Install the AWS command-line interface on Python 3, as well as the Python clients (`boto`, `boto3` and `botocore`). Initialize the Amazon configuration client:

pip3 install awscli
pip3 install boto
pip3 install boto3
pip3 install botocore
aws configure
Supply the AWS Access Key ID and AWS Secret Access Key from the `.csv` file. This will create the `~/.aws/credentials` file and a profile called `default` in the configuration file `~/.aws/config`. Verify the config file contains something similar to this:
[default]
output = json
region = us-east-1
- Verify the AWS command-line client can connect with credentials; this command should succeed without any errors:
aws ec2 describe-instances
- Verify the Boto client connects by starting the `python3` interpreter and pasting the following code. It should succeed without any errors:

import boto3
ec2 = boto3.client('ec2')
response = ec2.describe_instances()
print(response)
- Change to the `securitymetrics` project and verify that the Ansible EC2 inventory plugin can read its inventory:

ansible-inventory -i hosts_ec2.yml --graph
...which should produce output similar to this:
@all:
|--@aws_ec2:
| |--ec2-3-212-5-14.compute-1.amazonaws.com
|--@testing:
| |--ec2-3-212-5-14.compute-1.amazonaws.com
|--@ungrouped:
For production, HashiCorp's Packer builds custom Amazon Machine Images (AMIs) with a basic Alpine Linux configuration, including Docker and AWS utilities. In the `ajaquith` GitHub repository, the forked alpine-ec2-ami repository contains scripts for building the AMIs.
On OS X, install Packer using `brew`:
brew install packer
See the `alpine-ec2-ami` project's README for more details on how to build the AMIs. But in general: clone the repo, change to the project directory, and build using `make`:
make PROFILE=arj
HashiCorp's Terraform bootstraps the Amazon environment.
Enable remote state storage in AWS S3 by creating a new S3 bucket `cloudadmin.markerbench.com`, with no versioning, access logging, request metrics, or object-level logging. Enable AES-256 default encryption. Disable all public access by default.
Create a custom AWS policy called `TerraformStateManagement` with the following privileges:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::cloudadmin.markerbench.com"
},
{
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:::cloudadmin.markerbench.com/terraform/\*"
}
]
}
To inspect the execution plan and count pending changes with `jq`:

terraform plan -out=out
terraform show -json out > out.json
jq '.resource_changes[].address' out.json | wc -l
jq '.resource_changes[].change | select(.actions[] | contains("update")) | .actions[]' out.json | wc -l
jq '.planned_values.root_module.resources[].address' out.json
Configure the OS X SSH login agent to require a password upon first use of an SSH key, by editing `~/.ssh/config` so that it uses the SSH keychain, adds SSH keys automatically, and sets the default identity file:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa
Base image | Size | Open ports |
---|---|---|
Alpine 3.10 | 98M; 641M post-config before containers; 1.36GB after | 22, 323 |
Minimal Ubuntu 19.04 (Disco) | 737MB | 22, 53, 53u, 68u |
CentOS 7 (x86_64) with Updates HVM | 885MB | 22, 111, 25, 68u, 111u, 323u, 973u |
Assume we have provisioned an EFS instance. Its file system ID is `fs-29b863cb` or similar. The EFS instance needs to be added to a security group that allows inbound NFS traffic from all VPC addresses (this could be narrowed later). Example security group rule: type `NFS` (port range 2049), protocol `TCP`, source `172.31.16.0/20`.
Mount EFS using `stunnel` SSL encryption by creating a mount point (e.g., `/root/efs`) and then mounting the file system ID:
mount -t efs -o tls fs-29b863cb:/ /root/efs
Logs are in `/var/log/amazon/efs`. The mount log is `mount.log`; the watchdog log is `mount-watchdog.log`.
APK packages required: `py3-virtualenv`
Python packages required: `awscli`
https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
As described in the CloudWatch Quick Start guide, in the AWS console, manually create a policy called `CloudWatchPush` with the following JSON permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:\*:\*:\*"
]
}
]
}
Create role `CloudWatchAgent` and attach the policy `CloudWatchPush`. Add `EC2` as the trusted entity.
Create user `TestCloudWatchAgent` with "programmatic" access type. Directly attach the policy `CloudWatchPush`. Extract the access key and secret access key.
To stop the CloudWatch agent, send it SIGINT:

kill -SIGINT (agent process)
Manually creating the Nginx content directory, user and Docker network:

mkdir -p /opt/nginx/html
adduser -u 3101 -D -h /opt/nginx/html -s /sbin/nologin nginx
docker network create --subnet 172.19.199.0/24 mailman
docker run --name postgres \
  -v /opt/postgres/data:/var/lib/postgresql/data \
  -e POSTGRES_DB=mailmandb \
  -e POSTGRES_USER=mailman \
  -e POSTGRES_PASSWORD=password \
  --network mailman --ip 172.19.199.4 \
  postgres:9.6-alpine

docker run --name mailman-core \
  -v /opt/mailman/core:/opt/mailman \
  -e DATABASE_CLASS=mailman.database.postgresql.PostgreSQLDatabase \
  -e DATABASE_TYPE=postgres \
  -e DATABASE_URL=postgres://mailman:[email protected]/mailmandb \
  -e HYPERKITTY_API_KEY=foo \
  -e HYPERKITTY_URL=http://172.19.199.3:8000/hyperkitty/ \
  -e MAILMAN_REST_URL=http://172.19.199.2:8001 \
  -e MAILMAN_REST_USER=restadmin \
  -e MAILMAN_REST_PASSWORD=password \
  -e MTA=postfix \
  -e SMTP_HOST=172.19.199.5 \
  --network mailman --ip 172.19.199.2 \
  maxking/mailman-core

docker run --name postfix \
  -v /opt/postfix/data:/var/spool/postfix \
  -v /opt/postfix/log:/var/log/postfix \
  -v /opt/keys/tls:/etc/tls:ro \
  -v /opt/keys/dkim:/etc/opendkim/keys:ro \
  -v /opt/mailman/core/var/data:/var/data/mailman:ro \
  -v /opt/postfix/init:/docker-init.db:ro \
  -e HOSTNAME=devbox.local \
  -e "MYNETWORKS=172.19.199.0/24 127.0.0.0/8" \
  -e ALLOWED_SENDER_DOMAINS=devbox.local \
  -e MASQUERADED_DOMAINS=devbox.local \
  -e MESSAGE_SIZE_LIMIT=102400 \
  --network mailman --ip 172.19.199.5 \
  boky/postfix

docker run --name mailman-web \
  -v /opt/mailman/web:/opt/mailman-web-data \
  -e DATABASE_TYPE=postgres \
  -e DATABASE_URL=postgres://mailman:[email protected]/mailmandb \
  -e DJANGO_ALLOWED_HOSTS=devbox.local \
  -e HYPERKITTY_API_KEY=foo \
  -e HYPERKITTY_URL=http://172.19.199.3:8000/hyperkitty/ \
  -e MAILMAN_ADMIN_USER=mailman \
  -e MAILMAN_ADMIN_EMAIL=[email protected] \
  -e MAILMAN_REST_URL=http://172.19.199.2:8001 \
  -e MAILMAN_REST_USER=restadmin \
  -e MAILMAN_REST_PASSWORD=password \
  -e MAILMAN_HOST=172.19.199.2 \
  -e POSTORIUS_TEMPLATE_BASE_URL=http://172.19.199.3:8000 \
  -e SECRET_KEY=secret \
  -e SERVE_FROM_DOMAIN=devbox.local \
  -e SMTP_HOST=172.19.199.5 \
  -e UWSGI_STATIC_MAP=/static=/opt/mailman-web-data/static \
  --network mailman --ip 172.19.199.3 \
  maxking/mailman-web

docker run --name nginx \
  -v /opt/keys/acme/challenge:/var/www/acme:ro \
  -v /opt/keys/tls:/etc/tls:ro \
  -v /opt/mailman/web:/opt/mailman-web-data:ro \
  -v /opt/nginx/conf:/etc/nginx:ro \
  -v /opt/nginx/html:/usr/share/nginx/html:ro \
  -p "80:80" \
  -p "443:443" \
  --network mailman_mailman --ip 172.19.199.6 \
  nginx:mainline-alpine
Container | Bind mounts | Permissions (host) | Permissions (container) |
---|---|---|---|
postgres | {{ postgres_data }}:/var/lib/postgresql/data | 5070:5070 0700 | postgres[70]:postgres[70] |
mailman-core | {{ mailman_core }}:/opt/mailman | 5100:5000 0750 | mailman[100]:root[0] |
| {{ mailman_core }}/var/data:/opt/mailman/var/data | 5100:5101 2750 | mailman[100]:users[100] |
postfix | {{ postfix_data }}:/var/spool/postfix | 5000:5000 0711 | postfix[100]:root[0] |
| {{ postfix_log }}:/var/log/postfix | 5100:5000 0711 | postfix[100]:root[0] |
| {{ tls_data }}:/etc/tls:ro | certs:certs 0750 | 1000:1000 |
| {{ dkim_data }}:/etc/opendkim/keys:ro | 5102:5103 750 | opendkim[102]:opendkim[103] |
| {{ mailman_core }}/var/data:/var/data/mailman:ro | 5100:5101 2750 | postfix[100]:101 |
| {{ postfix_init }}:/docker-init.db/:ro | 5100:5000 750 | postfix[100]:root[0] |
mailman-web | {{ mailman_web_data }}:/opt/mailman-web-data | 5100:5101 750 | mailman[100]:mailman[101] |
nginx | {{ acme_challenge }}:/var/www/acme:ro | certs:certs 750 | ?:root[0] |
| {{ tls_data }}:/etc/tls:ro | certs:certs 750 | ?:? |
| {{ mailman_web_data }}:/opt/mailman-web-data:ro | 5100:5000 750 | nginx[100]:root[0] |
| {{ nginx_conf }}:/etc/nginx:ro | 0:3000 550 | ?:root[0] |
| {{ nginx_html }}:/usr/share/nginx/html:ro | 0:3000 550 | ?:root[0] |
User[uid] | Used by containers |
---|---|
[5000] | nginx |
[5070] | postgres |
[5100] | mailman-core, mailman-web, postfix |
[5101] | postfix:vmail |
[5102] | postfix:opendkim |
[5103] | |
certs[4000] | (none, but used by ACME) |
Group[gid] | Used by Containers |
---|---|
[5000] | mailman-core, nginx, postfix |
[5070] | postgres |
[5100] | |
[5101] | mailman-web:mailman |
[5102] | postfix:postdrop |
[5103] | postfix:opendkim |
[6000] | nginx, postfix. Members: 5000, 5100, 5102 |
Viewing the list of containers, including stopped ones, and then stopping and removing them:
docker ps -a
docker stop {id}
docker rm {id}
Viewing the list of networks, and pruning unused ones:
docker network list
docker network prune
Getting shell in a running container:
docker ps
docker exec -it <container name> sh
Viewing container logs:
sudo docker logs -f <container id>
Viewing the static Ansible inventory:

ansible-inventory -i hosts --list

If ssh-agent has been used to add SSH keys to the background agent, the agent can interfere with SSH logins. Remove them this way before running Ansible:
"$(ssh-agent)"
ssh-add -D
Encrypt each sensitive value individually using the `encrypt_string` command of `ansible-vault`:
ansible-vault encrypt_string foo
Paste the content directly into the vars file:
a_secret_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
32353831393965363138646337646633356638663039613939326365623866343737633665313133
3636376335333534373631373266376632393863383735630a396430393031373866643665386532
33373739346137373831663864343432333464636137636535643266663666336630653538323961
3763623739633234660a343230333366636163613835383263303737333263373132653964346663
30613366303732356336643862656136356134396266623032356561613861373766
Testing sending from inside the container:
echo "Subject: hello" | sendmail [email protected]
Verifying config:
postconf 1> /dev/null
To verify Mailman Core is running, in the `mailman-core` container, typing `mailman info` results in something similar to:
mailman info
GNU Mailman 3.2.2 (La Villa Strangiato)
Python 3.6.6 (default, Aug 24 2018, 05:04:18)
[GCC 6.4.0]
config file: /etc/mailman.cfg
db url: postgres://mailman:longmailmanpassword@database/mailmandb
devmode: DISABLED
REST root url: http://172.19.199.2:8001/3.1/
REST credentials: restadmin:restsecret
Use a browser to navigate to http://testbox.local/postorius/
Shell into the `mailman-web` container. Type:
python3 manage.py mmclient
At the Python shell, type:
>>> client
Expected output:
<Client ({{ mailman_rest_user }}:{{ mailman_rest_password }}) http://172.19.199.2:8001/3.0/>
In test environments, Postfix won't connect to upstream destination MTAs (such as Gmail). But accounts cannot be reset without an email message. To reset accounts, shell into the Postfix container, view the stuck message, and flush the queue:
sudo docker exec -it postfix /bin/sh
mailq
postcat -vq <message id>
postsuper -d ALL
Checking memory usage:

free -m
cat /proc/meminfo
To change the base URL for pipermail:
cd /usr/lib/mailman/
bin/withlist -l -r fix_url discuss -u archive.securitymetrics.org