Here I describe my server setup, which provides the following services:
- E-Mail (Mailcow)
- Cloud (Seafile)
- Dynamic DNS (dprandzioch/docker-ddns @ GitHub)
- OpenVPN (kylemanna/docker-openvpn @ GitHub)
- Offsite Backup (Borg Backup)
- A simple HTML webpage
The whole setup sits behind HAProxy with Let's Encrypt certificates and is deployed with Docker and Docker Compose to keep it as simple as possible.
I'm not a professional administrator and I run this setup privately only.
This is not meant as a copy-and-paste tutorial, but it will probably work as one.
You will need:
- A server with a static IP address
- A domain
- A little home server for the offsite backup (Raspberry Pi/Intel NUC/Hetzner Storage Box)
I'm using a VPS (virtual private server) from Contabo, the 'VPS L SSD', and an Intel NUC, both running Debian 10.
Important: You should secure your server and your SSH connection.
First we need to install Docker and Docker Compose. These steps are well documented on the official Docker website, so I will just link to them.
Just as with Docker, there are plenty of instructions for installing certbot.
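On Debian 10, for example, certbot can simply be installed from the distribution repositories (or follow the official instructions for your distribution):
apt update
apt install certbot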
Inside cli.ini, we configure Let's Encrypt.
rsa-key-size = 4096
email = [email protected]
authenticator = standalone
domains = example.com, www.example.com, sea.example.com, autoconfig.example.com, autodiscover.example.com, imap.example.com, mx.example.com, smtp.example.com, ns.example.com
Important: Under domains, enter all hostnames for which you want an SSL certificate.
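With these defaults in place (assuming the file sits where certbot picks it up, by default /etc/letsencrypt/cli.ini), the initial certificate can be requested in standalone mode roughly like this; port 80 must be free at that moment, so run it before HAProxy is started:
certbot certonly --cert-name example.com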
Create the folder structure (you can customize the location if you want) and download the required files.
git clone https://github.com/oyren/yet_another_complete_groupware_setup /root/docker
git clone https://github.com/mailcow/mailcow-dockerized.git /root/docker/mailcow-dockerized
Inside the web folder, you will find the docker-compose.yml.
I will split the file into parts to explain them; you have to replace example.com with your own domain and make other adjustments where necessary.
We begin with HAProxy, which serves as a reverse proxy and takes care of SSL.
version: '3.5'
services:
  haproxy:
    container_name: haproxy
    image: haproxy:latest
    ports:
      - "80:80" # (1)
      - "443:443" # (2)
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro # (3)
      - ./example.com.pem:/usr/local/etc/haproxy/example.com.pem:ro # (4)
    networks:
      - gateway # (5)
      - mailcowdockerized_mailcow-network
    restart: unless-stopped
(1) Bind the http (80) host port to the HAProxy container
(2) Bind the https (443) host port to the HAProxy container
(3) Mount haproxy.cfg with read-only access
(4) Mount example.com.pem to use SSL termination in HAProxy (more about this later)
(5) gateway is the Docker network used to reach the other containers
As you can see, all http/https requests are routed through HAProxy.
SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer, and sending unencrypted connections to the backend servers.
Important: You also need to edit haproxy.cfg (search for example.com and adjust it).
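For orientation, the SSL-terminating part of such a haproxy.cfg looks roughly like the sketch below; the ACLs, backend names and ports here are illustrative only and do not reproduce the exact file from the repository:
frontend https-in
    mode http
    bind *:443 ssl crt /usr/local/etc/haproxy/example.com.pem   # SSL is terminated here
    acl host_sea hdr(host) -i sea.example.com
    use_backend seafile if host_sea
    default_backend webpage
backend seafile
    mode http
    server seafile seafile:80 check   # plain HTTP towards the backend container
backend webpage
    mode http
    server example_webpage example_webpage:80 check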
Next we have Seafile, which in my opinion is far better than Nextcloud. Nextcloud tries to provide everything but does nothing really well. Just my two cents.
  db: # seafile_db
    container_name: seafile-mysql
    image: mariadb:10.1
    environment:
      - MYSQL_ROOT_PASSWORD=secret_pw # Required: sets the root password of the MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - ./seafile-mysql/db:/var/lib/mysql # (1) Required: path to the MySQL persistent data store.
    networks:
      - seafile-net
  memcached:
    container_name: seafile-memcached
    image: memcached:1.5.6
    entrypoint: memcached -m 256
    networks:
      - seafile-net
  seafile:
    container_name: seafile
    image: seafileltd/seafile-mc:latest
    volumes:
      - ./seafile-shared:/shared # (2)
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=secret_pw # Required: must match the root password of the MySQL service.
      - SEAFILE_SERVER_LETSENCRYPT=false # Whether Seafile itself should use https or not.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net
(1) seafile-mysql is the folder where Seafile will store the database
(2) seafile-shared is the folder where Seafile will store the data/config files
Seafile is well documented, so as always I will just link the documentation here. There you can find how to modify the Seafile server configuration and so on.
Since HAProxy takes care of SSL, Seafile does not need to do this.
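Because clients reach Seafile through HAProxy at sea.example.com, the Seafile configuration should point to that public https URL. Depending on the Seafile version this roughly comes down to the following settings (see the linked documentation for the exact file locations, which may differ from this sketch):
# conf/ccnet.conf
SERVICE_URL = https://sea.example.com
# conf/seahub_settings.py
FILE_SERVER_ROOT = 'https://sea.example.com/seafhttp'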
This container lets you run your own dynamic DNS server, so that you can connect to devices at home from anywhere in the world.
With it you can send an API request from your router (a FritzBox in my case) and then reach your home network, e.g. at home.dyndns.example.com (this will be needed later for the offsite backup).
How this works is described later under the subsection Usage and in the good documentation from dprandzioch @ GitHub.
  ddns:
    image: davd/docker-ddns:latest
    ports:
      - "53:53"
      - "53:53/udp"
    networks:
      - gateway
    environment:
      RECORD_TTL: 60
      ZONE: dyndns.example.com # (1)
      SHARED_SECRET: changeme # (2)
    restart: unless-stopped
(1) ZONE is your dyndns domain (NS zone)
(2) SHARED_SECRET must be provided each time you update a DNS record via the API
I wrote a small website to get faster access to the subpages like SOGo (from mailcow) and Seafile. The relevant part of the docker-compose.yml:
  example_webpage:
    build:
      context: ./example_welcome # (1)
      dockerfile: Dockerfile
    networks:
      - gateway
    restart: unless-stopped
(1) Folder where the Dockerfile is placed (you can easily change this)
The index.html and the Dockerfile are placed inside the example_welcome folder and should be self-explanatory.
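For reference, a minimal Dockerfile for such a static page could look like this (a sketch, not necessarily the exact file from the repository):
FROM nginx:alpine
# serve the static page with the default nginx configuration
COPY index.html /usr/share/nginx/html/index.html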
Important: If you change anything on the HTML page, you have to rebuild the container. There are several ways to do this; I use the following command (service_name = example_webpage).
docker-compose up -d --no-deps --build <service_name>
You should have already cloned mailcow.
Again we can use the already existing documentation.
First follow the prerequisites: check that umask returns 0022.
Then follow the instructions starting at point 3:
Important: We use HAProxy as a reverse proxy and for SSL, so there are some important points in mailcow.conf; they are listed below. Skip docker-compose up -d for the moment.
HTTP_BIND=127.0.0.1
HTTP_PORT=8080
HTTPS_BIND=127.0.0.1
HTTPS_PORT=8443
ADDITIONAL_SAN=imap.example.com,smtp.example.com
# Skip running ACME (acme-mailcow, Let's Encrypt certs) - y/n
SKIP_LETS_ENCRYPT=y
This is what my DNS records look like:
# Name Type Value
example.com A 1.2.3.4
AAAA 2222:1111:3333::1
CAA letsencrypt.org
MX mx.example.com
TXT "v=spf1 mx ~all"
autoconfig.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
autodiscover.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
dkim._domainkey.example.com TXT v=DKIM1;k=rsa;t=s;s=email;p=....
dyndns.example.com NS ns.example.com
imap.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
mx.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
ns.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
sea.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
smtp.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
vpn.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
www.example.com A 1.2.3.4
AAAA 2222:1111:3333::1
Important: Let's Encrypt certificates have a ninety-day lifetime, so you have to renew them before they expire.
You can run the renewal regularly through a cronjob (an example entry follows after the script).
Important: Here, too, you have to change example.com and the paths if necessary.
#!/bin/bash
# the (cd ... ; ...) part executes a command from another directory in bash
# this makes it more reliable, as docker-compose uses the folder name to name volumes
(cd /root/docker/web/ ; docker-compose down)
# Wait a few seconds, so there won't be any conflicts.
sleep 15
(cd /root/docker/mailcow-dockerized/ ; docker-compose down)
certbot certonly --expand --non-interactive --cert-name example.com
cat /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem > /root/docker/web/example.com.pem
cp /etc/letsencrypt/live/example.com/fullchain.pem /root/docker/mailcow-dockerized/data/assets/ssl/cert.pem
cp /etc/letsencrypt/live/example.com/privkey.pem /root/docker/mailcow-dockerized/data/assets/ssl/key.pem
(cd /root/docker/mailcow-dockerized/ ; docker-compose up -d)
sleep 15
(cd /root/docker/web/ ; docker-compose up -d)
As you can see, the certificate is copied to two places: once for HAProxy and once for mailcow.
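A matching crontab entry could look roughly like this; the script path /root/docker/renew_certs.sh is an assumption, use whatever location you saved the script above under:
# renew the certificates at 03:30 on the first day of every month
30 3 1 * * /root/docker/renew_certs.sh >> /var/log/letsencrypt-renew.log 2>&1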
Now you should be able to reach the following services under the following links:
Service          | URL
---------------- | -------------------
Seafile          | sea.example.com
SOGo             | mx.example.com/SOGo
Mailcow Admin/UI | mx.example.com
Welcome Webpage  | example.com
With the following API request you can set an IPv4 or IPv6 address:
https://ns.example.com/update?secret=<yourpw>&domain=home&addr=1.2.3.4
- secret: the shared secret set before
- domain: the subdomain of your configured zone; in this example it results in home.dyndns.example.com
- addr: IPv4 or IPv6 address for the name record
Then you can reach the registered IP under home.dyndns.example.com (1.2.3.4).
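From a shell you could test this with curl, for example (hostname and shared secret as configured above):
curl "https://ns.example.com/update?secret=changeme&domain=home&addr=1.2.3.4"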
You can read more about it here: https://www.davd.io/build-your-own-dynamic-dns-in-5-minutes/
As backup system I use Borg Backup, which runs on an Intel NUC at my home.
I reach the NUC via SSH and the previously set up dynamic DNS service; to make it reachable I only had to forward the SSH port in my router settings to the NUC.
Important: You should secure your SSH login. Among other things, I use public key authentication.
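To set this up, a dedicated key pair for the backup user can be created and copied to the home server, for example like this (the key path matches the one used in the backup script below):
ssh-keygen -t ed25519 -f /root/.ssh/serverbackup_ed25519
ssh-copy-id -i /root/.ssh/serverbackup_ed25519.pub [email protected]
Afterwards the Borg repository on the remote machine is initialized: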
borg init --encryption=repokey ssh://[email protected]:22/./backup
The backup is performed once a day via cronjob (an example entry follows after the script) and is based on the following script, which should be understandable from the comments.
#!/usr/bin/env bash
## Create log folder
mkdir -p /var/log/borg
##
## export environment variables
##
## Path to SSH Key
export BORG_RSH="ssh -i /root/.ssh/serverbackup_ed25519"
## Borg Repository Password
export BORG_PASSPHRASE="YOUR_BORG_Password"
##
## set the variables
##
## log file and path
LOG="/var/log/borg/backup.log"
## Backup user on your remote machine (intel nuc in my case)
BACKUP_USER="serverbackup"
## Backup location (/home/serverbackup/backup)
REPOSITORY_DIR="./backup"
## put everything together and hand over the domain under which the backup server can be reached
REPOSITORY="ssh://${BACKUP_USER}@home.dyndns.example.com:22/${REPOSITORY_DIR}"
##
## Write output to logfile
##
exec > >(tee -i ${LOG})
exec 2>&1
echo "###### Backup started: $(date) ######"
## Create list of installed software
dpkg --get-selections > /backup/software.list
## I use the path /backup to store the mailcow backups and the seafile mysqldump's
## create needed folders if not exist
mkdir -p /backup/seafile_database
mkdir -p /backup/mailcow/
## Mailcow Backup
## https://mailcow.github.io/mailcow-dockerized-docs/b_n_r_backup/
export MAILCOW_BACKUP_LOCATION=/backup/mailcow
/root/docker/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all
## Seafile
## https://manual.seafile.com/deploy/deploy_with_docker.html
## no TTY allocation (-t) here, since this runs from cron; the root password from the compose file is passed to mysqldump
docker exec seafile-mysql mysqldump -uroot -psecret_pw --opt ccnet_db > /backup/seafile_database/$(date +%F_%R)_ccnet_db.sql
docker exec seafile-mysql mysqldump -uroot -psecret_pw --opt seafile_db > /backup/seafile_database/$(date +%F_%R)_seafile_db.sql
docker exec seafile-mysql mysqldump -uroot -psecret_pw --opt seahub_db > /backup/seafile_database/$(date +%F_%R)_seahub_db.sql
echo "Transfer Files ..."
## Here files and folders which are backed up and which are excluded are configured.
borg create -v --stats \
$REPOSITORY::'{now:%Y-%m-%d_%H:%M}' \
/root \
/etc \
/backup \
/var/spool/cron \
--exclude /root/docker/iota_iri/data \
--exclude /root/docker/mailcow-dockerized \
--exclude /sys \
--exclude /var/run \
--exclude /run \
--exclude /lost+found \
--exclude /mnt
echo "###### Backup finished: $(date) ######"
## Cleanup
borg prune -v ${REPOSITORY} \
--keep-daily=7 \
--keep-weekly=4 \
--keep-monthly=6
rm -R /backup/mailcow/*
rm -R /backup/seafile_database/*
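The daily cronjob for this could look roughly like the following; the script path /root/docker/backup.sh is an assumption, adjust it to wherever you keep the script (it already writes its own log to /var/log/borg/backup.log):
# run the backup script every night at 02:00
0 2 * * * /root/docker/backup.sh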
Borg Backup is also very well documented here.
This script was shamelessly stolen from Hetzner and adapted by me.