
Add support for Valkey and recommend it instead of Redis & KeyDB
Related to #247
spantaleev committed Nov 23, 2024
1 parent 1154b50 commit 014d569
Showing 18 changed files with 371 additions and 284 deletions.
58 changes: 29 additions & 29 deletions docs/running-multiple-instances.md
@@ -4,11 +4,11 @@ The way this playbook is structured, each Ansible role can only be invoked once

If you need multiple instances (of whichever service), you'll need some workarounds as described below.

The example below focuses on hosting multiple [KeyDB](services/keydb.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.
The example below focuses on hosting multiple [Valkey](services/valkey.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.

Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [KeyDB](services/keydb.md) instance. If you simply add `keydb_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a KeyDB instance (`mash-keydb`), but it's just one instance. As described in our [KeyDB](services/keydb.md) documentation, this is a security problem and potentially fragile as both services may try to read/write the same data and get in conflict with one another.
Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [Valkey](services/valkey.md) instance. If you simply add `valkey_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a Valkey instance (`mash-valkey`), but it's just one instance. As described in our [Valkey](services/valkey.md) documentation, this is a security problem and potentially fragile as both services may try to read/write the same data and get in conflict with one another.

We propose that you **don't** add `keydb_enabled: true` to your main `mash.example.com` file, but do the following:
We propose that you **don't** add `valkey_enabled: true` to your main `mash.example.com` file, but do the following:

## Re-do your inventory to add supplementary hosts
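
The gist of this step is that the supplementary hosts (`mash.example.com-netbox-deps` and `mash.example.com-peertube-deps`) are simply additional inventory entries pointing at the **same** server as the main host. As a rough, hypothetical sketch only (a YAML-style inventory is assumed here; the group name and IP address are purely illustrative):

```yaml
all:
  children:
    mash_servers:
      hosts:
        # All three inventory hosts target the same physical server.
        mash.example.com:
          ansible_host: 203.0.113.10
        mash.example.com-netbox-deps:
          ansible_host: 203.0.113.10
        mash.example.com-peertube-deps:
          ansible_host: 203.0.113.10
```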

@@ -40,7 +40,7 @@ When running Ansible commands later on, you can use the `-l` flag to limit which

## Adjust the configuration of the supplementary hosts to use a new "namespace"

Multiple hosts targeting the same server as described above still cause conflicts, because services will use the same paths (e.g. `/mash/keydb`) and service/container names (`mash-keydb`) everywhere.
Multiple hosts targeting the same server as described above still cause conflicts, because services will use the same paths (e.g. `/mash/valkey`) and service/container names (`mash-valkey`) everywhere.

To avoid conflicts, adjust the `vars.yml` file for the new hosts (`mash.example.com-netbox-deps` and `mash.example.com-peertube-deps`)
and set non-default and unique values in the `mash_playbook_service_identifier_prefix` and `mash_playbook_service_base_directory_name_prefix` variables. Examples below:
@@ -73,15 +73,15 @@ mash_playbook_service_base_directory_name_prefix: 'netbox-'

########################################################################
# #
# keydb #
# valkey #
# #
########################################################################

keydb_enabled: true
valkey_enabled: true

########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
@@ -114,30 +114,30 @@ mash_playbook_service_base_directory_name_prefix: 'peertube-'

########################################################################
# #
# keydb #
# valkey #
# #
########################################################################

keydb_enabled: true
valkey_enabled: true

########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```

The above configuration will create **2** KeyDB instances:
The above configuration will create **2** Valkey instances:

- `mash-netbox-keydb` with its base data path in `/mash/netbox-keydb`
- `mash-peertube-keydb` with its base data path in `/mash/peertube-keydb`
- `mash-netbox-valkey` with its base data path in `/mash/netbox-valkey`
- `mash-peertube-valkey` with its base data path in `/mash/peertube-valkey`

These instances reuse the `mash` user and group and the `/mash` data path, but are not in conflict with each other.


## Adjust the configuration of the base host

Now that we've created separate KeyDB instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTube and NetBox) to wire them to their KeyDB instances.
Now that we've created separate Valkey instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTube and NetBox) to wire them to their Valkey instances.

You'll need configuration (`inventory/host_vars/mash.example.com/vars.yml`) like this:

@@ -152,17 +152,17 @@ netbox_enabled: true

# Other NetBox configuration here

# Point NetBox to its dedicated KeyDB instance
netbox_environment_variable_redis_host: mash-netbox-keydb
netbox_environment_variable_redis_cache_host: mash-netbox-keydb
# Point NetBox to its dedicated Valkey instance
netbox_environment_variable_redis_host: mash-netbox-valkey
netbox_environment_variable_redis_cache_host: mash-netbox-valkey

# Make sure the NetBox service (mash-netbox.service) starts after its dedicated KeyDB service (mash-netbox-keydb.service)
# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Valkey service (mash-netbox-valkey.service)
netbox_systemd_required_services_list_custom:
- mash-netbox-keydb.service
- mash-netbox-valkey.service

# Make sure the NetBox container is connected to the container network of its dedicated KeyDB service (mash-netbox-keydb)
# Make sure the NetBox container is connected to the container network of its dedicated Valkey service (mash-netbox-valkey)
netbox_container_additional_networks_custom:
- mash-netbox-keydb
- mash-netbox-valkey

########################################################################
# #
@@ -180,16 +180,16 @@

# Other PeerTube configuration here

# Point PeerTube to its dedicated KeyDB instance
peertube_config_redis_hostname: mash-peertube-keydb
# Point PeerTube to its dedicated Valkey instance
peertube_config_redis_hostname: mash-peertube-valkey

# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated KeyDB service (mash-peertube-keydb.service)
# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Valkey service (mash-peertube-valkey.service)
peertube_systemd_required_services_list_custom:
- "mash-peertube-keydb.service"
- "mash-peertube-valkey.service"

# Make sure the PeerTube container is connected to the container network of its dedicated KeyDB service (mash-peertube-keydb)
# Make sure the PeerTube container is connected to the container network of its dedicated Valkey service (mash-peertube-valkey)
peertube_container_additional_networks_custom:
- "mash-peertube-keydb"
- "mash-peertube-valkey"

########################################################################
# #
Expand All @@ -201,9 +201,9 @@ peertube_container_additional_networks_custom:

## Questions & Answers

**Can't I just use the same KeyDB instance for multiple services?**
**Can't I just use the same Valkey instance for multiple services?**

> You may or you may not. See the [KeyDB](services/keydb.md) documentation for why you shouldn't do this.
> You may or you may not. See the [Valkey](services/valkey.md) documentation for why you shouldn't do this.
**Can't I just create one host and a separate stack for each service** (e.g. Nextcloud + all dependencies on one inventory host; PeerTube + all dependencies on another inventory host; with both inventory hosts targeting the same server)?

16 changes: 8 additions & 8 deletions docs/services/authelia.md
@@ -16,9 +16,9 @@ This service requires the following other services:
- (optional) a MySQL / [MariaDB](mariadb.md) database - if enabled for your Ansible inventory host (and you don't also enable Postgres), Authelia will be connected to the MariaDB server automatically
- or SQLite, used by default when none of the above database choices is enabled for your Ansible inventory host

- (optional, but recommended) [KeyDB](keydb.md)
- (optional, but recommended) [Valkey](valkey.md)
- for storing session information in a persistent manner
- if KeyDB is not enabled, session information is stored in memory and restarting Authelia destroys user sessions
- if Valkey is not enabled, session information is stored in memory and restarting Authelia destroys user sessions

- a [Traefik](traefik.md) reverse-proxy server
- for serving the Authelia portal website
@@ -87,11 +87,11 @@ authelia_config_access_control_rules:
  - domain: 'service1.example.com'
    policy: one_factor

# The configuration below connects Authelia to the KeyDB instance, for session storage purposes.
# You may wish to run a separate KeyDB instance for Authelia, because KeyDB is not multi-tenant.
# The configuration below connects Authelia to the Valkey instance, for session storage purposes.
# You may wish to run a separate Valkey instance for Authelia, because Valkey is not multi-tenant.
# Read more in docs/services/redis.md.
# If KeyDB is not available, session data will be stored in memory and will be lost on container restart.
authelia_config_session_redis_host: "{{ keydb_identifier if keydb_enabled else '' }}"
# If Valkey is not available, session data will be stored in memory and will be lost on container restart.
authelia_config_session_redis_host: "{{ valkey_identifier if valkey_enabled else '' }}"

########################################################################
# #
@@ -111,9 +111,9 @@ On the Authelia base URL, there's a portal website where you can log in and manage your authentication settings

### Session storage

As mentioned in the default configuration above (see `authelia_config_session_redis_host`), you may wish to run [KeyDB](keydb.md) for storing session data.
As mentioned in the default configuration above (see `authelia_config_session_redis_host`), you may wish to run [Valkey](valkey.md) for storing session data.

You may wish to run a separate KeyDB instance for Authelia, because KeyDB is not multi-tenant. See [our KeyDB documentation page](keydb.md) for additional details. When running a separate instance of KeyDB, you may need to connect Authelia to the KeyDB instance's container network via the `authelia_container_additional_networks_custom` variable.
You may wish to run a separate Valkey instance for Authelia, because Valkey is not multi-tenant. See [our Valkey documentation page](valkey.md) for additional details. When running a separate instance of Valkey, you may need to connect Authelia to the Valkey instance's container network via the `authelia_container_additional_networks_custom` variable.
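
For illustration only: assuming you provisioned a dedicated instance named `mash-authelia-valkey` on a supplementary host (the exact name depends on the `mash_playbook_service_identifier_prefix` you chose, as described in the running-multiple-instances document), the wiring could look roughly like this. The `authelia_systemd_required_services_list_custom` variable is assumed here by analogy with the NetBox and PeerTube roles:

```yaml
# Hypothetical example - the mash-authelia-valkey name depends on the prefix
# chosen for the supplementary host that runs the dedicated Valkey instance.

# Point Authelia's session storage at the dedicated instance
authelia_config_session_redis_host: mash-authelia-valkey

# Start Authelia after its dedicated Valkey service
# (variable assumed to follow the same convention as the NetBox/PeerTube roles)
authelia_systemd_required_services_list_custom:
  - mash-authelia-valkey.service

# Connect the Authelia container to the dedicated instance's container network
authelia_container_additional_networks_custom:
  - mash-authelia-valkey
```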


### Authentication storage providers