mong/shinyproxy

Proxy our shiny apps

Introduction

Re-using the name of the underlying Spring Boot web application, shinyproxy deploys the shiny web applications developed and maintained by SKDE. Both shinyproxy and the web applications it proxies run as docker containers and are replicated across a given number of nodes to reduce potential downtime.

mongr.no shinyproxy setup

shinyproxy is part of the infrastructure at mongr.no and serves the shiny application imongr.

Config

Configuration of shinyproxy is defined in the application.yml file. Re-configuration will most likely occur as a result of new shiny applications being added (or old ones removed). For details, please see the ShinyProxy docs.
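
Adding an application typically amounts to a new entry under proxy.specs in application.yml. The snippet below is a minimal sketch following the upstream ShinyProxy format; the app id, display name and image are assumed values for illustration, not this repo's actual configuration:

proxy:
  specs:
    - id: myapp                            # assumed app id
      display-name: My shiny app           # assumed display name
      container-image: hnskde/myapp:latest # assumed image name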

Build

Our shinyproxy will itself run as a docker container. To build the corresponding image, move into the directory [project] holding the Dockerfile and run:

docker build -t hnskde/shinyproxy-[project]:latest .

Then, push this image to the registry:

docker push hnskde/shinyproxy-[project]:latest
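
For reference, the image is typically a thin layer on top of the official openanalytics/shinyproxy image that bakes in the configuration. A minimal sketch of such a Dockerfile, assuming the upstream image layout (the actual Dockerfile in this repo may differ):

FROM openanalytics/shinyproxy:latest
# Bake the shinyproxy configuration into the image (path used by the upstream image)
COPY application.yml /opt/shinyproxy/application.yml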

Install

All steps are performed from the command line at each server instance (node) that will be running shinyproxy.

First time

Make sure that the current content of this repo is available by using git:

git clone https://github.com/mong/shinyproxy.git

If the server that will host shinyproxy has just been created (vanilla state), move into the newly created shinyproxy project directory and run the following script:

./install.sh
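
What the script does is defined by install.sh in this repo. Purely as an assumed sketch, a bootstrap of this kind usually installs docker and enables its service:

#!/bin/bash
# Assumed sketch of a vanilla-node bootstrap; see the actual install.sh for what is really done
# Install docker using the official convenience script
curl -fsSL https://get.docker.com | sh
# Start docker now and at every boot
sudo systemctl enable --now docker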

If AWS CloudWatch will be used, credentials need to be defined and made available to docker. Add the following to /etc/systemd/system/docker.service.d/override.conf:

[Service]
Environment="AWS_ACCESS_KEY_ID=[some_key_id]" "AWS_SECRET_ACCESS_KEY=[some_secret_access_key]"

Corresponding values are found using the AWS Identity and Access Management (IAM) service. Save the file and reload the daemon

sudo systemctl daemon-reload

and restart the docker service

sudo systemctl restart docker
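
To check that docker actually picked up the credentials after the restart, inspect the environment of the service (standard systemd, not specific to this setup):

sudo systemctl show --property=Environment docker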

Then, download the latest image from registry:

docker pull hnskde/shinyproxy-[project]

Repeat the above instructions at all nodes.

Update

Please note that an update of shinyproxy will render all shiny applications behind it inaccessible. Therefore, make sure to perform the following steps one node at a time. That way, while one node is down for an update, the other nodes will still serve users of the shiny applications.

First, make sure to download the latest update of the shinyproxy image from the registry:

docker pull hnskde/shinyproxy-[project]

If the update also includes changes to docker-compose.yml, get the latest version using git:

git pull origin main

Then, take down the shinyproxy docker container:

docker compose down

and clean up old images and containers:

docker system prune

Finally, bring up the updated shinyproxy container:

docker compose up -d

Repeat the above steps on all nodes.

Start and stop service

To enable shinyproxy, use docker compose to start the relevant services in detached mode. Move into the shinyproxy directory and run:

docker compose up -d

To stop it, do:

docker compose down

To bring the services down and up again in one go, do:

docker compose restart

For other options please consult the docker compose docs.

Note on shiny applications

Install

shinyproxy does not pull images from remote registries. To make images available locally at each node, they have to be pulled the first time they are used, e.g.

docker pull hnskde/qmongr

Update

Updating the shiny applications is a somewhat different process and part of a continuous integration and delivery (ci/cd) scheme. At each node running shinyproxy, a cron job is defined to trigger the update routine every 5 minutes on weekdays:

*/5 * * * 1-5 $HOME/shinyproxy/update_images.sh >/dev/null 2>&1

If updates are found, the corresponding images are downloaded. A new version of a shiny application will be available once shinyproxy restarts the corresponding container from the updated image.
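
The real logic lives in update_images.sh in this repo. Purely as an assumed sketch of the mechanism described above, such a routine boils down to pulling each application image and letting the next container start pick up the new version:

#!/bin/bash
# Assumed sketch of the update routine; see update_images.sh for the real logic
# List of application images to keep current (assumed, based on the pull example above)
images="hnskde/qmongr"

for image in $images; do
  # docker pull is a no-op when the local image is already up to date
  docker pull "$image"
done

# Clean up old, now untagged images afterwards
docker image prune -f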