Docker deployment #503

Open · 13 tasks
Kobzol opened this issue Sep 19, 2024 · 3 comments
Labels: help wanted (Extra attention is needed)

Comments

@Kobzol
Collaborator

Kobzol commented Sep 19, 2024

We would like to have the option to deploy Kelvin fully inside Docker, ideally with a single command, if possible. We want to have the following services running inside Docker, networked together:

  • The Django backend
  • nginx, which will serve the Django backend
  • Postgres (DB)
  • Redis (cache)
  • A set of Django RQ workers

This corresponds to the architecture described in the docs.

Ideally, it should be possible to deploy everything with a single docker-compose.yml file. All configuration (directory/file paths, ports etc.) should ideally be configurable in the docker-compose file, through environment variables loaded from an .env file. You can find an example of that in the existing docker-compose.yml file.
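
To make the intent concrete, here is a minimal sketch of how values from a `.env` file could be interpolated into the compose file (the variable names, ports and paths below are placeholders for illustration, not the actual Kelvin configuration):

```yaml
# .env (picked up automatically by `docker compose` from the project directory):
#   KELVIN_HTTP_PORT=8000
#   POSTGRES_PASSWORD=changeme
#   KELVIN_DATA_DIR=/srv/kelvin/data

# docker-compose.yml (fragment)
services:
  web:
    build: .
    ports:
      - "${KELVIN_HTTP_PORT:-8000}:8000"
    environment:
      DATABASE_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - "${KELVIN_DATA_DIR:-./data}:/app/data"
```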

Here is a broad TODO list of things (in almost arbitrary order) that we need to do in order to make this possible (two illustrative sketches follow after the list):

  • Add nginx to docker-compose.yml
    • Make it possible to map a host directory that contains nginx config
    • Make it possible to map a host directory that contains certificates
  • Make it possible to map a host directory containing persistent data for the Redis instance
  • Make sure that all the services in the docker compose file can talk to each other through the network
  • Build the JS frontend in the Kelvin Dockerfile, to make it available in the Docker image
    • Use a multi-stage build to only include the frontend.js, frontend.css files and the dolos directory in the final Docker image
  • Make it possible to map a host directory containing local_settings.py, which will be used to override configuration for the Django backend running inside of Docker
  • Set up nginx so that it serves the Kelvin Django backend
  • Configure a startup script that will run python3 manage.py migrate every time the whole Docker deployment starts
  • Make it possible to start RQ workers inside Docker
    • Make it possible to run each worker in multiple instances
    • Workers use Docker internally, so configure Docker-in-Docker. An example can be found here, but it needs to be tested whether it works and how it interacts with Docker permissions.
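
As a first sketch, the final docker-compose.yml could roughly look like the fragment below. Service names, image tags, paths, ports and commands are assumptions for illustration only and will need to be adapted to the actual Kelvin layout:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - "${POSTGRES_DATA_DIR:-./data/postgres}:/var/lib/postgresql/data"

  redis:
    image: redis:7
    volumes:
      # Persistent Redis data mapped from the host.
      - "${REDIS_DATA_DIR:-./data/redis}:/data"

  web:
    build: .
    # Run migrations on every startup, then start the app server
    # (the gunicorn module path is a placeholder).
    command: sh -c "python3 manage.py migrate && gunicorn kelvin.wsgi"
    volumes:
      # Override Django settings from the host.
      - "${LOCAL_SETTINGS:-./local_settings.py}:/app/local_settings.py:ro"
    depends_on: [db, redis]

  worker:
    build: .
    # django-rq worker; the queue name is a placeholder.
    command: python3 manage.py rqworker default
    # Run multiple instances, e.g. `docker compose up --scale worker=4`,
    # or set a fixed number here:
    deploy:
      replicas: 2
    depends_on: [redis]

  nginx:
    image: nginx:stable
    ports:
      - "443:443"
    volumes:
      - "${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d:ro"
      - "${CERT_DIR:-./certs}:/etc/nginx/certs:ro"
    depends_on: [web]

# All services share the default compose network, so they can reach each
# other by service name (e.g. nginx proxies to http://web:8000).
```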

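For the frontend part, a multi-stage Dockerfile along these lines could keep the final image small; the stage layout, directories and build command below are guesses based on the issue text, not the actual repository structure:

```dockerfile
# Stage 1: build the JS frontend.
FROM node:20 AS frontend
WORKDIR /build
COPY frontend/ .
RUN npm ci && npm run build   # assumed build command

# Stage 2: the Kelvin backend image.
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# Copy only the generated frontend.js, frontend.css and the dolos
# directory from the build stage into the final image.
COPY --from=frontend /build/frontend.js /build/frontend.css /app/static/
COPY --from=frontend /build/dolos /app/static/dolos
```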
If there is a better way to do this, other than docker-compose, we can also try it. But please no Kubernetes :)

@Kobzol Kobzol added the help wanted label Sep 19, 2024
@JersyJ
Contributor

JersyJ commented Sep 19, 2024

As mentioned on the VSB Discord, I am taking this task (just noting it here so that no one works on this in parallel).

@JersyJ
Contributor

JersyJ commented Sep 21, 2024

* [ ]  Make it possible to start RQ workers inside Docker
  
  * [ ]  Make it possible to run each worker in multiple instances
  * [ ]  Workers use Docker internally, configure Docker-in-Docker. An example can be found [here](https://github.com/mrlvsb/kelvin/blob/1f96ae303fc3c61e76c56e1c076bac0c8940393c/docker-compose.yml), but it needs to be tested if it works, and how it interacts with Docker permissions.

I am thinking about these possible solutions:

1. DinD with Sysbox Runtime:

A classic Docker-in-Docker (DinD) approach with a secure implementation.

Pros:

  • Full isolation: Containers can run their own Docker daemon, offering strong sandboxing.

Cons:

  • Docker inside the container has its own local image store. Whenever the containers from the current setup need to run, we would need to:
    • build all the images within the container, or
    • pass them in as a tar file, or
    • run a local Docker registry (in docker compose), or
    • publish the images on GHCR (GitHub Container Registry).

2. DooD (Docker out of Docker):

Here, we run the Docker CLI inside the container, but the daemon remains on the host. We would be creating sibling containers, unlike the child containers in solution 1.

Pros:

  • Simpler: containers only need access to the host's Docker socket, so they avoid the overhead of running another Docker daemon. Images and caching can be shared with the host directly (a minimal compose sketch is shown below).

Cons:

  • Mount paths: if the container running the Docker CLI creates a container with a bind mount, the mount path must refer to a path on the host (otherwise the Docker daemon on the host won't be able to perform the mount correctly).
  • Potential security risk: if the container has access to the host's Docker socket, it can potentially gain root access to the host. However, this is already the case in the current setup.
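
For reference, the DooD variant boils down to mounting the host's Docker socket into the worker container, roughly like this (the service name, queue name and shared directory are assumptions):

```yaml
services:
  worker:
    build: .
    command: python3 manage.py rqworker default   # placeholder queue name
    volumes:
      # Talk to the host's Docker daemon -> sibling containers.
      - /var/run/docker.sock:/var/run/docker.sock
      # Because the daemon runs on the host, bind mounts created by the
      # worker must use host paths; mounting the shared directory under
      # the same path on the host and in the container avoids translation.
      - "${SUBMITS_DIR:-/srv/kelvin/submits}:${SUBMITS_DIR:-/srv/kelvin/submits}"
```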

@Kobzol any opinion?

@Kobzol
Collaborator Author

Kobzol commented Oct 4, 2024

Sorry, I haven't had time to look into this yet. First we need to get the Docker change merged, then somehow deploy the Docker version on a new server, and then we can start looking into DinD.

At a glance, I would probably choose DooD, to avoid complexity with managing the local images or rebuilding them all the time.
