
Maksym Zaporozhets edited this page Jun 19, 2023 · 3 revisions

GitLab pipeline to build DB images

How it works

The command docker:mysql:reconstruct-db runs inside a Docker container, which is removed right after execution. This command:

  • Downloads the dump and metadata files
  • Runs a database Docker container with the original my.cnf plus the following configuration: datadir=/var/lib/mysql_datadir/
  • Imports the database with the docker:mysql:import-db -f command
  • Tags the image with the :latest and :<Y-m-d-H-i-s> tags
  • Checks connection to the database and ensures there are at least some tables in the DB
  • Pushes the image to the registry
  • Deletes the image from runner to save disk space and keep the runner clean
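
The `:<Y-m-d-H-i-s>` tag above uses PHP date-format notation. As a rough shell equivalent (the image and registry names here are placeholders, not values from the pipeline):

```shell
#!/bin/sh
# Generate a timestamp tag matching the Y-m-d-H-i-s pattern, e.g. 2023-06-19-14-05-33
TAG="$(date -u +%Y-%m-%d-%H-%M-%S)"
echo "$TAG"
# The freshly built image would then be tagged twice, e.g.:
# docker tag <image-id> registry.example.com/db:latest
# docker tag <image-id> "registry.example.com/db:$TAG"
```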

Changing the MySQL datadir is mandatory: the stock MySQL images declare /var/lib/mysql as a volume, and volume contents are not captured when a container is committed to an image. Pointing datadir to a non-volume path keeps the data inside the container filesystem.
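
A minimal sketch of the resulting configuration (the exact file layout depends on the base image):

```ini
# my.cnf - the image's original configuration plus the overridden data directory.
# Data written here lands in the image layers, not in an anonymous volume.
[mysqld]
datadir=/var/lib/mysql_datadir/
```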

Later, we plan to ship Dockerizer as a Docker image so that a pre-built container is available.

The pipeline requires the following environment variables to run:

  • AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY - credentials to download files from any S3 bucket
  • DOCKERIZER_AWS_S3_REGION / DOCKERIZER_AWS_S3_BUCKET / DOCKERIZER_AWS_S3_OBJECT_KEY - specify what to download. These variables are passed by the AWS Lambda function
  • DOCKERIZER_DOCKER_REGISTRY_USER / DOCKERIZER_DOCKER_REGISTRY_PASSWORD - registry credentials. The registry URL is extracted from the target_image in the metadata file.
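
The registry-URL extraction can be illustrated with plain shell parameter expansion; the target_image value below is a made-up example, not a real metadata entry:

```shell
#!/bin/sh
# Hypothetical target_image value as it might appear in the metadata file
TARGET_IMAGE="registry.gitlab.com/example-group/db-images/mysql-db:latest"
# The registry host is everything before the first slash
REGISTRY="${TARGET_IMAGE%%/*}"
echo "$REGISTRY"
```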

IMPORTANT! The build time is not the time when the DB dump was created! To know the exact dump creation time, we need to add it to the dump file name. For now, we can't rely on this.
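
If dump files ever carry the creation time in their names, recovering it would be straightforward; the naming scheme below is purely hypothetical:

```shell
#!/bin/sh
# Hypothetical dump name carrying the creation timestamp
DUMP="db-dump-2023-06-19-14-05-33.sql.gz"
# Strip the prefix and the extension to recover the timestamp
CREATED="${DUMP#db-dump-}"
CREATED="${CREATED%.sql.gz}"
echo "$CREATED"
```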

Setting up a pipeline

  1. Create a GitLab repository. Ensure this repository can use runners for Docker containers.

  2. Add the following two masked CI/CD variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

As a simple solution, configure an IAM user and role with full access to all buckets, so that docker:mysql:reconstruct-db can download any dump when the pipeline runs.
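
A tighter alternative to full access is a policy scoped to read-only S3 actions. This is only a sketch; the bucket ARN is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-dump-bucket",
        "arn:aws:s3:::your-dump-bucket/*"
      ]
    }
  ]
}
```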

  3. Below are the files that you'll need to build a database.

Pipeline code in .gitlab-ci.yml:

image: docker:20.10.22

variables:
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker info

docker_build:
  stage: build
  tags:
    - docker
  rules:
    - if: $DOCKERIZER_AWS_S3_REGION && $DOCKERIZER_AWS_S3_BUCKET && $DOCKERIZER_AWS_S3_OBJECT_KEY
      when: always
      allow_failure: false
  # @TODO: Pack Dockerizer into a Docker image so it can run as a standalone, ready-to-use app
  # Implement this in a way that does not require PHP to be installed locally to run Dockerizer
  before_script:
    - docker image ls -a
    - docker container ls -a
  script:
    # We run Docker in the host OS and must have the same directories as inside the guest container.
    # In this case, we can use mounts. Otherwise, the host OS Docker will not have the files to mount, or the paths will be wrong
    - >
      docker run --rm
      --name dockerizer-app
      -v /var/run/docker.sock:/var/run/docker.sock
      -v /apps/dockerizer_for_php/var/tmp/:/apps/dockerizer_for_php/var/tmp/
      -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
      -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
      -e DOCKERIZER_AWS_S3_REGION=$DOCKERIZER_AWS_S3_REGION
      -e DOCKERIZER_AWS_S3_BUCKET=$DOCKERIZER_AWS_S3_BUCKET
      -e DOCKERIZER_AWS_S3_OBJECT_KEY=$DOCKERIZER_AWS_S3_OBJECT_KEY
      -e DOCKERIZER_DOCKER_REGISTRY_USER=$DOCKERIZER_DOCKER_REGISTRY_USER
      -e DOCKERIZER_DOCKER_REGISTRY_PASSWORD=$DOCKERIZER_DOCKER_REGISTRY_PASSWORD
      $(docker build -q .) php bin/dockerizer docker:mysql:reconstruct-db
  after_script:
    - docker logout # The token is only valid for the duration of the job. But we're just following best practices
    - docker image prune -f --filter label=whoami=dockerizer_image
    # Remove the parent images. Redirect stderr to stdout in case there was an error downloading the image
    - docker image rm php:8.1.1-cli 2>&1 || true
    - docker image rm docker:20.10.22-dind 2>&1 || true
    - docker container ls -a
    - docker image ls -a
    - rm -rf /apps/ 2>&1 || true

Dockerfile for Dockerizer:

FROM php:8.1.1-cli
LABEL whoami=dockerizer_image
RUN apt update
RUN apt install -y docker.io git libzip-dev zip unzip --no-install-recommends
RUN docker-php-ext-install pcntl pdo_mysql zip
RUN curl -k -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /apps/dockerizer_for_php
RUN git clone https://github.com/DefaultValue/dockerizer_for_php.git .
RUN composer install
RUN echo 'DOCKERIZER_PROJECTS_ROOT_DIR=/apps/' > .env.local

The simplest implementation is to place these two files in a single GitLab repository and configure the environment variables there.
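
Under that setup, the repository needs nothing beyond:

```
.gitlab-ci.yml   # the pipeline definition above
Dockerfile       # the Dockerizer image definition above
```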

Pushing to registry

By default, the pipeline can't push Docker images to any registry other than its own project's. Right now, CI_JOB_TOKEN (see the GitLab CI/CD job token documentation) doesn't help to achieve this. Thus:

  • Create a special Database Image Builder user in GitLab.
  • Generate a personal access token with registry read/write access for this user.
  • Use the username and token for the DOCKERIZER_DOCKER_REGISTRY_USER and DOCKERIZER_DOCKER_REGISTRY_PASSWORD masked, protected environment variables.
  • Add this user to the project with the Developer access level.
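
Note that GitLab only masks variables that meet its masking requirements (roughly: a single line of at least 8 characters drawn from the Base64 alphabet plus a few extra characters). A quick local sanity check for a candidate token value, under that approximation:

```shell
#!/bin/sh
# Approximate check for whether a value is maskable as a GitLab CI/CD variable:
# single line, >= 8 chars, Base64 alphabet plus a few extra allowed characters.
is_maskable() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9+/=@:.~_-]{8,}$'
}
is_maskable "glpat-exampletoken1234" && echo "maskable" || echo "not maskable"
```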