# Import MySQL database into the Docker container
The command performs two actions:

- Import the database from the `.sql` or `.sql.gz` file into a Docker container.
- Optionally upload the dump to AWS S3 (see the `docker:mysql:upload-to-aws` command and "Building MySQL images with DB: how it works").
The idea is that developers often fetch databases from live or other servers to refresh local data. After importing the DB, they can easily upload it to AWS S3 to share with other developers, use the DB image in CI/CD pipelines, and so on. Refreshing a database image becomes a natural part of the workflow even if you can't configure cron jobs for it. This is a common situation when a company hosts many projects for different clients on different servers, and developers must create DB dumps manually from a server management console. Uploading to AWS then becomes part of the import process and requires no additional actions. See "Building MySQL images with DB: how it works".
Example usage:

```shell
php bin/dockerizer docker:mysql:import-db <dump> -c <container>
```
Advanced usage with aliases:

```shell
# Export a global variable containing the path to your projects root directory
export DOCKERIZER_PROJECTS_ROOT_DIR=${HOME}/misc/apps/
# Alias to run Dockerizer from any directory
alias DOCKERIZER='php -d xdebug.start_with_request=trigger ${DOCKERIZER_PROJECTS_ROOT_DIR}dockerizer_for_php/bin/dockerizer'
# Run `EXPORTDB -e -a` to create a compressed dump of a MySQL database.
# You must be in the directory containing the `docker-compose.yaml` file.
alias EXPORTDB='DOCKERIZER docker:mysql:export-db -c $(DOCKERIZER composition:get-container-name mysql)'
```
Arguments and options:

- `<dump>` (first argument): Path to the dump file. Can be either a `.sql` or `.sql.gz` file.
- `-c` (`--container`, required): Container name or ID.
- `-t` (`--target-image`, optional): Docker image name including the registry domain (if needed) and excluding tags. If not specified, the image name is taken from the container label `com.default-value.docker.registry.target-image`.
- `-f` (`--force`, flag): Ignore free space requirements (x1.6 of the uncompressed dump size).
- `-e` (`--exec`, flag): Execute the import command instead of suggesting a manual run when possible.
- `--aws` (flag): Force upload to AWS without a prompt. Omitting this option in non-interactive mode (`-n`) means the upload is not processed.
The database dump must not contain invalid `DEFINER` clauses for triggers, procedures, functions, and views. If it does, you must remove them manually with the following command: `sed -i 's/DEFINER=[^*]*\*/\*/g' dump.sql`.
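To illustrate what that `sed` expression does, here is a run against a hypothetical dump line containing a `DEFINER` clause (the trigger name and user are made up for this sketch):

```shell
# A made-up dump line with a DEFINER clause; the sed expression from above
# strips the clause so the import does not fail on a user that is missing
# in the target database.
printf '%s\n' 'CREATE /*!50017 DEFINER=`admin`@`localhost`*/ TRIGGER t1' \
  | sed 's/DEFINER=[^*]*\*/\*/g'
# → CREATE /*!50017 */ TRIGGER t1
```

The pattern matches everything from `DEFINER=` up to the closing `*` of the version comment and replaces it with a bare `*`, leaving the comment syntax intact.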
The command performs the following steps:

- Check that the file MIME type is one of `application/sql`, `text/plain`, `application/gzip`.
- Check the MySQL container environment variables for username/password/database and try to connect to the database to ensure they are correct.
- Import the dump into the container.
- Upload the archive to AWS S3.
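The MIME-type gate from the first step can be sketched in plain shell. This is an assumed, minimal detection based on the gzip magic bytes (`1f 8b`), not the command's actual implementation:

```shell
# Minimal sketch of a MIME-type check: gzip files start with the magic
# bytes 1f 8b; anything else is treated as plain-text SQL here.
detect_mime() {
  magic=$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')
  if [ "$magic" = "1f8b" ]; then
    echo application/gzip
  else
    echo text/plain
  fi
}

printf 'CREATE TABLE t (id INT);\n' > dump.sql
gzip -c dump.sql > dump.sql.gz
detect_mime dump.sql      # → text/plain
detect_mime dump.sql.gz   # → application/gzip
```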
Here are some examples and the results you should get:
Example 1: `dump.sql` is 5GB, free space on the host machine is 20GB
Result: free disk space is > dump size * 2.6. The dump is copied inside the container. You can choose to continue the import or run it manually to see all possible errors and warnings. The respective commands for a manual import are printed in the console.
Example 2: `dump.sql` is 10GB, free space on the host machine is 20GB
Result: free disk space is > dump size * 1.7. Dump is archived and imported automatically.
Example 3: `dump.sql` is 15GB, free space on the host machine is 20GB
Result: free disk space is < dump size * 1.7. Command throws an error.
Example 4: `dump.sql.gz` is 1GB, free space on the host machine is 20GB
Result: Depends on the uncompressed dump size. The dump is uncompressed as in `Example 1` if there is enough free space; otherwise the command falls back to an archive import as in `Example 2`.
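The decision illustrated by the four examples can be sketched as a small shell function. The 2.6x and 1.7x thresholds are taken from the example results above; integer arithmetic is used to stay within POSIX sh:

```shell
# Sketch of the free-space decision, with thresholds from the examples
# (2.6x for copy-and-import, 1.7x for an archived import).
plan_import() { # usage: plan_import <dump_gb> <free_gb>
  dump_gb=$1; free_gb=$2
  if [ $(( free_gb * 10 )) -gt $(( dump_gb * 26 )) ]; then
    echo 'copy dump into the container, then import'
  elif [ $(( free_gb * 10 )) -gt $(( dump_gb * 17 )) ]; then
    echo 'import from the compressed archive'
  else
    echo 'error: not enough free disk space'
  fi
}

plan_import 5 20   # Example 1 → copy dump into the container, then import
plan_import 10 20  # Example 2 → import from the compressed archive
plan_import 15 20  # Example 3 → error: not enough free disk space
```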
Required environment variables inside the Docker container: `MYSQL_USER`/`MYSQL_PASSWORD`/`MYSQL_DATABASE`, or the respective MariaDB environment variables.
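A sketch of reading those credentials from a container's environment list. The sample `env_list` stands in for the output of something like `docker exec <container> env` so the snippet is self-contained; the values are made up:

```shell
# Extract the DB credentials the command validates before importing.
# env_list is a fixed sample standing in for the container's real env output.
env_list='MYSQL_USER=app
MYSQL_PASSWORD=secret
MYSQL_DATABASE=app_db'

user=$(printf '%s\n' "$env_list" | sed -n 's/^MYSQL_USER=//p')
database=$(printf '%s\n' "$env_list" | sed -n 's/^MYSQL_DATABASE=//p')
echo "connect as $user to $database"
# → connect as app to app_db
```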