In certain cases you may want to have a DB sandbox and mess around with the DB, then roll it back. Normally this can be achieved by backing up the DB and restoring it from the created DB dump. With large databases (over 500MB) this operation can take a considerable amount of time (10+ minutes).
With Docker we can sandbox the DB by creating a reusable docker image from the DB container. Spinning up such an image takes far less time than re-importing the DB with MySQL.
For example, a 2.6GB DB dump file takes 14 minutes to import with MySQL. (Re)launching a docker container from an image with that same imported dump takes only 2 minutes. Your results may obviously vary.
This is an advanced use case, so make sure you have a backup of the database before proceeding. Follow the instructions below to get started.
In Docker terminology, a read-only layer is called an image. In a traditional setup you would use the stock mysql image as the base image for your DB container.
📄 docker-compose.yml
# DB node
db:
image: mysql:5.5
...
The image never changes. Thanks to the Union File System, when a process wants to write a file in an image, Docker creates a copy of that file in a new layer (the top-most writable layer).
The stock mysql image defines a data volume (/var/lib/mysql). This makes sure the DB data is permanently stored outside of the db container on the host and is not lost when the container is (re)launched.
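This volume is declared in the stock image's Dockerfile. For illustration, the relevant instruction looks like this:

```dockerfile
# Declares an anonymous data volume, so /var/lib/mysql always lives
# outside the image's layers and survives container re-creation.
VOLUME /var/lib/mysql
```

It is this declaration that the sandbox fork of the image removes, so that DB data lands in the container's writable layer instead.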
To make the sandbox DB mode possible we have to remove this permanent storage volume from the image (this is done by using a fork of the image), import the DB, and commit the container as a new image with the docker commit command.
The new image will include the base image plus all changes made in the container's writable layer, i.e. your DB snapshot. It is then used as the base image for the DB node going forward.
📄 docker-compose.yml
# DB node
db:
image: mysql_with_my_database:snapshot1
...
Now you can make any changes to the database you want. Each time the container is restarted, all changes will be lost (as it has no external persistent storage) and you will be back to your base image mysql_with_my_database:snapshot1.
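As an illustration, a session with a sandboxed db container might look like this (the database name and credentials are hypothetical):

```shell
# Hypothetical session: break something in the sandboxed DB...
mysql -h db -u root -e "DROP TABLE mydb.users;"

# ...re-create the db container from the snapshot image; all changes are discarded.
dsh reset

# The users table is back, exactly as it was in the snapshot.
mysql -h db -u root -e "SHOW TABLES IN mydb;"
```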
1. Create a DB dump.
2. Replace the db service base image in docker-compose.yml with blinkreaction/mysql-sandbox.
3. Reset containers:
dsh reset
4. Import the DB dump you created in step 1.
5. Stop and commit the db service container (this will turn the container into a reusable docker image):
docker stop $(docker-compose ps -q db) && docker commit $(docker-compose ps -q db) <tag>
Replace <tag> with any meaningful tag you'd like to use for the image, e.g. db_backup or dbdata/myproject:snapshot1.
6. Replace the db service base image in docker-compose.yml with the selected tag, e.g. image: dbdata/myproject:snapshot1.
7. Restart containers:
dsh up
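Put together, steps 1-7 can be sketched as a single shell session (the dump file name, database name, and image tag below are examples, not fixed values):

```shell
# 1. Take a dump of the current database (connection details are examples).
mysqldump -h db -u root mydb > /tmp/mydb.sql

# 2-3. Point the db service at blinkreaction/mysql-sandbox in
#      docker-compose.yml, then re-create the containers.
dsh reset

# 4. Import the dump into the volume-less sandbox container.
mysql -h db -u root mydb < /tmp/mydb.sql

# 5. Stop the db container and commit it as a reusable image.
docker stop $(docker-compose ps -q db)
docker commit $(docker-compose ps -q db) dbdata/myproject:snapshot1

# 6-7. Point the db service at dbdata/myproject:snapshot1 in
#      docker-compose.yml, then bring the containers back up.
dsh up
```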
You will now have a sandboxed DB container, which reverts to the DB snapshot you created in step 1 every time the db container is restarted.
If you have a large database, it is not recommended to commit a container that already runs on a snapshot base image (i.e. creating a snapshot on top of a snapshot). Every docker image holds all of its parent images plus the changes made on top of them. Thus, with every commit the size of the resulting image will grow by the full size of the DB, not just its delta.
For a small database, though, this is OK. But please keep in mind there is a limit on the depth of layers, which currently equals 120.
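You can check how many layers an image already has with docker history (the image tag below is an example):

```shell
# Count the layers in the snapshot image; each commit adds at least one.
docker history -q dbdata/myproject:snapshot1 | wc -l
```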
You will need a DB dump to revert to. Either use the one created before enabling the sandbox mode or take a fresh one while the db service container is still up.
1. Revert the changes done to the db service in docker-compose.yml.
2. Reset containers:
dsh reset
3. Import the DB dump.
Now the db service container is using a persistent data storage volume (/var/lib/mysql).