Persistence #2
This is not really needed. As long as one container exists, the data is held safely... Just use an xtrabackup job to save the data externally. Or just install a swarm with 3 instances on AWS, 3 instances on Azure, and 3 instances in-house and let them build a Galera cluster of nine. =) They cannot all die at once.
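A minimal sketch of such a one-shot backup job, assuming the image ships xtrabackup; the container name and password variable are placeholders, not part of this image:

```sh
# Hot-backup job (a sketch): exec into a running Galera node and stream
# the backup out to the host. galera_node1 and the credentials are
# assumptions; adapt to your setup.
docker exec galera_node1 sh -c \
  'xtrabackup --backup --stream=xbstream --user=root --password="$MYSQL_ROOT_PASSWORD"' \
  > /backups/galera-$(date +%F).xbstream
```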
In my scenario I can't use external resources, and I have to prevent everything from stopping if there is a fault on all nodes. For example, if all nodes die and I have a backup of the data, the infrastructure stays down until I get one cluster node back up, wait for it to come up, then scale it out and restore the data. For this reason, in other services, I've used GlusterFS to sync the volume data across every local node of the cluster. Can I do something similar? If it is impossible to use external folders to save persistent data, it might be useful if the boot process allowed restoring the DB from a backup (if one exists). This might not solve the problem that the cluster doesn't scale automatically at boot, but it could save my life.
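For reference, a restore-at-boot check like the one being asked for could look roughly like this entrypoint fragment; the backup path and format are assumptions, and the image itself does not do this:

```sh
# Restore the datadir from a backup archive when the datadir is empty
# and a backup exists. /backup/latest.xbstream is a hypothetical mount.
if [ -z "$(ls -A /var/lib/mysql 2>/dev/null)" ] && [ -f /backup/latest.xbstream ]; then
    xbstream -x -C /var/lib/mysql < /backup/latest.xbstream
    xtrabackup --prepare --target-dir=/var/lib/mysql
    chown -R mysql:mysql /var/lib/mysql
fi
```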
After some days in a swarm environment, I can say that the choice not to use persistent volumes is right. Trying to synchronize persistent volumes with Gluster causes slow performance, and having many replicas is enough.
The pull request is very clever in how it organizes by IP. However, it did give me issues when booting the cluster above 1 replica, along with quite a few performance problems; but it could be the way I'm using Docker. Since the only problem with persistence for this image is that each container has different data in its respective data directory, you could give each container its own named volume. You can then use any Docker volume plugin to move and back up your persistent data.
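A rough sketch of that per-container-volume approach; the image, volume, and container names are placeholders, and the local driver stands in for whatever plugin you pick:

```sh
# One named volume per container; swap --driver for your volume plugin.
docker volume create --driver local galera-node1-data
docker run -d --name galera_node1 \
  -v galera-node1-data:/var/lib/mysql \
  some/galera-image
```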
If you just want to have a place where the data is stored for backup reasons, this is a way to go.
Very fair point. What about
EDIT: It looks like it doesn't recover well from a
EDIT2: Sometimes it does; other times it requires manual intervention to remove files like
I am not quite sure this isn't just the default behavior. I mean, if you hard-killed a classic cluster node, it would result in the same error/problem when starting up.
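For comparison, classic (non-Docker) Galera recovery after a whole-cluster hard kill follows standard steps like these; nothing here is specific to this image:

```sh
# Find the most advanced node: this prints the recovered seqno.
mysqld --wsrep-recover
# On the node with the highest seqno, mark it safe to bootstrap...
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
# ...and start it as a new cluster (MariaDB's wrapper script).
galera_new_cluster
```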
I haven't run into other bugs when reviving the cluster nodes, but I have seen the
I believe so. I've been killing Nodes 2 and 4 quite a few times with them having addresses
The logs from
However, there was one irregular case where a wsrep entry for address
Killing Node 1 (
Killing Node 5 solved this issue. Like above, killing either or both nodes didn't change their addresses.
I got the stuff with the configs wrong in my last post.
I know it was a while ago, but how would I execute an xtrabackup job? I understand the whole point is that the database folder is not exposed to the host, so it looks like running xtrabackup in another container is not an option.
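Two workarounds suggest themselves. `docker exec` runs inside the existing container, so xtrabackup can still reach the datadir there (see the sketch near the top of this thread). Alternatively, a logical backup needs only a TCP connection, so it can run from a throwaway container; a sketch, with the network, host, and image names as assumptions:

```sh
# mysqldump needs a network connection, not the datadir, so any container
# on the same network can take the backup. All names are placeholders.
docker run --rm --network galera_net -e PW="$MYSQL_ROOT_PASSWORD" mysql:5.7 sh -c \
  'exec mysqldump -h galera_node1 -uroot -p"$PW" --single-transaction --all-databases' \
  > all-databases.sql
```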
I realize the description in the README states that this is supposed to work with no persistent data volumes, but I really like what you've done with this - I just wanted the extra peace of mind of having data persistence. I'd like to see it added as an option.
I made a version that, on detecting a volume at /data, keeps the MySQL datadir in that location, except each container gets its own subfolder. That way you can have multiple containers on each host, or all the containers can safely share a network (NFS) location. There is also a cleanup function that runs as containers come and go. I'll put in a pull request.
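A minimal sketch of what that entrypoint logic might look like; the hostname-based subfolder scheme and paths are assumptions about the described version, not its actual code:

```sh
# If a volume is mounted at /data, keep this container's datadir in its
# own subfolder so several containers can share one (e.g. NFS) volume.
if mountpoint -q /data; then
    DATADIR="/data/$(hostname)"   # one subfolder per container
    mkdir -p "$DATADIR"
    chown -R mysql:mysql "$DATADIR"
else
    DATADIR=/var/lib/mysql        # image default, no persistence
fi
exec mysqld --datadir="$DATADIR" "$@"
```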