A node issue could cause you to be locked out of the cluster #8
Comments
If you do it the way I described in the README, then even if the first container were recreated, it would behave just like any other container started later, because it detects the other containers that are already running and does not go into bootstrap mode.
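As a rough illustration of that detection (a minimal sketch only; the service name, the `SERVICE_NAME` variable, and the use of Swarm's `tasks.<service>` DNS entry are assumptions, not necessarily this image's actual entrypoint logic):

```sh
#!/bin/sh
# Hypothetical entrypoint fragment: only bootstrap a new Galera cluster
# if no other replicas of the service are reachable yet.
SERVICE_NAME="${SERVICE_NAME:-mariadb-cluster}"

# Swarm publishes one A record per running task under tasks.<service>;
# drop our own address so we only see the peers.
PEERS=$(getent hosts "tasks.${SERVICE_NAME}" | awk '{print $1}' | grep -v "$(hostname -i)")

if [ -z "$PEERS" ]; then
    echo "No running peers found, bootstrapping a new cluster"
    exec mysqld --wsrep-new-cluster
else
    echo "Found peers: $PEERS, joining existing cluster"
    exec mysqld --wsrep_cluster_address="gcomm://$(echo "$PEERS" | paste -sd, -)"
fi
```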
I've noticed that the swarm networking is often not stable and can cause several issues. I'm now deploying many services and, sometimes, the swarm becomes unstable; the other services keep working, but this MariaDB cluster becomes unreachable from the other apps. My phpMyAdmin instance, for example, now shows this error: mysqli_real_connect(): (HY000/1130): Host '10.0.10.5' is not allowed to connect to this MariaDB server. Is it possible that there is some permission error for the root user, or is it allowed to log in from everywhere?
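One way to check whether this is a grant problem rather than a networking one is to look at the `mysql.user` table from inside a node that still accepts local connections (the container name here is a placeholder; adjust it to your actual task):

```sh
# Hypothetical container name; use `docker ps` to find the real one.
docker exec -it mariadb-cluster.1.xxxx mysql -uroot -p \
  -e "SELECT User, Host FROM mysql.user WHERE User='root';"

# A row with Host = '%' means root may connect from any address;
# if only Host = 'localhost' is present, that would explain the
# HY000/1130 error coming from 10.0.10.5.
```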
I can confirm that, working on swarm, deploying/removing services/stacks can cause errors on this service (but not on others!). I think the containers are changing their IPs on the overlay network and something stops working. This weekend I will try to debug the issue; probably the cluster doesn't update the nodes' IPs when they change. Is it possible to use hostnames instead of IPs? It's not elegant and doesn't allow mode: global, but it could be useful.

UPDATE: I'm trying to reproduce the issue. With a fresh installation, the cluster seems solid as a rock. After creating some databases, populating them with a bit of data from a few apps (I'm trying to migrate from my Docker VMs to a swarm), and then restarting the Docker service (today I upgraded Docker, so the system restarted the Docker service), the cluster seems able to reconnect the nodes, but the instances stop connecting to MariaDB and the root user is locked out from everywhere (if it still exists at all).

STEPS TO REPRODUCE: docker-compose.yml
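(The compose file itself isn't included above. Purely as an illustration, a Galera-style stack on swarm is usually shaped something like this; the image, service name, credentials, and environment are assumptions, not the original file.)

```yaml
version: "3.3"
services:
  mariadb-cluster:
    image: mariadb:10.3              # placeholder image, not necessarily the one used here
    environment:
      MYSQL_ROOT_PASSWORD: example   # hypothetical credentials
    networks:
      - backend
    deploy:
      replicas: 3                    # or mode: global, as discussed above
networks:
  backend:
    driver: overlay
```

After the stack has been up for a while and some data has been loaded, restarting Docker on one node (systemctl restart docker) is what triggers the state described next.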
Now the cluster is locked out: if you try to access the restarted container, you can't! If you try to access (via CLI) the other containers in the cluster, all except the one that was restarted, they work and I can show the tables. This is the log (docker service logs ...). NODE 0 is the one on which I ran systemctl restart docker.
These logs refer to a test in which I tried to bind the address to 0.0.0.0 in my.cnf, so that I could start a CLI inside the container after the boot process had finished, but I couldn't log in or create a new password for the root user. I don't understand why nodes 1 and 2 log 'Initializing database' etc., as if the MySQL service were restarting, while the node that was actually restarted is node 0.
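When the cluster gets into a state like this, it can help to ask a node that still accepts local connections what Galera itself thinks the cluster looks like (container name and credentials below are placeholders):

```sh
# Hypothetical container reference; check how many nodes the surviving
# replicas believe are in the cluster and what state they are in.
docker exec -it <container-id> mysql -uroot -p \
  -e "SHOW STATUS LIKE 'wsrep_cluster_size'; SHOW STATUS LIKE 'wsrep_local_state_comment';"
```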
Hi,
after some time in which the cluster ran great (a bit slow, but it works), today I found it down (in the logs, the nodes seem to fail to connect to each other), so I tried to investigate the problem, but I can't spend much time debugging while many services are down. All I know is that yesterday I restarted one node. My question is: is the cluster affected by some known issue? What happens if the container that bootstraps the cluster restarts?
Sorry for my English :D