
Stopping containers during backup that are part of a stack replicates containers #17

Open · Cozmo25 opened this issue Jul 22, 2020 · 6 comments

Cozmo25 commented Jul 22, 2020

What's the recommended way to prevent containers that are part of a service, and are stopped during backup, from restarting afterwards and effectively scaling up those services?

e.g. the PGADMIN service is stopped during backup, the backup takes place, and after the backup is complete I have 2 instances of the PGADMIN service running when I only require 1.

@prologic

Also curious about this, as I'm investigating whether this tool will also help solve my backup needs with Docker local named volumes in Swarm clusters...

Cozmo25 commented Aug 20, 2020

@prologic I was able to change the restart_policy setting to “on-failure”, which resolved this problem: https://docs.docker.com/compose/compose-file/#restart
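A minimal sketch of that change, assuming the service is deployed as part of a Swarm stack and therefore uses the deploy.restart_policy form rather than the top-level restart key; the service name and image below are placeholders:

```yml
version: "3.7"

services:
  pgadmin:                      # placeholder service name
    image: dpage/pgadmin4       # placeholder image
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure   # only reschedule failed tasks, not ones stopped deliberately for backup
```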

I did encounter other problems with my service names not being re-registered with my Traefik proxy after the restart, but that's another issue.

@prologic

That would be a bit of a blocker for me as I also use Traefik as my ingress. Hmmm 🤔

@jareware (Owner)

Have to say I haven't thought about the interactions with orchestrators at all.

So if you figure out elegant solutions, feel free to post them here, and I'll try to update the README accordingly.

@OMGTheCloud

Old thread, but still a relevant problem: all of my containers are deployed with docker stack deploy. I've tried a couple of things. I have the container label set to docker-volume-backup.stop-during-backup=true, and the corresponding /var/run/docker.sock mounted into the backup container:

When I have restart_condition set to on-failure, the backup successfully stops the containers as expected, but they go into a Complete state. Watching the output of /backup.sh, I see it successfully archive the volume data, and then it says:
[info] Starting containers back up (with the container IDs following), but it does not actually start the containers; they stay in the Complete state. Bummer.

I tried setting restart_condition to any, and that's not great either: when the backup stops the containers, they are immediately re-deployed, which means they're in there touching the volume data before the backup job is done.

One workaround I found is to change restart_condition to always and set delay: 60s. This (in my case) is long enough for the backup job to complete: the Docker Swarm orchestrator only spins up a replacement container long after the job has finished (though it could still be uploading at that point, but that doesn't matter).
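A minimal sketch of that workaround, assuming everything is deployed in one Swarm stack file. The service names, images, mount paths and the 60s delay are illustrative; note also that the compose v3 deploy.restart_policy reference documents the condition values as none, on-failure and any, so the sketch uses any for the "restart unconditionally, but delayed" behaviour described above:

```yml
version: "3.7"

services:
  app:                                      # placeholder application service
    image: dpage/pgadmin4                   # placeholder image
    labels:
      - docker-volume-backup.stop-during-backup=true   # tells the backup job to stop this container
    volumes:
      - app-data:/var/lib/pgadmin
    deploy:
      replicas: 1
      restart_policy:
        condition: any                      # Swarm always reschedules the stopped task...
        delay: 60s                          # ...but only after the backup has (hopefully) finished

  backup:
    image: futurice/docker-volume-backup    # whichever docker-volume-backup image/tag you deploy
    volumes:
      - app-data:/backup/app-data:ro        # data to archive, mounted read-only under /backup
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets backup.sh stop/start labelled containers

volumes:
  app-data:
```

The delay is a race rather than a guarantee: if the archive step ever takes longer than 60s, the replacement task will start touching the volume again before the backup is done.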

Has anyone figured out how to have the backup container successfully manage the startup of the stopped container instance, when using docker stack deploy?

@jareware (Owner)

Yeah, using a fixed delay isn't great, but at least it seems to work.

Can't say I have better ideas, sorry.
