Role does not fail when systemd container does not start #38

Open
flyemsafe opened this issue Jul 11, 2021 · 3 comments
Comments

@flyemsafe

I had the following container_run_args, which prevented the service from starting.

    container_run_args: >-
      --rm
      -p 8080:8080 -p 8443:8443 -p 3478:3478/udp -p 10001:10001/udp
      -v "{{ exported_container_volumes_basedir }}/unifi:/unifi:Z"
      --hostname="unifi.{{ domain }}"
      --memory=2048M
      -e TZ="{{ rodhouse_timezone }}"
      #-e UNIFI_UID="{{ unifi_uid }}"
      #-e UNIFI_GID="{{ unifi_gid }}"

These commented-out UID and GID lines caused the container not to start. Perhaps the handlers should be flushed, then a check made whether the service is running, and the play failed if it isn't.
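
For illustration, a rough sketch of the kind of check being suggested, as extra tasks run after the handlers. The unit name unifi-container-pod.service is only a placeholder; the real name depends on how the role templates the systemd service.

    - name: Run the handlers that (re)start the container service now
      ansible.builtin.meta: flush_handlers

    - name: Fail the play if the container unit is not active
      # systemctl is-active exits non-zero when the unit is not running,
      # which makes this task (and the play) fail.
      ansible.builtin.command: systemctl is-active unifi-container-pod.service
      changed_when: false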

@benblasco
Copy link

What's the error you are getting?

I believe the problem is that you have commented out two of the arguments being passed to the container. You need to delete those lines rather than comment them out: inside a YAML folded block scalar (the `>-`), a leading `#` does not start a comment, so the entire string, including the `#` lines, is passed directly to podman, as you can see here:

https://github.com/ikke-t/podman-container-systemd/blob/d498f406afc5a694b56c1acf84be06d8e6f7b4c4/templates/systemd-service-single.j2

Try removing the two offending lines (see the corrected example below) and your container should run. If not, please share more info and I will try to help!
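
For reference, the same container_run_args with the two commented-out lines dropped entirely (variable values are whatever they were in your original playbook):

    container_run_args: >-
      --rm
      -p 8080:8080 -p 8443:8443 -p 3478:3478/udp -p 10001:10001/udp
      -v "{{ exported_container_volumes_basedir }}/unifi:/unifi:Z"
      --hostname="unifi.{{ domain }}"
      --memory=2048M
      -e TZ="{{ rodhouse_timezone }}"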

@flyemsafe
Author

I still think the play should fail regardless of whether it was user error. The issue is that the playbook completed with no errors, yet I had no containers running; I had to dig through the systemd logs to find the problem you describe. There should be some check that the container(s) or pod(s) are actually running.

@ikke-t
Owner

ikke-t commented Jul 17, 2021

It would be nice, although likely tricky, as there could also be issues inside the container, like permissions, buggy application configs, bad mounts, or whatever. So perhaps rather add another task after this role, as the check would be really application specific. Just saying, even verifying that the container started does not necessarily mean that whatever the container is supposed to do is OK.

E.g., add another task that waits a while for a successful web call to your container's service.
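
A rough sketch of such a follow-up task, assuming the UniFi controller from the original container_run_args answers on https://unifi.{{ domain }}:8443 (the URL, retry count, and delay are assumptions, not part of the role):

    - name: Wait for the container's web service to answer
      ansible.builtin.uri:
        url: "https://unifi.{{ domain }}:8443"
        validate_certs: false
        status_code: 200
      register: web_check
      # keep polling until the service returns HTTP 200, or give up after ~5 minutes
      until: web_check.status == 200
      retries: 30
      delay: 10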
