Poststop lifecycle hook prevents deployment status going healthy #9361

Closed
optiz0r opened this issue Nov 14, 2020 · 6 comments · Fixed by #9548
Labels
stage/accepted Confirmed, and intend to work on. No timeline commitment though. theme/task lifecycle type/bug

Comments

optiz0r (Contributor) commented Nov 14, 2020

Nomad version

Nomad v1.0.0-beta3 (fcb32ef)

Operating system and Environment details

Nomad with Docker 19.03.8

Issue

A Nomad task group with a poststop lifecycle hook and a Consul service registration does not transition to a healthy deployment status. After a long timeout the allocation transitions to unhealthy in Nomad. The deployment itself spends a long time in the running state and eventually transitions to failed. All the while, the Consul health check shows healthy.

Removing the poststop lifecycle hook makes the allocation transition immediately to healthy status.

Reproduction steps

Job file

job "gitlab-runner" {
    region = "global"
    datacenters = ["dc1"]

    type = "service"

    update {
        stagger = "30s"
        max_parallel = 1
    }

    group "ci" {
        count = 2

        constraint {
            operator = "distinct_hosts"
            value = "true"
        }

        network {
            port "metrics" {
                static = 9252
            }
        }

        service {
            name = "gitlab-runner"
            tags = ["metrics"]
            port = "metrics"
            check {
                name = "alive"
                type = "http"
                path = "/metrics"
                interval = "10s"
                timeout = "2s"
            }
        }

        task "cleanup-runners" {
            driver = "docker"

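            # prestart: runs before the main "runner" task; verifies registered
            # runners and deletes the ones that fail verification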
            lifecycle {
                hook = "prestart"
            }

            config {
                image = "gitlab/gitlab-runner:latest"
                args = [
                    "verify", "--delete",
                ]
                volumes = [
                    "${NOMAD_ALLOC_DIR}/gitlab-runner:/etc/gitlab-runner",
                ]
            }

            resources {
                cpu = 100 # MHz
                memory = 128 # MB
            }
        }

        task "register" {
            driver = "docker"

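            # prestart: registers this runner with the GitLab server before
            # the main task starts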
            lifecycle {
                hook = "prestart"
            }

            config {
                image = "gitlab/gitlab-runner:latest"
                args = [
                    "register", "-n"
                ]
                volumes = [
                    "/run/docker.sock:/var/run/docker.sock",
                    "${NOMAD_ALLOC_DIR}/gitlab-runner:/etc/gitlab-runner",
                ]
            }

            vault {
                policies = ["gitlab-runner"]
                change_mode = "signal"
                change_signal = "SIGHUP"
            }

            env {
                CI_SERVER_URL = "https://gitlab.example.com/"
                RUNNER_NAME = "${attr.unique.hostname}"
                REGISTER_RUN_UNTAGGED = "1"
                RUNNER_EXECUTOR = "docker"
                DOCKER_IMAGE = "alpine:latest"
                DOCKER_VOLUMES = "/var/run/docker.sock:/var/run/docker.sock"
                REGISTER_LOCKED = "0"
                CONCURRENT = "4"
            }

            template {
                destination = "secrets/registration_token.env"
                env = true
                data = <<-EOT
                REGISTRATION_TOKEN="{{with secret "secret/apps/gitlab-runner/registration_token"}}{{.Data.data.token}}{{end}}"
                EOT
            }

            resources {
                cpu = 100 # MHz
                memory = 128 # MB
            }
        }

        task "runner" {
            driver = "docker"

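            # main task: no lifecycle stanza, so it runs for the life of the allocation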
            config {
                image = "gitlab/gitlab-runner:latest"
                args = [
                    "run"
                ]
                volumes = [
                    "/run/docker.sock:/var/run/docker.sock",
                    "${NOMAD_ALLOC_DIR}/gitlab-runner:/etc/gitlab-runner",
                ]
                ports = ["metrics"]
            }
            
            env {
                LISTEN_ADDRESS = "0.0.0.0:9252"
            }

            resources {
                cpu = 500 # MHz
                memory = 512 # MB
            }

        }

        task "unregister" {
            driver = "docker"

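            # poststop: runs only after the main "runner" task stops; per this
            # issue, its presence blocks the deployment from going healthy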
            lifecycle {
                hook = "poststop"
            }

            config {
                image = "gitlab/gitlab-runner:latest"
                args = [
                    "unregister", "--all-runners"
                ]
                volumes = [
                    "/run/docker.sock:/var/run/docker.sock",
                    "${NOMAD_ALLOC_DIR}/gitlab-runner:/etc/gitlab-runner",
                ]
            }

            env {
                CI_SERVER_URL = "https://gitlab.example.com/"
            }

            resources {
                cpu = 100 # MHz
                memory = 128 # MB
            }
        }
    }
}

Deployment status

# Some time after allocs have started
$ ~/bin/nomad deployment status -verbose 8367f485
ID          = 8367f485-19ae-b600-c76c-a42d017eb71a
Job ID      = gitlab-runner
Job Version = 0
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
ci          2        2       0        0          2020-11-14T00:26:15Z
rockaut commented Nov 30, 2020

Same here, also on beta3 with different jobs. Removing the poststop hook and everything is healthy again.

drewbailey (Contributor) commented

Thank you for the report. I've reproduced this and we are looking into it.

rockaut commented Dec 7, 2020

@drewbailey nice ... if there's something to test just ping me. I'm always willing to tinker around. Well... at least if it fits on arm64 :D

tgross added the stage/accepted Confirmed, and intend to work on. No timeline commitment though. label Dec 17, 2020
jazzyfresh added a commit that referenced this issue Jan 12, 2021
* investigating where to ignore poststop task in alloc health tracker

* ignore poststop when setting latest start time for allocation

* clean up logic

* lifecycle: isolate mocks for poststop deployment test

* lifecycle: update comments in tracker

Co-authored-by: Jasmine Dahilig <[email protected]>
jazzyfresh added a commit that referenced this issue Jan 12, 2021
drewbailey pushed a commit that referenced this issue Jan 12, 2021
backspace pushed a commit that referenced this issue Jan 22, 2021
backspace pushed a commit that referenced this issue Jan 22, 2021
jfabales commented Jan 24, 2021

Hi,
Just noticed this happening to my jobs with poststart tasks and sidecar=false after upgrading to Nomad v1.0.2. Allocations become unhealthy after the poststart tasks have finished running, while the Consul service checks all show healthy. Tasks all transition to healthy again after removing the poststart task.
So just wondering if the same fix needs to be applied for poststart tasks?

Thanks!
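
For reference, a minimal sketch of the kind of stanza being described here, with an illustrative task name, image, and command that are not taken from the job above:

task "warmup" {
    driver = "docker"

    # poststart with sidecar = false: starts after the main task is
    # running and is expected to exit on its own
    lifecycle {
        hook = "poststart"
        sidecar = false
    }

    config {
        image = "alpine:latest"
        command = "echo"
        args = ["warmed up"]
    }
}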

havenith commented Nov 7, 2021

I'm seeing this too on Nomad 1.1.6. We need poststart non-sidecar tasks to complete the deployment steps for us, so this is basically a showstopper.

github-actions (bot) commented

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Oct 14, 2022