
Core (sigsegv) on startup with QNAP and 2.3 series #102

Open
riversedge opened this issue Jan 30, 2024 · 4 comments
@riversedge

On a QNAP using Container Station, both the latest and armhf tags will not start. You can watch the Docker container launch, but it appears to immediately sigsegv and return exit code 139. From what I can tell, nothing useful appears on screen or in the logs.

I have tested several other tags that work fine: 1.7.5, 2.2.12, and 2.2.15 all run. 2.3.35 and 2.3.57 crash immediately. Given that I don't even see a bootloader or any other text appear, could it be something with the Alpine Linux 3.17 bootloader/settings?

docker events just shows:

container start [uuid] (image=nico640/docker-unms:latest, name=docker-unms-1)
container die [uuid] (exitCode=139, image=nico640/docker-unms:latest, name=docker-unms-1)

CPU Info: Annapurna Labs Alpine AL314 Quad-core ARM Cortex-A15 CPU @ 1.70GHz
uname -m returns armv7l

docker -v
Docker version 20.10.22-qnap7, build 57ed8b8

docker inspect docker-unms-1           
[
    {
        "Id": "[UUID]",
        "Created": "2024-01-30T02:23:25.39556586Z",
        "Path": "/init",
        "Args": [],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 139,
            "Error": "",
            "StartedAt": "2024-01-30T02:23:30.0161311Z",
            "FinishedAt": "2024-01-30T02:23:30.1245157Z"
        },
...
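One detail worth decoding in the inspect output above: exit code 139 is 128 + 11, i.e. the container's init process was killed by signal 11 (SIGSEGV), matching the issue title. A minimal sketch of pulling the ExitCode field out of inspect-style JSON with standard shell tools; the JSON string here is a trimmed stand-in for the real output, and with a live daemon you would pipe `docker inspect docker-unms-1` in instead:

```shell
# Trimmed stand-in for the `docker inspect` JSON shown above
# (with a live daemon: docker inspect docker-unms-1).
inspect_json='[{"State":{"Status":"exited","ExitCode":139}}]'

# Pull out the numeric ExitCode value.
exit_code=$(printf '%s' "$inspect_json" | grep -o '"ExitCode": *[0-9]*' | grep -o '[0-9]*$')

# 139 = 128 + 11: PID 1 of the container died from signal 11 (SIGSEGV).
echo "exit_code=$exit_code signal=$((exit_code - 128))"
```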
@Nico640
Owner

Nico640 commented Feb 18, 2024

Hi, sorry for the late reply, do I understand correctly that docker logs docker-unms-1 returns nothing at all? Can you post the output if it does return something?

@riversedge
Author

riversedge commented Feb 18, 2024 via email

@Nico640
Owner

Nico640 commented Mar 10, 2024

Hmm, possible. The only thing I can think of that would cause the whole container to crash is if the s6-supervise process /init died.

In that case it would be good to know if it also happens when trying to run the base image (nico640/s6-alpine-node), because that would narrow it down quite a bit.

Something like this:

docker run -d --name s6-test-latest nico640/s6-alpine-node:latest

Then check whether the container crashes or stays running, and whether there is anything in the logs (there should only be a few lines, because the image doesn't actually do anything).

If it does crash, I have pushed a new image on the testing tag with an updated version of s6-supervise; check if that one also crashes:

docker run -d --name s6-test-testing nico640/s6-alpine-node:testing

@riversedge
Author

The s6-test-latest container crashes immediately. docker container logs s6-test-latest returns nothing at all. The same thing happens with s6-test-testing: it downloads, starts, and then just stops with no apparent messages. If you try to start it again, it starts, then just stops; nothing is ever displayed and there are no logs.
