runtime: linux/arm64 crash in runtime.sigtrampgo #32912

Closed
jing-rui opened this issue Jul 3, 2019 · 33 comments
Labels: FrozenDueToAge, NeedsInvestigation (someone must examine and confirm this is a valid issue and not a duplicate of an existing one)

@jing-rui commented Jul 3, 2019

What version of Go are you using (go version)?

$ go version
go version go1.13beta1 linux/arm64

Does this issue reproduce with the latest release?

Yes, we reproduced it with both go1.11.2 and go1.13beta1.

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GO111MODULE=""
GOARCH="arm64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="arm64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/root/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_arm64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build139440885=/tmp/go-build -gno-record-gcc-switches"

What did you do?

We are running Docker tests on an arm64/aarch64 physical machine with kernel 4.19.36. The containers are configured with health-cmd, so dockerd calls an exec command periodically, and the test framework also executes docker run/stop commands against the containers. The core dumps happen in containerd-shim and runc (1-5 core dumps per day).

What did you expect to see?

No crash in the runtime.

What did you see instead?

A containerd-shim core dump with go1.13beta1:

Core was generated by `containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.c'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00000000000554cc in runtime.sigtrampgo (sig=<optimized out>, info=0x0, ctx=0x0) at /usr/local/go/src/runtime/signal_unix.go:308
308             if sp < g.m.gsignal.stack.lo || sp >= g.m.gsignal.stack.hi {
Dump of assembler code for function runtime.sigtrampgo:
   0x0000000000055480 <+0>:     str     x30, [sp, #-192]!
   0x0000000000055484 <+4>:     stur    x29, [sp, #-8]
   0x0000000000055488 <+8>:     sub     x29, sp, #0x8
   0x000000000005548c <+12>:    ldr     w0, [sp, #200]
   0x0000000000055490 <+16>:    str     w0, [sp, #8]
   0x0000000000055494 <+20>:    ldr     x0, [sp, #208]
   0x0000000000055498 <+24>:    str     x0, [sp, #16]
   0x000000000005549c <+28>:    ldr     x1, [sp, #216]
   0x00000000000554a0 <+32>:    str     x1, [sp, #24]
   0x00000000000554a4 <+36>:    bl      0x55f20 <runtime.sigfwdgo>
   0x00000000000554a8 <+40>:    ldrb    w0, [sp, #32]
   0x00000000000554ac <+44>:    cbnz    x0, 0x55758 <runtime.sigtrampgo+728>
   0x00000000000554b0 <+48>:    mov     x0, x28
   0x00000000000554b4 <+52>:    cbz     x0, 0x556e0 <runtime.sigtrampgo+608>
   0x00000000000554b8 <+56>:    str     x0, [sp, #96]
   0x00000000000554bc <+60>:    stp     xzr, xzr, [sp, #56]
   0x00000000000554c0 <+64>:    stp     xzr, xzr, [sp, #72]
   0x00000000000554c4 <+68>:    str     xzr, [sp, #88]
   0x00000000000554c8 <+72>:    ldr     x1, [x0, #48]      // x1 = g.m (m is at offset 48 in g)
=> 0x00000000000554cc <+76>:    ldr     x2, [x1, #80]      // x2 = m.gsignal; faults here because x1 is bad
   0x00000000000554d0 <+80>:    add     x3, sp, #0xc8
   0x00000000000554d4 <+84>:    ldr     x4, [x2]
   0x00000000000554d8 <+88>:    cmp     x3, x4
---
(gdb) i reg
x0             0x4000000480        274877908096
x1             0xd                 13
x2             0xb                 11
x3             0x1                 1
x4             0x69b0a0            6926496
x5             0x989680            10000000
x6             0xa000000           167772160
x7             0x18                24
x8             0x65                101
x9             0x3938700000000     1006632960000000
x10            0x500ad04848b2      88007374293170
x11            0xffffffffa2f24bde  -1561179170
x12            0x43b20             277280
x13            0x178               376
x14            0xb                 11
x15            0x8                 8
x16            0xffffcd0cbf68      281474121908072
x17            0xffffcd0cbf48      281474121908040
x18            0x0                 0
x19            0x0                 0
x20            0x4000043ee0        274878185184
x21            0xd                 13
x22            0x0                 0
x23            0x0                 0
x24            0x0                 0
x25            0x0                 0
x26            0x4000043dd0        274878184912
x27            0x69a952            6924626
x28            0x4000000480        274877908096
x29            0x400003cc08        274878155784
x30            0x554a8             349352
sp             0x400003cc10        0x400003cc10
pc             0x554cc             0x554cc <runtime.sigtrampgo+76>
cpsr           0x80000000          [ EL=0 N ]
fpsr           0x10                16
fpcr           0x0                 0
---
(gdb) p /x $x0
$1 = 0x4000000480
(gdb) p g
$2 = (runtime.g *) 0x4000000480
(gdb) p /x g.m.gsignal.stack
$3 = {lo = 0x4000036000, hi = 0x400003e000}


(gdb) p *g
$4 = {stack = {lo = 274878177280, hi = 274878185472}, stackguard0 = 274878178160, stackguard1 = 274878178160, _panic = 0x0, _defer = 0x0, m = 0x4000034000, sched = {sp = 274878185408,
    pc = 277388, g = 274877908096, ctxt = 0x0, ret = 0, lr = 0, bp = 0}, syscallsp = 0, syscallpc = 0, stktopsp = 0, param = 0x0, atomicstatus = 0, stackLock = 0, goid = 0, schedlink = 0,
  waitsince = 0, waitreason = 0 '\000', preempt = false, paniconfault = false, preemptscan = false, gcscandone = false, gcscanvalid = false, throwsplit = false, raceignore = 0 '\000',
  sysblocktraced = false, sysexitticks = 0, traceseq = 0, tracelastp = 0, lockedm = 0, sig = 0, writebuf = {array = 0x0, len = 0, cap = 0}, sigcode0 = 0, sigcode1 = 0, sigpc = 0, gopc = 0,
  ancestors = 0x0, startpc = 0, racectx = 0, waiting = 0x0, cgoCtxt = {array = 0x0, len = 0, cap = 0}, labels = 0x0, timer = 0x0, selectDone = 0, gcAssistBytes = 0}
(gdb) p *g.m
$5 = {g0 = 0x4000000480, morebuf = {sp = 0, pc = 0, g = 0, ctxt = 0x0, ret = 0, lr = 0, bp = 0}, divmod = 0, procid = 32875, gsignal = 0x4000000300, goSigStack = {stack = {lo = 0, hi = 0},
    stackguard0 = 0, stackguard1 = 0, stktopsp = 0}, sigmask = {0, 0}, tls = {0, 0, 0, 0, 0, 0}, mstartfn = {void (void)} 0x4000034000, curg = 0x0, caughtsig = 0, p = 0, nextp = 0,
  oldp = 0, id = 1, mallocing = 0, throwing = 0, preemptoff = 0x0 "", locks = 0, dying = 0, profilehz = 0, spinning = false, blocked = false, newSigstack = true, printlock = 0 '\000',
  incgo = false, freeWait = 0, fastrand = {1597334677, 4294407959}, needextram = false, traceback = 0 '\000', ncgocall = 0, ncgo = 0, cgoCallersUse = 0, cgoCallers = 0x0, park = {key = 0},
  alllink = 0x67ffe0 <runtime.m0>, schedlink = 0, mcache = 0x0, lockedg = 0, createstack = {0 <repeats 32 times>}, lockedExt = 0, lockedInt = 0, nextwaitm = 0,
  waitunlockf = {void (runtime.g *, void *, bool *)} 0x4000034000, waitlock = 0x0, waittraceev = 0 '\000', waittraceskip = 0, startingtrace = false, syscalltick = 0, thread = 0,
  freelink = 0x0, libcall = {fn = 0, n = 0, args = 0, r1 = 0, r2 = 0, err = 0}, libcallpc = 0, libcallsp = 0, libcallg = 0, syscall = {fn = 0, n = 0, args = 0, r1 = 0, r2 = 0, err = 0},
  vdsoSP = 0, vdsoPC = 307440, dlogPerM = {<No data fields>}, mOS = {<No data fields>}}

--- futexsleep thread info dropped; the two useful stacks remain:
Thread 8 (LWP 32881):
#0  syscall.Syscall6 () at /usr/local/go/src/syscall/asm_linux_arm64.s:44
#1  0x00000000002dc864 in github.com/containerd/containerd/vendor/golang.org/x/sys/unix.EpollWait (epfd=9, events=..., msec=-1, n=0, err=...)
    at /root/containerd-1.2.0/.gopath/src/github.com/containerd/containerd/vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go:1499
#2  0x00000000002ddae4 in github.com/containerd/containerd/vendor/github.com/containerd/console.(*Epoller).Wait (e=0x400007e080, ~r0=...)
    at /root/containerd-1.2.0/.gopath/src/github.com/containerd/containerd/vendor/github.com/containerd/console/console_linux.go:110
#3  0x000000000006dda4 in runtime.goexit () at /usr/local/go/src/runtime/asm_arm64.s:1128

Thread 7 (LWP 32882):
#0  runtime.epollwait () at /usr/local/go/src/runtime/sys_linux_arm64.s:596
#1  0x000000000003ca1c in runtime.netpoll (block=true, ~r1=...) at /usr/local/go/src/runtime/netpoll_epoll.go:71
#2  0x000000000004617c in runtime.findrunnable (gp=<optimized out>, inheritTime=<optimized out>) at /usr/local/go/src/runtime/proc.go:2372
#3  0x0000000000046e88 in runtime.schedule () at /usr/local/go/src/runtime/proc.go:2524
#4  0x0000000000043c40 in runtime.mstart1 () at /usr/local/go/src/runtime/proc.go:1208
#5  0x0000000000043b8c in runtime.mstart () at /usr/local/go/src/runtime/proc.go:1167
#6  0x000000000006ec80 in runtime.clone () at /usr/local/go/src/runtime/sys_linux_arm64.s:525
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

It looks like X0 holds a valid g, and g.m.gsignal.stack in memory is fine, but X1, which was just loaded from g.m (ldr x1, [x0, #48]), is 0xd, which is the bad value.

The runc core dump is the same:

Core was generated by `runc --root /var/run/docker/runtime-runc/moby --log /run/docker/containerd/daem'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000043bfe0 in runtime.sigtrampgo ()
(gdb) thread apply all bt (drop runtime.usleep)

Thread 6 (Thread 0xffffa8b85070 (LWP 7652)):
#0  0x00000000004721c4 in syscall.Syscall ()
#1  0x000000000046f5d4 in syscall.read ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
   0x000000000043bfd8 <+72>:    str     xzr, [sp, #96]
   0x000000000043bfdc <+76>:    ldr     x1, [x0, #48]
=> 0x000000000043bfe0 <+80>:    ldr     x2, [x1, #80]      // the same position
(gdb) i reg
x0             0x4000000d80        274877910400
x1             0x0                 0
x2             0x0                 0
x3             0x1                 1
x4             0xad4900            11356416
x5             0x0                 0
x6             0x7                 7
x7             0x2                 2
x8             0x62                98
x9             0x2                 2
x10            0x0                 0
x11            0x0                 0
x12            0x0                 0
x13            0x0                 0
x14            0x0                 0
x15            0x1                 1
x16            0x4000056d70        274878262640
x17            0x6adc90            7003280
x18            0x270f              9999
x19            0x42c5e0            4376032
x20            0x451d60            4529504
x21            0xffffec93f44e      281474650862670
x22            0xffffec93f44f      281474650862671
x23            0xad1fb0            11345840
x24            0xffffec93f530      281474650862896
x25            0x1000              4096
x26            0x7c2780            8136576
x27            0xad417a            11354490
x28            0x4000000d80        274877910400
x29            0x4000057ff0        274878267376
x30            0x43bfbc            4439996
sp             0x4000056cd0        0x4000056cd0
pc             0x43bfe0            0x43bfe0 <runtime.sigtrampgo+80>
cpsr           0x80000000          [ EL=0 N ]
fpsr           0x10                16
fpcr           0x0                 0

runc crashes at the same position, but with X1=0. This runc core was produced with go1.11.2.

@jing-rui changed the title from "runtime: linux/arm64 crash in runtime.sigtramp" to "runtime: linux/arm64 crash in runtime.sigtrampgo" Jul 3, 2019
@ianlancetaylor added the NeedsInvestigation and release-blocker labels Jul 3, 2019
@ianlancetaylor added this to the Go1.13 milestone Jul 3, 2019
@agnivade (Contributor) commented Jul 3, 2019

Related to #32738?

@cherrymui (Member)

> Related to #32738?

No. That is for ARM32.

@cherrymui (Member)

@jing-rui thanks for the report.

I'm a little confused by the information. You mentioned that at the fault R0 = g = 0x4000000480 and R1 is g.m, but R1 is 13, whereas when you print *g its m field is 0x4000034000. Are both printed at the same location?

Also, sigtrampgo runs in the signal handler, so a signal must have arrived first. Do you happen to know what signal it is, and at which PC it is delivered (if it is a synchronous signal)? Is it possible to grab a stack trace from sigtrampgo where it faults?

@ianlancetaylor (Contributor)

@cherrymui We could see this kind of behavior (a discrepancy between what gdb sees in memory and what it sees in the registers) if the setting of the g.m field were done non-atomically (which it is) and there were no memory barriers between the setting of g.m and the signal. I couldn't see any case where we could receive the signal on a thread that didn't see the setting of g.m, but I didn't look that hard.
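
As a toy illustration of that kind of unsynchronized publication (ordinary Go, not runtime code; the g and m types here are stand-ins):

package main

import "time"

type m struct{ procid uint64 }
type g struct{ m *m }

func main() {
	gp := &g{}
	// Writer: publishes gp.m with a plain, non-atomic store and no barrier.
	go func() { gp.m = &m{procid: 1} }()
	// Reader: may keep observing the stale nil value for a while;
	// `go run -race` flags this write/read pair as a data race.
	for gp.m == nil {
		time.Sleep(time.Microsecond)
	}
}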

@erifan commented Jul 5, 2019

Hi @jing-rui, do you have a test case that reproduces this issue? And can you share the Docker container information with us? Thanks.

@jing-rui (Author) commented Jul 5, 2019

////// More notes on the containerd-shim core with go1.13
---
(gdb) p g
$1 = (runtime.g *) 0x4000000480 // g points to 0x4000000480, which is x0.

The address of g.m should be g + 48 bytes, so &g.m == 0x40000004b0.

(gdb) x /2x 0x40000004b0
0x40000004b0: 0x00034000 0x00000040 // little-endian: g.m is 0x4000034000
(gdb) p g.m
$5 = (runtime.m *) 0x4000034000 // g.m == 0x4000034000
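
A quick sanity check of that arithmetic, using a stand-in struct with the same leading field layout as runtime.g on 64-bit (a hypothetical mirror; runtime.g itself is unexported):

package main

import (
	"fmt"
	"unsafe"
)

type stack struct{ lo, hi uintptr }

// gHeader mirrors the leading fields of runtime.g seen in the gdb output:
// stack, stackguard0, stackguard1, _panic, _defer, m.
type gHeader struct {
	stack       stack
	stackguard0 uintptr
	stackguard1 uintptr
	_panic      unsafe.Pointer
	_defer      unsafe.Pointer
	m           unsafe.Pointer
}

func main() {
	fmt.Println(unsafe.Offsetof(gHeader{}.m)) // prints 48: g.m is at g + 48 bytes
}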

Dump of assembler code for function runtime.sigtrampgo:
...
0x00000000000554a4 <+36>: bl 0x55f20 <runtime.sigfwdgo>
0x00000000000554a8 <+40>: ldrb w0, [sp, #32]
0x00000000000554ac <+44>: cbnz x0, 0x55758 <runtime.sigtrampgo+728>
0x00000000000554b0 <+48>: mov x0, x28
0x00000000000554b4 <+52>: cbz x0, 0x556e0 <runtime.sigtrampgo+608>
0x00000000000554b8 <+56>: str x0, [sp, #96]
0x00000000000554bc <+60>: stp xzr, xzr, [sp, #56]
0x00000000000554c0 <+64>: stp xzr, xzr, [sp, #72]
0x00000000000554c4 <+68>: str xzr, [sp, #88]
0x00000000000554c8 <+72>: ldr x1, [x0, #48] // x1 is loaded from x0+48 == 0x40000004b0, so x1 should be g.m = 0x4000034000.
=> 0x00000000000554cc <+76>: ldr x2, [x1, #80] // but x1 is 13 in the registers.
---

////// Test with modified go1.11.2 for runc

We cannot get the argument sig because it is optimized out. We modified
signal_unix.go in go1.11.2 and got the stack below; it shows sig=11.

#0 0x00000000004420d0 in runtime.(*sigctxt).preparePanic (c=0x4000058cb0, gp=<optimized out>, sig=<optimized out>) at /usr/lib/golang/src/runtime/signal_arm64.go:68
#1 0x0000000000443114 in runtime.sighandler (ctxt=<optimized out>, gp=0x4000000480, info=0x4000058da0, sig=11) at /usr/lib/golang/src/runtime/signal_sighandler.go:74
#2 0x0000000000443980 in runtime.sigtrampgo (ctx=0x4000058e20, info=0x4000058da0, sig=<optimized out>) at /usr/lib/golang/src/runtime/signal_unix.go:351
#3 0x000000000045ac24 in runtime.sigtramp () at /usr/lib/golang/src/runtime/sys_linux_arm64.s:326
#4 <signal handler called>
#5 0x0000000000000000 in ?? ()

decode the siginfo struct:
(gdb) x /10x 0x4000058da0
0x4000058da0: 0x0000000b 0x00000000 0x00000002 0x00000000 // signo=11 errno=0 code=2
0x4000058db0: 0x00000000 0x00000000 0x00000000 0x00000000
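
For reference, Linux siginfo begins with three 32-bit fields (si_signo, si_errno, si_code), so the words above decode as follows; a minimal sketch, with SEGV_ACCERR being the standard meaning of si_code=2 for SIGSEGV on Linux:

package main

import "fmt"

func main() {
	// First three 32-bit words of the dump at 0x4000058da0.
	words := []uint32{0x0000000b, 0x00000000, 0x00000002}
	fmt.Printf("si_signo=%d (SIGSEGV) si_errno=%d si_code=%d (SEGV_ACCERR)\n",
		words[0], words[1], words[2])
}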

We also tested a modified sigtrampgo that calls exit(sig) directly where the check would otherwise crash; the exit status was 11.

So the signal is 11 (SIGSEGV), but we cannot get more information about what caused it.

////// How to reproduce

I'm not sure whether it can be reproduced with current docker-ce; we suggest trying docker-18.09.

Create many containers (more than 20) using the command below, and wait for runc core dumps (1-3 hours):
docker run -tid --runtime=runc --health-cmd=ps --health-interval=1s ubuntu-arm-image

We also tested in a VM and could not reproduce it there.

@jing-rui (Author) commented Jul 6, 2019

Using go1.11.2 modified to force an exit when g.m == nil in sigtrampgo, we got yet another core dump, which is quite strange:
(gdb) bt
#0 0x0000000000431018 in runtime.needm (x=<optimized out>) at /usr/lib/golang/src/runtime/proc.go:1672
#1 0x0000000000000000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) i locals
_g_ = 0x706f2f3d48544150
mp = 0x0

1645 mp := lockextra(false)
1672 setg(mp.g0)

The mp should not be 0, and _g_ also looks bad: 0x706f2f3d48544150 is ASCII "PATH=/op" read little-endian, i.e. garbage from an environment string.

@jing-rui (Author) commented Jul 8, 2019

Update: we added logging in runc to detect where it crashes, and we found it.

func parseSync(pipe io.Reader, fn func(*syncT) error) error {
        dec := json.NewDecoder(pipe)
        for {
                var sync syncT
                if err := dec.Decode(&sync); err != nil {            // crash here.

We modified the code to rule out the json package, and it still crashes at the pipe read:

        data := make([]byte, 4096)

        n, err := pipe.Read(data)  // crash point

How can a pipe read crash the Go runtime?

@ianlancetaylor (Contributor)

@jing-rui I don't think a pipe read is causing the crash. The crash is occurring while handling a signal. My guess would be that your program has paused at the point of the pipe read, and then the signal occurs.

@jing-rui (Author) commented Jul 9, 2019

We found another crash point.

134 func (p *setnsProcess) execSetns() error {
135         status, err := p.cmd.Process.Wait()      // runc crashes when calling wait4

runc's cmd.Start() calls "runc init" to exec a command in the container namespace. When "runc init" runs, the C code nsexec() in nsexec.c runs first, before any Go code. We added logs to nsexec.c and "runc init"; the logs show that nsexec() exits successfully, but the parent process crashes.

709                        if (write(pipenum, buf, len) != len) {
710                            writelog("write pipenum failed");
711                                kill(child, SIGKILL);
712                                bail("unable to send child pid to bootstrapper");
713                        }
714                        writelog("--- write pid to parent pipe ok");  // we get this log in log file.
715                        exit(0);
716                }
717
718        /*
719         * Stage 1: We're in the first child process. Our job is to join any
720         *          provided namespaces in the netlink payload and unshare all
721         *          of the requested namespaces. If we've been asked to
722         *          CLONE_NEWUSER, we will ask our parent (stage 0) to set up
723         *          our user mappings for us. Then, we create a new child
724         *          (stage 2: JUMP_INIT) for PID namespace. We then send the
725         *          child's PID to our parent (stage 0).
726         */
727        case JUMP_CHILD: {
// logs:
[1562681244072006240] --- write pid to parent pipe ok

The parent runc crashes while "runc init" is running, so we added logs to "runc init" as well and got the expected log entry.

293                // We have an error during the initialization of the container's init,
294                // send it back to the parent process in the form of an initError.
295                if werr := utils.WriteJSON(pipe, syncT{procError}); werr != nil {
296                        logrus.Log("write json error %s", werr)
297                        fmt.Fprintln(os.Stderr, err)
298                        return
299                }

jr-runc.log:[1562681244072006240]I 2019-07-09 14:07:24.09 runc ers/runc/libcontainer/factory_linux.go:296:func1() write json error write pipe: broken pipe

From the core dump file, the syscall.Syscall6() frame should be the wait4 call?

Here is our modification to go1.11.2 (signal_unix.go):
293        if g == nil {
294                c := &sigctxt{info, ctx}
295                if sig == _SIGPROF {
296                        sigprofNonGoPC(c.sigpc())
297                        return
298                }
299                badsignal(uintptr(sig), c)
300                return
301        }
302
303        if g.m == nil || g.m.gsignal == nil {
304                exit(int32(sig))
305        }

(gdb) i thr
  Id   Target Id                         Frame
* 1    Thread 0xffffb31401d0 (LWP 62427) runtime.sigtrampgo (ctx=0x0, info=0x0, sig=<optimized out>) at /usr/lib/golang/src/runtime/signal_unix.go:303
  2    Thread 0xffffb5bb2010 (LWP 62412) runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_arm64.s:407
  3    Thread 0xffffb39411d0 (LWP 62425) runtime.usleep () at /usr/lib/golang/src/runtime/sys_linux_arm64.s:136
  4    Thread 0xffffb293f1d0 (LWP 62428) syscall.Syscall6 () at /usr/lib/golang/src/syscall/asm_linux_arm64.s:43
  5    Thread 0xffffb18fd1d0 (LWP 62444) runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_arm64.s:407
  6    Thread 0xffffb213e1d0 (LWP 62432) runtime.futex () at /usr/lib/golang/src/runtime/sys_linux_arm64.s:407

I'm wondering whether p.cmd.Process.Wait() can cause the runtime to crash.

@ianlancetaylor (Contributor)

It's quite unlikely that the backtrace will show you what is causing the crash. The backtrace is typically going to show you a point where the program is stalled. The crash is due to a signal being received, and since a signal can occur at any time, and since most programs spend most of their time stalled, the odds are very good that when the signal occurs the program will be stalled. In this case, stalled in the wait4 system call.

Is there a way that we can recreate this problem ourselves?

@mkeeler commented Jul 10, 2019

I am not sure if it's related, but the linux/arm (32-bit) build of Consul crashes in this exact spot.

It's handling a SIGPROF signal, but the call to getg() (for which the compiler generates instructions) returns an invalid pointer (0x1 in my case).

To reproduce, I use the binary here: https://releases.hashicorp.com/consul/1.5.2/consul_1.5.2_linux_arm.zip

Run consul agent -dev in one shell and consul debug in another. This reliably causes the segfault for me in the Unix implementation of sigtrampgo, exactly on line 308.

@notogawa

I am not sure if it's related either, but the linux/arm32 build of containerd-shim crashes at the same point.

I built containerd (v1.2.7) for armv6l using Go 1.11 and ran a Docker container on arm32 (Raspberry Pi 3). The container is configured with a health check. The SEGV happens in containerd-shim after a long run (30 minutes to 24 hours).

Thread 3 "containerd-shim" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 947]
runtime.sigtrampgo (ctx=0x2041d08, info=0x2041c88, sig=<optimized out>) at /usr/lib/go-1.11/src/runtime/signal_unix.go:307
307             if sp < g.m.gsignal.stack.lo || sp >= g.m.gsignal.stack.hi {
(gdb) bt
#0  runtime.sigtrampgo (ctx=0x2041d08, info=0x2041c88, sig=<optimized out>) at /usr/lib/go-1.11/src/runtime/signal_unix.go:307
#1  0x000665d4 in runtime.sigtramp () at /usr/lib/go-1.11/src/runtime/sys_linux_arm.s:445
#2  0x7eac4338 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) disas
Dump of assembler code for function runtime.sigtrampgo:
   0x0004ef0c <+0>:     str     lr, [sp, #-84]! ; 0xffffffac
   0x0004ef10 <+4>:     mov     r0, #0
   0x0004ef14 <+8>:     add     r1, sp, #60     ; 0x3c
   0x0004ef18 <+12>:    bl      0x65a20 <runtime.duffzero+488>
   0x0004ef1c <+16>:    ldr     r0, [sp, #88]   ; 0x58
   0x0004ef20 <+20>:    str     r0, [sp, #4]
   0x0004ef24 <+24>:    ldr     r0, [sp, #92]   ; 0x5c
   0x0004ef28 <+28>:    str     r0, [sp, #8]
   0x0004ef2c <+32>:    ldr     r1, [sp, #96]   ; 0x60
   0x0004ef30 <+36>:    str     r1, [sp, #12]
   0x0004ef34 <+40>:    bl      0x4fa24 <runtime.sigfwdgo>
   0x0004ef38 <+44>:    ldrb    r0, [sp, #16]
   0x0004ef3c <+48>:    cmp     r0, #0
   0x0004ef40 <+52>:    bne     0x4f2e4 <runtime.sigtrampgo+984>
   0x0004ef44 <+56>:    cmp     r10, #0
   0x0004ef48 <+60>:    beq     0x4f28c <runtime.sigtrampgo+896>
   0x0004ef4c <+64>:    str     r10, [sp, #48]  ; 0x30
   0x0004ef50 <+68>:    add     r1, sp, #28
   0x0004ef54 <+72>:    mov     r0, #0
   0x0004ef58 <+76>:    bl      0x65a24 <runtime.duffzero+492>
=> 0x0004ef5c <+80>:    ldr     r2, [r10, #24]
   0x0004ef60 <+84>:    ldr     r3, [r2, #44]   ; 0x2c
   0x0004ef64 <+88>:    add     r4, sp, #88     ; 0x58
   0x0004ef68 <+92>:    ldr     r5, [r3]
   0x0004ef6c <+96>:    cmp     r4, r5
   0x0004ef70 <+100>:   bcs     0x4f274 <runtime.sigtrampgo+872>
   0x0004ef74 <+104>:   str     r4, [sp, #24]
   0x0004ef78 <+108>:   ldr     r0, [r2]
   0x0004ef7c <+112>:   ldr     r1, [r0]
   0x0004ef80 <+116>:   cmp     r4, r1
   0x0004ef84 <+120>:   bcc     0x4f124 <runtime.sigtrampgo+536>
   0x0004ef88 <+124>:   ldr     r0, [r0, #4]
   0x0004ef8c <+128>:   cmp     r4, r0
   0x0004ef90 <+132>:   bcs     0x4f124 <runtime.sigtrampgo+536>
   0x0004ef94 <+136>:   mov     r0, #0
   0x0004ef98 <+140>:   str     r0, [sp, #60]   ; 0x3c
   0x0004ef9c <+144>:   mov     r0, #0
--Type <RET> for more, q to quit, c to continue without paging--q
Quit
(gdb) i reg
r0             0x0                 0
r1             0x2041c54           33823828
r2             0x181               385
r3             0x4f6944            5204292
r4             0x2039ee8           33791720
r5             0xfffffff8          4294967288
r6             0x49766709          1232496393
r7             0x94c3              38083
r8             0x2db33             187187
r9             0x0                 0
r10            0xa2d54f8a          2731888522
r11            0x4f62eb            5202667
r12            0x341554d3          873813203
sp             0x2041c24           0x2041c24
lr             0x4ef5c             323420
pc             0x4ef5c             0x4ef5c <runtime.sigtrampgo+80>
cpsr           0xa0000010          -1610612720
fpscr          0x60000011          1610612753
(gdb) p g
$1 = (runtime.g *) 0xa2d54f8a
(gdb) p *g
Cannot access memory at address 0xa2d54f8a

@mkeeler commented Jul 15, 2019

After further debugging, I found that roughly 75% of the time the segfault happens when the g value is 0x1. The other 25% of the time the segfault happens with a valid-looking pointer that does not point to any valid g object, but rather somewhere seemingly random in the process's address space.

For the 75% case I set up some hardware watchpoints to see where the g register (r10) was being set to 0x1. The culprit seems to be here:

MOVW runtime·vdsoClockgettimeSym(SB), R11
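
(Reconstructed from the watchpoint output below; the exact command used is an assumption.)

(gdb) watch $r10 == 0x1
(gdb) continue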

The backtrace when my watchpoint was hit is:

Continuing.
[Switching to LWP 21996]

Thread 1 "consul" hit Watchpoint 1: $r10 == 0x1

Old value = false
New value = true
0xb6fff3ba in __vdso_clock_gettime ()
(gdb) bt
#0  0xb6fff3ba in __vdso_clock_gettime ()
#1  0x00072c08 in runtime.nanotime () at /usr/local/go/src/runtime/sys_linux_arm.s:300
#2  0x000702e0 in runtime.systemstack () at /usr/local/go/src/runtime/asm_arm.s:354
#3  0x00070150 in runtime.rt0_go () at /usr/local/go/src/runtime/asm_arm.s:176
#4  0x00070148 in runtime.rt0_go () at /usr/local/go/src/runtime/asm_arm.s:172
#5  0x00000000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

So it would seem that the call to __vdso_clock_gettime could be clobbering the r10 register.

@cherrymui (Member)

@mkeeler thanks! This is very helpful. I think you're right that R10 may be clobbered during __vdso_clock_gettime. Normally this is fine, as the VDSO code restores R10 upon return. But if we get a signal during the VDSO call, we would still see the bad g.

I think we should save g to TLS before making a VDSO call, and load g from TLS in the signal handler.
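
A rough sketch of that idea in Go assembly; this is a hypothetical shape, not the actual fix (runtime·save_g and runtime·load_g are the existing TLS helpers that come up below):

// In runtime·nanotime / runtime·walltime (sys_linux_arm.s), around the VDSO call:
BL	runtime·save_g(SB)	// publish g where the signal handler can find it
BL	(R11)			// __vdso_clock_gettime may clobber R10 (g)
BL	runtime·load_g(SB)	// restore g after the call
// ... and in runtime·sigtramp, call runtime·load_g unconditionally,
// not only when runtime·iscgo is set.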

@mkeeler commented Jul 15, 2019

__vdso_clock_gettime definitely does use r10:

(gdb) disassemble /s
Dump of assembler code for function __vdso_clock_gettime:
   0xb6fff3a8 <+0>:     stmdb   sp!, {r4, r5, r6, r7, r8, r9, r10, r11, lr}
   0xb6fff3ac <+4>:     mov     r10, r0
=> 0xb6fff3ae <+6>:     sub     sp, #28
   0xb6fff3b0 <+8>:     mov     r6, r1
   0xb6fff3b2 <+10>:    bl      0xb6fff5d8
   0xb6fff3b6 <+14>:    cmp.w   r10, #6
   0xb6fff3ba <+18>:    bhi.n   0xb6fff468 <__vdso_clock_gettime+192>
   0xb6fff3bc <+20>:    tbb     [pc, r10]
   0xb6fff3c0 <+24>:    ldrb    r6, [r1, r1]
   0xb6fff3c2 <+26>:    strb    r4, [r2, r1]
   0xb6fff3c4 <+28>:    lsls    r4, r2, #17
   0xb6fff3c6 <+30>:    movs    r6, r2
   0xb6fff3c8 <+32>:    ldr     r3, [r0, #0]
   0xb6fff3ca <+34>:    lsls    r4, r3, #31
   0xb6fff3cc <+36>:    bmi.n   0xb6fff3c8 <__vdso_clock_gettime+32>

@mkeeler commented Jul 15, 2019

Both the arm32 and arm64 implementations of sigtramp load the g value conditionally, based on the cgo check.

Arm 64:

MOVBU runtime·iscgo(SB), R0
CMP $0, R0
BEQ 2(PC)
BL runtime·load_g(SB)

Arm 32:

MOVB runtime·iscgo(SB), R0
CMP $0, R0
BL.NE runtime·load_g(SB)

When debugging I also set some conditional breakpoints at that line in the arm32 code, to break when r0 was 0 and r10 was 0x1. They were hit, so the branch into load_g to load g from TLS was not being taken.

So the nanotime and walltime assembly functions could call save_g, but then sigtramp would need modifications to force it to call load_g.

Note: to ensure my conditional breakpoint was working as expected I also set a breakpoint here:

MRC 15, 0, R0, C13, C0, 3 // fetch TLS base pointer

In the segfault cases, sigtramp never took the branch where load_g is invoked.

@cherrymui (Member)

If I understand correctly we only have TLS set up if cgo is used. When cgo is not used, I'm not sure where we can save the g register...

@mkeeler commented Jul 15, 2019

Ah, okay. I was thinking that check was for whether Go was in the middle of executing a cgo function, not whether cgo was enabled or disabled.

That would make sense because Consul gets compiled with cgo disabled. I will try with it enabled and see if the problem goes away.

@cherrymui (Member)

Enabling cgo probably doesn't solve the problem by itself, because we still don't save the g register in runtime.walltime and runtime.nanotime.

@mkeeler commented Jul 15, 2019

@cherrymui The following code can reproduce the segfault:

package main

import (
	"fmt"
	"html"
	"log"
	"net/http"
	_ "net/http/pprof"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
	})

	http.HandleFunc("/hi", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hi")
	})

	go func() {
		for {
			time.Now()
			//time.Sleep(1 * time.Millisecond)
		}
	}()

	log.Fatal(http.ListenAndServe(":8500", nil))
}

Compile with:

CGO_ENABLED=0 go build .

Tested With: go 1.12.7

Reproduction:

  1. Run the binary built from that source
  2. Run the following script:
#!/bin/bash
for ((i=1;i<=4;i++))
do
    echo "Round $i"
    curl -s http://localhost:8500/debug/pprof/heap >/dev/null
    curl -s http://localhost:8500/debug/pprof/profile?seconds=30 >/dev/null &
    curl -s http://localhost:8500/debug/pprof/trace?seconds=30 >/dev/null &
    curl -s http://localhost:8500/debug/pprof/goroutine >/dev/null

    while test "$(jobs | wc -l)" -gt 0
    do
        sleep 1
        jobs
        jobs | wc -l
    done
done

Using the pprof debug endpoints causes various profiling timers to be set up, which eventually deliver SIGPROF signals asynchronously.

@ianlancetaylor (Contributor) commented Aug 2, 2019

@mkeeler How long does it take your program to crash?

I thought I would be able to reproduce the problem with this function, but so far no luck on the arm and arm64 builders.

// (imports needed: fmt, io/ioutil, os, runtime/pprof, time)
func TimeProfNow() {
	f, err := ioutil.TempFile("", "timeprofnow")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	if err := pprof.StartCPUProfile(f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	t0 := time.Now()
	t1 := t0
	// We should get a profiling signal 100 times a second,
	// so running for 10 seconds should be sufficient.
	for t1.Sub(t0) < 10*time.Second {
		t1 = time.Now()
	}

	pprof.StopCPUProfile()

	name := f.Name()
	if err := f.Close(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if err := os.Remove(name); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
}

@mkeeler commented Aug 3, 2019

For me it's within 30 seconds, and often much quicker.

@ianlancetaylor (Contributor) commented Aug 9, 2019

I'm still looking at this, but I'm removing the release-blocker label because this is not a regression.

@dmitshur (Contributor) commented Aug 9, 2019

I see.

The environment configurations used to create images for the linux-arm and linux-arm64 builders are in these directories:

From a quick look, it seems they both use pre-made Docker images as a base:

In order to turn on VDSO, I understand we'd either have to find alternative images to use, or create our own.

@ianlancetaylor (Contributor)

What if, as a first step, we try updating from xenial to bionic and see if that changes anything? Is that hard or inadvisable? I really know nothing about this.

@dmitshur (Contributor) commented Aug 9, 2019

That sounds like a good first step to try. It's neither hard nor inadvisable, but it takes some work. Updating a builder image is a step we need to do whenever updating a builder to a new OS version, or when making kernel-level adjustments such as this one (we've done similar adjustments in the past to enable SMT in order to improve performance, see CL 145022). There's a relatively fixed amount of work to do each time.

Do you mind opening a separate issue for the task of updating the builder from xenial to bionic so we can track it there?

@ianlancetaylor (Contributor)

Opened #33574.

@jing-rui (Author)

We created a repo, https://github.com/jing-rui/mypipe, which can reproduce the crash.

  1. git clone https://github.com/jing-rui/mypipe
  2. make # build mypipe
  3. ./gencore.sh # run mypipe

On our platform (built with go version go1.12.7 linux/arm64), we open about 5 terminals and run ./gencore.sh in each; it crashes in less than 30 minutes, normally around 10. An example traceback is at https://github.com/jing-rui/mypipe/tree/master/crash-traceback
beg=2019-08-31-14-01-47.594 now=2019-08-31-14-06-05.665 loop=31607 ret=0
beg=2019-08-31-14-01-47.594 now=2019-08-31-14-06-05.673 loop=31608 ret=0
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x0]

runtime stack:
runtime.throw(0xee229, 0x2a)
/usr/local/go/src/runtime/panic.go:617 +0x54 fp=0x400011ddf0 sp=0x400011ddc0 pc=0x384b4
runtime: unexpected return pc for runtime.sigpanic called from 0x0
stack: frame={sp:0x400011ddf0, fp:0x400011de20} stack=[0x400011c000,0x400011e000)
...

We find that it is easy to reproduce when the loop is outside of mypipe.go, i.e. when gencore.sh does the looping. If we put the loop inside mypipe.go as below, it does not crash even after a long test:

func main() {
	args := len(os.Args)
	if args != 2 {
		loop(1) // such as using loop(1000000)
		return
	}

@andybons modified the milestones: Go1.13, Go1.14 Sep 3, 2019
nyuichi added a commit to nyuichi/go that referenced this issue Sep 3, 2019
The crash occurs when the Go runtime calls a VDSO function (say
__vdso_clock_gettime) and a signal arrives at that thread.
Since VDSO functions temporarily destroy the G register (R10),
Go functions asynchronously executed in that thread (i.e. Go's signal
handler) can try to load data from the destroyed G, which causes a
segmentation fault.
@gopherbot (Contributor)

Change https://golang.org/cl/192937 mentions this issue: runtime: fix crash during VDSO calls on arm

nyuichi added further commits to nyuichi/go referencing this issue between Sep 5 and Sep 11, 2019, each with the same message as above.

eikenb added a commit to hashicorp/consul-template that referenced this issue Sep 10, 2019

Enable CGO for all arm/arm64 builds to address Go bug.

golang/go#32912

Also restrict arm(64) builds to Linux only as it is the only one anyone
is using.
eikenb added a commit to hashicorp/envconsul that referenced this issue Dec 18, 2019
Build all arm binaries with CGO enabled to address a golang bug.

golang/go#32912

Includes arm64 to standardize the supported platforms with those that
consul-template supports.

Fixes #227
@golang locked and limited conversation to collaborators Sep 10, 2020