not ok 15 checkpoint --lazy-pages and restore (Fedora 33, runc restore fails) #2805

Closed · kolyshkin opened this issue Feb 10, 2021 · 19 comments · Fixed by #2881

Comments
@kolyshkin (Contributor)

This one is from Fedora 33, not CentOS 7 (from https://github.com/opencontainers/runc/runs/1875101628?check_suite_focus=true):

not ok 15 checkpoint --lazy-pages and restore
# (from function `fail' in file tests/integration/helpers.bash, line 270,
#  from function `runc_restore_with_pipes' in file tests/integration/checkpoint.bats, line 84,
#  in test file tests/integration/checkpoint.bats, line 208)
#   `runc_restore_with_pipes ./image-dir test_busybox_restore --lazy-pages' failed
# runc spec (status=0):
# 
# uffd-noncoop is supported
# runc state test_busybox (status=0):
# {
#   "ociVersion": "1.0.2-dev",
#   "id": "test_busybox",
#   "pid": 27205,
#   "status": "running",
#   "bundle": "/tmp/busyboxtest",
#   "rootfs": "/tmp/busyboxtest/rootfs",
#   "created": "2021-02-10T22:11:17.391758735Z",
#   "owner": ""
# }
# __runc restore test_busybox_restore failed (status: 1)
# time="2021-02-10T22:11:17Z" level=error msg="criu failed: type NOTIFY errno 0\nlog file: image-dir/restore.log"
# CRIU restore log errors (if any):
# (00.079314) 27265 (native) is going to execute the syscall 11, required is 15
# (00.079328) 27265 was trapped
# (00.079331) `- Expecting exit
# (00.079343) 27265 was trapped
# (00.079347) 27265 (native) is going to execute the syscall 18446744073709551615, required is 15
# (00.079358) Error (compel/src/lib/infect.c:1513): Task 27265 is in unexpected state: f7f
# (00.079378) Error (compel/src/lib/infect.c:1520): Task stopped with 15: Terminated
# (00.079381) Error (criu/cr-restore.c:2416): Can't stop all tasks on rt_sigreturn
# (00.079383) Error (criu/cr-restore.c:2454): Killing processes because of failure on restore.
# The Network was unlocked so some data or a connection may have been lost.
# (00.083468) Error (criu/mount.c:3396): mnt: Can't remove the directory /tmp/.criu.mntns.nxeT6J: No such file or directory
# (00.083478) Error (criu/cr-restore.c:2483): Restoring FAILED.
# runc restore failed
@kolyshkin (Contributor, Author)

@avagin @adrianreber any ideas?

@kolyshkin (Contributor, Author)

This is quite reproducible even on my machine -- I just set it up to run this test 400 times.
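For reference, a minimal sketch of such a repeat loop (hypothetical -- the actual run wrapped the test body; `--filter` is the bats-core option that selects tests by name):

```bash
#!/bin/bash
# Re-run the lazy-pages checkpoint/restore test until it fails.
for i in $(seq 1 400); do
    echo "-- iteration $i of 400 --"
    bats --filter 'lazy-pages' tests/integration/checkpoint.bats || exit 1
done
```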

#  -- iteration 222 of 400 -- 
# runc state test_busybox (status=0):
# {
#   "ociVersion": "1.0.2-dev",
#   "id": "test_busybox",
#   "pid": 1935413,
#   "status": "running",
#   "bundle": "/tmp/busyboxtest",
#   "rootfs": "/tmp/busyboxtest/rootfs",
#   "created": "2021-02-11T03:11:05.698563777Z",
#   "owner": ""
# }
# __runc restore test_busybox_restore failed (status: 1)
# time="2021-02-10T19:11:06-08:00" level=error msg="criu failed: type NOTIFY errno 0\nlog file: image-dir/restore.log"
# CRIU restore log errors (if any):
# (00.114416) 1935477 (native) is going to execute the syscall 11, required is 15
# (00.114440) 1935477 was trapped
# (00.114443) `- Expecting exit
# (00.114459) 1935477 was trapped
# (00.114462) 1935477 (native) is going to execute the syscall 18446744073709551615, required is 15
# (00.114478) Error (compel/src/lib/infect.c:1513): Task 1935477 is in unexpected state: f7f
# (00.114499) Error (compel/src/lib/infect.c:1520): Task stopped with 15: Terminated
# (00.114502) Error (criu/cr-restore.c:2416): Can't stop all tasks on rt_sigreturn
# (00.114504) Error (criu/cr-restore.c:2454): Killing processes because of failure on restore.
# The Network was unlocked so some data or a connection may have been lost.
# (00.115057) Error (criu/mount.c:3396): mnt: Can't remove the directory /tmp/.criu.mntns.Sih9FO: No such file or directory
# (00.115068) Error (criu/cr-restore.c:2483): Restoring FAILED.
# runc restore failed
ok 5 checkpoint and restore in external network namespace
ok 6 checkpoint and restore with container specific CRIU config
make: *** [Makefile:86: localintegration] Error 1

@kolyshkin (Contributor, Author)

kolyshkin commented Feb 11, 2021

It seems the task being restored receives a SIGTERM, possibly because of systemd. The same cgroup is used for all tests, and there might be a case where runc asks systemd to remove the cgroup, hits the timeout (1 second), and continues anyway.

time="2021-02-10T19:11:05-08:00" level=warning msg="Timed out while waiting for StopUnit(runc-test_busybox.scope) completion signal from dbus. Continuing..."
time="2021-02-10T19:11:05-08:00" level=error msg="remove /sys/fs/cgroup/system.slice/runc-test_busybox.scope: device or resource busy"

So bats runs the next test case while systemd (allegedly) is still trying to remove the cgroup, terminating the processes in it.

A potential workaround is in #2757 (its last commit adds a random number to the cgroup name).
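A minimal sketch of that kind of randomization, assuming a `set_cgroups_path`/`update_config` pair along the lines of the helpers in tests/integration/helpers.bash (the names here are illustrative, not the exact #2757 diff):

```bash
# Give each test its own cgroup (and systemd unit) name, so that a delayed
# systemd teardown of the previous test's scope cannot kill processes that
# belong to the current test.
set_cgroups_path() {
    local rnd="$RANDOM"   # bash built-in pseudo-random suffix
    if [ -v RUNC_USE_SYSTEMD ]; then
        # systemd cgroup driver uses the "slice:prefix:name" form.
        OCI_CGROUPS_PATH="machine.slice:runc-test:${rnd}"
    else
        OCI_CGROUPS_PATH="/runc-cgroups-integration-test-${rnd}"
    fi
    update_config '.linux.cgroupsPath = "'"${OCI_CGROUPS_PATH}"'"'
}
```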

@kolyshkin (Contributor, Author)

kolyshkin commented Feb 11, 2021

Randomizing cgroup names and systemd unit names did not help:

# __runc restore test_busybox_restore failed (status: 1)
# time="2021-02-11T04:31:20Z" level=error msg="criu failed: type NOTIFY errno 0\nlog file: image-dir/restore.log"
# CRIU restore log errors (if any):
# (00.076078) 48521 (native) is going to execute the syscall 11, required is 15
# (00.076085) 48521 was trapped
# (00.076087) `- Expecting exit
# (00.076095) 48521 was trapped
# (00.076097) 48521 (native) is going to execute the syscall 18446744073709551615, required is 15
# (00.076104) Error (compel/src/lib/infect.c:1513): Task 48521 is in unexpected state: f7f
# (00.076122) Error (compel/src/lib/infect.c:1520): Task stopped with 15: Terminated
# (00.076124) Error (criu/cr-restore.c:2416): Can't stop all tasks on rt_sigreturn
# (00.076126) Error (criu/cr-restore.c:2454): Killing processes because of failure on restore.
# The Network was unlocked so some data or a connection may have been lost.
# (00.078238) Error (criu/mount.c:3396): mnt: Can't remove the directory /tmp/.criu.mntns.PZAiJZ: No such file or directory
# (00.078246) Error (criu/cr-restore.c:2483): Restoring FAILED.

@avagin (Contributor)

avagin commented Feb 11, 2021

Could you try to trace signals and find out who kills the restored process?

echo 1 > /sys/kernel/debug/tracing/events/signal/enable
Run the tests
cat /sys/kernel/debug/tracing/trace
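Expanded into a self-contained sketch (assuming the standard ftrace `signal_generate` event, whose records include the sending task's comm/pid and `sig=15` for SIGTERM):

```bash
#!/bin/bash
# Catch whoever sends SIGTERM to the restored task via ftrace signal events.
TRACING=/sys/kernel/debug/tracing

echo 1 > "$TRACING/events/signal/signal_generate/enable"
echo > "$TRACING/trace"    # clear the ring buffer before the run

bats -t tests/integration/checkpoint.bats   # reproduce the failure

# Each signal_generate record names the sending task; sig=15 is SIGTERM.
grep 'signal_generate:.*sig=15' "$TRACING/trace"
echo 0 > "$TRACING/events/signal/signal_generate/enable"
```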

@kolyshkin (Contributor, Author)

For some reason I can't repro it locally anymore.

With tracing on, I ran the test 5 times, 400 iterations each, and never caught it.

With tracing off, I ran it 3 times, also 400 iterations each, and never caught it either 😞

@kolyshkin (Contributor, Author)

As we're restoring the container under a different name but into the same cgroup as the original container, there might be some kind of race there (just a wild guess -- I still don't have a clear understanding of what's happening). Maybe restoring the container into a different cgroup would help. I'd try that, but I can't repro :(
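If anyone can reproduce it, a quick way to test that theory might be a `jq` edit of the standard OCI `linux.cgroupsPath` field before the restore (a sketch; `$bundle` is a placeholder for the test bundle directory holding config.json):

```bash
# Restore into a dedicated cgroup instead of the original container's one.
cd "$bundle"
jq '.linux.cgroupsPath = "/runc-test_busybox_restore"' config.json \
    > config.json.tmp && mv config.json.tmp config.json
runc restore --image-path ./image-dir test_busybox_restore
```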

@kolyshkin (Contributor, Author)

One more failure on Fedora, this time from #2829 (logs at https://github.com/opencontainers/runc/pull/2829/checks?check_run_id=2011827865). Same symptoms:

not ok 16 checkpoint --lazy-pages and restore
# (from function `fail' in file tests/integration/helpers.bash, line 266,
#  from function `runc_restore_with_pipes' in file tests/integration/checkpoint.bats, line 82,
#  in test file tests/integration/checkpoint.bats, line 206)
#   `runc_restore_with_pipes ./image-dir test_busybox_restore --lazy-pages' failed
# runc spec (status=0):
# 
# uffd-noncoop is supported
# runc state test_busybox (status=0):
# {
#   "ociVersion": "1.0.2-dev",
#   "id": "test_busybox",
#   "pid": 26732,
#   "status": "running",
#   "bundle": "/tmp/bats-run-24604/runc.8cgUD6/bundle",
#   "rootfs": "/tmp/bats-run-24604/runc.8cgUD6/bundle/rootfs",
#   "created": "2021-03-02T09:31:39.985666214Z",
#   "owner": ""
# }
# __runc restore test_busybox_restore failed (status: 1)
# time="2021-03-02T09:31:40Z" level=error msg="criu failed: type NOTIFY errno 0\nlog file: image-dir/restore.log"
# CRIU restore log errors (if any):
# (00.071368) 26792 (native) is going to execute the syscall 11, required is 15
# (00.071375) 26792 was trapped
# (00.071376) `- Expecting exit
# (00.071382) 26792 was trapped
# (00.071385) 26792 (native) is going to execute the syscall 18446744073709551615, required is 15
# (00.071391) Error (compel/src/lib/infect.c:1513): Task 26792 is in unexpected state: f7f
# (00.071404) Error (compel/src/lib/infect.c:1520): Task stopped with 15: Terminated
# (00.071406) Error (criu/cr-restore.c:2416): Can't stop all tasks on rt_sigreturn
# (00.071408) Error (criu/cr-restore.c:2454): Killing processes because of failure on restore.
# The Network was unlocked so some data or a connection may have been lost.
# (00.072493) Error (criu/mount.c:3396): mnt: Can't remove the directory /tmp/.criu.mntns.u6veq1: No such file or directory
# (00.072500) Error (criu/cr-restore.c:2483): Restoring FAILED.
# runc restore failed
ok 17 checkpoint and restore in external network namespace

@kolyshkin (Contributor, Author)

kolyshkin commented Mar 26, 2021

Got the full log this time (from https://github.com/opencontainers/runc/runs/2193862344?check_suite_focus=true):

show/hide log ``` ok 15 checkpoint --pre-dump and restore not ok 16 checkpoint --lazy-pages and restore # (from function `fail' in file tests/integration/helpers.bash, line 266, # from function `runc_restore_with_pipes' in file tests/integration/checkpoint.bats, line 82, # in test file tests/integration/checkpoint.bats, line 206) # `runc_restore_with_pipes ./image-dir test_busybox_restore --lazy-pages' failed # runc spec (status=0): # # uffd-noncoop is supported # runc state test_busybox (status=0): # { # "ociVersion": "1.0.2-dev", # "id": "test_busybox", # "pid": 26757, # "status": "running", # "bundle": "/tmp/bats-run-24631/runc.VlgKY0/bundle", # "rootfs": "/tmp/bats-run-24631/runc.VlgKY0/bundle/rootfs", # "created": "2021-03-25T14:22:43.471403838Z", # "owner": "" # } # (00.000012) Version: 3.15 (gitid 0) # (00.000021) Running on localhost.localdomain Linux 5.8.15-301.fc33.x86_64 #1 SMP Thu Oct 15 16:58:06 UTC 2020 x86_64 # (00.000024) Would overwrite RPC settings with values from /etc/criu/runc.conf # (00.000036) Loaded kdat cache from /run/criu/criu.kdat # (00.000071) ======================================== # (00.000073) Dumping processes (pid: 26757) # (00.000074) ======================================== # (00.000080) rlimit: RLIMIT_NOFILE unlimited for self # (00.000086) Running pre-dump scripts # (00.000088) RPC # (00.000266) irmap: Searching irmap cache in work dir # (00.000279) No irmap-cache image # (00.000281) irmap: Searching irmap cache in parent # (00.000284) irmap: No irmap cache # (00.000613) cpu: x86_family 6 x86_vendor_id GenuineIntel x86_model_id Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz # (00.000734) cpu: fpu: xfeatures_mask 0x5 xsave_size 832 xsave_size_max 832 xsaves_size 0 # (00.000758) cpu: fpu: x87 floating point registers xstate_offsets 0 / 0 xstate_sizes 160 / 160 # (00.000761) cpu: fpu: AVX registers xstate_offsets 576 / 576 xstate_sizes 256 / 256 # (00.000763) cpu: fpu:1 fxsr:1 xsave:1 xsaveopt:0 xsavec:0 xgetbv1:0 xsaves:0 # (00.000897) cg-prop: Parsing controller "cpu" # (00.000901) cg-prop: Strategy "replace" # (00.000903) cg-prop: Property "cpu.shares" # (00.000905) cg-prop: Property "cpu.cfs_period_us" # (00.000906) cg-prop: Property "cpu.cfs_quota_us" # (00.000907) cg-prop: Property "cpu.rt_period_us" # (00.000908) cg-prop: Property "cpu.rt_runtime_us" # (00.000910) cg-prop: Parsing controller "memory" # (00.000911) cg-prop: Strategy "replace" # (00.000912) cg-prop: Property "memory.limit_in_bytes" # (00.000913) cg-prop: Property "memory.memsw.limit_in_bytes" # (00.000914) cg-prop: Property "memory.swappiness" # (00.000916) cg-prop: Property "memory.soft_limit_in_bytes" # (00.000917) cg-prop: Property "memory.move_charge_at_immigrate" # (00.000918) cg-prop: Property "memory.oom_control" # (00.000919) cg-prop: Property "memory.use_hierarchy" # (00.000920) cg-prop: Property "memory.kmem.limit_in_bytes" # (00.000921) cg-prop: Property "memory.kmem.tcp.limit_in_bytes" # (00.000923) cg-prop: Parsing controller "cpuset" # (00.000924) cg-prop: Strategy "replace" # (00.000925) cg-prop: Property "cpuset.cpus" # (00.000926) cg-prop: Property "cpuset.mems" # (00.000927) cg-prop: Property "cpuset.memory_migrate" # (00.000929) cg-prop: Property "cpuset.cpu_exclusive" # (00.000930) cg-prop: Property "cpuset.mem_exclusive" # (00.000931) cg-prop: Property "cpuset.mem_hardwall" # (00.000932) cg-prop: Property "cpuset.memory_spread_page" # (00.000933) cg-prop: Property "cpuset.memory_spread_slab" # (00.000934) cg-prop: Property "cpuset.sched_load_balance" # (00.000935) 
cg-prop: Property "cpuset.sched_relax_domain_level" # (00.000937) cg-prop: Parsing controller "blkio" # (00.000938) cg-prop: Strategy "replace" # (00.000939) cg-prop: Property "blkio.weight" # (00.000940) cg-prop: Parsing controller "freezer" # (00.000941) cg-prop: Strategy "replace" # (00.000943) cg-prop: Parsing controller "perf_event" # (00.000944) cg-prop: Strategy "replace" # (00.000945) cg-prop: Parsing controller "net_cls" # (00.000946) cg-prop: Strategy "replace" # (00.000947) cg-prop: Property "net_cls.classid" # (00.000949) cg-prop: Parsing controller "net_prio" # (00.000950) cg-prop: Strategy "replace" # (00.000951) cg-prop: Property "net_prio.ifpriomap" # (00.000952) cg-prop: Parsing controller "pids" # (00.000953) cg-prop: Strategy "replace" # (00.000955) cg-prop: Property "pids.max" # (00.000956) cg-prop: Parsing controller "devices" # (00.000957) cg-prop: Strategy "replace" # (00.000958) cg-prop: Property "devices.list" # (00.000982) Preparing image inventory (version 1) # (00.001003) Add pid ns 1 pid 26788 # (00.001011) Add net ns 2 pid 26788 # (00.001017) Add ipc ns 3 pid 26788 # (00.001023) Add uts ns 4 pid 26788 # (00.001029) Add time ns 5 pid 26788 # (00.001039) Add mnt ns 6 pid 26788 # (00.001045) Add user ns 7 pid 26788 # (00.001051) Add cgroup ns 8 pid 26788 # (00.001053) cg: Dumping cgroups for 26788 # (00.001064) cg: `- New css ID 1 # (00.001066) cg: `- [] -> [/user.slice/user-1000.slice/session-11.scope] [0] # (00.001070) cg: Set 1 is criu one # (00.001091) Detected cgroup V2 freezer # (00.001092) freezing processes: 100000 attempts with 100 ms steps # (00.001103) cgroup.freeze=0 # (00.001150) cgroup.freeze=1 # (00.101810) cgroup.freeze=1 # (00.101866) freezing processes: 1 attempts done # (00.101951) SEIZE 26757: success # (00.102308) Seized task 26757, state 1 # (00.102312) seccomp: Collected tid_real 26757 mode 0 # (00.102327) Collected (0 attempts, 0 in_progress) # (00.102366) Collected (0 attempts, 0 in_progress) # (00.102373) Collected 26757 in 1 state # (00.102398) Will take pid namespace in the image # (00.102400) Add pid ns 9 pid 26757 # (00.102411) Will take net namespace in the image # (00.102413) Add net ns 10 pid 26757 # (00.102419) Will take ipc namespace in the image # (00.102421) Add ipc ns 11 pid 26757 # (00.102426) Will take uts namespace in the image # (00.102428) Add uts ns 12 pid 26757 # (00.102442) Will take mnt namespace in the image # (00.102443) Add mnt ns 13 pid 26757 # (00.102452) Will take cgroup namespace in the image # (00.102454) Add cgroup ns 14 pid 26757 # (00.102456) Lock network # (00.102457) Running network-lock scripts # (00.102458) RPC # (00.110225) type tmpfs source tmpfs mnt_id 342 s_dev 0x20 /bats-run-24631/runc.VlgKY0/bundle/rootfs @ ./ flags 0x1080001 options seclabel,size=1013612k,nr_inodes=409600 # (00.110280) type proc source proc mnt_id 343 s_dev 0x27 / @ ./proc flags 0x200000 options # (00.110299) type tmpfs source tmpfs mnt_id 344 s_dev 0x28 / @ ./dev flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110309) type devpts source devpts mnt_id 345 s_dev 0x29 / @ ./dev/pts flags 0x20000a options seclabel,gid=5,mode=620,ptmxmode=666 # (00.110327) type tmpfs source shm mnt_id 346 s_dev 0x2a / @ ./dev/shm flags 0x20000e options seclabel,size=65536k # (00.110333) type mqueue source mqueue mnt_id 347 s_dev 0x26 / @ ./dev/mqueue flags 0x20000e options seclabel # (00.110338) type sysfs source sysfs mnt_id 348 s_dev 0x2b / @ ./sys flags 0x20000f options seclabel # (00.110358) type cgroup2 source cgroup mnt_id 349 
s_dev 0x1a /system.slice/runc-test_busybox.scope @ ./sys/fs/cgroup flags 0x20000f options seclabel # (00.110374) type proc source proc mnt_id 321 s_dev 0x27 /bus @ ./proc/bus flags 0x200001 options # (00.110378) type proc source proc mnt_id 322 s_dev 0x27 /fs @ ./proc/fs flags 0x200001 options # (00.110382) type proc source proc mnt_id 323 s_dev 0x27 /irq @ ./proc/irq flags 0x200001 options # (00.110386) type proc source proc mnt_id 324 s_dev 0x27 /sys @ ./proc/sys flags 0x200001 options # (00.110404) type proc source proc mnt_id 325 s_dev 0x27 /sysrq-trigger @ ./proc/sysrq-trigger flags 0x200001 options # (00.110408) type tmpfs source tmpfs mnt_id 326 s_dev 0x2c / @ ./proc/acpi flags 0x200001 options seclabel # (00.110414) type tmpfs source tmpfs mnt_id 327 s_dev 0x28 /null @ ./proc/kcore flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110421) type tmpfs source tmpfs mnt_id 328 s_dev 0x28 /null @ ./proc/keys flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110427) type tmpfs source tmpfs mnt_id 329 s_dev 0x28 /null @ ./proc/latency_stats flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110432) type tmpfs source tmpfs mnt_id 330 s_dev 0x28 /null @ ./proc/timer_list flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110437) type tmpfs source tmpfs mnt_id 331 s_dev 0x28 /null @ ./proc/sched_debug flags 0x1000002 options seclabel,size=65536k,mode=755 # (00.110441) type tmpfs source tmpfs mnt_id 332 s_dev 0x2d / @ ./sys/firmware flags 0x200001 options seclabel # (00.110446) type tmpfs source tmpfs mnt_id 333 s_dev 0x2e / @ ./proc/scsi flags 0x200001 options seclabel # (00.110452) mnt: Building mountpoints tree # (00.110453) mnt: Building plain mount tree # (00.110454) mnt: Working on 333->343 # (00.110456) mnt: Working on 332->348 # (00.110457) mnt: Working on 331->343 # (00.110458) mnt: Working on 330->343 # (00.110459) mnt: Working on 329->343 # (00.110460) mnt: Working on 328->343 # (00.110461) mnt: Working on 327->343 # (00.110462) mnt: Working on 326->343 # (00.110467) mnt: Working on 325->343 # (00.110469) mnt: Working on 324->343 # (00.110470) mnt: Working on 323->343 # (00.110471) mnt: Working on 322->343 # (00.110472) mnt: Working on 321->343 # (00.110473) mnt: Working on 349->348 # (00.110474) mnt: Working on 348->342 # (00.110475) mnt: Working on 347->344 # (00.110476) mnt: Working on 346->344 # (00.110477) mnt: Working on 345->344 # (00.110478) mnt: Working on 344->342 # (00.110480) mnt: Working on 343->342 # (00.110481) mnt: Working on 342->320 # (00.110482) mnt: Resorting children of 342 in mount order # (00.110483) mnt: Resorting children of 348 in mount order # (00.110484) mnt: Resorting children of 349 in mount order # (00.110485) mnt: Resorting children of 332 in mount order # (00.110486) mnt: Resorting children of 344 in mount order # (00.110488) mnt: Resorting children of 347 in mount order # (00.110489) mnt: Resorting children of 346 in mount order # (00.110490) mnt: Resorting children of 345 in mount order # (00.110491) mnt: Resorting children of 343 in mount order # (00.110493) mnt: Resorting children of 333 in mount order # (00.110494) mnt: Resorting children of 331 in mount order # (00.110495) mnt: Resorting children of 330 in mount order # (00.110496) mnt: Resorting children of 329 in mount order # (00.110497) mnt: Resorting children of 328 in mount order # (00.110498) mnt: Resorting children of 327 in mount order # (00.110499) mnt: Resorting children of 326 in mount order # (00.110500) mnt: Resorting children of 325 in mount 
order # (00.110501) mnt: Resorting children of 324 in mount order # (00.110502) mnt: Resorting children of 323 in mount order # (00.110503) mnt: Resorting children of 322 in mount order # (00.110504) mnt: Resorting children of 321 in mount order # (00.110505) mnt: Done: # (00.110507) mnt: [./](342->320) # (00.110508) mnt: [./sys](348->342) # (00.110509) mnt: [./sys/fs/cgroup](349->348) # (00.110511) mnt: <-- # (00.110512) mnt: [./sys/firmware](332->348) # (00.110513) mnt: <-- # (00.110514) mnt: <-- # (00.110515) mnt: [./dev](344->342) # (00.110516) mnt: [./dev/mqueue](347->344) # (00.110517) mnt: <-- # (00.110518) mnt: [./dev/shm](346->344) # (00.110520) mnt: <-- # (00.110521) mnt: [./dev/pts](345->344) # (00.110522) mnt: <-- # (00.110523) mnt: <-- # (00.110524) mnt: [./proc](343->342) # (00.110525) mnt: [./proc/scsi](333->343) # (00.110526) mnt: <-- # (00.110527) mnt: [./proc/sched_debug](331->343) # (00.110528) mnt: <-- # (00.110529) mnt: [./proc/timer_list](330->343) # (00.110531) mnt: <-- # (00.110532) mnt: [./proc/latency_stats](329->343) # (00.110533) mnt: <-- # (00.110534) mnt: [./proc/keys](328->343) # (00.110535) mnt: <-- # (00.110536) mnt: [./proc/kcore](327->343) # (00.110537) mnt: <-- # (00.110538) mnt: [./proc/acpi](326->343) # (00.110539) mnt: <-- # (00.110540) mnt: [./proc/sysrq-trigger](325->343) # (00.110542) mnt: <-- # (00.110543) mnt: [./proc/sys](324->343) # (00.110544) mnt: <-- # (00.110545) mnt: [./proc/irq](323->343) # (00.110546) mnt: <-- # (00.110547) mnt: [./proc/fs](322->343) # (00.110548) mnt: <-- # (00.110549) mnt: [./proc/bus](321->343) # (00.110550) mnt: <-- # (00.110551) mnt: <-- # (00.110552) mnt: <-- # (00.110560) mnt: Found /dev/null mapping for ./proc/sched_debug mountpoint # (00.110561) mnt: Found /dev/null mapping for ./proc/timer_list mountpoint # (00.110563) mnt: Found /dev/null mapping for ./proc/latency_stats mountpoint # (00.110564) mnt: Found /dev/null mapping for ./proc/keys mountpoint # (00.110565) mnt: Found /dev/null mapping for ./proc/kcore mountpoint # (00.110570) mnt: Inspecting sharing on 333 shared_id 0 master_id 0 (@./proc/scsi) # (00.110572) mnt: Inspecting sharing on 332 shared_id 0 master_id 0 (@./sys/firmware) # (00.110573) mnt: Inspecting sharing on 331 shared_id 0 master_id 0 (@./proc/sched_debug) # (00.110575) mnt: The mount 330 is bind for 331 (@./proc/timer_list -> @./proc/sched_debug) # (00.110576) mnt: The mount 329 is bind for 331 (@./proc/latency_stats -> @./proc/sched_debug) # (00.110578) mnt: The mount 328 is bind for 331 (@./proc/keys -> @./proc/sched_debug) # (00.110579) mnt: The mount 327 is bind for 331 (@./proc/kcore -> @./proc/sched_debug) # (00.110582) mnt: The mount 344 is bind for 331 (@./dev -> @./proc/sched_debug) # (00.110583) mnt: Inspecting sharing on 330 shared_id 0 master_id 0 (@./proc/timer_list) # (00.110584) mnt: Inspecting sharing on 329 shared_id 0 master_id 0 (@./proc/latency_stats) # (00.110586) mnt: Inspecting sharing on 328 shared_id 0 master_id 0 (@./proc/keys) # (00.110587) mnt: Inspecting sharing on 327 shared_id 0 master_id 0 (@./proc/kcore) # (00.110608) mnt: Inspecting sharing on 326 shared_id 0 master_id 0 (@./proc/acpi) # (00.110612) mnt: Inspecting sharing on 325 shared_id 0 master_id 0 (@./proc/sysrq-trigger) # (00.110613) mnt: The mount 324 is bind for 325 (@./proc/sys -> @./proc/sysrq-trigger) # (00.110614) mnt: The mount 323 is bind for 325 (@./proc/irq -> @./proc/sysrq-trigger) # (00.110616) mnt: The mount 322 is bind for 325 (@./proc/fs -> @./proc/sysrq-trigger) # (00.110617) mnt: 
The mount 321 is bind for 325 (@./proc/bus -> @./proc/sysrq-trigger) # (00.110618) mnt: The mount 343 is bind for 325 (@./proc -> @./proc/sysrq-trigger) # (00.110619) mnt: Inspecting sharing on 324 shared_id 0 master_id 0 (@./proc/sys) # (00.110620) mnt: Inspecting sharing on 323 shared_id 0 master_id 0 (@./proc/irq) # (00.110806) mnt: Inspecting sharing on 322 shared_id 0 master_id 0 (@./proc/fs) # (00.110811) mnt: Inspecting sharing on 321 shared_id 0 master_id 0 (@./proc/bus) # (00.110812) mnt: Inspecting sharing on 349 shared_id 0 master_id 0 (@./sys/fs/cgroup) # (00.110813) mnt: Inspecting sharing on 348 shared_id 0 master_id 0 (@./sys) # (00.110815) mnt: Inspecting sharing on 347 shared_id 0 master_id 0 (@./dev/mqueue) # (00.110816) mnt: Inspecting sharing on 346 shared_id 0 master_id 0 (@./dev/shm) # (00.110817) mnt: Inspecting sharing on 345 shared_id 0 master_id 0 (@./dev/pts) # (00.110818) mnt: Inspecting sharing on 344 shared_id 0 master_id 0 (@./dev) # (00.110819) mnt: Inspecting sharing on 343 shared_id 0 master_id 0 (@./proc) # (00.110820) mnt: Inspecting sharing on 342 shared_id 0 master_id -1 (@./) # (00.110824) Collecting netns 10/26757 # (00.110826) Switching to 26757's net for collecting sockets # (00.111000) unix: Collected: ino 82196 peer_ino 0 family 1 type 5 state 7 name (null) # (00.111151) netlink: Collect netlink sock 0x14075 # (00.111154) netlink: Collect netlink sock 0x1410e # (00.111156) netlink: Collect netlink sock 0x1407f # (00.111157) netlink: Collect netlink sock 0x14080 # (00.111158) netlink: Collect netlink sock 0x14076 # (00.111159) netlink: Collect netlink sock 0x14079 # (00.111160) netlink: Collect netlink sock 0x14087 # (00.111161) netlink: Collect netlink sock 0x14077 # (00.111163) netlink: Collect netlink sock 0x14078 # (00.111169) Collecting pidns 9/26757 # (00.111226) ======================================== # (00.111229) Dumping task (pid: 26757) # (00.111231) ======================================== # (00.111232) Obtaining task stat ... 
# (00.111299) # (00.111302) Collecting mappings (pid: 26757) # (00.111304) ---------------------------------------- # (00.111426) Dumping path for -3 fd via self 23 [/bin/sh] # (00.111491) vma 562c895cb000 borrows vfi from previous 562c895bd000 # (00.111500) vma 562c89687000 borrows vfi from previous 562c895cb000 # (00.111508) vma 562c896af000 borrows vfi from previous 562c89687000 # (00.111515) vma 562c896b3000 borrows vfi from previous 562c896af000 # (00.111599) Dumping path for -3 fd via self 23 [/lib/libc.so.6] # (00.111633) vma 7f42e74e0000 borrows vfi from previous 7f42e74be000 # (00.111641) vma 7f42e7628000 borrows vfi from previous 7f42e74e0000 # (00.111648) vma 7f42e7674000 borrows vfi from previous 7f42e7628000 # (00.111669) vma 7f42e7675000 borrows vfi from previous 7f42e7674000 # (00.111677) vma 7f42e7679000 borrows vfi from previous 7f42e7675000 # (00.111710) Dumping path for -3 fd via self 23 [/lib/libresolv.so.2] # (00.111963) vma 7f42e7683000 borrows vfi from previous 7f42e767f000 # (00.112037) vma 7f42e7690000 borrows vfi from previous 7f42e7683000 # (00.112046) vma 7f42e7694000 borrows vfi from previous 7f42e7690000 # (00.112053) vma 7f42e7695000 borrows vfi from previous 7f42e7694000 # (00.112060) vma 7f42e7696000 borrows vfi from previous 7f42e7695000 # (00.112145) Dumping path for -3 fd via self 23 [/lib/libm.so.6] # (00.112181) vma 7f42e76a6000 borrows vfi from previous 7f42e7699000 # (00.112189) vma 7f42e7745000 borrows vfi from previous 7f42e76a6000 # (00.112196) vma 7f42e781a000 borrows vfi from previous 7f42e7745000 # (00.112202) vma 7f42e781b000 borrows vfi from previous 7f42e781a000 # (00.112252) Dumping path for -3 fd via self 23 [/lib/ld-linux-x86-64.so.2] # (00.112302) vma 7f42e781f000 borrows vfi from previous 7f42e781e000 # (00.112315) vma 7f42e783d000 borrows vfi from previous 7f42e781f000 # (00.112322) vma 7f42e7845000 borrows vfi from previous 7f42e783d000 # (00.112372) vma 7f42e7846000 borrows vfi from previous 7f42e7845000 # (00.112421) Collected, longest area occupies 328 pages # (00.112423) 0x562c895bd000-0x562c895cb000 (56K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x1 # (00.112425) 0x562c895cb000-0x562c89687000 (752K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0xe000 reg fp shmid: 0x1 # (00.112427) 0x562c89687000-0x562c896af000 (160K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xca000 reg fp shmid: 0x1 # (00.112429) 0x562c896af000-0x562c896b3000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xf1000 reg fp shmid: 0x1 # (00.112430) 0x562c896b3000-0x562c896b4000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0xf5000 reg fp shmid: 0x1 # (00.112432) 0x562c896b4000-0x562c896b5000 (4K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112433) 0x562c8b4ff000-0x562c8b520000 (132K) prot 0x3 flags 0x22 fdflags 0 st 0x221 off 0 reg heap ap shmid: 0 # (00.112435) 0x7f42e74bb000-0x7f42e74be000 (12K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112436) 0x7f42e74be000-0x7f42e74e0000 (136K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x2 # (00.112438) 0x7f42e74e0000-0x7f42e7628000 (1312K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x22000 reg fp shmid: 0x2 # (00.112440) 0x7f42e7628000-0x7f42e7674000 (304K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x16a000 reg fp shmid: 0x2 # (00.112441) 0x7f42e7674000-0x7f42e7675000 (4K) prot 0 flags 0x2 fdflags 0 st 0x41 off 0x1b6000 reg fp shmid: 0x2 # (00.112443) 0x7f42e7675000-0x7f42e7679000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x1b6000 reg fp shmid: 0x2 # 
(00.112444) 0x7f42e7679000-0x7f42e767b000 (8K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x1ba000 reg fp shmid: 0x2 # (00.112446) 0x7f42e767b000-0x7f42e767f000 (16K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112447) 0x7f42e767f000-0x7f42e7683000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x3 # (00.112449) 0x7f42e7683000-0x7f42e7690000 (52K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x4000 reg fp shmid: 0x3 # (00.112450) 0x7f42e7690000-0x7f42e7694000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x11000 reg fp shmid: 0x3 # (00.112452) 0x7f42e7694000-0x7f42e7695000 (4K) prot 0 flags 0x2 fdflags 0 st 0x41 off 0x15000 reg fp shmid: 0x3 # (00.112453) 0x7f42e7695000-0x7f42e7696000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x15000 reg fp shmid: 0x3 # (00.112455) 0x7f42e7696000-0x7f42e7697000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x16000 reg fp shmid: 0x3 # (00.112456) 0x7f42e7697000-0x7f42e7699000 (8K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112458) 0x7f42e7699000-0x7f42e76a6000 (52K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x4 # (00.112459) 0x7f42e76a6000-0x7f42e7745000 (636K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0xd000 reg fp shmid: 0x4 # (00.112461) 0x7f42e7745000-0x7f42e781a000 (852K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xac000 reg fp shmid: 0x4 # (00.112462) 0x7f42e781a000-0x7f42e781b000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x180000 reg fp shmid: 0x4 # (00.112463) 0x7f42e781b000-0x7f42e781c000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x181000 reg fp shmid: 0x4 # (00.112465) 0x7f42e781c000-0x7f42e781e000 (8K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112466) 0x7f42e781e000-0x7f42e781f000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x5 # (00.112468) 0x7f42e781f000-0x7f42e783d000 (120K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x1000 reg fp shmid: 0x5 # (00.112472) 0x7f42e783d000-0x7f42e7845000 (32K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x1f000 reg fp shmid: 0x5 # (00.112474) 0x7f42e7845000-0x7f42e7846000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x26000 reg fp shmid: 0x5 # (00.112475) 0x7f42e7846000-0x7f42e7847000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x27000 reg fp shmid: 0x5 # (00.112477) 0x7f42e7847000-0x7f42e7848000 (4K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112478) 0x7fff57802000-0x7fff57823000 (132K) prot 0x3 flags 0x122 fdflags 0 st 0x201 off 0 reg ap shmid: 0 # (00.112480) 0x7fff578f1000-0x7fff578f5000 (16K) prot 0x1 flags 0x22 fdflags 0 st 0x1201 off 0 reg vvar ap shmid: 0 # (00.112481) 0x7fff578f5000-0x7fff578f7000 (8K) prot 0x5 flags 0x22 fdflags 0 st 0x209 off 0 reg vdso ap shmid: 0 # (00.112483) 0xffffffffff600000-0xffffffffff601000 (4K) prot 0x5 flags 0x22 fdflags 0 st 0x204 off 0 vsys ap shmid: 0 # (00.112485) ---------------------------------------- # (00.112486) # (00.112487) Collecting fds (pid: 26757) # (00.112488) ---------------------------------------- # (00.112502) Found 3 file descriptors # (00.112504) ---------------------------------------- # (00.112513) Dump private signals of 26757 # (00.112520) Dump shared signals of 26757 # (00.112528) Parasite syscall_ip at 0x562c895cb000 # (00.113497) Set up parasite blob using memfd # (00.113501) Putting parasite blob into 0x7fdd56d6d000->0x7f42e74ab000 # (00.113969) Dumping general registers for 26757 in native mode # (00.113973) Warn (compel/arch/x86/src/lib/infect.c:281): Will restore 26757 with interrupted system 
call # (00.113975) Dumping GP/FPU registers for 26757 # (00.113986) x86: xsave runtime structure # (00.113988) x86: ----------------------- # (00.113989) x86: cwd:0x37f swd:0 twd:0 fop:0 mxcsr:0x1f80 mxcsr_mask:0xffff # (00.113990) x86: magic1:0x46505853 extended_size:836 xstate_bv:0x7 xstate_size:832 # (00.113992) x86: xstate_bv: 0x7 # (00.113993) x86: ----------------------- # (00.113994) Putting tsock into pid 26757 # (00.114388) Wait for parasite being daemonized... # (00.114407) Wait for ack 2 on daemon socket # pie: 1: Running daemon thread leader # pie: 1: __sent ack msg: 2 2 0 # pie: 1: Daemon waits for command # (00.114553) Fetched ack: 2 2 0 # (00.114744) Parasite 26757 has been switched to daemon mode # (00.114753) Sent msg to daemon 72 0 0 # pie: 1: __fetched msg: 72 0 0 # pie: 1: __sent ack msg: 72 72 0 # pie: 1: Daemon waits for command # (00.114931) Wait for ack 72 on daemon socket # (00.114937) Fetched ack: 72 72 0 # (00.114939) vdso: vDSO hint is reliable - omit checking # (00.114995) Sent msg to daemon 74 0 0 # (00.114999) Wait for ack 74 on daemon socket # pie: 1: __fetched msg: 74 0 0 # pie: 1: __sent ack msg: 74 74 0 # pie: 1: Daemon waits for command # (00.115186) Fetched ack: 74 74 0 # (00.115224) Sent msg to daemon 70 0 0 # (00.115227) Wait for ack 70 on daemon socket # pie: 1: __fetched msg: 70 0 0 # pie: 1: __sent ack msg: 70 70 0 # pie: 1: Daemon waits for command # (00.115414) Fetched ack: 70 70 0 # (00.115419) sid=1 pgid=1 pid=1 # (00.115438) # (00.115440) Dumping opened files (pid: 26757) # (00.115441) ---------------------------------------- # (00.115481) Sent msg to daemon 71 0 0 # pie: 1: __fetched msg: 71 0 0 # (00.115745) Wait for ack 71 on daemon socket # pie: 1: __sent ack msg: 71 71 0 # pie: 1: Daemon waits for command # (00.115842) Fetched ack: 71 71 0 # (00.115866) 26757 fdinfo 0: pos: 0 flags: 100000/0 # (00.115873) Dumping pipe 26 with id 0x6 pipe_id 0x13cba # (00.115876) Dumping data from pipe 0x13cba fd 26 # (00.115931) 26757 fdinfo 1: pos: 0 flags: 100001/0 # (00.115935) Dumping pipe 27 with id 0x7 pipe_id 0x13cb9 # (00.115948) 26757 fdinfo 2: pos: 0 flags: 100001/0 # (00.115950) Dumping pipe 28 with id 0x8 pipe_id 0x13cb4 # (00.115954) ---------------------------------------- # (00.116005) Sent msg to daemon 65 0 0 # (00.116009) Wait for ack 65 on daemon socket # pie: 1: __fetched msg: 65 0 0 # pie: 1: __sent ack msg: 65 65 0 # pie: 1: Daemon waits for command # (00.116178) Fetched ack: 65 65 0 # (00.116183) # (00.116184) Dumping pages (type: 58 pid: 26757) # (00.116186) ---------------------------------------- # (00.116187) Private vmas 328/1229 pages # (00.116202) pagemap-cache: created for pid 26757 (takes 4096 bytes) # (00.116204) page-pipe: Create page pipe for 1229 segs # (00.116206) page-pipe: Will grow page pipe (iov off is 0) # (00.116231) pagemap-cache: 26757: filling VMA 562c895bd000-562c895cb000 (56K) [l:562c89400000 h:562c89600000] # (00.116239) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116241) pagemap-cache: 26757: filling VMA 562c895cb000-562c89687000 (752K) [l:562c89400000 h:562c89600000] # (00.116250) page-pipe: Add iov to page pipe (0 iovs, 0/1229 total) # (00.116252) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116253) pagemap-cache: 26757: filling VMA 562c89687000-562c896af000 (160K) [l:562c89600000 h:562c89800000] # (00.116257) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116259) pagemap-cache: 26757: filling VMA 562c896af000-562c896b3000 (16K) [l:562c89600000 h:562c89800000] # (00.116262) page-pipe: Add 
iov to page pipe (1 iovs, 1/1229 total) # (00.116264) Pagemap generated: 4 pages (0 lazy) 0 holes # (00.116265) pagemap-cache: 26757: filling VMA 562c896b3000-562c896b4000 (4K) [l:562c89600000 h:562c89800000] # (00.116268) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116269) pagemap-cache: 26757: filling VMA 562c896b4000-562c896b5000 (4K) [l:562c89600000 h:562c89800000] # (00.116273) page-pipe: Will grow page pipe (iov off is 2) # (00.116277) page-pipe: Add iov to page pipe (0 iovs, 2/1229 total) # (00.116278) Pagemap generated: 1 pages (1 lazy) 0 holes # (00.116280) pagemap-cache: 26757: filling VMA 562c8b4ff000-562c8b520000 (132K) [l:562c8b400000 h:562c8b600000] # (00.116283) page-pipe: Add iov to page pipe (1 iovs, 3/1229 total) # (00.116284) Pagemap generated: 3 pages (3 lazy) 0 holes # (00.116286) pagemap-cache: 26757: filling VMA 7f42e74bb000-7f42e74be000 (12K) [l:7f42e7400000 h:7f42e7600000] # (00.116296) page-pipe: Add iov to page pipe (2 iovs, 4/1229 total) # (00.116298) Pagemap generated: 2 pages (2 lazy) 0 holes # (00.116299) pagemap-cache: 26757: filling VMA 7f42e74be000-7f42e74e0000 (136K) [l:7f42e7400000 h:7f42e7600000] # (00.116304) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116305) pagemap-cache: 26757: filling VMA 7f42e74e0000-7f42e7628000 (1312K) [l:7f42e7400000 h:7f42e7600000] # (00.116319) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116321) pagemap-cache: 26757: filling VMA 7f42e7628000-7f42e7674000 (304K) [l:7f42e7600000 h:7f42e7800000] # (00.116322) pagemap-cache: 26757: 7f42e7628000-7f42e7674000 nr:1 cov:311296 # (00.116324) pagemap-cache: 26757: 7f42e7674000-7f42e7675000 nr:2 cov:315392 # (00.116325) pagemap-cache: 26757: 7f42e7675000-7f42e7679000 nr:3 cov:331776 # (00.116327) pagemap-cache: 26757: 7f42e7679000-7f42e767b000 nr:4 cov:339968 # (00.116328) pagemap-cache: 26757: 7f42e767b000-7f42e767f000 nr:5 cov:356352 # (00.116330) pagemap-cache: 26757: 7f42e767f000-7f42e7683000 nr:6 cov:372736 # (00.116331) pagemap-cache: 26757: 7f42e7683000-7f42e7690000 nr:7 cov:425984 # (00.116332) pagemap-cache: 26757: 7f42e7690000-7f42e7694000 nr:8 cov:442368 # (00.116334) pagemap-cache: 26757: 7f42e7694000-7f42e7695000 nr:9 cov:446464 # (00.116335) pagemap-cache: 26757: 7f42e7695000-7f42e7696000 nr:10 cov:450560 # (00.116336) pagemap-cache: 26757: 7f42e7696000-7f42e7697000 nr:11 cov:454656 # (00.116338) pagemap-cache: 26757: 7f42e7697000-7f42e7699000 nr:12 cov:462848 # (00.116339) pagemap-cache: 26757: 7f42e7699000-7f42e76a6000 nr:13 cov:516096 # (00.116340) pagemap-cache: 26757: 7f42e76a6000-7f42e7745000 nr:14 cov:1167360 # (00.116342) pagemap-cache: 26757: cache mode [l:7f42e7628000 h:7f42e7800000] # (00.116348) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116349) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116350) page-pipe: Will grow page pipe (iov off is 5) # (00.116352) page-pipe: Add iov to page pipe (0 iovs, 5/1229 total) # (00.116355) Pagemap generated: 4 pages (0 lazy) 0 holes # (00.116357) Pagemap generated: 2 pages (0 lazy) 0 holes # (00.116358) page-pipe: Will grow page pipe (iov off is 6) # (00.116360) page-pipe: Add iov to page pipe (0 iovs, 6/1229 total) # (00.116361) page-pipe: Add iov to page pipe (1 iovs, 7/1229 total) # (00.116362) Pagemap generated: 3 pages (3 lazy) 0 holes # (00.116363) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116365) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116366) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116367) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116368) page-pipe: 
Will grow page pipe (iov off is 8) # (00.116369) page-pipe: Add iov to page pipe (0 iovs, 8/1229 total) # (00.116370) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116372) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116373) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116374) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116376) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116377) pagemap-cache: 26757: filling VMA 7f42e7745000-7f42e781a000 (852K) [l:7f42e7600000 h:7f42e7800000] # (00.116384) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116385) pagemap-cache: 26757: filling VMA 7f42e781a000-7f42e781b000 (4K) [l:7f42e7800000 h:7f42e7a00000] # (00.116386) pagemap-cache: 26757: 7f42e781a000-7f42e781b000 nr:1 cov:4096 # (00.116388) pagemap-cache: 26757: 7f42e781b000-7f42e781c000 nr:2 cov:8192 # (00.116389) pagemap-cache: 26757: 7f42e781c000-7f42e781e000 nr:3 cov:16384 # (00.116391) pagemap-cache: 26757: 7f42e781e000-7f42e781f000 nr:4 cov:20480 # (00.116392) pagemap-cache: 26757: 7f42e781f000-7f42e783d000 nr:5 cov:143360 # (00.116393) pagemap-cache: 26757: 7f42e783d000-7f42e7845000 nr:6 cov:176128 # (00.116395) pagemap-cache: 26757: 7f42e7845000-7f42e7846000 nr:7 cov:180224 # (00.116396) pagemap-cache: 26757: 7f42e7846000-7f42e7847000 nr:8 cov:184320 # (00.116397) pagemap-cache: 26757: 7f42e7847000-7f42e7848000 nr:9 cov:188416 # (00.116398) pagemap-cache: 26757: cache mode [l:7f42e781a000 h:7f42e7a00000] # (00.116403) page-pipe: Add iov to page pipe (1 iovs, 9/1229 total) # (00.116404) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116405) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116406) page-pipe: Will grow page pipe (iov off is 10) # (00.116408) page-pipe: Add iov to page pipe (0 iovs, 10/1229 total) # (00.116409) Pagemap generated: 2 pages (2 lazy) 0 holes # (00.116410) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116411) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116413) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116414) page-pipe: Will grow page pipe (iov off is 11) # (00.116416) page-pipe: Grow pipe 10 -> 20 # (00.116418) page-pipe: Add iov to page pipe (0 iovs, 11/1229 total) # (00.116419) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116420) Pagemap generated: 1 pages (0 lazy) 0 holes # (00.116421) page-pipe: Will grow page pipe (iov off is 12) # (00.116422) page-pipe: Add iov to page pipe (0 iovs, 12/1229 total) # (00.116424) Pagemap generated: 1 pages (1 lazy) 0 holes # (00.116425) pagemap-cache: 26757: filling VMA 7fff57802000-7fff57823000 (132K) [l:7fff57800000 h:7fff57a00000] # (00.116426) pagemap-cache: 26757: 7fff57802000-7fff57823000 nr:1 cov:135168 # (00.116427) pagemap-cache: 26757: 7fff578f1000-7fff578f5000 nr:2 cov:151552 # (00.116429) pagemap-cache: 26757: 7fff578f5000-7fff578f7000 nr:3 cov:159744 # (00.116430) pagemap-cache: 26757: cache mode [l:7fff57802000 h:7fff57a00000] # (00.116434) page-pipe: Add iov to page pipe (1 iovs, 13/1229 total) # (00.116436) Pagemap generated: 3 pages (3 lazy) 0 holes # (00.116437) Pagemap generated: 0 pages (0 lazy) 0 holes # (00.116438) page-pipe: Will grow page pipe (iov off is 14) # (00.116440) page-pipe: Add iov to page pipe (0 iovs, 14/1229 total) # (00.116441) Pagemap generated: 2 pages (0 lazy) 0 holes # (00.116458) page-pipe: Page pipe: # (00.116460) page-pipe: * 2 pipes 15/1229 iovs: # (00.116461) page-pipe: buf 6 pages, 2 iovs, flags: 0 pipe_off: 0 : # (00.116464) page-pipe: 0x562c895cb000 1 # (00.116465) page-pipe: 0x562c896af000 5 # (00.116466) page-pipe: buf 6 
pages, 3 iovs, flags: 1 pipe_off: 0 : # (00.116468) page-pipe: 0x562c896b4000 1 # (00.116469) page-pipe: 0x562c8b4ff000 3 # (00.116470) page-pipe: 0x7f42e74bb000 2 # (00.116471) page-pipe: buf 6 pages, 1 iovs, flags: 0 pipe_off: 6 : # (00.116472) page-pipe: 0x7f42e7675000 6 # (00.116473) page-pipe: buf 3 pages, 2 iovs, flags: 1 pipe_off: 6 : # (00.116475) page-pipe: 0x7f42e767b000 2 # (00.116476) page-pipe: 0x7f42e767e000 1 # (00.116477) page-pipe: buf 4 pages, 2 iovs, flags: 0 pipe_off: c : # (00.116478) page-pipe: 0x7f42e7695000 2 # (00.116479) page-pipe: 0x7f42e781a000 2 # (00.116480) page-pipe: buf 2 pages, 1 iovs, flags: 1 pipe_off: 9 : # (00.116481) page-pipe: 0x7f42e781c000 2 # (00.116482) page-pipe: buf 2 pages, 1 iovs, flags: 0 pipe_off: 10 : # (00.116484) page-pipe: 0x7f42e7845000 2 # (00.116485) page-pipe: buf 4 pages, 2 iovs, flags: 1 pipe_off: b : # (00.116486) page-pipe: 0x7f42e7847000 1 # (00.116487) page-pipe: 0x7fff57820000 3 # (00.116488) page-pipe: buf 2 pages, 1 iovs, flags: 0 pipe_off: 12 : # (00.116489) page-pipe: 0x7fff578f5000 2 # (00.116490) page-pipe: * 0 holes: # (00.116491) PPB: 6 pages 2 segs 16 pipe 0 off # (00.116531) Sent msg to daemon 66 0 0 # (00.116538) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.116777) Fetched ack: 66 66 0 # (00.116781) PPB: 6 pages 3 segs 16 pipe 2 off # (00.116808) Sent msg to daemon 66 0 0 # (00.116814) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.117011) Fetched ack: 66 66 0 # (00.117015) PPB: 6 pages 1 segs 16 pipe 5 off # (00.117051) Sent msg to daemon 66 0 0 # (00.117057) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.117230) Fetched ack: 66 66 0 # (00.117234) PPB: 3 pages 2 segs 16 pipe 6 off # (00.117269) Sent msg to daemon 66 0 0 # (00.117275) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.117447) Fetched ack: 66 66 0 # (00.117451) PPB: 4 pages 2 segs 32 pipe 8 off # (00.117486) Sent msg to daemon 66 0 0 # (00.117492) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.117741) Fetched ack: 66 66 0 # (00.117745) PPB: 2 pages 1 segs 16 pipe 10 off # (00.117749) Sent msg to daemon 66 0 0 # (00.117753) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.118046) Fetched ack: 66 66 0 # (00.118050) PPB: 2 pages 1 segs 32 pipe 11 off # (00.118086) Sent msg to daemon 66 0 0 # (00.118092) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.118280) Fetched ack: 66 66 0 # (00.118283) PPB: 4 pages 2 segs 16 pipe 12 off # (00.118318) Sent msg to daemon 66 0 0 # (00.118324) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.118493) Fetched ack: 66 66 0 # (00.118496) PPB: 2 pages 1 segs 32 pipe 14 off # (00.118531) Sent msg to daemon 66 0 0 # (00.118537) Wait for ack 66 on daemon socket # pie: 1: __fetched msg: 66 0 0 # pie: 1: __sent ack msg: 66 66 0 # pie: 1: Daemon waits for command # (00.118769) Fetched ack: 66 66 0 # 
(00.118773) page-xfer: Transferring pages: # (00.118775) page-xfer: buf 6/2 # (00.118777) page-xfer: p 0x562c895cb000 [1] # (00.118791) page-xfer: p 0x562c896af000 [5] # (00.118802) page-xfer: buf 6/3 # (00.118804) page-xfer: p 0x562c896b4000 [1] # (00.118805) page-xfer: p 0x562c8b4ff000 [3] # (00.118806) page-xfer: p 0x7f42e74bb000 [2] # (00.118808) page-xfer: buf 6/1 # (00.118809) page-xfer: p 0x7f42e7675000 [6] # (00.118821) page-xfer: buf 3/2 # (00.118822) page-xfer: p 0x7f42e767b000 [2] # (00.118823) page-xfer: p 0x7f42e767e000 [1] # (00.118825) page-xfer: buf 4/2 # (00.118827) page-xfer: p 0x7f42e7695000 [2] # (00.118833) page-xfer: p 0x7f42e781a000 [2] # (00.118839) page-xfer: buf 2/1 # (00.118840) page-xfer: p 0x7f42e781c000 [2] # (00.118841) page-xfer: buf 2/1 # (00.118843) page-xfer: p 0x7f42e7845000 [2] # (00.118848) page-xfer: buf 4/2 # (00.118849) page-xfer: p 0x7f42e7847000 [1] # (00.118851) page-xfer: p 0x7fff57820000 [3] # (00.118852) page-xfer: buf 2/1 # (00.118853) page-xfer: p 0x7fff578f5000 [2] # (00.118867) ---------------------------------------- # (00.118917) Sent msg to daemon 65 0 0 # (00.118920) Wait for ack 65 on daemon socket # pie: 1: __fetched msg: 65 0 0 # pie: 1: __sent ack msg: 65 65 0 # pie: 1: Daemon waits for command # (00.119098) Fetched ack: 65 65 0 # (00.119137) Sent msg to daemon 67 0 0 # (00.119139) Wait for ack 67 on daemon socket # pie: 1: __fetched msg: 67 0 0 # pie: 1: __sent ack msg: 67 67 0 # pie: 1: Daemon waits for command # (00.119440) Fetched ack: 67 67 0 # (00.119486) Sent msg to daemon 68 0 0 # (00.119489) Wait for ack 68 on daemon socket # pie: 1: __fetched msg: 68 0 0 # pie: 1: __sent ack msg: 68 68 0 # pie: 1: Daemon waits for command # (00.119766) Fetched ack: 68 68 0 # (00.119773) Sent msg to daemon 69 0 0 # (00.119775) Wait for ack 69 on daemon socket # pie: 1: __fetched msg: 69 0 0 # pie: 1: __sent ack msg: 69 69 0 # pie: 1: Daemon waits for command # (00.120161) Fetched ack: 69 69 0 # (00.120165) # (00.120167) Dumping core (pid: 26757) # (00.120168) ---------------------------------------- # (00.120169) Obtaining personality ... # (00.120256) Sent msg to daemon 64 0 0 # (00.120260) Wait for ack 64 on daemon socket # pie: 1: __fetched msg: 64 0 0 # pie: 1: __sent ack msg: 64 64 0 # pie: 1: Daemon waits for command # (00.120494) Fetched ack: 64 64 0 # (00.120534) 26757 has lsm profile unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 # (00.120539) 26757 has 0 sched policy # (00.120542) dumping 0 nice for 26757 # (00.120544) dumping /proc/26757/loginuid # (00.120552) dumping /proc/26757/oom_score_adj # (00.120606) Sent msg to daemon 76 0 0 # (00.120610) Wait for ack 76 on daemon socket # pie: 1: __fetched msg: 76 0 0 # pie: 1: __sent ack msg: 76 76 0 # pie: 1: Daemon waits for command # (00.121030) Fetched ack: 76 76 0 # (00.121035) cg: Dumping cgroups for 26757 # (00.121066) cg: `- New css ID 2 # (00.121071) cg: `- [] -> [/system.slice/runc-test_busybox.scope] [37] # (00.121120) cg: adding cgroup /proc/self/fd/29/system.slice/runc-test_busybox.scope # (00.121132) cg: Couldn't open /proc/self/fd/29/system.slice/runc-test_busybox.scope/cgroup.clone_children. This cgroup property may not exist on this kernel # (00.121138) cg: Couldn't open /proc/self/fd/29/system.slice/runc-test_busybox.scope/notify_on_release. 
# This cgroup property may not exist on this kernel
# (00.121152) cg: Dumping value from /proc/self/fd/29/system.slice/runc-test_busybox.scope/cgroup.procs
# (00.121158) cg: Couldn't open /proc/self/fd/29/system.slice/runc-test_busybox.scope/tasks. This cgroup property may not exist on this kernel
# (00.121256) cg: Set 2 is root one
# (00.121294) ----------------------------------------
# (00.121337) Waiting for 26757 to trap
# (00.121505) Daemon 26757 exited trapping
# (00.121514) Sent msg to daemon 3 0 0
# (00.121520) Force no-breakpoints restore
# (00.121817) 26757 was trapped
# (00.121823) 26757 (native) is going to execute the syscall 45, required is 15
# (00.122153) 26757 was trapped
# (00.122157) `- Expecting exit
# (00.122417) 26757 was trapped
# (00.122458) 26757 (native) is going to execute the syscall 186, required is 15
# (00.122649) 26757 was trapped
# (00.122653) `- Expecting exit
# (00.123098) 26757 was trapped
# (00.123104) 26757 (native) is going to execute the syscall 1, required is 15
# pie: 1: __fetched msg: 3 0 0
# (00.123339) 26757 was trapped
# (00.123343) `- Expecting exit
# (00.123556) 26757 was trapped
# (00.123561) 26757 (native) is going to execute the syscall 186, required is 15
# (00.123832) 26757 was trapped
# (00.123836) `- Expecting exit
# (00.124011) 26757 was trapped
# (00.124017) 26757 (native) is going to execute the syscall 186, required is 15
# (00.124209) 26757 was trapped
# (00.124213) `- Expecting exit
# (00.124422) 26757 was trapped
# (00.124432) 26757 (native) is going to execute the syscall 1, required is 15
# pie: 1: 1: new_sp=0x7f42e74b5008 ip 0x7f42e75ac7e4
# (00.124923) 26757 was trapped
# (00.124928) `- Expecting exit
# (00.125157) 26757 was trapped
# (00.125163) 26757 (native) is going to execute the syscall 3, required is 15
# (00.125388) 26757 was trapped
# (00.125392) `- Expecting exit
# (00.125548) 26757 was trapped
# (00.125553) 26757 (native) is going to execute the syscall 3, required is 15
# (00.125779) 26757 was trapped
# (00.125784) `- Expecting exit
# (00.126168) 26757 was trapped
# (00.126174) 26757 (native) is going to execute the syscall 15, required is 15
# (00.126397) 26757 was stopped
# (00.126747)
# (00.126751) Dumping mm (pid: 26757)
# (00.126752) ----------------------------------------
# (00.126756) 0x562c895bd000-0x562c895cb000 (56K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x1
# (00.126759) 0x562c895cb000-0x562c89687000 (752K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0xe000 reg fp shmid: 0x1
# (00.126761) 0x562c89687000-0x562c896af000 (160K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xca000 reg fp shmid: 0x1
# (00.126762) 0x562c896af000-0x562c896b3000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xf1000 reg fp shmid: 0x1
# (00.126764) 0x562c896b3000-0x562c896b4000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0xf5000 reg fp shmid: 0x1
# (00.126765) 0x562c896b4000-0x562c896b5000 (4K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126767) 0x562c8b4ff000-0x562c8b520000 (132K) prot 0x3 flags 0x22 fdflags 0 st 0x221 off 0 reg heap ap shmid: 0
# (00.126768) 0x7f42e74bb000-0x7f42e74be000 (12K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126770) 0x7f42e74be000-0x7f42e74e0000 (136K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x2
# (00.126771) 0x7f42e74e0000-0x7f42e7628000 (1312K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x22000 reg fp shmid: 0x2
# (00.126773) 0x7f42e7628000-0x7f42e7674000 (304K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x16a000 reg fp shmid: 0x2
# (00.126774) 0x7f42e7674000-0x7f42e7675000 (4K) prot 0 flags 0x2 fdflags 0 st 0x41 off 0x1b6000 reg fp shmid: 0x2
# (00.126776) 0x7f42e7675000-0x7f42e7679000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x1b6000 reg fp shmid: 0x2
# (00.126778) 0x7f42e7679000-0x7f42e767b000 (8K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x1ba000 reg fp shmid: 0x2
# (00.126779) 0x7f42e767b000-0x7f42e767f000 (16K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126781) 0x7f42e767f000-0x7f42e7683000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x3
# (00.126782) 0x7f42e7683000-0x7f42e7690000 (52K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x4000 reg fp shmid: 0x3
# (00.126784) 0x7f42e7690000-0x7f42e7694000 (16K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x11000 reg fp shmid: 0x3
# (00.126785) 0x7f42e7694000-0x7f42e7695000 (4K) prot 0 flags 0x2 fdflags 0 st 0x41 off 0x15000 reg fp shmid: 0x3
# (00.126787) 0x7f42e7695000-0x7f42e7696000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x15000 reg fp shmid: 0x3
# (00.126788) 0x7f42e7696000-0x7f42e7697000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x16000 reg fp shmid: 0x3
# (00.126790) 0x7f42e7697000-0x7f42e7699000 (8K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126791) 0x7f42e7699000-0x7f42e76a6000 (52K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x4
# (00.126793) 0x7f42e76a6000-0x7f42e7745000 (636K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0xd000 reg fp shmid: 0x4
# (00.126794) 0x7f42e7745000-0x7f42e781a000 (852K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0xac000 reg fp shmid: 0x4
# (00.126796) 0x7f42e781a000-0x7f42e781b000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x180000 reg fp shmid: 0x4
# (00.126797) 0x7f42e781b000-0x7f42e781c000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x181000 reg fp shmid: 0x4
# (00.126799) 0x7f42e781c000-0x7f42e781e000 (8K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126800) 0x7f42e781e000-0x7f42e781f000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0 reg fp shmid: 0x5
# (00.126802) 0x7f42e781f000-0x7f42e783d000 (120K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0x1000 reg fp shmid: 0x5
# (00.126808) 0x7f42e783d000-0x7f42e7845000 (32K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x1f000 reg fp shmid: 0x5
# (00.126810) 0x7f42e7845000-0x7f42e7846000 (4K) prot 0x1 flags 0x2 fdflags 0 st 0x41 off 0x26000 reg fp shmid: 0x5
# (00.126812) 0x7f42e7846000-0x7f42e7847000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0x27000 reg fp shmid: 0x5
# (00.126813) 0x7f42e7847000-0x7f42e7848000 (4K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126815) 0x7fff57802000-0x7fff57823000 (132K) prot 0x3 flags 0x122 fdflags 0 st 0x201 off 0 reg ap shmid: 0
# (00.126817) 0x7fff578f1000-0x7fff578f5000 (16K) prot 0x1 flags 0x22 fdflags 0 st 0x1201 off 0 reg vvar ap shmid: 0
# (00.126818) 0x7fff578f5000-0x7fff578f7000 (8K) prot 0x5 flags 0x22 fdflags 0 st 0x209 off 0 reg vdso ap shmid: 0
# (00.126820) 0xffffffffff600000-0xffffffffff601000 (4K) prot 0x5 flags 0x22 fdflags 0 st 0x204 off 0 vsys ap shmid: 0
# (00.126822) Obtaining task auvx ...
# (00.126936) Dumping path for -3 fd via self 29 [/]
# (00.126957) Dumping task cwd id 0x9 root id 0x9
# (00.126995) mnt: Dumping mountpoints
# (00.126998) mnt: 333: 2e:/ @ ./proc/scsi
# (00.134416) mnt: 332: 2d:/ @ ./sys/firmware
# (00.140621) mnt: 331: 28:/null @ ./proc/sched_debug
# (00.140755) mnt: 330: 28:/null @ ./proc/timer_list
# (00.140758) mnt: 329: 28:/null @ ./proc/latency_stats
# (00.140760) mnt: 328: 28:/null @ ./proc/keys
# (00.140762) mnt: 327: 28:/null @ ./proc/kcore
# (00.140764) mnt: 326: 2c:/ @ ./proc/acpi
# (00.147151) mnt: 325: 27:/sysrq-trigger @ ./proc/sysrq-trigger
# (00.147161) mnt: 324: 27:/sys @ ./proc/sys
# (00.147164) mnt: 323: 27:/irq @ ./proc/irq
# (00.147166) mnt: 322: 27:/fs @ ./proc/fs
# (00.147168) mnt: 321: 27:/bus @ ./proc/bus
# (00.147169) mnt: 349: 1a:/ @ ./sys/fs/cgroup
# (00.147171) mnt: 348: 2b:/ @ ./sys
# (00.147173) mnt: 347: 26:/ @ ./dev/mqueue
# (00.147198) mnt: 346: 2a:/ @ ./dev/shm
# (00.153099) mnt: 345: 29:/ @ ./dev/pts
# (00.153110) mnt: 344: 28:/ @ ./dev
# (00.153113) mnt: Mount is not fully visible ./dev
# (00.153152) mnt: mount has children ./dev
# (00.153169) Warn (criu/mount.c:1109): mnt: Can't create a temporary directory: Read-only file system
# (00.153171) mnt: mount is overmounted or has children ./dev
# (00.159848) mnt: 343: 27:/ @ ./proc
# (00.159858) mnt: 342: 20:/bats-run-24631/runc.VlgKY0/bundle/rootfs @ ./
# (00.159870) Dumping file-locks
# (00.159878)
# (00.159880) Dumping pstree (pid: 26757)
# (00.159881) ----------------------------------------
# (00.159885) Process: 1(26757)
# (00.159903) ----------------------------------------
# (00.159909) Dumping 1(26757)'s namespaces
# (00.160253) Dump UTS namespace 12 via 26757
# (00.160776) Dump NET namespace info 10 via 26757
# (00.160870) Dumping netns links
# (00.161199) Dumping netns links
# (00.161462) Dump IPC namespace 11 via 26757
# (00.161651) IPC shared memory segments: 0
# (00.161659) IPC message queues: 0
# (00.161664) IPC semaphore sets: 0
# (00.162886) Namespaces dump complete
# (00.163135) cg: Dumping 1 sets
# (00.163143) cg: `- Dumping of /system.slice/runc-test_busybox.scope
# (00.163146) cg: Writing CG image
# (00.163158) unix: Dumping external sockets
# (00.163160) Writing image inventory (version 1)
# (00.163205) Running post-dump scripts
# (00.163207) RPC
# (00.163364) Starting lazy pages server
# (00.163371) Starting page server on port 27277
# (00.164214) Running status-ready scripts
# (00.164219) RPC
# __runc restore test_busybox_restore failed (status: 1)
# time="2021-03-25T14:22:43Z" level=error msg="criu failed: type NOTIFY errno 0\nlog file: image-dir/restore.log"
# CRIU restore log errors (if any):
# (00.082491) 26817 (native) is going to execute the syscall 11, required is 15
# (00.082499) 26817 was trapped
# (00.082501) `- Expecting exit
# (00.082509) 26817 was trapped
# (00.082511) 26817 (native) is going to execute the syscall 18446744073709551615, required is 15
# (00.082518) Error (compel/src/lib/infect.c:1513): Task 26817 is in unexpected state: f7f
# (00.082535) Error (compel/src/lib/infect.c:1520): Task stopped with 15: Terminated
# (00.082538) Error (criu/cr-restore.c:2416): Can't stop all tasks on rt_sigreturn
# (00.082539) Error (criu/cr-restore.c:2454): Killing processes because of failure on restore.
# The Network was unlocked so some data or a connection may have been lost.
# (00.090140) Error (criu/mount.c:3396): mnt: Can't remove the directory /tmp/.criu.mntns.PJu68V: No such file or directory
# (00.090151) Error (criu/cr-restore.c:2483): Restoring FAILED.
# runc restore failed
ok 17 checkpoint and restore in external network namespace

@adrianreber
Contributor

@rppt Any idea why the lazy-pages test sometimes fails with runc? @kolyshkin was able to capture a complete log. The syscall number looks strange.

@kolyshkin
Contributor Author

Yet another failure from today: https://github.com/opencontainers/runc/pull/2873/checks?check_run_id=2204874348

I noticed it always happens when run under the systemd cgroup driver. My previous thoughts on why this might happen are here: #2805 (comment). This is definitely related to systemd, but I'm not sure how (I guess we need to take a look at when it decides to terminate the processes in a cgroup).

@kolyshkin
Contributor Author

I think we're creating a scope by default (unless a .slice is explicitly requested). Here's the relevant doc about scope lifetime (from https://www.freedesktop.org/software/systemd/man/systemd.scope.html):

Note that, unlike service units, scope units have no "main" process: all processes in the scope are equivalent. The lifecycle of the scope unit is thus not bound to the lifetime of one specific process, but to the existence of at least one process in the scope. This also means that the exit statuses of these processes are not relevant for the scope unit failure state. Scope units may still enter a failure state, for example due to resource exhaustion or stop timeouts being reached, but not due to programs inside of them terminating uncleanly. Since processes managed as scope units generally remain children of the original process that forked them off, it is also the job of that process to collect their exit statuses and act on them as needed.

With all this, it is hard to see why systemd would decide to terminate processes in a scope on its own. So the only possibility is that someone stops it deliberately. I'm puzzled.
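
For what it's worth, the garbage-collection part is easy to observe with systemd-run. A minimal sketch, purely illustrative and not part of the runc tests (run as root, or add --user; the demo-busybox unit name is made up):

```sh
# Start a transient scope around a short-lived process.
systemd-run --scope --unit=demo-busybox sleep 5 &
systemctl status demo-busybox.scope   # active (running) while sleep is alive
sleep 6
systemctl status demo-busybox.scope   # unit is gone, although nobody stopped it
```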

@kolyshkin
Contributor Author

kolyshkin added a commit to kolyshkin/runc that referenced this issue Mar 28, 2021
This is to help debug issue opencontainers#2805.

Signed-off-by: Kir Kolyshkin <[email protected]>
@rppt

rppt commented Mar 28, 2021

@rppt Any idea why the lazy page test sometimes fails with runc. @kolyshkin was able to capture a complete log.

It would have been nice to see lazy-pages.log, or at least to know whether it has any errors.

The syscall number looks strange.

Yeah, syscall 18446744073709551615 is a bit too much :)
@avagin, @cyrillos do you have any ideas how we get -1 instead of a syscall number? E.g.:

(00.095376) 26819 (native) is going to execute the syscall 18446744073709551615, required is 15

@cyrillos

Maybe we've missed an interrupted syscall somewhere? IIRC, when a syscall gets interrupted, eax is set to -1 on return, so we may have mishandled the syscall somewhere. Not sure though, I haven't worked on criu for a while :) OTOH, if it triggers with lazy pages only, it's better to poke whoever wrote the lazy-pages support.

@kolyshkin
Contributor Author

OK, I think I know what happens...

  1. The test runs runc checkpoint in lazy-pages mode in the background:
    __runc --criu "$CRIU" checkpoint --lazy-pages --page-server 0.0.0.0:${port} --status-fd ${lazy_w} --work-path ./work-dir --image-path ./image-dir test_busybox &
  2. The test runs criu lazy-pages in the background:
    ${CRIU} lazy-pages --page-server --address 127.0.0.1 --port ${port} -D image-dir &
  3. The test runs runc restore.

So, all three are working in parallel: criu restore restores, while criu lazy-pages listens for page faults on a uffd and fetches missing pages from runc checkpoint, which serves those pages. A condensed sketch of this flow follows.
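
Here is that orchestration in shorthand; helper names like __runc, $port, and $lazy_w are as used in tests/integration/checkpoint.bats, but this is a simplification, not a verbatim copy of the test:

```sh
# 1. Lazy checkpoint: dumps everything except memory pages, then serves them.
__runc --criu "$CRIU" checkpoint --lazy-pages --page-server 0.0.0.0:"$port" \
	--status-fd "$lazy_w" --work-path ./work-dir --image-path ./image-dir \
	test_busybox &
cpt_pid=$!

# 2. Lazy-pages daemon: handles uffd faults, pulling pages from the server above.
"$CRIU" lazy-pages --page-server --address 127.0.0.1 --port "$port" \
	-D image-dir &
lp_pid=$!

# 3. Restore runs concurrently with both of the above.
__runc restore --lazy-pages --image-path ./image-dir --work-path ./work-dir \
	test_busybox_restore

wait "$cpt_pid" "$lp_pid"
```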

At some point, criu lazy-pages decides to fetch the rest of the pages; once that is done, it exits, and runc checkpoint, having no more pages to serve, exits too.

At the end of runc checkpoint the container is removed:

runc/checkpoint.go

Lines 67 to 69 in b79201f

	if !(options.LeaveRunning || options.PreDump) {
		// destroy container unless we tell CRIU to keep it
		defer destroy(container)

... and destroy calls cgroupManager's Destroy:

err := c.cgroupManager.Destroy()

... which, in case we're using the systemd cgroup driver, calls stopUnit:

if err := stopUnit(dbusConnection, unitName); err != nil {

... which asks systemd to stop this unit, causing systemd to terminate all processes in the unit's cgroup.
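
In systemctl terms, that D-Bus call amounts to roughly the following (the unit name is the one runc's systemd driver generates for this test; shown for illustration only):

```sh
systemctl stop runc-test_busybox.scope
# With the default KillMode (control-group), systemd sends SIGTERM to every
# process in the scope's cgroup -- including a just-restored container init
# that happens to land in the same scope.
```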

As the container is restored into the same cgroup and systemd unit, I guess this is how the restored task receives SIGTERM. As for why it does not always happen: I guess in most cases the systemd unit is terminated either after the restore has succeeded, or before the restore has created a process.

In general, I now question why we need stopUnit at all (systemd units are garbage collected -- as soon as there are no processes left in a unit's cgroup, the unit is removed). Removing stopUnit should also fix this flake.

@kolyshkin
Contributor Author

The alternative fix is to set a different systemd unit name for the restored container.

@kolyshkin
Contributor Author

The alternative fix is to set a different systemd unit name for the restored container.

Which is what #2881 implements.
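
For anyone reproducing this, one way to see the effect is to check which unit the restored init landed in. A diagnostic sketch, not part of the fix itself (assumes jq is installed):

```sh
pid=$(__runc state test_busybox_restore | jq .pid)
cat /proc/"$pid"/cgroup
# With the fix, the cgroup path should name a scope other than
# runc-test_busybox.scope.
```
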

@kolyshkin
Contributor Author

I now question why we need stopUnit at all (systemd units are garbage collected -- as long as there are no processes in a cgroup, it gets removed).

This ^^^ is wrong (see #2880 (comment) for details).
