
"open /etc in the container" is now a flake #16294

Closed
edsantiago opened this issue Oct 25, 2022 · 21 comments
Labels: flakes (Flakes from Continuous Integration) · locked - please file new issue/PR

Comments

@edsantiago
Member

This is alarming. The "podman run --uidmap" => "open /etc" bug is now a flake:

# podman run --name c_uidmap --uidmap 0:10000:10000 quay.io/libpod/testimage:20221018 true
Error: open /etc in the container: no such file or directory

Background: this is #15698 and #15977, the bugs where running podman with the wrong --storage-opt makes you forevermore lose the ability to run with --uidmap. That was at least comprehensible, even if not (IMO) forgivable. But this one is a flake: it passed on rerun. That's not comprehensible.

Only one instance so far, f36 root, but @mtrmac was seeing this yesterday in #16253, and now I don't know whether those were constant failures or flakes.

edsantiago added the flakes (Flakes from Continuous Integration) label Oct 25, 2022
@mtrmac
Collaborator

mtrmac commented Oct 25, 2022

It didn’t reoccur in #16253 after a rebase — which only says that this is not consistently broken; it might be a somewhat low-frequency flake, the worst kind.

It’s quite possible I broke something when changing the c/storage locking code.

I’ll spin up a VM and see if I can reproduce this in a debugging environment, hopefully sometime this week. (It would still be valuable to have an estimate of how frequently it happens; looking at the rate of merged PRs, probably < 10%?)

@edsantiago
Member Author

I will log every instance I see, but this one (the link above) is the first instance that made it to my logs.

Context: my logs do not include your PR yesterday, because "passed on rebase" is not a flake: the only thing I consider a flake is "job failed, then passed, without any changes to the code itself".

@edsantiago
Member Author

Oh, and the other interesting thing: these are not "one test flakes and the rest pass", these are "once one test fails, all subsequent uidmap tests also fail". This suggests a problem on the system, not a problem with the particular podman invocation. Just like the two issues I linked above: once the system goes wonky, it will always stay wonky.

@mtrmac
Collaborator

mtrmac commented Oct 25, 2022

All the failures seem to be concentrated on a single image AFAICS, so the damage might be localized to that image.

Is there some trick I can use to abort the tests on the first failure, and to avoid cleaning up intermediate state if possible? At least https://github.com/sstephenson/bats/blob/master/man/bats.1.ronn does not show any option to stop immediately.

@edsantiago
Member Author

There is still no --fail-fast option in bats, but this issue has a workaround. Not a great workaround, unfortunately: you will need to find a way to skip the teardown cleanup, too.

Another option is to instrument die(), changing false at the end to a sleep-and-beep loop (or just sleep, if you can check the window once in a while).
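For illustration, a minimal sketch of that die() instrumentation (hypothetical only; the real helpers.bash die() prints more context and simply ends with false):

```bash
# Hypothetical sketch -- not the actual helpers.bash code.
# Instead of ending with `false`, die() parks forever so the broken
# state is still there to inspect when you notice the beeping.
function die() {
    echo "# FAIL: $*" >&2
    while :; do
        printf '\a'     # beep; drop this and keep only the sleep if you
        sleep 60        # just check the window once in a while
    done
}
```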

@edsantiago
Member Author

I ran a loop last night in an f36 VM: podman built from source, all tests >= 040-* deleted, helpers.bash instrumented to infinite-loop on failure, and an infinite bats test/system/*.bats loop. No failures; the VM died of its own accord (the 1minutetip reservation timeout). This suggests that the bug is somewhere in VM setup, not in something happening during an early podman test.
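Roughly, the setup looked like this (a paraphrased sketch; the exact paths and globs here are illustrative):

```bash
# Paraphrased sketch of the overnight setup; paths/globs are illustrative.
cd ~/podman                                          # podman built from source
rm test/system/0[4-9]*.bats test/system/[1-9]*.bats  # delete tests >= 040
# helpers.bash die() instrumented to hang on failure (see sketch above)
export PODMAN=$(pwd)/bin/podman
while :; do
    bats test/system/*.bats
done
```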

@mtrmac
Collaborator

mtrmac commented Oct 26, 2022

@edsantiago just to confirm, do you mean that it’s likely something in the non-Podman OS that has broken, and not the Podman codebase? Or is “VM setup” a part of the Podman tests, i.e. is this failure probably specific to a particular mode of running the Podman tests, but still probably Podman-caused? Or is it currently unknown which one it is?

@edsantiago
Member Author

My thinking yesterday was: there is still a --storage-opt test in the system tests; maybe that's what's breaking everything?

@test "podman info --storage-opt='' " {
skip_if_remote "--storage-opt flag is not supported for remote"
skip_if_rootless "storage opts are required for rootless running"
run_podman --storage-opt='' info
# Note this will not work in rootless mode, unless you specify
# storage-driver=vfs, until we have kernels that support rootless overlay
# mounts.
is "$output" ".*graphOptions: {}" "output includes graphOptions: {}"

So, before setting up my overnight infinite loop, I infinite-looped something like

# while :;do bin/podman --storage-opt='' info >/dev/null;bin/podman run --uidmap ..... || break; done

It ran for an hour with no breakage.

Oh, phooey, I screwed up. TL;DR I thought the first failure was in 030-run.bats. It's not: in the log linked above, the first failure is in 400.bats. This means that it IS something in the system tests that's breaking everything! It's just not in the 001-030 tests, it's in something later. Spinning up a new VM to test.

(Clarification: there's a --uidmap test in 030. That one is passing. The one in 400 is failing. That means the problem is not in the VM, nor in the --storage-opt='' call; it's something happening between 030 and 400.)
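One way to narrow that down (an illustrative sketch only, not something that was actually run): replay each test file between 030 and 400 followed by the 400 tests, and see which file leaves the state broken:

```bash
# Illustrative sketch: run each test file between 030 and 400, then the
# 400 uidmap tests, to see which file leaves the system in the bad state.
export PODMAN=$(pwd)/bin/podman
for f in test/system/0[4-9]*.bats test/system/[1-3]*.bats; do
    echo "=== candidate: $f"
    bats "$f" test/system/400-unprivileged-access.bats \
        || { echo "failure after $f"; break; }
done
```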

@edsantiago
Member Author

Another one. f36 aarch64 root

@mtrmac
Collaborator

mtrmac commented Nov 2, 2022

FWIW I have seen a panic in the second iteration of

```
for i in $(seq 1 100000); do
    echo ================
    echo = "$i"
    echo ================

    rm -rf ~/.local/share/containers ~/.config/containers
    PODMAN=$(pwd)/bin/podman bats ./test/system/320-system-df.bats || break
    PODMAN=$(pwd)/bin/podman bats ./test/system/330-corrupt-images.bats || break
    PODMAN=$(pwd)/bin/podman bats ./test/system/400-unprivileged-access.bats || break
done
```

with a very different failure mode: fatal error: runtime·lock: lock count on https://github.com/golang/go/blob/895664482c0ebe5cec4a6935615a1e9610bbf1e3/src/runtime/proc.go#L1543.

```
# # /root/podman/bin/podman run --rm quay.io/libpod/testimage:20221018 stat -c%n:%F:%h:%T:%t /dev/null /proc/acpi /proc/kcore /proc/keys /proc/latency_stats /proc/timer_list /proc/scsi /sys/firmware /sys/fs/selinux /sys/dev/block
# fatal error: runtime·lock: lock count
#
# runtime stack:
# runtime.throw({0x178df75?, 0x7fcc2bffee10?})
#       /usr/lib/golang/src/runtime/panic.go:1047 +0x5d fp=0x7fcc2bffede0 sp=0x7fcc2bffedb0 pc=0x44411d
# runtime.lock2(0xc00005c500?)
#       /usr/lib/golang/src/runtime/lock_futex.go:55 +0x178 fp=0x7fcc2bffee20 sp=0x7fcc2bffede0 pc=0x4182f8
# runtime.lockWithRank(...)
#       /usr/lib/golang/src/runtime/lockrank_off.go:22
# runtime.lock(...)
#       /usr/lib/golang/src/runtime/lock_futex.go:48
# runtime.mexit(0x1)
#       /usr/lib/golang/src/runtime/proc.go:1543 +0x173 fp=0x7fcc2bffee50 sp=0x7fcc2bffee20 pc=0x449873
# runtime.mstart0()
#       /usr/lib/golang/src/runtime/proc.go:1391 +0x89 fp=0x7fcc2bffee80 sp=0x7fcc2bffee50 pc=0x449529
# runtime.mstart()
#       /usr/lib/golang/src/runtime/asm_amd64.s:390 +0x5 fp=0x7fcc2bffee88 sp=0x7fcc2bffee80 pc=0x476965
#
# goroutine 1 [semacquire, locked to thread]:
# runtime.gopark(0xc000221608?, 0x0?, 0xa0?, 0x17?, 0x419d3f?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000665f58 sp=0xc000665f38 pc=0x446d36
# runtime.goparkunlock(...)
#       /usr/lib/golang/src/runtime/proc.go:369
# runtime.semacquire1(0xc0001764d8, 0xa0?, 0x1, 0x0)
#       /usr/lib/golang/src/runtime/sema.go:150 +0x1fe fp=0xc000665fc0 sp=0xc000665f58 pc=0x457ede
# sync.runtime_Semacquire(0xc0004f3aa0?)
#       /usr/lib/golang/src/runtime/sema.go:62 +0x25 fp=0xc000665ff0 sp=0xc000665fc0 pc=0x474a05
# sync.(*WaitGroup).Wait(0x40?)
#       /usr/lib/golang/src/sync/waitgroup.go:139 +0x52 fp=0xc000666018 sp=0xc000665ff0 pc=0x484832
# github.com/containers/podman/v4/libpod.(*Container).prepare(0xc000188800)
#       /root/podman/libpod/container_internal_linux.go:105 +0x285 fp=0xc000666168 sp=0xc000666018 pc=0x114a4a5
# github.com/containers/podman/v4/libpod.(*Container).prepareToStart(0xc000188800, {0x1a34ac0, 0xc0003b0c90}, 0x40?)
#       /root/podman/libpod/container_internal.go:812 +0x178 fp=0xc000666238 sp=0xc000666168 pc=0x1124558
# github.com/containers/podman/v4/libpod.(*Container).StartAndAttach(0xc000188800, {0x1a34ac0, 0xc0003b0c90}, 0xc00040e4b0, {0x1776056, 0xd}, 0xc0003ee060, 0x80?)
#       /root/podman/libpod/container_api.go:127 +0x18a fp=0xc000666308 sp=0xc000666238 pc=0x1100c6a
# github.com/containers/podman/v4/pkg/domain/infra/abi/terminal.StartAttachCtr({0x1a34ac0, 0xc0003b0c90}, 0xc000188800, 0xc000014018, 0xc000014020, 0x0, {0x1776056, 0xd}, 0x1, 0x1)
#       /root/podman/pkg/domain/infra/abi/terminal/terminal_common.go:95 +0x586 fp=0xc000666460 sp=0xc000666308 pc=0x12110a6
# github.com/containers/podman/v4/pkg/domain/infra/abi.(*ContainerEngine).ContainerRun(0xc000136408, {0x1a34ac0, 0xc0003b0c90}, {{0x0, 0x0}, 0x0, {0x1776056, 0xd}, 0xc000014020, 0x0, ...})
#       /root/podman/pkg/domain/infra/abi/containers.go:1155 +0x42b fp=0xc0006665e8 sp=0xc000666460 pc=0x123868b
# github.com/containers/podman/v4/cmd/podman/containers.run(0x23d8aa0?, {0xc0004a02a0?, 0xd, 0xe})
#       /root/podman/cmd/podman/containers/run.go:205 +0x816 fp=0xc000667cf8 sp=0xc0006665e8 pc=0x12facf6
# github.com/spf13/cobra.(*Command).execute(0x23d8aa0, {0xc000040120, 0xe, 0xe})
#       /root/podman/vendor/github.com/spf13/cobra/command.go:916 +0x862 fp=0xc000667e30 sp=0xc000667cf8 pc=0x995302
# github.com/spf13/cobra.(*Command).ExecuteC(0x23f6ac0)
#       /root/podman/vendor/github.com/spf13/cobra/command.go:1044 +0x3bd fp=0xc000667ee8 sp=0xc000667e30 pc=0x995b7d
# github.com/spf13/cobra.(*Command).Execute(...)
#       /root/podman/vendor/github.com/spf13/cobra/command.go:968
# github.com/spf13/cobra.(*Command).ExecuteContext(...)
#       /root/podman/vendor/github.com/spf13/cobra/command.go:961
# main.Execute()
#       /root/podman/cmd/podman/root.go:105 +0xc5 fp=0xc000667f50 sp=0xc000667ee8 pc=0x1408145
# main.main()
#       /root/podman/cmd/podman/main.go:40 +0x7c fp=0xc000667f80 sp=0xc000667f50 pc=0x14075bc
# runtime.main()
#       /usr/lib/golang/src/runtime/proc.go:250 +0x212 fp=0xc000667fe0 sp=0xc000667f80 pc=0x446972
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000667fe8 sp=0xc000667fe0 pc=0x478c21
#
# goroutine 2 [force gc (idle)]:
# runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00006cfb0 sp=0xc00006cf90 pc=0x446d36
# runtime.goparkunlock(...)
#       /usr/lib/golang/src/runtime/proc.go:369
# runtime.forcegchelper()
#       /usr/lib/golang/src/runtime/proc.go:302 +0xad fp=0xc00006cfe0 sp=0xc00006cfb0 pc=0x446bcd
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x478c21
# created by runtime.init.6
#       /usr/lib/golang/src/runtime/proc.go:290 +0x25
#
# goroutine 3 [runnable]:
# runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00006d790 sp=0xc00006d770 pc=0x446d36
# runtime.goparkunlock(...)
#       /usr/lib/golang/src/runtime/proc.go:369
# runtime.bgsweep(0x0?)
#       /usr/lib/golang/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc00006d7c8 sp=0xc00006d790 pc=0x4317f7
# runtime.gcenable.func1()
#       /usr/lib/golang/src/runtime/mgc.go:178 +0x26 fp=0xc00006d7e0 sp=0xc00006d7c8 pc=0x426466
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x478c21
# created by runtime.gcenable
#       /usr/lib/golang/src/runtime/mgc.go:178 +0x6b
#
# goroutine 4 [GC scavenge wait]:
# runtime.gopark(0xbb7ef97af47?, 0x102ddfb?, 0x0?, 0x0?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00006df70 sp=0xc00006df50 pc=0x446d36
# runtime.goparkunlock(...)
#       /usr/lib/golang/src/runtime/proc.go:369
# runtime.(*scavengerState).park(0x247c320)
#       /usr/lib/golang/src/runtime/mgcscavenge.go:389 +0x53 fp=0xc00006dfa0 sp=0xc00006df70 pc=0x42f853
# runtime.bgscavenge(0x0?)
#       /usr/lib/golang/src/runtime/mgcscavenge.go:622 +0x65 fp=0xc00006dfc8 sp=0xc00006dfa0 pc=0x42fe45
# runtime.gcenable.func2()
#       /usr/lib/golang/src/runtime/mgc.go:179 +0x26 fp=0xc00006dfe0 sp=0xc00006dfc8 pc=0x426406
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006dfe8 sp=0xc00006dfe0 pc=0x478c21
# created by runtime.gcenable
#       /usr/lib/golang/src/runtime/mgc.go:179 +0xaa
#
# goroutine 5 [finalizer wait]:
# runtime.gopark(0x0?, 0x183cec8?, 0x20?, 0x80?, 0x2000000020?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00006c628 sp=0xc00006c608 pc=0x446d36
# runtime.goparkunlock(...)
#       /usr/lib/golang/src/runtime/proc.go:369
# runtime.runfinq()
#       /usr/lib/golang/src/runtime/mfinal.go:180 +0x10f fp=0xc00006c7e0 sp=0xc00006c628 pc=0x42556f
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x478c21
# created by runtime.createfing
#       /usr/lib/golang/src/runtime/mfinal.go:157 +0x45
#
# goroutine 6 [GC worker (idle)]:
# runtime.gopark(0xbb7f4ac4c9d?, 0x0?, 0x0?, 0x0?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00006e750 sp=0xc00006e730 pc=0x446d36
# runtime.gcBgMarkWorker()
#       /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc00006e7e0 sp=0xc00006e750 pc=0x4285b1
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x478c21
# created by runtime.gcBgMarkStartWorkers
#       /usr/lib/golang/src/runtime/mgc.go:1159 +0x25
#
# goroutine 18 [runnable]:
# runtime.gopark(0x183d110?, 0xc00010e000?, 0x18?, 0x14?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:346 +0x110 fp=0xc000068750 sp=0xc000068748 pc=0x446d70
# runtime.gcBgMarkWorker()
#       /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000687e0 sp=0xc000068750 pc=0x4285b1
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000687e8 sp=0xc0000687e0 pc=0x478c21
# created by runtime.gcBgMarkStartWorkers
#       /usr/lib/golang/src/runtime/mgc.go:1159 +0x25
#
# goroutine 7 [select, locked to thread]:
# runtime.gopark(0xc000069fa8?, 0x2?, 0x0?, 0x0?, 0xc000069fa4?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000069e18 sp=0xc000069df8 pc=0x446d36
# runtime.selectgo(0xc000069fa8, 0xc000069fa0, 0x0?, 0x0, 0x0?, 0x1)
#       /usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc000069f58 sp=0xc000069e18 pc=0x456e1c
# runtime.ensureSigM.func1()
#       /usr/lib/golang/src/runtime/signal_unix.go:991 +0x1b0 fp=0xc000069fe0 sp=0xc000069f58 pc=0x45b250
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x478c21
# created by runtime.ensureSigM
#       /usr/lib/golang/src/runtime/signal_unix.go:974 +0xbd
#
# goroutine 19 [syscall]:
# runtime.notetsleepg(0xd4000f?, 0xc0003bd7c0?)
#       /usr/lib/golang/src/runtime/lock_futex.go:236 +0x34 fp=0xc0003bd7a0 sp=0xc0003bd768 pc=0x418854
# os/signal.signal_recv()
#       /usr/lib/golang/src/runtime/sigqueue.go:152 +0x2f fp=0xc0003bd7c0 sp=0xc0003bd7a0 pc=0x47516f
# os/signal.loop()
#       /usr/lib/golang/src/os/signal/signal_unix.go:23 +0x19 fp=0xc0003bd7e0 sp=0xc0003bd7c0 pc=0x7b5dd9
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0003bd7e8 sp=0xc0003bd7e0 pc=0x478c21
# created by os/signal.Notify.func1.1
#       /usr/lib/golang/src/os/signal/signal.go:151 +0x2a
#
# goroutine 20 [select]:
# runtime.gopark(0xc0003bdfb0?, 0x2?, 0x0?, 0x0?, 0xc0003bdeac?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000083d18 sp=0xc000083cf8 pc=0x446d36
# runtime.selectgo(0xc000083fb0, 0xc0003bdea8, 0x4?, 0x0, 0x0?, 0x1)
#       /usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc000083e58 sp=0xc000083d18 pc=0x456e1c
# github.com/containers/podman/v4/libpod/shutdown.Start.func1()
#       /root/podman/libpod/shutdown/handler.go:47 +0x93 fp=0xc000083fe0 sp=0xc000083e58 pc=0x1058693
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000083fe8 sp=0xc000083fe0 pc=0x478c21
# created by github.com/containers/podman/v4/libpod/shutdown.Start
#       /root/podman/libpod/shutdown/handler.go:46 +0xe7
#
# goroutine 22 [select]:
# runtime.gopark(0xc0003be6f8?, 0x2?, 0x0?, 0x30?, 0xc0003be6b4?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000085ca8 sp=0xc000085c88 pc=0x446d36
# runtime.selectgo(0xc000085ef8, 0xc0003be6b0, 0x425bc5?, 0x0, 0x3?, 0x1)
#       /usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc000085de8 sp=0xc000085ca8 pc=0x456e1c
# github.com/containers/podman/v4/libpod.(*Runtime).libimageEvents.func2()
#       /root/podman/libpod/runtime.go:750 +0xe9 fp=0xc000085fe0 sp=0xc000085de8 pc=0x11b0669
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000085fe8 sp=0xc000085fe0 pc=0x478c21
# created by github.com/containers/podman/v4/libpod.(*Runtime).libimageEvents
#       /root/podman/libpod/runtime.go:726 +0x10d
#
# goroutine 23 [chan receive]:
# runtime.gopark(0xa?, 0x10?, 0x5?, 0x0?, 0x0?)
#       /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc0003beef0 sp=0xc0003beed0 pc=0x446d36
# runtime.chanrecv(0xc000410e40, 0xc0003befc8, 0x1)
#       /usr/lib/golang/src/runtime/chan.go:583 +0x49b fp=0xc0003bef80 sp=0xc0003beef0 pc=0x4132bb
# runtime.chanrecv2(0x0?, 0xc00038f1c0?)
#       /usr/lib/golang/src/runtime/chan.go:447 +0x18 fp=0xc0003befa8 sp=0xc0003bef80 pc=0x412df8
# github.com/containers/podman/v4/libpod.(*Runtime).startWorker.func1()
#       /root/podman/libpod/runtime_worker.go:6 +0x74 fp=0xc0003befe0 sp=0xc0003befa8 pc=0x11cb154
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0003befe8 sp=0xc0003befe0 pc=0x478c21
# created by github.com/containers/podman/v4/libpod.(*Runtime).startWorker
#       /root/podman/libpod/runtime_worker.go:5 +0x96
#
# goroutine 8 [syscall]:
# syscall.Syscall6(0x473cfb?, 0xc0003a8b60?, 0x200000003?, 0xc0003a8b60?, 0xc00005a000?, 0x4d0667?, 0xc0005c35d0?)
#       /usr/lib/golang/src/syscall/syscall_linux.go:90 +0x36 fp=0xc0005c3588 sp=0xc0005c3500 pc=0x4d06f6
# os.(*Process).blockUntilWaitable(0xc000526cc0)
#       /usr/lib/golang/src/os/wait_waitid.go:32 +0x87 fp=0xc0005c3660 sp=0xc0005c3588 pc=0x4fd587
# os.(*Process).wait(0xc000526cc0)
#       /usr/lib/golang/src/os/exec_unix.go:22 +0x28 fp=0xc0005c36c0 sp=0xc0005c3660 pc=0x4f5888
# os.(*Process).Wait(...)
#       /usr/lib/golang/src/os/exec.go:132
# os/exec.(*Cmd).Wait(0xc00021b4a0)
#       /usr/lib/golang/src/os/exec/exec.go:599 +0x4b fp=0xc0005c3738 sp=0xc0005c36c0 pc=0x79266b
# github.com/containers/common/libnetwork/netavark.(*netavarkNetwork).execNetavark(0xc0004e65a0, {0xc000585b40, 0x2, 0x10?}, {0x14af680, 0xc0004106c0}, {0x14d3880?, 0xc0000151a0})
#       /root/podman/vendor/github.com/containers/common/libnetwork/netavark/exec.go:140 +0xae5 fp=0xc0005c3a18 sp=0xc0005c3738 pc=0xfcef65
# github.com/containers/common/libnetwork/netavark.(*netavarkNetwork).Setup(0xc0004e65a0, {0xc000120040, 0x35}, {{{0xc000162a00, 0x40}, {0xc0001ef870, 0x10}, {0x0, 0x0, 0x0}, ...}})
#       /root/podman/vendor/github.com/containers/common/libnetwork/netavark/run.go:68 +0x5fd fp=0xc0005c3c08 sp=0xc0005c3a18 pc=0xfd47bd
# github.com/containers/podman/v4/libpod.(*Runtime).setUpNetwork.func1(...)
#       /root/podman/libpod/networking_common.go:67
# github.com/containers/podman/v4/libpod.(*Runtime).setUpNetwork(0xc0001be380, {0xc000120040, 0x35}, {{0xc000162a00, 0x40}, {0xc0001ef870, 0x10}, {0x0, 0x0, 0x0}, ...})
#       /root/podman/libpod/networking_common.go:76 +0x316 fp=0xc0005c3d08 sp=0xc0005c3c08 pc=0x116bbb6
# github.com/containers/podman/v4/libpod.(*Runtime).configureNetNS(0xc0001be380, 0xc000188800, {0x1a36a20, 0xc0004cdc80})
#       /root/podman/libpod/networking_linux.go:577 +0x268 fp=0xc0005c3e40 sp=0xc0005c3d08 pc=0x1175568
# github.com/containers/podman/v4/libpod.(*Runtime).createNetNS(0x0?, 0xc000188800)
#       /root/podman/libpod/networking_linux.go:617 +0x297 fp=0xc0005c3f60 sp=0xc0005c3e40 pc=0x1175bb7
# github.com/containers/podman/v4/libpod.(*Container).prepare.func1()
#       /root/podman/libpod/container_internal_linux.go:73 +0xbc fp=0xc0005c3fe0 sp=0xc0005c3f60 pc=0x114acfc
# runtime.goexit()
#       /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0005c3fe8 sp=0xc0005c3fe0 pc=0x478c21
# created by github.com/containers/podman/v4/libpod.(*Container).prepare
#       /root/podman/libpod/container_internal_linux.go:68 +0x1bd
# fatal error: runtime·unlock: lock count
# fatal error: runtime·unlock: lock count
# fatal error: runtime·unlock: lock count
# fatal error: runtime·unlock: lock count
```

@edsantiago
Member Author

Yuk.

FWIW I keep trying this:

# export PODMAN=$(pwd)/bin/podman
# while :;do bats test/system/[0123]*.bats test/system/400-unprivileged-access.bats || break; done

...with no failures. (No relevant failures, that is. Lots of assorted flakes).

My assumption is that it's not the --uidmap tests that are doing something bad; it's some sort of shy quiet test hidden away somewhere, screwing up some kind of podman state. But since I can't reproduce it, my assumption may be completely worthless.

@github-actions

github-actions bot commented Dec 3, 2022

A friendly reminder that this issue had no activity for 30 days.

@edsantiago
Member Author

Seen just now, minutes ago, in an in-flight PR. f36 root.

@github-actions

github-actions bot commented Jan 7, 2023

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented Feb 9, 2023

A friendly reminder that this issue had no activity for 30 days.

@vrothberg
Member

vrothberg commented Apr 24, 2023

@edsantiago still happening? I have no idea what's going on, unfortunately.

@edsantiago
Member Author

I see no instances since Jan 3

@vrothberg
Member

@edsantiago are the gods with us this time?

@edsantiago
Member Author

No instances in my logs since Jan 3. Loath as I am to close without understanding it, there's just nothing to go on here.

github-actions bot added the locked - please file new issue/PR label Jan 18, 2024
github-actions bot locked as resolved and limited conversation to collaborators Jan 18, 2024