podman machine init: panic: nil pointer dereference #23281

Closed
edsantiago opened this issue Jul 15, 2024 · 2 comments · Fixed by #23323
Labels: flakes (Flakes from Continuous Integration), machine, locked - please file new issue/PR

Comments

edsantiago (Member) commented Jul 15, 2024

Happens fairly often when running system tests in parallel. I have no idea what the trigger is:

# [11:43:36.833706343] $ ..../bin/podman machine init --image-path=/dev/null mt115_kszernsr
# [11:43:36.874815476] Flag --image-path has been deprecated, use --image instead
# panic: runtime error: invalid memory address or nil pointer dereference
# [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x48e49e]
#
# goroutine 1 [running]:
# io.copyBuffer({0x1fb70a0, 0xc0005ba108}, {0x0, 0x0}, {0x0, 0x0, 0x0})
# 	/usr/lib/golang/src/io/io.go:429 +0x17e
# io.Copy(...)
# 	/usr/lib/golang/src/io/io.go:388
# github.com/containers/podman/v5/pkg/machine/compression.(*genericDecompressor).sparseOptimizedCopy(0x50b37e?, {0x1fbdd18, 0xc00008a0e8}, {0x0, 0x0})
# 	..../pkg/machine/compression/generic.go:85 +0xf6
# github.com/containers/podman/v5/pkg/machine/compression.(*uncompressedDecompressor).decompress(0xc000594230?, {0x1fbdd18?, 0xc00008a0e8?}, {0x0?, 0x0?})
# 	..../pkg/machine/compression/uncompressed.go:17 +0x29
# github.com/containers/podman/v5/pkg/machine/compression.runDecompression({0x1fc93c0, 0xc000243aa0}, {0xc000594230, 0x50})
# 	..../pkg/machine/compression/decompress.go:97 +0x4f2
# github.com/containers/podman/v5/pkg/machine/compression.Decompress(0xc000594230?, {0xc000594230, 0x50})
# 	..../pkg/machine/compression/decompress.go:43 +0x86
# github.com/containers/podman/v5/pkg/machine/stdpull.(*StdDiskPull).Get(0xc0002e8130)
# 	..../pkg/machine/stdpull/local.go:29 +0x145
# github.com/containers/podman/v5/pkg/machine/shim/diskpull.GetDisk({0x7ffdbc39b003?, 0x9?}, 0xc0005b1580?, 0xc000012180?, 0x0?, {0x7ffdbc39b00d?, 0xe?})
# 	..../pkg/machine/shim/diskpull/diskpull.go:31 +0x11c
# github.com/containers/podman/v5/pkg/machine/qemu.(*QEMUStubber).GetDisk(0x1c76fee?, {0x7ffdbc39b003?, 0xc000607870?}, 0x0?, 0x3?)
# 	..../pkg/machine/qemu/stubber.go:387 +0x3b
# github.com/containers/podman/v5/pkg/machine/shim.Init({0x6, 0x64, {0x0, 0x0}, {0x7ffdbc39b003, 0x9}, {0xc0005afc30, 0x1, 0x1}, {0x0, ...}, ...}, ...)
# 	..../pkg/machine/shim/host.go:154 +0x539
# github.com/containers/podman/v5/cmd/podman/machine.initMachine(0x2c8cec0, {0xc0005b1240, 0x1, 0x2})
# 	..../cmd/podman/machine/init.go:219 +0x6d8
# github.com/spf13/cobra.(*Command).execute(0x2c8cec0, {0xc0000400d0, 0x2, 0x2})
# 	..../vendor/github.com/spf13/cobra/command.go:985 +0xaca
# github.com/spf13/cobra.(*Command).ExecuteC(0x2c80580)
# 	..../vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
# github.com/spf13/cobra.(*Command).Execute(...)
# 	..../vendor/github.com/spf13/cobra/command.go:1041
# github.com/spf13/cobra.(*Command).ExecuteContext(...)
# 	..../vendor/github.com/spf13/cobra/command.go:1034
# main.Execute()
# 	..../cmd/podman/root.go:115 +0xb4
# main.main()
# 	..../cmd/podman/main.go:61 +0x4b2
# [11:43:36.877805265] [ rc=2 ]
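
For context, a standalone toy program (hypothetical, not podman code) that reproduces the same class of crash seen in the trace above: handing io.Copy a nil io.Reader makes copyBuffer call Read on a nil interface, which is the "invalid memory address or nil pointer dereference" SIGSEGV.

package main

import "io"

// nopWriter is a throwaway writer with no ReadFrom method, so io.Copy has
// to call src.Read itself, similar to the copy path in the trace.
type nopWriter struct{}

func (nopWriter) Write(p []byte) (int, error) { return len(p), nil }

func main() {
	// src is a nil interface, like the reader the progress bar handed back.
	var src io.Reader
	io.Copy(nopWriter{}, src) // panic: invalid memory address or nil pointer dereference
}
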
edsantiago added the flakes (Flakes from Continuous Integration) and machine labels Jul 15, 2024
Luap99 (Member) commented Jul 18, 2024

I was able to reproduce with something like this:

for i in {1..30}; do ./bin/podman machine init --image-path=/dev/null m$i &  done

However, I am very lost as to what is happening here...

Luap99 (Member) commented Jul 18, 2024

The parallel run isn't necessary at all and is likely a red herring. This seems to be a fairly ordinary race that depends on timing.

Luap99 self-assigned this Jul 18, 2024
Luap99 added a commit to Luap99/libpod that referenced this issue Jul 18, 2024
When the file is empty, it is possible our code panics, as bar.ProxyReader returns nil when the bar is finished, which is the case for a 0-size file since it doesn't have to read anything. However, as this happens on different goroutines, it is a race and most of the time it still works.

To fix this simply skip the progress bar setup for empty files.

While at it fix the deprecated argument in the tests.

Fixes containers#23281

Signed-off-by: Paul Holzinger <[email protected]>
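
A minimal sketch of the approach the commit message describes (illustrative only, not the actual podman change; it assumes an mpb-style progress bar and a hypothetical copyWithProgress helper): skip the progress bar setup entirely for a zero-size file, so a possibly-nil ProxyReader is never handed to io.Copy.

package main

import (
	"io"
	"os"

	"github.com/vbauerster/mpb/v8"
)

// copyWithProgress is a hypothetical helper. For a zero-size source the
// progress bar is skipped entirely: a bar that is already complete may
// return a nil ProxyReader, and passing that nil reader to io.Copy is
// exactly the panic reported in this issue.
func copyWithProgress(dst io.Writer, src io.Reader, size int64, p *mpb.Progress) error {
	if size == 0 {
		_, err := io.Copy(dst, src) // nothing to show progress for
		return err
	}
	bar := p.AddBar(size)
	proxy := bar.ProxyReader(src) // wraps src and advances the bar as bytes are read
	defer proxy.Close()
	_, err := io.Copy(dst, proxy)
	return err
}

func main() {
	p := mpb.New()
	defer p.Wait()
	// Copying an empty source (like --image-path=/dev/null): with the
	// guard above this is a no-op instead of a nil pointer dereference.
	f, err := os.Open("/dev/null")
	if err != nil {
		os.Exit(1)
	}
	defer f.Close()
	if err := copyWithProgress(io.Discard, f, 0, p); err != nil {
		os.Exit(1)
	}
}
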
stale-locking-app bot added the locked - please file new issue/PR label Oct 17, 2024
stale-locking-app bot locked as resolved and limited conversation to collaborators Oct 17, 2024