systemctl reboot inside podman machine clears volume mounts #15976
Comments
@baude @ashley-cui PTAL
I think this is the expected behaviour, given that we currently make the mount calls manually via ssh after start; see podman/pkg/machine/qemu/machine.go, lines 661 to 690 at dca5ead.
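For readers without the code handy, the referenced lines boil down to running something like the following over ssh after `podman machine start` (a rough reconstruction for illustration, not the literal code; the 9p tag `vol0` and the `/Users/me` target are placeholders):

```sh
# approximate commands the qemu backend issues over ssh for each configured mount
# (exact flags live in pkg/machine/qemu/machine.go)
sudo chattr -i /                 # the FCOS root is immutable, so unlock it first
sudo mkdir -p /Users/me          # create the mount point
sudo chattr +i /                 # re-lock the root
sudo mount -t 9p -o trans=virtio,version=9p2000.L vol0 /Users/me
```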
If you want to fix this, you have to mount this inside the VM on boot. I think you could create the proper systemd units via ignition or append lines to fstab. Contributions welcome.
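For the systemd-unit route, a hand-written sketch of what such a unit could look like is below (hedged example: the unit name must match the mount path, and the 9p tag `vol0`, the `/Users/me` path, and the options are placeholders mirroring the ssh mount above):

```ini
# hypothetical /etc/systemd/system/Users-me.mount (name derived from the Where= path)
[Unit]
Description=Mount host volume into the podman machine at boot

[Mount]
What=vol0
Where=/Users/me
Type=9p
Options=trans=virtio,version=9p2000.L

[Install]
WantedBy=multi-user.target
```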
Thanks for pointing me in the right direction. I'll see what I can do with the time I have to play with this.
The implementation assumes "stop" and "start"; if someone converted the ssh commands to a systemd service, that would make them run on boot. Possibly making it harder to mount and umount while running?
I'm sorry, I'm not quite sure I'm following that last comment. So the implementation is designed for a physical
It was mostly an implementation detail. I think I abstracted the mount and umount into functions, so the same would be needed with the systemd service. If it is generated by the ignition, then it would be less obvious how to manipulate it at runtime.

EDIT: no, I must have been dreaming up that part. Basically the original machine never supported reboot properly, and the quick-and-dirty implementation just followed. Setting up a service to re-mount at boot would be the proper way to go*.

* it would also avoid a whole lot of quirks when waiting for ssh to first come online, especially in the mac version

The old systemd implementation I was thinking of was from #8016 (comment) (the actual code was 6412ed9).
Fair enough. I just noticed this happened overnight when I went to create a container this morning and it failed with the

So I'm hoping I can get to this sooner rather than later, especially if the podman backing machine is going to reboot on its own, which will then cause subsequent container runs that previously worked to fail again. I'm lucky I saw the error message, because I didn't run the reboot myself this time and I would have been super confused why all of a sudden the mounts I had set up disappeared.
I had a few moments to poke at this last Friday and played around with the idea of just dynamically writing the

I think what I'll need to do in order to get this working is modify the existing

The only thing that worries me about this is that for existing podman machines, if the user updates the podman remote and then starts their podman machine back up, none of their already-configured mounts will come up. I'm wondering if there's some kind of annotation that could be added to the machine itself somehow as part of the new init, and then we could mount via ssh if that annotation doesn't exist, to preserve backwards compatibility.

The next step for me is learning more about ignition configs and getting that part working first, though; then I'll submit a draft PR with that working and we could probably have some further discussion once we get to that point.
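As background on the ignition side, an Ignition config can embed systemd units directly, so a generated mount unit could ride along with the rest of the machine config. A minimal hand-written sketch (Podman actually builds its Ignition config programmatically in Go; the unit name, tag, and path here are placeholders) might look like:

```json
{
  "ignition": { "version": "3.2.0" },
  "systemd": {
    "units": [
      {
        "name": "Users-me.mount",
        "enabled": true,
        "contents": "[Unit]\nDescription=Mount host volume at boot\n\n[Mount]\nWhat=vol0\nWhere=/Users/me\nType=9p\nOptions=trans=virtio,version=9p2000.L\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
```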
I think systemd handles https://www.freedesktop.org/software/systemd/man/systemd.mount.html
I'm hoping to get some time to work on it this afternoon. I'm just wondering what the
This comment is mostly just notes to myself as to where I am and what I've tried, for when I get a chance to pick this back up.

I got some time to play around with systemd this morning and spent some time trying to dynamically create the systemd target .mount files: main...iamkirkbater:podman:add-systemd-init

The main takeaways that I gleaned from this -
I no longer believe the above paragraph, as it's still happening without any of my changes. I wonder if it's a transient networking issue with my workstation.

Edit again: It was a transient issue with my workstation; rebooting fixed it.

Where am I going from here - So the systemd mount files don't seem to be doing what we want, but when I played with /etc/fstab the other day I noticed that it worked just fine on reboot, and would actually make subsequent start commands fail. So I'm thinking that I might go back to that route, but via the ignition file; the FCOS VM doesn't really like it when systemd creates the mount files, but maybe if it's in /etc/fstab it would work again?

I guess what's really confusing me at this point is why the command to disable the filesystem integrity stuff works via ssh and then a reboot deletes the mountpoint, but maybe that's what the ssh command to flip the integrity stuff back on is doing.

It doesn't look like there's a way to create a folder in the root directory via ignition, as the FCOS Ignition docs seem to hint that you can't: https://docs.fedoraproject.org/en-US/fedora-coreos/storage/#_immutable_read_only_usr. So if the /etc/fstab stuff also doesn't work, I'm wondering if there needs to be a special mapping layer on podman-machine that does something like check

Thanks for tolerating me :)
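For reference, the fstab variant of the same mount would be a single line like the sketch below (the 9p tag and the target path are placeholders, mirroring the ssh mount options as far as I can tell; `nofail` keeps boot from hanging if the share isn't attached):

```
# hypothetical /etc/fstab entry for a host share exposed as the 9p tag "vol0"
vol0  /Users/me  9p  trans=virtio,version=9p2000.L,rw,nofail  0  0
```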
And of course I couldn't help myself but power through. I ended up figuring out how to get past the immutable filesystem stuff using the same commands that are being used with the ssh bits, just run as a prereq to the mount. Had to add

Because these live in the

I have a few todos left on that draft PR, like adding some more unit tests as well as adding some comments. I think the biggest problem with this will still be deciding how y'all want to help "migrate" existing VMs; I'm not sure what the appropriate path forward is and would be happy to help develop whatever that would be. If there's any immediate feedback, I'm usually around on Slack. Thanks!
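One plausible way to express that prereq as a unit rather than over ssh is a oneshot service ordered before the corresponding `.mount` (a sketch only; the unit names and the `/Users/me` path are placeholders, and the chattr/mkdir sequence mirrors what the ssh path appears to run):

```ini
# hypothetical prepare-Users-me.service; the matching Users-me.mount would add
# Requires=prepare-Users-me.service and After=prepare-Users-me.service
[Unit]
Description=Create mount point on the immutable FCOS root
Before=Users-me.mount

[Service]
Type=oneshot
RemainAfterExit=yes
# temporarily drop the immutable flag on /, create the mount point, restore the flag
ExecStart=/usr/bin/chattr -i /
ExecStart=/usr/bin/mkdir -p /Users/me
ExecStart=/usr/bin/chattr +i /

[Install]
WantedBy=multi-user.target
```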
A friendly reminder that this issue had no activity for 30 days.
@ashley-cui Is this still broken?
Still an issue, probably needs some design discussion on dynamic mounts
I thought someone added a PR to put the mounts in as systemd unit files, which would fire on reboot?
Draft PR: #16143
Yeah, that was me who started that PR. The problem I'm having with it right now is inconsistency. Sometimes the systemd files all fire in order, sometimes it takes multiple reboots. Once they're all applied they run fine, seemingly forever, but it's not a better UX at all to have to fight the podman machine on your first startup. I just haven't had a chance to poke at it again since the last update on it. This is my first venture into systemd, so I could just be missing something simple.
Since the mounting happens after start, restarting the machine after the reboot should solve the problem, although it's a bit tedious.
A friendly reminder that this issue had no activity for 30 days.
@iamkirkbater are you still working on this?
👋🏼 I haven't had a chance to work on this for a while due to conflicting priorities. I've just been restarting my podman machine with
I also think that I'm at the limits of my
A friendly reminder that this issue had no activity for 30 days.
@Luap99 @ashley-cui @baude Is this still an issue?
yes
Related issue for macOS. Using /etc/synthetic.conf I have a symlink from ~/store -> /store, and attempting to mount /store/pipeline with the -v CLI option for the run command fails with the same
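For anyone unfamiliar with that mechanism, /etc/synthetic.conf entries are how macOS creates top-level directories and symlinks on the otherwise read-only system volume. A sketch of the kind of entry described above follows; the exact column format should be double-checked against the synthetic.conf man page:

```
# hypothetical /etc/synthetic.conf entry: creates /store as a symlink to /Users/me/store at boot
# the two columns must be separated by a tab, not spaces
store	Users/me/store
```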
Does anyone know whether this issue may be related to podman hanging often when my MacBook wakes from sleep? I have also been experiencing this mount issue, but I only notice it during the few instances where podman is not hanging after I step away for a while. Whenever I open my laptop, any "real" podman command hangs (e.g.,

I have theorized that the podman hanging issue may be related to the mount issue, presuming that the socket mount is failing when podman hangs, but I'm not sure if that's the case. Could anyone suggest the best way for me to debug the hanging issue?
This is a really annoying issue given that it also affects automatic updates of the Podman machine. (I assume it's the same bug). Zincati will apply updates to the machine at random times by default (or at defined times if you configure a custom update strategy). As part of that, the machine will get rebooted and all mounts will be lost, e.g. leading to VS Code dev containers crashing.
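For anyone bitten by the auto-update reboots specifically, Zincati's update strategy is configurable on FCOS. A sketch of a periodic maintenance window, based on my reading of the Fedora CoreOS docs (the file name and values are illustrative), looks like:

```toml
# hypothetical /etc/zincati/config.d/55-updates-strategy.toml inside the machine
[updates]
strategy = "periodic"

# only allow update reboots in a weekend window
[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "23:00"
length_minutes = 60
```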
I'd received a solution in this issue for enabling the mounting of special symbolic links, which requires additional commands when creating the podman machine. It works fine until I reboot or the machine is restarted. It turns out the settings I needed can be added in a containers.conf file. Until I learned this, I used the script below to reinitialize podman's machine:

Allowing some additional mount commands to be run when initializing the machine might allow this issue to be solved as well.
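For context on the containers.conf route mentioned above, recent Podman versions read machine defaults from a `[machine]` table, so mounts can be declared once instead of passed to every `podman machine init`. A hedged sketch follows; the paths are placeholders, and the exact keys should be checked against the containers.conf man page for your version:

```toml
# hypothetical ~/.config/containers/containers.conf
[machine]
# host:target pairs mounted into newly initialized machines
volumes = ["/Users/me:/Users/me", "/store/pipeline:/store/pipeline"]
```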
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I'm running podman on a new M1 mac, and I need to enable the qemu-user-static package in order to support multiple architectures of packages. I've noticed that when I `systemctl reboot` from inside the podman machine, as the machine comes back online it does not have my defined mounts anymore.

Steps to reproduce the issue:

1. `podman machine init -v $HOME:$HOME`
2. `podman machine ssh`
3. `ls /` - note that you see a `Users` directory
4. `sudo systemctl reboot`
5. `podman machine ssh`
6. `ls /` - note there is no longer a `Users` directory

Describe the results you received:
The mounts defined as part of the init process or in containers.conf are not present when rebooting from within the machine.
Describe the results you expected:
I'd expect the mounts to still be present after a reboot.
Additional information you deem important (e.g. issue happens only occasionally):
The workaround is simple: you just `podman machine stop && podman machine start`, but it's still an extra step when you'd expect this to just come back online with the correct mounts.

Output of `podman version`:

Output of `podman info`:

Package info (e.g. output of `rpm -q podman` or `apt list podman` or `brew info podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
This is running on an M1 Pro MacBook.
I'm happy to provide any additional details as needed. Thanks so much for all of your work on this; I'm excited that this is actually a viable alternative to docker now!