feat: mount additional disks when using vz #1405
Conversation
IMO the goal of additionalDisks is to share a disk between instances; doing a conversion from QCOW2 to RAW defeats that purpose.
My take on this would be to provide support for creation of additionalDisks with formatType.
If raw is used, it can be shared across drivers. About the default, my take would be to keep qcow2, as it has optimal storage.
Thanks for the review, @balajiv113, I pushed a new revision to incorporate your comments.
While QCOW2 is better optimized for storage, I wonder if RAW as a default is still viable, given that RAW images can also be sparse? RAW images are a little harder to deal with in some ways (for example …). The only reason I'm proposing this is for interoperability between QEMU and Virtualization.framework VMs, but it's possible that Lima can just leave this for individual users to decide too. Instead of automatic runtime conversion, I can: …
Let me know if that sounds better, thanks!
How does it defeat the purpose? The images would still work with QEMU. I just tested this by switching an Alpine instance from QEMU to VZ, which converted the disk. What is the functionality we would lose by switching to RAW format?
With automatic conversion we will end up with 2 different disks (one in qcow2 format and the other raw). From vz, whatever we change won't be reflected in the qcow2 disk. As you said, we can mount it in QEMU with raw format as well; in that case it is fine.
Oh, I didn't catch this. To me "automatic conversion" meant: the first time the disk is mounted to a VZ instance it will be replaced with the converted RAW disk, and from then on both QEMU and VZ instances would only use the RAW disk going forward. This should be transparent to the QEMU instances and guarantee data consistency, because there is always only a single copy of the data. I didn't look at the code in the PR yet, so I didn't realize this is not how it was implemented.

My main concern is about what happens if there is an error during the conversion (e.g. you run out of disk space). The old QCOW2 disk must remain in a working state until the conversion has completed successfully.

Storage size

I don't have any knowledge about what advantages QCOW2 would have for storage size. Given that both formats have support for sparse files, how is QCOW2 optimized for storage? Do you have any references I can read up on? Any comparison of storage efficiency and access speed (e.g. if QCOW2 stores data in compressed format, I would expect the disk to be slower than RAW)?
My bad, I am the culprit here. I overlooked and missed that the PR already takes care of moving the QEMU disk to a new extension, .qcow2. So it is as you said.
Again my bad :D I was looking into … @jandubois
This is a good point though: if you are only using QEMU, and you are not excluding the Lima VMs and disks from your backups, then you may end up backing up the full sparse file for a RAW disk if you have overallocated it significantly and much of the space remains permanently unused. So I think it makes sense to allocate additional disks as QCOW2 by default (but provide a way to choose the format at creation time).
So this would be my recommendation, as long as the auto-conversion to RAW is done in a way that any failure keeps the QCOW2 copy unharmed. PS: Thinking about this some more, I think we are already currently at risk of damaging the VM disk if we ever run out of host disk space while trying to commit additional sparse data blocks. I wonder if …
Thank you for the great discussion. QCOW2 disks are definitely more user friendly, in that users can just use default/familiar tools like …

I updated the code to allow a new format option when creating additional disks. I also added a bunch of information on how to best deal with sparse RAW files to the docs. Let me know if it looks good, or if I should move that info to another location.
@pendo324 On macOS you can use …
Oh wow, I couldn't find anything like that on macOS when I looked through the docs for …
I used .vdi format for the VirtualBox driver, it should support raw as well - as long as you add a .img suffix to the file... |
I can add vdi to the list of supported formats, or add all of the formats from this list. Might be more suitable in a separate PR though?
LGTM 👍
I don't think it is required as of now.
Thanks
Hey @jandubois, please let me know if there are any further changes you think should be made to get this merged, when you get a chance. Thanks!
@pendo324 I've been extremely busy the last 2 weeks, so I barely looked at any outstanding code reviews; sorry about that!

I've looked at the PR now once more, and there is one thing I don't get: after converting the qcow2 disk to raw format, you rename the old disk to datadisk.qcow2 but keep it around:

$ ls -l ~/.lima/_disks/foo/
total 9360
-rw-r--r--@ 1 jan staff 10737418240 17 Mar 21:36 datadisk
-rw-r--r--@ 1 jan staff      196768 17 Mar 21:35 datadisk.qcow2
lrwxr-xr-x@ 1 jan staff          24 17 Mar 21:35 in_use_by -> /Users/jan/.lima/default

What is the point of keeping the old disk around? It will never be accessed by Lima again, and there is no way to delete it with any lima commands, except when you delete the raw disk with limactl disk delete.

Probably unrelated to this PR: the first time I stopped the VM and tried to delete the disk, the …

$ l stop
INFO[0000] Sending SIGINT to hostagent process 205
INFO[0000] Waiting for the host agent and the driver processes to shut down
INFO[0000] [hostagent] 2023/03/17 21:17:17 tcpproxy: for incoming conn 127.0.0.1:62994, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: no route to host
INFO[0000] [hostagent] Received SIGINT, shutting down the host agent
INFO[0000] [hostagent] Shutting down the host agent
INFO[0000] [hostagent] Stopping forwarding "/run/lima-guestagent.sock" (guest) to "/Users/jan/.lima/default/ga.sock" (host)
INFO[0000] [hostagent] Unmounting disk "foo"
INFO[0000] [hostagent] Unmounting "/Users/jan"
INFO[0000] [hostagent] Unmounting "/tmp/lima"
INFO[0000] [hostagent] Shutting down VZ
ERRO[0000] [hostagent] dhcp: unhandled message type: RELEASE
INFO[0000] [hostagent] [VZ] - vm state change: stopped
$ l disk delete foo
WARN[0000] Skipping deleting disk "foo", disk is referenced by one or more non-running instances: ["default"]
WARN[0000] To delete anyway, run "limactl disk delete --force foo"
No problem, thanks for taking a look at it again 👍.

Good point, I just updated the PR to delete the old qcow2 disk after swapping the RAW disk back to the original datadisk name.

Not sure why this would happen; I've never seen that before, but I'll try to look into it.
Accidentally removed some reviewers, and it seems like I can't re-request reviews for some reason. All tests passed; please take a look and let me know if you have any additional requests when you get a chance, thanks! @AkihiroSuda @jandubois @balajiv113
Thanks, LGTM.
I have one comment, but I'm also fine with merging the PR as-is.
qcow2Path := fmt.Sprintf("%s.qcow2", extraDiskPath)
if err = imgutil.QCOWToRaw(extraDiskPath, rawPath); err != nil {
	return fmt.Errorf("failed to convert qcow2 disk %q to raw for vz driver: %w", diskName, err)
}
Technically we don't know that the current format is qcow2, except for the fact that limactl disk create currently only creates either raw or qcow2 formatted volumes. So this code could be kept more generic, like this:
-qcow2Path := fmt.Sprintf("%s.qcow2", extraDiskPath)
-if err = imgutil.QCOWToRaw(extraDiskPath, rawPath); err != nil {
-	return fmt.Errorf("failed to convert qcow2 disk %q to raw for vz driver: %w", diskName, err)
-}
+oldFormatPath := fmt.Sprintf("%s.%s", extraDiskPath, extraDiskFormat)
+if err = imgutil.ConvertToRaw(extraDiskPath, rawPath); err != nil {
+	return fmt.Errorf("failed to convert %s disk %q to raw for vz driver: %w", extraDiskFormat, diskName, err)
+}
+...
Good point, I created a new issue to track this; I'll implement it after we get this merged, if that's alright: #1438
Issue #, if available: Closes #218

Description of changes: Adds support for Lima's Virtualization.framework and Rosetta features, through the use of new finch.yaml configuration options (`vmType` and `rosetta`). `vmType` also sets the `mountType` to `virtiofs`, since that is only available when using Virtualization.framework. To support this, a few things needed to be changed on our side:
- Disk migration. Although lima-vm/lima#1405 (which adds persistent disk support to `vmType: vz` in Lima) will auto-convert persistent disks from QCOW2 to RAW when they are attempted to be used with `vmType: vz`, because of the way our disks are persisted with symlinks, we also have to do this
- Move some hardcoded lima config from finch.yaml to be programmatically toggled in `pkg/config/lima_config_applier`
  - This allows things like the qemu user mode scripts to be installed only when they are needed. Installing them all the time, and then trying to use Rosetta as a binfmt_misc handler, causes conflicts
  - This also opens up possibilities of future customization based on Finch's config values

Currently, because lima-vm/lima#1405 is not merged yet, this PR references [a specific branch of my own finch-core repo](https://github.com/pendo324/finch-core/tree/lima-with-vz-extra-disk) which includes the Lima change already. It also edits the Makefile to do a build of Lima from the submodule directly, and overwrite the Lima downloaded from the archive. These changes will be removed once the Lima change is merged upstream.

Testing done:
- unit tests
- e2e tests
- local testing on Intel and Apple Silicon machines
- [x] I've reviewed the guidance in CONTRIBUTING.md

License Acceptance: By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Signed-off-by: Justin Alvarez <[email protected]>
This PR adds support for using the additional disks feature with Virtualization.framework. I noticed it was missing, so I figured I'd add it.
Considerations:
- Should additional disks be automatically converted to RAW when using `vmType: vz`? There's already precedent for runtime conversion of disks in imgutil; just wanted to make sure we want to do it for additional disks before adding it.
- `imgutil.QCOWToRaw` … (…), causing data loss, which is why there's an intermediate file right now. Not sure if there's a better way to do this?

Test: …