
vz: use DiskImageCachingModeCached (rumored to fix disk corruption on ARM) #2026

Merged
merged 1 commit into lima-vm:master
Nov 24, 2023

Conversation

AkihiroSuda
Member

Switch away from `DiskImageCachingModeAutomatic` to `DiskImageCachingModeCached`, as this is rumored to fix disk corruption on ARM.

Expected to fix issue #1957 (for vz)

… ARM)

Switch away from `DiskImageCachingModeAutomatic` to `DiskImageCachingModeCached`,
as this is rumored to fix disk corruption on ARM
(See recent comments in utmapp/UTM issue 4840)

Expected to fix issue 1957 (for vz)

Signed-off-by: Akihiro Suda <[email protected]>
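For context, the change amounts to picking a different caching mode when the disk image is attached to the VM. A minimal sketch, assuming the Code-Hex/vz Go bindings that Lima uses (the exact call site and variable names in Lima may differ; this is not runnable outside macOS with the Virtualization framework):

```go
// Sketch: attach a disk image with explicit caching/synchronization modes.
// DiskImageCachingModeCached is what this PR switches to; the commented-out
// constant is the previous behavior.
attachment, err := vz.NewDiskImageStorageDeviceAttachmentWithCacheAndSync(
	diskPath,
	false,                                // read-write
	vz.DiskImageCachingModeCached,        // was: vz.DiskImageCachingModeAutomatic
	vz.DiskImageSynchronizationModeFsync, // flush writes with fsync
)
if err != nil {
	return err
}
blockDev, err := vz.NewVirtioBlockDeviceConfiguration(attachment)
```

With `DiskImageCachingModeAutomatic`, the Virtualization framework chooses the caching behavior itself; forcing `DiskImageCachingModeCached` trades some theoretical I/O performance for the stability reported in the linked UTM discussion.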
@AkihiroSuda
Member Author

Merging, for ease of testing

@AkihiroSuda AkihiroSuda merged commit f98c399 into lima-vm:master Nov 24, 2023
24 checks passed
@hasan4791
Contributor

hasan4791 commented Nov 24, 2023

Seems to be working well so far. Will monitor for at least a week. 👍🏽
Template being used is "podman-rootful".

@AkihiroSuda AkihiroSuda mentioned this pull request Nov 25, 2023
@rpkoller

rpkoller commented Nov 26, 2023

I got notice of this issue over in #1957. I am using Colima, and since the release of Colima 0.6.0 I ran into VMs that got corrupted all of a sudden. At first my suspicion was that the root cause was in the changes for 0.6.0, but yesterday I started to test in plain Lima and today I ran into the same issue. After finding this PR I updated to HEAD as well. I have been testing for a few hours now with no issues so far, and will keep testing over the coming days. This looks promising, because under Lima the error sometimes happened after starting a single project in a completely freshly generated instance, but since the upgrade to HEAD it looks good. I am using the "docker" template. Thank you for spotting this!

Update: I have been using the HEAD version extensively for the last four days and have not run into a single issue. With the changes in this PR everything works like a charm.

@EdwardMoyse

As noted in #1957, this seems to be helping me: no corruptions with a vz VM since I moved to HEAD (and it was happening hourly before).

@balajiv113
Member

@AkihiroSuda I believe this change will cause disk I/O performance issues.

We will see reduced disk I/O performance.

@balajiv113
Member

Should we think about providing an option to switch back to the older mode for those who prefer performance over stability?

@AkihiroSuda
Member Author

I guess nobody wants disk corruption?

@balajiv113
Member

If I am not wrong, this corruption happens only on ARM. This change will now degrade performance for Intel users as well.

@balajiv113
Member

We can go ahead. If someone complains about performance, we can re-evaluate at that point.

@rpkoller

rpkoller commented Nov 30, 2023

I've done some performance testing based on the puppeteer script at https://github.com/ddev/ddev-puppeteer, created by @rfay for measuring the actual performance of the different Docker providers (OrbStack, Docker Desktop, Rancher Desktop, and Colima) with DDEV for the blog post https://ddev.com/blog/docker-performance-2023/. Following that protocol, I tested with Lima 0.18.0 as well as HEAD-a8c703b on my M1 Pro with 32 GB of RAM, using a profile with 4 CPUs, 8 GB of RAM, and 20 GB of disk space assigned.

Colima 0.6.6 - Lima 0.18.0 - Mutagen enabled
installtime: 20.025s

Colima 0.6.6 - Lima 0.18.0 - Mutagen NOT enabled
installtime: 31.543s

Colima 0.6.6 - Lima HEAD-a8c703b - Mutagen enabled
installtime: 19.520s

Colima 0.6.6 - Lima HEAD-a8c703b - Mutagen NOT enabled
installtime: 31.047s

From my non-developer point of view, there is no performance penalty with this PR; it is even slightly faster, though overall there is no significant difference either way. I just thought the results might be relevant and of interest in the context of this issue.

@balajiv113
Member

@rpkoller Thanks for verifying this.

We should be good in that case 👍
