Avoid BUG in migrate_folio_extra #16568
Conversation
Linux page migration code won't wait for writeback to complete unless it needs to call release_folio. Call SetPagePrivate wherever PageUptodate is set and define .release_folio, to cause fallback_migrate_folio to wait for us.

Signed-off-by: tstabrawa <[email protected]>
@tstabrawa thanks for digging in to this and the clear analysis of the bug! I agree, setting PagePrivate() and registering the .releasepage and .invalidatepage callbacks sure does look like the best way to handle this. Plus it has the advantage of aligning the ZFS code a bit more closely with the other kernel filesystems, which we can hope will help us avoid this kind of bug in the future.
Interestingly, I see that the private page check in fallback_migrate_folio() was already dropped by torvalds/linux@0201ebf for the 6.5 kernel, so filemap_release_folio() is now called unconditionally.
Unrelated to the fix, it looks like the CI had an infrastructure glitch cloning the repository on some of the builders. I'll go ahead and resubmit those builds once the running tests wrap up.
Thanks for the quick PR approval!
I don't think it matters RE: this pull request, but it may be worth being aware of anyway. The patch you reference didn't really remove the private-page check; it just moved it to folio_needs_release, which is now called by filemap_release_folio.
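For context, here is a condensed paraphrase of the arrangement described above (simplified from memory of the 6.5-era mm code, not a verbatim copy; check the kernel sources for the exact helper contents):

```c
/* Simplified paraphrase, not verbatim kernel code. */
static inline bool folio_needs_release(struct folio *folio)
{
	/* The private-page check now lives here; the real helper also
	 * considers a per-mapping "always release" flag. */
	return folio_test_private(folio);
}

bool filemap_release_folio(struct folio *folio, gfp_t gfp)
{
	if (!folio_needs_release(folio))
		return true;	/* nothing for the filesystem to drop */
	if (folio_test_writeback(folio))
		return false;	/* still under writeback, caller must back off */

	/* ...otherwise ask the filesystem via its release_folio callback... */
	return folio->mapping->a_ops->release_folio(folio, gfp);
}
```

So the early-return on pages without private data still exists; it has simply moved one level down.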
Thanks again for the PR approval and merge! Let me know if you run into further trouble related to this problem and/or change.
Yeah, sadly this hasn't fixed anything on my end. Just switched back to ZFS yesterday. Same issue occurs. The stack trace is in the original issue, which will need to be reopened, I guess.
@RodoMa92 what version of ZFS and kernel did you manage to hit this with again?
@behlendorf From this comment on #15140, it sounds like @RodoMa92 was using
ZFS-dkms 2.2.6, and both latest LTS + latest supported kernel.
I can easily reproduce this, so if any of you have any idea on what might also trigger this, feel free to feed me patches to test. I should be able to test them quickly; I seriously want this fixed, but after taking a deeper look at it a couple of days ago, I might not be able to do it by myself.
Oh, god damn it, I checked the module sources and this change is not there? I'll triple-check my module install now, but I don't understand how I can be on 2.2.6 and not have this patchset included, unless it's not merged into stable yet.
Yep, it wasn't clear to me that this change wasn't included in 2.2.6. I updated to git, checked that the change was actually there this time, and now my VMs work 100% without any crashes. Sorry for the noise, but now I can also confirm that it's fixed. Thanks a lot for this! :)
Spoke too soon; it seems that zvols are still affected. I'll revert back to qcow2 and test a lot more with that to see if it's just zvols that cause this now or if it was just luck of the draw with this patch enabled.
I might have a clue about what is going on. For now I don't seem to be able to reproduce it anymore; I might write a small script to exercise this automatically for some time, to triple-check whether this is my remaining issue.
Nah, it's still present. It's just harder to hit now, but I can still reproduce the original bug every 20 tries on my end. @behlendorf If you can reopen #15140, that would be lovely. This hasn't completely fixed the issue, sadly.
@RodoMa92 Thanks for your persistence (and patience!) in helping get to the bottom of this problem. I think I was able to identify one more way that

Upon looking into writing a

Note: The kernel page migration code will actually wait for any previously-started writeback during the "unmap" step, which could explain why unbatched retries didn't hit this problem prior to kernel 6.3.0. For example, in 6.2.16, unmap_and_move will wait for writeback to complete each time it retries migration on a page (such as would be the case if

A side effect of my new patch could be that ZFS page migration gets noticeably faster and/or more effective (as it would no longer be skipping and/or starting writeback on all dirty pages being migrated).

When you have the chance, would you please try the new patch (commit 47700cf on my fix_15140_take2 branch) and let us know if the problem remains? If you'd like to be sure you're running the correct patch, you could follow the instructions in my May 31 comment on #15140 (except, replace the old
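As a reference for the note above, this is roughly what the pre-6.3 behavior looks like, condensed into a small helper. The helper name is made up for illustration; the real logic lives inline in the kernel's unmap/move path, so treat this as an approximation rather than kernel source:

```c
#include <linux/migrate_mode.h>
#include <linux/pagemap.h>

/* Illustrative condensation of the 6.2-era unmap-step behavior described
 * above: a folio still under writeback is either skipped for now or, in
 * MIGRATE_SYNC mode, waited on, so each retry of a page's migration was
 * another opportunity for its writeback to finish. */
static int wait_or_skip_writeback(struct folio *src, enum migrate_mode mode)
{
	if (!folio_test_writeback(src))
		return 0;
	if (mode != MIGRATE_SYNC)
		return -EBUSY;		/* skip this folio, try again later */
	folio_wait_writeback(src);	/* sleep until writeback completes */
	return 0;
}
```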
Loaded it half an hour ago on my system and for now it seems to work like your initial patch did, no crash yet. I haven't seen a huge difference in the time it takes to boot up (maybe a second less on dropping caches?), but not having any kernel oops is far more important for me :P I'll keep you updated, but as an initial impression it looks quite good :)
After 20 runs I still haven't been able to reproduce it. I'll probably test it further, but to me this looks like a proper workaround for my original bug. Thanks a lot for digging into and fixing this issue!

Marco
Thanks for testing the proposed fix. Assuming I don't hear of any further problems, I plan to file a new PR later on tonight.
@tstabrawa I'm looking forward to the PR. That is a much cleaner fix.
This reverts commit b052035.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Atkinson <[email protected]>
Signed-off-by: tstabrawa <[email protected]>
Closes #16568
Closes #16723

Avoids using fallback_migrate_folio, which starts unnecessary writeback (leading to BUG in migrate_folio_extra).

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Atkinson <[email protected]>
Signed-off-by: tstabrawa <[email protected]>
Closes #16568
Closes #16723
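The commit message above says the fix avoids fallback_migrate_folio. A common way for a Linux filesystem to do that is to register a migrate_folio callback in its address_space_operations, so move_to_new_folio() uses it directly instead of the fallback path. A minimal sketch of that pattern follows; the structure name is illustrative, and the actual diff in the referenced commits may differ:

```c
#include <linux/fs.h>
#include <linux/migrate.h>

/* Illustrative sketch, not the actual openzfs change.  With a migrate_folio
 * callback registered, dirty page-cache folios are migrated directly (the
 * dirty flag is transferred to the destination folio) rather than going
 * through fallback_migrate_folio(), which starts writeback and can later
 * trip BUG_ON(folio_test_writeback()) in migrate_folio_extra(). */
static const struct address_space_operations example_aops = {
	/* ... existing callbacks elided ... */
	.migrate_folio	= migrate_folio,	/* kernel's generic handler */
};
```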
Linux page migration code won't wait for writeback to complete unless it needs to call release_folio. Call SetPagePrivate wherever PageUptodate is set and define .release_folio, to cause fallback_migrate_folio to wait for us.

Motivation and Context
Thanks for considering this PR.
I came across issue #15140 from the Proxmox VE 8.1 release notes, and gave it a good long look over. As far as I can tell, what's happening is that the Linux kernel page migration code is starting writeback on some pages, not waiting for writeback to complete, and then throwing a BUG when it finds that pages are still under writeback.
Pretty much all of the interesting action happens in fallback_migrate_folio(), which doesn't show up in the stack traces listed in #15140, but suffice it to say that it's called from move_to_new_folio(), which does appear in the stack traces. What appears to be happening in the case of the crashes described in #15140 is that fallback_migrate_folio() is being called upon dirty ZFS page-cache pages, so it's starting writeback by calling writeout(). Then, since ZFS doesn't store private data in any page cache pages, it skips the call to filemap_release_folio() (because folio_test_private() returns false), and immediately calls migrate_folio(), which in turn calls migrate_folio_extra(). Then, at the beginning of migrate_folio_extra(), it BUGs out because the page is still under writeback (folio_test_writeback() returns true).
Notably, if the page did have private data, then fallback_migrate_folio() would call into filemap_release_folio(), which would return false for pages under writeback, causing fallback_migrate_folio() to exit before calling migrate_folio().
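For readers following the flow above, here is a condensed paraphrase of that path (simplified from memory of mm/migrate.c; not a verbatim copy of the kernel sources):

```c
/* Condensed paraphrase of the mm/migrate.c path described above. */
static int fallback_migrate_folio(struct address_space *mapping,
		struct folio *dst, struct folio *src, enum migrate_mode mode)
{
	if (folio_test_dirty(src)) {
		if (mode != MIGRATE_SYNC)
			return -EBUSY;
		/* Starts writeback on the dirty folio; the migration attempt
		 * is retried later, when the folio is clean but possibly
		 * still under writeback. */
		return writeout(mapping, src);
	}

	/* Only folios with private data are asked whether they can be
	 * released; filemap_release_folio() returns false for a folio that
	 * is still under writeback, aborting this migration attempt. */
	if (folio_test_private(src) &&
	    !filemap_release_folio(src, GFP_KERNEL))
		return mode == MIGRATE_SYNC ? -EAGAIN : -EBUSY;

	/* ZFS pages carry no private data, so execution reaches here even
	 * while writeback is in flight, and migrate_folio() then calls
	 * migrate_folio_extra(), which BUGs on folio_test_writeback(). */
	return migrate_folio(mapping, dst, src, mode);
}
```

On kernels 6.5 and later the folio_test_private() test moves into filemap_release_folio() itself, as noted in the review comments above.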
So, in summary, in order for the BUG to happen, a few things need to be true:

- the filesystem's dirty page-cache pages get migrated through fallback_migrate_folio(), so writeout() starts writeback on them;
- the filesystem doesn't store private data in its page-cache pages, so filemap_release_folio() is skipped; and
- the filesystem's writeback is asynchronous, so the page can still be under writeback when migrate_folio() / migrate_folio_extra() runs.
I went through the code for all of the filesystems in the Linux kernel and didn't see any that met all three conditions. Notably, pretty much all traditional filesystems store buffers in page private data. Those filesystems that don't store buffers either store something else in page_private (e.g. shmem/tmpfs, iomap), or don't do asynchronous writeback (e.g. ecryptfs, fuse, romfs, squashfs). So it would appear as if ZFS may be the only filesystem that experiences this particular behavior. As far as I can tell, the above-described behavior goes back all the way to when page migration was first implemented in kernel 2.6.16.
The way I see it, there are two ways to make the problem go away:

- Change the kernel page-migration code so that it waits for writeback to complete even for pages with no private data.
- Change ZFS so that its page-cache pages look like they have private data (and provide a .release_folio handler), so that the existing kernel code waits for writeback.
I assume the latter may be preferable (even if only temporarily) so that ZFS can avoid this crash for any/all kernel versions, but I'm happy to defer to the ZFS devs on which option(s) you choose to pursue.
The latter is the approach I took in the patch proposed here.
Description

Call SetPagePrivate wherever PageUptodate is set and define .release_folio, to cause fallback_migrate_folio to wait for writeback to complete.
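A rough sketch of that idea follows. The helper and structure names here are illustrative only (they are not the exact identifiers touched by this PR), and the handler body is a guess at the minimal behavior needed, since ZFS attaches no real private data:

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Wherever a ZFS page-cache page is marked up to date, also mark it
 * "private" so folio_test_private() returns true during migration. */
static inline void example_mark_page_uptodate(struct page *pp)
{
	SetPagePrivate(pp);
	SetPageUptodate(pp);
}

/* No real private data is attached, so releasing is always allowed; the
 * important part is that having the callback (plus the private flag) makes
 * fallback_migrate_folio() call filemap_release_folio(), which refuses to
 * proceed while the folio is under writeback. */
static bool example_release_folio(struct folio *folio, gfp_t gfp)
{
	folio_clear_private(folio);
	return true;
}

static const struct address_space_operations example_aops = {
	/* ... existing callbacks elided ... */
	.release_folio	= example_release_folio,
};
```

The review comments above also mention registering an invalidate callback alongside this; that is omitted here for brevity.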
How Has This Been Tested?
Tested by user @JKDingwall; results are in the following comments on #15140:
Also, regression-tested by running the ZFS Test Suite (on Ubuntu 23.10, running kernel version 6.5.0-35-generic). No new test failures were observed. See attached files:
Types of changes
Checklist:
Signed-off-by.