zfs#16706 #1
Conversation
JFYI you don't even need PRs; CI will be triggered on pushes to any branch: https://github.com/vpsfreecz/zfs/blob/vpsadminos-master-next/.github/workflows/zfs-qemu.yml#L4
awesome! @gmelikov thx!
this is easier to keep track of tho, with comments and all; easy link to send, and I can ping @TheUbuntuGuy about this here too :)
@gmelikov please, is there any way to kill in-progress tests when I know I'm going to be pushing a new version soon? thx (the goal is to free up the workers for others)
@snajpa I think this one should work (but only if you have appropriate rights for the repo). And there was a patch to automatically stop unneeded runs when a branch is updated: openzfs#16562. If you're worried about openzfs/zfs's runners, the best way is to use a local branch that wasn't used for a PR into the upstream repo.
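For reference, the auto-cancel behavior that patch (openzfs#16562) adds boils down to a GitHub Actions concurrency group; a minimal sketch of the idea, using standard Actions workflow syntax:

```yaml
# cancel a still-running workflow for the same ref when new commits land
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

With this at the top level of a workflow file, pushing a new version of a branch automatically cancels the in-flight run for the previous push, freeing up the runners.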
ping @TheUbuntuGuy: these six patches, with the tip at bacde33, should probably yield a new set of stack traces, as I think I've now dealt with hopefully all of those you've been seeing with my patches so far :D
I just ran this complete patch thrice. The first time, I just got some soft lockups before I aborted:
The second time got a crash with some new traces, including new extra-spooky messages about the scheduler breaking:
The last time confirmed for sure that the scheduler is not having a good time:
@TheUbuntuGuy thank you; you see, I can't reproduce it at the level you're reaching, so... I'd like you to know that I really, really appreciate your help, thank you! I'm going to look at it tonight
No problem. I'm just as invested as you in getting this resolved; I just don't have the low-level expertise in the ZFS codebase to tackle an issue as complex as this. I realized that I should really be decoding the stack traces for you, so here is trace 2 from my last message:
Force-pushed 7f5644e to 0192591
@TheUbuntuGuy that's the only thing I could find; could you please try the PR as it is now? tip at 0192591
I just fixed a bug I made in truncate(2), so the current patchstack tip is at 63c5598
Tested 7c19c30 twice. Got the following:
and
@TheUbuntuGuy is your zfs
@TheUbuntuGuy also, the hex offsets at each function in a stack trace would be better than the line numbers :'(
Sure. I have just
If we run a roughly similar toolchain, I should be arriving at the relevant line numbers from those offsets alone; let's verify :)
oh sh*t, it upgraded again; I used to run a bit older one just a few days back :D ok, this is funny. Maybe the lines will be better after all; if you can share the exact corresponding source, then... I think we can just skip trying to match up the offsets and what they mean.
Our build environment has to target some old distributions, so we are still using
If you need me to change my build environment to match yours, I can do that. It's all automated in a CI pipeline using Docker containers. |
ok, can we please unify then on latest debian/sid with gcc-14? that should be the easiest to reach for both of us, I hope, assuming you can do current sid
It very well could be, yes. Can you try to replicate with preempt=none? I'll try the same.
bang, even with preempt=none; next up, init on alloc/free :D
That was fast. Didn't even have time to recompile :D
you can switch that at runtime with
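For reference, on kernels built with CONFIG_PREEMPT_DYNAMIC the preemption model can be inspected and flipped via debugfs without a rebuild, while the init-on-alloc/free hardening is boot-time only; a sketch assuming the standard sched debugfs path (needs root and a mounted debugfs):

```shell
# show the current model; the active one is shown in parentheses
cat /sys/kernel/debug/sched/preempt
# switch at runtime, no rebuild or reboot needed
echo none > /sys/kernel/debug/sched/preempt

# init_on_alloc/init_on_free have no runtime knob; set them on the
# kernel command line and reboot:
#   init_on_alloc=1 init_on_free=1
```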
bang. ok, dunno whatever it was that finally made the difference... time to hunt down what's going on in ZFS :D
I noticed you have. I went through it (minus the drivers), and I can't see much. Are the security mitigations masking something?
that's from a production kernel, for easier live patching; there it's built in, on a dev node it's OOT
with block cloning off, I get:
with block cloning, direct I/O, and sync all off:
seems to be a race with the dbuf_evict thread, probably; these parts of ZFS are fun
yummy, we've also seen this one from your env:
This is a wild guess, but since 6.6 is solid and 6.7 is the first kernel this breaks with, I wonder if the shrinker locking changes are in some way related...
well yes, those changes are causing the original
This reverts commit b052035.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Atkinson <[email protected]>
Signed-off-by: tstabrawa <[email protected]>
Closes openzfs#16568
Closes openzfs#16723
Avoids using fallback_migrate_folio, which starts unnecessary writeback (leading to BUG in migrate_folio_extra).

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Atkinson <[email protected]>
Signed-off-by: tstabrawa <[email protected]>
Closes openzfs#16568
Closes openzfs#16723
Currently, even though send_reader_thread prefetches spill blocks, do_dump() will not use them and issues its own blocking arc_read. This causes significant performance degradation when sending datasets with lots of spill blocks. For unmodified spill blocks, we now also create a send_range struct for them in send_reader_thread and issue prefetches for them. We piggyback them on the dnode send_range instead of enqueueing them so we don't break the send_range_after check.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Chunwei Chen <[email protected]>
Co-authored-by: david.chen <[email protected]>
Closes openzfs#16701
a10e552 updated abd_free_linear_page() to no longer call abd_update_scatter_stats(). This meant that linear pages that were not attached to Direct I/O requests were not doing waste accounting for the ARC. This led to performance issues due to incorrect ARC accounting, resulting in 100% of CPU time being spent in arc_evict() during prolonged I/O workloads with the ARC. The call to abd_update_scatter_stats() is now made conditionally in abd_free_linear_page() when the ABD is not from a Direct I/O request.

Reviewed-by: Mark Maybee <[email protected]>
Reviewed-by: Tony Nguyen <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Brian Atkinson <[email protected]>
Closes openzfs#16729
When building on musl, we get:

```
In file included from tests/zfs-tests/cmd/getversion.c:22:
/usr/include/sys/fcntl.h:1:2: error: #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h> [-Werror=cpp]
    1 | #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h>

In file included from module/os/linux/zfs/vdev_file.c:36:
/usr/include/sys/fcntl.h:1:2: error: #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h> [-Werror=cpp]
    1 | #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h>
```

Bug: https://bugs.gentoo.org/925235

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Sam James <[email protected]>
Closes openzfs#15925
This commit fixes JSON output for zfs list when user properties are requested with the -o flag. This case needed to be handled specifically, since zfs_prop_to_name does not return the property name for user properties; instead, it is stored in pl->pl_user_prop.

Reviewed-by: Ameer Hamza <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Umer Saleem <[email protected]>
Closes openzfs#16732
FYI, still working on it. It seems like some kind of use-after-free after some kernel intervention the ZFS code doesn't account for, perhaps due to the new folio APIs and their new semantics. Still learning on the go :)
This commit fixes JSON output for zpool list when user properties are requested with the -o flag. This case needed to be handled specifically, since zpool_prop_to_name does not return the property name for user properties; instead, it is stored in pl->pl_user_prop.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Umer Saleem <[email protected]>
Closes openzfs#16734
In zpool_get_user_prop, when called from zpool_expand_proplist and collect_pool, we often have zpool_props in zpool_handle_t equal to NULL. This mostly happens when only one user property is requested using zpool list -o <user_property>. Checking for this case and correctly initializing the zpool_props field in zpool_handle_t fixes this issue.

Interestingly, this issue does not occur if we query any other property, like name or guid, along with a user property with the -o flag, because while accessing properties like guid, zpool_prop_get_int is called, which checks for this case specifically and calls zpool_get_all_props.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Umer Saleem <[email protected]>
Closes openzfs#16734
by protecting against sb->s_shrink eviction on umount. With newer kernels, deactivate_locked_super() calls shrinker_free() and only then the sops->kill_sb callback, resulting in a UAF on umount when zpl_prune_sb() of an in-umount dataset tries to reach the shrinker functions.

Signed-off-by: Pavel Snajdr <[email protected]>
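The ordering problem described above can be sketched in C-style pseudocode (function names are from the kernel and ZFS sources; the guard shape at the end is illustrative only, not the literal patch):

```c
/*
 * Kernels >= 6.7 (after the shrinker rework):
 *
 *   deactivate_locked_super(sb)
 *       shrinker_free(sb->s_shrink);   // shrinker destroyed first...
 *       sops->kill_sb(sb);             // ...then filesystem teardown runs
 *
 * Meanwhile the ARC prune path can still race in:
 *
 *   zpl_prune_sb(nr_to_scan, sb)
 *       // previously sb->s_shrink was guaranteed alive here; with the
 *       // new ordering this dereference is a use-after-free while the
 *       // dataset is being unmounted
 *       sb->s_shrink->scan_objects(sb->s_shrink, &sc);
 *
 * Illustrative fix shape: refuse to prune once umount has begun, e.g.
 * by taking a superblock reference only if it is still active, and
 * bailing out otherwise.
 */
```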
Signed-off-by: Pavel Snajdr <[email protected]>
As per Documentation/filesystems/porting.rst:

> **strongly recommended** take the RCU-delayed parts of ->destroy_inode() into a new method - ->free_inode(). If ->destroy_inode() becomes empty - all the better, just get rid of it.

Signed-off-by: Pavel Snajdr <[email protected]>
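A minimal sketch of the split that porting.rst asks for (filesystem and cache names here are generic placeholders, not the actual ZFS identifiers):

```c
/*
 * Before: the whole teardown lived in ->destroy_inode(), with the
 * final kmem_cache_free() deferred via call_rcu() by hand.
 *
 * After: the RCU-delayed part moves into ->free_inode(), which the
 * VFS itself calls after an RCU grace period; if ->destroy_inode()
 * ends up empty, it can be dropped entirely.
 */
static void myfs_free_inode(struct inode *ip)
{
	/* only the RCU-delayed freeing belongs here */
	kmem_cache_free(myfs_inode_cache, MYFS_I(ip));
}

static const struct super_operations myfs_sops = {
	.free_inode	= myfs_free_inode,
	/* no ->destroy_inode needed once it is empty */
};
```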
it's fun, I think I can exclude folios too; this must have been in there for a long time. I've tried ifdef'ing out or otherwise disarming all the new stuff I'd tend to implicate, but so far nothing :-D
Motivation and Context
openzfs#16706
Description
How Has This Been Tested?
Types of changes
Checklist:
Signed-off-by