[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051 #15275

Closed
65a opened this issue Sep 15, 2023 · 39 comments
Labels
Component: Encryption ("native encryption" feature), Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments


65a commented Sep 15, 2023

System information

Type: Xeon 6414u, 256GB ECC DDR5, DSA/IAA/Non-QAT, AVX512
Distribution Name: Arch
Distribution Version: latest
Kernel Version: 6.5.2
Architecture: amd64
OpenZFS Version: 2.2.99+r83+g12ce45f260

Describe the problem you're observing

The machine panicked when accessing certain datasets. First observed while reading a virtiofs share into Windows (blech) through KVM, backed by a local encrypted ZFS dataset. The PANIC was followed by several oopses and hung-task timeouts. The machine then stopped booting, and the pool cannot be imported on other machines.

Describe how to reproduce the problem

On this machine, any high-queue-depth activity (e.g., compiling the latest ZFS from HEAD) now triggers the issue. Unfortunately, the machine now hangs at boot after complaining that one file was corrupt. I am going to try to coax this board into allowing an alternate boot medium (an unrelated problem), and I can probably then recover better logs for this issue, which I will attach.

Include any warning/errors/backtraces from the system logs

PANIC: unencrypted in encrypted

...sometime later...
kernel hung task warning for autotrim
(at this point, no FS operations complete and the machine cannot be shut down normally; power was removed to shut it down)

65a added the Type: Defect (Incorrect behavior, e.g. crash, hang) label on Sep 15, 2023

65a commented Sep 15, 2023

I split the mirror and put the NVMe drive into a USB carrier to access it from a laptop. I cannot import the pool:

[ 9207.178096] BUG: unable to handle page fault for address: ffffa05fb8254014
[ 9207.178105] #PF: supervisor read access in kernel mode
[ 9207.178111] #PF: error_code(0x0000) - not-present page
[ 9207.178113] PGD 101000067 P4D 101000067 PUD 1011eb067 PMD 1bae2d067 PTE 0
[ 9207.178121] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 9207.178124] CPU: 0 PID: 45485 Comm: dmu_objset_find Tainted: P           OE      6.5.2-arch1-1 #1 d2912f929551bc8e9b95af790b8285a77c25fa29
[ 9207.178128] Hardware name: redacted laptop with AMD processor
[ 9207.178130] RIP: 0010:zil_claim_log_record+0x47/0xd0 [zfs]
[ 9207.178306] Code: 83 f8 09 74 6c 48 83 f8 18 75 55 48 85 d2 74 50 48 83 7e 40 00 48 8b 6f 38 74 45 45 31 e4 31 d2 48 89 d0 48 c1 e0 07 48 01 d8 <f6> 40 7c 80 75 22 48 83 78 48 00 75 07 48 83 78 50 00 74 14 48 c1
[ 9207.178309] RSP: 0018:ffffa05fae9bf9c8 EFLAGS: 00010282
[ 9207.178312] RAX: ffffa05fb8253f98 RBX: ffffa05fb81d0298 RCX: 0000000000000001
[ 9207.178314] RDX: 000000000000107a RSI: 0000000000000100 RDI: ffff8fd5be562400
[ 9207.178316] RBP: ffff8fd70bf50000 R08: 0000000000000000 R09: ffff8fd5be562470
[ 9207.178317] R10: 0000308149601e40 R11: 0000000000000001 R12: 000000000000107a
[ 9207.178318] R13: ffff8fd545459500 R14: ffffffffffffffff R15: 0000000000000980
[ 9207.178320] FS:  0000000000000000(0000) GS:ffff8fde36400000(0000) knlGS:0000000000000000
[ 9207.178321] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9207.178323] CR2: ffffa05fb8254014 CR3: 00000001fdc28000 CR4: 0000000000750ef0
[ 9207.178325] PKRU: 55555554
[ 9207.178326] Call Trace:
[ 9207.178330]  <TASK>
[ 9207.178332]  ? __die+0x23/0x70
[ 9207.178342]  ? page_fault_oops+0x171/0x4e0
[ 9207.178349]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.178358]  ? exc_page_fault+0x175/0x180
[ 9207.178365]  ? asm_exc_page_fault+0x26/0x30
[ 9207.178370]  ? zil_claim_log_record+0x47/0xd0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.178482]  ? zil_claim_log_record+0x6f/0xd0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.178584]  zil_parse+0x58b/0x9d0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.178707]  ? __pfx_zil_claim_log_record+0x10/0x10 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.178792]  ? __pfx_zil_claim_log_block+0x10/0x10 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.178891]  ? dnode_special_open+0x43/0x80 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179022]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.179025]  ? zrl_exit+0x4c/0x60 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179111]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.179114]  ? dmu_objset_open_impl+0x689/0xa00 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179212]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.179215]  ? rrw_exit+0x69/0x150 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179318]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.179321]  ? dmu_objset_from_ds+0x85/0x170 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179417]  ? srso_alias_return_thunk+0x5/0x7f
[ 9207.179419]  ? dmu_objset_own_obj+0xab/0xd0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179517]  zil_claim+0x11d/0x2a0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179611]  dmu_objset_find_dp_impl+0x15c/0x3e0 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179710]  dmu_objset_find_dp_cb+0x29/0x40 [zfs 21eb75a7805469398ecac59be8042f533e714bd5]
[ 9207.179841]  taskq_thread+0x2c3/0x4e0 [spl 9296bf079e094cd89d3d1d27445cc013acf8e320]
[ 9207.179856]  ? __pfx_default_wake_function+0x10/0x10
[ 9207.179864]  ? __pfx_taskq_thread+0x10/0x10 [spl 9296bf079e094cd89d3d1d27445cc013acf8e320]
[ 9207.179871]  kthread+0xe8/0x120
[ 9207.179880]  ? __pfx_kthread+0x10/0x10
[ 9207.179883]  ret_from_fork+0x34/0x50
[ 9207.179889]  ? __pfx_kthread+0x10/0x10
[ 9207.179891]  ret_from_fork_asm+0x1b/0x30
[ 9207.179902]  </TASK>
[ 9207.179903] Modules linked in: nft_reject_ipv4 nf_reject_ipv4 nft_reject nft_ct nft_masq nft_chain_nat nf_tables nfnetlink nf_nat_h323 nf_conntrack_h323 nf_nat_pptp nf_conntrack_pptp nf_nat_tftp nf_conntrack_tftp nf_nat_sip nf_conntrack_sip nf_nat_irc nf_conntrack_irc nf_nat_ftp nf_conntrack_ftp nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c crc32c_generic r8153_ecm cdc_ether usbnet r8152 mii uas usb_storage rfcomm snd_seq_dummy snd_hrtimer snd_seq snd_seq_device ccm snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi cmac algif_hash algif_skcipher af_alg wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel bnep snd_soc_dmic snd_soc_acp6x_mach snd_acp6x_pdm_dma snd_sof_amd_rembrandt snd_sof_amd_renoir snd_sof_amd_acp snd_sof_pci snd_sof_xtensa_dsp btusb btrtl uvcvideo snd_sof btbcm btintel videobuf2_vmalloc snd_sof_utils uvc btmtk videobuf2_memops intel_rapl_msr snd_soc_core intel_rapl_common mt7921e
[ 9207.179992]  videobuf2_v4l2 bluetooth snd_compress edac_mce_amd mt7921_common ac97_bus videodev snd_pcm_dmaengine mt76_connac_lib snd_hda_intel snd_pci_ps kvm_amd videobuf2_common snd_intel_dspcfg ecdh_generic mt76 snd_rpl_pci_acp6x vfat mousedev joydev crc16 mc snd_intel_sdw_acpi hid_multitouch snd_acp_pci fat kvm snd_hda_codec irqbypass snd_pci_acp6x crct10dif_pclmul mac80211 snd_pci_acp5x crc32_pclmul snd_hda_core crc32c_intel polyval_clmulni snd_hwdep polyval_generic gf128mul snd_pcm libarc4 ghash_clmulni_intel snd_timer snd_rn_pci_acp3x sha512_ssse3 ucsi_acpi aesni_intel snd_acp_config crypto_simd snd_soc_acpi sp5100_tco typec_ucsi snd asus_nb_wmi cryptd cfg80211 rapl i8042 typec wmi_bmof pcspkr k10temp soundcore ccp i2c_piix4 snd_pci_acp3x roles i2c_hid_acpi amd_pmc amd_pmf i2c_hid serio acpi_tad mac_hid pkcs8_key_parser dm_multipath ec_sys auth_rpcgss crypto_user loop fuse dm_mod sunrpc ip_tables x_tables hid_asus asus_wmi ledtrig_audio sparse_keymap platform_profile rfkill usbhid zfs(POE) spl(OE) sdhci_pci
[ 9207.180122]  nvme cqhci sdhci nvme_core xhci_pci mmc_core xhci_pci_renesas nvme_common amdgpu i2c_algo_bit drm_ttm_helper ttm video wmi drm_suballoc_helper amdxcp drm_buddy gpu_sched drm_display_helper cec
[ 9207.180155] CR2: ffffa05fb8254014
[ 9207.180158] ---[ end trace 0000000000000000 ]---
[ 9207.180159] RIP: 0010:zil_claim_log_record+0x47/0xd0 [zfs]
[ 9207.180268] Code: 83 f8 09 74 6c 48 83 f8 18 75 55 48 85 d2 74 50 48 83 7e 40 00 48 8b 6f 38 74 45 45 31 e4 31 d2 48 89 d0 48 c1 e0 07 48 01 d8 <f6> 40 7c 80 75 22 48 83 78 48 00 75 07 48 83 78 50 00 74 14 48 c1
[ 9207.180270] RSP: 0018:ffffa05fae9bf9c8 EFLAGS: 00010282
[ 9207.180272] RAX: ffffa05fb8253f98 RBX: ffffa05fb81d0298 RCX: 0000000000000001
[ 9207.180273] RDX: 000000000000107a RSI: 0000000000000100 RDI: ffff8fd5be562400
[ 9207.180275] RBP: ffff8fd70bf50000 R08: 0000000000000000 R09: ffff8fd5be562470
[ 9207.180276] R10: 0000308149601e40 R11: 0000000000000001 R12: 000000000000107a
[ 9207.180277] R13: ffff8fd545459500 R14: ffffffffffffffff R15: 0000000000000980
[ 9207.180279] FS:  0000000000000000(0000) GS:ffff8fde36400000(0000) knlGS:0000000000000000
[ 9207.180280] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9207.180282] CR2: ffffa05fb8254014 CR3: 00000001fdc28000 CR4: 0000000000750ef0
[ 9207.180283] PKRU: 55555554
[ 9207.180285] note: dmu_objset_find[45485] exited with irqs disabled

It has hung here, apparently indefinitely. I'm hoping I didn't lose the pool. I'm going to update this laptop to ZFS HEAD just in case that helps. zpool import -n poolname is now completely hung on the laptop.


65a commented Sep 15, 2023

After updating ZFS to zfs-2.2.99-87_g8af8d2abb1, I get the exact same stack trace for zpool import -N poolname and an indefinite hang. The laptop now needs to be hard powered off after interacting with the pool. I am afraid I may have lost my first ZFS pool in 15 years. Please advise!

65a changed the title from "PANIC: unencrypted in encrypted!" to "[6.5.x] PANIC: unencrypted in encrypted! [pool is no longer accessible]" on Sep 15, 2023
65a changed the title from "[6.5.x] PANIC: unencrypted in encrypted! [pool is no longer accessible]" to "[6.5.x] PANIC: unencrypted block in encrypted object set! [pool is no longer accessible]" on Sep 15, 2023

65a commented Sep 15, 2023

I recovered the PANIC error message by grepping the source code; the initial error was in dmu_send. Now the pool cannot be imported at all, failing with the stack trace above. zpool import -NFn poolname also hangs, so apparently a forced import offers little hope either.


65a commented Sep 15, 2023

EDIT: don't run these commands if you found this bug; see the commands further down, which give a clean read-only import. None of the -X and -F options are required, and they could damage your data and leave it unrecoverable.

zdb shows history, but zpool import -Ffmn -X -T 4378154 -o readonly=on -o ro poolname shows there is some ZIL block pointer damage:
[ 106.448444] WARNING: ZFS read log block error 52, dataset mos, seq 0x47

I don't have a dataset called mos, for what it is worth. This command is hanging but showing disk activity; I'll wait and see if anything good happens.


RodoMa92 commented Sep 15, 2023

Yeah, encryption is kinda broken on kernel >= 6.3.1, see #15140. Ironically, that was also while using Linux to host a Windows VM instance. I've also bisected the first commit in the kernel itself that causes my issue, but as of today no one has pitched in and I'm still stuck on 6.1 LTS. I wanted to see whether anything changed with the latest commit, but I'm not going to risk my encrypted boot pool now.

I'm sorry about what happened to your pool, but these issues with encryption should be mentioned in the README or properly fixed now, especially since 6.5 kernel compatibility is being claimed, which is quite risky to claim if people just using the FS can kill the datasets on their systems.

I'm evaluating switching back to a mainline-supported FS at this point; at least filesystem bugs there are properly addressed rather than left to linger for an unknown amount of time.


65a commented Sep 15, 2023

Those seem quite different: I am running 2.2.99-git, which is not 2.1 and is 6.5-compatible (re #15258), and I have a very different stack trace/error and cannot repro the issue in #15140 on a machine running similar software. At this point I am mainly looking for advice on how to get my data back. I suppose I should just try the previous command with various txgs until I find one that will import?

65a changed the title from "[6.5.x] PANIC: unencrypted block in encrypted object set! [pool is no longer accessible]" to "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set! [pool is no longer importable, data loss]" on Sep 15, 2023

65a commented Sep 15, 2023

I managed to get the pool to import read-only using some combination of the flags above plus disabling ZIL replay, and eventually it imported OK. I can't zfs send to my backup pool, which is older, but I managed to get a backup using tar, so at least I have my data back. Still no idea what happened here, or why this system's pool was corrupted.

65a changed the title from "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set! [pool is no longer importable, data loss]" to "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set!" on Sep 15, 2023
rincebrain (Contributor) commented:

If it no longer breaks wildly on a new pool after updating to latest git, I might wildly speculate interactions with #14989.


65a commented Sep 15, 2023

That's possible; the machine has been running -git for some time, but it was updated right around then. My recovery efforts just reproduced the same error on a Broadwell-EX E5, though that might be due to pool corruption. I also think this still means I may have lost data, but I am experimenting on only one side of the mirror to find a fix or get a clean backup. I do have the original error message now, or something very like it (with tar getting sad right after):

[14617.969985] PANIC: unencrypted block in encrypted object set 2051
[14617.970040] Showing stack for process 5137
[14617.970043] CPU: 27 PID: 5137 Comm: tar Tainted: P           OE      6.5.3-arch1-1 #1 ed5b3b894d0aeb37298a77837232ca9b353cc27d
[14617.970048] Hardware name: redacted Broadwell-EX workstation
[14617.970049] Call Trace:
[14617.970052]  <TASK>
[14617.970054]  dump_stack_lvl+0x47/0x60
[14617.970065]  vcmn_err+0xdf/0x120 [spl bc4268844255e66b3f19e31055e910cadd26bf4d]
[14617.970091]  zfs_panic_recover+0x79/0xa0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.970387]  dbuf_read_impl.constprop.0+0x844/0x890 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.970624]  dbuf_read+0xf5/0x5f0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.970858]  dmu_buf_hold_array_by_dnode+0x11d/0x690 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.971099]  dmu_read_uio_dnode+0x60/0x150 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.971339]  dmu_read_uio_dbuf+0x46/0x60 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.971577]  zfs_read+0x139/0x3e0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.971812]  zpl_iter_read+0xe8/0x190 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14617.972027]  vfs_read+0x201/0x350
[14617.972034]  ksys_read+0x6f/0xf0
[14617.972037]  do_syscall_64+0x60/0x90
[14617.972043]  ? syscall_exit_to_user_mode+0x2b/0x40
[14617.972049]  ? do_syscall_64+0x6c/0x90
[14617.972052]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[14617.972058] RIP: 0033:0x79f698f03b61
[14617.972093] Code: ff ff eb c3 67 e8 2f c9 01 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 f3 0f 1e fa 80 3d 05 28 14 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 c3 66 0f 1f 44 00 00 48 83 ec 28 48 89 54
[14617.972095] RSP: 002b:00007ffce37216f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[14617.972099] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000079f698f03b61
[14617.972100] RDX: 0000000000000e00 RSI: 0000629224b44a00 RDI: 000000000000000e
[14617.972102] RBP: 0000000000000e00 R08: 000000000003ffff R09: 0000000000000007
[14617.972103] R10: 0000629222ca226a R11: 0000000000000246 R12: 0000629224b44a00
[14617.972105] R13: 000000000000000e R14: 0000000000000e00 R15: 0000629224b44a00
[14617.972108]  </TASK>
[14745.587567] INFO: task tar:5137 blocked for more than 122 seconds.
[14745.587603]       Tainted: P           OE      6.5.3-arch1-1 #1
[14745.587624] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[14745.587647] task:tar             state:D stack:0     pid:5137  ppid:4983   flags:0x00004002
[14745.587656] Call Trace:
[14745.587658]  <TASK>
[14745.587663]  __schedule+0x3e8/0x1410
[14745.587674]  ? asm_sysvec_apic_timer_interrupt+0x1a/0x20
[14745.587685]  schedule+0x5e/0xd0
[14745.587693]  vcmn_err+0xec/0x120 [spl bc4268844255e66b3f19e31055e910cadd26bf4d]
[14745.587730]  zfs_panic_recover+0x79/0xa0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.588207]  dbuf_read_impl.constprop.0+0x844/0x890 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.588652]  dbuf_read+0xf5/0x5f0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.589091]  dmu_buf_hold_array_by_dnode+0x11d/0x690 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.589542]  dmu_read_uio_dnode+0x60/0x150 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.590011]  dmu_read_uio_dbuf+0x46/0x60 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.590498]  zfs_read+0x139/0x3e0 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.590997]  zpl_iter_read+0xe8/0x190 [zfs 737837ff0cfae19abb4be5a38eb355efb5092f3e]
[14745.591455]  vfs_read+0x201/0x350
[14745.591464]  ksys_read+0x6f/0xf0
[14745.591471]  do_syscall_64+0x60/0x90
[14745.591479]  ? syscall_exit_to_user_mode+0x2b/0x40
[14745.591488]  ? do_syscall_64+0x6c/0x90
[14745.591494]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[14745.591501] RIP: 0033:0x79f698f03b61
[14745.591527] RSP: 002b:00007ffce37216f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[14745.591532] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000079f698f03b61
[14745.591535] RDX: 0000000000000e00 RSI: 0000629224b44a00 RDI: 000000000000000e
[14745.591538] RBP: 0000000000000e00 R08: 000000000003ffff R09: 0000000000000007
[14745.591541] R10: 0000629222ca226a R11: 0000000000000246 R12: 0000629224b44a00
[14745.591544] R13: 000000000000000e R14: 0000000000000e00 R15: 0000629224b44a00
[14745.591549]  </TASK>

65a changed the title from "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set!" to "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051 [pool lost]" on Sep 15, 2023

65a commented Sep 16, 2023

As for recovery, I can import the pool read-only with ZIL replay disabled, but when it hits a specific file in objset 2051 (naturally, my home directory), it hangs. The code in question is in dbuf.c; it seems to be lock-sensitive code, and I'm not sure how zfs_panic_recover interacts with that. My hypothesis is that it interacts poorly, so I removed the panic and am trying to rsync my data out again. With the zfs_panic_recover call commented out, I was able to interact with the suspected file (and received an I/O error, as expected) without triggering a panic or hang, which gives me some hope the hypothesis is correct. This would also have indirectly triggered the original issue.

My current theory on events is as follows:

  1. A VM hypervisor switch triggers something like Page faults on Sapphire Rapids #14989
  2. This results in a corrupted file write somehow, whose block pointer wasn't completely written or is corrupt (this part is still pretty mysterious to me, but I guess with corrupted registers anything is possible?)
  3. I interacted with this file and triggered the panic/deadlock in dbuf.c, hanging all IO and fscking up interrupts
  4. Forcing a shutdown corrupted the ZIL


65a commented Sep 16, 2023

After trying some hacky command-line flags, the ZIL was still too corrupt to touch: zil_parse was hitting a bad entry that caused page faults. I made a local patch to do two things: avoid the zfs_panic_recover deadlock (which seems like a real bug?) and tell the ZIL not to claim log records at all. This may be a terrible idea, but it did allow the pool to import without errors:

diff --git a/module/zfs/dbuf.c b/module/zfs/dbuf.c
index f2831a0e8..c6574b7cc 100644
--- a/module/zfs/dbuf.c
+++ b/module/zfs/dbuf.c
@@ -1618,9 +1618,6 @@ dbuf_read_impl(dmu_buf_impl_t *db, zio_t *zio, uint32_t flags,
 	 * If this is not true it indicates tampering and we report an error.
 	 */
 	if (db->db_objset->os_encrypted && !BP_USES_CRYPT(bpp)) {
-		spa_log_error(db->db_objset->os_spa, &zb, &bpp->blk_birth);
-		zfs_panic_recover("unencrypted block in encrypted "
-		    "object set %llu", dmu_objset_id(db->db_objset));
 		err = SET_ERROR(EIO);
 		goto early_unlock;
 	}
diff --git a/module/zfs/zil.c b/module/zfs/zil.c
index b30676b42..dfdf80276 100644
--- a/module/zfs/zil.c
+++ b/module/zfs/zil.c
@@ -1169,6 +1169,7 @@ zil_claim(dsl_pool_t *dp, dsl_dataset_t *ds, void *txarg)
 	 * or destroy beyond the last block we successfully claimed.
 	 */
 	ASSERT3U(zh->zh_claim_txg, <=, first_txg);
+	/*
 	if (zh->zh_claim_txg == 0 && !BP_IS_HOLE(&zh->zh_log)) {
 		(void) zil_parse(zilog, zil_claim_log_block,
 		    zil_claim_log_record, tx, first_txg, B_FALSE);
@@ -1182,8 +1183,8 @@ zil_claim(dsl_pool_t *dp, dsl_dataset_t *ds, void *txarg)
 			os->os_next_write_raw[tx->tx_txg & TXG_MASK] = B_TRUE;
 		dsl_dataset_dirty(dmu_objset_ds(os), tx);
 	}
-
 	ASSERT3U(first_txg, ==, (spa_last_synced_txg(zilog->zl_spa) + 1));
+	*/
 	dmu_objset_disown(os, B_FALSE, FTAG);
 	return (0);
 }

rincebrain (Contributor) commented:

I would suspect the unencrypted behavior is probably the same thing that caused a bug report a while ago about managing to sneak an embedded_data block onto an encrypted dataset, which you can't...do.


65a commented Sep 16, 2023

I know exactly which file was the problem now, and it had been moved into a directory shared with Windows 10 over virtio-fs (which is terrible and often broken/weird). The VM hung, and dmesg had page-fault-related errors, which I'd seen before, as well as the PANIC: unencrypted block error; I don't remember the precise order. Files touched after that were also corrupt, including a last-minute attempt to rebuild the ZFS DKMS module.

@rincebrain since I have the pool imported now and am running a scrub (I have a good rsync of my home directory, and this is still only one half of the mirror, so risks can be taken), does it make sense to checkpoint it and rewind it to try to clean the ZIL? Or is it likely the bad ZIL log record will be fine after an import/export cycle and a scrub? I'm sort of tempted to see if I can actually save this pool now (though for safety it would be retired after a backup).

65a changed the title from "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051 [pool lost]" to "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051 and then deadlock in dbuf.c via zfs_panic_recover" on Sep 16, 2023
rincebrain (Contributor) commented:

I wouldn't expect scrub to fix anything in the ZIL; anything that could be corrected there would be corrected during ZIL replay, not by a scrub.

65a changed the title from "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051 and then deadlock in dbuf.c via zfs_panic_recover" to "[6.5.x, 2.2.99git] PANIC: unencrypted block in encrypted object set 2051" on Sep 16, 2023

65a commented Sep 16, 2023

@rincebrain since I have read-only access working, I was able to determine that this issue (or at least the version at the time of the pool crash) was with version 2.2.99+r83+g12ce45f260, which is commit 12ce45f and unfortunately already included your SPR errata patch (though SPR9 also looks interesting?). I don't think the QAT change is actually relevant, as this CPU does not have QAT. I also realized my hacky patch does basically the same thing as the zfs_recover=1 tunable, so I guess that would have been easier. I don't think I can get zfs send or read-write access working on this pool; the space map is hosed.
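
For reference, setting that tunable looks roughly like this; a minimal sketch assuming the stock OpenZFS zfs_recover module parameter (paths and file names may differ depending on how your distribution packages things):

# At runtime, once the zfs module is loaded:
echo 1 > /sys/module/zfs/parameters/zfs_recover
# Persistently, e.g. for a boot pool, so it takes effect at module load
# (regenerate the initramfs afterwards if the module is loaded from there):
echo "options zfs zfs_recover=1" > /etc/modprobe.d/zfs-recover.conf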

rincebrain (Contributor) commented:

You can send from a read-only dataset directly, if that would be convenient.

You can't use a lot of flags, but you can do it.
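
Roughly like this (pool, dataset, and snapshot names here are placeholders; on a read-only import you can only send snapshots that already exist, since new ones can't be created):

# import read-only, then send an existing snapshot to a file on other storage
zpool import -o readonly=on tank
zfs send tank/home@last-good > /mnt/usb/home-last-good.zfs
# or stream it straight into a healthy, writable pool
zfs send tank/home@last-good | zfs receive backuppool/home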


65a commented Sep 17, 2023

With no arguments I get a vague 'invalid argument' error, so I defaulted to rsync. I did get a full backup, though, and the data seems fine; thankfully this was a workstation and nothing critical. It would be great if #3111 existed, since I'm fairly certain the pool could have survived. I briefly debated trying to hack the space map code into working for this particular corruption, but it's easier to rebuild at this point.


heppu commented Sep 18, 2023

I now have the same problem, after sharing one dataset to a Windows 10 machine as an SMB share. I was also running a git version of ZFS.


heppu commented Sep 18, 2023

@65a Could you describe in a bit more detail how you were able to fix the problem or mount the pool in read-only mode?


65a commented Sep 18, 2023

If you have the same problem, this is a basic summary of what worked for me (see the consolidated sketch after these steps). If it's your boot pool, you'll need to use a different machine, a live CD, or something similar.

  1. Before attaching the pool's disks, set the zfs_recover tunable: echo 1 > /sys/module/zfs/parameters/zfs_recover
  2. Set the zil_replay_disable tunable: echo 1 > /sys/module/zfs/parameters/zil_replay_disable
  3. Attach the pool's disks to the recovery host
  4. zpool import -o readonly=on <poolname>

If step 4 doesn't work, you might want to create a new issue, since that probably indicates something different is happening.
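
Put together as one sketch (poolname is a placeholder; the tunable paths assume a standard OpenZFS module):

# steps 1-2: before attaching the pool's disks, turn the panic into an EIO and skip ZIL replay
echo 1 > /sys/module/zfs/parameters/zfs_recover
echo 1 > /sys/module/zfs/parameters/zil_replay_disable
# step 3: attach the disks, then step 4: import read-only so nothing is written to the damaged pool
zpool import -o readonly=on poolname
# then copy the data off (rsync, tar, etc.) before doing anything else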


heppu commented Sep 18, 2023

Thanks @65a! I managed to get a backup with rsync from a live CD by following those instructions.


65a commented Sep 19, 2023

#15293 seems like a potential root cause here? The VM or Windows does something that tickles the use-after-free, which corrupts the block pointer for a file; then, because the encrypted bit is not set, it triggers the zfs_panic_recover in dbuf.c, which by default (without zfs_recover=1) causes complete I/O death, forcing the user to hard reboot. I'm not sure where my ZIL corruption factored in; I was able to boot several times with a corrupted dataset before the pool hung on import.


65a commented Sep 19, 2023

@heppu any stack traces? I haven't tried to repro this yet, but once I have more time I may be able to.


heppu commented Sep 19, 2023

Sorry, I already destroyed the pool and re-created it. However, it was exactly the same page fault, with RIP: 0010:zil_claim_log_record+0x47/0xd0 [zfs], and it happened on both Arch and Void Linux.


65a commented Sep 19, 2023

Interesting, so you also saw ZIL corruption? I wasn't sure whether that was a secondary effect in my case or not. Did you also get the "unencrypted block in encrypted object set" error at some point?


heppu commented Sep 19, 2023

Yes, it was ZIL corruption caused by my SMB share as far as I understood it, but I didn't notice the "unencrypted block in encrypted object set" error. This was on Void Linux with zfs-2.2.0-rc3 on an 8x1TB NVMe encrypted raidz pool.

rincebrain added the Component: Encryption ("native encryption" feature) label on Sep 19, 2023

heppu commented Oct 25, 2023

So I lost another pool to this, on Void Linux with the freshly released ZFS 2.2.
Here's the dmesg output:

stacktrace
[  659.271649] BUG: unable to handle page fault for address: ffffa0553197e00c
[  659.271880] #PF: supervisor read access in kernel mode
[  659.271999] #PF: error_code(0x0000) - not-present page
[  659.272116] PGD 100000067 P4D 100000067 PUD 1bccd7067 PMD 2c4e46067 PTE 0
[  659.272240] Oops: 0000 [#1] PREEMPT SMP NOPTI
[  659.272358] CPU: 94 PID: 6198 Comm: dmu_objset_find Tainted: P OE 6.5.8_1 #1
[  659.272480] Hardware name: ASUS System Product Name/Pro WS WRX80E-SAGE SE WIFI, BIOS 1106 02/10/2023
[  659.272604] RIP: 0010:zil_claim_log_record+0x47/0xd0 [zfs]
[  659.272877] Code: 83 f8 09 74 6c 48 83 f8 18 75 55 48 85 d2 74 50 48 83 7e 40 00 48 8b 6f 38 74 45 45 31 e4 31 d2 48 89 d0 48 c1 e0 07 48 01 d8 40 7c 80 75 22 48 83 78 48 00 75 07 48 83 78 50 00 74 14 48 c1
[  659.273136] RSP: 0018:ffffa054871939d0 EFLAGS: 00010286
[  659.273268] RAX: ffffa0553197df90 RBX: ffffa0553189bf10 RCX: 0000000000000000
[  659.273403] RDX: 0000000000001c41 RSI: ffffffff823164a5 RDI: ffffffff822f5c44
[  659.273540] RBP: ffff9166100e0000 R08: ffff91678d639540 R09: 9ae16a3b2f90404f
[  659.273679] R10: ffff9165c44dbd20 R11: 00000000f50b1000 R12: 0000000000001c41
[  659.273819] R13: ffff916789aec540 R14: ffffffffffffffff R15: 0000000000014e15
[  659.273961] FS:  0000000000000000(0000) GS:ffff91e2fe580000(0000) knlGS:0000000000000000
[  659.274105] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  659.274248] CR2: ffffa0553197e00c CR3: 0000004178a20000 CR4: 0000000000350ee0
[  659.274393] Call Trace:
[  659.274537]  <TASK>
[  659.274678]  ? __die+0x23/0x70
[  659.274823]  ? page_fault_oops+0x159/0x460
[  659.274970]  ? srso_return_thunk+0x5/0x10
[  659.275116]  ? fixup_exception+0x26/0x310
[  659.275262]  ? srso_return_thunk+0x5/0x10
[  659.275409]  ? exc_page_fault+0xd1/0x170
[  659.275557]  ? asm_exc_page_fault+0x26/0x30
[  659.275708]  ? zil_claim_log_record+0x47/0xd0 [zfs]
[  659.276000]  zil_parse+0x54a/0x980 [zfs]
[  659.276288]  ? __pfx_zil_claim_log_record+0x10/0x10 [zfs]
[  659.276577]  ? __pfx_zil_claim_log_block+0x10/0x10 [zfs]
[  659.276874]  ? _raw_spin_lock+0x17/0x40
[  659.277025]  ? srso_return_thunk+0x5/0x10
[  659.277176]  ? zrl_exit+0x4c/0x60 [zfs]
[  659.277464]  ? srso_return_thunk+0x5/0x10
[  659.277616]  ? srso_return_thunk+0x5/0x10
[  659.277765]  ? dmu_objset_open_impl+0x5cf/0x9c0 [zfs]
[  659.278069]  ? srso_return_thunk+0x5/0x10
[  659.278220]  ? preempt_count_add+0x6e/0xa0
[  659.278370]  ? srso_return_thunk+0x5/0x10
[  659.278520]  ? _raw_spin_lock+0x17/0x40
[  659.278669]  ? srso_return_thunk+0x5/0x10
[  659.278819]  ? srso_return_thunk+0x5/0x10
[  659.278967]  ? preempt_count_add+0x6e/0xa0
[  659.279114]  ? srso_return_thunk+0x5/0x10
[  659.279261]  ? srso_return_thunk+0x5/0x10
[  659.279405]  ? srso_return_thunk+0x5/0x10
[  659.279548]  ? dmu_objset_from_ds+0x85/0x160 [zfs]
[  659.279841]  ? srso_return_thunk+0x5/0x10
[  659.279982]  ? dmu_objset_own_obj+0xa2/0xd0 [zfs]
[  659.280275]  zil_claim+0x115/0x290 [zfs]
[  659.280558]  dmu_objset_find_dp_impl+0x145/0x3e0 [zfs]
[  659.280852]  dmu_objset_find_dp_cb+0x29/0x40 [zfs]
[  659.281140]  taskq_thread+0x2c1/0x4e0 [spl]
[  659.281287]  ? __pfx_default_wake_function+0x10/0x10
[  659.281427]  ? __pfx_taskq_thread+0x10/0x10 [spl]
[  659.281561]  kthread+0xf7/0x130
[  659.281687]  ? __pfx_kthread+0x10/0x10
[  659.281811]  ret_from_fork+0x34/0x50
[  659.281934]  ? __pfx_kthread+0x10/0x10
[  659.282055]  ret_from_fork_asm+0x1b/0x30
[  659.282181]  </TASK>
[  659.282297] Modules linked in: tcp_diag inet_diag 8021q garp mrp stp llc ipmi_ssif intel_rapl_msr iwlmvm intel_rapl_common amd64_edac snd_usb_audio edac_mce_amd mac80211 snd_usbmidi_lib kvm_amd snd_rawmidi ch341 mc libarc4 usbserial joydev input_leds kvm raid1 eeepc_wmi asus_wmi battery irqbypass md_mod ledtrig_audio sparse_keymap iwlwifi platform_profile snd_hda_intel video rapl snd_intel_dspcfg wmi_bmof snd_intel_sdw_acpi snd_hda_codec acpi_cpufreq btusb pcspkr cfg80211 btrtl snd_hda_core btbcm btintel acpi_ipmi tpm_crb snd_hwdep btmtk snd_pcm tpm_tis ipmi_si i2c_designware_platform tpm_tis_core i2c_designware_core evdev mac_hid tpm i2c_piix4 k10temp ipmi_devintf ipmi_msghandler tiny_power_button sg snd_seq snd_seq_device snd_timer snd soundcore vhost_vsock vmw_vsock_virtio_transport_common vsock vhost_net vhost vhost_iotlb tap hci_vhci bluetooth ecdh_generic rfkill ecc vfio_iommu_type1 vfio iommufd uhid uinput userio ppp_generic slhc tun nvram cuse fuse ext4 crc16 mbcache jbd2 dm_snapshot dm_bufio isofs cdrom
[  659.282400]  squashfs uas usb_storage hid_multitouch hid_generic crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic gf128mul usbhid ghash_clmulni_intel sha512_ssse3 ast aesni_intel sd_mod hid i2c_algo_bit nvme_tcp crypto_simd cryptd nvme_fabrics zfs(POE) xhci_pci drm_shmem_helper ahci xhci_pci_renesas drm_kms_helper libahci xhci_hcd mxm_wmi drm ixgbe ccp mpt3sas mlx4_core libata usbcore spl(OE) rng_core xfrm_algo agpgart raid_class dca usb_common sp5100_tco scsi_transport_sas wmi button sunrpc dm_mirror dm_region_hash dm_log dm_mod be2iscsi bnx2i cnic uio cxgb4i cxgb4 tls cxgb3i cxgb3 mdio libcxgbi libcxgb qla4xxx iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod scsi_common loop btrfs blake2b_generic xor raid6_pq libcrc32c crc32c_generic crc32c_intel
[  659.284679] CR2: ffffa0553197e00c
[  659.284829] ---[ end trace 0000000000000000 ]---
[  659.290489] pstore: backend (erst) writing error (-28)
[  659.290645] RIP: 0010:zil_claim_log_record+0x47/0xd0 [zfs]
[  659.290950] Code: 83 f8 09 74 6c 48 83 f8 18 75 55 48 85 d2 74 50 48 83 7e 40 00 48 8b 6f 38 74 45 45 31 e4 31 d2 48 89 d0 48 c1 e0 07 48 01 d8 40 7c 80 75 22 48 83 78 48 00 75 07 48 83 78 50 00 74 14 48 c1
[  659.291289] RSP: 0018:ffffa054871939d0 EFLAGS: 00010286
[  659.291459] RAX: ffffa0553197df90 RBX: ffffa0553189bf10 RCX: 0000000000000000
[  659.291632] RDX: 0000000000001c41 RSI: ffffffff823164a5 RDI: ffffffff822f5c44
[  659.291806] RBP: ffff9166100e0000 R08: ffff91678d639540 R09: 9ae16a3b2f90404f
[  659.291985] R10: ffff9165c44dbd20 R11: 00000000f50b1000 R12: 0000000000001c41
[  659.292154] R13: ffff916789aec540 R14: ffffffffffffffff R15: 0000000000014e15
[  659.292324] FS:  0000000000000000(0000) GS:ffff91e2fe580000(0000) knlGS:0000000000000000
[  659.292497] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  659.292668] CR2: ffffa0553197e00c CR3: 0000004178a20000 CR4: 0000000000350ee0
[  659.292843] note: dmu_objset_find[6198] exited with irqs disabled


heppu commented Oct 26, 2023

Now I got the same error an hour after creating the pool. There were no SMB shares on the pool; however, there was one NFS share.


heppu commented Oct 26, 2023

And now this happened on a completely different machine, which should rule out any hardware cause.

adrian-bl commented:

FWIW, I hit the same bug after upgrading my machine from 2.1.x to 2.2.0:

Everything seemed fine after the reboot, and after about 1 hour of uptime I executed a zpool upgrade.

20 minutes later, ZFS started to complain about a corrupted file in my Maildir folder (a newly delivered mail).

No SMB or NFS was involved.


heppu commented Oct 26, 2023

I have now managed to trigger this a few times on two different machines, both running Void Linux and ZFS 2.2.0 with root on ZFS using ZFSBootMenu. Running generate-zbm seemed to trigger it at least 3 times.


65a commented Nov 1, 2023

@heppu you might want to see whether that linked PR fixes it for you or not; I'm no longer using ZFS encryption for now.

mmatuska (Contributor) commented Nov 1, 2023

This should be fixed in #15465, please verify. For the filesystem to remain consistent, files copied before the patch must be deleted.


awused commented Nov 5, 2023

I'm glad I saw these bugs before updating; I'll definitely be holding off since I've been using encryption for a long time. Given the severity of these pool corruptions, especially with how easily #15465 can be triggered, should 2.2.0 even be left up as the current release?


ckcameron commented Nov 5, 2023

Same issue on multiple fresh pools.

zfs v2.2.2

[57371.032299] PANIC: unencrypted block in encrypted object set 1284
[57371.033443] Showing stack for process 163052
[57371.033447] CPU: 27 PID: 163052 Comm: perl Tainted: P O 6.5.0-10-generic #10-Ubuntu
[57371.033453] Hardware name: ASUS System Product Name/ROG ZENITH EXTREME, BIOS 2401 02/08/2023
[57371.033456] Call Trace:
[57371.033459]  <TASK>
[57371.033465]  dump_stack_lvl+0x48/0x70
[57371.033476]  dump_stack+0x10/0x20
[57371.033483]  spl_dumpstack+0x29/0x40 [spl]
[57371.033507]  vcmn_err+0xcd/0x110 [spl]
[57371.033539]  zfs_panic_recover+0x75/0xa0 [zfs]
[57371.033906]  dbuf_read_impl.constprop.0+0x54c/0x5d0 [zfs]
[57371.034254]  dbuf_read+0x23c/0x5b0 [zfs]
[57371.034594]  ? srso_return_thunk+0x5/0x10
[57371.034604]  dmu_buf_hold_array_by_dnode+0x113/0x570 [zfs]
[57371.034953]  dmu_read_uio_dnode+0x5a/0x110 [zfs]
[57371.035297]  ? srso_return_thunk+0x5/0x10
[57371.035302]  ? avl_add+0x4a/0x80 [zfs]
[57371.035568]  dmu_read_uio_dbuf+0x48/0x70 [zfs]
[57371.035912]  zfs_read+0x125/0x300 [zfs]
[57371.036256]  zpl_iter_read+0xc2/0x140 [zfs]
[57371.036591]  vfs_read+0x20d/0x360
[57371.036603]  ksys_read+0x73/0x100
[57371.036610]  __x64_sys_read+0x19/0x30
[57371.036615]  do_syscall_64+0x5c/0x90
[57371.036620]  ? srso_return_thunk+0x5/0x10
[57371.036625]  ? count_memcg_events.constprop.0+0x2a/0x50
[57371.036630]  ? srso_return_thunk+0x5/0x10
[57371.036635]  ? handle_mm_fault+0xad/0x360
[57371.036641]  ? srso_return_thunk+0x5/0x10
[57371.036645]  ? do_user_addr_fault+0x17a/0x6b0
[57371.036650]  ? srso_return_thunk+0x5/0x10
[57371.036654]  ? exit_to_user_mode_prepare+0x30/0xb0
[57371.036661]  ? srso_return_thunk+0x5/0x10
[57371.036666]  ? irqentry_exit_to_user_mode+0x17/0x20
[57371.036671]  ? srso_return_thunk+0x5/0x10
[57371.036675]  ? irqentry_exit+0x43/0x50
[57371.036679]  ? srso_return_thunk+0x5/0x10
[57371.036683]  ? exc_page_fault+0x94/0x1b0
[57371.036689]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[57371.036695] RIP: 0033:0x7f31c5876721
[57371.036784] Code: 00 48 8b 15 f9 36 0e 00 f7 d8 64 89 02 b8 ff ff ff ff eb c3 e8 a0 e6 01 00 f3 0f 1e fa 80 3d 25 be 0e 00 00 74 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 c3 66 0f 1f 44 00 00 48 83 ec 28 48 89 54
[57371.036788] RSP: 002b:00007ffc365d0b78 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[57371.036794] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f31c5876721
[57371.036798] RDX: 0000000000002000 RSI: 00005595e47e24c0 RDI: 0000000000000004
[57371.036801] RBP: 0000000000002000 R08: 0000000000000000 R09: 0000000000000000
[57371.036804] R10: 0000000000000018 R11: 0000000000000246 R12: 00005595e47e24c0
[57371.036807] R13: 00005595e47a82a0 R14: 0000000000000004 R15: 00005595e47dd950
[57371.036815]  </TASK>

oromenahar (Contributor) commented:

@behlendorf I think this is fixed by #15464; can it be closed?

behlendorf (Contributor) commented:

Yes, this is resolved by #15464, which is being backported for a 2.2.1 release.

berenddeboer commented:

This does not seem to have been fixed (or I'm testing this wrong). I'm on Ubuntu 23.10.

So I think my zfs is recent (I compiled 2.2.2 from scratch):

filename:       /lib/modules/6.5.0-13-generic/extra/zcommon/zfs.ko
version:        2.2.2-1
license:        CDDL
license:        Dual BSD/GPL
license:        Dual MIT/GPL
author:         OpenZFS
description:    ZFS
alias:          zzstd
alias:          zcommon
alias:          zunicode
alias:          znvpair
alias:          zlua
alias:          icp
alias:          zavl
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     64DFECF22645F1BD34FFBB0
depends:        spl
retpoline:      Y
name:           zfs
vermagic:       6.5.0-13-generic SMP preempt mod_unload modversions
p

But the error is still there (same as with the original Ubuntu Mantic ZFS module, 2.2.0-rc3). I have a 100% repeatable case of this bug:

2023-12-02T20:31:21.300626+13:00 firefly kernel: [  250.081483] PANIC: unencrypted block in encrypted object set 518
2023-12-02T20:31:21.300634+13:00 firefly kernel: [  250.081487] Showing stack for process 19604
2023-12-02T20:31:21.300635+13:00 firefly kernel: [  250.081489] CPU: 6 PID: 19604 Comm: node Tainted: P           OE      6.5.0-13-generic #13-Ubuntu
2023-12-02T20:31:21.300635+13:00 firefly kernel: [  250.081490] Hardware name: HP HP ZBook Firefly 14 inch G8 Mobile Workstation PC/880D, BIOS T76 Ver. 01.12.00 01/05/2023
2023-12-02T20:31:21.300636+13:00 firefly kernel: [  250.081491] Call Trace:
2023-12-02T20:31:21.300636+13:00 firefly kernel: [  250.081492]  <TASK>
2023-12-02T20:31:21.300636+13:00 firefly kernel: [  250.081495]  dump_stack_lvl+0x48/0x70
2023-12-02T20:31:21.300637+13:00 firefly kernel: [  250.081499]  dump_stack+0x10/0x20
2023-12-02T20:31:21.300637+13:00 firefly kernel: [  250.081503]  spl_dumpstack+0x29/0x40 [spl]
2023-12-02T20:31:21.300638+13:00 firefly kernel: [  250.081513]  vcmn_err+0xcd/0x110 [spl]
2023-12-02T20:31:21.300638+13:00 firefly kernel: [  250.081523]  zfs_panic_recover+0x75/0xa0 [zfs]
2023-12-02T20:31:21.300638+13:00 firefly kernel: [  250.081688]  dbuf_read_impl.constprop.0+0x54c/0x5d0 [zfs]
2023-12-02T20:31:21.300639+13:00 firefly kernel: [  250.081809]  dbuf_read+0x23c/0x5b0 [zfs]
2023-12-02T20:31:21.300639+13:00 firefly kernel: [  250.081924]  dmu_buf_hold_array_by_dnode+0x113/0x570 [zfs]
2023-12-02T20:31:21.300639+13:00 firefly kernel: [  250.082042]  dmu_read_uio_dnode+0x5a/0x110 [zfs]
2023-12-02T20:31:21.300640+13:00 firefly kernel: [  250.082158]  ? avl_add+0x4a/0x80 [zfs]
2023-12-02T20:31:21.300640+13:00 firefly kernel: [  250.082229]  dmu_read_uio_dbuf+0x48/0x70 [zfs]
2023-12-02T20:31:21.300640+13:00 firefly kernel: [  250.082344]  zfs_read+0x125/0x300 [zfs]
2023-12-02T20:31:21.300640+13:00 firefly kernel: [  250.082469]  zpl_iter_read+0xc2/0x140 [zfs]
2023-12-02T20:31:21.300641+13:00 firefly kernel: [  250.082586]  vfs_read+0x20a/0x360
2023-12-02T20:31:21.300641+13:00 firefly kernel: [  250.082589]  ksys_read+0x73/0x100
2023-12-02T20:31:21.300641+13:00 firefly kernel: [  250.082591]  __x64_sys_read+0x19/0x30
2023-12-02T20:31:21.300643+13:00 firefly kernel: [  250.082593]  do_syscall_64+0x59/0x90
2023-12-02T20:31:21.300647+13:00 firefly kernel: [  250.082596]  ? do_user_addr_fault+0x238/0x6b0
2023-12-02T20:31:21.300648+13:00 firefly kernel: [  250.082598]  ? exit_to_user_mode_prepare+0x30/0xb0
2023-12-02T20:31:21.300648+13:00 firefly kernel: [  250.082601]  ? irqentry_exit_to_user_mode+0x17/0x20
2023-12-02T20:31:21.300649+13:00 firefly kernel: [  250.082603]  ? irqentry_exit+0x43/0x50
2023-12-02T20:31:21.300649+13:00 firefly kernel: [  250.082605]  ? exc_page_fault+0x94/0x1b0
2023-12-02T20:31:21.300649+13:00 firefly kernel: [  250.082606]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
2023-12-02T20:31:21.300649+13:00 firefly kernel: [  250.082609] RIP: 0033:0x7f4f030a175c
2023-12-02T20:31:21.300650+13:00 firefly kernel: [  250.082643] Code: ec 28 48 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 29 91 f7 ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 34 44 89 c7 48 89 44 24 08 e8 7f 91 f7 ff 48
2023-12-02T20:31:21.300650+13:00 firefly kernel: [  250.082644] RSP: 002b:00007ffdbe548e40 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
2023-12-02T20:31:21.300650+13:00 firefly kernel: [  250.082646] RAX: ffffffffffffffda RBX: 00007ffdbe549080 RCX: 00007f4f030a175c
2023-12-02T20:31:21.300652+13:00 firefly kernel: [  250.082646] RDX: 0000000000002000 RSI: 00007f4f0021b660 RDI: 0000000000000019
2023-12-02T20:31:21.300652+13:00 firefly kernel: [  250.082647] RBP: 00007ffdbe548f70 R08: 0000000000000000 R09: ffffffffffffffff
2023-12-02T20:31:21.300653+13:00 firefly kernel: [  250.082648] R10: 0000000000000070 R11: 0000000000000246 R12: 00007f4f02f846c8
2023-12-02T20:31:21.300653+13:00 firefly kernel: [  250.082648] R13: 00007ffdbe5491d0 R14: 00007ffdbe549070 R15: 00007ffdbe5491d0
2023-12-02T20:31:21.300653+13:00 firefly kernel: [  250.082650]  </TASK>

This could possibly be a remnant from an earlier corruption, but I'm not sure how to fix it (a zpool scrub comes back clean).

Any hints are appreciated.

rincebrain (Contributor) commented:

Delete the file, or do what someone did earlier in this thread: comment out the assertion trip so you can read it, then delete the original copy.
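
If it isn't obvious which file is affected, something along these lines usually narrows it down (poolname and the path are placeholders; file paths are only shown where ZFS can resolve them, e.g. when the dataset is mounted with keys loaded):

# list permanent errors, including affected file paths where resolvable
zpool status -v poolname
# remove the reported file (and any snapshots that still reference it), then scrub;
# the entry should eventually drop off the error list once nothing references it
rm /path/to/reported/file
zpool scrub poolname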
