Compilation errors for camsys on RK3288 device #4
Comments
The RK_CAMSYS driver is used in Android. A V4L2 camera driver (still rough, since it was migrated from Sofia-3GR) will be included in release 4.4 in a few days. For now, you can get it from another branch.
I will try that branch, thanks.
wzyy2 pushed a commit that referenced this issue on Jan 12, 2017:
[ Upstream commit 74a5ed5 ] When booting up LDOM, find_node() warns that a physical address doesn't match a NUMA node. WARNING: CPU: 0 PID: 0 at arch/sparc/mm/init_64.c:835 find_node+0xf4/0x120 find_node: A physical address doesn't match a NUMA node rule. Some physical memory will be owned by node 0.Modules linked in: CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc3 #4 Call Trace: [0000000000468ba0] __warn+0xc0/0xe0 [0000000000468c74] warn_slowpath_fmt+0x34/0x60 [00000000004592f4] find_node+0xf4/0x120 [0000000000dd0774] add_node_ranges+0x38/0xe4 [0000000000dd0b1c] numa_parse_mdesc+0x268/0x2e4 [0000000000dd0e9c] bootmem_init+0xb8/0x160 [0000000000dd174c] paging_init+0x808/0x8fc [0000000000dcb0d0] setup_arch+0x2c8/0x2f0 [0000000000dc68a0] start_kernel+0x48/0x424 [0000000000dcb374] start_early_boot+0x27c/0x28c [0000000000a32c08] tlb_fixup_done+0x4c/0x64 [0000000000027f08] 0x27f08 It is because linux use an internal structure node_masks[] to keep the best memory latency node only. However, LDOM mdesc can contain single latency-group with multiple memory latency nodes. If the address doesn't match the best latency node within node_masks[], it should check for an alternative via mdesc. The warning message should only be printed if the address doesn't match any node_masks[] nor within mdesc. To minimize the impact of searching mdesc every time, the last matched mask and index is stored in a variable. Signed-off-by: Thomas Tai <[email protected]> Reviewed-by: Chris Hyser <[email protected]> Reviewed-by: Liam Merwick <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
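The interesting part of this fix is the lookup strategy: try the best-latency table first, fall back to the slower machine-description search, and cache the last slow-path hit so repeated lookups in the same range stay cheap. A small user-space sketch of that shape (the table layout, values and names below are invented for illustration, not taken from the sparc code):

#include <stdint.h>
#include <stdio.h>

struct latency_group { uint64_t mask; uint64_t match; int node; };

/* Invented example data: the "best latency" table plus extra groups that
 * would normally only be found by re-walking the machine description. */
static struct latency_group best[2]  = { { 0xf0, 0x10, 0 }, { 0xf0, 0x20, 1 } };
static struct latency_group extra[2] = { { 0xf0, 0x30, 0 }, { 0xf0, 0x40, 1 } };

static int last_idx = -1;               /* cache of the last slow-path hit */

static int find_node(uint64_t addr)
{
        for (int i = 0; i < 2; i++)     /* fast path: best-latency table */
                if ((addr & best[i].mask) == best[i].match)
                        return best[i].node;

        if (last_idx >= 0 &&            /* cached slow-path entry */
            (addr & extra[last_idx].mask) == extra[last_idx].match)
                return extra[last_idx].node;

        for (int i = 0; i < 2; i++)     /* slow path: full search, then cache */
                if ((addr & extra[i].mask) == extra[i].match) {
                        last_idx = i;
                        return extra[i].node;
                }

        fprintf(stderr, "find_node: no NUMA node for %#lx, using node 0\n",
                (unsigned long)addr);
        return 0;
}

int main(void)
{
        printf("%d %d %d\n", find_node(0x30), find_node(0x31), find_node(0x55));
        return 0;
}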
wzyy2 pushed a commit that referenced this issue on Jan 12, 2017:
{min,max}_capacity are static variables that are only updated from __update_min_max_capacity(), but not used anywhere else. Remove them together with the function updating them. This has also the nice side effect of fixing a LOCKDEP warning related to locking all CPUs in update_min_max_capacity(), as reported by Ke Wang: [ 2.853595] c0 ============================================= [ 2.859219] c0 [ INFO: possible recursive locking detected ] [ 2.864852] c0 4.4.6+ #5 Tainted: G W [ 2.869604] c0 --------------------------------------------- [ 2.875230] c0 swapper/0/1 is trying to acquire lock: [ 2.880248] (&rq->lock){-.-.-.}, at: [<ffffff80081241cc>] cpufreq_notifier_policy+0x2e8/0x37c [ 2.888815] c0 [ 2.888815] c0 but task is already holding lock: [ 2.895132] (&rq->lock){-.-.-.}, at: [<ffffff80081241cc>] cpufreq_notifier_policy+0x2e8/0x37c [ 2.903700] c0 [ 2.903700] c0 other info that might help us debug this: [ 2.910710] c0 Possible unsafe locking scenario: [ 2.910710] c0 [ 2.917112] c0 CPU0 [ 2.919795] c0 ---- [ 2.922478] lock(&rq->lock); [ 2.925507] lock(&rq->lock); [ 2.928536] c0 [ 2.928536] c0 *** DEADLOCK *** [ 2.928536] c0 [ 2.935200] c0 May be due to missing lock nesting notation [ 2.935200] c0 [ 2.942471] c0 7 locks held by swapper/0/1: [ 2.946623] #0: (&dev->mutex){......}, at: [<ffffff800850e118>] __driver_attach+0x64/0xb8 [ 2.954931] #1: (&dev->mutex){......}, at: [<ffffff800850e128>] __driver_attach+0x74/0xb8 [ 2.963239] #2: (cpu_hotplug.lock){++++++}, at: [<ffffff80080cb218>] get_online_cpus+0x48/0xa8 [ 2.971979] #3: (subsys mutex#6){+.+.+.}, at: [<ffffff800850bed4>] subsys_interface_register+0x44/0xc0 [ 2.981411] #4: (&policy->rwsem){+.+.+.}, at: [<ffffff8008720338>] cpufreq_online+0x330/0x76c [ 2.990065] #5: ((cpufreq_policy_notifier_list).rwsem){.+.+..}, at: [<ffffff80080f3418>] blocking_notifier_call_chain+0x38/0xc4 [ 3.001661] #6: (&rq->lock){-.-.-.}, at: [<ffffff80081241cc>] cpufreq_notifier_policy+0x2e8/0x37c [ 3.010661] c0 [ 3.010661] c0 stack backtrace: [ 3.015514] c0 CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.4.6+ #5 [ 3.022864] c0 Hardware name: Spreadtrum SP9860g Board (DT) [ 3.028402] c0 Call trace: [ 3.031092] c0 [<ffffff800808b50c>] dump_backtrace+0x0/0x210 [ 3.036716] c0 [<ffffff800808b73c>] show_stack+0x20/0x28 [ 3.041994] c0 [<ffffff8008433310>] dump_stack+0xa8/0xe0 [ 3.047273] c0 [<ffffff80081349e0>] __lock_acquire+0x1e0c/0x2218 [ 3.053243] c0 [<ffffff80081353c0>] lock_acquire+0xe0/0x280 [ 3.058784] c0 [<ffffff8008abfdfc>] _raw_spin_lock+0x44/0x58 [ 3.064407] c0 [<ffffff80081241cc>] cpufreq_notifier_policy+0x2e8/0x37c [ 3.070983] c0 [<ffffff80080f3458>] blocking_notifier_call_chain+0x78/0xc4 [ 3.077820] c0 [<ffffff8008720294>] cpufreq_online+0x28c/0x76c [ 3.083618] c0 [<ffffff80087208a4>] cpufreq_add_dev+0x98/0xdc [ 3.089331] c0 [<ffffff800850bf14>] subsys_interface_register+0x84/0xc0 [ 3.095907] c0 [<ffffff800871fa0c>] cpufreq_register_driver+0x168/0x28c [ 3.102486] c0 [<ffffff80087272f8>] sprd_cpufreq_probe+0x134/0x19c [ 3.108629] c0 [<ffffff8008510768>] platform_drv_probe+0x58/0xd0 [ 3.114599] c0 [<ffffff800850de2c>] driver_probe_device+0x1e8/0x470 [ 3.120830] c0 [<ffffff800850e168>] __driver_attach+0xb4/0xb8 [ 3.126541] c0 [<ffffff800850b750>] bus_for_each_dev+0x6c/0xac [ 3.132339] c0 [<ffffff800850d6c0>] driver_attach+0x2c/0x34 [ 3.137877] c0 [<ffffff800850d234>] bus_add_driver+0x210/0x298 [ 3.143676] c0 [<ffffff800850f1f4>] driver_register+0x7c/0x114 [ 3.149476] c0 [<ffffff8008510654>] __platform_driver_register+0x60/0x6c [ 3.156139] c0 
[<ffffff8008f49f40>] sprd_cpufreq_platdrv_init+0x18/0x20 [ 3.162714] c0 [<ffffff8008082a64>] do_one_initcall+0xd0/0x1d8 [ 3.168514] c0 [<ffffff8008f0bc58>] kernel_init_freeable+0x1fc/0x29c [ 3.174834] c0 [<ffffff8008ab554c>] kernel_init+0x20/0x12c [ 3.180281] c0 [<ffffff8008086290>] ret_from_fork+0x10/0x40 Reported-by: Ke Wang <[email protected]> Signed-off-by: Juri Lelli <[email protected]>
geekerlw pushed a commit to geekerlw/kernel that referenced this issue on Jan 12, 2017:
drm_connector_register_all requires a few too many locks because our connector_list locking is busted. Add another FIXME+hack to work around this. This should address the below lockdep splat: ====================================================== [ INFO: possible circular locking dependency detected ] 4.7.0-rc5+ #524 Tainted: G O ------------------------------------------------------- kworker/u8:0/6 is trying to acquire lock: (&dev->mode_config.mutex){+.+.+.}, at: [<ffffffff815afde0>] drm_modeset_lock_all+0x40/0x120 but task is already holding lock: ((fb_notifier_list).rwsem){++++.+}, at: [<ffffffff810ac195>] __blocking_notifier_call_chain+0x35/0x70 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #1 ((fb_notifier_list).rwsem){++++.+}: [<ffffffff810df611>] lock_acquire+0xb1/0x200 [<ffffffff819a55b4>] down_write+0x44/0x80 [<ffffffff810abf91>] blocking_notifier_chain_register+0x21/0xb0 [<ffffffff814c7448>] fb_register_client+0x18/0x20 [<ffffffff814c6c86>] backlight_device_register+0x136/0x260 [<ffffffffa0127eb2>] intel_backlight_device_register+0xa2/0x160 [i915] [<ffffffffa00f46be>] intel_connector_register+0xe/0x10 [i915] [<ffffffffa0112bfb>] intel_dp_connector_register+0x1b/0x80 [i915] [<ffffffff8159dfea>] drm_connector_register+0x4a/0x80 [<ffffffff8159fe44>] drm_connector_register_all+0x64/0xf0 [<ffffffff815a2a64>] drm_modeset_register_all+0x174/0x1c0 [<ffffffff81599b72>] drm_dev_register+0xc2/0xd0 [<ffffffffa00621d7>] i915_driver_load+0x1547/0x2200 [i915] [<ffffffffa006d80f>] i915_pci_probe+0x4f/0x70 [i915] [<ffffffff814a2135>] local_pci_probe+0x45/0xa0 [<ffffffff814a349b>] pci_device_probe+0xdb/0x130 [<ffffffff815c07e3>] driver_probe_device+0x223/0x440 [<ffffffff815c0ad5>] __driver_attach+0xd5/0x100 [<ffffffff815be386>] bus_for_each_dev+0x66/0xa0 [<ffffffff815c002e>] driver_attach+0x1e/0x20 [<ffffffff815bf9be>] bus_add_driver+0x1ee/0x280 [<ffffffff815c1810>] driver_register+0x60/0xe0 [<ffffffff814a1a10>] __pci_register_driver+0x60/0x70 [<ffffffffa01a905b>] i915_init+0x5b/0x62 [i915] [<ffffffff8100042d>] do_one_initcall+0x3d/0x150 [<ffffffff811a935b>] do_init_module+0x5f/0x1d9 [<ffffffff81124416>] load_module+0x20e6/0x27e0 [<ffffffff81124d63>] SYSC_finit_module+0xc3/0xf0 [<ffffffff81124dae>] SyS_finit_module+0xe/0x10 [<ffffffff819a83a9>] entry_SYSCALL_64_fastpath+0x1c/0xac -> #0 (&dev->mode_config.mutex){+.+.+.}: [<ffffffff810df0ac>] __lock_acquire+0x10fc/0x1260 [<ffffffff810df611>] lock_acquire+0xb1/0x200 [<ffffffff819a3097>] mutex_lock_nested+0x67/0x3c0 [<ffffffff815afde0>] drm_modeset_lock_all+0x40/0x120 [<ffffffff8158f79b>] drm_fb_helper_restore_fbdev_mode_unlocked+0x2b/0x80 [<ffffffff8158f81d>] drm_fb_helper_set_par+0x2d/0x50 [<ffffffffa0105f7a>] intel_fbdev_set_par+0x1a/0x60 [i915] [<ffffffff814c13c6>] fbcon_init+0x586/0x610 [<ffffffff8154d16a>] visual_init+0xca/0x130 [<ffffffff8154e611>] do_bind_con_driver+0x1c1/0x3a0 [<ffffffff8154eaf6>] do_take_over_console+0x116/0x180 [<ffffffff814bd3a7>] do_fbcon_takeover+0x57/0xb0 [<ffffffff814c1e48>] fbcon_event_notify+0x658/0x750 [<ffffffff810abcae>] notifier_call_chain+0x3e/0xb0 [<ffffffff810ac1ad>] __blocking_notifier_call_chain+0x4d/0x70 [<ffffffff810ac1e6>] blocking_notifier_call_chain+0x16/0x20 [<ffffffff814c748b>] fb_notifier_call_chain+0x1b/0x20 [<ffffffff814c86b1>] register_framebuffer+0x251/0x330 [<ffffffff8158fa9f>] drm_fb_helper_initial_config+0x25f/0x3f0 [<ffffffffa0106b48>] intel_fbdev_initial_config+0x18/0x30 [i915] [<ffffffff810adfd8>] async_run_entry_fn+0x48/0x150 
[<ffffffff810a3947>] process_one_work+0x1e7/0x750 [<ffffffff810a3efb>] worker_thread+0x4b/0x4f0 [<ffffffff810aad4f>] kthread+0xef/0x110 [<ffffffff819a85ef>] ret_from_fork+0x1f/0x40 other info that might help us debug this: Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock((fb_notifier_list).rwsem); lock(&dev->mode_config.mutex); lock((fb_notifier_list).rwsem); lock(&dev->mode_config.mutex); *** DEADLOCK *** 6 locks held by kworker/u8:0/6: #0: ("events_unbound"){.+.+.+}, at: [<ffffffff810a38c9>] process_one_work+0x169/0x750 #1: ((&entry->work)){+.+.+.}, at: [<ffffffff810a38c9>] process_one_work+0x169/0x750 rockchip-linux#2: (registration_lock){+.+.+.}, at: [<ffffffff814c8487>] register_framebuffer+0x27/0x330 rockchip-linux#3: (console_lock){+.+.+.}, at: [<ffffffff814c86ce>] register_framebuffer+0x26e/0x330 rockchip-linux#4: (&fb_info->lock){+.+.+.}, at: [<ffffffff814c78dd>] lock_fb_info+0x1d/0x40 rockchip-linux#5: ((fb_notifier_list).rwsem){++++.+}, at: [<ffffffff810ac195>] __blocking_notifier_call_chain+0x35/0x70 stack backtrace: CPU: 2 PID: 6 Comm: kworker/u8:0 Tainted: G O 4.7.0-rc5+ #524 Hardware name: Intel Corp. Broxton P/NOTEBOOK, BIOS APLKRVPA.X64.0138.B33.1606250842 06/25/2016 Workqueue: events_unbound async_run_entry_fn 0000000000000000 ffff8800758577f0 ffffffff814507a5 ffffffff828b9900 ffffffff828b9900 ffff880075857830 ffffffff810dc6fa ffff880075857880 ffff88007584d688 0000000000000005 0000000000000006 ffff88007584d6b0 Call Trace: [<ffffffff814507a5>] dump_stack+0x67/0x92 [<ffffffff810dc6fa>] print_circular_bug+0x1aa/0x200 [<ffffffff810df0ac>] __lock_acquire+0x10fc/0x1260 [<ffffffff810df611>] lock_acquire+0xb1/0x200 [<ffffffff815afde0>] ? drm_modeset_lock_all+0x40/0x120 [<ffffffff815afde0>] ? drm_modeset_lock_all+0x40/0x120 [<ffffffff819a3097>] mutex_lock_nested+0x67/0x3c0 [<ffffffff815afde0>] ? drm_modeset_lock_all+0x40/0x120 [<ffffffff810fa85f>] ? rcu_read_lock_sched_held+0x7f/0x90 [<ffffffff81208218>] ? kmem_cache_alloc_trace+0x248/0x2b0 [<ffffffff815afdc5>] ? drm_modeset_lock_all+0x25/0x120 [<ffffffff815afde0>] drm_modeset_lock_all+0x40/0x120 [<ffffffff8158f79b>] drm_fb_helper_restore_fbdev_mode_unlocked+0x2b/0x80 [<ffffffff8158f81d>] drm_fb_helper_set_par+0x2d/0x50 [<ffffffffa0105f7a>] intel_fbdev_set_par+0x1a/0x60 [i915] [<ffffffff814c13c6>] fbcon_init+0x586/0x610 [<ffffffff8154d16a>] visual_init+0xca/0x130 [<ffffffff8154e611>] do_bind_con_driver+0x1c1/0x3a0 [<ffffffff8154eaf6>] do_take_over_console+0x116/0x180 [<ffffffff814bd3a7>] do_fbcon_takeover+0x57/0xb0 [<ffffffff814c1e48>] fbcon_event_notify+0x658/0x750 [<ffffffff810abcae>] notifier_call_chain+0x3e/0xb0 [<ffffffff810ac1ad>] __blocking_notifier_call_chain+0x4d/0x70 [<ffffffff810ac1e6>] blocking_notifier_call_chain+0x16/0x20 [<ffffffff814c748b>] fb_notifier_call_chain+0x1b/0x20 [<ffffffff814c86b1>] register_framebuffer+0x251/0x330 [<ffffffff815b7e8d>] ? vga_switcheroo_client_fb_set+0x5d/0x70 [<ffffffff8158fa9f>] drm_fb_helper_initial_config+0x25f/0x3f0 [<ffffffffa0106b48>] intel_fbdev_initial_config+0x18/0x30 [i915] [<ffffffff810adfd8>] async_run_entry_fn+0x48/0x150 [<ffffffff810a3947>] process_one_work+0x1e7/0x750 [<ffffffff810a38c9>] ? process_one_work+0x169/0x750 [<ffffffff810a3efb>] worker_thread+0x4b/0x4f0 [<ffffffff810a3eb0>] ? process_one_work+0x750/0x750 [<ffffffff810aad4f>] kthread+0xef/0x110 [<ffffffff819a85ef>] ret_from_fork+0x1f/0x40 [<ffffffff810aac60>] ? kthread_stop+0x2e0/0x2e0 v2: Rebase onto the right branch (hand-editing patches ftw) and add more reporters. 
Reported-by: Imre Deak <[email protected]> Cc: Imre Deak <[email protected]> Cc: Chris Wilson <[email protected]> Acked-by: Chris Wilson <[email protected]> Reported-by: Jiri Kosina <[email protected]> Cc: Jiri Kosina <[email protected]> Signed-off-by: Daniel Vetter <[email protected]> Signed-off-by: Dave Airlie <[email protected]> (cherry picked from commit 5c6c201) Change-Id: I24bc8426dafa81dc1f1de31aea527d75060ed68f Signed-off-by: Mark Yao <[email protected]>
Kwiboo referenced this issue in Kwiboo/linux-rockchip on Apr 6, 2017:
…_EXT commit 998d757 upstream. If there is no OPREGION_ASLE_EXT then a VBT stored in mailbox #4 may use the ASLE_EXT parts of the opregion. Adjust the vbt_size calculation for a vbt in mailbox #4 for this. This fixes the driver not finding the VBT on a jumper ezpad mini3 cherrytrail tablet and on a ACER SW5_017 machine. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Jani Nikula <[email protected]> Link: http://patchwork.freedesktop.org/patch/msgid/[email protected] (cherry picked from commit dfb65e7) Signed-off-by: Jani Nikula <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
Kwiboo referenced this issue in Kwiboo/linux-rockchip on Apr 6, 2017:
[ Upstream commit d5afb6f ] The code where sk_clone() came from created a new socket and locked it, but then, on the error path didn't unlock it. This problem stayed there for a long while, till b0691c8 ("net: Unlock sock before calling sk_free()") fixed it, but unfortunately the callers of sk_clone() (now sk_clone_locked()) were not audited and the one in dccp_create_openreq_child() remained. Now in the age of the syskaller fuzzer, this was finally uncovered, as reported by Dmitry: ---- 8< ---- I've got the following report while running syzkaller fuzzer on 86292b3 ("Merge branch 'akpm' (patches from Andrew)") [ BUG: held lock freed! ] 4.10.0+ rockchip-linux#234 Not tainted ------------------------- syz-executor6/6898 is freeing memory ffff88006286cac0-ffff88006286d3b7, with a lock still held there! (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] spin_lock include/linux/spinlock.h:299 [inline] (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] sk_clone_lock+0x3d9/0x12c0 net/core/sock.c:1504 5 locks held by syz-executor6/6898: #0: (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffff839a34b4>] lock_sock include/net/sock.h:1460 [inline] #0: (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffff839a34b4>] inet_stream_connect+0x44/0xa0 net/ipv4/af_inet.c:681 #1: (rcu_read_lock){......}, at: [<ffffffff83bc1c2a>] inet6_csk_xmit+0x12a/0x5d0 net/ipv6/inet6_connection_sock.c:126 #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] __skb_unlink include/linux/skbuff.h:1767 [inline] #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] __skb_dequeue include/linux/skbuff.h:1783 [inline] #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] process_backlog+0x264/0x730 net/core/dev.c:4835 #3: (rcu_read_lock){......}, at: [<ffffffff83aeb5c0>] ip6_input_finish+0x0/0x1700 net/ipv6/ip6_input.c:59 #4: (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] spin_lock include/linux/spinlock.h:299 [inline] #4: (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] sk_clone_lock+0x3d9/0x12c0 net/core/sock.c:1504 Fix it just like was done by b0691c8 ("net: Unlock sock before calling sk_free()"). Reported-by: Dmitry Vyukov <[email protected]> Cc: Cong Wang <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Gerrit Renker <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
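The bug class here is an object handed back in a locked state being freed on an error path while still locked. A minimal user-space sketch of the corrected pattern, using a pthread mutex as a stand-in for the socket lock (names invented; this is not the dccp patch itself):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
        pthread_mutex_t lock;
};

/* Returns a new connection that is already locked, mirroring the
 * sk_clone_lock() calling convention described above. */
static struct conn *conn_clone_locked(void)
{
        struct conn *c = calloc(1, sizeof(*c));
        if (!c)
                return NULL;
        pthread_mutex_init(&c->lock, NULL);
        pthread_mutex_lock(&c->lock);
        return c;
}

static struct conn *create_child(int init_fails)
{
        struct conn *c = conn_clone_locked();
        if (!c)
                return NULL;

        if (init_fails) {
                /* The bug class fixed above: freeing here while the lock is
                 * still held.  Unlock first, then release the object. */
                pthread_mutex_unlock(&c->lock);
                pthread_mutex_destroy(&c->lock);
                free(c);
                return NULL;
        }
        return c;                       /* caller unlocks on success */
}

int main(void)
{
        struct conn *c = create_child(0);
        if (c) {
                pthread_mutex_unlock(&c->lock);
                pthread_mutex_destroy(&c->lock);
                free(c);
        }
        printf("error path returns: %p\n", (void *)create_child(1));
        return 0;
}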
Kwiboo referenced this issue in Kwiboo/linux-rockchip on Apr 6, 2017:
commit 95a4960 upstream. When iterating busy requests in timeout handler, if the STARTED flag of one request isn't set, that means the request is being processed in block layer or driver, and isn't submitted to hardware yet. In current implementation of blk_mq_check_expired(), if the request queue becomes dying, un-started requests are handled as being completed/freed immediately. This way is wrong, and can cause rq corruption or double allocation[1][2], when doing I/O and removing&resetting NVMe device at the sametime. This patch fixes several issues reported by Yi Zhang. [1]. oops log 1 [ 581.789754] ------------[ cut here ]------------ [ 581.789758] kernel BUG at block/blk-mq.c:374! [ 581.789760] invalid opcode: 0000 [#1] SMP [ 581.789761] Modules linked in: vfat fat ipmi_ssif intel_rapl sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm nvme irqbypass crct10dif_pclmul nvme_core crc32_pclmul ghash_clmulni_intel intel_cstate ipmi_si mei_me ipmi_devintf intel_uncore sg ipmi_msghandler intel_rapl_perf iTCO_wdt mei iTCO_vendor_support mxm_wmi lpc_ich dcdbas shpchp pcspkr acpi_power_meter wmi nfsd auth_rpcgss nfs_acl lockd dm_multipath grace sunrpc ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ahci libahci crc32c_intel tg3 libata megaraid_sas i2c_core ptp fjes pps_core dm_mirror dm_region_hash dm_log dm_mod [ 581.789796] CPU: 1 PID: 1617 Comm: kworker/1:1H Not tainted 4.10.0.bz1420297+ #4 [ 581.789797] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.2.5 09/06/2016 [ 581.789804] Workqueue: kblockd blk_mq_timeout_work [ 581.789806] task: ffff8804721c8000 task.stack: ffffc90006ee4000 [ 581.789809] RIP: 0010:blk_mq_end_request+0x58/0x70 [ 581.789810] RSP: 0018:ffffc90006ee7d50 EFLAGS: 00010202 [ 581.789811] RAX: 0000000000000001 RBX: ffff8802e4195340 RCX: ffff88028e2f4b88 [ 581.789812] RDX: 0000000000001000 RSI: 0000000000001000 RDI: 0000000000000000 [ 581.789813] RBP: ffffc90006ee7d60 R08: 0000000000000003 R09: ffff88028e2f4b00 [ 581.789814] R10: 0000000000001000 R11: 0000000000000001 R12: 00000000fffffffb [ 581.789815] R13: ffff88042abe5780 R14: 000000000000002d R15: ffff88046fbdff80 [ 581.789817] FS: 0000000000000000(0000) GS:ffff88047fc00000(0000) knlGS:0000000000000000 [ 581.789818] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 581.789819] CR2: 00007f64f403a008 CR3: 000000014d078000 CR4: 00000000001406e0 [ 581.789820] Call Trace: [ 581.789825] blk_mq_check_expired+0x76/0x80 [ 581.789828] bt_iter+0x45/0x50 [ 581.789830] blk_mq_queue_tag_busy_iter+0xdd/0x1f0 [ 581.789832] ? blk_mq_rq_timed_out+0x70/0x70 [ 581.789833] ? blk_mq_rq_timed_out+0x70/0x70 [ 581.789840] ? __switch_to+0x140/0x450 [ 581.789841] blk_mq_timeout_work+0x88/0x170 [ 581.789845] process_one_work+0x165/0x410 [ 581.789847] worker_thread+0x137/0x4c0 [ 581.789851] kthread+0x101/0x140 [ 581.789853] ? rescuer_thread+0x3b0/0x3b0 [ 581.789855] ? kthread_park+0x90/0x90 [ 581.789860] ret_from_fork+0x2c/0x40 [ 581.789861] Code: 48 85 c0 74 0d 44 89 e6 48 89 df ff d0 5b 41 5c 5d c3 48 8b bb 70 01 00 00 48 85 ff 75 0f 48 89 df e8 7d f0 ff ff 5b 41 5c 5d c3 <0f> 0b e8 71 f0 ff ff 90 eb e9 0f 1f 40 00 66 2e 0f 1f 84 00 00 [ 581.789882] RIP: blk_mq_end_request+0x58/0x70 RSP: ffffc90006ee7d50 [ 581.789889] ---[ end trace bcaf03d9a14a0a70 ]--- [2]. 
oops log2 [ 6984.857362] BUG: unable to handle kernel NULL pointer dereference at 0000000000000010 [ 6984.857372] IP: nvme_queue_rq+0x6e6/0x8cd [nvme] [ 6984.857373] PGD 0 [ 6984.857374] [ 6984.857376] Oops: 0000 [#1] SMP [ 6984.857379] Modules linked in: ipmi_ssif vfat fat intel_rapl sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel ipmi_si iTCO_wdt iTCO_vendor_support mxm_wmi ipmi_devintf intel_cstate sg dcdbas intel_uncore mei_me intel_rapl_perf mei pcspkr lpc_ich ipmi_msghandler shpchp acpi_power_meter wmi nfsd auth_rpcgss dm_multipath nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect crc32c_intel sysimgblt fb_sys_fops ttm nvme drm nvme_core ahci libahci i2c_core tg3 libata ptp megaraid_sas pps_core fjes dm_mirror dm_region_hash dm_log dm_mod [ 6984.857416] CPU: 7 PID: 1635 Comm: kworker/7:1H Not tainted 4.10.0-2.el7.bz1420297.x86_64 #1 [ 6984.857417] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.2.5 09/06/2016 [ 6984.857427] Workqueue: kblockd blk_mq_run_work_fn [ 6984.857429] task: ffff880476e3da00 task.stack: ffffc90002e90000 [ 6984.857432] RIP: 0010:nvme_queue_rq+0x6e6/0x8cd [nvme] [ 6984.857433] RSP: 0018:ffffc90002e93c50 EFLAGS: 00010246 [ 6984.857434] RAX: 0000000000000000 RBX: ffff880275646600 RCX: 0000000000001000 [ 6984.857435] RDX: 0000000000000fff RSI: 00000002fba2a000 RDI: ffff8804734e6950 [ 6984.857436] RBP: ffffc90002e93d30 R08: 0000000000002000 R09: 0000000000001000 [ 6984.857437] R10: 0000000000001000 R11: 0000000000000000 R12: ffff8804741d8000 [ 6984.857438] R13: 0000000000000040 R14: ffff880475649f80 R15: ffff8804734e6780 [ 6984.857439] FS: 0000000000000000(0000) GS:ffff88047fcc0000(0000) knlGS:0000000000000000 [ 6984.857440] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 6984.857442] CR2: 0000000000000010 CR3: 0000000001c09000 CR4: 00000000001406e0 [ 6984.857443] Call Trace: [ 6984.857451] ? mempool_free+0x2b/0x80 [ 6984.857455] ? bio_free+0x4e/0x60 [ 6984.857459] blk_mq_dispatch_rq_list+0xf5/0x230 [ 6984.857462] blk_mq_process_rq_list+0x133/0x170 [ 6984.857465] __blk_mq_run_hw_queue+0x8c/0xa0 [ 6984.857467] blk_mq_run_work_fn+0x12/0x20 [ 6984.857473] process_one_work+0x165/0x410 [ 6984.857475] worker_thread+0x137/0x4c0 [ 6984.857478] kthread+0x101/0x140 [ 6984.857480] ? rescuer_thread+0x3b0/0x3b0 [ 6984.857481] ? kthread_park+0x90/0x90 [ 6984.857489] ret_from_fork+0x2c/0x40 [ 6984.857490] Code: 8b bd 70 ff ff ff 89 95 50 ff ff ff 89 8d 58 ff ff ff 44 89 95 60 ff ff ff e8 b7 dd 12 e1 8b 95 50 ff ff ff 48 89 85 68 ff ff ff <4c> 8b 48 10 44 8b 58 18 8b 8d 58 ff ff ff 44 8b 95 60 ff ff ff [ 6984.857511] RIP: nvme_queue_rq+0x6e6/0x8cd [nvme] RSP: ffffc90002e93c50 [ 6984.857512] CR2: 0000000000000010 [ 6984.895359] ---[ end trace 2d7ceb528432bf83 ]--- Reported-by: Yi Zhang <[email protected]> Tested-by: Yi Zhang <[email protected]> Reviewed-by: Bart Van Assche <[email protected]> Reviewed-by: Hannes Reinecke <[email protected]> Signed-off-by: Ming Lei <[email protected]> Signed-off-by: Jens Axboe <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
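The rule this fix enforces, that only requests already started on the hardware may be expired by the timeout scan, can be modelled in a few lines of user-space C; the structure and flag below are illustrative, not the blk-mq API:

#include <stdbool.h>
#include <stdio.h>

struct request { int tag; bool started; bool completed; };

/* Timeout scan: only requests already issued to hardware may be expired.
 * Requests still owned by the submission path must be left untouched,
 * even if the queue is going away - they will be failed by their owner. */
static void check_expired(struct request *rq, bool queue_dying)
{
        if (!rq->started)
                return;                 /* the fix: do not touch it */
        if (queue_dying) {
                rq->completed = true;   /* safe: hardware owns it */
                printf("req %d: expired on dying queue\n", rq->tag);
        }
}

int main(void)
{
        struct request reqs[2] = { { 0, true, false }, { 1, false, false } };
        for (int i = 0; i < 2; i++)
                check_expired(&reqs[i], true);
        printf("req 1 left alone: %s\n", reqs[1].completed ? "no" : "yes");
        return 0;
}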
Kwiboo referenced this issue in Kwiboo/linux-rockchip on Apr 21, 2017:
Add the missing unlock before return from function etnaviv_gpu_submit() in the error handling case. lst: fixed label name. Fixes: f3cd1b0 ("drm/etnaviv: (re-)protect fence allocation with GPU mutex") CC: [email protected] #4.9+ Signed-off-by: Wei Yongjun <[email protected]> Signed-off-by: Lucas Stach <[email protected]>
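Missing-unlock-on-error bugs of this kind are what the kernel's goto-based error-path idiom exists for: every exit taken after the lock is acquired funnels through a single unlock label. A generic runnable sketch of that idiom, not the etnaviv code itself:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t gpu_lock = PTHREAD_MUTEX_INITIALIZER;

static int submit(int fail_step)
{
        int ret = 0;

        pthread_mutex_lock(&gpu_lock);

        if (fail_step == 1) {           /* e.g. fence allocation failed */
                ret = -1;
                goto out_unlock;        /* never "return ret" while locked */
        }

        /* ... queue the work under the lock ... */

out_unlock:
        pthread_mutex_unlock(&gpu_lock);
        return ret;
}

int main(void)
{
        printf("%d %d\n", submit(1), submit(0));   /* lock stays balanced */
        return 0;
}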
Kwiboo referenced this issue in Kwiboo/linux-rockchip on May 2, 2017:
mipsxx_pmu_handle_shared_irq() calls irq_work_run() while holding the pmuint_rwlock for read. irq_work_run() can, via perf_pending_event(), call try_to_wake_up() which can try to take rq->lock. However, perf can also call perf_pmu_enable() (and thus take the pmuint_rwlock for write) while holding the rq->lock, from finish_task_switch() via perf_event_context_sched_in(). This leads to an ABBA deadlock: PID: 3855 TASK: 8f7ce288 CPU: 2 COMMAND: "process" #0 [89c39ac8] __delay at 803b5be4 #1 [89c39ac8] do_raw_spin_lock at 8008fdcc #2 [89c39af8] try_to_wake_up at 8006e47c #3 [89c39b38] pollwake at 8018eab0 #4 [89c39b68] __wake_up_common at 800879f4 #5 [89c39b98] __wake_up at 800880e4 #6 [89c39bc8] perf_event_wakeup at 8012109c #7 [89c39be8] perf_pending_event at 80121184 #8 [89c39c08] irq_work_run_list at 801151f0 #9 [89c39c38] irq_work_run at 80115274 #10 [89c39c50] mipsxx_pmu_handle_shared_irq at 8002cc7c PID: 1481 TASK: 8eaac6a8 CPU: 3 COMMAND: "process" #0 [8de7f900] do_raw_write_lock at 800900e0 #1 [8de7f918] perf_event_context_sched_in at 80122310 #2 [8de7f938] __perf_event_task_sched_in at 80122608 #3 [8de7f958] finish_task_switch at 8006b8a4 #4 [8de7f998] __schedule at 805e4dc4 #5 [8de7f9f8] schedule at 805e5558 #6 [8de7fa10] schedule_hrtimeout_range_clock at 805e9984 #7 [8de7fa70] poll_schedule_timeout at 8018e8f8 #8 [8de7fa88] do_select at 8018f338 #9 [8de7fd88] core_sys_select at 8018f5cc #10 [8de7fee0] sys_select at 8018f854 #11 [8de7ff28] syscall_common at 80028fc8 The lock seems to be there to protect the hardware counters so there is no need to hold it across irq_work_run(). Signed-off-by: Rabin Vincent <[email protected]> Signed-off-by: Ralf Baechle <[email protected]>
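The shape of the fix is to keep the lock strictly around the hardware-counter accesses it actually protects and run the deferred wake-up work only after it has been dropped. A user-space sketch of that reordering (lock and function names invented):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t pmu_lock = PTHREAD_RWLOCK_INITIALIZER;

static void run_deferred_work(void)
{
        /* Stand-in for irq_work_run(): in the real trace this path ends up
         * taking the scheduler's rq->lock, so it must not run under
         * pmu_lock. */
        printf("deferred wakeups delivered\n");
}

static void pmu_interrupt(void)
{
        int work_pending;

        pthread_rwlock_rdlock(&pmu_lock);
        /* ... read and acknowledge the hardware counters; this is all the
         * lock needs to protect ... */
        work_pending = 1;
        pthread_rwlock_unlock(&pmu_lock);

        if (work_pending)
                run_deferred_work();    /* the fix: call it after unlocking */
}

int main(void)
{
        pmu_interrupt();
        return 0;
}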
wzyy2 pushed a commit that referenced this issue on May 24, 2017:
[ Upstream commit d5afb6f ] The code where sk_clone() came from created a new socket and locked it, but then, on the error path didn't unlock it. This problem stayed there for a long while, till b0691c8 ("net: Unlock sock before calling sk_free()") fixed it, but unfortunately the callers of sk_clone() (now sk_clone_locked()) were not audited and the one in dccp_create_openreq_child() remained. Now in the age of the syskaller fuzzer, this was finally uncovered, as reported by Dmitry: ---- 8< ---- I've got the following report while running syzkaller fuzzer on 86292b3 ("Merge branch 'akpm' (patches from Andrew)") [ BUG: held lock freed! ] 4.10.0+ #234 Not tainted ------------------------- syz-executor6/6898 is freeing memory ffff88006286cac0-ffff88006286d3b7, with a lock still held there! (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] spin_lock include/linux/spinlock.h:299 [inline] (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] sk_clone_lock+0x3d9/0x12c0 net/core/sock.c:1504 5 locks held by syz-executor6/6898: #0: (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffff839a34b4>] lock_sock include/net/sock.h:1460 [inline] #0: (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffff839a34b4>] inet_stream_connect+0x44/0xa0 net/ipv4/af_inet.c:681 #1: (rcu_read_lock){......}, at: [<ffffffff83bc1c2a>] inet6_csk_xmit+0x12a/0x5d0 net/ipv6/inet6_connection_sock.c:126 #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] __skb_unlink include/linux/skbuff.h:1767 [inline] #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] __skb_dequeue include/linux/skbuff.h:1783 [inline] #2: (rcu_read_lock){......}, at: [<ffffffff8369b424>] process_backlog+0x264/0x730 net/core/dev.c:4835 #3: (rcu_read_lock){......}, at: [<ffffffff83aeb5c0>] ip6_input_finish+0x0/0x1700 net/ipv6/ip6_input.c:59 #4: (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] spin_lock include/linux/spinlock.h:299 [inline] #4: (slock-AF_INET6){+.-...}, at: [<ffffffff8362c2c9>] sk_clone_lock+0x3d9/0x12c0 net/core/sock.c:1504 Fix it just like was done by b0691c8 ("net: Unlock sock before calling sk_free()"). Reported-by: Dmitry Vyukov <[email protected]> Cc: Cong Wang <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Gerrit Renker <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
wzyy2 pushed a commit that referenced this issue on May 24, 2017:
If the kernel is set to show unhandled signals, and a user task does not handle a SIGILL as a result of an instruction abort, we will attempt to log the offending instruction with dump_instr before killing the task. We use dump_instr to log the encoding of the offending userspace instruction. However, dump_instr is also used to dump instructions from kernel space, and internally always switches to KERNEL_DS before dumping the instruction with get_user. When both PAN and UAO are in use, reading a user instruction via get_user while in KERNEL_DS will result in a permission fault, which leads to an Oops. As we have regs corresponding to the context of the original instruction abort, we can inspect this and only flip to KERNEL_DS if the original abort was taken from the kernel, avoiding this issue. At the same time, remove the redundant (and incorrect) comments regarding the order dump_mem and dump_instr are called in. Cc: Catalin Marinas <[email protected]> Cc: James Morse <[email protected]> Cc: Robin Murphy <[email protected]> Cc: <[email protected]> #4.6+ Signed-off-by: Mark Rutland <[email protected]> Reported-by: Vladimir Murzin <[email protected]> Tested-by: Vladimir Murzin <[email protected]> Fixes: 57f4959 ("arm64: kernel: Add support for User Access Override") Signed-off-by: Will Deacon <[email protected]> (cherry picked from commit c5cea06) Signed-off-by: Alex Shi <[email protected]>
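In other words, the address-limit override is made conditional on where the abort came from. A kernel-flavoured sketch of that shape, simplified and not buildable on its own (set_fs()/KERNEL_DS are the pre-5.x interfaces this era of kernel uses):

static void dump_instr(const char *lvl, struct pt_regs *regs)
{
        mm_segment_t fs = get_fs();

        /* Only widen the address limit when the faulting context was the
         * kernel; for a user-space abort, get_user() must keep running
         * against the normal user limit or PAN/UAO will (correctly)
         * refuse the access. */
        if (!user_mode(regs))
                set_fs(KERNEL_DS);

        /* ... read and print the instruction words with get_user() ... */

        set_fs(fs);
}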
yanghanxing pushed a commit that referenced this issue on Jun 13, 2017:
commit fee960b upstream. The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a u8 pointer, and thus the hazard only applies to the first byte of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, as demonstrated with the following test case: union u { unsigned long l; unsigned int i[2]; }; unsigned long update_char_hazard(union u *u) { unsigned int a, b; a = u->i[1]; asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL)); b = u->i[1]; return a ^ b; } unsigned long update_long_hazard(union u *u) { unsigned int a, b; a = u->i[1]; asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL)); b = u->i[1]; return a ^ b; } The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when using -O2 or above: 0000000000000000 <update_char_hazard>: 0: d2800001 mov x1, #0x0 // #0 4: f9000001 str x1, [x0] 8: d2800000 mov x0, #0x0 // #0 c: d65f03c0 ret 0000000000000010 <update_long_hazard>: 10: b9400401 ldr w1, [x0,#4] 14: d2800002 mov x2, #0x0 // #0 18: f9000002 str x2, [x0] 1c: b9400400 ldr w0, [x0,#4] 20: 4a000020 eor w0, w1, w0 24: d65f03c0 ret This patch fixes the issue by passing an unsigned long pointer into the +Q constraint, as we do for our cmpxchg code. This may hazard against more than is necessary, but this is better than missing a necessary hazard. Fixes: 305d454 ("arm64: atomics: implement native {relaxed, acquire, release} atomics") Acked-by: Will Deacon <[email protected]> Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
wzyy2 pushed a commit that referenced this issue on Aug 10, 2017:
There's no need to take the rcu read lock when rounding rate. This patch fixes the following BUG: BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620 in_atomic(): 0, irqs_disabled(): 0, pid: 153, name: kworker/u16:2 5 locks held by kworker/u16:2/153: #0: ("%s"("devfreq_wq")){......}, at: [<ffffff80080b8cf4>] process_one_work+0x1c4/0x58c #1: ((&(&devfreq->work)->work)){......}, at: [<ffffff80080b8cf4>] process_one_work+0x1c4/0x58c #2: (&devfreq->lock){......}, at: [<ffffff80089534c8>] devfreq_monitor+0x28/0x8c #3: (&vop->vop_lock){......}, at: [<ffffff80084c826c>] dmc_notifier_call+0x14/0x34 #4: (rcu_read_lock){......}, at: [<ffffff80089557f0>] rockchip_dmcfreq_target+0x0/0x2e0 CPU: 3 PID: 153 Comm: kworker/u16:2 Not tainted 4.4.77 #2573 Hardware name: Rockchip Sheep board (DT) Workqueue: devfreq_wq devfreq_monitor Call trace: [<ffffff8008089930>] dump_backtrace+0x0/0x1c8 [<ffffff8008089b0c>] show_stack+0x14/0x1c [<ffffff800839718c>] dump_stack+0x8c/0xac [<ffffff80080c8d5c>] ___might_sleep+0x11c/0x128 [<ffffff80080c8ddc>] __might_sleep+0x74/0x84 [<ffffff8008c371a4>] mutex_lock_nested+0x4c/0x39c [<ffffff80089458d8>] clk_prepare_lock+0x58/0xc8 [<ffffff8008946ec8>] clk_round_rate+0x34/0x94 [<ffffff800895589c>] rockchip_dmcfreq_target+0xac/0x2e0 [<ffffff80089533f4>] update_devfreq+0x100/0x1ac [<ffffff80089534d0>] devfreq_monitor+0x30/0x8c [<ffffff80080b8e1c>] process_one_work+0x2ec/0x58c [<ffffff80080ba16c>] worker_thread+0x300/0x428 [<ffffff80080bf3e0>] kthread+0x104/0x10c [<ffffff8008082840>] ret_from_fork+0x10/0x50 Change-Id: I31f75a55da72cab597796edd5c339222094fff97 Signed-off-by: Finley Xiao <[email protected]>
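The fix amounts to a reordering: clk_round_rate() can sleep (it takes the clk prepare mutex), so it has to run before entering, or after leaving, the RCU read-side section. A condensed kernel-flavoured sketch of the safe ordering; the function shape and parameters are placeholders, not the actual rockchip_dmcfreq_target() body:

static int dmcfreq_target(struct clk *dmc_clk, unsigned long *freq)
{
        long rounded;

        /* clk_round_rate() takes the clk prepare mutex and may sleep, so it
         * must run before the RCU read-side section, never inside it. */
        rounded = clk_round_rate(dmc_clk, *freq);
        if (rounded < 0)
                return rounded;

        rcu_read_lock();
        /* ... OPP lookup for 'rounded', copying out what is needed ... */
        rcu_read_unlock();

        /* clk_set_rate() sleeps too, and also stays outside the section. */
        return clk_set_rate(dmc_clk, rounded);
}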
wzyy2 pushed a commit that referenced this issue on Sep 22, 2017:
commit cdea465 upstream. A vendor with a system having more than 128 CPUs occasionally encounters the following crash during shutdown. This is not an easily reproduceable event, but the vendor was able to provide the following analysis of the crash, which exhibits the same footprint each time. crash> bt PID: 0 TASK: ffff88017c70ce70 CPU: 5 COMMAND: "swapper/5" #0 [ffff88085c143ac8] machine_kexec at ffffffff81059c8b #1 [ffff88085c143b28] __crash_kexec at ffffffff811052e2 #2 [ffff88085c143bf8] crash_kexec at ffffffff811053d0 #3 [ffff88085c143c10] oops_end at ffffffff8168ef88 #4 [ffff88085c143c38] no_context at ffffffff8167ebb3 #5 [ffff88085c143c88] __bad_area_nosemaphore at ffffffff8167ec49 #6 [ffff88085c143cd0] bad_area_nosemaphore at ffffffff8167edb3 #7 [ffff88085c143ce0] __do_page_fault at ffffffff81691d1e #8 [ffff88085c143d40] do_page_fault at ffffffff81691ec5 #9 [ffff88085c143d70] page_fault at ffffffff8168e188 [exception RIP: unknown or invalid address] RIP: ffffffffa053c800 RSP: ffff88085c143e28 RFLAGS: 00010206 RAX: ffff88017c72bfd8 RBX: ffff88017a8dc000 RCX: ffff8810588b5ac8 RDX: ffff8810588b5a00 RSI: ffffffffa053c800 RDI: ffff8810588b5a00 RBP: ffff88085c143e58 R8: ffff88017c70d408 R9: ffff88017a8dc000 R10: 0000000000000002 R11: ffff88085c143da0 R12: ffff8810588b5ac8 R13: 0000000000000100 R14: ffffffffa053c800 R15: ffff8810588b5a00 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 <IRQ stack> [exception RIP: cpuidle_enter_state+82] RIP: ffffffff81514192 RSP: ffff88017c72be50 RFLAGS: 00000202 RAX: 0000001e4c3c6f16 RBX: 000000000000f8a0 RCX: 0000000000000018 RDX: 0000000225c17d03 RSI: ffff88017c72bfd8 RDI: 0000001e4c3c6f16 RBP: ffff88017c72be78 R8: 000000000000237e R9: 0000000000000018 R10: 0000000000002494 R11: 0000000000000001 R12: ffff88017c72be20 R13: ffff88085c14f8e0 R14: 0000000000000082 R15: 0000001e4c3bb400 ORIG_RAX: ffffffffffffff10 CS: 0010 SS: 0018 This is the corresponding stack trace It has crashed because the area pointed with RIP extracted from timer element is already removed during a shutdown process. The function is smi_timeout(). And we think ffff8810588b5a00 in RDX is a parameter struct smi_info crash> rd ffff8810588b5a00 20 ffff8810588b5a00: ffff8810588b6000 0000000000000000 .`.X............ ffff8810588b5a10: ffff880853264400 ffffffffa05417e0 .D&S......T..... ffff8810588b5a20: 24a024a000000000 0000000000000000 .....$.$........ ffff8810588b5a30: 0000000000000000 0000000000000000 ................ ffff8810588b5a30: 0000000000000000 0000000000000000 ................ ffff8810588b5a40: ffffffffa053a040 ffffffffa053a060 @.S.....`.S..... ffff8810588b5a50: 0000000000000000 0000000100000001 ................ ffff8810588b5a60: 0000000000000000 0000000000000e00 ................ ffff8810588b5a70: ffffffffa053a580 ffffffffa053a6e0 ..S.......S..... ffff8810588b5a80: ffffffffa053a4a0 ffffffffa053a250 ..S.....P.S..... ffff8810588b5a90: 0000000500000002 0000000000000000 ................ Unfortunately the top of this area is already detroyed by someone. But because of two reasonns we think this is struct smi_info 1) The address included in between ffff8810588b5a70 and ffff8810588b5a80: are inside of ipmi_si_intf.c see crash> module ffff88085779d2c0 2) We've found the area which point this. It is offset 0x68 of ffff880859df4000 crash> rd ffff880859df4000 100 ffff880859df4000: 0000000000000000 0000000000000001 ................ ffff880859df4010: ffffffffa0535290 dead000000000200 .RS............. ffff880859df4020: ffff880859df4020 ffff880859df4020 @.Y.... @.Y.... 
ffff880859df4030: 0000000000000002 0000000000100010 ................ ffff880859df4040: ffff880859df4040 ffff880859df4040 @@.Y....@@.Y.... ffff880859df4050: 0000000000000000 0000000000000000 ................ ffff880859df4060: 0000000000000000 ffff8810588b5a00 .........Z.X.... ffff880859df4070: 0000000000000001 ffff880859df4078 [email protected].... If we regards it as struct ipmi_smi in shutdown process it looks consistent. The remedy for this apparent race is affixed below. Signed-off-by: Tony Camuso <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> This was first introduced in 7ea0ed2 ipmi: Make the message handler easier to use for SMI interfaces where some code was moved outside of the rcu_read_lock() and the lock was not added. Signed-off-by: Corey Minyard <[email protected]>
wzyy2 pushed a commit that referenced this issue on Sep 22, 2017:
… crash commit 96b7774 upstream. Commit: 2f5177f ("sched/cgroup: Fix/cleanup cgroup teardown/init") .. moved sched_online_group() from css_online() to css_alloc(). It exposes half-baked task group into global lists before initializing generic cgroup stuff. LTP testcase (third in cgroup_regression_test) written for testing similar race in kernels 2.6.26-2.6.28 easily triggers this oops: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 IP: kernfs_path_from_node_locked+0x260/0x320 CPU: 1 PID: 30346 Comm: cat Not tainted 4.10.0-rc5-test #4 Call Trace: ? kernfs_path_from_node+0x4f/0x60 kernfs_path_from_node+0x3e/0x60 print_rt_rq+0x44/0x2b0 print_rt_stats+0x7a/0xd0 print_cpu+0x2fc/0xe80 ? __might_sleep+0x4a/0x80 sched_debug_show+0x17/0x30 seq_read+0xf2/0x3b0 proc_reg_read+0x42/0x70 __vfs_read+0x28/0x130 ? security_file_permission+0x9b/0xc0 ? rw_verify_area+0x4e/0xb0 vfs_read+0xa5/0x170 SyS_read+0x46/0xa0 entry_SYSCALL_64_fastpath+0x1e/0xad Here the task group is already linked into the global RCU-protected 'task_groups' list, but the css->cgroup pointer is still NULL. This patch reverts this chunk and moves online back to css_online(). Signed-off-by: Konstantin Khlebnikov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Fixes: 2f5177f ("sched/cgroup: Fix/cleanup cgroup teardown/init") Link: http://lkml.kernel.org/r/148655324740.424917.5302984537258726349.stgit@buzz Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Matt Fleming <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
wzyy2 pushed a commit that referenced this issue on Sep 22, 2017:
commit 89affbf upstream. In codepaths that use the begin/retry interface for reading mems_allowed_seq with irqs disabled, there exists a race condition that stalls the patch process after only modifying a subset of the static_branch call sites. This problem manifested itself as a deadlock in the slub allocator, inside get_any_partial. The loop reads mems_allowed_seq value (via read_mems_allowed_begin), performs the defrag operation, and then verifies the consistency of mem_allowed via the read_mems_allowed_retry and the cookie returned by xxx_begin. The issue here is that both begin and retry first check if cpusets are enabled via cpusets_enabled() static branch. This branch can be rewritted dynamically (via cpuset_inc) if a new cpuset is created. The x86 jump label code fully synchronizes across all CPUs for every entry it rewrites. If it rewrites only one of the callsites (specifically the one in read_mems_allowed_retry) and then waits for the smp_call_function(do_sync_core) to complete while a CPU is inside the begin/retry section with IRQs off and the mems_allowed value is changed, we can hang. This is because begin() will always return 0 (since it wasn't patched yet) while retry() will test the 0 against the actual value of the seq counter. The fix is to use two different static keys: one for begin (pre_enable_key) and one for retry (enable_key). In cpuset_inc(), we first bump the pre_enable key to ensure that cpuset_mems_allowed_begin() always return a valid seqcount if are enabling cpusets. Similarly, when disabling cpusets via cpuset_dec(), we first ensure that callers of cpuset_mems_allowed_retry() will start ignoring the seqcount value before we let cpuset_mems_allowed_begin() return 0. The relevant stack traces of the two stuck threads: CPU: 1 PID: 1415 Comm: mkdir Tainted: G L 4.9.36-00104-g540c51286237 #4 Hardware name: Default string Default string/Hardware, BIOS 4.29.1-20170526215256 05/26/2017 task: ffff8817f9c28000 task.stack: ffffc9000ffa4000 RIP: smp_call_function_many+0x1f9/0x260 Call Trace: smp_call_function+0x3b/0x70 on_each_cpu+0x2f/0x90 text_poke_bp+0x87/0xd0 arch_jump_label_transform+0x93/0x100 __jump_label_update+0x77/0x90 jump_label_update+0xaa/0xc0 static_key_slow_inc+0x9e/0xb0 cpuset_css_online+0x70/0x2e0 online_css+0x2c/0xa0 cgroup_apply_control_enable+0x27f/0x3d0 cgroup_mkdir+0x2b7/0x420 kernfs_iop_mkdir+0x5a/0x80 vfs_mkdir+0xf6/0x1a0 SyS_mkdir+0xb7/0xe0 entry_SYSCALL_64_fastpath+0x18/0xad ... CPU: 2 PID: 1 Comm: init Tainted: G L 4.9.36-00104-g540c51286237 #4 Hardware name: Default string Default string/Hardware, BIOS 4.29.1-20170526215256 05/26/2017 task: ffff8818087c0000 task.stack: ffffc90000030000 RIP: int3+0x39/0x70 Call Trace: <#DB> ? ___slab_alloc+0x28b/0x5a0 <EOE> ? 
copy_process.part.40+0xf7/0x1de0 __slab_alloc.isra.80+0x54/0x90 copy_process.part.40+0xf7/0x1de0 copy_process.part.40+0xf7/0x1de0 kmem_cache_alloc_node+0x8a/0x280 copy_process.part.40+0xf7/0x1de0 _do_fork+0xe7/0x6c0 _raw_spin_unlock_irq+0x2d/0x60 trace_hardirqs_on_caller+0x136/0x1d0 entry_SYSCALL_64_fastpath+0x5/0xad do_syscall_64+0x27/0x350 SyS_clone+0x19/0x20 do_syscall_64+0x60/0x350 entry_SYSCALL64_slow_path+0x25/0x25 Link: http://lkml.kernel.org/r/[email protected] Fixes: 46e700a ("mm, page_alloc: remove unnecessary taking of a seqlock when cpusets are disabled") Signed-off-by: Dima Zavin <[email protected]> Reported-by: Cliff Spradlin <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Christopher Lameter <[email protected]> Cc: Li Zefan <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
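The enable/disable ordering of the two keys can be shown with a small user-space model: the key read by begin() is switched on first and off last, the key read by retry() on last and off first, so any reader inside the window gets a cookie its later retry() check can trust. The flag names below are invented, and the model ignores the real code's memory barriers and jump-label patching:

#include <stdbool.h>
#include <stdio.h>

static bool pre_enable_key;     /* read by begin() */
static bool enable_key;         /* read by retry() */
static unsigned seq;            /* the protected sequence counter */

static unsigned begin(void)
{
        return pre_enable_key ? seq : 0;
}

static bool retry(unsigned cookie)
{
        return enable_key && cookie != seq;
}

static void feature_inc(void)
{
        pre_enable_key = true;  /* begin() starts handing out real cookies */
        enable_key = true;      /* only then does retry() start checking them */
}

static void feature_dec(void)
{
        enable_key = false;     /* retry() stops checking first */
        pre_enable_key = false; /* then begin() may go back to returning 0 */
}

int main(void)
{
        feature_inc();
        unsigned c = begin();
        seq++;                          /* concurrent update */
        printf("must retry: %d\n", retry(c));
        feature_dec();
        return 0;
}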
rkchrome pushed a commit that referenced this issue on Nov 3, 2017:
There's a race between usb_gadget_udc_stop() which is likely to set the gadget driver to NULL in the udc driver and this drivers gadget disconnect fn which likely checks for the gadget driver to a null ptr. It happens that unbind (doing set_gadget_data(NULL)) is called before the gadget driver is set to NULL and the udc driver calls disconnect fn which results in cdev being a null ptr. As a workaround we check cdev in android_disconnect() to prevent the following panic: Unable to handle kernel NULL pointer dereference at virtual address 000000a8 pgd = ffffff800940a000 [000000a8] *pgd=00000000be1fe003, *pud=00000000be1fe003, *pmd=0000000000000000 Internal error: Oops: 96000046 [#1] PREEMPT SMP CPU: 7 PID: 1134 Comm: kworker/u16:3 Tainted: G S 4.9.41-g75cd2a0231ea-dirty #4 Hardware name: HiKey960 (DT) Workqueue: events_power_efficient event_work task: ffffffc0b5f4f000 task.stack: ffffffc0b5b94000 PC is at android_disconnect+0x54/0xa4 LR is at android_disconnect+0x54/0xa4 pc : [<ffffff8008855938>] lr : [<ffffff8008855938>] pstate: 80000185 sp : ffffffc0b5b97bf0 x29: ffffffc0b5b97bf0 x28: 0000000000000003 x27: ffffffc0b5181c54 x26: ffffffc0b5181c68 x25: ffffff8008dc1000 x24: ffffffc0b5181d70 x23: ffffff8008dc18a0 x22: ffffffc0b5f5a018 x21: ffffffc0b5894ad8 x20: 0000000000000000 x19: ffffff8008ddaec8 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 00000000007c9ccd x13: 0000000000000000 x12: 0000000000000000 x11: 0000000000000001 x10: 0000000000000001 x9 : ffffff800930f1a8 x8 : ffffff800932a133 x7 : 0000000000000000 x6 : 0000000000000000 x5 : ffffffc0b5b97a50 x4 : ffffffc0be19f090 x3 : 0000000000000000 x2 : ffffff80091ca000 x1 : 000000000000002f x0 : 000000000000002f This happened on a hikey960 with the following backtrace: [<ffffff8008855938>] android_disconnect+0x54/0xa4 [<ffffff80089def38>] dwc3_disconnect_gadget.part.19+0x114.888119] [<ffffff80087f7d48>] dwc3_gadget_suspend+0x6c/0x70 [<ffffff80087ee674>] dwc3_suspend_device+0x58/0xa0 [<ffffff80087fb418>] dwc3_otg_work+0x214/0x474 [<ffffff80087fdc74>] event_work+0x3bc/0x5ac [<ffffff80080e5d88>] process_one_work+0x14c/0x43c [<ffffff80080e60d4>] worker_thread+0x5c/0x438 [<ffffff80080ece68>] kthread+0xec/0x100 [<ffffff8008083680>] ret_from_fork+0x10/0x50 dwc3_otg_work tries to handle a switch from host to device mode and therefore calls disconnect on the gadget driver. To reproduce the issue it is enaugh to enable tethering (rndis gadget), unplug and plug in again the usb connector which causes the change from device to host and back to device mode. Signed-off-by: Danilo Krummrich <[email protected]>
rkchrome pushed a commit that referenced this issue on Dec 4, 2017:
commit ab31fd0 upstream. v4.10 commit 6f2ce1c ("scsi: zfcp: fix rport unblock race with LUN recovery") extended accessing parent pointer fields of struct zfcp_erp_action for tracing. If an erp_action has never been enqueued before, these parent pointer fields are uninitialized and NULL. Examples are zfcp objects freshly added to the parent object's children list, before enqueueing their first recovery subsequently. In zfcp_erp_try_rport_unblock(), we iterate such list. Accessing erp_action fields can cause a NULL pointer dereference. Since the kernel can read from lowcore on s390, it does not immediately cause a kernel page fault. Instead it can cause hangs on trying to acquire the wrong erp_action->adapter->dbf->rec_lock in zfcp_dbf_rec_action_lvl() ^bogus^ while holding already other locks with IRQs disabled. Real life example from attaching lots of LUNs in parallel on many CPUs: crash> bt 17723 PID: 17723 TASK: ... CPU: 25 COMMAND: "zfcperp0.0.1800" LOWCORE INFO: -psw : 0x0404300180000000 0x000000000038e424 -function : _raw_spin_lock_wait_flags at 38e424 ... #0 [fdde8fc90] zfcp_dbf_rec_action_lvl at 3e0004e9862 [zfcp] #1 [fdde8fce8] zfcp_erp_try_rport_unblock at 3e0004dfddc [zfcp] #2 [fdde8fd38] zfcp_erp_strategy at 3e0004e0234 [zfcp] #3 [fdde8fda8] zfcp_erp_thread at 3e0004e0a12 [zfcp] #4 [fdde8fe60] kthread at 173550 #5 [fdde8feb8] kernel_thread_starter at 10add2 zfcp_adapter zfcp_port zfcp_unit <address>, 0x404040d600000000 scsi_device NULL, returning early! zfcp_scsi_dev.status = 0x40000000 0x40000000 ZFCP_STATUS_COMMON_RUNNING crash> zfcp_unit <address> struct zfcp_unit { erp_action = { adapter = 0x0, port = 0x0, unit = 0x0, }, } zfcp_erp_action is always fully embedded into its container object. Such container object is never moved in its object tree (only add or delete). Hence, erp_action parent pointers can never change. To fix the issue, initialize the erp_action parent pointers before adding the erp_action container to any list and thus before it becomes accessible from outside of its initializing function. In order to also close the time window between zfcp_erp_setup_act() memsetting the entire erp_action to zero and setting the parent pointers again, drop the memset and instead explicitly initialize individually all erp_action fields except for parent pointers. To be extra careful not to introduce any other unintended side effect, even keep zeroing the erp_action fields for list and timer. Also double-check with WARN_ON_ONCE that erp_action parent pointers never change, so we get to know when we would deviate from previous behavior. Signed-off-by: Steffen Maier <[email protected]> Fixes: 6f2ce1c ("scsi: zfcp: fix rport unblock race with LUN recovery") Reviewed-by: Benjamin Block <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
rkchrome pushed a commit that referenced this issue on Dec 4, 2017:
commit a743bbe upstream. The warning below says it all: BUG: using __this_cpu_read() in preemptible [00000000] code: swapper/0/1 caller is __this_cpu_preempt_check CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc8 #4 Call Trace: dump_stack check_preemption_disabled ? do_early_param __this_cpu_preempt_check arch_perfmon_init op_nmi_init ? alloc_pci_root_info oprofile_arch_init oprofile_init do_one_initcall ... These accessors should not have been used in the first place: it is PPro so no mixed silicon revisions and thus it can simply use boot_cpu_data. Reported-by: Fengguang Wu <[email protected]> Tested-by: Fengguang Wu <[email protected]> Fix-creation-mandated-by: Linus Torvalds <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Robert Richter <[email protected]> Cc: [email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
rkchrome pushed a commit that referenced this issue on Dec 4, 2017:
[ Upstream commit 6151b8b ] ppp_release() tries to ensure that netdevices are unregistered before decrementing the unit refcount and running ppp_destroy_interface(). This is all fine as long as the the device is unregistered by ppp_release(): the unregister_netdevice() call, followed by rtnl_unlock(), guarantee that the unregistration process completes before rtnl_unlock() returns. However, the device may be unregistered by other means (like ppp_nl_dellink()). If this happens right before ppp_release() calling rtnl_lock(), then ppp_release() has to wait for the concurrent unregistration code to release the lock. But rtnl_unlock() releases the lock before completing the device unregistration process. This allows ppp_release() to proceed and eventually call ppp_destroy_interface() before the unregistration process completes. Calling free_netdev() on this partially unregistered device will BUG(): ------------[ cut here ]------------ kernel BUG at net/core/dev.c:8141! invalid opcode: 0000 [#1] SMP CPU: 1 PID: 1557 Comm: pppd Not tainted 4.14.0-rc2+ #4 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1.fc26 04/01/2014 Call Trace: ppp_destroy_interface+0xd8/0xe0 [ppp_generic] ppp_disconnect_channel+0xda/0x110 [ppp_generic] ppp_unregister_channel+0x5e/0x110 [ppp_generic] pppox_unbind_sock+0x23/0x30 [pppox] pppoe_connect+0x130/0x440 [pppoe] SYSC_connect+0x98/0x110 ? do_fcntl+0x2c0/0x5d0 SyS_connect+0xe/0x10 entry_SYSCALL_64_fastpath+0x1a/0xa5 RIP: free_netdev+0x107/0x110 RSP: ffffc28a40573d88 ---[ end trace ed294ff0cc40eeff ]--- We could set the ->needs_free_netdev flag on PPP devices and move the ppp_destroy_interface() logic in the ->priv_destructor() callback. But that'd be quite intrusive as we'd first need to unlink from the other channels and units that depend on the device (the ones that used the PPPIOCCONNECT and PPPIOCATTACH ioctls). Instead, we can just let the netdevice hold a reference on its ppp_file. This reference is dropped in ->priv_destructor(), at the very end of the unregistration process, so that neither ppp_release() nor ppp_disconnect_channel() can call ppp_destroy_interface() in the interim. Reported-by: Beniamino Galvani <[email protected]> Fixes: 8cb775b ("ppp: fix device unregistration upon netns deletion") Signed-off-by: Guillaume Nault <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
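The fix is a lifetime rule: the netdevice takes a reference on its ppp_file and drops it only from ->priv_destructor(), i.e. at the very end of unregistration, so the unit cannot be destroyed while a partially unregistered device still points at it. A small refcount model of that arrangement (invented names, not the ppp_generic code):

#include <stdio.h>
#include <stdlib.h>

struct unit {
        int refcnt;
};

static struct unit *unit_get(struct unit *u) { u->refcnt++; return u; }

static void unit_put(struct unit *u)
{
        if (--u->refcnt == 0) {
                printf("unit destroyed\n");
                free(u);
        }
}

struct netdev {
        struct unit *owner;     /* reference held for the device's lifetime */
};

/* Runs at the very end of unregistration (->priv_destructor() in the fix). */
static void netdev_destructor(struct netdev *dev)
{
        unit_put(dev->owner);
        free(dev);
}

int main(void)
{
        struct unit *u = calloc(1, sizeof(*u));
        u->refcnt = 1;                          /* the file's reference */

        struct netdev *dev = calloc(1, sizeof(*dev));
        dev->owner = unit_get(u);               /* the device's reference */

        unit_put(u);                            /* file released early ... */
        netdev_destructor(dev);                 /* ... unit survives until here */
        return 0;
}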
wzyy2 pushed a commit that referenced this issue on Jan 30, 2018:
[ Upstream commit ec4fbd6 ] Dmitry reported a lockdep splat [1] (false positive) that we can fix by releasing the spinlock before calling icmp_send() from ip_expire() This is a false positive because sending an ICMP message can not possibly re-enter the IP frag engine. [1] [ INFO: possible circular locking dependency detected ] 4.10.0+ #29 Not tainted ------------------------------------------------------- modprobe/12392 is trying to acquire lock: (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>] spin_lock include/linux/spinlock.h:299 [inline] (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>] __netif_tx_lock include/linux/netdevice.h:3486 [inline] (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>] sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180 but task is already holding lock: (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] spin_lock include/linux/spinlock.h:299 [inline] (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] ip_expire+0x51/0x6c0 net/ipv4/ip_fragment.c:201 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #1 (&(&q->lock)->rlock){+.-...}: validate_chain kernel/locking/lockdep.c:2267 [inline] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340 lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline] _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151 spin_lock include/linux/spinlock.h:299 [inline] ip_defrag+0x3a2/0x4130 net/ipv4/ip_fragment.c:669 ip_check_defrag+0x4e3/0x8b0 net/ipv4/ip_fragment.c:713 packet_rcv_fanout+0x282/0x800 net/packet/af_packet.c:1459 deliver_skb net/core/dev.c:1834 [inline] dev_queue_xmit_nit+0x294/0xa90 net/core/dev.c:1890 xmit_one net/core/dev.c:2903 [inline] dev_hard_start_xmit+0x16b/0xab0 net/core/dev.c:2923 sch_direct_xmit+0x31f/0x6d0 net/sched/sch_generic.c:182 __dev_xmit_skb net/core/dev.c:3092 [inline] __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358 dev_queue_xmit+0x17/0x20 net/core/dev.c:3423 neigh_resolve_output+0x6b9/0xb10 net/core/neighbour.c:1308 neigh_output include/net/neighbour.h:478 [inline] ip_finish_output2+0x8b8/0x15a0 net/ipv4/ip_output.c:228 ip_do_fragment+0x1d93/0x2720 net/ipv4/ip_output.c:672 ip_fragment.constprop.54+0x145/0x200 net/ipv4/ip_output.c:545 ip_finish_output+0x82d/0xe10 net/ipv4/ip_output.c:314 NF_HOOK_COND include/linux/netfilter.h:246 [inline] ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404 dst_output include/net/dst.h:486 [inline] ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124 ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492 ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512 raw_sendmsg+0x26de/0x3a00 net/ipv4/raw.c:655 inet_sendmsg+0x164/0x5b0 net/ipv4/af_inet.c:761 sock_sendmsg_nosec net/socket.c:633 [inline] sock_sendmsg+0xca/0x110 net/socket.c:643 ___sys_sendmsg+0x4a3/0x9f0 net/socket.c:1985 __sys_sendmmsg+0x25c/0x750 net/socket.c:2075 SYSC_sendmmsg net/socket.c:2106 [inline] SyS_sendmmsg+0x35/0x60 net/socket.c:2101 do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281 return_from_SYSCALL_64+0x0/0x7a -> #0 (_xmit_ETHER#2){+.-...}: check_prev_add kernel/locking/lockdep.c:1830 [inline] check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1940 validate_chain kernel/locking/lockdep.c:2267 [inline] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340 lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline] _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151 spin_lock include/linux/spinlock.h:299 [inline] 
__netif_tx_lock include/linux/netdevice.h:3486 [inline] sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180 __dev_xmit_skb net/core/dev.c:3092 [inline] __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358 dev_queue_xmit+0x17/0x20 net/core/dev.c:3423 neigh_hh_output include/net/neighbour.h:468 [inline] neigh_output include/net/neighbour.h:476 [inline] ip_finish_output2+0xf6c/0x15a0 net/ipv4/ip_output.c:228 ip_finish_output+0xa29/0xe10 net/ipv4/ip_output.c:316 NF_HOOK_COND include/linux/netfilter.h:246 [inline] ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404 dst_output include/net/dst.h:486 [inline] ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124 ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492 ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512 icmp_push_reply+0x372/0x4d0 net/ipv4/icmp.c:394 icmp_send+0x156c/0x1c80 net/ipv4/icmp.c:754 ip_expire+0x40e/0x6c0 net/ipv4/ip_fragment.c:239 call_timer_fn+0x241/0x820 kernel/time/timer.c:1268 expire_timers kernel/time/timer.c:1307 [inline] __run_timers+0x960/0xcf0 kernel/time/timer.c:1601 run_timer_softirq+0x21/0x80 kernel/time/timer.c:1614 __do_softirq+0x31f/0xbe7 kernel/softirq.c:284 invoke_softirq kernel/softirq.c:364 [inline] irq_exit+0x1cc/0x200 kernel/softirq.c:405 exiting_irq arch/x86/include/asm/apic.h:657 [inline] smp_apic_timer_interrupt+0x76/0xa0 arch/x86/kernel/apic/apic.c:962 apic_timer_interrupt+0x93/0xa0 arch/x86/entry/entry_64.S:707 __read_once_size include/linux/compiler.h:254 [inline] atomic_read arch/x86/include/asm/atomic.h:26 [inline] rcu_dynticks_curr_cpu_in_eqs kernel/rcu/tree.c:350 [inline] __rcu_is_watching kernel/rcu/tree.c:1133 [inline] rcu_is_watching+0x83/0x110 kernel/rcu/tree.c:1147 rcu_read_lock_held+0x87/0xc0 kernel/rcu/update.c:293 radix_tree_deref_slot include/linux/radix-tree.h:238 [inline] filemap_map_pages+0x6d4/0x1570 mm/filemap.c:2335 do_fault_around mm/memory.c:3231 [inline] do_read_fault mm/memory.c:3265 [inline] do_fault+0xbd5/0x2080 mm/memory.c:3370 handle_pte_fault mm/memory.c:3600 [inline] __handle_mm_fault+0x1062/0x2cb0 mm/memory.c:3714 handle_mm_fault+0x1e2/0x480 mm/memory.c:3751 __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397 do_page_fault+0x54/0x70 arch/x86/mm/fault.c:1460 page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1011 other info that might help us debug this: Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&(&q->lock)->rlock); lock(_xmit_ETHER#2); lock(&(&q->lock)->rlock); lock(_xmit_ETHER#2); *** DEADLOCK *** 10 locks held by modprobe/12392: #0: (&mm->mmap_sem){++++++}, at: [<ffffffff81329758>] __do_page_fault+0x2b8/0xb60 arch/x86/mm/fault.c:1336 #1: (rcu_read_lock){......}, at: [<ffffffff8188cab6>] filemap_map_pages+0x1e6/0x1570 mm/filemap.c:2324 #2: (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>] spin_lock include/linux/spinlock.h:299 [inline] #2: (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>] pte_alloc_one_map mm/memory.c:2944 [inline] #2: (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>] alloc_set_pte+0x13b8/0x1b90 mm/memory.c:3072 #3: (((&q->timer))){+.-...}, at: [<ffffffff81627e72>] lockdep_copy_map include/linux/lockdep.h:175 [inline] #3: (((&q->timer))){+.-...}, at: [<ffffffff81627e72>] call_timer_fn+0x1c2/0x820 kernel/time/timer.c:1258 #4: (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] spin_lock include/linux/spinlock.h:299 [inline] #4: (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] ip_expire+0x51/0x6c0 net/ipv4/ip_fragment.c:201 #5: (rcu_read_lock){......}, at: [<ffffffff8389a633>] 
ip_expire+0x1b3/0x6c0 net/ipv4/ip_fragment.c:216 #6: (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>] spin_trylock include/linux/spinlock.h:309 [inline] #6: (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>] icmp_xmit_lock net/ipv4/icmp.c:219 [inline] #6: (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>] icmp_send+0x803/0x1c80 net/ipv4/icmp.c:681 #7: (rcu_read_lock_bh){......}, at: [<ffffffff838ab9a1>] ip_finish_output2+0x2c1/0x15a0 net/ipv4/ip_output.c:198 #8: (rcu_read_lock_bh){......}, at: [<ffffffff836d1dee>] __dev_queue_xmit+0x23e/0x1e60 net/core/dev.c:3324 #9: (dev->qdisc_running_key ?: &qdisc_running_key){+.....}, at: [<ffffffff836d3a27>] dev_queue_xmit+0x17/0x20 net/core/dev.c:3423 stack backtrace: CPU: 0 PID: 12392 Comm: modprobe Not tainted 4.10.0+ #29 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: <IRQ> __dump_stack lib/dump_stack.c:16 [inline] dump_stack+0x2ee/0x3ef lib/dump_stack.c:52 print_circular_bug+0x307/0x3b0 kernel/locking/lockdep.c:1204 check_prev_add kernel/locking/lockdep.c:1830 [inline] check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1940 validate_chain kernel/locking/lockdep.c:2267 [inline] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340 lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline] _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151 spin_lock include/linux/spinlock.h:299 [inline] __netif_tx_lock include/linux/netdevice.h:3486 [inline] sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180 __dev_xmit_skb net/core/dev.c:3092 [inline] __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358 dev_queue_xmit+0x17/0x20 net/core/dev.c:3423 neigh_hh_output include/net/neighbour.h:468 [inline] neigh_output include/net/neighbour.h:476 [inline] ip_finish_output2+0xf6c/0x15a0 net/ipv4/ip_output.c:228 ip_finish_output+0xa29/0xe10 net/ipv4/ip_output.c:316 NF_HOOK_COND include/linux/netfilter.h:246 [inline] ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404 dst_output include/net/dst.h:486 [inline] ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124 ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492 ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512 icmp_push_reply+0x372/0x4d0 net/ipv4/icmp.c:394 icmp_send+0x156c/0x1c80 net/ipv4/icmp.c:754 ip_expire+0x40e/0x6c0 net/ipv4/ip_fragment.c:239 call_timer_fn+0x241/0x820 kernel/time/timer.c:1268 expire_timers kernel/time/timer.c:1307 [inline] __run_timers+0x960/0xcf0 kernel/time/timer.c:1601 run_timer_softirq+0x21/0x80 kernel/time/timer.c:1614 __do_softirq+0x31f/0xbe7 kernel/softirq.c:284 invoke_softirq kernel/softirq.c:364 [inline] irq_exit+0x1cc/0x200 kernel/softirq.c:405 exiting_irq arch/x86/include/asm/apic.h:657 [inline] smp_apic_timer_interrupt+0x76/0xa0 arch/x86/kernel/apic/apic.c:962 apic_timer_interrupt+0x93/0xa0 arch/x86/entry/entry_64.S:707 RIP: 0010:__read_once_size include/linux/compiler.h:254 [inline] RIP: 0010:atomic_read arch/x86/include/asm/atomic.h:26 [inline] RIP: 0010:rcu_dynticks_curr_cpu_in_eqs kernel/rcu/tree.c:350 [inline] RIP: 0010:__rcu_is_watching kernel/rcu/tree.c:1133 [inline] RIP: 0010:rcu_is_watching+0x83/0x110 kernel/rcu/tree.c:1147 RSP: 0000:ffff8801c391f120 EFLAGS: 00000a03 ORIG_RAX: ffffffffffffff10 RAX: dffffc0000000000 RBX: ffff8801c391f148 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 000055edd4374000 RDI: ffff8801dbe1ae0c RBP: ffff8801c391f1a0 R08: 0000000000000002 R09: 0000000000000000 R10: dffffc0000000000 R11: 0000000000000002 R12: 1ffff10038723e25 R13: 
ffff8801dbe1ae00 R14: ffff8801c391f680 R15: dffffc0000000000 </IRQ> rcu_read_lock_held+0x87/0xc0 kernel/rcu/update.c:293 radix_tree_deref_slot include/linux/radix-tree.h:238 [inline] filemap_map_pages+0x6d4/0x1570 mm/filemap.c:2335 do_fault_around mm/memory.c:3231 [inline] do_read_fault mm/memory.c:3265 [inline] do_fault+0xbd5/0x2080 mm/memory.c:3370 handle_pte_fault mm/memory.c:3600 [inline] __handle_mm_fault+0x1062/0x2cb0 mm/memory.c:3714 handle_mm_fault+0x1e2/0x480 mm/memory.c:3751 __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397 do_page_fault+0x54/0x70 arch/x86/mm/fault.c:1460 page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1011 RIP: 0033:0x7f83172f2786 RSP: 002b:00007fffe859ae80 EFLAGS: 00010293 RAX: 000055edd4373040 RBX: 00007f83175111c8 RCX: 000055edd4373238 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f8317510970 RBP: 00007fffe859afd0 R08: 0000000000000009 R09: 0000000000000000 R10: 0000000000000064 R11: 0000000000000000 R12: 000055edd4373040 R13: 0000000000000000 R14: 00007fffe859afe8 R15: 0000000000000000 Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Dmitry Vyukov <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Sasha Levin <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
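For context, the fix boils down to finishing the fragment-queue work under q->lock and dropping that lock before handing the packet to icmp_send(), so the timer path never enters the transmit stack while holding the queue lock. A minimal C sketch of that ordering follows; the ipq/inet_frag_queue field names are approximations of the net/ipv4 code, not the literal patch.

/* Sketch: ip_expire()-style timer handler that unlocks before icmp_send(). */
static void frag_expire_sketch(struct ipq *qp)
{
    struct sk_buff *head = NULL;

    spin_lock(&qp->q.lock);
    if (qp->q.flags & INET_FRAG_COMPLETE)
        goto out;

    /* Detach the head skb while still protected by the queue lock. */
    head = qp->q.fragments;
    if (head)
        qp->q.fragments = head->next;
out:
    spin_unlock(&qp->q.lock);

    /* ICMP transmission happens with no fragment-queue lock held. */
    if (head) {
        icmp_send(head, ICMP_TIME_EXCEEDED, ICMP_EXC_FRAGTIME, 0);
        kfree_skb(head);
    }
}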
wzyy2 pushed a commit that referenced this issue on Jan 30, 2018
(cherry picked from commit 3d88d56) Due to how the MONOTONIC_RAW accumulation logic was handled, there is the potential for a 1ns discontinuity when we do accumulations. This small discontinuity has for the most part gone unnoticed, but since ARM64 enabled CLOCK_MONOTONIC_RAW in their vDSO clock_gettime implementation, we've seen failures with the inconsistency-check test in kselftest. This patch addresses the issue by using the same sub-ns accumulation handling that CLOCK_MONOTONIC uses, which avoids the issue for in-kernel users. Since the ARM64 vDSO implementation has its own clock_gettime calculation logic, this patch reduces the frequency of errors, but failures are still seen. The ARM64 vDSO will need to be updated to include the sub-nanosecond xtime_nsec values in its calculation for this issue to be completely fixed. Signed-off-by: John Stultz <[email protected]> Tested-by: Daniel Mentz <[email protected]> Cc: Prarit Bhargava <[email protected]> Cc: Kevin Brodsky <[email protected]> Cc: Richard Cochran <[email protected]> Cc: Stephen Boyd <[email protected]> Cc: Will Deacon <[email protected]> Cc: "stable #4.8+" <[email protected]> Cc: Miroslav Lichvar <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Bug: 20045882 Bug: 63737556 Change-Id: I6c55dd7685f6bd212c6af9d09c527528e1dd5fa1
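As a rough illustration of the sub-nanosecond accumulation the patch refers to (a standalone sketch, not the actual timekeeping code): keep the raw clock's nanoseconds as a shifted fixed-point value so that each accumulation interval carries its fractional remainder forward instead of truncating it to whole nanoseconds.

#include <stdint.h>
#include <stdio.h>

/* Shifted fixed-point accumulator: nanoseconds stored as (ns << shift). */
struct raw_acc {
    uint64_t xtime_nsec;   /* nanoseconds << shift, including sub-ns bits */
    uint64_t sec;
    uint32_t mult;         /* cycles -> shifted-ns multiplier */
    uint32_t shift;
};

static void accumulate(struct raw_acc *a, uint64_t cycles)
{
    uint64_t nsecps = (uint64_t)1000000000 << a->shift;

    a->xtime_nsec += cycles * a->mult;   /* keeps the sub-ns remainder */
    while (a->xtime_nsec >= nsecps) {
        a->xtime_nsec -= nsecps;
        a->sec++;
    }
}

int main(void)
{
    /* Hypothetical 24 MHz clock: mult = round(1e9 * 2^shift / 24e6). */
    struct raw_acc a = { .mult = 699050667, .shift = 24 };

    for (int i = 0; i < 1000; i++)
        accumulate(&a, 24000);           /* 1 ms worth of cycles per step */

    printf("%llu.%09llu\n", (unsigned long long)a.sec,
           (unsigned long long)(a.xtime_nsec >> a.shift));
    return 0;
}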
wzyy2 pushed a commit that referenced this issue on Jan 30, 2018
…erspace instruction We will reach the fixup handler when one thread (say cpu0) caused an undefined exception, while another thread (say cpu1) is unmapping the page. The fixup handler returns to the next userspace instruction after the one which caused the undef exception, rather than going back to the same instruction. The ARM ARM says that after an undefined exception, the PC will be pointing to the next instruction, i.e. a +4 offset in the ARM case and +2 in the Thumb case, and there is no correction offset passed to vector_stub in case of an undef exception. File: arch/arm/kernel/entry-armv.S +1085 vector_stub und, UND_MODE During an undefined exception, in the normal scenario (i.e. when the ldrt instruction does not cause an abort), after restoring the context in VFP hardware, the PC is modified as shown below before jumping to ret_from_exception which is in r9. File: arch/arm/vfp/vfphw.S +169 @ The context stored in the VFP hardware is up to date with this thread vfp_hw_state_valid: tst r1, #FPEXC_EX bne process_exception @ might as well handle the pending @ exception before retrying branch @ out before setting an FPEXC that @ stops us reading stuff VFPFMXR FPEXC, r1 @ Restore FPEXC last sub r2, r2, #4 @ Retry current instruction - if Thumb str r2, [sp, #S_PC] @ mode it's two 16-bit instructions, @ else it's one 32-bit instruction, so @ always subtract 4 from the following @ instruction address. But if ldrt results in an abort, we reach the fixup handler and return to ret_from_exception without correcting the PC. This patch modifies the fixup handler to re-execute the same instruction which caused the undefined exception. Change-Id: Ib4398a85abd0dc1df0c5ecc90cca7e6d02d9d978 Signed-off-by: Vinayak Menon <[email protected]> Signed-off-by: Arun KS <[email protected]> Acked-by: Catalin Marinas <[email protected]> Signed-off-by: Russell King <[email protected]> Signed-off-by: Tao Huang <[email protected]> (cherry picked from commit 3780f7a)
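The idea of the fix, expressed as a hedged C-level sketch rather than the actual entry-code assembly: when the faulting ldrt lands in the exception-table fixup, rewind the saved PC by one instruction so the undefined instruction is retried instead of skipped.

/*
 * Sketch only: the real change lives in the assembly fixup path around
 * arch/arm/vfp/vfphw.S; this just shows the PC correction it performs.
 */
static void vfp_fixup_retry_sketch(struct pt_regs *regs)
{
    /* The undef exception left PC at the *next* instruction. VFP
     * instructions are 32 bits even in Thumb (two 16-bit halfwords),
     * so stepping back one instruction is always -4 here. */
    regs->ARM_pc -= 4;
}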
damluk pushed a commit to damluk/rockchip-kernel that referenced this issue on Mar 31, 2018
commit 1514839 upstream. This patch fixes NULL pointer crash due to active timer running for abort IOCB. From crash dump analysis it was discoverd that get_next_timer_interrupt() encountered a corrupted entry on the timer list. rockchip-linux#9 [ffff95e1f6f0fd40] page_fault at ffffffff914fe8f8 [exception RIP: get_next_timer_interrupt+440] RIP: ffffffff90ea3088 RSP: ffff95e1f6f0fdf0 RFLAGS: 00010013 RAX: ffff95e1f6451028 RBX: 000218e2389e5f40 RCX: 00000001232ad600 RDX: 0000000000000001 RSI: ffff95e1f6f0fdf0 RDI: 0000000001232ad6 RBP: ffff95e1f6f0fe40 R8: ffff95e1f6451188 R9: 0000000000000001 R10: 0000000000000016 R11: 0000000000000016 R12: 00000001232ad5f6 R13: ffff95e1f6450000 R14: ffff95e1f6f0fdf8 R15: ffff95e1f6f0fe10 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 Looking at the assembly of get_next_timer_interrupt(), address came from %r8 (ffff95e1f6451188) which is pointing to list_head with single entry at ffff95e5ff621178. 0xffffffff90ea307a <get_next_timer_interrupt+426>: mov (%r8),%rdx 0xffffffff90ea307d <get_next_timer_interrupt+429>: cmp %r8,%rdx 0xffffffff90ea3080 <get_next_timer_interrupt+432>: je 0xffffffff90ea30a7 <get_next_timer_interrupt+471> 0xffffffff90ea3082 <get_next_timer_interrupt+434>: nopw 0x0(%rax,%rax,1) 0xffffffff90ea3088 <get_next_timer_interrupt+440>: testb $0x1,0x18(%rdx) crash> rd ffff95e1f6451188 10 ffff95e1f6451188: ffff95e5ff621178 ffff95e5ff621178 x.b.....x.b..... ffff95e1f6451198: ffff95e1f6451198 ffff95e1f6451198 ..E.......E..... ffff95e1f64511a8: ffff95e1f64511a8 ffff95e1f64511a8 ..E.......E..... ffff95e1f64511b8: ffff95e77cf509a0 ffff95e77cf509a0 ...|.......|.... ffff95e1f64511c8: ffff95e1f64511c8 ffff95e1f64511c8 ..E.......E..... crash> rd ffff95e5ff621178 10 ffff95e5ff621178: 0000000000000001 ffff95e15936aa00 ..........6Y.... ffff95e5ff621188: 0000000000000000 00000000ffffffff ................ ffff95e5ff621198: 00000000000000a0 0000000000000010 ................ ffff95e5ff6211a8: ffff95e5ff621198 000000000000000c ..b............. ffff95e5ff6211b8: 00000f5800000000 ffff95e751f8d720 ....X... ..Q.... ffff95e5ff621178 belongs to freed mempool object at ffff95e5ff621080. CACHE NAME OBJSIZE ALLOCATED TOTAL SLABS SSIZE ffff95dc7fd74d00 mnt_cache 384 19785 24948 594 16k SLAB MEMORY NODE TOTAL ALLOCATED FREE ffffdc5dabfd8800 ffff95e5ff620000 1 42 29 13 FREE / [ALLOCATED] ffff95e5ff621080 (cpu 6 cache) Examining the contents of that memory reveals a pointer to a constant string in the driver, "abort\0", which is set by qla24xx_async_abort_cmd(). crash> rd ffffffffc059277c 20 ffffffffc059277c: 6e490074726f6261 0074707572726574 abort.Interrupt. ffffffffc059278c: 00676e696c6c6f50 6920726576697244 Polling.Driver i ffffffffc059279c: 646f6d207325206e 6974736554000a65 n %s mode..Testi ffffffffc05927ac: 636976656420676e 786c252074612065 ng device at %lx ffffffffc05927bc: 6b63656843000a2e 646f727020676e69 ...Checking prod ffffffffc05927cc: 6f20444920746375 0a2e706968632066 uct ID of chip.. ffffffffc05927dc: 5120646e756f4600 204130303232414c .Found QLA2200A ffffffffc05927ec: 43000a2e70696843 20676e696b636568 Chip...Checking ffffffffc05927fc: 65786f626c69616d 6c636e69000a2e73 mailboxes...incl ffffffffc059280c: 756e696c2f656475 616d2d616d642f78 ude/linux/dma-ma crash> struct -ox srb_iocb struct srb_iocb { union { struct {...} logio; struct {...} els_logo; struct {...} tmf; struct {...} fxiocb; struct {...} abt; struct ct_arg ctarg; struct {...} mbx; struct {...} nack; [0x0 ] } u; [0xb8] struct timer_list timer; [0x108] void (*timeout)(void *); } SIZE: 0x110 crash> ! 
bc ibase=16 obase=10 B8+40 F8 The object is a srb_t, and at offset 0xf8 within that structure (i.e. ffff95e5ff621080 + f8 -> ffff95e5ff621178) is a struct timer_list. Cc: <[email protected]> rockchip-linux#4.4+ Fixes: 4440e46 ("[SCSI] qla2xxx: Add IOCB Abort command asynchronous handling.") Signed-off-by: Himanshu Madhani <[email protected]> Reviewed-by: Johannes Thumshirn <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
damluk pushed a commit to damluk/rockchip-kernel that referenced this issue on Mar 31, 2018
[ Upstream commit 72d5481 ] It is unlikely request_threaded_irq will fail, but if it does for some reason we should clear iommu->pr_irq in the error path. Also intel_svm_finish_prq shouldn't try to clean up the page request interrupt if pr_irq is 0. Without these, if request_threaded_irq were to fail the following occurs: fail with no fixes: [ 0.683147] ------------[ cut here ]------------ [ 0.683148] NULL pointer, cannot free irq [ 0.683158] WARNING: CPU: 1 PID: 1 at kernel/irq/irqdomain.c:1632 irq_domain_free_irqs+0x126/0x140 [ 0.683160] Modules linked in: [ 0.683163] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2 rockchip-linux#3 [ 0.683165] Hardware name: /NUC7i3BNB, BIOS BNKBL357.86A.0036.2017.0105.1112 01/05/2017 [ 0.683168] RIP: 0010:irq_domain_free_irqs+0x126/0x140 [ 0.683169] RSP: 0000:ffffc90000037ce8 EFLAGS: 00010292 [ 0.683171] RAX: 000000000000001d RBX: ffff880276283c00 RCX: ffffffff81c5e5e8 [ 0.683172] RDX: 0000000000000001 RSI: 0000000000000096 RDI: 0000000000000246 [ 0.683174] RBP: ffff880276283c00 R08: 0000000000000000 R09: 000000000000023c [ 0.683175] R10: 0000000000000007 R11: 0000000000000000 R12: 000000000000007a [ 0.683176] R13: 0000000000000001 R14: 0000000000000000 R15: 0000010010000000 [ 0.683178] FS: 0000000000000000(0000) GS:ffff88027ec80000(0000) knlGS:0000000000000000 [ 0.683180] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 0.683181] CR2: 0000000000000000 CR3: 0000000001c09001 CR4: 00000000003606e0 [ 0.683182] Call Trace: [ 0.683189] intel_svm_finish_prq+0x3c/0x60 [ 0.683191] free_dmar_iommu+0x1ac/0x1b0 [ 0.683195] init_dmars+0xaaa/0xaea [ 0.683200] ? klist_next+0x19/0xc0 [ 0.683203] ? pci_do_find_bus+0x50/0x50 [ 0.683205] ? pci_get_dev_by_id+0x52/0x70 [ 0.683208] intel_iommu_init+0x498/0x5c7 [ 0.683211] pci_iommu_init+0x13/0x3c [ 0.683214] ? e820__memblock_setup+0x61/0x61 [ 0.683217] do_one_initcall+0x4d/0x1a0 [ 0.683220] kernel_init_freeable+0x186/0x20e [ 0.683222] ? set_debug_rodata+0x11/0x11 [ 0.683225] ? 
rest_init+0xb0/0xb0 [ 0.683226] kernel_init+0xa/0xff [ 0.683229] ret_from_fork+0x1f/0x30 [ 0.683259] Code: 89 ee 44 89 e7 e8 3b e8 ff ff 5b 5d 44 89 e7 44 89 ee 41 5c 41 5d 41 5e e9 a8 84 ff ff 48 c7 c7 a8 71 a7 81 31 c0 e8 6a d3 f9 ff <0f> ff 5b 5d 41 5c 41 5d 41 5 e c3 0f 1f 44 00 00 66 2e 0f 1f 84 [ 0.683285] ---[ end trace f7650e42792627ca ]--- with iommu->pr_irq = 0, but no check in intel_svm_finish_prq: [ 0.669561] ------------[ cut here ]------------ [ 0.669563] Trying to free already-free IRQ 0 [ 0.669573] WARNING: CPU: 3 PID: 1 at kernel/irq/manage.c:1546 __free_irq+0xa4/0x2c0 [ 0.669574] Modules linked in: [ 0.669577] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2 rockchip-linux#4 [ 0.669579] Hardware name: /NUC7i3BNB, BIOS BNKBL357.86A.0036.2017.0105.1112 01/05/2017 [ 0.669581] RIP: 0010:__free_irq+0xa4/0x2c0 [ 0.669582] RSP: 0000:ffffc90000037cc0 EFLAGS: 00010082 [ 0.669584] RAX: 0000000000000021 RBX: 0000000000000000 RCX: ffffffff81c5e5e8 [ 0.669585] RDX: 0000000000000001 RSI: 0000000000000086 RDI: 0000000000000046 [ 0.669587] RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000000023c [ 0.669588] R10: 0000000000000007 R11: 0000000000000000 R12: ffff880276253960 [ 0.669589] R13: ffff8802762538a4 R14: ffff880276253800 R15: ffff880276283600 [ 0.669593] FS: 0000000000000000(0000) GS:ffff88027ed80000(0000) knlGS:0000000000000000 [ 0.669594] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 0.669596] CR2: 0000000000000000 CR3: 0000000001c09001 CR4: 00000000003606e0 [ 0.669602] Call Trace: [ 0.669616] free_irq+0x30/0x60 [ 0.669620] intel_svm_finish_prq+0x34/0x60 [ 0.669623] free_dmar_iommu+0x1ac/0x1b0 [ 0.669627] init_dmars+0xaaa/0xaea [ 0.669631] ? klist_next+0x19/0xc0 [ 0.669634] ? pci_do_find_bus+0x50/0x50 [ 0.669637] ? pci_get_dev_by_id+0x52/0x70 [ 0.669639] intel_iommu_init+0x498/0x5c7 [ 0.669642] pci_iommu_init+0x13/0x3c [ 0.669645] ? e820__memblock_setup+0x61/0x61 [ 0.669648] do_one_initcall+0x4d/0x1a0 [ 0.669651] kernel_init_freeable+0x186/0x20e [ 0.669653] ? set_debug_rodata+0x11/0x11 [ 0.669656] ? rest_init+0xb0/0xb0 [ 0.669658] kernel_init+0xa/0xff [ 0.669661] ret_from_fork+0x1f/0x30 [ 0.669662] Code: 7a 08 75 0e e9 c3 01 00 00 4c 39 7b 08 74 57 48 89 da 48 8b 5a 18 48 85 db 75 ee 89 ee 48 c7 c7 78 67 a7 81 31 c0 e8 4c 37 fa ff <0f> ff 48 8b 34 24 4c 89 ef e 8 0e 4c 68 00 49 8b 46 40 48 8b 80 [ 0.669688] ---[ end trace 58a470248700f2fc ]--- Cc: Alex Williamson <[email protected]> Cc: Joerg Roedel <[email protected]> Cc: Ashok Raj <[email protected]> Signed-off-by: Jerry Snitselaar <[email protected]> Reviewed-by: Ashok Raj <[email protected]> Signed-off-by: Alex Williamson <[email protected]> Signed-off-by: Sasha Levin <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
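A hedged sketch of the two guards the commit describes (names follow the commit text; the real intel-svm code may differ in detail): clear iommu->pr_irq when request_threaded_irq() fails, and skip the teardown when pr_irq was never set up.

/* Setup path: don't leave a stale IRQ number behind on failure. */
ret = request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT,
                           iommu->prq_name, iommu);
if (ret) {
    pr_err("IOMMU: %s: failed to request IRQ for page request queue\n",
           iommu->name);
    dmar_free_hwirq(irq);
    iommu->pr_irq = 0;
    return ret;
}

/* Teardown path: only free the IRQ if it was actually set up. */
if (iommu->pr_irq) {
    free_irq(iommu->pr_irq, iommu);
    dmar_free_hwirq(iommu->pr_irq);
    iommu->pr_irq = 0;
}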
rkchrome pushed a commit that referenced this issue on Apr 10, 2018
According to the dwc2 programmer's guide v3.10a, in '2.1.3.2 Dedicated FIFO Mode with No Thresholding', it suggested that: Device RxFIFO = - Scatter/Gather DMA mode: (4 * number of control endpoints + 6) + ((largest USB packet used / 4) + 1 for status information) + (2 * number of OUT endpoints) + 1 for Global NAK on rockchip platforms: (4 * 1 + 6) + ((1024 / 4) + 1) + (2 * 6) + 1 = 280 - Slave or Buffer DMA mode: (5 * number of control endpoints + 8) + ((largest USB packet used / 4) + 1 for status information) + (2 * number of OUT endpoints) + 1 for Global NAK on rockchip platforms: (5 * 1 + 8) + ((1024 / 4) + 1) + (2 * 6) + 1 = 283 Device IN Endpoint TxFIFO = The TxFIFO must equal at least one MaxPacketSize (MPS). In addition to RxFIFO and TxFIFOs, refer to dwc2 databook v3.10a, 'Figure 2-13 Device Mode FIFO Address Mapping and AHB FIFO Access Mapping (Dedicated FIFO)', it required that when the device is operating in non Scatter Gather Internal DMA mode, the last locations of the SPRAM are used to store the DMAADDR values for each Endpoint (1 location per endpoint). When the device is operating in Scatter Gather mode, then the last locations of the SPRAM store the Base Descriptor address, Current Descriptor address, Current Buffer address, and status quadlet information for each endpoint direction (4 locations per Endpoint). If an Endpoint is bidirectional , then 4 locations will be used for IN, and another 4 for OUT). Considering that the total FIFO size of dwc2 otg is 0x3cc (972), and we must reserve (4 * 13) = 52 locations for all Endpoints. So reconfig dwc2 device fifo size as follows: Device RxFIFO = 280 Device IN Endpoint TxFIFO - FIFO #0 = (64 / 4) = 16 (Assuming this is used for EP0) - FIFO #1 = (1024/4) = 256 (Assuming this is used for Isochronous) - FIFO #2 = (512/4) = 128 - FIFO #3 = (512/4) = 128 - FIFO #4 = (256/4) = 64 - FIFO #5 = (128/4) = 32 - FIFO #6 = (64/4) = 16 After reconfig the dwc2 device fifo size, test mtp write on rockchip platform (PC -> rockchip platform) on rk312x/rk3326/px30/rk3288 evb, when mask the 'vfs_write' in f_mtp.c, the writing data rate can be increased from 16MBps ~ 20MBps to 30MBps ~ 36MBps on different kinds of rockchip evbs. Change-Id: Icdf8a5dd95f96d174233e4ffc765c9a982b9f0b6 Signed-off-by: William Wu <[email protected]>
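The arithmetic in the commit can be sanity-checked with a few lines of standalone C (a sketch for verification, not driver code): the RxFIFO formulas, the per-endpoint TxFIFO sizes, and the 4-words-per-endpoint SPRAM reservation must all fit inside the 0x3cc-word total.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* Scatter/Gather DMA RxFIFO: (4*ctrl + 6) + (max_pkt/4 + 1) + 2*out + 1 */
    int rx_sgdma = (4 * 1 + 6) + (1024 / 4 + 1) + (2 * 6) + 1;   /* = 280 */
    /* Slave/Buffer DMA RxFIFO:  (5*ctrl + 8) + (max_pkt/4 + 1) + 2*out + 1 */
    int rx_slave = (5 * 1 + 8) + (1024 / 4 + 1) + (2 * 6) + 1;   /* = 283 */

    int tx[] = { 64 / 4, 1024 / 4, 512 / 4, 512 / 4, 256 / 4, 128 / 4, 64 / 4 };
    int total = rx_sgdma;
    int reserved = 4 * 13;              /* 4 SPRAM words per endpoint, 13 EPs */

    for (unsigned i = 0; i < sizeof(tx) / sizeof(tx[0]); i++)
        total += tx[i];

    printf("rx=%d/%d rx+tx=%d reserved=%d budget=%d\n",
           rx_sgdma, rx_slave, total, reserved, 0x3cc);
    assert(rx_sgdma == 280 && rx_slave == 283);
    assert(total + reserved <= 0x3cc);  /* 280 + 640 + 52 = 972 = 0x3cc */
    return 0;
}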
rkchrome pushed a commit that referenced this issue on Apr 10, 2018
According to the dwc2 programmer's guide v3.10a, in '2.1.3.2 Dedicated FIFO Mode with No Thresholding', it suggested that: Device RxFIFO = - Scatter/Gather DMA mode: (4 * number of control endpoints + 6) + ((largest USB packet used / 4) + 1 for status information) + (2 * number of OUT endpoints) + 1 for Global NAK on rockchip platforms: (4 * 1 + 6) + ((1024 / 4) + 1) + (2 * 6) + 1 = 280 - Slave or Buffer DMA mode: (5 * number of control endpoints + 8) + ((largest USB packet used / 4) + 1 for status information) + (2 * number of OUT endpoints) + 1 for Global NAK on rockchip platforms: (5 * 1 + 8) + ((1024 / 4) + 1) + (2 * 6) + 1 = 283 Device IN Endpoint TxFIFO = The TxFIFO must equal at least one MaxPacketSize (MPS). In addition to RxFIFO and TxFIFOs, refer to dwc2 databook v3.10a, 'Figure 2-13 Device Mode FIFO Address Mapping and AHB FIFO Access Mapping (Dedicated FIFO)', it required that when the device is operating in non Scatter Gather Internal DMA mode, the last locations of the SPRAM are used to store the DMAADDR values for each Endpoint (1 location per endpoint). When the device is operating in Scatter Gather mode, then the last locations of the SPRAM store the Base Descriptor address, Current Descriptor address, Current Buffer address, and status quadlet information for each endpoint direction (4 locations per Endpoint). If an Endpoint is bidirectional , then 4 locations will be used for IN, and another 4 for OUT). Considering that the total FIFO size of dwc2 otg is 0x3cc (972), and we must reserve (4 * 13) = 52 locations for all Endpoints. So reconfig dwc2 device fifo size as follows: Device RxFIFO = 280 Device IN Endpoint TxFIFO - FIFO #0 = (64 / 4) = 16 (Assuming this is used for EP0) - FIFO #1 = (1024/4) = 256 (Assuming this is used for Isochronous) - FIFO #2 = (512/4) = 128 - FIFO #3 = (512/4) = 128 - FIFO #4 = (256/4) = 64 - FIFO #5 = (128/4) = 32 - FIFO #6 = (64/4) = 16 After reconfig the dwc2 device fifo size, test mtp write on rockchip platform (PC -> rockchip platform) on rk312x/rk3326/px30/rk3288 evb, when mask the 'vfs_write' in f_mtp.c, the writing data rate can be increased from 16MBps ~ 20MBps to 30MBps ~ 36MBps on different kinds of rockchip evbs. Change-Id: I52c64a279523c811f706e69e427b0a6e8c45683b Signed-off-by: William Wu <[email protected]>
rkchrome pushed a commit that referenced this issue on Jun 11, 2018
[ Upstream commit d754941 ] If, for any reason, userland shuts down iscsi transport interfaces before proper logouts - like when logging in to LUNs manually, without logging out on server shutdown, or when automated scripts can't umount/logout from logged LUNs - kernel will hang forever on its sd_sync_cache() logic, after issuing the SYNCHRONIZE_CACHE cmd to all still existent paths. PID: 1 TASK: ffff8801a69b8000 CPU: 1 COMMAND: "systemd-shutdow" #0 [ffff8801a69c3a30] __schedule at ffffffff8183e9ee #1 [ffff8801a69c3a80] schedule at ffffffff8183f0d5 #2 [ffff8801a69c3a98] schedule_timeout at ffffffff81842199 #3 [ffff8801a69c3b40] io_schedule_timeout at ffffffff8183e604 #4 [ffff8801a69c3b70] wait_for_completion_io_timeout at ffffffff8183fc6c #5 [ffff8801a69c3bd0] blk_execute_rq at ffffffff813cfe10 #6 [ffff8801a69c3c88] scsi_execute at ffffffff815c3fc7 #7 [ffff8801a69c3cc8] scsi_execute_req_flags at ffffffff815c60fe #8 [ffff8801a69c3d30] sd_sync_cache at ffffffff815d37d7 #9 [ffff8801a69c3da8] sd_shutdown at ffffffff815d3c3c This happens because iscsi_eh_cmd_timed_out(), the transport layer timeout helper, would tell the queue timeout function (scsi_times_out) to reset the request timer over and over, until the session state is back to logged in state. Unfortunately, during server shutdown, this might never happen again. Other option would be "not to handle" the issue in the transport layer. That would trigger the error handler logic, which would also need the session state to be logged in again. Best option, for such case, is to tell upper layers that the command was handled during the transport layer error handler helper, marking it as DID_NO_CONNECT, which will allow completion and inform about the problem. After the session was marked as ISCSI_STATE_FAILED, due to the first timeout during the server shutdown phase, all subsequent cmds will fail to be queued, allowing upper logic to fail faster. Signed-off-by: Rafael David Tinoco <[email protected]> Reviewed-by: Lee Duncan <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]> Signed-off-by: Sasha Levin <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
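A hedged sketch of the behaviour the commit describes for the transport timeout helper (simplified; the real iscsi_eh_cmd_timed_out() handles more states): once the session is no longer logged in during shutdown, complete the command as DID_NO_CONNECT instead of rearming the request timer forever.

/* Sketch: deciding what to do when a SCSI command times out on iSCSI. */
static enum blk_eh_timer_return iscsi_timed_out_sketch(struct scsi_cmnd *sc,
                                                       struct iscsi_session *session)
{
    if (session->state != ISCSI_STATE_LOGGED_IN) {
        /*
         * Session is failed/terminating (e.g. shutdown without logout):
         * don't keep resetting the timer, fail the command upward so
         * sd_sync_cache() and friends can complete.
         */
        sc->result = DID_NO_CONNECT << 16;
        return BLK_EH_HANDLED;
    }

    /* Session still logged in: give recovery a chance, rearm the timer. */
    return BLK_EH_RESET_TIMER;
}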
friendlyarm referenced this issue in friendlyarm/kernel-rockchip on Aug 31, 2020
commit 2d9a2c5 upstream. Before v4.15 commit 75492a5 ("s390/scsi: Convert timers to use timer_setup()"), we intentionally only passed zfcp_adapter as context argument to zfcp_fsf_request_timeout_handler(). Since we only trigger adapter recovery, it was unnecessary to sync against races between timeout and (late) completion. Likewise, we only passed zfcp_erp_action as context argument to zfcp_erp_timeout_handler(). Since we only wakeup an ERP action, it was unnecessary to sync against races between timeout and (late) completion. Meanwhile the timeout handlers get timer_list as context argument and do a timer-specific container-of to zfcp_fsf_req which can have been freed. Fix it by making sure that any request timeout handlers, that might just have started before del_timer(), are completed by using del_timer_sync() instead. This ensures the request free happens afterwards. Space time diagram of potential use-after-free: Basic idea is to have 2 or more pending requests whose timeouts run out at almost the same time. req 1 timeout ERP thread req 2 timeout ---------------- ---------------- --------------------------------------- zfcp_fsf_request_timeout_handler fsf_req = from_timer(fsf_req, t, timer) adapter = fsf_req->adapter zfcp_qdio_siosl(adapter) zfcp_erp_adapter_reopen(adapter,...) zfcp_erp_strategy ... zfcp_fsf_req_dismiss_all list_for_each_entry_safe zfcp_fsf_req_complete 1 del_timer 1 zfcp_fsf_req_free 1 zfcp_fsf_req_complete 2 zfcp_fsf_request_timeout_handler del_timer 2 fsf_req = from_timer(fsf_req, t, timer) zfcp_fsf_req_free 2 adapter = fsf_req->adapter ^^^^^^^ already freed Link: https://lore.kernel.org/r/[email protected] Fixes: 75492a5 ("s390/scsi: Convert timers to use timer_setup()") Cc: <[email protected]> #4.15+ Suggested-by: Julian Wiedmann <[email protected]> Reviewed-by: Julian Wiedmann <[email protected]> Signed-off-by: Steffen Maier <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
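The core of the fix is the classic del_timer() vs del_timer_sync() distinction; a minimal hedged sketch of the safe teardown order (request and helper names are illustrative):

/* Sketch: make sure a possibly-running timeout handler has finished
 * before the request it dereferences is freed. */
static void dismiss_and_free_sketch(struct zfcp_fsf_req *req)
{
    /*
     * del_timer() only removes a pending timer; a handler that already
     * started on another CPU may still be using req. del_timer_sync()
     * additionally waits for such a handler to complete.
     */
    del_timer_sync(&req->timer);
    zfcp_fsf_req_free(req);   /* now safe: no timeout handler can race */
}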
AaronDewes pushed a commit to AaronDewes/kernel that referenced this issue on Sep 29, 2020
[ Upstream commit 8a39e8c ] When compiling with DEBUG=1 on Fedora 32 I'm getting crash for 'perf test signal': Program received signal SIGSEGV, Segmentation fault. 0x0000000000c68548 in __test_function () (gdb) bt #0 0x0000000000c68548 in __test_function () rockchip-linux#1 0x00000000004d62e9 in test_function () at tests/bp_signal.c:61 rockchip-linux#2 0x00000000004d689a in test__bp_signal (test=0xa8e280 <generic_ ... rockchip-linux#3 0x00000000004b7d49 in run_test (test=0xa8e280 <generic_tests+1 ... rockchip-linux#4 0x00000000004b7e7f in test_and_print (t=0xa8e280 <generic_test ... rockchip-linux#5 0x00000000004b8927 in __cmd_test (argc=1, argv=0x7fffffffdce0, ... ... It's caused by the symbol __test_function being in the ".bss" section: $ readelf -a ./perf | less [Nr] Name Type Address Offset Size EntSize Flags Link Info Align ... [28] .bss NOBITS 0000000000c356a0 008346a0 00000000000511f8 0000000000000000 WA 0 0 32 $ nm perf | grep __test_function 0000000000c68548 B __test_function I guess most of the time we're just lucky the inline asm ended up in the ".text" section, so making it specific explicit with push and pop section clauses. $ readelf -a ./perf | less [Nr] Name Type Address Offset Size EntSize Flags Link Info Align ... [13] .text PROGBITS 0000000000431240 00031240 0000000000306faa 0000000000000000 AX 0 0 16 $ nm perf | grep __test_function 00000000004d62c8 T __test_function Committer testing: $ readelf -wi ~/bin/perf | grep producer -m1 <c> DW_AT_producer : (indirect string, offset: 0x254a): GNU C99 10.2.1 20200723 (Red Hat 10.2.1-1) -mtune=generic -march=x86-64 -ggdb3 -std=gnu99 -fno-omit-frame-pointer -funwind-tables -fstack-protector-all ^^^^^ ^^^^^ ^^^^^ $ Before: $ perf test signal 20: Breakpoint overflow signal handler : FAILED! $ After: $ perf test signal 20: Breakpoint overflow signal handler : Ok $ Fixes: 8fd34e1 ("perf test: Improve bp_signal") Signed-off-by: Jiri Olsa <[email protected]> Tested-by: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Michael Petlan <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Wang Nan <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
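A hedged sketch of the kind of change described (the real fix is in tools/perf/tests/bp_signal.c): wrap the asm-defined test symbol in explicit .pushsection/.popsection so it always lands in .text rather than whatever section the compiler happens to be emitting at that point.

extern void __test_function(volatile long *ptr);

/* Force the asm-defined symbol into .text explicitly (x86-64 example). */
asm (
    ".pushsection .text, \"ax\", @progbits\n"
    ".globl __test_function\n"
    "__test_function:\n"
    "   incq (%rdi)\n"
    "   ret\n"
    ".popsection\n"
);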
AaronDewes pushed a commit to AaronDewes/kernel that referenced this issue on Sep 29, 2020
[ Upstream commit d26383d ] The following leaks were detected by ASAN: Indirect leak of 360 byte(s) in 9 object(s) allocated from: #0 0x7fecc305180e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10780e) rockchip-linux#1 0x560578f6dce5 in perf_pmu__new_format util/pmu.c:1333 rockchip-linux#2 0x560578f752fc in perf_pmu_parse util/pmu.y:59 rockchip-linux#3 0x560578f6a8b7 in perf_pmu__format_parse util/pmu.c:73 rockchip-linux#4 0x560578e07045 in test__pmu tests/pmu.c:155 rockchip-linux#5 0x560578de109b in run_test tests/builtin-test.c:410 rockchip-linux#6 0x560578de109b in test_and_print tests/builtin-test.c:440 rockchip-linux#7 0x560578de401a in __cmd_test tests/builtin-test.c:661 rockchip-linux#8 0x560578de401a in cmd_test tests/builtin-test.c:807 rockchip-linux#9 0x560578e49354 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:312 rockchip-linux#10 0x560578ce71a8 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:364 rockchip-linux#11 0x560578ce71a8 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:408 rockchip-linux#12 0x560578ce71a8 in main /home/namhyung/project/linux/tools/perf/perf.c:538 rockchip-linux#13 0x7fecc2b7acc9 in __libc_start_main ../csu/libc-start.c:308 Fixes: cff7f95 ("perf tests: Move pmu tests into separate object") Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
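A hedged sketch of the cleanup this kind of leak calls for: after the test has exercised perf_pmu__format_parse(), walk the format list and free each entry. The helper name and struct layout here are assumptions for illustration, not necessarily the actual tools/perf fix.

#include <linux/list.h>
#include <stdlib.h>

struct pmu_format_sketch {
    char *name;
    struct list_head list;
};

/* Free every format entry parsed into 'formats' during the test. */
static void free_formats_sketch(struct list_head *formats)
{
    struct pmu_format_sketch *fmt, *tmp;

    list_for_each_entry_safe(fmt, tmp, formats, list) {
        list_del(&fmt->list);
        free(fmt->name);
        free(fmt);
    }
}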
rkchrome pushed a commit that referenced this issue on Oct 10, 2020
[ Upstream commit 266150c ] Realloc of size zero is a free not an error, avoid this causing a double free. Caught by clang's address sanitizer: ==2634==ERROR: AddressSanitizer: attempting double-free on 0x6020000015f0 in thread T0: #0 0x5649659297fd in free llvm/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:123:3 #1 0x5649659e9251 in __zfree tools/lib/zalloc.c:13:2 #2 0x564965c0f92c in mem2node__exit tools/perf/util/mem2node.c:114:2 #3 0x564965a08b4c in perf_c2c__report tools/perf/builtin-c2c.c:2867:2 #4 0x564965a0616a in cmd_c2c tools/perf/builtin-c2c.c:2989:10 #5 0x564965944348 in run_builtin tools/perf/perf.c:312:11 #6 0x564965943235 in handle_internal_command tools/perf/perf.c:364:8 #7 0x5649659440c4 in run_argv tools/perf/perf.c:408:2 #8 0x564965942e41 in main tools/perf/perf.c:538:3 0x6020000015f0 is located 0 bytes inside of 1-byte region [0x6020000015f0,0x6020000015f1) freed by thread T0 here: #0 0x564965929da3 in realloc third_party/llvm/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:164:3 #1 0x564965c0f55e in mem2node__init tools/perf/util/mem2node.c:97:16 #2 0x564965a08956 in perf_c2c__report tools/perf/builtin-c2c.c:2803:8 #3 0x564965a0616a in cmd_c2c tools/perf/builtin-c2c.c:2989:10 #4 0x564965944348 in run_builtin tools/perf/perf.c:312:11 #5 0x564965943235 in handle_internal_command tools/perf/perf.c:364:8 #6 0x5649659440c4 in run_argv tools/perf/perf.c:408:2 #7 0x564965942e41 in main tools/perf/perf.c:538:3 previously allocated by thread T0 here: #0 0x564965929c42 in calloc third_party/llvm/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cpp:154:3 #1 0x5649659e9220 in zalloc tools/lib/zalloc.c:8:9 #2 0x564965c0f32d in mem2node__init tools/perf/util/mem2node.c:61:12 #3 0x564965a08956 in perf_c2c__report tools/perf/builtin-c2c.c:2803:8 #4 0x564965a0616a in cmd_c2c tools/perf/builtin-c2c.c:2989:10 #5 0x564965944348 in run_builtin tools/perf/perf.c:312:11 #6 0x564965943235 in handle_internal_command tools/perf/perf.c:364:8 #7 0x5649659440c4 in run_argv tools/perf/perf.c:408:2 #8 0x564965942e41 in main tools/perf/perf.c:538:3 v2: add a WARN_ON_ONCE when the free condition arises. Signed-off-by: Ian Rogers <[email protected]> Acked-by: Jiri Olsa <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
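The hazard is easy to reproduce in plain C; a small hedged sketch of the pattern the fix guards against: realloc(ptr, 0) may free ptr and return NULL, so treating that NULL as an allocation failure and freeing the old pointer again is a double free.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 0;                    /* e.g. no memory nodes matched */
    char *buf = calloc(16, 1);

    char *tmp = realloc(buf, n);     /* n == 0: buf may already be freed */
    if (!tmp && n != 0) {
        /* Only a real failure when we actually asked for memory. */
        free(buf);
        return 1;
    }
    buf = tmp;                       /* NULL here simply means "empty" */

    printf("resized to %zu bytes\n", n);
    free(buf);                       /* free(NULL) is a no-op */
    return 0;
}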
rkchrome pushed a commit that referenced this issue on Nov 3, 2020
/tmp/rtl8703b_phycfg-53954c.s: Assembler messages: /tmp/rtl8703b_phycfg-53954c.s:3071: Error: selected processor does not support `bfc w0,#4,#4' Signed-off-by: Wu Liangqing <[email protected]> Change-Id: Ia24c316c385b7b0bea2c1cb2a0d639cf0a0b17d9
rkchrome pushed a commit that referenced this issue on Nov 3, 2020
Our static-static calculation returns a failure if the public key is of low order. We check for this when peers are added, and don't allow them to be added if they're low order, except in the case where we haven't yet been given a private key. In that case, we would defer the removal of the peer until we're given a private key, since at that point we're doing new static-static calculations which incur failures we can act on. This meant, however, that we wound up removing peers rather late in the configuration flow. Syzkaller points out that peer_remove calls flush_workqueue, which in turn might then wait for sending a handshake initiation to complete. Since handshake initiation needs the static identity lock, holding the static identity lock while calling peer_remove can result in a rare deadlock. We have precisely this case in this situation of late-stage peer removal based on an invalid public key. We can't drop the lock when removing, because then incoming handshakes might interact with a bogus static-static calculation. While the band-aid patch for this would involve breaking up the peer removal into two steps like wg_peer_remove_all does, in order to solve the locking issue, there's actually a much more elegant way of fixing this: If the static-static calculation succeeds with one private key, it *must* succeed with all others, because all 32-byte strings map to valid private keys, thanks to clamping. That means we can get rid of this silly dance and locking headaches of removing peers late in the configuration flow, and instead just reject them early on, regardless of whether the device has yet been assigned a private key. For the case where the device doesn't yet have a private key, we safely use zeros just for the purposes of checking for low order points by way of checking the output of the calculation. The following PoC will trigger the deadlock: ip link add wg0 type wireguard ip addr add 10.0.0.1/24 dev wg0 ip link set wg0 up ping -f 10.0.0.2 & while true; do wg set wg0 private-key /dev/null peer AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= allowed-ips 10.0.0.0/24 endpoint 10.0.0.3:1234 wg set wg0 private-key <(echo AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=) done [ 0.949105] ====================================================== [ 0.949550] WARNING: possible circular locking dependency detected [ 0.950143] 5.5.0-debug+ #18 Not tainted [ 0.950431] ------------------------------------------------------ [ 0.950959] wg/89 is trying to acquire lock: [ 0.951252] ffff8880333e2128 ((wq_completion)wg-kex-wg0){+.+.}, at: flush_workqueue+0xe3/0x12f0 [ 0.951865] [ 0.951865] but task is already holding lock: [ 0.952280] ffff888032819bc0 (&wg->static_identity.lock){++++}, at: wg_set_device+0x95d/0xcc0 [ 0.953011] [ 0.953011] which lock already depends on the new lock. 
[ 0.953011] [ 0.953651] [ 0.953651] the existing dependency chain (in reverse order) is: [ 0.954292] [ 0.954292] -> #2 (&wg->static_identity.lock){++++}: [ 0.954804] lock_acquire+0x127/0x350 [ 0.955133] down_read+0x83/0x410 [ 0.955428] wg_noise_handshake_create_initiation+0x97/0x700 [ 0.955885] wg_packet_send_handshake_initiation+0x13a/0x280 [ 0.956401] wg_packet_handshake_send_worker+0x10/0x20 [ 0.956841] process_one_work+0x806/0x1500 [ 0.957167] worker_thread+0x8c/0xcb0 [ 0.957549] kthread+0x2ee/0x3b0 [ 0.957792] ret_from_fork+0x24/0x30 [ 0.958234] [ 0.958234] -> #1 ((work_completion)(&peer->transmit_handshake_work)){+.+.}: [ 0.958808] lock_acquire+0x127/0x350 [ 0.959075] process_one_work+0x7ab/0x1500 [ 0.959369] worker_thread+0x8c/0xcb0 [ 0.959639] kthread+0x2ee/0x3b0 [ 0.959896] ret_from_fork+0x24/0x30 [ 0.960346] [ 0.960346] -> #0 ((wq_completion)wg-kex-wg0){+.+.}: [ 0.960945] check_prev_add+0x167/0x1e20 [ 0.961351] __lock_acquire+0x2012/0x3170 [ 0.961725] lock_acquire+0x127/0x350 [ 0.961990] flush_workqueue+0x106/0x12f0 [ 0.962280] peer_remove_after_dead+0x160/0x220 [ 0.962600] wg_set_device+0xa24/0xcc0 [ 0.962994] genl_rcv_msg+0x52f/0xe90 [ 0.963298] netlink_rcv_skb+0x111/0x320 [ 0.963618] genl_rcv+0x1f/0x30 [ 0.963853] netlink_unicast+0x3f6/0x610 [ 0.964245] netlink_sendmsg+0x700/0xb80 [ 0.964586] __sys_sendto+0x1dd/0x2c0 [ 0.964854] __x64_sys_sendto+0xd8/0x1b0 [ 0.965141] do_syscall_64+0x90/0xd9a [ 0.965408] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 0.965769] [ 0.965769] other info that might help us debug this: [ 0.965769] [ 0.966337] Chain exists of: [ 0.966337] (wq_completion)wg-kex-wg0 --> (work_completion)(&peer->transmit_handshake_work) --> &wg->static_identity.lock [ 0.966337] [ 0.967417] Possible unsafe locking scenario: [ 0.967417] [ 0.967836] CPU0 CPU1 [ 0.968155] ---- ---- [ 0.968497] lock(&wg->static_identity.lock); [ 0.968779] lock((work_completion)(&peer->transmit_handshake_work)); [ 0.969345] lock(&wg->static_identity.lock); [ 0.969809] lock((wq_completion)wg-kex-wg0); [ 0.970146] [ 0.970146] *** DEADLOCK *** [ 0.970146] [ 0.970531] 5 locks held by wg/89: [ 0.970908] #0: ffffffff827433c8 (cb_lock){++++}, at: genl_rcv+0x10/0x30 [ 0.971400] #1: ffffffff82743480 (genl_mutex){+.+.}, at: genl_rcv_msg+0x642/0xe90 [ 0.971924] #2: ffffffff827160c0 (rtnl_mutex){+.+.}, at: wg_set_device+0x9f/0xcc0 [ 0.972488] #3: ffff888032819de0 (&wg->device_update_lock){+.+.}, at: wg_set_device+0xb0/0xcc0 [ 0.973095] #4: ffff888032819bc0 (&wg->static_identity.lock){++++}, at: wg_set_device+0x95d/0xcc0 [ 0.973653] [ 0.973653] stack backtrace: [ 0.973932] CPU: 1 PID: 89 Comm: wg Not tainted 5.5.0-debug+ #18 [ 0.974476] Call Trace: [ 0.974638] dump_stack+0x97/0xe0 [ 0.974869] check_noncircular+0x312/0x3e0 [ 0.975132] ? print_circular_bug+0x1f0/0x1f0 [ 0.975410] ? __kernel_text_address+0x9/0x30 [ 0.975727] ? unwind_get_return_address+0x51/0x90 [ 0.976024] check_prev_add+0x167/0x1e20 [ 0.976367] ? graph_lock+0x70/0x160 [ 0.976682] __lock_acquire+0x2012/0x3170 [ 0.976998] ? register_lock_class+0x1140/0x1140 [ 0.977323] lock_acquire+0x127/0x350 [ 0.977627] ? flush_workqueue+0xe3/0x12f0 [ 0.977890] flush_workqueue+0x106/0x12f0 [ 0.978147] ? flush_workqueue+0xe3/0x12f0 [ 0.978410] ? find_held_lock+0x2c/0x110 [ 0.978662] ? lock_downgrade+0x6e0/0x6e0 [ 0.978919] ? queue_rcu_work+0x60/0x60 [ 0.979166] ? netif_napi_del+0x151/0x3b0 [ 0.979501] ? peer_remove_after_dead+0x160/0x220 [ 0.979871] peer_remove_after_dead+0x160/0x220 [ 0.980232] wg_set_device+0xa24/0xcc0 [ 0.980516] ? 
deref_stack_reg+0x8e/0xc0 [ 0.980801] ? set_peer+0xe10/0xe10 [ 0.981040] ? __ww_mutex_check_waiters+0x150/0x150 [ 0.981430] ? __nla_validate_parse+0x163/0x270 [ 0.981719] ? genl_family_rcv_msg_attrs_parse+0x13f/0x310 [ 0.982078] genl_rcv_msg+0x52f/0xe90 [ 0.982348] ? genl_family_rcv_msg_attrs_parse+0x310/0x310 [ 0.982690] ? register_lock_class+0x1140/0x1140 [ 0.983049] netlink_rcv_skb+0x111/0x320 [ 0.983298] ? genl_family_rcv_msg_attrs_parse+0x310/0x310 [ 0.983645] ? netlink_ack+0x880/0x880 [ 0.983888] genl_rcv+0x1f/0x30 [ 0.984168] netlink_unicast+0x3f6/0x610 [ 0.984443] ? netlink_detachskb+0x60/0x60 [ 0.984729] ? find_held_lock+0x2c/0x110 [ 0.984976] netlink_sendmsg+0x700/0xb80 [ 0.985220] ? netlink_broadcast_filtered+0xa60/0xa60 [ 0.985533] __sys_sendto+0x1dd/0x2c0 [ 0.985763] ? __x64_sys_getpeername+0xb0/0xb0 [ 0.986039] ? sockfd_lookup_light+0x17/0x160 [ 0.986397] ? __sys_recvmsg+0x8c/0xf0 [ 0.986711] ? __sys_recvmsg_sock+0xd0/0xd0 [ 0.987018] __x64_sys_sendto+0xd8/0x1b0 [ 0.987283] ? lockdep_hardirqs_on+0x39b/0x5a0 [ 0.987666] do_syscall_64+0x90/0xd9a [ 0.987903] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 0.988223] RIP: 0033:0x7fe77c12003e [ 0.988508] Code: c3 8b 07 85 c0 75 24 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 4 [ 0.989666] RSP: 002b:00007fffada2ed58 EFLAGS: 00000246 ORIG_RAX: 000000000000002c [ 0.990137] RAX: ffffffffffffffda RBX: 00007fe77c159d48 RCX: 00007fe77c12003e [ 0.990583] RDX: 0000000000000040 RSI: 000055fd1d38e020 RDI: 0000000000000004 [ 0.991091] RBP: 000055fd1d38e020 R08: 000055fd1cb63358 R09: 000000000000000c [ 0.991568] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000002c [ 0.992014] R13: 0000000000000004 R14: 000055fd1d38e020 R15: 0000000000000001 Signed-off-by: Jason A. Donenfeld <[email protected]> Reported-by: syzbot <[email protected]> Signed-off-by: David S. Miller <[email protected]> (cherry picked from commit ec31c26) Bug: 152722841 Signed-off-by: Jason A. Donenfeld <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> Change-Id: I860bfac72c98c8c9b26f4490b4f346dc67892f87
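A hedged sketch of the early check the commit describes (simplified; the real code uses the kernel's curve25519 helpers and WireGuard's own data structures): compute the static-static shared secret when the peer is added, using an all-zero private key if none is configured yet, and reject the peer if the result is the all-zero point.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NOISE_KEY_LEN 32

/* Assumed to be provided by some X25519 implementation. */
extern void curve25519_sketch(uint8_t out[NOISE_KEY_LEN],
                              const uint8_t priv[NOISE_KEY_LEN],
                              const uint8_t pub[NOISE_KEY_LEN]);

/* Reject low-order peer public keys at add time, even without a private key. */
static bool peer_pubkey_ok(const uint8_t peer_pub[NOISE_KEY_LEN],
                           const uint8_t *own_priv /* may be NULL */)
{
    static const uint8_t zeros[NOISE_KEY_LEN];
    uint8_t test_priv[NOISE_KEY_LEN] = { 0 };
    uint8_t shared[NOISE_KEY_LEN];

    if (own_priv)
        memcpy(test_priv, own_priv, NOISE_KEY_LEN);

    /* Any clamped private key yields the all-zero output for a low-order
     * public key, so the zero key works as a stand-in for "not set yet". */
    curve25519_sketch(shared, test_priv, peer_pub);
    return memcmp(shared, zeros, NOISE_KEY_LEN) != 0;
}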
rkchrome pushed a commit that referenced this issue on Nov 3, 2020
[ Upstream commit 6617dfd ] Commit 4fc427e ("ipv6_route_seq_next should increase position index") tried to fix the issue where seq_file pos is not increased if a NULL element is returned with seq_ops->next(). See bug https://bugzilla.kernel.org/show_bug.cgi?id=206283 The commit effectively does: - increase pos for all seq_ops->start() - increase pos for all seq_ops->next() For ipv6_route, increasing pos for all seq_ops->next() is correct. But increasing pos for seq_ops->start() is not correct since pos is used to determine how many items to skip during seq_ops->start(): iter->skip = *pos; seq_ops->start() just fetches the *current* pos item. The item can be skipped only after seq_ops->show() which essentially is the beginning of seq_ops->next(). For example, I have 7 ipv6 route entries, root@arch-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=4096 00000000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000400 00000001 00000000 00000001 eth0 fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001 lo fe800000000000002050e3fffebd3be8 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0 ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000004 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 0+1 records in 0+1 records out 1050 bytes (1.0 kB, 1.0 KiB) copied, 0.00707908 s, 148 kB/s root@arch-fb-vm1:~/net-next In the above, I specify buffer size 4096, so all records can be returned to user space with a single trip to the kernel. If I use buffer size 128, since each record size is 149, internally kernel seq_read() will read 149 into its internal buffer and return the data to user space in two read() syscalls. Then user read() syscall will trigger next seq_ops->start(). Since the current implementation increased pos even for seq_ops->start(), it will skip record #2, #4 and #6, assuming the first record is #1. root@arch-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=128 00000000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000400 00000001 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo fe800000000000002050e3fffebd3be8 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 4+1 records in 4+1 records out 600 bytes copied, 0.00127758 s, 470 kB/s To fix the problem, create a fake pos pointer so seq_ops->start() won't actually increase seq_file pos. With this fix, the above `dd` command with `bs=128` will show correct result. 
Fixes: 4fc427e ("ipv6_route_seq_next should increase position index") Cc: Alexei Starovoitov <[email protected]> Suggested-by: Vasily Averin <[email protected]> Reviewed-by: Vasily Averin <[email protected]> Signed-off-by: Yonghong Song <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
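A hedged sketch of the "fake pos" idea from the commit (not the literal net/ipv6 patch): ->start() hands the iterator a private copy of pos, so only ->next() ever advances the real seq_file position.

/* Sketch of a seq_file ->start() that skips to *pos without bumping it. */
static void *route_seq_start_sketch(struct seq_file *seq, loff_t *pos)
{
    struct my_iter *iter = seq->private;   /* assumed iterator type */
    loff_t p = *pos;                       /* fake pos: a throwaway copy */

    /* Walk forward p entries using the copy; the caller's *pos is only
     * incremented later, from ->next(), after ->show() has run. */
    while (p-- > 0) {
        if (!advance_one_entry(iter))      /* assumed helper */
            return NULL;
    }
    return current_entry(iter);            /* assumed helper */
}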
rkchrome pushed a commit that referenced this issue on Nov 3, 2020
[ Upstream commit 71a174b ] b6da31b "tty: Fix data race in tty_insert_flip_string_fixed_flag" puts tty_flip_buffer_push under port->lock introducing the following possible circular locking dependency: [30129.876566] ====================================================== [30129.876566] WARNING: possible circular locking dependency detected [30129.876567] 5.9.0-rc2+ #3 Tainted: G S W [30129.876568] ------------------------------------------------------ [30129.876568] sysrq.sh/1222 is trying to acquire lock: [30129.876569] ffffffff92c39480 (console_owner){....}-{0:0}, at: console_unlock+0x3fe/0xa90 [30129.876572] but task is already holding lock: [30129.876572] ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876576] which lock already depends on the new lock. [30129.876577] the existing dependency chain (in reverse order) is: [30129.876578] -> #3 (&pool->lock/1){-.-.}-{2:2}: [30129.876581] _raw_spin_lock+0x30/0x70 [30129.876581] __queue_work+0x1a3/0x10f0 [30129.876582] queue_work_on+0x78/0x80 [30129.876582] pty_write+0x165/0x1e0 [30129.876583] n_tty_write+0x47f/0xf00 [30129.876583] tty_write+0x3d6/0x8d0 [30129.876584] vfs_write+0x1a8/0x650 [30129.876588] -> #2 (&port->lock#2){-.-.}-{2:2}: [30129.876590] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876591] tty_port_tty_get+0x1d/0xb0 [30129.876592] tty_port_default_wakeup+0xb/0x30 [30129.876592] serial8250_tx_chars+0x3d6/0x970 [30129.876593] serial8250_handle_irq.part.12+0x216/0x380 [30129.876593] serial8250_default_handle_irq+0x82/0xe0 [30129.876594] serial8250_interrupt+0xdd/0x1b0 [30129.876595] __handle_irq_event_percpu+0xfc/0x850 [30129.876602] -> #1 (&port->lock){-.-.}-{2:2}: [30129.876605] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876605] serial8250_console_write+0x12d/0x900 [30129.876606] console_unlock+0x679/0xa90 [30129.876606] register_console+0x371/0x6e0 [30129.876607] univ8250_console_init+0x24/0x27 [30129.876607] console_init+0x2f9/0x45e [30129.876609] -> #0 (console_owner){....}-{0:0}: [30129.876611] __lock_acquire+0x2f70/0x4e90 [30129.876612] lock_acquire+0x1ac/0xad0 [30129.876612] console_unlock+0x460/0xa90 [30129.876613] vprintk_emit+0x130/0x420 [30129.876613] printk+0x9f/0xc5 [30129.876614] show_pwq+0x154/0x618 [30129.876615] show_workqueue_state.cold.55+0x193/0x6ca [30129.876615] __handle_sysrq+0x244/0x460 [30129.876616] write_sysrq_trigger+0x48/0x4a [30129.876616] proc_reg_write+0x1a6/0x240 [30129.876617] vfs_write+0x1a8/0x650 [30129.876619] other info that might help us debug this: [30129.876620] Chain exists of: [30129.876621] console_owner --> &port->lock#2 --> &pool->lock/1 [30129.876625] Possible unsafe locking scenario: [30129.876626] CPU0 CPU1 [30129.876626] ---- ---- [30129.876627] lock(&pool->lock/1); [30129.876628] lock(&port->lock#2); [30129.876630] lock(&pool->lock/1); [30129.876631] lock(console_owner); [30129.876633] *** DEADLOCK *** [30129.876634] 5 locks held by sysrq.sh/1222: [30129.876634] #0: ffff8881d3ce0470 (sb_writers#3){.+.+}-{0:0}, at: vfs_write+0x359/0x650 [30129.876637] #1: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: __handle_sysrq+0x4d/0x460 [30129.876640] #2: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: show_workqueue_state+0x5/0xf0 [30129.876642] #3: ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876645] #4: ffffffff92c39980 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0x123/0x420 [30129.876648] stack backtrace: [30129.876649] CPU: 3 PID: 1222 Comm: sysrq.sh Tainted: G S W 5.9.0-rc2+ #3 
[30129.876649] Hardware name: Intel Corporation 2012 Client Platform/Emerald Lake 2, BIOS ACRVMBY1.86C.0078.P00.1201161002 01/16/2012 [30129.876650] Call Trace: [30129.876650] dump_stack+0x9d/0xe0 [30129.876651] check_noncircular+0x34f/0x410 [30129.876653] __lock_acquire+0x2f70/0x4e90 [30129.876656] lock_acquire+0x1ac/0xad0 [30129.876658] console_unlock+0x460/0xa90 [30129.876660] vprintk_emit+0x130/0x420 [30129.876660] printk+0x9f/0xc5 [30129.876661] show_pwq+0x154/0x618 [30129.876662] show_workqueue_state.cold.55+0x193/0x6ca [30129.876664] __handle_sysrq+0x244/0x460 [30129.876665] write_sysrq_trigger+0x48/0x4a [30129.876665] proc_reg_write+0x1a6/0x240 [30129.876666] vfs_write+0x1a8/0x650 It looks like the commit was aimed to protect tty_insert_flip_string and there is no need for tty_flip_buffer_push to be under this lock. Fixes: b6da31b ("tty: Fix data race in tty_insert_flip_string_fixed_flag") Signed-off-by: Artem Savkov <[email protected]> Acked-by: Jiri Slaby <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
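A hedged sketch of the narrower locking the commit ends up with: keep tty_insert_flip_string() under port->lock (that is the data it races on), but call tty_flip_buffer_push() only after the lock is dropped.

/* Sketch: receive path of a tty driver pushing chars to the flip buffer. */
static void rx_chars_sketch(struct tty_port *port, spinlock_t *lock,
                            const unsigned char *chars, size_t size)
{
    unsigned long flags;

    spin_lock_irqsave(lock, flags);
    tty_insert_flip_string(port, chars, size);   /* needs the lock */
    spin_unlock_irqrestore(lock, flags);

    /* Queues work on the workqueue; must not be called under port->lock,
     * or the pool->lock / console_owner chain above can deadlock. */
    tty_flip_buffer_push(port);
}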
friendlyarm referenced this issue in friendlyarm/kernel-rockchip on Nov 17, 2020
friendlyarm referenced this issue in friendlyarm/kernel-rockchip on Nov 17, 2020
[ Upstream commit 6617dfd ] Commit 4fc427e ("ipv6_route_seq_next should increase position index") tried to fix the issue where seq_file pos is not increased if a NULL element is returned with seq_ops->next(). See bug https://bugzilla.kernel.org/show_bug.cgi?id=206283 The commit effectively does: - increase pos for all seq_ops->start() - increase pos for all seq_ops->next() For ipv6_route, increasing pos for all seq_ops->next() is correct. But increasing pos for seq_ops->start() is not correct since pos is used to determine how many items to skip during seq_ops->start(): iter->skip = *pos; seq_ops->start() just fetches the *current* pos item. The item can be skipped only after seq_ops->show() which essentially is the beginning of seq_ops->next(). For example, I have 7 ipv6 route entries, root@arch-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=4096 00000000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000400 00000001 00000000 00000001 eth0 fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001 lo fe800000000000002050e3fffebd3be8 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0 ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000004 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 0+1 records in 0+1 records out 1050 bytes (1.0 kB, 1.0 KiB) copied, 0.00707908 s, 148 kB/s root@arch-fb-vm1:~/net-next In the above, I specify buffer size 4096, so all records can be returned to user space with a single trip to the kernel. If I use buffer size 128, since each record size is 149, internally kernel seq_read() will read 149 into its internal buffer and return the data to user space in two read() syscalls. Then user read() syscall will trigger next seq_ops->start(). Since the current implementation increased pos even for seq_ops->start(), it will skip record #2, #4 and #6, assuming the first record is #1. root@arch-fb-vm1:~/net-next dd if=/proc/net/ipv6_route bs=128 00000000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000400 00000001 00000000 00000001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo fe800000000000002050e3fffebd3be8 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001 eth0 00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200 lo 4+1 records in 4+1 records out 600 bytes copied, 0.00127758 s, 470 kB/s To fix the problem, create a fake pos pointer so seq_ops->start() won't actually increase seq_file pos. With this fix, the above `dd` command with `bs=128` will show correct result. 
Fixes: 4fc427e ("ipv6_route_seq_next should increase position index") Cc: Alexei Starovoitov <[email protected]> Suggested-by: Vasily Averin <[email protected]> Reviewed-by: Vasily Averin <[email protected]> Signed-off-by: Yonghong Song <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Acked-by: Andrii Nakryiko <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
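To make the start()/next() bookkeeping above concrete, here is a small userspace toy. It is not the kernel seq_file code; the record indices and the one-record-per-read assumption are illustrative, chosen to mimic the `dd bs=128` case where each read() syscall delivers a single record and therefore triggers a fresh seq_ops->start().

```c
#include <stdio.h>

#define NRECORDS 7

/* start() fetches the record at *pos; next() is the only place *pos
 * should normally advance. The "buggy" flag adds the extra bump in
 * start() that the reverted behaviour introduced. */
static int seq_start(long *pos, int buggy)
{
	long cur = *pos;
	if (buggy)
		++*pos;                        /* problematic bump inside start() */
	return cur < NRECORDS ? (int)cur : -1;
}

static void seq_next(long *pos)
{
	++*pos;                                /* correct place to advance */
}

static void dump(const char *label, int buggy)
{
	long pos = 0;
	printf("%-36s:", label);
	for (;;) {
		int rec = seq_start(&pos, buggy);  /* one read() syscall */
		if (rec < 0)
			break;
		printf(" #%d", rec + 1);           /* seq_ops->show() */
		seq_next(&pos);                    /* buffer full, back to user space */
	}
	printf("\n");
}

int main(void)
{
	dump("pos bumped only in next (correct)", 0);
	dump("pos also bumped in start (buggy)", 1);
	return 0;
}
```

With the extra bump in start() the toy prints #1 #3 #5 #7, matching the skipped records #2, #4 and #6 in the report above; without it, all seven records appear.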
friendlyarm
referenced
this issue
in friendlyarm/kernel-rockchip
Nov 17, 2020
[ Upstream commit 71a174b ] b6da31b "tty: Fix data race in tty_insert_flip_string_fixed_flag" puts tty_flip_buffer_push under port->lock introducing the following possible circular locking dependency: [30129.876566] ====================================================== [30129.876566] WARNING: possible circular locking dependency detected [30129.876567] 5.9.0-rc2+ #3 Tainted: G S W [30129.876568] ------------------------------------------------------ [30129.876568] sysrq.sh/1222 is trying to acquire lock: [30129.876569] ffffffff92c39480 (console_owner){....}-{0:0}, at: console_unlock+0x3fe/0xa90 [30129.876572] but task is already holding lock: [30129.876572] ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876576] which lock already depends on the new lock. [30129.876577] the existing dependency chain (in reverse order) is: [30129.876578] -> #3 (&pool->lock/1){-.-.}-{2:2}: [30129.876581] _raw_spin_lock+0x30/0x70 [30129.876581] __queue_work+0x1a3/0x10f0 [30129.876582] queue_work_on+0x78/0x80 [30129.876582] pty_write+0x165/0x1e0 [30129.876583] n_tty_write+0x47f/0xf00 [30129.876583] tty_write+0x3d6/0x8d0 [30129.876584] vfs_write+0x1a8/0x650 [30129.876588] -> #2 (&port->lock#2){-.-.}-{2:2}: [30129.876590] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876591] tty_port_tty_get+0x1d/0xb0 [30129.876592] tty_port_default_wakeup+0xb/0x30 [30129.876592] serial8250_tx_chars+0x3d6/0x970 [30129.876593] serial8250_handle_irq.part.12+0x216/0x380 [30129.876593] serial8250_default_handle_irq+0x82/0xe0 [30129.876594] serial8250_interrupt+0xdd/0x1b0 [30129.876595] __handle_irq_event_percpu+0xfc/0x850 [30129.876602] -> #1 (&port->lock){-.-.}-{2:2}: [30129.876605] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876605] serial8250_console_write+0x12d/0x900 [30129.876606] console_unlock+0x679/0xa90 [30129.876606] register_console+0x371/0x6e0 [30129.876607] univ8250_console_init+0x24/0x27 [30129.876607] console_init+0x2f9/0x45e [30129.876609] -> #0 (console_owner){....}-{0:0}: [30129.876611] __lock_acquire+0x2f70/0x4e90 [30129.876612] lock_acquire+0x1ac/0xad0 [30129.876612] console_unlock+0x460/0xa90 [30129.876613] vprintk_emit+0x130/0x420 [30129.876613] printk+0x9f/0xc5 [30129.876614] show_pwq+0x154/0x618 [30129.876615] show_workqueue_state.cold.55+0x193/0x6ca [30129.876615] __handle_sysrq+0x244/0x460 [30129.876616] write_sysrq_trigger+0x48/0x4a [30129.876616] proc_reg_write+0x1a6/0x240 [30129.876617] vfs_write+0x1a8/0x650 [30129.876619] other info that might help us debug this: [30129.876620] Chain exists of: [30129.876621] console_owner --> &port->lock#2 --> &pool->lock/1 [30129.876625] Possible unsafe locking scenario: [30129.876626] CPU0 CPU1 [30129.876626] ---- ---- [30129.876627] lock(&pool->lock/1); [30129.876628] lock(&port->lock#2); [30129.876630] lock(&pool->lock/1); [30129.876631] lock(console_owner); [30129.876633] *** DEADLOCK *** [30129.876634] 5 locks held by sysrq.sh/1222: [30129.876634] #0: ffff8881d3ce0470 (sb_writers#3){.+.+}-{0:0}, at: vfs_write+0x359/0x650 [30129.876637] #1: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: __handle_sysrq+0x4d/0x460 [30129.876640] #2: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: show_workqueue_state+0x5/0xf0 [30129.876642] #3: ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876645] #4: ffffffff92c39980 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0x123/0x420 [30129.876648] stack backtrace: [30129.876649] CPU: 3 PID: 1222 Comm: sysrq.sh Tainted: G S W 5.9.0-rc2+ #3 
[30129.876649] Hardware name: Intel Corporation 2012 Client Platform/Emerald Lake 2, BIOS ACRVMBY1.86C.0078.P00.1201161002 01/16/2012 [30129.876650] Call Trace: [30129.876650] dump_stack+0x9d/0xe0 [30129.876651] check_noncircular+0x34f/0x410 [30129.876653] __lock_acquire+0x2f70/0x4e90 [30129.876656] lock_acquire+0x1ac/0xad0 [30129.876658] console_unlock+0x460/0xa90 [30129.876660] vprintk_emit+0x130/0x420 [30129.876660] printk+0x9f/0xc5 [30129.876661] show_pwq+0x154/0x618 [30129.876662] show_workqueue_state.cold.55+0x193/0x6ca [30129.876664] __handle_sysrq+0x244/0x460 [30129.876665] write_sysrq_trigger+0x48/0x4a [30129.876665] proc_reg_write+0x1a6/0x240 [30129.876666] vfs_write+0x1a8/0x650 It looks like the commit was aimed to protect tty_insert_flip_string and there is no need for tty_flip_buffer_push to be under this lock. Fixes: b6da31b ("tty: Fix data race in tty_insert_flip_string_fixed_flag") Signed-off-by: Artem Savkov <[email protected]> Acked-by: Jiri Slaby <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Sasha Levin <[email protected]>
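The shape of the fix, as described in the commit, is roughly the following. This is a sketch only, not the verbatim patched function, and it is not compilable on its own (it assumes the usual tty driver context: struct tty_struct, tty_insert_flip_string(), tty_flip_buffer_push()). The point is that the flip-buffer insertion stays under port->lock while the push happens after the lock is dropped, so the workqueue wakeup no longer nests &pool->lock under &port->lock.

```c
/* Sketch of the reordering described above (illustrative only). */
static int pty_write_sketch(struct tty_struct *to, const unsigned char *buf, int c)
{
	unsigned long flags;

	if (c > 0) {
		spin_lock_irqsave(&to->port->lock, flags);
		/* insertion still serialized against other writers */
		c = tty_insert_flip_string(to->port, buf, c);
		spin_unlock_irqrestore(&to->port->lock, flags);
		/* the push may queue work and take pool->lock, so it now
		 * runs outside port->lock */
		if (c)
			tty_flip_buffer_push(to->port);
	}
	return c;
}
```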
friendlyarm
referenced
this issue
in friendlyarm/kernel-rockchip
Nov 17, 2020
[ Upstream commit e5e179a ] At memory hot-remove time we can retrieve an LMB's nid from its corresponding memory_block. There is no need to store the nid in multiple locations. Note that lmb_to_memblock() uses find_memory_block() to get the corresponding memory_block. As find_memory_block() runs in sub-linear time this approach is negligibly slower than what we do at present. In exchange for this lookup at hot-remove time we no longer need to call memory_add_physaddr_to_nid() during drmem_init() for each LMB. On powerpc, memory_add_physaddr_to_nid() is a linear search, so this spares us an O(n^2) initialization during boot. On systems with many LMBs that initialization overhead is palpable and disruptive. For example, on a box with 249854 LMBs we're seeing drmem_init() take upwards of 30 seconds to complete: [ 53.721639] drmem: initializing drmem v2 [ 80.604346] watchdog: BUG: soft lockup - CPU#65 stuck for 23s! [swapper/0:1] [ 80.604377] Modules linked in: [ 80.604389] CPU: 65 PID: 1 Comm: swapper/0 Not tainted 5.6.0-rc2+ #4 [ 80.604397] NIP: c0000000000a4980 LR: c0000000000a4940 CTR: 0000000000000000 [ 80.604407] REGS: c0002dbff8493830 TRAP: 0901 Not tainted (5.6.0-rc2+) [ 80.604412] MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE> CR: 44000248 XER: 0000000d [ 80.604431] CFAR: c0000000000a4a38 IRQMASK: 0 [ 80.604431] GPR00: c0000000000a4940 c0002dbff8493ac0 c000000001904400 c0003cfffffede30 [ 80.604431] GPR04: 0000000000000000 c000000000f4095a 000000000000002f 0000000010000000 [ 80.604431] GPR08: c0000bf7ecdb7fb8 c0000bf7ecc2d3c8 0000000000000008 c00c0002fdfb2001 [ 80.604431] GPR12: 0000000000000000 c00000001e8ec200 [ 80.604477] NIP [c0000000000a4980] hot_add_scn_to_nid+0xa0/0x3e0 [ 80.604486] LR [c0000000000a4940] hot_add_scn_to_nid+0x60/0x3e0 [ 80.604492] Call Trace: [ 80.604498] [c0002dbff8493ac0] [c0000000000a4940] hot_add_scn_to_nid+0x60/0x3e0 (unreliable) [ 80.604509] [c0002dbff8493b20] [c000000000087c10] memory_add_physaddr_to_nid+0x20/0x60 [ 80.604521] [c0002dbff8493b40] [c0000000010d4880] drmem_init+0x25c/0x2f0 [ 80.604530] [c0002dbff8493c10] [c000000000010154] do_one_initcall+0x64/0x2c0 [ 80.604540] [c0002dbff8493ce0] [c0000000010c4aa0] kernel_init_freeable+0x2d8/0x3a0 [ 80.604550] [c0002dbff8493db0] [c000000000010824] kernel_init+0x2c/0x148 [ 80.604560] [c0002dbff8493e20] [c00000000000b648] ret_from_kernel_thread+0x5c/0x74 [ 80.604567] Instruction dump: [ 80.604574] 392918e8 e9490000 e90a000a e92a0000 80ea000c 1d080018 3908ffe8 7d094214 [ 80.604586] 7fa94040 419d00dc e9490010 714a0088 <2faa0008> 409e00ac e9490000 7fbe5040 [ 89.047390] drmem: 249854 LMB(s) With a patched kernel on the same machine we're no longer seeing the soft lockup. drmem_init() now completes in negligible time, even when the LMB count is large. Fixes: b2d3b5e ("powerpc/pseries: Track LMB nid instead of using device tree") Signed-off-by: Scott Cheloha <[email protected]> Reviewed-by: Nathan Lynch <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sasha Levin <[email protected]>
friendlyarm
referenced
this issue
in friendlyarm/kernel-rockchip
Nov 17, 2020
commit 66d204a upstream. Very sporadically I had test case btrfs/069 from fstests hanging (for years, it is not a recent regression), with the following traces in dmesg/syslog: [162301.160628] BTRFS info (device sdc): dev_replace from /dev/sdd (devid 2) to /dev/sdg started [162301.181196] BTRFS info (device sdc): scrub: finished on devid 4 with status: 0 [162301.287162] BTRFS info (device sdc): dev_replace from /dev/sdd (devid 2) to /dev/sdg finished [162513.513792] INFO: task btrfs-transacti:1356167 blocked for more than 120 seconds. [162513.514318] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.514522] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.514747] task:btrfs-transacti state:D stack: 0 pid:1356167 ppid: 2 flags:0x00004000 [162513.514751] Call Trace: [162513.514761] __schedule+0x5ce/0xd00 [162513.514765] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.514771] schedule+0x46/0xf0 [162513.514844] wait_current_trans+0xde/0x140 [btrfs] [162513.514850] ? finish_wait+0x90/0x90 [162513.514864] start_transaction+0x37c/0x5f0 [btrfs] [162513.514879] transaction_kthread+0xa4/0x170 [btrfs] [162513.514891] ? btrfs_cleanup_transaction+0x660/0x660 [btrfs] [162513.514894] kthread+0x153/0x170 [162513.514897] ? kthread_stop+0x2c0/0x2c0 [162513.514902] ret_from_fork+0x22/0x30 [162513.514916] INFO: task fsstress:1356184 blocked for more than 120 seconds. [162513.515192] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.515431] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.515680] task:fsstress state:D stack: 0 pid:1356184 ppid:1356177 flags:0x00004000 [162513.515682] Call Trace: [162513.515688] __schedule+0x5ce/0xd00 [162513.515691] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.515697] schedule+0x46/0xf0 [162513.515712] wait_current_trans+0xde/0x140 [btrfs] [162513.515716] ? finish_wait+0x90/0x90 [162513.515729] start_transaction+0x37c/0x5f0 [btrfs] [162513.515743] btrfs_attach_transaction_barrier+0x1f/0x50 [btrfs] [162513.515753] btrfs_sync_fs+0x61/0x1c0 [btrfs] [162513.515758] ? __ia32_sys_fdatasync+0x20/0x20 [162513.515761] iterate_supers+0x87/0xf0 [162513.515765] ksys_sync+0x60/0xb0 [162513.515768] __do_sys_sync+0xa/0x10 [162513.515771] do_syscall_64+0x33/0x80 [162513.515774] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.515781] RIP: 0033:0x7f5238f50bd7 [162513.515782] Code: Bad RIP value. [162513.515784] RSP: 002b:00007fff67b978e8 EFLAGS: 00000206 ORIG_RAX: 00000000000000a2 [162513.515786] RAX: ffffffffffffffda RBX: 000055b1fad2c560 RCX: 00007f5238f50bd7 [162513.515788] RDX: 00000000ffffffff RSI: 000000000daf0e74 RDI: 000000000000003a [162513.515789] RBP: 0000000000000032 R08: 000000000000000a R09: 00007f5239019be0 [162513.515791] R10: fffffffffffff24f R11: 0000000000000206 R12: 000000000000003a [162513.515792] R13: 00007fff67b97950 R14: 00007fff67b97906 R15: 000055b1fad1a340 [162513.515804] INFO: task fsstress:1356185 blocked for more than 120 seconds. [162513.516064] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.516329] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.516617] task:fsstress state:D stack: 0 pid:1356185 ppid:1356177 flags:0x00000000 [162513.516620] Call Trace: [162513.516625] __schedule+0x5ce/0xd00 [162513.516628] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.516634] schedule+0x46/0xf0 [162513.516647] wait_current_trans+0xde/0x140 [btrfs] [162513.516650] ? 
finish_wait+0x90/0x90 [162513.516662] start_transaction+0x4d7/0x5f0 [btrfs] [162513.516679] btrfs_setxattr_trans+0x3c/0x100 [btrfs] [162513.516686] __vfs_setxattr+0x66/0x80 [162513.516691] __vfs_setxattr_noperm+0x70/0x200 [162513.516697] vfs_setxattr+0x6b/0x120 [162513.516703] setxattr+0x125/0x240 [162513.516709] ? lock_acquire+0xb1/0x480 [162513.516712] ? mnt_want_write+0x20/0x50 [162513.516721] ? rcu_read_lock_any_held+0x8e/0xb0 [162513.516723] ? preempt_count_add+0x49/0xa0 [162513.516725] ? __sb_start_write+0x19b/0x290 [162513.516727] ? preempt_count_add+0x49/0xa0 [162513.516732] path_setxattr+0xba/0xd0 [162513.516739] __x64_sys_setxattr+0x27/0x30 [162513.516741] do_syscall_64+0x33/0x80 [162513.516743] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.516745] RIP: 0033:0x7f5238f56d5a [162513.516746] Code: Bad RIP value. [162513.516748] RSP: 002b:00007fff67b97868 EFLAGS: 00000202 ORIG_RAX: 00000000000000bc [162513.516750] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f5238f56d5a [162513.516751] RDX: 000055b1fbb0d5a0 RSI: 00007fff67b978a0 RDI: 000055b1fbb0d470 [162513.516753] RBP: 000055b1fbb0d5a0 R08: 0000000000000001 R09: 00007fff67b97700 [162513.516754] R10: 0000000000000004 R11: 0000000000000202 R12: 0000000000000004 [162513.516756] R13: 0000000000000024 R14: 0000000000000001 R15: 00007fff67b978a0 [162513.516767] INFO: task fsstress:1356196 blocked for more than 120 seconds. [162513.517064] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.517365] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.517763] task:fsstress state:D stack: 0 pid:1356196 ppid:1356177 flags:0x00004000 [162513.517780] Call Trace: [162513.517786] __schedule+0x5ce/0xd00 [162513.517789] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.517796] schedule+0x46/0xf0 [162513.517810] wait_current_trans+0xde/0x140 [btrfs] [162513.517814] ? finish_wait+0x90/0x90 [162513.517829] start_transaction+0x37c/0x5f0 [btrfs] [162513.517845] btrfs_attach_transaction_barrier+0x1f/0x50 [btrfs] [162513.517857] btrfs_sync_fs+0x61/0x1c0 [btrfs] [162513.517862] ? __ia32_sys_fdatasync+0x20/0x20 [162513.517865] iterate_supers+0x87/0xf0 [162513.517869] ksys_sync+0x60/0xb0 [162513.517872] __do_sys_sync+0xa/0x10 [162513.517875] do_syscall_64+0x33/0x80 [162513.517878] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.517881] RIP: 0033:0x7f5238f50bd7 [162513.517883] Code: Bad RIP value. [162513.517885] RSP: 002b:00007fff67b978e8 EFLAGS: 00000206 ORIG_RAX: 00000000000000a2 [162513.517887] RAX: ffffffffffffffda RBX: 000055b1fad2c560 RCX: 00007f5238f50bd7 [162513.517889] RDX: 0000000000000000 RSI: 000000007660add2 RDI: 0000000000000053 [162513.517891] RBP: 0000000000000032 R08: 0000000000000067 R09: 00007f5239019be0 [162513.517893] R10: fffffffffffff24f R11: 0000000000000206 R12: 0000000000000053 [162513.517895] R13: 00007fff67b97950 R14: 00007fff67b97906 R15: 000055b1fad1a340 [162513.517908] INFO: task fsstress:1356197 blocked for more than 120 seconds. [162513.518298] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.518672] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.519157] task:fsstress state:D stack: 0 pid:1356197 ppid:1356177 flags:0x00000000 [162513.519160] Call Trace: [162513.519165] __schedule+0x5ce/0xd00 [162513.519168] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.519174] schedule+0x46/0xf0 [162513.519190] wait_current_trans+0xde/0x140 [btrfs] [162513.519193] ? 
finish_wait+0x90/0x90 [162513.519206] start_transaction+0x4d7/0x5f0 [btrfs] [162513.519222] btrfs_create+0x57/0x200 [btrfs] [162513.519230] lookup_open+0x522/0x650 [162513.519246] path_openat+0x2b8/0xa50 [162513.519270] do_filp_open+0x91/0x100 [162513.519275] ? find_held_lock+0x32/0x90 [162513.519280] ? lock_acquired+0x33b/0x470 [162513.519285] ? do_raw_spin_unlock+0x4b/0xc0 [162513.519287] ? _raw_spin_unlock+0x29/0x40 [162513.519295] do_sys_openat2+0x20d/0x2d0 [162513.519300] do_sys_open+0x44/0x80 [162513.519304] do_syscall_64+0x33/0x80 [162513.519307] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.519309] RIP: 0033:0x7f5238f4a903 [162513.519310] Code: Bad RIP value. [162513.519312] RSP: 002b:00007fff67b97758 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 [162513.519314] RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007f5238f4a903 [162513.519316] RDX: 0000000000000000 RSI: 00000000000001b6 RDI: 000055b1fbb0d470 [162513.519317] RBP: 00007fff67b978c0 R08: 0000000000000001 R09: 0000000000000002 [162513.519319] R10: 00007fff67b974f7 R11: 0000000000000246 R12: 0000000000000013 [162513.519320] R13: 00000000000001b6 R14: 00007fff67b97906 R15: 000055b1fad1c620 [162513.519332] INFO: task btrfs:1356211 blocked for more than 120 seconds. [162513.519727] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.520115] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.520508] task:btrfs state:D stack: 0 pid:1356211 ppid:1356178 flags:0x00004002 [162513.520511] Call Trace: [162513.520516] __schedule+0x5ce/0xd00 [162513.520519] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.520525] schedule+0x46/0xf0 [162513.520544] btrfs_scrub_pause+0x11f/0x180 [btrfs] [162513.520548] ? finish_wait+0x90/0x90 [162513.520562] btrfs_commit_transaction+0x45a/0xc30 [btrfs] [162513.520574] ? start_transaction+0xe0/0x5f0 [btrfs] [162513.520596] btrfs_dev_replace_finishing+0x6d8/0x711 [btrfs] [162513.520619] btrfs_dev_replace_by_ioctl.cold+0x1cc/0x1fd [btrfs] [162513.520639] btrfs_ioctl+0x2a25/0x36f0 [btrfs] [162513.520643] ? do_sigaction+0xf3/0x240 [162513.520645] ? find_held_lock+0x32/0x90 [162513.520648] ? do_sigaction+0xf3/0x240 [162513.520651] ? lock_acquired+0x33b/0x470 [162513.520655] ? _raw_spin_unlock_irq+0x24/0x50 [162513.520657] ? lockdep_hardirqs_on+0x7d/0x100 [162513.520660] ? _raw_spin_unlock_irq+0x35/0x50 [162513.520662] ? do_sigaction+0xf3/0x240 [162513.520671] ? __x64_sys_ioctl+0x83/0xb0 [162513.520672] __x64_sys_ioctl+0x83/0xb0 [162513.520677] do_syscall_64+0x33/0x80 [162513.520679] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.520681] RIP: 0033:0x7fc3cd307d87 [162513.520682] Code: Bad RIP value. 
[162513.520684] RSP: 002b:00007ffe30a56bb8 EFLAGS: 00000202 ORIG_RAX: 0000000000000010 [162513.520686] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fc3cd307d87 [162513.520687] RDX: 00007ffe30a57a30 RSI: 00000000ca289435 RDI: 0000000000000003 [162513.520689] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 [162513.520690] R10: 0000000000000008 R11: 0000000000000202 R12: 0000000000000003 [162513.520692] R13: 0000557323a212e0 R14: 00007ffe30a5a520 R15: 0000000000000001 [162513.520703] Showing all locks held in the system: [162513.520712] 1 lock held by khungtaskd/54: [162513.520713] #0: ffffffffb40a91a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x15/0x197 [162513.520728] 1 lock held by in:imklog/596: [162513.520729] #0: ffff8f3f0d781400 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x4d/0x60 [162513.520782] 1 lock held by btrfs-transacti/1356167: [162513.520784] #0: ffff8f3d810cc848 (&fs_info->transaction_kthread_mutex){+.+.}-{3:3}, at: transaction_kthread+0x4a/0x170 [btrfs] [162513.520798] 1 lock held by btrfs/1356190: [162513.520800] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write_file+0x22/0x60 [162513.520805] 1 lock held by fsstress/1356184: [162513.520806] #0: ffff8f3d576440e8 (&type->s_umount_key#62){++++}-{3:3}, at: iterate_supers+0x6f/0xf0 [162513.520811] 3 locks held by fsstress/1356185: [162513.520812] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x20/0x50 [162513.520815] #1: ffff8f3d80a650b8 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: vfs_setxattr+0x50/0x120 [162513.520820] #2: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs] [162513.520833] 1 lock held by fsstress/1356196: [162513.520834] #0: ffff8f3d576440e8 (&type->s_umount_key#62){++++}-{3:3}, at: iterate_supers+0x6f/0xf0 [162513.520838] 3 locks held by fsstress/1356197: [162513.520839] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x20/0x50 [162513.520843] #1: ffff8f3d506465e8 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: path_openat+0x2a7/0xa50 [162513.520846] #2: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs] [162513.520858] 2 locks held by btrfs/1356211: [162513.520859] #0: ffff8f3d810cde30 (&fs_info->dev_replace.lock_finishing_cancel_unmount){+.+.}-{3:3}, at: btrfs_dev_replace_finishing+0x52/0x711 [btrfs] [162513.520877] #1: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs] This was weird because the stack traces show that a transaction commit, triggered by a device replace operation, is blocking trying to pause any running scrubs but there are no stack traces of blocked tasks doing a scrub. 
After poking around with drgn, I noticed there was a scrub task that was constantly running and blocking for shorts periods of time: >>> t = find_task(prog, 1356190) >>> prog.stack_trace(t) #0 __schedule+0x5ce/0xcfc #1 schedule+0x46/0xe4 #2 schedule_timeout+0x1df/0x475 #3 btrfs_reada_wait+0xda/0x132 #4 scrub_stripe+0x2a8/0x112f #5 scrub_chunk+0xcd/0x134 #6 scrub_enumerate_chunks+0x29e/0x5ee #7 btrfs_scrub_dev+0x2d5/0x91b #8 btrfs_ioctl+0x7f5/0x36e7 rockchip-linux#9 __x64_sys_ioctl+0x83/0xb0 rockchip-linux#10 do_syscall_64+0x33/0x77 rockchip-linux#11 entry_SYSCALL_64+0x7c/0x156 Which corresponds to: int btrfs_reada_wait(void *handle) { struct reada_control *rc = handle; struct btrfs_fs_info *fs_info = rc->fs_info; while (atomic_read(&rc->elems)) { if (!atomic_read(&fs_info->reada_works_cnt)) reada_start_machine(fs_info); wait_event_timeout(rc->wait, atomic_read(&rc->elems) == 0, (HZ + 9) / 10); } (...) So the counter "rc->elems" was set to 1 and never decreased to 0, causing the scrub task to loop forever in that function. Then I used the following script for drgn to check the readahead requests: $ cat dump_reada.py import sys import drgn from drgn import NULL, Object, cast, container_of, execscript, \ reinterpret, sizeof from drgn.helpers.linux import * mnt_path = b"/home/fdmanana/btrfs-tests/scratch_1" mnt = None for mnt in for_each_mount(prog, dst = mnt_path): pass if mnt is None: sys.stderr.write(f'Error: mount point {mnt_path} not found\n') sys.exit(1) fs_info = cast('struct btrfs_fs_info *', mnt.mnt.mnt_sb.s_fs_info) def dump_re(re): nzones = re.nzones.value_() print(f're at {hex(re.value_())}') print(f'\t logical {re.logical.value_()}') print(f'\t refcnt {re.refcnt.value_()}') print(f'\t nzones {nzones}') for i in range(nzones): dev = re.zones[i].device name = dev.name.str.string_() print(f'\t\t dev id {dev.devid.value_()} name {name}') print() for _, e in radix_tree_for_each(fs_info.reada_tree): re = cast('struct reada_extent *', e) dump_re(re) $ drgn dump_reada.py re at 0xffff8f3da9d25ad8 logical 38928384 refcnt 1 nzones 1 dev id 0 name b'/dev/sdd' $ So there was one readahead extent with a single zone corresponding to the source device of that last device replace operation logged in dmesg/syslog. Also the ID of that zone's device was 0 which is a special value set in the source device of a device replace operation when the operation finishes (constant BTRFS_DEV_REPLACE_DEVID set at btrfs_dev_replace_finishing()), confirming again that device /dev/sdd was the source of a device replace operation. 
Normally there should be as many zones in the readahead extent as there are devices, and I wasn't expecting the extent to be in a block group with a 'single' profile, so I went and confirmed with the following drgn script that there weren't any single profile block groups: $ cat dump_block_groups.py import sys import drgn from drgn import NULL, Object, cast, container_of, execscript, \ reinterpret, sizeof from drgn.helpers.linux import * mnt_path = b"/home/fdmanana/btrfs-tests/scratch_1" mnt = None for mnt in for_each_mount(prog, dst = mnt_path): pass if mnt is None: sys.stderr.write(f'Error: mount point {mnt_path} not found\n') sys.exit(1) fs_info = cast('struct btrfs_fs_info *', mnt.mnt.mnt_sb.s_fs_info) BTRFS_BLOCK_GROUP_DATA = (1 << 0) BTRFS_BLOCK_GROUP_SYSTEM = (1 << 1) BTRFS_BLOCK_GROUP_METADATA = (1 << 2) BTRFS_BLOCK_GROUP_RAID0 = (1 << 3) BTRFS_BLOCK_GROUP_RAID1 = (1 << 4) BTRFS_BLOCK_GROUP_DUP = (1 << 5) BTRFS_BLOCK_GROUP_RAID10 = (1 << 6) BTRFS_BLOCK_GROUP_RAID5 = (1 << 7) BTRFS_BLOCK_GROUP_RAID6 = (1 << 8) BTRFS_BLOCK_GROUP_RAID1C3 = (1 << 9) BTRFS_BLOCK_GROUP_RAID1C4 = (1 << 10) def bg_flags_string(bg): flags = bg.flags.value_() ret = '' if flags & BTRFS_BLOCK_GROUP_DATA: ret = 'data' if flags & BTRFS_BLOCK_GROUP_METADATA: if len(ret) > 0: ret += '|' ret += 'meta' if flags & BTRFS_BLOCK_GROUP_SYSTEM: if len(ret) > 0: ret += '|' ret += 'system' if flags & BTRFS_BLOCK_GROUP_RAID0: ret += ' raid0' elif flags & BTRFS_BLOCK_GROUP_RAID1: ret += ' raid1' elif flags & BTRFS_BLOCK_GROUP_DUP: ret += ' dup' elif flags & BTRFS_BLOCK_GROUP_RAID10: ret += ' raid10' elif flags & BTRFS_BLOCK_GROUP_RAID5: ret += ' raid5' elif flags & BTRFS_BLOCK_GROUP_RAID6: ret += ' raid6' elif flags & BTRFS_BLOCK_GROUP_RAID1C3: ret += ' raid1c3' elif flags & BTRFS_BLOCK_GROUP_RAID1C4: ret += ' raid1c4' else: ret += ' single' return ret def dump_bg(bg): print() print(f'block group at {hex(bg.value_())}') print(f'\t start {bg.start.value_()} length {bg.length.value_()}') print(f'\t flags {bg.flags.value_()} - {bg_flags_string(bg)}') bg_root = fs_info.block_group_cache_tree.address_of_() for bg in rbtree_inorder_for_each_entry('struct btrfs_block_group', bg_root, 'cache_node'): dump_bg(bg) $ drgn dump_block_groups.py block group at 0xffff8f3d673b0400 start 22020096 length 16777216 flags 258 - system raid6 block group at 0xffff8f3d53ddb400 start 38797312 length 536870912 flags 260 - meta raid6 block group at 0xffff8f3d5f4d9c00 start 575668224 length 2147483648 flags 257 - data raid6 block group at 0xffff8f3d08189000 start 2723151872 length 67108864 flags 258 - system raid6 block group at 0xffff8f3db70ff000 start 2790260736 length 1073741824 flags 260 - meta raid6 block group at 0xffff8f3d5f4dd800 start 3864002560 length 67108864 flags 258 - system raid6 block group at 0xffff8f3d67037000 start 3931111424 length 2147483648 flags 257 - data raid6 $ So there were only 2 reasons left for having a readahead extent with a single zone: reada_find_zone(), called when creating a readahead extent, returned NULL either because we failed to find the corresponding block group or because a memory allocation failed. With some additional and custom tracing I figured out that on every further ocurrence of the problem the block group had just been deleted when we were looping to create the zones for the readahead extent (at reada_find_extent()), so we ended up with only one zone in the readahead extent, corresponding to a device that ends up getting replaced. 
So after figuring that out it became obvious why the hang happens: 1) Task A starts a scrub on any device of the filesystem, except for device /dev/sdd; 2) Task B starts a device replace with /dev/sdd as the source device; 3) Task A calls btrfs_reada_add() from scrub_stripe() and it is currently starting to scrub a stripe from block group X. This call to btrfs_reada_add() is the one for the extent tree. When btrfs_reada_add() calls reada_add_block(), it passes the logical address of the extent tree's root node as its 'logical' argument - a value of 38928384; 4) Task A then enters reada_find_extent(), called from reada_add_block(). It finds there isn't any existing readahead extent for the logical address 38928384, so it proceeds to the path of creating a new one. It calls btrfs_map_block() to find out which stripes exist for the block group X. On the first iteration of the for loop that iterates over the stripes, it finds the stripe for device /dev/sdd, so it creates one zone for that device and adds it to the readahead extent. Before getting into the second iteration of the loop, the cleanup kthread deletes block group X because it was empty. So in the iterations for the remaining stripes it does not add more zones to the readahead extent, because the calls to reada_find_zone() returned NULL because they couldn't find block group X anymore. As a result the new readahead extent has a single zone, corresponding to the device /dev/sdd; 4) Before task A returns to btrfs_reada_add() and queues the readahead job for the readahead work queue, task B finishes the device replace and at btrfs_dev_replace_finishing() swaps the device /dev/sdd with the new device /dev/sdg; 5) Task A returns to reada_add_block(), which increments the counter "->elems" of the reada_control structure allocated at btrfs_reada_add(). Then it returns back to btrfs_reada_add() and calls reada_start_machine(). This queues a job in the readahead work queue to run the function reada_start_machine_worker(), which calls __reada_start_machine(). At __reada_start_machine() we take the device list mutex and for each device found in the current device list, we call reada_start_machine_dev() to start the readahead work. However at this point the device /dev/sdd was already freed and is not in the device list anymore. This means the corresponding readahead for the extent at 38928384 is never started, and therefore the "->elems" counter of the reada_control structure allocated at btrfs_reada_add() never goes down to 0, causing the call to btrfs_reada_wait(), done by the scrub task, to wait forever. Note that the readahead request can be made either after the device replace started or before it started, however in pratice it is very unlikely that a device replace is able to start after a readahead request is made and is able to complete before the readahead request completes - maybe only on a very small and nearly empty filesystem. This hang however is not the only problem we can have with readahead and device removals. When the readahead extent has other zones other than the one corresponding to the device that is being removed (either by a device replace or a device remove operation), we risk having a use-after-free on the device when dropping the last reference of the readahead extent. For example if we create a readahead extent with two zones, one for the device /dev/sdd and one for the device /dev/sde: 1) Before the readahead worker starts, the device /dev/sdd is removed, and the corresponding btrfs_device structure is freed. 
However the readahead extent still has the zone pointing to the device structure; 2) When the readahead worker starts, it only finds device /dev/sde in the current device list of the filesystem; 3) It starts the readahead work, at reada_start_machine_dev(), using the device /dev/sde; 4) Then when it finishes reading the extent from device /dev/sde, it calls __readahead_hook() which ends up dropping the last reference on the readahead extent through the last call to reada_extent_put(); 5) At reada_extent_put() it iterates over each zone of the readahead extent and attempts to delete an element from the device's 'reada_extents' radix tree, resulting in a use-after-free, as the device pointer of the zone for /dev/sdd is now stale. We can also access the device after dropping the last reference of a zone, through reada_zone_release(), also called by reada_extent_put(). And a device remove suffers the same problem, however since it shrinks the device size down to zero before removing the device, it is very unlikely to still have readahead requests not completed by the time we free the device, the only possibility is if the device has a very little space allocated. While the hang problem is exclusive to scrub, since it is currently the only user of btrfs_reada_add() and btrfs_reada_wait(), the use-after-free problem affects any path that triggers readhead, which includes btree_readahead_hook() and __readahead_hook() (a readahead worker can trigger readahed for the children of a node) for example - any path that ends up calling reada_add_block() can trigger the use-after-free after a device is removed. So fix this by waiting for any readahead requests for a device to complete before removing a device, ensuring that while waiting for existing ones no new ones can be made. This problem has been around for a very long time - the readahead code was added in 2011, device remove exists since 2008 and device replace was introduced in 2013, hard to pick a specific commit for a git Fixes tag. CC: [email protected] # 4.4+ Reviewed-by: Josef Bacik <[email protected]> Signed-off-by: Filipe Manana <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
Joern-P
pushed a commit
to Joern-P/kernel
that referenced
this issue
Jan 14, 2021
up_read() may wakeup some tasks, so do not call up_read() in scheduler, or it will cause deadlock as below: Thread rockchip-linux#4 5 (Name: cpu3, state: debug-request) (Suspended : Container) queued_spin_lock_slowpath() at qspinlock.c:369 0xffffff8008119120 queued_spin_lock() at qspinlock.h:88 0xffffff8008f0a470 do_raw_spin_lock() at spinlock.h:180 0xffffff8008f0a470 __raw_spin_lock() at spinlock_api_smp.h:143 0xffffff8008f0a470 _raw_spin_lock() at spinlock.c:144 0xffffff8008f0a470 rq_lock() at sched.h:1,244 0xffffff80080f2f4c ttwu_queue() at core.c:2,442 0xffffff80080f2f4c try_to_wake_up() at core.c:2,658 0xffffff80080eb998 wake_up_q() at core.c:450 0xffffff80080eb6a8 rwsem_wake() at rwsem-xadd.c:703 0xffffff800811a44c __up_read() at rwsem.h:107 0xffffff8008118930 up_read() at rwsem.c:122 0xffffff8008118930 cpufreq_task_boost() at cpufreq_interactive.c:1,449 0xffffff8008a4bdb4 enqueue_task_fair() at fair.c:5,285 0xffffff80080f7814 enqueue_task() at core.c:1,324 0xffffff80080ec15c activate_task() at core.c:1,346 0xffffff80080ec15c ttwu_activate() at core.c:2,240 0xffffff80080f2fc0 ttwu_do_activate() at core.c:2,299 0xffffff80080f2fc0 ttwu_queue() at core.c:2,444 0xffffff80080f2fc0 try_to_wake_up() at core.c:2,658 0xffffff80080eb998 wake_up_q() at core.c:450 0xffffff80080eb6a8 futex_wake() at futex.c:1,636 0xffffff8008159e78 do_futex() at futex.c:3,714 0xffffff8008158fb0 __do_sys_futex() at futex.c:3,770 0xffffff800815bd98 __se_sys_futex() at futex.c:3,738 0xffffff800815bd98 __arm64_sys_futex() at futex.c:3,738 0xffffff800815bd98 __invoke_syscall() at syscall.c:36 0xffffff8008098d6c invoke_syscall() at syscall.c:48 0xffffff8008098d6c el0_svc_common() at syscall.c:117 0xffffff8008098d6c el0_svc_handler() at syscall.c:163 0xffffff8008098ccc el0_svc() at entry.S:940 0xffffff8008083d08 Fixes: 205ed4e (cpufreq: interactive: introduce boost cpufreq interface for task) Change-Id: I9607faa5ede3a662e7f2f55da29b08fc328f4d43 Signed-off-by: Liang Chen <[email protected]>
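The underlying rule is generic: do not call a wakeup-capable primitive while already holding the lock that the wakeup path itself needs. A userspace analogue is below; it is purely illustrative, `rq_lock` is just a stand-in name, and an error-checking pthread mutex is used so the self-deadlock is reported instead of hanging the way the kernel's raw rq->lock would.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t rq_lock;

/* stands in for up_read() -> wake_up_q() -> rq->lock in the trace above */
static void wakeup_path(void)
{
	int err = pthread_mutex_lock(&rq_lock);
	if (err)
		printf("wakeup path: lock failed: %s\n", strerror(err));
	else
		pthread_mutex_unlock(&rq_lock);
}

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&rq_lock, &attr);

	pthread_mutex_lock(&rq_lock);   /* "scheduler" already holds the lock */
	wakeup_path();                  /* boost hook calls a wakeup-capable API */
	pthread_mutex_unlock(&rq_lock);
	return 0;
}
```

Compile with -pthread; the inner lock attempt fails with EDEADLK ("Resource deadlock avoided") instead of spinning forever, which is the situation the fix avoids by not calling up_read() from scheduler context.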
mcerveny
pushed a commit
to mcerveny/rockchip-linux
that referenced
this issue
Sep 18, 2021
up_read() may wakeup some tasks, so do not call up_read() in scheduler, or it will cause deadlock as below: Thread rockchip-linux#4 5 (Name: cpu3, state: debug-request) (Suspended : Container) queued_spin_lock_slowpath() at qspinlock.c:369 0xffffff8008119120 queued_spin_lock() at qspinlock.h:88 0xffffff8008f0a470 do_raw_spin_lock() at spinlock.h:180 0xffffff8008f0a470 __raw_spin_lock() at spinlock_api_smp.h:143 0xffffff8008f0a470 _raw_spin_lock() at spinlock.c:144 0xffffff8008f0a470 rq_lock() at sched.h:1,244 0xffffff80080f2f4c ttwu_queue() at core.c:2,442 0xffffff80080f2f4c try_to_wake_up() at core.c:2,658 0xffffff80080eb998 wake_up_q() at core.c:450 0xffffff80080eb6a8 rwsem_wake() at rwsem-xadd.c:703 0xffffff800811a44c __up_read() at rwsem.h:107 0xffffff8008118930 up_read() at rwsem.c:122 0xffffff8008118930 cpufreq_task_boost() at cpufreq_interactive.c:1,449 0xffffff8008a4bdb4 enqueue_task_fair() at fair.c:5,285 0xffffff80080f7814 enqueue_task() at core.c:1,324 0xffffff80080ec15c activate_task() at core.c:1,346 0xffffff80080ec15c ttwu_activate() at core.c:2,240 0xffffff80080f2fc0 ttwu_do_activate() at core.c:2,299 0xffffff80080f2fc0 ttwu_queue() at core.c:2,444 0xffffff80080f2fc0 try_to_wake_up() at core.c:2,658 0xffffff80080eb998 wake_up_q() at core.c:450 0xffffff80080eb6a8 futex_wake() at futex.c:1,636 0xffffff8008159e78 do_futex() at futex.c:3,714 0xffffff8008158fb0 __do_sys_futex() at futex.c:3,770 0xffffff800815bd98 __se_sys_futex() at futex.c:3,738 0xffffff800815bd98 __arm64_sys_futex() at futex.c:3,738 0xffffff800815bd98 __invoke_syscall() at syscall.c:36 0xffffff8008098d6c invoke_syscall() at syscall.c:48 0xffffff8008098d6c el0_svc_common() at syscall.c:117 0xffffff8008098d6c el0_svc_handler() at syscall.c:163 0xffffff8008098ccc el0_svc() at entry.S:940 0xffffff8008083d08 Fixes: 205ed4e (cpufreq: interactive: introduce boost cpufreq interface for task) Change-Id: I9607faa5ede3a662e7f2f55da29b08fc328f4d43 Signed-off-by: Liang Chen <[email protected]>
Joern-P
pushed a commit
to Joern-P/kernel
that referenced
this issue
Nov 11, 2021
[ Upstream commit 71a174b ] b6da31b "tty: Fix data race in tty_insert_flip_string_fixed_flag" puts tty_flip_buffer_push under port->lock introducing the following possible circular locking dependency: [30129.876566] ====================================================== [30129.876566] WARNING: possible circular locking dependency detected [30129.876567] 5.9.0-rc2+ rockchip-linux#3 Tainted: G S W [30129.876568] ------------------------------------------------------ [30129.876568] sysrq.sh/1222 is trying to acquire lock: [30129.876569] ffffffff92c39480 (console_owner){....}-{0:0}, at: console_unlock+0x3fe/0xa90 [30129.876572] but task is already holding lock: [30129.876572] ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876576] which lock already depends on the new lock. [30129.876577] the existing dependency chain (in reverse order) is: [30129.876578] -> rockchip-linux#3 (&pool->lock/1){-.-.}-{2:2}: [30129.876581] _raw_spin_lock+0x30/0x70 [30129.876581] __queue_work+0x1a3/0x10f0 [30129.876582] queue_work_on+0x78/0x80 [30129.876582] pty_write+0x165/0x1e0 [30129.876583] n_tty_write+0x47f/0xf00 [30129.876583] tty_write+0x3d6/0x8d0 [30129.876584] vfs_write+0x1a8/0x650 [30129.876588] -> rockchip-linux#2 (&port->lock#2){-.-.}-{2:2}: [30129.876590] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876591] tty_port_tty_get+0x1d/0xb0 [30129.876592] tty_port_default_wakeup+0xb/0x30 [30129.876592] serial8250_tx_chars+0x3d6/0x970 [30129.876593] serial8250_handle_irq.part.12+0x216/0x380 [30129.876593] serial8250_default_handle_irq+0x82/0xe0 [30129.876594] serial8250_interrupt+0xdd/0x1b0 [30129.876595] __handle_irq_event_percpu+0xfc/0x850 [30129.876602] -> rockchip-linux#1 (&port->lock){-.-.}-{2:2}: [30129.876605] _raw_spin_lock_irqsave+0x3b/0x80 [30129.876605] serial8250_console_write+0x12d/0x900 [30129.876606] console_unlock+0x679/0xa90 [30129.876606] register_console+0x371/0x6e0 [30129.876607] univ8250_console_init+0x24/0x27 [30129.876607] console_init+0x2f9/0x45e [30129.876609] -> #0 (console_owner){....}-{0:0}: [30129.876611] __lock_acquire+0x2f70/0x4e90 [30129.876612] lock_acquire+0x1ac/0xad0 [30129.876612] console_unlock+0x460/0xa90 [30129.876613] vprintk_emit+0x130/0x420 [30129.876613] printk+0x9f/0xc5 [30129.876614] show_pwq+0x154/0x618 [30129.876615] show_workqueue_state.cold.55+0x193/0x6ca [30129.876615] __handle_sysrq+0x244/0x460 [30129.876616] write_sysrq_trigger+0x48/0x4a [30129.876616] proc_reg_write+0x1a6/0x240 [30129.876617] vfs_write+0x1a8/0x650 [30129.876619] other info that might help us debug this: [30129.876620] Chain exists of: [30129.876621] console_owner --> &port->lock#2 --> &pool->lock/1 [30129.876625] Possible unsafe locking scenario: [30129.876626] CPU0 CPU1 [30129.876626] ---- ---- [30129.876627] lock(&pool->lock/1); [30129.876628] lock(&port->lock#2); [30129.876630] lock(&pool->lock/1); [30129.876631] lock(console_owner); [30129.876633] *** DEADLOCK *** [30129.876634] 5 locks held by sysrq.sh/1222: [30129.876634] #0: ffff8881d3ce0470 (sb_writers#3){.+.+}-{0:0}, at: vfs_write+0x359/0x650 [30129.876637] rockchip-linux#1: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: __handle_sysrq+0x4d/0x460 [30129.876640] rockchip-linux#2: ffffffff92c612c0 (rcu_read_lock){....}-{1:2}, at: show_workqueue_state+0x5/0xf0 [30129.876642] rockchip-linux#3: ffff888107cb9018 (&pool->lock/1){-.-.}-{2:2}, at: show_workqueue_state.cold.55+0x15b/0x6ca [30129.876645] rockchip-linux#4: ffffffff92c39980 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0x123/0x420 
[30129.876648] stack backtrace: [30129.876649] CPU: 3 PID: 1222 Comm: sysrq.sh Tainted: G S W 5.9.0-rc2+ rockchip-linux#3 [30129.876649] Hardware name: Intel Corporation 2012 Client Platform/Emerald Lake 2, BIOS ACRVMBY1.86C.0078.P00.1201161002 01/16/2012 [30129.876650] Call Trace: [30129.876650] dump_stack+0x9d/0xe0 [30129.876651] check_noncircular+0x34f/0x410 [30129.876653] __lock_acquire+0x2f70/0x4e90 [30129.876656] lock_acquire+0x1ac/0xad0 [30129.876658] console_unlock+0x460/0xa90 [30129.876660] vprintk_emit+0x130/0x420 [30129.876660] printk+0x9f/0xc5 [30129.876661] show_pwq+0x154/0x618 [30129.876662] show_workqueue_state.cold.55+0x193/0x6ca [30129.876664] __handle_sysrq+0x244/0x460 [30129.876665] write_sysrq_trigger+0x48/0x4a [30129.876665] proc_reg_write+0x1a6/0x240 [30129.876666] vfs_write+0x1a8/0x650 It looks like the commit was aimed to protect tty_insert_flip_string and there is no need for tty_flip_buffer_push to be under this lock. Change-Id: If836c7d5ac563c77794294b8e22772f1fa54858c Fixes: b6da31b ("tty: Fix data race in tty_insert_flip_string_fixed_flag") Signed-off-by: Artem Savkov <[email protected]> Acked-by: Jiri Slaby <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Sasha Levin <[email protected]> Signed-off-by: Shunqian Zheng <[email protected]> (cherry picked from commit 8908ffa)
Caesar-github
pushed a commit
that referenced
this issue
Sep 3, 2022
As per changes in include/linux/jbd_common.h for avoiding the bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal head lock rt safe") we do the same thing here. We use the non atomic __set_bit and __clear_bit inside the scope of the lock to preserve the ability of the existing LIST_DEBUG code to use the zero'th bit in the sanity checks. As a bit spinlock, we had no lockdep visibility into the usage of the list head locking. Now, if we were to implement it as a standard non-raw spinlock, we would see: BUG: sleeping function called from invalid context at kernel/rtmutex.c:658 in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd 5 locks held by udevd/122: #0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0 #1: (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60 #2: (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130 #3: (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130 #4: (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130 Pid: 122, comm: udevd Not tainted 3.4.47-rt62 #7 Call Trace: [<ffffffff810b9624>] __might_sleep+0x134/0x1f0 [<ffffffff817a24d4>] rt_spin_lock+0x24/0x60 [<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0 [<ffffffff811a1b2d>] __d_drop+0x1d/0x40 [<ffffffff811a24be>] __d_move+0x8e/0x320 [<ffffffff811a278e>] d_move+0x3e/0x60 [<ffffffff81199598>] vfs_rename+0x198/0x4c0 [<ffffffff8119b093>] sys_renameat+0x213/0x240 [<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60 [<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0 [<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13 [<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f [<ffffffff8119b0db>] sys_rename+0x1b/0x20 [<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f Since we are only taking the lock during short lived list operations, lets assume for now that it being raw won't be a significant latency concern. Signed-off-by: Paul Gortmaker <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Caesar-github
pushed a commit
that referenced
this issue
Sep 3, 2022
At first glance, the use of 'static inline' seems appropriate for INIT_HLIST_BL_HEAD(). However, when a 'static inline' function invocation is inlined by gcc, all callers share any static local data declared within that inline function. This presents a problem for how lockdep classes are setup. raw_spinlocks, for example, when CONFIG_DEBUG_SPINLOCK, # define raw_spin_lock_init(lock) \ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key); \ } while (0) When this macro is expanded into a 'static inline' caller, like INIT_HLIST_BL_HEAD(): static inline INIT_HLIST_BL_HEAD(struct hlist_bl_head *h) { h->first = NULL; raw_spin_lock_init(&h->lock); } ...the static local lock_class_key object is made a function static. For compilation units which initialize invoke INIT_HLIST_BL_HEAD() more than once, then, all of the invocations share this same static local object. This can lead to some very confusing lockdep splats (example below). Solve this problem by forcing the INIT_HLIST_BL_HEAD() to be a macro, which prevents the lockdep class object sharing. ============================================= [ INFO: possible recursive locking detected ] 4.4.4-rt11 #4 Not tainted --------------------------------------------- kswapd0/59 is trying to acquire lock: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan but task is already holding lock: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&h->lock#2); lock(&h->lock#2); *** DEADLOCK *** May be due to missing lock nesting notation 2 locks held by kswapd0/59: #0: (shrinker_rwsem){+.+...}, at: rt_down_read_trylock #1: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan Reported-by: Luis Claudio R. Goncalves <[email protected]> Tested-by: Luis Claudio R. Goncalves <[email protected]> Signed-off-by: Josh Cartwright <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
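The linkage detail the commit relies on can be checked entirely in userspace. In the sketch below (illustrative names, not the kernel headers), every call to the `static inline` helper within a translation unit hands back the same static object, while each macro expansion gets its own, which is exactly why the per-call-site lockdep keys need the macro form.

```c
#include <stdio.h>

/* static inline: one static local shared by every call site in this TU */
static inline int *shared_key(void)
{
	static int key;
	return &key;
}

/* macro: each expansion declares its own static object in the caller */
#define PER_SITE_KEY(out)              \
	do {                           \
		static int key;        \
		(out) = &key;          \
	} while (0)

int main(void)
{
	int *a = shared_key();
	int *b = shared_key();

	int *c, *d;
	PER_SITE_KEY(c);
	PER_SITE_KEY(d);

	printf("static inline: %p %p -> %s\n", (void *)a, (void *)b,
	       a == b ? "same object" : "distinct objects");
	printf("macro        : %p %p -> %s\n", (void *)c, (void *)d,
	       c == d ? "same object" : "distinct objects");
	return 0;
}
```

The first line prints identical addresses and the second distinct ones, mirroring why all INIT_HLIST_BL_HEAD() users in one compilation unit ended up sharing a single lock class before the change.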
Caesar-github
pushed a commit
that referenced
this issue
Sep 3, 2022
…ntext | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 | in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6 | 5 locks held by kworker/u257:6/255: | #0: ("events_unbound"){.+.+.+}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #1: ((&entry->work)){+.+.+.}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #2: (&shost->scan_mutex){+.+.+.}, at: [<ffffffffa000faa3>] __scsi_add_device+0xa3/0x130 [scsi_mod] | #3: (&set->tag_list_lock){+.+...}, at: [<ffffffff812f09fa>] blk_mq_init_queue+0x96a/0xa50 | #4: (rcu_read_lock_sched){......}, at: [<ffffffff8132887d>] percpu_ref_kill_and_confirm+0x1d/0x120 | Preemption disabled at:[<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | | CPU: 2 PID: 255 Comm: kworker/u257:6 Not tainted 3.18.7-rt0+ #1 | Workqueue: events_unbound async_run_entry_fn | 0000000000000003 ffff8800bc29f998 ffffffff815b3a12 0000000000000000 | 0000000000000000 ffff8800bc29f9b8 ffffffff8109aa16 ffff8800bc29fa28 | ffff8800bc5d1bc8 ffff8800bc29f9e8 ffffffff815b8dd4 ffff880000000000 | Call Trace: | [<ffffffff815b3a12>] dump_stack+0x4f/0x7c | [<ffffffff8109aa16>] __might_sleep+0x116/0x190 | [<ffffffff815b8dd4>] rt_spin_lock+0x24/0x60 | [<ffffffff810b6089>] __wake_up+0x29/0x60 | [<ffffffff812ee06e>] blk_mq_usage_counter_release+0x1e/0x20 | [<ffffffff81328966>] percpu_ref_kill_and_confirm+0x106/0x120 | [<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | [<ffffffff812f0000>] blk_mq_update_tag_set_depth+0x40/0xd0 | [<ffffffff812f0a1c>] blk_mq_init_queue+0x98c/0xa50 | [<ffffffffa000dcf0>] scsi_mq_alloc_queue+0x20/0x60 [scsi_mod] | [<ffffffffa000ea35>] scsi_alloc_sdev+0x2f5/0x370 [scsi_mod] | [<ffffffffa000f494>] scsi_probe_and_add_lun+0x9e4/0xdd0 [scsi_mod] | [<ffffffffa000fb26>] __scsi_add_device+0x126/0x130 [scsi_mod] | [<ffffffffa013033f>] ata_scsi_scan_host+0xaf/0x200 [libata] | [<ffffffffa012b5b6>] async_port_probe+0x46/0x60 [libata] | [<ffffffff810978fb>] async_run_entry_fn+0x3b/0xf0 | [<ffffffff8108ee81>] process_one_work+0x201/0x5e0 percpu_ref_kill_and_confirm() invokes blk_mq_usage_counter_release() in a rcu-sched region. swait based wake queue can't be used due to wake_up_all() usage and disabled interrupts in !RT configs (as reported by Corey Minyard). The wq_has_sleeper() check has been suggested by Peter Zijlstra. Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Caesar-github
pushed a commit
that referenced
this issue
Sep 3, 2022
The two commits below add up to a cpuset might_sleep() splat for RT: 8447a0f cpuset: convert callback_mutex to a spinlock 344736f cpuset: simplify cpuset_node_allowed API BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:995 in_atomic(): 0, irqs_disabled(): 1, pid: 11718, name: cset CPU: 135 PID: 11718 Comm: cset Tainted: G E 4.10.0-rt1-rt #4 Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0056.R01.1409242327 09/24/2014 Call Trace: ? dump_stack+0x5c/0x81 ? ___might_sleep+0xf4/0x170 ? rt_spin_lock+0x1c/0x50 ? __cpuset_node_allowed+0x66/0xc0 ? ___slab_alloc+0x390/0x570 <disables IRQs> ? anon_vma_fork+0x8f/0x140 ? copy_page_range+0x6cf/0xb00 ? anon_vma_fork+0x8f/0x140 ? __slab_alloc.isra.74+0x5a/0x81 ? anon_vma_fork+0x8f/0x140 ? kmem_cache_alloc+0x1b5/0x1f0 ? anon_vma_fork+0x8f/0x140 ? copy_process.part.35+0x1670/0x1ee0 ? _do_fork+0xdd/0x3f0 ? _do_fork+0xdd/0x3f0 ? do_syscall_64+0x61/0x170 ? entry_SYSCALL64_slow_path+0x25/0x25 The later ensured that a NUMA box WILL take callback_lock in atomic context by removing the allocator and reclaim path __GFP_HARDWALL usage which prevented such contexts from taking callback_mutex. One option would be to reinstate __GFP_HARDWALL protections for RT, however, as the 8447a0f changelog states: The callback_mutex is only used to synchronize reads/updates of cpusets' flags and cpu/node masks. These operations should always proceed fast so there's no reason why we can't use a spinlock instead of the mutex. Cc: [email protected] Signed-off-by: Mike Galbraith <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Caesar-github
pushed a commit
that referenced
this issue
Sep 3, 2022
…ntext [ Upstream commit 61c928ecf4fe200bda9b49a0813b5ba0f43995b5 ] | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 | in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6 | 5 locks held by kworker/u257:6/255: | #0: ("events_unbound"){.+.+.+}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #1: ((&entry->work)){+.+.+.}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #2: (&shost->scan_mutex){+.+.+.}, at: [<ffffffffa000faa3>] __scsi_add_device+0xa3/0x130 [scsi_mod] | #3: (&set->tag_list_lock){+.+...}, at: [<ffffffff812f09fa>] blk_mq_init_queue+0x96a/0xa50 | #4: (rcu_read_lock_sched){......}, at: [<ffffffff8132887d>] percpu_ref_kill_and_confirm+0x1d/0x120 | Preemption disabled at:[<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | | CPU: 2 PID: 255 Comm: kworker/u257:6 Not tainted 3.18.7-rt0+ #1 | Workqueue: events_unbound async_run_entry_fn | 0000000000000003 ffff8800bc29f998 ffffffff815b3a12 0000000000000000 | 0000000000000000 ffff8800bc29f9b8 ffffffff8109aa16 ffff8800bc29fa28 | ffff8800bc5d1bc8 ffff8800bc29f9e8 ffffffff815b8dd4 ffff880000000000 | Call Trace: | [<ffffffff815b3a12>] dump_stack+0x4f/0x7c | [<ffffffff8109aa16>] __might_sleep+0x116/0x190 | [<ffffffff815b8dd4>] rt_spin_lock+0x24/0x60 | [<ffffffff810b6089>] __wake_up+0x29/0x60 | [<ffffffff812ee06e>] blk_mq_usage_counter_release+0x1e/0x20 | [<ffffffff81328966>] percpu_ref_kill_and_confirm+0x106/0x120 | [<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | [<ffffffff812f0000>] blk_mq_update_tag_set_depth+0x40/0xd0 | [<ffffffff812f0a1c>] blk_mq_init_queue+0x98c/0xa50 | [<ffffffffa000dcf0>] scsi_mq_alloc_queue+0x20/0x60 [scsi_mod] | [<ffffffffa000ea35>] scsi_alloc_sdev+0x2f5/0x370 [scsi_mod] | [<ffffffffa000f494>] scsi_probe_and_add_lun+0x9e4/0xdd0 [scsi_mod] | [<ffffffffa000fb26>] __scsi_add_device+0x126/0x130 [scsi_mod] | [<ffffffffa013033f>] ata_scsi_scan_host+0xaf/0x200 [libata] | [<ffffffffa012b5b6>] async_port_probe+0x46/0x60 [libata] | [<ffffffff810978fb>] async_run_entry_fn+0x3b/0xf0 | [<ffffffff8108ee81>] process_one_work+0x201/0x5e0 percpu_ref_kill_and_confirm() invokes blk_mq_usage_counter_release() in a rcu-sched region. swait based wake queue can't be used due to wake_up_all() usage and disabled interrupts in !RT configs (as reported by Corey Minyard). The wq_has_sleeper() check has been suggested by Peter Zijlstra. Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Caesar-github
pushed a commit
that referenced
this issue
Nov 3, 2022
The two commits below add up to a cpuset might_sleep() splat for RT: 8447a0f cpuset: convert callback_mutex to a spinlock 344736f cpuset: simplify cpuset_node_allowed API BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:995 in_atomic(): 0, irqs_disabled(): 1, pid: 11718, name: cset CPU: 135 PID: 11718 Comm: cset Tainted: G E 4.10.0-rt1-rt #4 Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0056.R01.1409242327 09/24/2014 Call Trace: ? dump_stack+0x5c/0x81 ? ___might_sleep+0xf4/0x170 ? rt_spin_lock+0x1c/0x50 ? __cpuset_node_allowed+0x66/0xc0 ? ___slab_alloc+0x390/0x570 <disables IRQs> ? anon_vma_fork+0x8f/0x140 ? copy_page_range+0x6cf/0xb00 ? anon_vma_fork+0x8f/0x140 ? __slab_alloc.isra.74+0x5a/0x81 ? anon_vma_fork+0x8f/0x140 ? kmem_cache_alloc+0x1b5/0x1f0 ? anon_vma_fork+0x8f/0x140 ? copy_process.part.35+0x1670/0x1ee0 ? _do_fork+0xdd/0x3f0 ? _do_fork+0xdd/0x3f0 ? do_syscall_64+0x61/0x170 ? entry_SYSCALL64_slow_path+0x25/0x25 The later ensured that a NUMA box WILL take callback_lock in atomic context by removing the allocator and reclaim path __GFP_HARDWALL usage which prevented such contexts from taking callback_mutex. One option would be to reinstate __GFP_HARDWALL protections for RT, however, as the 8447a0f changelog states: The callback_mutex is only used to synchronize reads/updates of cpusets' flags and cpu/node masks. These operations should always proceed fast so there's no reason why we can't use a spinlock instead of the mutex. Cc: [email protected] Signed-off-by: Mike Galbraith <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 14, 2022
As per changes in include/linux/jbd_common.h for avoiding the bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal head lock rt safe") we do the same thing here. We use the non atomic __set_bit and __clear_bit inside the scope of the lock to preserve the ability of the existing LIST_DEBUG code to use the zero'th bit in the sanity checks. As a bit spinlock, we had no lockdep visibility into the usage of the list head locking. Now, if we were to implement it as a standard non-raw spinlock, we would see: BUG: sleeping function called from invalid context at kernel/rtmutex.c:658 in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd 5 locks held by udevd/122: #0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0 #1: (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60 #2: (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130 rockchip-linux#3: (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130 rockchip-linux#4: (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130 Pid: 122, comm: udevd Not tainted 3.4.47-rt62 rockchip-linux#7 Call Trace: [<ffffffff810b9624>] __might_sleep+0x134/0x1f0 [<ffffffff817a24d4>] rt_spin_lock+0x24/0x60 [<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0 [<ffffffff811a1b2d>] __d_drop+0x1d/0x40 [<ffffffff811a24be>] __d_move+0x8e/0x320 [<ffffffff811a278e>] d_move+0x3e/0x60 [<ffffffff81199598>] vfs_rename+0x198/0x4c0 [<ffffffff8119b093>] sys_renameat+0x213/0x240 [<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60 [<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0 [<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13 [<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f [<ffffffff8119b0db>] sys_rename+0x1b/0x20 [<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f Since we are only taking the lock during short lived list operations, lets assume for now that it being raw won't be a significant latency concern. Signed-off-by: Paul Gortmaker <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 14, 2022
At first glance, the use of 'static inline' seems appropriate for INIT_HLIST_BL_HEAD(). However, when a 'static inline' function invocation is inlined by gcc, all callers share any static local data declared within that inline function. This presents a problem for how lockdep classes are setup. raw_spinlocks, for example, when CONFIG_DEBUG_SPINLOCK, # define raw_spin_lock_init(lock) \ do { \ static struct lock_class_key __key; \ \ __raw_spin_lock_init((lock), #lock, &__key); \ } while (0) When this macro is expanded into a 'static inline' caller, like INIT_HLIST_BL_HEAD(): static inline INIT_HLIST_BL_HEAD(struct hlist_bl_head *h) { h->first = NULL; raw_spin_lock_init(&h->lock); } ...the static local lock_class_key object is made a function static. For compilation units which initialize invoke INIT_HLIST_BL_HEAD() more than once, then, all of the invocations share this same static local object. This can lead to some very confusing lockdep splats (example below). Solve this problem by forcing the INIT_HLIST_BL_HEAD() to be a macro, which prevents the lockdep class object sharing. ============================================= [ INFO: possible recursive locking detected ] 4.4.4-rt11 rockchip-linux#4 Not tainted --------------------------------------------- kswapd0/59 is trying to acquire lock: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan but task is already holding lock: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&h->lock#2); lock(&h->lock#2); *** DEADLOCK *** May be due to missing lock nesting notation 2 locks held by kswapd0/59: #0: (shrinker_rwsem){+.+...}, at: rt_down_read_trylock #1: (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan Reported-by: Luis Claudio R. Goncalves <[email protected]> Tested-by: Luis Claudio R. Goncalves <[email protected]> Signed-off-by: Josh Cartwright <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 14, 2022
…ntext | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 | in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6 | 5 locks held by kworker/u257:6/255: | #0: ("events_unbound"){.+.+.+}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #1: ((&entry->work)){+.+.+.}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #2: (&shost->scan_mutex){+.+.+.}, at: [<ffffffffa000faa3>] __scsi_add_device+0xa3/0x130 [scsi_mod] | rockchip-linux#3: (&set->tag_list_lock){+.+...}, at: [<ffffffff812f09fa>] blk_mq_init_queue+0x96a/0xa50 | rockchip-linux#4: (rcu_read_lock_sched){......}, at: [<ffffffff8132887d>] percpu_ref_kill_and_confirm+0x1d/0x120 | Preemption disabled at:[<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | | CPU: 2 PID: 255 Comm: kworker/u257:6 Not tainted 3.18.7-rt0+ #1 | Workqueue: events_unbound async_run_entry_fn | 0000000000000003 ffff8800bc29f998 ffffffff815b3a12 0000000000000000 | 0000000000000000 ffff8800bc29f9b8 ffffffff8109aa16 ffff8800bc29fa28 | ffff8800bc5d1bc8 ffff8800bc29f9e8 ffffffff815b8dd4 ffff880000000000 | Call Trace: | [<ffffffff815b3a12>] dump_stack+0x4f/0x7c | [<ffffffff8109aa16>] __might_sleep+0x116/0x190 | [<ffffffff815b8dd4>] rt_spin_lock+0x24/0x60 | [<ffffffff810b6089>] __wake_up+0x29/0x60 | [<ffffffff812ee06e>] blk_mq_usage_counter_release+0x1e/0x20 | [<ffffffff81328966>] percpu_ref_kill_and_confirm+0x106/0x120 | [<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | [<ffffffff812f0000>] blk_mq_update_tag_set_depth+0x40/0xd0 | [<ffffffff812f0a1c>] blk_mq_init_queue+0x98c/0xa50 | [<ffffffffa000dcf0>] scsi_mq_alloc_queue+0x20/0x60 [scsi_mod] | [<ffffffffa000ea35>] scsi_alloc_sdev+0x2f5/0x370 [scsi_mod] | [<ffffffffa000f494>] scsi_probe_and_add_lun+0x9e4/0xdd0 [scsi_mod] | [<ffffffffa000fb26>] __scsi_add_device+0x126/0x130 [scsi_mod] | [<ffffffffa013033f>] ata_scsi_scan_host+0xaf/0x200 [libata] | [<ffffffffa012b5b6>] async_port_probe+0x46/0x60 [libata] | [<ffffffff810978fb>] async_run_entry_fn+0x3b/0xf0 | [<ffffffff8108ee81>] process_one_work+0x201/0x5e0 percpu_ref_kill_and_confirm() invokes blk_mq_usage_counter_release() in a rcu-sched region. swait based wake queue can't be used due to wake_up_all() usage and disabled interrupts in !RT configs (as reported by Corey Minyard). The wq_has_sleeper() check has been suggested by Peter Zijlstra. Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
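A sketch of the guarded wake-up the last paragraph refers to, assuming the release callback hangs off request_queue::q_usage_counter and the freeze waiters sleep on q->mq_freeze_wq as the trace suggests (illustrative, not the exact patch):

        static void blk_mq_usage_counter_release(struct percpu_ref *ref)
        {
                struct request_queue *q =
                        container_of(ref, struct request_queue, q_usage_counter);

                /*
                 * Only take the waitqueue lock -- a sleeping lock on RT --
                 * when somebody is actually waiting for the queue to freeze;
                 * wq_has_sleeper() issues a full memory barrier before the
                 * waitqueue_active() check.
                 */
                if (wq_has_sleeper(&q->mq_freeze_wq))
                        wake_up_all(&q->mq_freeze_wq);
        }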
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 14, 2022
The two commits below add up to a cpuset might_sleep() splat for RT: 8447a0f cpuset: convert callback_mutex to a spinlock 344736f cpuset: simplify cpuset_node_allowed API BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:995 in_atomic(): 0, irqs_disabled(): 1, pid: 11718, name: cset CPU: 135 PID: 11718 Comm: cset Tainted: G E 4.10.0-rt1-rt rockchip-linux#4 Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0056.R01.1409242327 09/24/2014 Call Trace: ? dump_stack+0x5c/0x81 ? ___might_sleep+0xf4/0x170 ? rt_spin_lock+0x1c/0x50 ? __cpuset_node_allowed+0x66/0xc0 ? ___slab_alloc+0x390/0x570 <disables IRQs> ? anon_vma_fork+0x8f/0x140 ? copy_page_range+0x6cf/0xb00 ? anon_vma_fork+0x8f/0x140 ? __slab_alloc.isra.74+0x5a/0x81 ? anon_vma_fork+0x8f/0x140 ? kmem_cache_alloc+0x1b5/0x1f0 ? anon_vma_fork+0x8f/0x140 ? copy_process.part.35+0x1670/0x1ee0 ? _do_fork+0xdd/0x3f0 ? _do_fork+0xdd/0x3f0 ? do_syscall_64+0x61/0x170 ? entry_SYSCALL64_slow_path+0x25/0x25 The later ensured that a NUMA box WILL take callback_lock in atomic context by removing the allocator and reclaim path __GFP_HARDWALL usage which prevented such contexts from taking callback_mutex. One option would be to reinstate __GFP_HARDWALL protections for RT, however, as the 8447a0f changelog states: The callback_mutex is only used to synchronize reads/updates of cpusets' flags and cpu/node masks. These operations should always proceed fast so there's no reason why we can't use a spinlock instead of the mutex. Cc: [email protected] Signed-off-by: Mike Galbraith <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
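A minimal sketch of the direction those changelogs point at for RT: keep the cpuset flag/mask lock spinlock-class (shown here as a raw_spinlock_t so the allocator path can take it with interrupts off). The helper name and the nodemask check are stand-ins, not the real __cpuset_node_allowed():

        /* illustrative sketch, not the actual kernel/cpuset.c change */
        static DEFINE_RAW_SPINLOCK(callback_lock);   /* replaces callback_mutex */

        /* may be reached from the slab allocator with IRQs already disabled */
        static bool cpuset_node_allowed_sketch(int node)
        {
                unsigned long flags;
                bool allowed;

                raw_spin_lock_irqsave(&callback_lock, flags);
                /* stand-in for the real mems_allowed lookup */
                allowed = node_isset(node, node_states[N_MEMORY]);
                raw_spin_unlock_irqrestore(&callback_lock, flags);

                return allowed;
        }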
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 14, 2022
…ntext [ Upstream commit 61c928ecf4fe200bda9b49a0813b5ba0f43995b5 ] | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 | in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6 | 5 locks held by kworker/u257:6/255: | #0: ("events_unbound"){.+.+.+}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #1: ((&entry->work)){+.+.+.}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0 | #2: (&shost->scan_mutex){+.+.+.}, at: [<ffffffffa000faa3>] __scsi_add_device+0xa3/0x130 [scsi_mod] | rockchip-linux#3: (&set->tag_list_lock){+.+...}, at: [<ffffffff812f09fa>] blk_mq_init_queue+0x96a/0xa50 | rockchip-linux#4: (rcu_read_lock_sched){......}, at: [<ffffffff8132887d>] percpu_ref_kill_and_confirm+0x1d/0x120 | Preemption disabled at:[<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | | CPU: 2 PID: 255 Comm: kworker/u257:6 Not tainted 3.18.7-rt0+ #1 | Workqueue: events_unbound async_run_entry_fn | 0000000000000003 ffff8800bc29f998 ffffffff815b3a12 0000000000000000 | 0000000000000000 ffff8800bc29f9b8 ffffffff8109aa16 ffff8800bc29fa28 | ffff8800bc5d1bc8 ffff8800bc29f9e8 ffffffff815b8dd4 ffff880000000000 | Call Trace: | [<ffffffff815b3a12>] dump_stack+0x4f/0x7c | [<ffffffff8109aa16>] __might_sleep+0x116/0x190 | [<ffffffff815b8dd4>] rt_spin_lock+0x24/0x60 | [<ffffffff810b6089>] __wake_up+0x29/0x60 | [<ffffffff812ee06e>] blk_mq_usage_counter_release+0x1e/0x20 | [<ffffffff81328966>] percpu_ref_kill_and_confirm+0x106/0x120 | [<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70 | [<ffffffff812f0000>] blk_mq_update_tag_set_depth+0x40/0xd0 | [<ffffffff812f0a1c>] blk_mq_init_queue+0x98c/0xa50 | [<ffffffffa000dcf0>] scsi_mq_alloc_queue+0x20/0x60 [scsi_mod] | [<ffffffffa000ea35>] scsi_alloc_sdev+0x2f5/0x370 [scsi_mod] | [<ffffffffa000f494>] scsi_probe_and_add_lun+0x9e4/0xdd0 [scsi_mod] | [<ffffffffa000fb26>] __scsi_add_device+0x126/0x130 [scsi_mod] | [<ffffffffa013033f>] ata_scsi_scan_host+0xaf/0x200 [libata] | [<ffffffffa012b5b6>] async_port_probe+0x46/0x60 [libata] | [<ffffffff810978fb>] async_run_entry_fn+0x3b/0xf0 | [<ffffffff8108ee81>] process_one_work+0x201/0x5e0 percpu_ref_kill_and_confirm() invokes blk_mq_usage_counter_release() in a rcu-sched region. swait based wake queue can't be used due to wake_up_all() usage and disabled interrupts in !RT configs (as reported by Corey Minyard). The wq_has_sleeper() check has been suggested by Peter Zijlstra. Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Dec 8, 2023
Example: RK3588 Use I2S2_2CH as Clk-Gen to serve TDM_MULTI_LANES I2S2_2CH ----> BCLK,I2S_LRCK --------> I2S0_8CH_TX (Slave TRCM-TXONLY) | |--------> BCLK,TDM_SYNC --------> TDM Device (Slave) Note: I2S2_2CH_MCLK: BCLK I2S2_2CH_SCLK: I2S_LRCK (GPIO2_B7) I2S2_2CH_LRCK: TDM_SYNC (GPIO2_C0) DT: &i2s0_8ch { status = "okay"; assigned-clocks = <&cru I2S0_8CH_MCLKOUT>; assigned-clock-parents = <&cru MCLK_I2S0_8CH_TX>; i2s-lrck-gpio = <&gpio1 RK_PC5 GPIO_ACTIVE_HIGH>; tdm-fsync-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_HIGH>; rockchip,tdm-multi-lanes; rockchip,tdm-tx-lanes = <2>; //e.g. TDM16 x 2 rockchip,tdm-rx-lanes = <2>; //e.g. TDM16 x 2 rockchip,clk-src = <&i2s2_2ch>; pinctrl-names = "default"; pinctrl-0 = <&i2s0_lrck &i2s0_sclk &i2s0_sdi0 &i2s0_sdi1 &i2s0_sdo0 &i2s0_sdo1>; }; &i2s2_2ch { status = "okay"; assigned-clocks = <&cru I2S2_2CH_MCLKOUT>; assigned-clock-parents = <&cru MCLK_I2S2_2CH>; pinctrl-names = "default"; pinctrl-0 = <&i2s2m0_mclk &i2s2m0_lrck &i2s2m0_sclk>; }; Usage: TDM16 x 2 Playback amixer contents numid=3,iface=MIXER,name='Receive SDIx Select' ; type=ENUMERATED,access=rw------,values=1,items=5 ; Item #0 'Auto' ; Item #1 'SDIx1' ; Item #2 'SDIx2' ; Item rockchip-linux#3 'SDIx3' ; Item rockchip-linux#4 'SDIx4' : values=0 numid=2,iface=MIXER,name='Transmit SDOx Select' ; type=ENUMERATED,access=rw------,values=1,items=5 ; Item #0 'Auto' ; Item #1 'SDOx1' ; Item #2 'SDOx2' ; Item rockchip-linux#3 'SDOx3' ; Item rockchip-linux#4 'SDOx4' : values=0 /# amixer sset "Transmit SDOx Select" "SDOx2" Simple mixer control 'Transmit SDOx Select',0 Capabilities: enum Items: 'Auto' 'SDOx1' 'SDOx2' 'SDOx3' 'SDOx4' Item0: 'SDOx2' /# aplay -D hw:0,0 --period-size=1024 --buffer-size=4096 -r 48000 \ -c 32 -f s32_le /dev/zero Signed-off-by: Sugar Zhang <[email protected]> Change-Id: I6996e05c73a9d68bbeb9562eb6e68e4c99b52d85
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Jan 10, 2024
Example: RK3588 Use I2S2_2CH as Clk-Gen to serve TDM_MULTI_LANES I2S2_2CH ----> BCLK,I2S_LRCK --------> I2S0_8CH_TX (Slave TRCM-TXONLY) | |--------> BCLK,TDM_SYNC --------> TDM Device (Slave) Note: I2S2_2CH_MCLK: BCLK I2S2_2CH_SCLK: I2S_LRCK (GPIO2_B7) I2S2_2CH_LRCK: TDM_SYNC (GPIO2_C0) DT: &i2s0_8ch { status = "okay"; assigned-clocks = <&cru I2S0_8CH_MCLKOUT>; assigned-clock-parents = <&cru MCLK_I2S0_8CH_TX>; i2s-lrck-gpio = <&gpio1 RK_PC5 GPIO_ACTIVE_HIGH>; tdm-fsync-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_HIGH>; rockchip,tdm-multi-lanes; rockchip,tdm-tx-lanes = <2>; //e.g. TDM16 x 2 rockchip,tdm-rx-lanes = <2>; //e.g. TDM16 x 2 rockchip,clk-src = <&i2s2_2ch>; pinctrl-names = "default"; pinctrl-0 = <&i2s0_lrck &i2s0_sclk &i2s0_sdi0 &i2s0_sdi1 &i2s0_sdo0 &i2s0_sdo1>; }; &i2s2_2ch { status = "okay"; assigned-clocks = <&cru I2S2_2CH_MCLKOUT>; assigned-clock-parents = <&cru MCLK_I2S2_2CH>; pinctrl-names = "default"; pinctrl-0 = <&i2s2m0_mclk &i2s2m0_lrck &i2s2m0_sclk>; }; Usage: TDM16 x 2 Playback amixer contents numid=3,iface=MIXER,name='Receive SDIx Select' ; type=ENUMERATED,access=rw------,values=1,items=5 ; Item #0 'Auto' ; Item #1 'SDIx1' ; Item #2 'SDIx2' ; Item rockchip-linux#3 'SDIx3' ; Item rockchip-linux#4 'SDIx4' : values=0 numid=2,iface=MIXER,name='Transmit SDOx Select' ; type=ENUMERATED,access=rw------,values=1,items=5 ; Item #0 'Auto' ; Item #1 'SDOx1' ; Item #2 'SDOx2' ; Item rockchip-linux#3 'SDOx3' ; Item rockchip-linux#4 'SDOx4' : values=0 /# amixer sset "Transmit SDOx Select" "SDOx2" Simple mixer control 'Transmit SDOx Select',0 Capabilities: enum Items: 'Auto' 'SDOx1' 'SDOx2' 'SDOx3' 'SDOx4' Item0: 'SDOx2' /# aplay -D hw:0,0 --period-size=1024 --buffer-size=4096 -r 48000 \ -c 32 -f s32_le /dev/zero Signed-off-by: Sugar Zhang <[email protected]> Change-Id: I6996e05c73a9d68bbeb9562eb6e68e4c99b52d85
hejiawencc
pushed a commit
to LubanCat/kernel
that referenced
this issue
Jan 10, 2024
This patch add support for DMA-based digital loopback. BACKGROUND Audio Products with AEC require loopback for echo cancellation. the hardware LP is not always available on some products, maybe the HW limitation(such as internal acodec) or HW Cost-down. This patch add support software DLP for such products. Enable: CONFIG_SND_SOC_ROCKCHIP_DLP &i2s { rockchip,digital-loopback; }; Mode List: amixer contents numid=2,iface=MIXER,name='Software Digital Loopback Mode' ; type=ENUMERATED,access=rw------,values=1,items=7 ; Item #0 'Disabled' ; Item #1 '2CH: 1 Loopback + 1 Mic' ; Item #2 '2CH: 1 Mic + 1 Loopback' ; Item rockchip-linux#3 '2CH: 1 Mic + 1 Loopback-mixed' ; Item rockchip-linux#4 '2CH: 2 Loopbacks' ; Item rockchip-linux#5 '4CH: 2 Mics + 2 Loopbacks' ; Item rockchip-linux#6 '4CH: 2 Mics + 1 Loopback-mixed' : values=0 Testenv: wired SDO0 --> SDI0 directly to get external digital loopback as reference. Testcase: dlp.sh /#!/bin/sh item=0 id=`amixer contents | grep "Software Digital Loopback" | \ awk -F ',' '{print $1}'` items=`amixer contents | grep -A 1 "Software Digital Loopback" | \ grep items | awk -F 'items=' '{print $2}'` echo "Software Digital Loopback: $id, items: $items" mode_chs() { case $1 in [0-4]) echo "2" ;; [5-6]) echo "4" ;; *) echo "2" ;; esac } while true do ch=`mode_chs $item` amixer -c 0 cset $id $item arecord -D hw:0,0 --period-size=1024 --buffer-size=4096 -r 48000 -c $ch -f s16_le \ -d 15 sine/dlp_$item.wav & sleep 2 for i in $(seq 1 10) do aplay -D hw:0,0 --period-size=1024 --buffer-size=8192 $((ch))ch.wav -d 1 done pid=$(ps | egrep "aplay|arecord" | grep -v grep | awk '{print $1}' | sort -r) for p in $pid do wait $p 2>/dev/null done item=$((item+1)) if [ $item -ge $items ]; then sleep 1 break fi done echo "Done" Result: do shell test and verify dlp_x.wav: * Alignment: ~1 samples shift (loopback <-> mics). * Integrity: no giltch, no data lost. * AEC: align loopback and mics sample and do simple AEC, get clean waveform. Logs: ... numid=2,iface=MIXER,name='Software Digital Loopback Mode' ; type=ENUMERATED,access=rw------,values=1,items=7 ; Item #0 'Disabled' ; Item #1 '2CH: 1 Loopback + 1 Mic' ; Item #2 '2CH: 1 Mic + 1 Loopback' ; Item rockchip-linux#3 '2CH: 1 Mic + 1 Loopback-mixed' ; Item rockchip-linux#4 '2CH: 2 Loopbacks' ; Item rockchip-linux#5 '4CH: 2 Mics + 2 Loopbacks' ; Item rockchip-linux#6 '4CH: 2 Mics + 1 Loopback-mixed' : values=2 Recording WAVE 'sine/dlp_2.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo Playing WAVE '2ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo ... 
numid=2,iface=MIXER,name='Software Digital Loopback Mode' ; type=ENUMERATED,access=rw------,values=1,items=7 ; Item #0 'Disabled' ; Item #1 '2CH: 1 Loopback + 1 Mic' ; Item #2 '2CH: 1 Mic + 1 Loopback' ; Item rockchip-linux#3 '2CH: 1 Mic + 1 Loopback-mixed' ; Item rockchip-linux#4 '2CH: 2 Loopbacks' ; Item rockchip-linux#5 '4CH: 2 Mics + 2 Loopbacks' ; Item rockchip-linux#6 '4CH: 2 Mics + 1 Loopback-mixed' : values=6 Recording WAVE 'sine/dlp_6.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Playing WAVE '4ch.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Channels 4 Done Signed-off-by: Sugar Zhang <[email protected]> Change-Id: I5772f0694f7a14a0f0bd1f0777b6c4cdbd781a64
root@x:/build/koala/linux_sdk# ./build/mk-kernel.sh rk3288-koala
Building kernel for rk3288-koala board!
Using rk3288-koala_defconfig
configuration written to .config
scripts/kconfig/conf --silentoldconfig Kconfig
make[1]: 'arch/arm/boot/dts/rk3288-koala.dtb' is up to date.
CHK include/config/kernel.release
CHK include/generated/uapi/linux/version.h
CHK include/generated/utsrelease.h
make[1]: 'include/generated/mach-types.h' is up to date.
CHK include/generated/timeconst.h
CHK include/generated/bounds.h
CHK include/generated/asm-offsets.h
CALL scripts/checksyscalls.sh
CHK include/generated/compile.h
GZIP kernel/config_data.gz
CHK kernel/config_data.h
CC drivers/media/video/rk_camsys/camsys_drv.o
CC drivers/media/video/rk_camsys/camsys_marvin.o
CC drivers/media/video/rk_camsys/camsys_mipicsi_phy.o
CC drivers/media/video/rk_camsys/camsys_soc_priv.o
CC drivers/media/video/rk_camsys/camsys_soc_rk3288.o
CC drivers/media/video/rk_camsys/camsys_soc_rk3368.o
CC drivers/media/video/rk_camsys/camsys_soc_rk3399.o
CC drivers/media/video/rk_camsys/ext_flashled_drv/rk_ext_fshled_ctl.o
LD drivers/media/video/rk_camsys/ext_flashled_drv/built-in.o
LD drivers/media/video/rk_camsys/built-in.o
LD drivers/media/built-in.o
LD drivers/built-in.o
LINK vmlinux
LD vmlinux.o
MODPOST vmlinux.o
GEN .version
CHK include/generated/compile.h
UPD include/generated/compile.h
CC init/version.o
LD init/built-in.o
drivers/built-in.o: In function 'camsys_soc_init':
/build/koala/linux_sdk/kernel/drivers/media/video/rk_camsys/camsys_soc_priv.c:61: undefined reference to 'camsys_rk3368_cfg'
/build/koala/linux_sdk/kernel/drivers/media/video/rk_camsys/camsys_soc_priv.c:61: undefined reference to 'rockchip_soc_id'
/build/koala/linux_sdk/kernel/drivers/media/video/rk_camsys/camsys_soc_priv.c:61: undefined reference to 'camsys_rk3366_cfg'
/build/koala/linux_sdk/kernel/drivers/media/video/rk_camsys/camsys_soc_priv.c:61: undefined reference to 'camsys_rk3399_cfg'
Makefile:947: recipe for target 'vmlinux' failed
make: *** [vmlinux] Error 1
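For context on the failure itself: link errors like these usually mean camsys_soc_priv.c references per-SoC symbols unconditionally while their definitions are compiled only under certain SoC/arch config options. A hypothetical illustration of that pattern (not the actual rk_camsys sources; the guard and the function signature are made up):

        /* camsys_soc_rk3368.c -- definition only emitted on some configs */
        #ifdef CONFIG_ARM64                              /* hypothetical guard */
        int camsys_rk3368_cfg(void *cfg, unsigned int cmd)  /* made-up signature */
        {
                return 0;
        }
        #endif

        /* camsys_soc_priv.c -- the reference is always compiled in */
        extern int camsys_rk3368_cfg(void *cfg, unsigned int cmd);

        int camsys_soc_init(void)
        {
                /*
                 * If the definition above was compiled out, linking vmlinux
                 * fails with "undefined reference to 'camsys_rk3368_cfg'".
                 */
                return camsys_rk3368_cfg((void *)0, 0);
        }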