dht: fix asan use-after-free bug #4242
Merged
Conversation
The client throws the stack trace below when ASAN is enabled. The client hits the issue when an application calls removexattr on a 2x1 subvol while the non-mds subvol is down. As the stack trace shows, dht_setxattr_mds_cbk calls dht_setxattr_non_mds_cbk, and dht_setxattr_non_mds_cbk wipes local because call_cnt is 0, but dht_setxattr_mds_cbk then accesses frame->local; that is why it crashes.

0x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544) freed by thread T7 here:
#0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
#1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
#2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
#3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
#5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
#6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
#7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
#8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
#12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
#18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
#19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch instead of if statements to wind an operation; with a switch, the code will not try to access local after winding the operation for the last dht subvol.

Fixes: #3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677
Signed-off-by: Mohit Agrawal <[email protected]>
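To make the failure mode concrete, here is a minimal self-contained C model of the pattern (all names here, such as local_t, wind, cbk, and wind_ops_if, are hypothetical illustrations, not the GlusterFS sources): when the subvol is down, the wind invokes its callback synchronously on the same stack, the callback frees local once call_cnt reaches 0, and the next if condition reads freed memory.

```c
/* Minimal model of the use-after-free (hypothetical names; not the
 * GlusterFS code). wind() stands in for STACK_WIND: when the target
 * subvol is down, the callback runs synchronously on the same stack. */
#include <stdlib.h>

enum { FOP_SETXATTR, FOP_REMOVEXATTR };

typedef struct {
    int call_cnt;
    int fop;
} local_t;

/* Analogue of dht_setxattr_non_mds_cbk: frees local when call_cnt
 * drops to 0, as dht_local_wipe does. */
static void cbk(local_t *local)
{
    if (--local->call_cnt == 0)
        free(local);
}

/* Analogue of winding to a subvol that is down: the callback fires
 * before the wind "returns" to the caller. */
static void wind(local_t *local)
{
    cbk(local);
}

/* Buggy shape: after the first if winds (and the callback frees local),
 * the second if still dereferences local. */
static void wind_ops_if(local_t *local)
{
    if (local->fop == FOP_SETXATTR)
        wind(local);
    if (local->fop == FOP_REMOVEXATTR)  /* reads freed memory */
        wind(local);
}

int main(void)
{
    local_t *local = calloc(1, sizeof(*local));
    local->call_cnt = 1;
    local->fop = FOP_SETXATTR;
    wind_ops_if(local);
    return 0;
}
```

Compiling this model with `gcc -g -fsanitize=address` and running it should produce a heap-use-after-free report of the same shape as the trace above.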
/run regression
1 test(s) failed, 0 test(s) generated core, 2 test(s) needed retry
/run regression
xhernandez approved these changes Oct 23, 2023
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 23, 2023
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 23, 2023
Shwetha-Acharya pushed a commit that referenced this pull request Oct 25, 2023
Shwetha-Acharya pushed a commit that referenced this pull request Oct 25, 2023
The client throws the stack trace below when ASAN is enabled. The client hits the issue when an application calls removexattr on a 2x1 subvol while the non-mds subvol is down. As the stack trace shows, dht_setxattr_mds_cbk calls dht_setxattr_non_mds_cbk, and dht_setxattr_non_mds_cbk wipes local because call_cnt is 0, but dht_setxattr_mds_cbk then accesses frame->local; that is why it crashes.
0x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544) freed by thread T7 here:
#0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
#1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
#2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
#3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
#5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
#6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
#7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
#8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
#12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
#18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
#19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)
Solution: Use a switch instead of if statements to wind an operation; with a switch, the code
will not try to access local after winding the operation for the last dht subvol.
Fixes: #3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677
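For comparison, here is the same minimal model with the control flow the solution describes (again with hypothetical names such as wind_ops_switch; this is a sketch of the technique, not the actual diff in dht-common.c): the switch reads local->fop exactly once, winds, and falls out of the function without touching local again, so nothing dereferences frame->local after the wind for the last dht subvol.

```c
/* Same model as the earlier sketch, with the fixed shape (hypothetical
 * names; not the actual GlusterFS diff). */
#include <stdlib.h>

enum { FOP_SETXATTR, FOP_REMOVEXATTR };

typedef struct {
    int call_cnt;
    int fop;
} local_t;

static void cbk(local_t *local)
{
    if (--local->call_cnt == 0)
        free(local);               /* dht_local_wipe() analogue */
}

static void wind(local_t *local)
{
    cbk(local);                    /* subvol down: synchronous callback */
}

/* Fixed shape: the fop is dispatched once, so local is never read after
 * the wind, even when the callback has already freed it. */
static void wind_ops_switch(local_t *local)
{
    switch (local->fop) {
    case FOP_SETXATTR:
        wind(local);
        break;
    case FOP_REMOVEXATTR:
        wind(local);
        break;
    }
}

int main(void)
{
    local_t *local = calloc(1, sizeof(*local));
    local->call_cnt = 1;
    local->fop = FOP_SETXATTR;
    wind_ops_switch(local);        /* clean under ASAN */
    return 0;
}
```

The same discipline applies to any fan-out through a wind macro: once the last wind is issued, the callback may already have freed frame->local, so the wind must be the final statement that touches it.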