
dht: fix asan use-after-free bug #4242

Merged — 1 commit merged into gluster:devel on Oct 23, 2023

Conversation

mohit84
Contributor

@mohit84 mohit84 commented Oct 20, 2023

The client throws the stack trace below when ASan is enabled. The issue occurs when an application
calls removexattr on a 2x1 subvolume while the non-mds subvolume is down. As the stack trace shows, dht_setxattr_mds_cbk calls dht_setxattr_non_mds_cbk, which wipes
local because call_cnt is 0; dht_setxattr_mds_cbk then accesses the freed frame->local, which causes the crash.

0x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544) freed by thread T7 here:
#0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
#1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
#2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
#3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
#5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
#6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
#7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
#8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
#12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
#18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
#19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch statement instead of an if statement to wind the operation; with a switch,
the code does not access local after winding the operation to the last dht subvolume.

Fixes: #3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677

Signed-off-by: Mohit Agrawal <[email protected]>
@mohit84 mohit84 requested a review from xhernandez October 20, 2023 06:25
@mohit84
Contributor Author

mohit84 commented Oct 20, 2023

/run regression

@gluster-ant
Collaborator

1 test(s) failed
./tests/basic/quick-read-with-upcall.t

0 test(s) generated core

2 test(s) needed retry
./tests/000-flaky/basic_distribute_rebal-all-nodes-migrate.t
./tests/basic/quick-read-with-upcall.t
https://build.gluster.org/job/gh_centos7-regression/3346/

@mohit84
Contributor Author

mohit84 commented Oct 20, 2023

/run regression

@mohit84 mohit84 merged commit 650ab3a into gluster:devel Oct 23, 2023
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 23, 2023
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 23, 2023
Shwetha-Acharya pushed a commit that referenced this pull request Oct 25, 2023
Shwetha-Acharya pushed a commit that referenced this pull request Oct 25, 2023
Successfully merging this pull request may close these issues.

AddressSanitizer: heap-use-after-free