forked from gluster/glusterfs
glfsxmp file made to work for validation #3
Draft
amarts wants to merge 1 commit into master from glfsxmp-test-statedump.
Conversation
No description provided.
Change-Id: I4c752f7a68b2e3caf6411557e57283809eaa12fe
amarts added a commit that referenced this pull request on Jul 26, 2021:
Change-Id: I92929b39a16abe1d7039e6960a6cae2efcb2ff90
Signed-off-by: Amar Tumballi <[email protected]>
amarts pushed a commit that referenced this pull request on Sep 4, 2021:
…#2740)

When handling the RPC_CLNT_DISCONNECT event, glustershd may already be disconnected and removed from the list of services, and an attempt to extract an entry from the empty list causes the following error:

==1364671==ERROR: AddressSanitizer: heap-buffer-overflow on address ...
READ of size 1 at 0x60d00001c48f thread T23
    #0 0x7ff1a5f6db8c in __interceptor_fopen64.part.0 (/lib64/libasan.so.6+0x53b8c)
    #1 0x7ff1a5c63717 in gf_is_service_running libglusterfs/src/common-utils.c:4180
    #2 0x7ff190178ad3 in glusterd_proc_is_running xlators/mgmt/glusterd/src/glusterd-proc-mgmt.c:157
    #3 0x7ff19017ce29 in glusterd_muxsvc_common_rpc_notify xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c:440
    #4 0x7ff190176e75 in __glusterd_muxsvc_conn_common_notify xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c:172
    #5 0x7ff18fee0940 in glusterd_big_locked_notify xlators/mgmt/glusterd/src/glusterd-handler.c:66
    #6 0x7ff190176ec7 in glusterd_muxsvc_conn_common_notify xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c:183
    #7 0x7ff1a5b57b60 in rpc_clnt_handle_disconnect rpc/rpc-lib/src/rpc-clnt.c:821
    #8 0x7ff1a5b58082 in rpc_clnt_notify rpc/rpc-lib/src/rpc-clnt.c:882
    #9 0x7ff1a5b4da47 in rpc_transport_notify rpc/rpc-lib/src/rpc-transport.c:520
    #10 0x7ff18fba1d4f in socket_event_poll_err rpc/rpc-transport/socket/src/socket.c:1370
    #11 0x7ff18fbb223c in socket_event_handler rpc/rpc-transport/socket/src/socket.c:2971
    #12 0x7ff1a5d646ff in event_dispatch_epoll_handler libglusterfs/src/event-epoll.c:638
    #13 0x7ff1a5d6539c in event_dispatch_epoll_worker libglusterfs/src/event-epoll.c:749
    #14 0x7ff1a5917298 in start_thread /usr/src/debug/glibc-2.33-20.fc34.x86_64/nptl/pthread_create.c:481
    #15 0x7ff1a5551352 in clone (/lib64/libc.so.6+0x100352)

0x60d00001c48f is located 12 bytes to the right of 131-byte region [0x60d00001c400,0x60d00001c483)
freed by thread T19 here:
    #0 0x7ff1a5fc8647 in free (/lib64/libasan.so.6+0xae647)

Signed-off-by: Dmitry Antipov <[email protected]>
Updates: gluster#1000
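The timeline entry above does not show the fix itself, but the failure mode it describes is taking the first entry of a service list that may already be empty. The following is a minimal, self-contained C sketch of that guard; all struct and function names are hypothetical stand-ins, not the actual glusterd code.

/*
 * Minimal standalone sketch (not the glusterd fix itself): it only
 * illustrates the guard the message implies -- check that the service
 * list is non-empty before taking its first entry.
 */
#include <stdio.h>
#include <stdlib.h>

struct svc_entry {
    char name[32];
    struct svc_entry *next;
};

struct svc_list {
    struct svc_entry *head;
};

/* Return the first service, or NULL when the list is already empty. */
static struct svc_entry *
svc_list_first(struct svc_list *list)
{
    if (!list || !list->head)   /* the missing guard: empty list */
        return NULL;
    return list->head;
}

static void
handle_disconnect(struct svc_list *list)
{
    struct svc_entry *svc = svc_list_first(list);

    if (!svc) {
        /* glustershd was already removed; nothing left to probe */
        printf("no registered services, ignoring disconnect\n");
        return;
    }
    printf("checking whether '%s' is still running\n", svc->name);
}

int
main(void)
{
    struct svc_list empty = { .head = NULL };

    /* Without the guard above, this path would read freed or garbage
     * memory, which is what AddressSanitizer reported. */
    handle_disconnect(&empty);
    return 0;
}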
amarts pushed a commit that referenced this pull request on Sep 16, 2021:
Fix a few use-after-free errors detected by ASan, for example:

==1089284==ERROR: AddressSanitizer: heap-use-after-free on address ...
WRITE of size 8 at 0x61100001ffd0 thread T2
    #0 0x7fa6ab385633 in dict_unref libglusterfs/src/dict.c:798
    #1 0x41664a in cli_cmd_volume_stop_cbk cli/src/cli-cmd-volume.c:564
    #2 0x40f2d6 in cli_cmd_process cli/src/cli-cmd.c:133
    #3 0x40e772 in cli_batch cli/src/input.c:29
    #4 0x7fa6aae3e298 in start_thread (/lib64/libpthread.so.0+0x9298)
    #5 0x7fa6aaa78352 in clone (/lib64/libc.so.6+0x100352)

0x61100001ffd0 is located 80 bytes inside of 224-byte region ...
freed by thread T2 here:
    #0 0x7fa6ab732647 in free (/lib64/libasan.so.6+0xae647)
    #1 0x7fa6ab43171a in __gf_free libglusterfs/src/mem-pool.c:362
    #2 0x7fa6ab38559b in dict_destroy libglusterfs/src/dict.c:782
    #3 0x7fa6ab38565e in dict_unref libglusterfs/src/dict.c:801
    #4 0x40cd28 in cli_local_wipe cli/src/cli.c:783
    #5 0x4165c4 in cli_cmd_volume_stop_cbk cli/src/cli-cmd-volume.c:562
    #6 0x40f2d6 in cli_cmd_process cli/src/cli-cmd.c:133
    #7 0x40e772 in cli_batch cli/src/input.c:29
    #8 0x7fa6aae3e298 in start_thread (/lib64/libpthread.so.0+0x9298)

previously allocated by thread T2 here:
    #0 0x7fa6ab732af7 in calloc (/lib64/libasan.so.6+0xaeaf7)
    #1 0x7fa6ab430a91 in __gf_calloc libglusterfs/src/mem-pool.c:151
    #2 0x7fa6ab382562 in get_new_dict_full libglusterfs/src/dict.c:84
    #3 0x7fa6ab38268d in dict_new libglusterfs/src/dict.c:127
    #4 0x415fcf in cli_cmd_volume_stop_cbk cli/src/cli-cmd-volume.c:501
    #5 0x40f2d6 in cli_cmd_process cli/src/cli-cmd.c:133
    #6 0x40e772 in cli_batch cli/src/input.c:29
    #7 0x7fa6aae3e298 in start_thread (/lib64/libpthread.so.0+0x9298)

Thread T2 created by T0 here:
    #0 0x7fa6ab6da8d6 in pthread_create (/lib64/libasan.so.6+0x568d6)
    #1 0x40eb2b in cli_input_init cli/src/input.c:76
    #2 0x40d0ee in main cli/src/cli.c:863
    #3 0x7fa6aa99fb74 in __libc_start_main (/lib64/libc.so.6+0x27b74)

Also tweak CLI_LOCAL_INIT() to take an extra reference to the related 'dict_t' object and ensure a correct reference-counting sequence for most of the CLI commands, which should follow the scheme below:

int
cli_cmd_xxx(...)
{
    int ret = -1;
    dict_t *options = NULL;
    ...
    ret = cli_cmd_xxx_parse(..., &options);  /* refcount 1 */
    if (!ret)
        goto out;
    ...
    CLI_LOCAL_INIT(..., options);            /* refcount 2 */
    ...
    ret = proc->fn(..., options);
    ...
out:
    CLI_STACK_DESTROY(...);                  /* refcount 1 */
    if (options)
        dict_unref(options);                 /* freed due to refcount 0 */
    return ret;
}

Signed-off-by: Dmitry Antipov <[email protected]>
Updates: gluster#1000
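The scheme above is written in terms of the real CLI macros. As a self-contained illustration of why the extra reference taken at CLI_LOCAL_INIT() time matters, here is a toy reference-counting sketch; the types and helpers are hypothetical stand-ins, not the GlusterFS dict_t API.

/*
 * Toy refcount demo: the "local" wrapper takes its own reference, so the
 * command's final unref is the one that actually frees the dictionary,
 * and neither unref touches already-freed memory.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int refcount;
} dict;

static dict *
dict_new(void)
{
    dict *d = calloc(1, sizeof(*d));
    if (!d)
        exit(1);
    d->refcount = 1;
    return d;
}

static dict *
dict_ref(dict *d)
{
    d->refcount++;
    return d;
}

static void
dict_unref(dict *d)
{
    if (--d->refcount == 0) {
        printf("dict freed\n");
        free(d);
    }
}

int
main(void)
{
    dict *options = dict_new();  /* refcount 1, owned by the command        */

    dict_ref(options);           /* refcount 2, "CLI_LOCAL_INIT" reference  */

    /* ... command runs ... */

    dict_unref(options);         /* refcount 1, "CLI_STACK_DESTROY" drops
                                    the local reference                     */
    dict_unref(options);         /* refcount 0, the command's own unref
                                    frees it exactly once                   */
    return 0;
}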
amarts pushed a commit that referenced this pull request on Mar 31, 2022:
Unconditionally free serialized dict data in '__glusterd_send_svc_configure_req()'. Found with AddressSanitizer:

==273334==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 89 byte(s) in 1 object(s) allocated from:
    #0 0x7fc2ce2a293f in __interceptor_malloc (/lib64/libasan.so.6+0xae93f)
    #1 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:201
    #2 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:188
    #3 0x7fc2cdf86bde in dict_allocate_and_serialize libglusterfs/src/dict.c:3285
    #4 0x7fc2b8398843 in __glusterd_send_svc_configure_req xlators/mgmt/glusterd/src/glusterd-svc-helper.c:830
    #5 0x7fc2b8399238 in glusterd_attach_svc xlators/mgmt/glusterd/src/glusterd-svc-helper.c:932
    #6 0x7fc2b83a60f1 in glusterd_shdsvc_start xlators/mgmt/glusterd/src/glusterd-shd-svc.c:509
    #7 0x7fc2b83a5124 in glusterd_shdsvc_manager xlators/mgmt/glusterd/src/glusterd-shd-svc.c:335
    #8 0x7fc2b8395364 in glusterd_svcs_manager xlators/mgmt/glusterd/src/glusterd-svc-helper.c:143
    #9 0x7fc2b82e3a6c in glusterd_op_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2412
    #10 0x7fc2b835ec5a in gd_mgmt_v3_commit_fn xlators/mgmt/glusterd/src/glusterd-mgmt.c:329
    #11 0x7fc2b8365497 in glusterd_mgmt_v3_commit xlators/mgmt/glusterd/src/glusterd-mgmt.c:1639
    #12 0x7fc2b836ad30 in glusterd_mgmt_v3_initiate_all_phases xlators/mgmt/glusterd/src/glusterd-mgmt.c:2651
    #13 0x7fc2b82d504b in __glusterd_handle_cli_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:364
    #14 0x7fc2b817465c in glusterd_big_locked_handler xlators/mgmt/glusterd/src/glusterd-handler.c:79
    #15 0x7fc2ce020ff9 in synctask_wrap libglusterfs/src/syncop.c:385
    #16 0x7fc2cd69184f (/lib64/libc.so.6+0x5784f)

Signed-off-by: Dmitry Antipov <[email protected]>
Fixes: gluster#1000
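As a self-contained illustration of the leak pattern and of the "unconditionally free" fix named in the first line of the message, here is a hedged C sketch using stand-in helpers rather than the real glusterd request path; the function names below are hypothetical.

/*
 * The serialized buffer must be released on every exit path, not only
 * when the request fails (or only when it succeeds).  Freeing it once
 * in the shared 'out:' label covers both cases; free(NULL) is a no-op.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for dict_allocate_and_serialize(): returns a heap buffer. */
static char *
serialize_options(size_t *len)
{
    const char *payload = "volume=test0;option=on";
    *len = strlen(payload) + 1;
    char *buf = malloc(*len);
    if (buf)
        memcpy(buf, payload, *len);
    return buf;
}

/* Stand-in for submitting the RPC; may succeed or fail. */
static int
submit_request(const char *buf, size_t len)
{
    (void)buf;
    (void)len;
    return 0;
}

static int
send_configure_req(void)
{
    size_t len = 0;
    char *buf = serialize_options(&len);
    int ret = -1;

    if (!buf)
        goto out;

    ret = submit_request(buf, len);

out:
    free(buf);   /* unconditional: no leak whether the request was sent or not */
    return ret;
}

int
main(void)
{
    return send_configure_req() == 0 ? 0 : 1;
}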
amarts pushed a commit that referenced this pull request on Aug 24, 2022:
Fix 'qemu-img' crash discovered as follows:

$ gluster volume info test0

Volume Name: test0
Type: Distribute
Volume ID: dc5607a7-fadc-42fd-a532-de0b791097ef
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.222.111:/pool/0
Brick2: 192.168.222.111:/pool/1
Brick3: 192.168.222.111:/pool/2
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

$ qemu-img info gluster://192.168.222.111/test0/test0.img
[2022-08-17 08:15:46.704459 +0000] I [io-stats.c:3797:ios_sample_buf_size_configure] 0-test0: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
Segmentation fault (core dumped)

$ gdb -q qemu-img core
...
Program terminated with signal SIGSEGV, Segmentation fault.
#0  dict_ref (this=this@entry=0x48003278) at dict.c:655
655         GF_ATOMIC_INC(this->refcount);
...
(gdb) p *this
Cannot access memory at address 0x48003278
(gdb) bt 4
#0  dict_ref (this=this@entry=0x48003278) at dict.c:655
#1  0x00007fb96f34e695 in syncop_seek_cbk (frame=frame@entry=0x55a04de5a2c8, cookie=0x7ffea4b96340, this=<optimized out>, op_ret=op_ret@entry=-1, op_errno=op_errno@entry=77, offset=offset@entry=0, xdata=0x48003278) at syncop.c:3167
#2  0x00007fb9669e7a42 in io_stats_seek_cbk (frame=frame@entry=0x55a04de5a3b8, cookie=<optimized out>, this=<optimized out>, op_ret=op_ret@entry=-1, op_errno=op_errno@entry=77, offset=offset@entry=0, xdata=0x48003278) at io-stats.c:2610
#3  0x00007fb96f39d47d in default_seek_cbk (frame=0x55a04de5b698, cookie=<optimized out>, this=<optimized out>, op_ret=-1, op_errno=77, offset=0, xdata=0x48003278) at defaults.c:1615
#4  0x00007fb96c174f47 in client4_0_seek (frame=0x7fb948000eb8, this=<optimized out>, data=<optimized out>) at client-rpc-fops_v2.c:5299
(More stack frames follow...)

Signed-off-by: Dmitry Antipov <[email protected]>
Updates: gluster#1000
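The message shows the crash (dict_ref() called on a garbage xdata pointer from a failing seek) but not the fix itself, so the sketch below only illustrates the generic defensive pattern one would expect here: zero-initialize response state so an early error unwind hands the callback NULL rather than stack garbage, and only touch a pointer that was actually produced. All names are hypothetical stand-ins, not the GlusterFS client code.

/*
 * Generic sketch of the defensive pattern, under the stated assumptions:
 * a zero-initialized reply struct keeps its optional pointer field NULL
 * on every error path, and the callback checks it before use.
 */
#include <stdio.h>

struct reply {
    int   op_ret;
    int   op_errno;
    void *xdata;              /* optional extra payload */
};

static void
seek_callback(int op_ret, void *xdata)
{
    /* Only touch xdata when it is a real object. */
    if (xdata)
        printf("got xdata %p\n", xdata);
    else
        printf("seek failed (op_ret=%d), no xdata\n", op_ret);
}

static void
do_seek(int simulate_failure)
{
    struct reply rsp = {0};   /* zero-init: rsp.xdata is NULL on every path */

    if (simulate_failure) {
        rsp.op_ret = -1;
        rsp.op_errno = 77;    /* without the {0} above, rsp.xdata could be
                                 uninitialized stack garbage here */
        goto unwind;
    }

    rsp.op_ret = 0;

unwind:
    seek_callback(rsp.op_ret, rsp.xdata);
}

int
main(void)
{
    do_seek(1);
    do_seek(0);
    return 0;
}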
amarts pushed a commit that referenced this pull request on Aug 26, 2022:
(Same commit message as the Aug 24, 2022 reference above.)