
Quota new design #16

Closed
wants to merge 7 commits

Conversation

vpshastry

No description provided.

vpshastry and others added 7 commits May 27, 2013 10:38
Old implementation
* Client-side implementation of quota
    - Not secure
    - Increased traffic in updating the ctx

New Implementation
* Quota enforcement is done in 2 stages: soft and hard quota.
    Upon reaching the soft quota limit on a directory, a message is logged; no
    more writes are allowed once the hard quota limit is reached. Additionally,
    usage is logged at 80, 90 and 95 percent of the hard limit (a sketch of
    this check follows this commit message).

* Quota is moved to the server side.
    The server-side implementation protects quota from being bypassed by
    mounting with a modified volfile.

    There will be 2 quota xlators:
    i. Quota Server
        It takes care of enforcing the quota and maintains the context
        specific to the brick. Since it does not have a complete picture of
        the cluster, cluster-wide usage is updated from the quota client; this
        updated context is saved and used for enforcement.

        It updates its context by looking up QUOTA_UPDATE_KEY in the dict
        carried by the setxattr call.

    ii. Quota Client
        A new xlator introduced with this patch. It is a gluster client
        process with no mount point, started upon enabling quota or restarting
        the volume.

        It periodically queries the sizes on all the bricks, aggregates them
        and sends back the updated size. It maintains 2 lists: one for usage
        above the soft quota and one for usage below it. The timeout between
        successive updates is configurable and is, by default, longer for the
        below-soft-quota list and shorter for the above-soft-quota list.

        There will be 2 threads per volume, one for each list, to periodically
        trigger the getxattr and setxattr calls.

   Thus in the current implementation we will have 2 quota xlators: one in the
   server graph and one in the trusted client, whose sole purpose is to
   aggregate the quota size xattrs from all the bricks and send the result to
   the server quota xlator.

* Changes in glusterd
   A single volfile is created for all the volumes, similar to the nfs
   volfile. All files related to the quota client (volfile, pid, etc.) are
   stored in GLUSTERD_DEFAULT_WORK_DIR/quota-client.

Change-Id: I16ec5be0c2faaf42b14034b9ccaf17796adef082
Signed-off-by: Varun Shastry <[email protected]>
Change-Id: Iaa9d191f8622189f768f3b7dac71523f55376cff
Signed-off-by: Varun Shastry <[email protected]>
Change-Id: I329beb8994af929a06641315fe1abb5d994b53bf
Signed-off-by: Varun Shastry <[email protected]>
Change-Id: I3a232f64b73e6ff263e763b5c6b0b4e0741b7639
Signed-off-by: Krutika Dhananjay <[email protected]>
Change-Id: I3a232f64b73e6ff263e763b5c6b0b4e0741b7639
Signed-off-by: Krutika Dhananjay <[email protected]>
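
A minimal sketch, in plain C, of the enforcement and update flow described in the commit message above. The names (quota_ctx_t, quota_check_limit, quota_update_ctx) and the value of QUOTA_UPDATE_KEY are hypothetical stand-ins, not the actual xlator code; it only illustrates the soft/hard limit checks, the 80/90/95 percent logging and the server picking up the cluster-wide size that the quota client pushes via setxattr.

    /* Illustrative sketch only -- not the actual quota xlator code.
     * quota_ctx_t, quota_check_limit() and quota_update_ctx() are
     * hypothetical; the real key string is whatever the quota xlators
     * define for QUOTA_UPDATE_KEY. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define QUOTA_UPDATE_KEY "quota-update-size"    /* placeholder value */

    typedef struct {
        uint64_t soft_limit;   /* bytes */
        uint64_t hard_limit;   /* bytes */
        uint64_t agg_usage;    /* cluster-wide usage pushed by the quota client */
        int      logged_pct;   /* highest of 80/90/95 already logged */
    } quota_ctx_t;

    /* Server side: decide whether a write of 'delta' bytes may proceed. */
    static int
    quota_check_limit(quota_ctx_t *ctx, uint64_t delta, const char *path)
    {
        uint64_t usage = ctx->agg_usage + delta;

        if (usage >= ctx->hard_limit) {
            fprintf(stderr, "quota: hard limit reached on %s, write denied\n", path);
            return -1;                      /* EDQUOT in a real enforcer */
        }

        if (usage >= ctx->soft_limit)
            fprintf(stderr, "quota: soft limit crossed on %s\n", path);

        /* Log usage once at 80, 90 and 95 percent of the hard limit. */
        int pct = (int)((usage * 100) / ctx->hard_limit);
        int marks[] = { 80, 90, 95 };
        for (int i = 0; i < 3; i++) {
            if (pct >= marks[i] && ctx->logged_pct < marks[i]) {
                fprintf(stderr, "quota: usage on %s is at %d%% of the hard limit\n",
                        path, marks[i]);
                ctx->logged_pct = marks[i];
            }
        }
        return 0;
    }

    /* Server side: pick up the aggregated size sent by the quota client via
     * setxattr; 'key' and 'value' stand in for the dict entry carried by
     * that call, looked up by QUOTA_UPDATE_KEY. */
    static void
    quota_update_ctx(quota_ctx_t *ctx, const char *key, uint64_t value)
    {
        if (strcmp(key, QUOTA_UPDATE_KEY) == 0)
            ctx->agg_usage = value;         /* saved and used for enforcement */
    }

    int
    main(void)
    {
        quota_ctx_t ctx = { .soft_limit = 800, .hard_limit = 1000 };

        /* The quota client periodically aggregates the per-brick sizes and
         * pushes the total, more frequently for directories on the
         * above-soft-quota list. */
        quota_update_ctx(&ctx, QUOTA_UPDATE_KEY, 850);

        if (quota_check_limit(&ctx, 100, "/exported/dir") == 0)
            printf("write allowed\n");
        return 0;
    }

Splitting the work this way keeps enforcement local to each brick while the cross-brick aggregation happens once per volume in the trusted client.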
@avati
Member

avati commented Sep 7, 2013

Changes come in only through review.gluster.org

@avati closed this Sep 7, 2013
dmantipov added a commit to dmantipov/glusterfs that referenced this pull request Nov 29, 2021
Unconditionally free serialized dict data
in '__glusterd_send_svc_configure_req()'.

Found with AddressSanitizer:

==273334==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 89 byte(s) in 1 object(s) allocated from:
    #0 0x7fc2ce2a293f in __interceptor_malloc (/lib64/libasan.so.6+0xae93f)
    #1 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:201
    #2 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:188
    #3 0x7fc2cdf86bde in dict_allocate_and_serialize libglusterfs/src/dict.c:3285
    #4 0x7fc2b8398843 in __glusterd_send_svc_configure_req xlators/mgmt/glusterd/src/glusterd-svc-helper.c:830
    #5 0x7fc2b8399238 in glusterd_attach_svc xlators/mgmt/glusterd/src/glusterd-svc-helper.c:932
    #6 0x7fc2b83a60f1 in glusterd_shdsvc_start xlators/mgmt/glusterd/src/glusterd-shd-svc.c:509
    #7 0x7fc2b83a5124 in glusterd_shdsvc_manager xlators/mgmt/glusterd/src/glusterd-shd-svc.c:335
    #8 0x7fc2b8395364 in glusterd_svcs_manager xlators/mgmt/glusterd/src/glusterd-svc-helper.c:143
    #9 0x7fc2b82e3a6c in glusterd_op_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2412
    #10 0x7fc2b835ec5a in gd_mgmt_v3_commit_fn xlators/mgmt/glusterd/src/glusterd-mgmt.c:329
    #11 0x7fc2b8365497 in glusterd_mgmt_v3_commit xlators/mgmt/glusterd/src/glusterd-mgmt.c:1639
    #12 0x7fc2b836ad30 in glusterd_mgmt_v3_initiate_all_phases xlators/mgmt/glusterd/src/glusterd-mgmt.c:2651
    #13 0x7fc2b82d504b in __glusterd_handle_cli_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:364
    #14 0x7fc2b817465c in glusterd_big_locked_handler xlators/mgmt/glusterd/src/glusterd-handler.c:79
    #15 0x7fc2ce020ff9 in synctask_wrap libglusterfs/src/syncop.c:385
    #16 0x7fc2cd69184f  (/lib64/libc.so.6+0x5784f)

Signed-off-by: Dmitry Antipov <[email protected]>
Fixes: gluster#1000
xhernandez pushed a commit that referenced this pull request Mar 4, 2022
Unconditionally free serialized dict data
in '__glusterd_send_svc_configure_req()'.

Found with AddressSanitizer:

==273334==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 89 byte(s) in 1 object(s) allocated from:
    #0 0x7fc2ce2a293f in __interceptor_malloc (/lib64/libasan.so.6+0xae93f)
    #1 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:201
    #2 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:188
    #3 0x7fc2cdf86bde in dict_allocate_and_serialize libglusterfs/src/dict.c:3285
    #4 0x7fc2b8398843 in __glusterd_send_svc_configure_req xlators/mgmt/glusterd/src/glusterd-svc-helper.c:830
    #5 0x7fc2b8399238 in glusterd_attach_svc xlators/mgmt/glusterd/src/glusterd-svc-helper.c:932
    #6 0x7fc2b83a60f1 in glusterd_shdsvc_start xlators/mgmt/glusterd/src/glusterd-shd-svc.c:509
    #7 0x7fc2b83a5124 in glusterd_shdsvc_manager xlators/mgmt/glusterd/src/glusterd-shd-svc.c:335
    #8 0x7fc2b8395364 in glusterd_svcs_manager xlators/mgmt/glusterd/src/glusterd-svc-helper.c:143
    #9 0x7fc2b82e3a6c in glusterd_op_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2412
    #10 0x7fc2b835ec5a in gd_mgmt_v3_commit_fn xlators/mgmt/glusterd/src/glusterd-mgmt.c:329
    #11 0x7fc2b8365497 in glusterd_mgmt_v3_commit xlators/mgmt/glusterd/src/glusterd-mgmt.c:1639
    #12 0x7fc2b836ad30 in glusterd_mgmt_v3_initiate_all_phases xlators/mgmt/glusterd/src/glusterd-mgmt.c:2651
    #13 0x7fc2b82d504b in __glusterd_handle_cli_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:364
    #14 0x7fc2b817465c in glusterd_big_locked_handler xlators/mgmt/glusterd/src/glusterd-handler.c:79
    #15 0x7fc2ce020ff9 in synctask_wrap libglusterfs/src/syncop.c:385
    #16 0x7fc2cd69184f  (/lib64/libc.so.6+0x5784f)

Signed-off-by: Dmitry Antipov <[email protected]>
Fixes: #1000
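
A short sketch of the pattern behind the fix above, with hypothetical stand-ins (serialize_dict, submit_request) for dict_allocate_and_serialize() and the RPC submission in __glusterd_send_svc_configure_req(); the point is simply that the serialized buffer has to be freed on every exit path, success or failure, rather than only on some of them. In the real code the buffer comes from __gf_malloc (as the trace shows) and would be released with GF_FREE rather than plain free().

    /* Sketch of the "free on every path" cleanup shape; the helpers are
     * trivial stubs so the example is self-contained. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for dict_allocate_and_serialize(): allocates *buf. */
    static int
    serialize_dict(void **buf, size_t *len)
    {
        *buf = calloc(1, 64);
        if (*buf == NULL)
            return -1;
        *len = 64;
        return 0;
    }

    /* Stand-in for the RPC submission call. */
    static int
    submit_request(const void *buf, size_t len)
    {
        (void)buf;
        (void)len;
        return 0;                 /* pretend the send succeeded */
    }

    static int
    send_configure_req(void)
    {
        void  *buf = NULL;
        size_t len = 0;
        int    ret;

        ret = serialize_dict(&buf, &len);
        if (ret < 0)
            goto out;

        ret = submit_request(buf, len);
        /* Fall through: the buffer is no longer needed whether the
         * submission succeeded or failed. */

    out:
        free(buf);                /* unconditional: success and error paths */
        buf = NULL;
        return ret;
    }

    int
    main(void)
    {
        printf("send_configure_req() -> %d\n", send_configure_req());
        return 0;
    }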
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 20, 2023
The client throws the stack trace below when ASAN is enabled. The issue is
hit when an application calls removexattr in a 2x1 subvol while the non-mds
subvol is down. As the stack trace shows, dht_setxattr_mds_cbk ends up
invoking dht_setxattr_non_mds_cbk, which wipes local because call_cnt is 0;
dht_setxattr_mds_cbk then accesses frame->local, which is why it crashes.

x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544)
freed by thread T7 here:
    #0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
    #1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
    #2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
    #3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
    #5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
    #6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
    #7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
    #8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
    #12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
    #18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
    #19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch instead of an if statement to wind the operation;
          with a switch, the code does not try to access local after winding
          the operation for the last dht subvol.

Fixes: gluster#3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677
Signed-off-by: Mohit Agrawal <[email protected]>
mohit84 added a commit that referenced this pull request Oct 23, 2023
The client throws the stack trace below when ASAN is enabled. The issue is
hit when an application calls removexattr in a 2x1 subvol while the non-mds
subvol is down. As the stack trace shows, dht_setxattr_mds_cbk ends up
invoking dht_setxattr_non_mds_cbk, which wipes local because call_cnt is 0;
dht_setxattr_mds_cbk then accesses frame->local, which is why it crashes.

x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544)
freed by thread T7 here:
    #0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
    #1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
    #2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
    #3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
    #5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
    #6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
    #7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
    #8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
    #12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
    #18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
    #19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch instead of an if statement to wind the operation;
          with a switch, the code does not try to access local after winding
          the operation for the last dht subvol.

Fixes: #3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677

Signed-off-by: Mohit Agrawal <[email protected]>
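
A minimal sketch of the hazard and the fix described above, with hypothetical names (local_t, wind_to_subvol, FOP_*) standing in for the dht local structure and the wind call: winding to the last subvolume can run the completion callback synchronously and free local, so the fixed shape reads local->fop exactly once, through a switch, and never touches local after the wind.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins; not the actual dht code. */
    enum fop_type { FOP_SETXATTR, FOP_REMOVEXATTR };

    typedef struct {
        int           call_cnt;
        enum fop_type fop;
    } local_t;

    /* Stub for winding to a subvolume: when this was the last pending call,
     * the completion callback runs synchronously and frees 'local'
     * (dht_local_wipe() in the real code). */
    static void
    wind_to_subvol(local_t *local, int subvol_idx)
    {
        printf("wound fop %d to subvol %d\n", (int)local->fop, subvol_idx);
        if (--local->call_cnt == 0)
            free(local);
    }

    /* Buggy shape: the second 'if' reads 'local' again after the first wind,
     * which may already have freed it when that wind went to the last
     * subvolume (use-after-free when fop == FOP_SETXATTR). */
    static void
    wind_buggy(local_t *local, int last_subvol)
    {
        if (local->fop == FOP_SETXATTR)
            wind_to_subvol(local, last_subvol);
        if (local->fop == FOP_REMOVEXATTR)
            wind_to_subvol(local, last_subvol);
    }

    /* Fixed shape: the switch evaluates local->fop exactly once before
     * winding, so nothing touches 'local' after the wind for the last
     * subvolume. */
    static void
    wind_fixed(local_t *local, int last_subvol)
    {
        switch (local->fop) {
        case FOP_SETXATTR:
            wind_to_subvol(local, last_subvol);
            break;
        case FOP_REMOVEXATTR:
            wind_to_subvol(local, last_subvol);
            break;
        }
    }

    int
    main(void)
    {
        local_t *local = calloc(1, sizeof(*local));
        if (local == NULL)
            return 1;
        local->call_cnt = 1;          /* only one subvol left to wind */
        local->fop = FOP_SETXATTR;
        wind_fixed(local, 0);         /* safe: no access after the wind */
        (void)wind_buggy;             /* shown above only for contrast */
        return 0;
    }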