Segmentation fault in gluster client #4271
Comments
Bump on this one to see if there is a solution.
I will look into this and update.
Thanks @aravindavk - @SowjanyaKotha will reply on this. Really appreciate the quick response here 👍
@aravindavk The fault on the existing node's volume happens at different times. add-brick is one such case (most cases); it can happen at remove-brick as well.
@aravindavk any updates on this? We're hitting this issue consistently after a few attempts, hence pushing for a solution.
From the backtrace, I can see that … What were the steps used to set up the new node and the existing nodes (clients and servers)? Was a new SSL key generated on the new node (used in the add-brick command), or was the SSL key file reused from the existing node that was replaced? If cleanup is not done to …
I tested this in our lab but couldn't reproduce the crash. The steps I did were:
The details about the tests are available here:
@aravindavk A new certificate is created for the node. But the issue happens randomly. If the certificate is not correct, it should always fail. Would it matter if the cert location is not the default one? |
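Since the discussion turns on whether a fresh certificate was minted for the replacement node, here is a minimal sketch of generating one with `openssl`. The CN and the working directory are placeholders (most GlusterFS builds expect the files at `/etc/ssl/glusterfs.key`, `/etc/ssl/glusterfs.pem`, and `/etc/ssl/glusterfs.ca`); this is not the reporter's exact procedure.

```shell
# Hedged sketch: mint a fresh key and self-signed cert for the replacement
# node. "replacement-node" is a placeholder CN, not from the issue.
openssl genrsa -out glusterfs.key 2048
openssl req -new -x509 -key glusterfs.key \
        -subj "/CN=replacement-node" -days 365 -out glusterfs.pem

# The new cert must also be present in the CA bundle on every peer and
# client; a stale bundle on some nodes would explain intermittent TLS
# failures rather than a consistent one.
cat glusterfs.pem > glusterfs.ca
```

If the cert lives in a non-default location, the compiled-in default paths above are the ones worth double-checking on each node, since a mismatch on only some peers could produce exactly the kind of random failure described.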
Description of problem:
A two-node mirrored (replica 2) volume with clients installed on both nodes. When one of the nodes becomes faulty, it is removed and replaced with a new node with the same name/IP. While adding the brick back, the active client crashes. The issue occurs randomly when SSL is enabled for I/O; it is not seen in non-SSL setups.
The exact command to reproduce the issue:
gluster volume add-brick efa_logs replica 2 10.18.120.135:/apps/opt/efa/logs force
The full output of the command that failed:
Expected results:
add-brick should be successful
Mandatory info:
- The output of the `gluster volume info` command:
- The output of the `gluster volume status` command:
- The output of the `gluster volume heal` command:
- Provide logs present in the following locations on the client and server nodes: `/var/log/glusterfs/`
- Is there any crash? Provide the backtrace and coredump.
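For the backtrace request, a minimal sketch of extracting one from a client coredump (the binary path `/usr/sbin/glusterfs` is the usual client location but may differ per install, and the core path is a placeholder; no test is attached since it needs an actual coredump):

```shell
# Print a full backtrace of all threads from the crashed client, non-interactively.
gdb -batch -ex "thread apply all bt full" /usr/sbin/glusterfs /path/to/core > backtrace.txt
```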
Additional info:
- The operating system / glusterfs version:
It is reproducible with GlusterFS versions 9.6 and 11.0 on an Ubuntu setup installed from Debian packages.
Note: Please hide any confidential data which you don't want to share publicly, such as IP addresses, file names, hostnames, or any other configuration.