Active-active replication is broken in 6.0.18-6.2.1 #389
Comments
I don't know if it's related, but #378 gave me the inspiration to test with an older version of KeyDB, so I thought I'd mention it.
@nicknezis Did it work with an older version? Which version?
6.0.16 seemed to behave.
I think the link you posted to the helm chart is wrong?
Thanks for catching that. I've updated the link to the proper URL: https://artifacthub.io/packages/helm/enapter/keydb
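For reference, a minimal install sketch of that chart; the repo URL and release name below are assumptions, so the Artifact Hub page above has the canonical instructions:

```sh
# Assumed chart repo URL and release name; verify against the Artifact Hub page.
helm repo add enapter https://enapter.github.io/charts
helm repo update
# Default values stand up a multi-master (active-active) KeyDB deployment.
helm install keydb enapter/keydb
```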
Is anyone still experiencing this issue? It would help us understand how to prioritize it.
Closing as there has been no response in 30 days.
Describe the bug
I spin up an active-active cluster in Kubernetes with default values (based on the enapter/keydb helm chart). I then load a fake dataset with 6 million key-value pairs using `cat data.txt | keydb-cli --pipe`.
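For reference, a minimal sketch of how such a dataset could be generated (the key/value names are hypothetical, not the actual data); `--pipe` accepts plain inline commands like these, though the raw Redis protocol format is generally recommended for very large mass inserts:

```sh
# Hypothetical fake dataset: 6 million SET commands, one per line.
seq 1 6000000 | awk '{ print "SET key:" $1 " value:" $1 }' > data.txt
# Mass-insert into the KeyDB instance.
cat data.txt | keydb-cli --pipe
```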
When monitoring each pod with `kubectl logs` and `watch -d ls -lart /data/`, the replication of data never completes and eventually ends with some pods spewing errors to the log. During the load, temp.rdb files are present alongside the main dump.rdb file in each pod, and these temp files never fully go away.
When downgrading the version of the KeyDB Docker image to 6.0.16, the buggy behavior is not present. With this version the dataset loads successfully, the cluster reaches a steady state with a single dump.rdb in each pod, and there are no constant resync attempts that ultimately fail.
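The downgrade itself can be sketched as follows; the values key for the image tag and the pod name are assumptions, so verify them against the chart's values.yaml and your deployment:

```sh
# Hypothetical values key; check the enapter/keydb chart's values.yaml
# for the actual image tag setting.
helm upgrade keydb enapter/keydb --set imageVersion=v6.0.16
# Confirm each pod reports a healthy replication state (INFO replication
# is standard Redis/KeyDB; the pod name here is hypothetical).
kubectl exec keydb-0 -- keydb-cli info replication
```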