[BUG] Unexpected rabbitmqCluster replica number when operator misses particular scale events #758
Comments
This issue might be hard to reproduce as it only manifests when a particular interleaving happens (e.g., when the reconcile goroutine runs slowly). We have recently been building a tool, https://github.com/sieve-project/sieve, to reliably detect and reproduce bugs like the one above in various k8s controllers. This bug can be reproduced automatically with just a few commands using our tool. Please give it a try if you also want to reproduce the bug. To use the tool, please first run the environment check:
If the checker passes, please build the Kubernetes and controller images using the commands below, as our tool needs to set up a kind cluster to reproduce the bug:
Finally, run
It will try to scale the rabbitmq cluster 1 -> 3 -> 2 and also manipulate the goroutine interleaving of the operator to trigger the corner-case interleaving mentioned above.
Ideally there should be 4 pods eventually (1 operator pod and 3 replicas), but after the test run we observe 3 pods (1 operator pod and 2 replicas). For more information please refer to https://github.com/sieve-project/sieve#sieve-testing-datacenter-infrastructures-using-partial-histories and https://github.com/sieve-project/sieve/blob/main/docs/reprod.md#rabbitmq-cluster-operator-758. Please let me know if you encounter any problems when reproducing the bug with the tool.
Hello @srteam2020, thank you for reporting this issue. We talked about this issue in our internal sync-up, and we acknowledge this is a flaw in the Operator. However, we think this situation is an edge case, and given that we haven't received further reports of this behaviour, we won't be able to get to this any time soon. If you would like to contribute a fix for this issue, we will very gladly work with you to validate the changes and merge a potential fix. Once again, thank you for reporting this issue, and thank you for testing this Operator.
Describe the bug
The rabbitmq operator has a check in `scaleDown` to prevent users from scaling down the statefulset (and PVCs) of the cluster; a sketch of this guard is shown below.
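For illustration, here is a minimal Go sketch of what such a guard looks like (the function name `scaleDownRequested` is ours, not the operator's; the real check compares the current and desired replica counts inside the operator's reconcile code):

```go
package main

import "fmt"

// scaleDownRequested reports whether applying desiredReplicas to a
// StatefulSet that currently has currentReplicas would shrink it.
// The operator rejects such changes instead of applying them.
func scaleDownRequested(currentReplicas, desiredReplicas int32) bool {
	return currentReplicas > desiredReplicas
}

func main() {
	fmt.Println(scaleDownRequested(3, 2)) // true: 3 -> 2 is rejected
	fmt.Println(scaleDownRequested(1, 2)) // false: 1 -> 2 is applied
}
```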
If the user tries to change the replica count from 3 to 2, the scale down should not happen and the replica count should remain 3.
We ran a workload that changes the replica count 1 -> 3 -> 2. Ideally, the cluster should end up with 3 replicas since scaling down is not allowed. However, in some cases we observed only 2 replicas at the end.
The reason is that the goroutine performing reconciliation (g1) and the goroutine updating the locally cached rabbitmqCluster object (g2) run concurrently, and certain interleavings of the two can lead to the unexpected scaling behavior.
Ideally, the interleaving leads to the correct result: g1 reconciles after each update made by g2, so it first sees replicas = 3 and scales the statefulset up to 3, and when it later sees replicas = 2 the scale-down check rejects the change, leaving the statefulset at 3 replicas.
However, we find that the following interleaving (when the operator's reconcile goroutine runs relatively slowly) leads to the unexpected scaling behavior mentioned above: g2 updates the cached rabbitmqCluster from 1 to 3 and then to 2 before g1 runs, so g1 never observes replicas = 3. When g1 finally reconciles, it compares the statefulset's current 1 replica against the desired 2, treats it as a scale up, and sets the statefulset to 2 replicas.
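To make the difference between the two interleavings concrete, here is a small self-contained Go sketch, a toy model rather than the operator's actual code, that replays both orderings of cache updates and reconcile calls:

```go
package main

import "fmt"

// Toy model of the StatefulSet managed by the operator (an assumption for
// illustration only, not the operator's actual data structures).
type cluster struct {
	stsReplicas int32 // replicas currently set on the StatefulSet
}

// reconcile applies the desired replica count read from the cached
// rabbitmqCluster spec, refusing to scale down, mirroring the guard above.
func (c *cluster) reconcile(desiredReplicas int32) {
	if desiredReplicas < c.stsReplicas {
		fmt.Printf("scale down %d -> %d rejected\n", c.stsReplicas, desiredReplicas)
		return
	}
	c.stsReplicas = desiredReplicas
}

func main() {
	// Expected interleaving: g1 reconciles after each cache update made by
	// g2, so it sees 3 before 2 and the StatefulSet keeps 3 replicas.
	good := &cluster{stsReplicas: 1}
	good.reconcile(3)
	good.reconcile(2)
	fmt.Println("expected interleaving, final replicas:", good.stsReplicas) // 3

	// Buggy interleaving: g2 updates the cache 1 -> 3 -> 2 before the slow
	// g1 runs, so g1 only ever sees 2; 1 -> 2 looks like a scale up and the
	// StatefulSet ends with 2 replicas.
	bad := &cluster{stsReplicas: 1}
	bad.reconcile(2)
	fmt.Println("buggy interleaving, final replicas:", bad.stsReplicas) // 2
}
```

With the buggy ordering the guard is never exercised because, from the operator's point of view, no scale down ever happened.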
To Reproduce
As mentioned above, we change the replica count 1 -> 3 -> 2. When the operator misses the view of "replicas = 3" and directly sees "replicas = 2", the rabbitmq statefulset ends up with 2 replicas instead of 3.
Expected behavior
Since scale down is not allowed, we should have 3 replicas eventually.
Version and environment information
Additional context
This issue is probably hard to fix from the operator side, as how the operator reads events from users (e.g., scaling up/down) and how the different goroutines interleave with each other are decided by the `controller-runtime` package, not the operator code. We didn't find a good way to solve this issue with the existing `controller-runtime` package.

Instead, we can at least make the potential debugging process easier when other users encounter similar problems by adding more logs. For example, we can log the `currentReplicas` and `desiredReplicas` in `scaleDown` so users can see the exact sequence of replica numbers observed by the operator. We are willing to send a PR to add one more log here.