
Multiple Datacenters compete over seed service ownership #626

Closed
burmanm opened this issue Mar 27, 2024 · 0 comments · Fixed by #627
Labels: bug (Something isn't working), done (Issues in the state 'done')

Comments

burmanm (Contributor) commented Mar 27, 2024

What happened?

If there are two datacenters in the same namespace, their reconcilers compete over ownership of the seed service, which causes an infinite loop in the reconciliation process since the owner reference keeps being updated back and forth forever. This does not prevent any real actions from happening, but it causes unnecessary writes and logging.

cass-operator git:(master) ✗ kubectl get svc cluster1-seed-service -o yaml | grep name:
    app.kubernetes.io/name: cassandra
  name: cluster1-seed-service
    name: dc2
cass-operator git:(master) ✗ kubectl get svc cluster1-seed-service -o yaml | grep name:
    app.kubernetes.io/name: cassandra
  name: cluster1-seed-service
    name: dc2
cass-operator git:(master) ✗ kubectl get svc cluster1-seed-service -o yaml | grep name:
    app.kubernetes.io/name: cassandra
  name: cluster1-seed-service
    name: dc1
cass-operator git:(master) ✗ kubectl get svc cluster1-seed-service -o yaml | grep resourceVersion
  resourceVersion: "8937"
cass-operator git:(master) ✗ kubectl get svc cluster1-seed-service -o yaml | grep resourceVersion
  resourceVersion: "8962"

This started happening some time after the tests were updated to Kubernetes 1.28 and controller-runtime was bumped to a newer version, so around cass-operator 1.19.0.
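
For illustration, a minimal Go sketch of the kind of reconcile step that would produce this ping-pong (the helper name reconcileSeedService, its signature, and the exact logic are hypothetical, not the actual cass-operator code): if each datacenter's reconcile unconditionally makes its own CassandraDatacenter the owner of the shared <clusterName>-seed-service, two datacenters in the same namespace overwrite each other's owner reference on every pass, bump the resourceVersion, and re-trigger each other's watch.

// Hypothetical sketch, not the actual cass-operator code: each datacenter's
// reconcile forces its own CassandraDatacenter to be the sole owner of the
// shared seed service, so two dcs in one namespace keep updating it forever.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func reconcileSeedService(ctx context.Context, c client.Client, namespace, clusterName, dcName string, dcUID types.UID) error {
	// The seed service is shared by every datacenter of the cluster.
	var svc corev1.Service
	key := types.NamespacedName{Namespace: namespace, Name: clusterName + "-seed-service"}
	if err := c.Get(ctx, key, &svc); err != nil {
		return err
	}

	// The problematic step: the desired owner reference always points at the
	// datacenter currently being reconciled. dc1 and dc2 therefore keep
	// replacing each other's reference, which bumps resourceVersion and wakes
	// up the other controller again -- the endless write/log loop seen above.
	desired := metav1.OwnerReference{
		APIVersion: "cassandra.datastax.com/v1beta1",
		Kind:       "CassandraDatacenter",
		Name:       dcName,
		UID:        dcUID,
	}
	if len(svc.OwnerReferences) != 1 || svc.OwnerReferences[0].UID != desired.UID {
		svc.OwnerReferences = []metav1.OwnerReference{desired}
		return c.Update(ctx, &svc)
	}
	return nil
}

The actual fix landed in #627.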

What did you expect to happen?

No response

How can we reproduce it (as minimally and precisely as possible)?

Run the decommission_dc test.

cass-operator version

1.19.0

Kubernetes version

1.28

Method of installation

No response

Anything else we need to know?

No response

@burmanm burmanm added the bug Something isn't working label Mar 27, 2024
@burmanm burmanm self-assigned this Mar 27, 2024