
[bug]: ZookeeperCluster crd's status disappear #350

Closed
gxglls opened this issue Jun 17, 2021 · 7 comments · Fixed by #351

gxglls (Contributor) commented Jun 17, 2021

Description

The ZookeeperCluster CRD's status disappears after installing zookeeper-operator/zookeeper version 0.2.11: there are no fields in the ZookeeperCluster resource's status even though the zookeeper-operator and zookeeper pods are running.

The ZookeeperCluster status displays correctly with zookeeper-operator/zookeeper version 0.2.8.
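
For reference, one way to confirm the empty status (a minimal sketch, assuming the ZookeeperCluster resource is named zookeeper in the default namespace, as in the output further below) is:

# Print only the .status block of the ZookeeperCluster resource;
# it comes back empty on 0.2.11 but is populated on 0.2.8.
kubectl get zk zookeeper -n default -o jsonpath='{.status}'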

Importance

must-have

anishakj (Contributor) commented Jun 17, 2021

@gxglls, what is the Kubernetes version in your cluster? Also, could you please share the status output?
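
For example, both can be collected with something like the following (a sketch; the resource name zookeeper and the default namespace are assumptions from later in the thread):

# Kubernetes client and server versions
kubectl version
# ZookeeperCluster resource details, including the status section
kubectl describe zk zookeeper -n default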

anishakj (Contributor) commented:

I am able to see the status as below:

kubectl get zk -n pravega-test
NAME        REPLICAS   READY REPLICAS   VERSION              DESIRED VERSION      INTERNAL ENDPOINT   EXTERNAL ENDPOINT   AGE
zookeeper   3          3                0.2.10-177-ba22332   0.2.10-177-ba22332   172.30.94.28:2181   N/A                 20d

gxglls (Contributor, Author) commented Jun 18, 2021

k8s version 1.21

zk operator 0.2.11:


root@ziyin-k8s-clean:/root # helm list
NAME                    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART                           APP VERSION
zookeeper               default         1               2021-06-17 22:14:20.952710084 -0400 EDT         deployed        zookeeper-0.2.11                0.2.11
zookeeper-operator      default         1               2021-06-17 22:02:59.782076918 -0400 EDT         deployed        zookeeper-operator-0.2.11       0.2.11

root@ziyin-k8s-clean:/root # kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
zookeeper-0                           1/1     Running   1          2m49s
zookeeper-1                           1/1     Running   1          77s
zookeeper-2                           1/1     Running   0          53s
zookeeper-operator-7fc7b5fcf6-k27qs   1/1     Running   0          14m

root@ziyin-k8s-clean:/root # kubectl get zk
NAME        REPLICAS   READY REPLICAS   VERSION   DESIRED VERSION   INTERNAL ENDPOINT   EXTERNAL ENDPOINT   AGE
zookeeper   3                                     0.2.11

zk operator 0.2.8:

root@ziyin-k8s-clean:/root # helm list
NAME                    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART                           APP VERSION
zookeeper               default         1               2021-06-17 22:23:59.799809175 -0400 EDT         deployed        zookeeper-0.2.8                 0.2.8
zookeeper-operator      default         1               2021-06-17 22:22:53.344166248 -0400 EDT         deployed        zookeeper-operator-0.2.8        0.2.8

root@ziyin-k8s-clean:/root # kubectl get zk
NAME        REPLICAS   READY REPLICAS   VERSION   DESIRED VERSION   INTERNAL ENDPOINT    EXTERNAL ENDPOINT   AGE
zookeeper   3          3                0.2.8     0.2.8             10.98.144.119:2181   N/A                 8m53s

root@ziyin-k8s-clean:/root # kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
zookeeper-0                           1/1     Running   0          11m
zookeeper-1                           1/1     Running   0          10m
zookeeper-2                           1/1     Running   0          9m51s
zookeeper-operator-78675f5c56-6ngf8   1/1     Running   0          12m
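
If the status stays empty as in the 0.2.11 output above, one follow-up worth trying (a sketch, assuming the operator Deployment is named zookeeper-operator in the default namespace, as in the helm output above) is to check the operator logs for reconcile or status-update errors:

# Tail the most recent zookeeper-operator log lines and look for errors
kubectl logs deploy/zookeeper-operator --tail=100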

gxglls (Contributor, Author) commented Jun 18, 2021

> I am able to see the status as below:
>
> kubectl get zk -n pravega-test
> NAME        REPLICAS   READY REPLICAS   VERSION              DESIRED VERSION      INTERNAL ENDPOINT   EXTERNAL ENDPOINT   AGE
> zookeeper   3          3                0.2.10-177-ba22332   0.2.10-177-ba22332   172.30.94.28:2181   N/A                 20d

Try 0.2.11?

anishakj (Contributor) commented:

> Try 0.2.11?

It is showing there as well:

kubectl get zk
NAME        REPLICAS   READY REPLICAS   VERSION   DESIRED VERSION   INTERNAL ENDPOINT     EXTERNAL ENDPOINT   AGE
zookeeper   3          3                0.2.11    0.2.11            10.100.200.143:2181   N/A                 2m33s

anishakj (Contributor) commented:

Which API version is the CRD in your cluster? In my cluster it is v1beta1.
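
One way to check (a sketch; the CRD name zookeeperclusters.zookeeper.pravega.io is assumed from the operator's API group):

# Dump the ZookeeperCluster CRD as stored in the cluster; spec.versions shows the
# served version(s) and whether the status subresource is declared for each.
kubectl get crd zookeeperclusters.zookeeper.pravega.io -o yaml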

anishakj (Contributor) commented:

@gxglls I am able to reproduce the issue with CRD version v1.
