After installing the Milvus Helm chart, the my-milvus-zookeeper-0 pod never becomes ready: the ZooKeeper container has restarted 57 times and sits in CrashLoopBackOff, and its log shows only an exec format error from the setup script:

```
[root@master containers]# kubectl logs my-milvus-zookeeper-0
exec /scripts/setup.sh: exec format error
[root@master containers]# kubectl describe pod my-milvus-zookeeper-0
Name:             my-milvus-zookeeper-0
Namespace:        default
Priority:         0
Node:             master/192.168.6.242
Start Time:       Fri, 17 May 2024 10:24:16 +0800
Labels:           app.kubernetes.io/component=zookeeper
                  app.kubernetes.io/instance=my-milvus
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=zookeeper
                  controller-revision-hash=my-milvus-zookeeper-76fd4b8cf7
                  helm.sh/chart=zookeeper-8.1.2
                  statefulset.kubernetes.io/pod-name=my-milvus-zookeeper-0
Annotations:      cni.projectcalico.org/containerID: 3f63624af437de0a6227b24d364ac49377704c9e36a93251b81d0b748babcad4
                  cni.projectcalico.org/podIP: 10.244.219.115/32
                  cni.projectcalico.org/podIPs: 10.244.219.115/32
Status:           Running
IP:               10.244.219.115
IPs:
  IP:  10.244.219.115
Controlled By:    StatefulSet/my-milvus-zookeeper
Containers:
  zookeeper:
    Container ID:   docker://a8f4be73fc54aabf0265a8a181c29a36f4fe4badcb1b949dd30c41b464603293
    Image:          docker.io/bitnami/zookeeper:3.7.0-debian-10-r320
    Image ID:       docker-pullable://bitnami/zookeeper@sha256:c19c5473ef3feb8a0db00b92891c859915d06f7b888be4b3fdb78aaca109cd1f
    Ports:          2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    Command:
      /scripts/setup.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 17 May 2024 14:50:32 +0800
      Finished:     Fri, 17 May 2024 14:50:32 +0800
    Ready:          False
    Restart Count:  57
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     250m
      memory:  256Mi
    Liveness:   exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:               false
      ZOO_DATA_LOG_DIR:
      ZOO_PORT_NUMBER:             2181
      ZOO_TICK_TIME:               2000
      ZOO_INIT_LIMIT:              10
      ZOO_SYNC_LIMIT:              5
      ZOO_PRE_ALLOC_SIZE:          65536
      ZOO_SNAPCOUNT:               100000
      ZOO_MAX_CLIENT_CNXNS:        60
      ZOO_4LW_COMMANDS_WHITELIST:  srvr, mntr, ruok
      ZOO_LISTEN_ALLIPS_ENABLED:   no
      ZOO_AUTOPURGE_INTERVAL:      0
      ZOO_AUTOPURGE_RETAIN_COUNT:  3
      ZOO_MAX_SESSION_TIMEOUT:     40000
      ZOO_SERVERS:                 my-milvus-zookeeper-0.my-milvus-zookeeper-headless.default.svc.cluster.local:2888:3888::1 my-milvus-zookeeper-1.my-milvus-zookeeper-headless.default.svc.cluster.local:2888:3888::2 my-milvus-zookeeper-2.my-milvus-zookeeper-headless.default.svc.cluster.local:2888:3888::3
      ZOO_ENABLE_AUTH:             no
      ZOO_HEAP_SIZE:               1024
      ZOO_LOG_LEVEL:               ERROR
      ALLOW_ANONYMOUS_LOGIN:       yes
      POD_NAME:                    my-milvus-zookeeper-0 (v1:metadata.name)
    Mounts:
      /bitnami/zookeeper from data (rw)
      /scripts/setup.sh from scripts (rw,path="setup.sh")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sbjvl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-milvus-zookeeper-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-milvus-zookeeper-scripts
    Optional:  false
  kube-api-access-sbjvl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:        Burstable
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                       From     Message
  Normal   Pulled   39m (x51 over 4h29m)      kubelet  Container image "docker.io/bitnami/zookeeper:3.7.0-debian-10-r320" already present on machine
  Warning  BackOff  4m16s (x1319 over 4h29m)  kubelet  Back-off restarting failed container
```
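
An `exec format error` comes from the kernel's exec call: the file at /scripts/setup.sh, or the interpreter it names, is not in a format the node can execute. The most common cause is a CPU architecture mismatch between the node and the image (for example, an amd64-only image pulled onto an arm64 host). A minimal check, using the node name and image tag from the describe output above; the jsonpath and docker inspect fields are standard, but exact output formatting may vary by client version:

```bash
# CPU architecture of the node the pod is scheduled on
kubectl get node master -o jsonpath='{.status.nodeInfo.architecture}'; echo

# OS/architecture the locally cached image was built for
docker image inspect docker.io/bitnami/zookeeper:3.7.0-debian-10-r320 \
  --format '{{.Os}}/{{.Architecture}}'
```

If the two disagree, the usual fixes are scheduling the pod onto a node with a matching architecture or switching to an image tag that is published for the node's architecture.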
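
Since /scripts/setup.sh is mounted from a ConfigMap, the script contents are the other thing worth ruling out: the kernel returns the same ENOEXEC error when the first line is not a clean `#!` shebang, for example because the shebang is missing or a UTF-8 byte-order mark precedes it. A quick inspection sketch, assuming the ConfigMap key is setup.sh as the mount path suggests:

```bash
# First line of the mounted script with control characters made visible.
# Expect it to start with "#!/bin/bash"; a leading "M-oM-;M-?" is a UTF-8
# BOM, and a trailing "^M" before "$" indicates CRLF line endings.
kubectl get configmap my-milvus-zookeeper-scripts \
  -o jsonpath='{.data.setup\.sh}' | head -n 1 | cat -A
```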