Readiness probe failed: command "/opt/scripts/readinessprobe" timed out on DB Pod when installing on Mac M1, even when using arm64 images #1420
Comments
operator pod logs:
The problem is with the mongodb-agent, which doesn't have an official arm64 image yet (https://quay.io/repository/mongodb/mongodb-agent?tab=tags), while the operator, version-upgrade-post-start-hook, and readinessprobe do. So when installing MongoDB via the operator (which points to the quay.io images), the agent that gets installed is not compatible with the M1, and the mongo pods hang.
Hopefully an official arm64 image for the mongodb-agent will be released soon, so that operator 0.8.3 can be used.
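One way to check which architectures a given image actually publishes is to inspect its manifest; a minimal sketch, assuming Docker is available locally (the tags below are taken from images mentioned in this thread, substitute the ones you deploy):

```sh
# Print the architectures listed in each image's manifest;
# "arm64" must appear for the image to run natively on an M1
docker manifest inspect quay.io/mongodb/mongodb-kubernetes-operator:0.9.0 \
  | grep '"architecture"'
docker manifest inspect quay.io/mongodb/mongodb-agent-ubi:107.0.6.8587-1-arm64 \
  | grep '"architecture"'
```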
@vinnytwice
This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.
Let me assign this to @wtrocki |
@vinnytwice you are pointing to the wrong agent image; the correct one is: https://quay.io/repository/mongodb/mongodb-agent-ubi?tab=tags
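For anyone who only needs to swap the agent image, a minimal sketch of overriding it on the command line instead of editing values.yaml; the chart and release names here are assumptions, and the keys (agent.name, agent.version, registry.agent) come from the chart values pasted further down in this thread:

```sh
# Point the chart at the UBI agent image, which publishes arm64 tags
helm upgrade --install community-operator mongodb/community-operator \
  --set agent.name=mongodb-agent-ubi \
  --set agent.version=107.0.6.8587-1-arm64 \
  --set registry.agent=quay.io/mongodb
```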
@nammn Hi, and thanks for pointing that out.
I tried the fresh images on my Raspberry Pi cluster: still no luck.
This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.
@aleksandrov I got it working on an Orange Pi, which means you should be able to run it on an RPi. Below is my operator config:

```yaml
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: "image-pull-secret"

## Operator
operator:
  # Name that will be assigned to most of internal Kubernetes objects like
  # Deployment, ServiceAccount, Role etc.
  name: mongodb-kubernetes-operator
  # Name of the operator image
  operatorImageName: mongodb-kubernetes-operator
  # Name of the deployment of the operator pod
  deploymentName: mongodb-kubernetes-operator
  # Version of mongodb-kubernetes-operator
  # version: 0.8.1
  version: 0.9.0
  # Uncomment this line to watch all namespaces
  # watchNamespace: "*"
  # Resources allocated to Operator Pod
  resources:
    limits:
      cpu: 750m
      memory: 500Mi
    requests:
      cpu: 300m
      memory: 200Mi
  # replicas deployed for the operator pod. Running 1 is optimal and suggested.
  replicas: 1
  # Additional environment variables
  extraEnvs: []
  # environment:
  # - name: CLUSTER_DOMAIN
  #   value: my-cluster.domain
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 2000
  securityContext: {}

## Operator's database
database:
  name: mongodb-database
  # set this to the namespace where you would like
  # to deploy the MongoDB database,
  # Note if the database namespace is not same
  # as the operator namespace,
  # make sure to set "watchNamespace" to "*"
  # to ensure that the operator has the
  # permission to reconcile resources in other namespaces
  # namespace: mongodb-database

agent:
  name: mongodb-agent-ubi
  version: 107.0.6.8587-1-arm64
versionUpgradeHook:
  name: mongodb-kubernetes-operator-version-upgrade-post-start-hook
  version: 1.0.8
readinessProbe:
  name: mongodb-kubernetes-readinessprobe
  version: 1.0.17
mongodb:
  name: mongo
  repo: docker.io

registry:
  # agent: docker.io/mohsinonxrm
  # versionUpgradeHook: docker.io/mohsinonxrm
  # readinessProbe: docker.io/mohsinonxrm
  # operator: docker.io/mohsinonxrm
  agent: quay.io/mongodb
  versionUpgradeHook: quay.io/mongodb
  readinessProbe: quay.io/mongodb
  operator: quay.io/mongodb
  pullPolicy: Always

# Set to false if CRDs have been installed already. The CRDs can be installed
# manually from the code repo: github.com/mongodb/mongodb-kubernetes-operator or
# using the `community-operator-crds` Helm chart.
community-operator-crds:
  enabled: true
```
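To install or upgrade the operator with a values file like the one above, something along these lines should work; a minimal sketch, assuming the community operator chart from MongoDB's Helm repository (the release name and namespace are your choice):

```sh
# Add MongoDB's Helm repository (skip if already added)
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update

# Install or upgrade the community operator with the custom values file
helm upgrade --install community-operator mongodb/community-operator \
  --namespace mongodb --create-namespace \
  -f values.yaml
```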
This issue was closed because it became stale and did not receive further updates. If the issue is still affecting you, please re-open it, or file a fresh issue with updated information.
What did you do to encounter the bug?
Steps to reproduce the behavior:
```sh
kubectl get po -w
kubectl describe po mongo-rs-0
```
What did you expect?
MongoDB rs pod running, as it does when I install the same charts on Hetzner Cloud (though on an x86 processor) running the rancher/k3s:v1.27.4-k3s1 image.
The only difference is that on the local K3D cluster I declare PVs for a physical external SSD, and the SC provisions those instead of the cloud provider's volumes.
What happened instead?
mongo-rs-0 is stuck in the 1/2 Ready state, though PVs are created and data and logs are populated on the disk.
Things I tried:
Overriding the MongoDB community operator Helm chart, pointing directly to arm64 images for the operator and the readiness probe (I couldn't find an arm64 image for the agent), using a custom values.yaml.
But the reason for the readiness probe failing is now:
Readiness probe failed: panic: open /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json: no such file or directory
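That panic suggests the readiness probe ran before the automation agent wrote its health file, which is what happens when the agent binary can't start on the node's architecture. A quick way to check, assuming the community operator's default container name mongodb-agent and the pod name from this issue:

```sh
# If the agent is healthy, this file exists and contains its status JSON;
# if the agent binary can't run on arm64, the file never appears
kubectl exec mongo-rs-0 -c mongodb-agent -- \
  cat /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json

# The agent container's logs typically show an "exec format error"
# when an amd64-only image lands on an arm64 node
kubectl logs mongo-rs-0 -c mongodb-agent
```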
Screenshots
Operator Information
I tried different combinations of operator and DB versions.
Kubernetes Cluster Information
K3D local cluster running the rancher/k3s:v1.27.4-k3s1 image, on a MacBook Air M1 with an external SSD used for PVs.
kubectl describe output:
YAML definitions
mongo rs:
PVs:
values: