
Error on OSD pod: "MountVolume.SetUp failed for volume ..." using ceph-helm on GKE #72

vladbrk opened this issue Oct 19, 2018 · 5 comments


vladbrk commented Oct 19, 2018

I'm trying to deploy Ceph on GKE (Kubernetes) using ceph-helm, but after running "helm install ..." the OSD pods can't be created, failing with "MountVolume.SetUp failed for volume ...". The full error is below.

GKE environment:
2 nodes: n1-standard-1 (1 vCPU, 3.75 GB RAM, 100 GB HDD, 2 mounted local SSDs of 375 GB each)
Kubernetes: 1.9.7-gke.6 or 1.10.7-gke.6

I followed these instructions (my script is below):
http://docs.ceph.com/docs/mimic/start/kube-helm/
but it fails at the "helm install ..." step.

kubectl get pods -n ceph

NAME                                        READY     STATUS                  RESTARTS   AGE
ceph-mds-5696f9df5d-nmgtb                   0/1       Pending                 0          12m
ceph-mds-keyring-generator-rh9tr            0/1       Completed               0          12m
ceph-mgr-8656b978df-w4mt6                   1/1       Running                 2          12m
ceph-mgr-keyring-generator-t2x7j            0/1       Completed               0          12m
ceph-mon-check-7d49bd686c-nmpw5             1/1       Running                 0          12m
ceph-mon-keyring-generator-hpbcg            0/1       Completed               0          12m
ceph-mon-xjjs4                              3/3       Running                 0          12m
ceph-namespace-client-key-generator-np2kv   0/1       Completed               0          12m
ceph-osd-dev-sdb-5wzs6                      0/1       Init:CrashLoopBackOff   6          12m
ceph-osd-dev-sdb-zwldd                      0/1       Init:CrashLoopBackOff   6          12m
ceph-osd-dev-sdc-qsqpl                      0/1       Init:CrashLoopBackOff   6          12m
ceph-osd-dev-sdc-x4722                      0/1       Init:CrashLoopBackOff   6          12m
ceph-osd-keyring-generator-xlmmb            0/1       Completed               0          12m
ceph-rbd-provisioner-5544dcbcf5-gb9ws       1/1       Running                 0          12m
ceph-rbd-provisioner-5544dcbcf5-hnmjm       1/1       Running                 0          12m
ceph-rgw-65b4bd8cc5-24fxz                   0/1       Pending                 0          12m
ceph-rgw-keyring-generator-4fp2j            0/1       Completed               0          12m
ceph-storage-keys-generator-x5nzl           0/1       Completed               0          12m

Describing the failed OSD pod shows:

Events:
  Type     Reason                 Age                From                                                        Message
  ----     ------                 ----               ----                                                        -------
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "run-udev"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "pod-var-lib-ceph"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "pod-run"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "ceph-etc"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "ceph-bin"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "devices"
  Normal   SuccessfulMountVolume  14m                kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp succeeded for volume "default-token-mdmzq"
  Warning  FailedMount            14m (x3 over 14m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp failed for volume "ceph-client-admin-keyring" : secrets "ceph-client-admin-keyring" not found
  Warning  FailedMount            14m (x3 over 14m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp failed for volume "ceph-bootstrap-mds-keyring" : secrets "ceph-bootstrap-mds-keyring" not found
  Warning  FailedMount            14m (x4 over 14m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp failed for volume "ceph-bootstrap-osd-keyring" : secrets "ceph-bootstrap-osd-keyring" not found
  Warning  FailedMount            14m (x4 over 14m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp failed for volume "ceph-mon-keyring" : secrets "ceph-mon-keyring" not found
  Warning  FailedMount            14m (x4 over 14m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  MountVolume.SetUp failed for volume "ceph-bootstrap-rgw-keyring" : secrets "ceph-bootstrap-rgw-keyring" not found
  Normal   Pulled                 9m (x5 over 11m)   kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
  Warning  BackOff                4m (x29 over 11m)  kubelet, gke-standard-cluster-2-default-pool-8b55990f-s264  Back-off restarting failed container
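
The missing-secret errors can probably be cross-checked with something like this (the generator pod name is taken from the listing above):

# list the keyring secrets the OSD pods expect to mount
kubectl -n ceph get secrets | grep keyring
# check what the OSD keyring generator job actually produced
kubectl -n ceph logs ceph-osd-keyring-generator-xlmmb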

Command "lsblk -f" on both nodes shows

NAME   FSTYPE LABEL           UUID                                 MOUNTPOINT
sdb                                                                /mnt/disks/ssd0
├─sdb2                                                             
└─sdb1                                                             
sdc                                                                /mnt/disks/ssd1
├─sdc2                                                             
└─sdc1                                                             
sda                                                                
└─sda1 ext4   cloudimg-rootfs 819b0621-c9ea-4d69-b955-966a1b7c9cff /

Command "gdisk -l /dev/sdb" (and sdc) on osd nodes shows

GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 98304000 sectors, 375.0 GiB
Logical sector size: 4096 bytes
Disk identifier (GUID): 86E73D28-8AD8-4F5A-B58C-12C61E508C96
Partition table holds up to 128 entries
First usable sector is 6, last usable sector is 98303994
Partitions will be aligned on 256-sector boundaries
Total free space is 250 sectors (1000.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1         1310976        98303994   370.0 GiB   F804  ceph data
   2             256         1310975   5.0 GiB     F802  ceph journal

The commands below show no errors:
kubectl logs -n ceph pod/ceph-mon-xjjs4 -c ceph-mon | grep error
kubectl logs -n ceph pod/ceph-mon-xjjs4 -c cluster-log-tailer | grep error
kubectl logs -n ceph pod/ceph-mon-xjjs4 -c cluster-audit-log-tailer | grep error

Output "kubectl logs -n ceph pod/ceph-mon-xjjs4 -c ceph-mon"

+ export LC_ALL=C
+ LC_ALL=C
+ source variables_entrypoint.sh
++ ALL_SCENARIOS='osd osd_directory osd_directory_single osd_ceph_disk osd_ceph_disk_prepare osd_ceph_disk_activate osd_ceph_activate_journal mgr'
++ : ceph
++ : ceph-config/ceph
++ : 172.21.0.0/20
++ : mon
++ : 0
++ : gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : /var/lib/ceph/mon/monmap
++ : /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : 1
++ : 0
++ : mds-gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : 0
++ : 100
++ : 0
++ : 0
+++ uuidgen
++ : 63d877a7-1d14-4882-bdde-a6995abbf4a3
+++ uuidgen
++ : d7796d8c-7c16-4765-bcb4-e49b4e34c8cf
++ : root=default host=gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : 0
++ : cephfs
++ : cephfs_data
++ : 8
++ : cephfs_metadata
++ : 8
++ : gke-standard-cluster-2-default-pool-8b55990f-bj85
++ :
++ :
++ : 8080
++ : 0
++ : 9000
++ : 0.0.0.0
++ : cephnfs
++ : gke-standard-cluster-2-default-pool-8b55990f-bj85
++ : 0.0.0.0
++ CLI_OPTS='--cluster ceph'
++ DAEMON_OPTS='--cluster ceph --setuser ceph --setgroup ceph -d'
++ MOUNT_OPTS='-t xfs -o noatime,inode64'
++ MDS_KEYRING=/var/lib/ceph/mds/ceph-mds-gke-standard-cluster-2-default-pool-8b55990f-bj85/keyring
++ ADMIN_KEYRING=/etc/ceph/ceph.client.admin.keyring
++ MON_KEYRING=/etc/ceph/ceph.mon.keyring
++ RGW_KEYRING=/var/lib/ceph/radosgw/gke-standard-cluster-2-default-pool-8b55990f-bj85/keyring
++ MGR_KEYRING=/var/lib/ceph/mgr/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/keyring
++ MDS_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-mds/ceph.keyring
++ RGW_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-rgw/ceph.keyring
++ OSD_BOOTSTRAP_KEYRING=/var/lib/ceph/bootstrap-osd/ceph.keyring
++ OSD_PATH_BASE=/var/lib/ceph/osd/ceph
+ source common_functions.sh
++ set -ex
+ [[ -z 172.21.0.0/20 ]]
+ [[ -z 10.140.0.2 ]]
+ [[ -z 10.140.0.2 ]]
+ [[ -z 172.21.0.0/20 ]]
+ get_mon_config
++ ceph-conf --lookup fsid -c /etc/ceph/ceph.conf
+ local fsid=ba3982a0-3a07-45b6-b69b-02bc37deeb00
+ timeout=10
+ MONMAP_ADD=
+ [[ -z '' ]]
+ [[ 10 -gt 0 ]]
+ [[ 1 -eq 0 ]]
++ kubectl get pods --namespace=ceph -l application=ceph -l component=mon -o template '--template={{range .items}}{{if .status.podIP}}--add {{.spec.nodeName}} {{.status.podIP}} {{end}} {{end}}'
+ MONMAP_ADD='--add gke-standard-cluster-2-default-pool-8b55990f-bj85 10.140.0.2  '
+ ((  timeout--  ))
+ sleep 1
+ [[ -z --addgke-standard-cluster-2-default-pool-8b55990f-bj8510.140.0.2 ]]
+ [[ -z --addgke-standard-cluster-2-default-pool-8b55990f-bj8510.140.0.2 ]]
+ '[' -f /var/lib/ceph/mon/monmap ']'
+ monmaptool --create --add gke-standard-cluster-2-default-pool-8b55990f-bj85 10.140.0.2 --fsid ba3982a0-3a07-45b6-b69b-02bc37deeb00 /var/lib/ceph/mon/monmap --clobber
monmaptool: monmap file /var/lib/ceph/mon/monmap
monmaptool: set fsid to ba3982a0-3a07-45b6-b69b-02bc37deeb00
monmaptool: writing epoch 0 to /var/lib/ceph/mon/monmap (1 monitors)
+ chown ceph. /var/log/ceph
+ [[ ! -e /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/keyring ]]
+ [[ ! -e /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/done ]]
+ '[' '!' -e /etc/ceph/ceph.mon.keyring.seed ']'
+ cp -vf /etc/ceph/ceph.mon.keyring.seed /etc/ceph/ceph.mon.keyring
'/etc/ceph/ceph.mon.keyring.seed' -> '/etc/ceph/ceph.mon.keyring'
+ '[' '!' -e /var/lib/ceph/mon/monmap ']'
+ for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
+ ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
+ for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
+ ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-mds/ceph.keyring into /etc/ceph/ceph.mon.keyring
+ for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
+ ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-rgw/ceph.keyring into /etc/ceph/ceph.mon.keyring
+ for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$ADMIN_KEYRING'
+ ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
+ ceph-mon --setuser ceph --setgroup ceph --cluster ceph --mkfs -i gke-standard-cluster-2-default-pool-8b55990f-bj85 --monmap /var/lib/ceph/mon/monmap --keyring /etc/ceph/ceph.mon.keyring --mon-data /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85
+ touch /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/done
+ log SUCCESS
+ '[' -z SUCCESS ']'
++ date '+%F %T'
2018-10-19 18:44:44  /start_mon.sh: SUCCESS
+ TIMESTAMP='2018-10-19 18:44:44'
+ echo '2018-10-19 18:44:44  /start_mon.sh: SUCCESS'
+ return 0
+ exec /usr/bin/ceph-mon --cluster ceph --setuser ceph --setgroup ceph -d -i gke-standard-cluster-2-default-pool-8b55990f-bj85 --mon-data /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85 --public-addr 10.140.0.2:6789
2018-10-19 18:44:44.694633 7fcdefd43f00  0 set uid:gid to 64045:64045 (ceph:ceph)
2018-10-19 18:44:44.694866 7fcdefd43f00  0 ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable), process (unknown), pid 1
2018-10-19 18:44:44.695016 7fcdefd43f00  0 pidfile_write: ignore empty --pid-file
2018-10-19 18:44:44.702045 7fcdefd43f00  0 load: jerasure load: lrc load: isa
2018-10-19 18:44:44.702378 7fcdefd43f00  0  set rocksdb option compression = kNoCompression
2018-10-19 18:44:44.702472 7fcdefd43f00  0  set rocksdb option write_buffer_size = 33554432
2018-10-19 18:44:44.702548 7fcdefd43f00  0  set rocksdb option compression = kNoCompression
2018-10-19 18:44:44.702609 7fcdefd43f00  0  set rocksdb option write_buffer_size = 33554432
2018-10-19 18:44:44.702874 7fcdefd43f00  4 rocksdb: RocksDB version: 5.4.0

2018-10-19 18:44:44.702934 7fcdefd43f00  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2018-10-19 18:44:44.702970 7fcdefd43f00  4 rocksdb: Compile date Feb 19 2018
2018-10-19 18:44:44.703023 7fcdefd43f00  4 rocksdb: DB SUMMARY

2018-10-19 18:44:44.703128 7fcdefd43f00  4 rocksdb: CURRENT file:  CURRENT

2018-10-19 18:44:44.703183 7fcdefd43f00  4 rocksdb: IDENTITY file:  IDENTITY

2018-10-19 18:44:44.703225 7fcdefd43f00  4 rocksdb: MANIFEST file:  MANIFEST-000001 size: 13 Bytes

2018-10-19 18:44:44.703281 7fcdefd43f00  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/store.db dir, Total Num: 0, files:

2018-10-19 18:44:44.703333 7fcdefd43f00  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-gke-standard-cluster-2-default-pool-8b55990f-bj85/store.db: 000003.log size: 1103 ;

2018-10-19 18:44:44.703369 7fcdefd43f00  4 rocksdb:                         Options.error_if_exists: 0
2018-10-19 18:44:44.703422 7fcdefd43f00  4 rocksdb:                       Options.create_if_missing: 0
2018-10-19 18:44:44.703457 7fcdefd43f00  4 rocksdb:                         Options.paranoid_checks: 1
...

My script for the Ceph installation:

#INSTALL AND START HELM
sudo apt-get update
mkdir ~/helm
cd ~/helm
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-386.tar.gz
tar -zxvf helm-v2.11.0-linux-386.tar.gz
sudo mv linux-386/helm /usr/local/bin/helm

# RUN PROXY
kubectl proxy --port=8080

#CONFIGURE TILLER RBAC AND INITIALIZE HELM
cd ~
cat<<EOF>tiller-rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
kubectl create -f tiller-rbac-config.yaml
helm init --service-account tiller

helm serve
helm repo add local http://localhost:8879/charts

# ADD CEPH-HELM TO HELM LOCAL REPOS
git clone https://github.com/ceph/ceph-helm
cd ceph-helm/ceph
make

#CONFIGURE YOUR CEPH CLUSTER
cd ~
cat<<EOF>ceph-overrides.yaml
network:
  public:   172.21.0.0/20
  cluster:   172.21.0.0/20
osd_devices:
  - name: dev-sdb
    device: /dev/sdb
    zap: "1"
  - name: dev-sdc
    device: /dev/sdc
    zap: "1"
storageclass:
  name: ceph-rbd
  pool: rbd
  user_id: k8s
EOF

#CREATE THE CEPH CLUSTER NAMESPACE
kubectl create namespace ceph

#CONFIGURE RBAC PERMISSIONS
kubectl create clusterrolebinding test --clusterrole=cluster-admin --user=[email protected]
kubectl create -f ~/ceph-helm/ceph/rbac.yaml

#LABEL KUBELETS
kubectl label node gke-standard-cluster-2-default-pool-8b55990f-bj85 ceph-mon=enabled ceph-mgr=enabled
kubectl label node gke-standard-cluster-2-default-pool-8b55990f-bj85 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-osd-device-dev-sdc=enabled
kubectl label node gke-standard-cluster-2-default-pool-8b55990f-s264 ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-osd-device-dev-sdc=enabled

#CEPH DEPLOYMENT
helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml 

P.S.
I had read these issues, but it looks like they aren't my case:
#55
#51
#48
#45

And if somebody knows another (tested and working) way to deploy Ceph on K8s, please tell me.

@neuhalje

I had the same issue with the OSD nodes. After deleting all OSD pods (kubectl delete pod -n ceph ceph-osd-dev-...), the cluster came up nicely.
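
Roughly something like this (pod names will differ on each cluster):

# delete every OSD pod so the DaemonSet recreates them now that the keyring secrets exist
kubectl -n ceph get pods --no-headers | grep ceph-osd-dev | awk '{print $1}' | xargs kubectl -n ceph delete pod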


nlds90 commented Jan 23, 2019

For GKE, you should SSH into the OSD VM instances and unmount the local SSD disks.
Make sure you're using the Ubuntu image type, and turn off the liveness/readiness probes.
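
For example, something like this (node name and mount points are taken from the output earlier in this issue):

gcloud compute ssh gke-standard-cluster-2-default-pool-8b55990f-s264
# inside the node: unmount the local SSDs so the OSD pods can use the raw devices
sudo umount /mnt/disks/ssd0
sudo umount /mnt/disks/ssd1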


vladbrk commented Jan 23, 2019

@neuhalje
After deleting ceph-osd-dev-... I got rid of the "FailedMount ... MountVolume.SetUp failed for volume ..." errors, but the ceph-osd-dev pods are still in Init:CrashLoopBackOff.

kubectl describe -n ceph pod ceph-osd-dev-sdb-mdplb

Events:
  Type     Reason                 Age              From                                                        Message
  ----     ------                 ----             ----                                                        -------
  Normal   Created                1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Created container
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "run-udev"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "pod-var-lib-ceph"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "ceph-etc"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "ceph-bootstrap-osd-keyring"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "ceph-bootstrap-rgw-keyring"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "default-token-9mmv7"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "pod-run"
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "ceph-mon-keyring"
  Normal   SuccessfulMountVolume  1m (x3 over 1m)  kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  (combined from similar events): MountVolume.SetUp succeeded for volume "ceph-bin"
  Normal   Pulled                 1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Container image "docker.io/kolla/ubuntu-source-kubernetes-entrypoint:4.0.0" already present on machine
  Normal   SuccessfulMountVolume  1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  MountVolume.SetUp succeeded for volume "devices"
  Normal   Started                1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Started container
  Normal   Pulled                 1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
  Normal   Created                1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Created container
  Normal   Started                1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Started container
  Normal   Pulled                 1m (x2 over 1m)  kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Container image "docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04" already present on machine
  Normal   Created                1m (x2 over 1m)  kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Created container
  Normal   Started                1m (x2 over 1m)  kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Started container
  Warning  BackOff                1m               kubelet, gke-standard-cluster-1-default-pool-732c77d1-889q  Back-off restarting failed container
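
The init container logs could probably be inspected with something like this (pod name from above; the init container names depend on the chart):

# list the init containers of the crashing OSD pod, then pull the logs of one of them
kubectl -n ceph get pod ceph-osd-dev-sdb-mdplb -o jsonpath='{.spec.initContainers[*].name}'
kubectl -n ceph logs ceph-osd-dev-sdb-mdplb -c <one-of-the-init-containers>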

But I still decided to go further and continued building the cluster using the script below.

# get the client key from inside the mon pod and note its base64-encoded value
kubectl -n ceph exec -ti ceph-mon-9nz8x -c ceph-mon -- bash
# QVFDbkR6NWN0MjkyTlJBQXgyWmlNRGF6SzF5OW9idVhNRXlNNFE9PQo=
exit

kubectl -n ceph edit secrets/pvc-ceph-client-key
# manually add key 
#apiVersion: v1
#data:
#  key: QVFDbkR6NWN0MjkyTlJBQXgyWmlNRGF6SzF5OW9idVhNRXlNNFE9PQo=

kubectl -n ceph get secrets/pvc-ceph-client-key -o json | jq '.metadata.namespace = "default"' | kubectl create -f -
kubectl get secrets

kubectl -n ceph exec -ti ceph-mon-9nz8x -c ceph-mon -- ceph osd pool create rbd 128
kubectl -n ceph exec -ti ceph-mon-9nz8x -c ceph-mon -- rbd pool init rbd

cat <<EOF> pvc-rbd.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
   - ReadWriteOnce
  resources:
    requests:
       storage: 10Gi
  storageClassName: ceph-rbd
EOF
kubectl create -f pvc-rbd.yaml

And it does not work, because ceph-pvc stays in Pending:

kubectl get pvc

NAME       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc   Pending                                       ceph-rbd       7d

kubectl describe pvc ceph-pvc

 Events:
   Type    Reason                Age                   From                         Message
   ----    ------                ----                  ----                         -------
   Normal  ExternalProvisioning  4m (x237324 over 6d)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ceph.com/rbd" or manually created by system administrator
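
Since the PVC is waiting for the external provisioner "ceph.com/rbd", the provisioner logs can probably be checked with something like this (pod name from the first listing; it will differ on a fresh install):

# find the rbd-provisioner pods and inspect their logs for provisioning errors
kubectl -n ceph get pods | grep rbd-provisioner
kubectl -n ceph logs ceph-rbd-provisioner-5544dcbcf5-gb9ws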

@gopher-ninja

How did you add the user and secrets to the Ceph cluster?

I followed the same procedure, but "make" showed no secrets found in helm-toolkit. Hence, after the helm install steps the ceph-mon pod goes into CrashLoopBackOff, and its log states that it could not find the various required keyrings.


miahwk commented Feb 22, 2019

@jlhanrey I got the same issue, and I have already done everything, but it's still not working.
