This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

NFS client ,test-pod.yaml not running. #964

Closed
cuisongliu opened this issue Sep 1, 2018 · 4 comments

Comments

@cuisongliu

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  namespace: kube-nfs
provisioner: nfs.jerry.com/kubernetes # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-nfs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
  namespace: kube-nfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
      namespace: kube-nfs
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: harbor.jerry.com/kubernetes/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.jerry.com/kubernetes
            - name: NFS_SERVER
              value: 172.16.3.254
            - name: NFS_PATH
              value: /data/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.3.254
            path: /data/kubernetes
---

apiVersion: v1
kind: Namespace
metadata:
  name: kube-nfs

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: kube-nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: harbor.jerry.com/library/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

172.16.3.254 is my NFS server, but the PVC (test-claim) stays Pending, and the pod reports "pod has unbound PersistentVolumeClaims (repeated 6 times)".
Do I need to create the PV myself? And does `mountPath: /persistentvolumes` mean I need to create a folder named '/persistentvolumes' on my NFS server and export it over NFS?
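A Pending PVC under a dynamic provisioner usually means the provisioner never acted on the claim; no manual PV is needed, and `/persistentvolumes` is only the mount path inside the provisioner container, so the directory that must exist on the NFS server is the export itself (`/data/kubernetes` here). One way to see why the claim is stuck, as a sketch assuming kubectl access to this cluster:

```shell
# Check the claim's events for provisioning errors
kubectl -n kube-nfs describe pvc test-claim
# Confirm the provisioner pod is actually running
kubectl -n kube-nfs get pods -l app=nfs-client-provisioner
# Provisioner logs surface RBAC or NFS mount failures
kubectl -n kube-nfs logs deploy/nfs-client-provisioner
```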

@cuisongliu
Author

The NFS provisioner fails with this error:

error retrieving resource lock default/jerry.com-kubernetes: endpoints "jerry.com-kubernetes" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot get endpoints in the namespace "default"
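The error shows the provisioner taking its leader-election lock in `default` under a ServiceAccount that lives there. When everything is deployed to `kube-nfs`, the leader-locking Role and RoleBinding need the namespace set explicitly. A sketch, reusing the names from the config above:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-nfs   # must match where the provisioner Deployment runs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```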

@ramene

ramene commented Sep 3, 2018

I was stuck on the same issue, even after making the RBAC changes noted here: #953 and #924

@cuisongliu
Author

@ramene oh, thank you very much. I found this error and fixed it; the mistake was due to my carelessness. Looking at my RBAC config, the RBAC resources and the StorageClass/provisioner were not in the same namespace. The Deployment log in the dashboard showed it.

@ramene

ramene commented Sep 3, 2018

Thanks @cuisongliu. I've found that using the helm chart does work, something I'd missed earlier, as noted here: #953

NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE       IP                NODE
default       pod/efs-provisioner-5fb5866dcc-rk9fw   1/1       Running   0          3m        192.168.122.52    ip-192-168-118-34.us-west-2.compute.internal
kube-system   pod/aws-node-cj4hr                     1/1       Running   1          4h        192.168.118.34    ip-192-168-118-34.us-west-2.compute.internal
kube-system   pod/aws-node-f8gpf                     1/1       Running   1          4h        192.168.165.92    ip-192-168-165-92.us-west-2.compute.internal
kube-system   pod/aws-node-jw6z2                     1/1       Running   1          4h        192.168.225.164   ip-192-168-225-164.us-west-2.compute.internal
kube-system   pod/kube-dns-7cc87d595-t8c85           3/3       Running   0          4h        192.168.194.94    ip-192-168-225-164.us-west-2.compute.internal
kube-system   pod/kube-proxy-dv64j                   1/1       Running   0          4h        192.168.165.92    ip-192-168-165-92.us-west-2.compute.internal
kube-system   pod/kube-proxy-jnx6d                   1/1       Running   0          4h        192.168.118.34    ip-192-168-118-34.us-west-2.compute.internal
kube-system   pod/kube-proxy-vnsfk                   1/1       Running   0          4h        192.168.225.164   ip-192-168-225-164.us-west-2.compute.internal
kube-system   pod/tiller-deploy-895d57dd9-lmp7b      1/1       Running   0          3h        192.168.168.154   ip-192-168-165-92.us-west-2.compute.internal

NAMESPACE     NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       service/kubernetes      ClusterIP   10.100.0.1     <none>        443/TCP         4h        <none>
kube-system   service/kube-dns        ClusterIP   10.100.0.10    <none>        53/UDP,53/TCP   4h        k8s-app=kube-dns
kube-system   service/tiller-deploy   ClusterIP   10.100.16.59   <none>        44134/TCP       3h        app=helm,name=tiller

NAMESPACE   NAME                                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/my-efs-vol-1   Bound     pvc-7df61fb1-af48-11e8-af17-020501b6d6ac   1Mi        RWX            efs            8s

You have to create the PVC separately... after running the helm chart

$ helm upgrade --install efs-provisioner ./
Release "efs-provisioner" does not exist. Installing it now.
NAME:   efs-provisioner
LAST DEPLOYED: Mon Sep  3 03:07:45 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta2/Deployment
NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
efs-provisioner  1        1        1           0          1s

==> v1/Pod(related)
NAME                              READY  STATUS    RESTARTS  AGE
efs-provisioner-5fb5866dcc-rk9fw  0/1    Init:0/1  0         1s

==> v1beta1/StorageClass
NAME  PROVISIONER          AGE
efs   example.com/aws-efs  1s

==> v1/ServiceAccount
NAME             SECRETS  AGE
efs-provisioner  1        1s

==> v1/ClusterRole
NAME             AGE
efs-provisioner  1s

==> v1/ClusterRoleBinding
NAME             AGE
efs-provisioner  1s

NOTES:
You can provision an EFS-backed persistent volume with a persistent volume claim like below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-efs-vol-1
  annotations:
    volume.beta.kubernetes.io/storage-class: efs
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
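Applying that claim and checking that it binds might look like the following (the filename is illustrative, and this assumes the `efs` StorageClass from the release above):

```shell
# Save the PVC manifest above as my-efs-vol-1.yaml, then:
kubectl apply -f my-efs-vol-1.yaml
# STATUS should go to Bound once the provisioner creates the PV
kubectl get pvc my-efs-vol-1
```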
