
Cannot run hostpath integration example #37

Closed · sean-dooher opened this issue Aug 3, 2018 · 9 comments

@sean-dooher commented Aug 3, 2018

Link to page: https://kubernetes-csi.github.io/docs/Example.html

I've been trying to develop my own CSI driver, but unfortunately I can't even get the example in the docs running with minikube. I have tried it on both macOS and Linux hosts, with the kubeadm bootstrapper and the kvm2, virtualbox, and hyperkit drivers for minikube.
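
For reference, the steps are roughly the following (the YAML filenames are placeholders for the manifests from the example page, not the exact names):

#!/bin/bash
# Start minikube with the feature gates the example calls for (Kubernetes 1.10).
minikube start --kubernetes-version v1.10.0 \
  --feature-gates=KubeletPluginsWatcher=true,BlockVolume=true,CSIBlockVolume=true

# Deploy the RBAC objects and the hostpath driver pod from the docs
# (rbac.yaml / csi-pod.yaml are placeholder names).
kubectl create -f rbac.yaml
kubectl create -f csi-pod.yaml

# Inspect the pod once it has been scheduled.
kubectl describe pod csi-pod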

I am consistently getting the following error:

Name:         csi-pod
Namespace:    default
Node:         minikube/192.168.64.3
Start Time:   Fri, 03 Aug 2018 11:13:58 -0700
Labels:       app=hostpath-driver
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
Containers:
  external-provisioner:
    Container ID:  docker://1e8f87f4d3a4eb3c603afc6304c6040d746e7fe1b698f1339dce3899dbd467b5
    Image:         quay.io/k8scsi/csi-provisioner:v0.2.1
    Image ID:      docker-pullable://quay.io/k8scsi/csi-provisioner@sha256:fd4ed32315e846b6654f97c95b373da001fd9638cd5935c20a5bf9f5889e8602
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --provisioner=csi-hostpath
      --csi-address=/csi/csi.sock
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:02 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  driver-registrar:
    Container ID:  docker://27a57b906712f5b2be9d700d54fc614be0677ace085bb12da7bc3d2f55909dfc
    Image:         quay.io/k8scsi/driver-registrar:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/driver-registrar@sha256:9a84ec490b5ff5390b12be21acf707273781cd0911cc597712a254bc1862f220
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:04 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  external-attacher:
    Container ID:  docker://88c9dbd33133c01c15b1b59dd32c7ea8104bc0862eedb30dcc43224e9804327c
    Image:         quay.io/k8scsi/csi-attacher:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/csi-attacher@sha256:5cbb7934bd86d400c221379cff8b24ed4c06e121ea59608cfd7e67690100ba54
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:07 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  hostpath-driver:
    Container ID:  docker://14923bdcda87b52deae5fccf045788a17b45cc1f2bb1e95e39ead96ca6e5d26f
    Image:         quay.io/k8scsi/hostpathplugin:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/hostpathplugin@sha256:6c640a9b6a87e9f7261ff73be2e000367aa21f8f0c6ebfda97d4eefa5523ab53
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=$(CSI_ENDPOINT)
      --nodeid=$(KUBE_NODE_NAME)
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: open /var/run/docker/runtime-runc/moby/14923bdcda87b52deae5fccf045788a17b45cc1f2bb1e95e39ead96ca6e5d26f/state.json: no such file or directory: unknown
      Exit Code:    128
      Started:      Fri, 03 Aug 2018 11:14:08 -0700
      Finished:     Fri, 03 Aug 2018 11:14:08 -0700
    Ready:          False
    Restart Count:  0
    Environment:
      CSI_ENDPOINT:    unix:///csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from mountpoint-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-hostpath
    HostPathType:  DirectoryOrCreate
  mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  DirectoryOrCreate
  csi-service-account-token-qgwmr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  csi-service-account-token-qgwmr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age   From               Message
  ----     ------                 ----  ----               -------
  Normal   Scheduled              13s   default-scheduler  Successfully assigned csi-pod to minikube
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "mountpoint-dir"
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "socket-dir"
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "csi-service-account-token-qgwmr"
  Normal   Pulling                12s   kubelet, minikube  pulling image "quay.io/k8scsi/csi-provisioner:v0.2.1"
  Normal   Created                9s    kubelet, minikube  Created container
  Normal   Pulled                 9s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-provisioner:v0.2.1"
  Normal   Started                9s    kubelet, minikube  Started container
  Normal   Pulling                9s    kubelet, minikube  pulling image "quay.io/k8scsi/driver-registrar:v0.2.0"
  Normal   Pulling                7s    kubelet, minikube  pulling image "quay.io/k8scsi/csi-attacher:v0.2.0"
  Normal   Pulled                 7s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/driver-registrar:v0.2.0"
  Normal   Started                7s    kubelet, minikube  Started container
  Normal   Created                7s    kubelet, minikube  Created container
  Normal   Pulled                 5s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-attacher:v0.2.0"
  Normal   Created                5s    kubelet, minikube  Created container
  Normal   Started                4s    kubelet, minikube  Started container
  Normal   Pulling                4s    kubelet, minikube  pulling image "quay.io/k8scsi/hostpathplugin:v0.2.0"
  Normal   Pulled                 3s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v0.2.0"
  Normal   Created                3s    kubelet, minikube  Created container
  Warning  Failed                 3s    kubelet, minikube  Error: failed to start container "hostpath-driver": Error response from daemon: OCI runtime create failed: open /var/run/docker/runtime-runc/moby/hostpath-driver/state.json: no such file or directory: unknown

I have tried creating the folder manually before deploying, to no avail. I'm not sure whether there's a problem with my config or with the driver itself. Either way, the example is not working and should probably be updated. I'll continue working on fixing it (any help would be appreciated), and if I find a solution I'll submit a PR.

@sean-dooher (Author)

Upon further inspection, it seems that once the hostpath-driver container starts, the Docker daemon cleans up the /var/run directory: /var/run/docker.sock disappears along with a variety of other files in /var/run/docker, although systemctl still shows the daemon as active and running. The log file for the hostpath-driver container is empty as well. I have tried this with a few other CSI drivers, such as the mock driver, and the same behavior occurs.
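
A quick way to watch this happen (a sketch using the paths from the error above):

# Check the Docker socket and runc state inside the minikube VM
# while the hostpath-driver container is starting.
minikube ssh "ls -l /var/run/docker.sock"
minikube ssh "ls /var/run/docker/runtime-runc/moby"

# The daemon still reports as active even after the files vanish.
minikube ssh "sudo systemctl status docker --no-pager"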

@sean-dooher (Author)

Further information: this seems to only happen when I run it using minikube. I ran it on a normal cluster and it works fine, so there must be some issue with how I am setting up my minikube cluster.

@sean-dooher (Author)

I ended up fixing the problem with the following script:

#!/bin/bash
minikube start --vm-driver=hyperkit --kubernetes-version v1.10.0 --feature-gates=KubeletPluginsWatcher=true,BlockVolume=true,CSIBlockVolume=true --bootstrapper=localkube

# replace relative symlinks under /var with absolute ones
VAR_CMD="sudo find /var -type l -execdir bash -c 'ln -sfn \"\$(readlink -f \"\$0\")\" \"\$0\"' {} \\;"
minikube ssh "$VAR_CMD"

# replace relative symlinks under /etc with absolute ones
ETC_CMD="sudo find /etc -type l -execdir bash -c 'ln -sfn \"\$(readlink -f \"\$0\")\" \"\$0\"' {} \\;"
minikube ssh "$ETC_CMD"

The symlinks from /run to /var/run were misbehaving when launching pods with bind mounts because they were relative symlinks. Changing them to absolute symlinks with the last two commands fixed that. Kubeadm gave me an 80/20 split of not working/working, so I switched to localkube and everything seems to be working now!
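
To illustrate what the two find/ln commands do, here is the rewrite on a single toy link (not the actual minikube layout):

# Create a relative symlink, then rewrite it as an absolute one.
mkdir -p /tmp/demo/run && cd /tmp/demo
ln -s run var-run                          # relative: var-run -> run
readlink var-run                           # prints: run
ln -sfn "$(readlink -f var-run)" var-run   # same trick as the script above
readlink var-run                           # prints: /tmp/demo/run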

Hope this helps someone and saves them from going down this particular rabbit hole.

@oleksiys (Contributor) commented Nov 29, 2018

I think I faced the same issue and was able to create an easy reproducer for it. I filed a minikube issue
kubernetes/minikube#3378

@pohly (Collaborator) commented Nov 30, 2018

@oleksiys @sean-dooher: thanks for your analysis on this one. Do you agree that we can close this issue now, as it seems to be caused by minikube?

We could of course document the issue, but I don't think we should - that would become unmanageable quickly.

@oleksiys (Contributor)

I agree, I think we can close this issue and add a reference to the minikube issue in the "Known issues" section.

@pohly (Collaborator) commented Dec 3, 2018 via email

@chandresh-pancholi

The above command doesn't work with Kubernetes 1.12.4.

This PR shows that localkube has been removed from recent minikube releases:
kubernetes/minikube#2911

As per the CSI documentation, the steps are written for Kubernetes 1.12.
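
For newer minikube releases (where localkube is gone and kubeadm is the only bootstrapper), the equivalent start command would presumably just drop the --bootstrapper flag; an untested sketch:

minikube start --vm-driver=hyperkit --kubernetes-version v1.12.4 \
  --feature-gates=KubeletPluginsWatcher=true,BlockVolume=true,CSIBlockVolume=true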

@zhangxiaoyu-zidif

minikube 1.1.1 & Kubernetes v1.14.3 have the same issue.
