Minikube v0.21.0 hostPath PVCs do not bind (Pending forever). #1783

Closed
bgehman opened this issue Aug 4, 2017 · 21 comments
Labels
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/documentation: Categorizes issue or PR as related to documentation.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bgehman

bgehman commented Aug 4, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Minikube version (use minikube version):
minikube version: v0.21.0

Environment:

  • OS (e.g. from /etc/os-release):
    Red Hat Enterprise Linux Workstation release 7.3 (Maipo)

  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName):
    kvm

  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
    minikube-v0.23.0.iso

  • Install tools:

  • Others:

What happened:
HostPath PVCs will not bind to PVs -- PVCs stuck in Pending forever.

What you expected to happen:
HostPath PVCs successfully bind, as they did in Minikube v0.20.0.

How to reproduce it (as minimally and precisely as possible):
Create test.yml:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-pv0
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: test-pv0

Execute: kubectl apply -f test.yml
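After applying, a quick way to see the symptom (illustrative commands, not part of the original report):

kubectl get pvc test-claim0        # STATUS stays Pending instead of Bound
kubectl describe pvc test-claim0   # events show why the claim does not bind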

Anything else we need to know:
This worked fine in v0.20.0 of minikube -- it seems to be a regression in the newest version. Any ideas or work-arounds would be greatly appreciated!

@spuranam

spuranam commented Aug 4, 2017

@bgehman I ran into the same issue; deleting the default storageclass and recreating it fixed this.

minikube addons disable default-storageclass

# storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: k8s.io/minikube-hostpath

kubectl apply -f storageclass.yaml
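To confirm the class was recreated as the cluster default (an optional check, not from the original comment):

kubectl get storageclass
# the "standard" class should be listed again, annotated as the default class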

@jacobfederer

Could you please run kubectl describe pv?

I ran into a similar issue and resolved it by explicitly defining the storage class in my pv definition.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard        # <-- explicitly set the storage class
  hostPath:
      path: /data/postgresql/

@bgehman
Author

bgehman commented Aug 7, 2017

@jacobfederer Defining the storageClassName in the PV definition didn't seem to have any effect.

Minikube v0.20.0 (working)

$ kubectl describe pv test-pv0
Name:		test-pv0
Labels:		<none>
StorageClass:	
Status:		Bound
Claim:		default/test-claim0
Reclaim Policy:	Retain
Access Modes:	RWO
Capacity:	1Gi
Message:	
Source:
    Type:	HostPath (bare host directory volume)
    Path:	/data/test
No events.


$ kubectl describe pvc test-claim0 
Name:		test-claim0
Namespace:	default
StorageClass:	
Status:		Bound
Volume:		test-pv0
Labels:		<none>
Capacity:	1Gi
Access Modes:	RWO
No events.

Minikube v0.21.0 (broken -- with and without adding storageClassName: standard to the PV)

$ kubectl describe pv test-pv0    
Name:		test-pv0
Labels:		<none>
StorageClass:	
Status:		Available
Claim:		
Reclaim Policy:	Retain
Access Modes:	RWO
Capacity:	1Gi
Message:	
Source:
    Type:	HostPath (bare host directory volume)
    Path:	/data/test
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----				-------------	--------	------		-------
  1m		13s		8	{persistentvolume-controller }			Warning		VolumeMismatch	Volume's size is smaller than requested or volume's class does not match with claim


$ kubectl describe pvc test-claim0
Name:		test-claim0
Namespace:	default
StorageClass:	
Status:		Pending
Volume:		test-pv0
Labels:		<none>
Capacity:	0
Access Modes:	
No events.

@bgehman
Author

bgehman commented Aug 7, 2017

I figured it out. My kubectl client was back-level at version 1.5.2 while the new minikube Kubernetes server version is 1.7.0. Upgrading my client to 1.7.0 made the problem go away. Trying to recreate the StorageClass object suggested by @spuranam was failing, and that led me to the client version mismatch.

Also, I had to add storageClassName: standard to the PV definition as well (as @jacobfederer mentioned); see the sketch below.

I guess devs can close this, or suggest an upstream warning when the client/server versions mismatch if compatibility can't be guaranteed. Thanks to all who helped.
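For reference, a sketch of the amended test.yml. The comment above only mentions adding the class to the PV; setting it explicitly on the PVC too is an extra assumption here, just to keep both objects aligned with minikube's default "standard" class:

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-pv0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: test-pv0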

@r2d4
Contributor

r2d4 commented Aug 9, 2017

I'm going to leave this open and tag it as something we should add some documentation for.

@r2d4 r2d4 added kind/documentation Categorizes issue or PR as related to documentation. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Aug 9, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2018
@elgalu

elgalu commented Feb 13, 2018

/remove-lifecycle stale

@elgalu

elgalu commented Feb 13, 2018

I'm facing this issue now.

Normal ExternalProvisioning 4s (x9 over 2m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator

minikube version: v0.25.0

kubectl version
#=> Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
#=> Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2018-01-26T19:04:38Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

Docker version 18.02.0-ce, build fc4de44

@elgalu

elgalu commented Feb 13, 2018

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 13, 2018
@bgehman
Author

bgehman commented Feb 13, 2018

@elgalu a couple of things to check:

  • Ensure your kubectl client and Kubernetes server versions match: kubectl version --short=true; if not, change your client to match your server.
  • Check that the storageClass of the PVC and PV match: kubectl get pv and kubectl get pvc -n <namespace>

Failures to bind hostPath PVs typically fall into one or both of those cases; see the example checks below.
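A minimal sketch of those two checks, reusing the object names from earlier in this thread:

kubectl version --short=true
#=> Client Version: v1.7.0   <-- should match the server line below
#=> Server Version: v1.7.0

kubectl get pv test-pv0 -o jsonpath='{.spec.storageClassName}'
kubectl get pvc test-claim0 -n default -o jsonpath='{.spec.storageClassName}'
# both commands should print the same class name, e.g. "standard"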

@elgalu

elgalu commented Feb 13, 2018

kubectl version --short=true
#=> Client Version: v1.9.3
#=> Server Version: v1.9.0

@jberkus

jberkus commented Feb 15, 2018

I'm hitting this problem too, and my versions match exactly:

  Warning  FailedScheduling       4m (x2 over 4m)  default-scheduler  PersistentVolumeClaim is not bound: "pgdata-acid-minimal-cluster-0"
Client Version: v1.8.0
Server Version: v1.8.0

... this seems to be a general issue with PVCs since 0.21?

@jberkus

jberkus commented Feb 15, 2018

... and, after nuking the minikube VM and starting everything over from scratch, it now runs without the PVC issue. No changes to manifests. Clearly there's some other triggering condition here that I don't know about.

@elgalu

elgalu commented Feb 16, 2018

Same here, I moved away from the none driver to kvm2 and everything works!

minikube start --vm-driver=kvm2 --memory=8192 --disk-size=40g

@craynic

craynic commented Mar 23, 2018

I also hit this issue.
A few days earlier something happened to the docker daemon and I restarted it. After that the storage-provisioner pod got status Evicted. I deleted the pod and created a new one from /etc/kubernetes/addons/storage-provisioner.yaml, and everything worked again.
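A rough sketch of that recovery, assuming the pod runs in kube-system and using the addon manifest path inside the minikube VM mentioned above:

kubectl -n kube-system delete pod storage-provisioner
# re-create it from the addon manifest shipped inside the VM
minikube ssh "cat /etc/kubernetes/addons/storage-provisioner.yaml" | kubectl apply -f -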

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 21, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 21, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@karolinepauls

Client Version: v1.15.0
Server Version: v1.15.0

VM: none

E0627 22:06:09.754834   22524 logs.go:155] Failed to list containers for "storage-provisioner": running command: docker ps -a --filter="name=storage-provisioner" --format="{{.ID}}"
output: WARNING: Error loading config file: /home/user/.docker/config.json: open /home/user/.docker/config.json: permission denied
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json?all=1&filters=%7B%22name%22%3A%7B%22storage-provisioner%22%3Atrue%7D%7D: dial unix /var/run/docker.sock: connect: permission denied
: running command: docker ps -a --filter="name=storage-provisioner" --format="{{.ID}}"
.: exit status 1

@karolinepauls

Chowning ~/.docker/config.json helped.
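Presumably something along these lines, assuming the file had ended up owned by root:

# hand ownership of the Docker client config back to the current user
sudo chown "$USER" ~/.docker/config.json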
