
[Feature] Local Storage Provider #67

Open
geku opened this issue May 26, 2019 · 15 comments

@geku

geku commented May 26, 2019

It would be great to have a default storage provider similar to what Minikube provides. This would allow deploying and developing Kubernetes pods that require storage.

Scope of your request

An additional addon to deploy to single-node clusters.

Describe the solution you'd like

I got it working by using Minikube's storage provisioner and creating the following resources:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  serviceAccountName: storage-provisioner
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  hostNetwork: true
  containers:
  - name: storage-provisioner
    image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    command: ["/storage-provisioner"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /tmp
      name: tmp
  volumes:
  - name: tmp
    hostPath:
      path: /tmp
      type: Directory
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath

Describe alternatives you've considered

An alternative might be local persistent volumes, but the Minikube solution looks simpler. Local persistent volumes, on the other hand, could work even with multiple nodes.

I'm not sure if the addon should be integrated into k3s directly. On the other hand, I think it's more a feature required for local development and therefore probably fits better into k3d.

@geku geku added the enhancement label May 26, 2019
@iwilltry42 iwilltry42 changed the title from Local Storage Provider to [Feature] Local Storage Provider May 27, 2019
@iwilltry42
Member

Hey, thanks for creating this issue and providing your solution 👍
I confirmed that the provisioner you posted is working by saving it to prov.yaml and having it deployed at cluster creation time by mounting it like this: k3d create -v $(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml.
However, one should note that the hostPath /tmp is not persisted on disk when we shut down the cluster unless we declare it as a Docker volume/bind mount.
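
For example (just a sketch; it assumes multiple -v flags are accepted and that ./data is a directory you created on the host for this purpose):

# deploy the provisioner manifest at creation time and additionally bind-mount
# a host directory onto /tmp in the node, so provisioned data survives a shutdown
k3d create \
  -v $(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml \
  -v $(pwd)/data:/tmp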

@poikilotherm
Contributor

poikilotherm commented May 29, 2019

I am using https://github.com/rancher/local-path-provisioner for this in the IQSS/dataverse-kubernetes k3s/k3d demo.

Obviously, you need to patch your PVCs. I did that via Kustomization (kubectl apply -k).
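
Roughly like this (only a sketch; the file and claim names are made up, and the patch simply points the claim at the provisioner's local-path storage class):

# kustomization.yaml
resources:
  - pvc.yaml
patchesStrategicMerge:
  - pvc-patch.yaml

# pvc-patch.yaml -- merged over the existing claim by kubectl apply -k
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                 # example name of the claim being patched
spec:
  storageClassName: local-path  # class served by rancher/local-path-provisioner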

@iwilltry42
Member

iwilltry42 commented Jun 12, 2019

I also just tested Local Persistent Volumes, which are GA since Kubernetes v1.14.
Here's what I did:

  1. k3d create -n test --workers 2
  2. export KUBECONFIG="$(k3d get-kubeconfig --name='test')"
  3. kubectl apply -f storageclass.yaml, where storageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
  4. docker exec k3d-test-worker-0 mkdir /tmp/test-pv (the path we're accessing has to exist)
  5. kubectl apply -f deploy.yaml, where deploy.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-deploy
  labels:
    app: test-deploy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test-deploy
  template:
    metadata:
      name: test-deploy
      labels:
        app: test-deploy
    spec:
      containers:
        - name: main
          image: postgres
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /test
              name: test-mount
      volumes:
        - name: test-mount
          persistentVolumeClaim:
            claimName: test-mount
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-mount
spec:
  volumeName: example-pv
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/test-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-test-worker-0
  6. kubectl exec test-deploy-76cbfc4c94-v2q8s touch /test/test.txt
  7. docker exec k3d-test-worker-0 ls /tmp/test-pv

This was just a test to verify that it's working 👍

@poikilotherm
Contributor

Cool @iwilltry42! 👍

Let me emphasize an IMHO important aspect: the local storage provider in K8s 1.14 is promising, but (as you noted) it does not yet support dynamic provisioning.

This is my only reason to favor rancher/local-path-provisioner over K8s local storage.
Judging by kubernetes-sigs/sig-storage-local-static-provisioner#51, it's going to take a while...

@iwilltry42 iwilltry42 added this to the v2.0 milestone Jun 29, 2019
@iwilltry42
Member

k3s built-in local storage provider coming: https://twitter.com/ibuildthecloud/status/1167511203108638720

@iwilltry42 iwilltry42 self-assigned this Sep 4, 2019
@lukasmrtvy

k8s local dynamic provisioning issue:
kubernetes-sigs/sig-storage-local-static-provisioner#51

@iwilltry42
Member

k3s has the local-storage storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/
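
A minimal PVC against that built-in class could look like this (the claim name is arbitrary; per the linked docs the StorageClass is named local-path, and volumes end up under /var/lib/rancher/k3s/storage on the node):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi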

@lukasmrtvy

lukasmrtvy commented Nov 22, 2019

@iwilltry42
I noticed that the example (https://rancher.com/docs/k3s/latest/en/storage/#pvc-yaml) uses capacity, but according to the docs (https://github.com/rancher/local-path-provisioner#cons), that's not possible yet.
EDIT: sorry, it's a request, not a limit :)

@deiwin

deiwin commented Feb 16, 2020

k3s has the local-storage storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/

That seems to work well with k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage" for persistence. The only issue I'm seeing is that k3d delete doesn't allow the local path provisioner to clean up existing PVC folders, and they don't seem to be cleaned up when creating a new cluster that uses the same storage folder either.

@iwilltry42
Member

Hi @deiwin, k3d delete doesn't do any kind of "graceful" shutdown of the managed k3s instances; it simply removes the containers. What would be required to allow for a proper cleanup, and what would be the use case for this?

@deiwin

deiwin commented Feb 17, 2020

What would be required to allow for a proper cleanup

If it deleted all deployments/statefulsets/daemonsets using PVCs and then all PVCs, then I think the local-path-provisioner would do the cleanup. It would have to have some way of knowing that the cleanup is done, though.
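
Something along these lines is what I have in mind (only a sketch; it assumes everything in the cluster can be thrown away and it doesn't reliably solve the "knowing when cleanup is done" part):

# remove all workloads that might be using PVCs, then the PVCs themselves
kubectl delete deployments,statefulsets,daemonsets --all --all-namespaces
kubectl delete pvc --all --all-namespaces
# the local-path-provisioner should now remove the backing folders;
# poll until no PVs are left before deleting the cluster
kubectl get pv
k3d delete --name test   # "test" is a placeholder cluster name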

what would be the use case for this?

You wrote above that "the hostPath /tmp is not persisted on disk when we shut down the cluster unless we declare it as a Docker volume/bind mount." I haven't verified it, but I was thinking that doing k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage" as I mentioned above would help with that persistence.

I'm working on a cluster setup that supports two different use cases: 1) start and stop the same long-running cluster with persistent storage (for a DB) and 2) recreate whole clusters from scratch fairly often.

As I said, I haven't verified this, but I think case (1) requires persistence, while case (2) currently leaves behind data from PVCs that no longer exist in new clusters.

As a workaround, I'm currently pairing k3d delete with rm -rf <local-path-for-storage>. That's fairly simple, but I don't know if other users would think to do that.
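
In script form that's just (cluster name and storage path are placeholders for whatever was used at creation time):

k3d delete --name dev          # remove the cluster containers
rm -rf "$HOME/k3d-storage"     # wipe the directory mounted at /var/lib/rancher/k3s/storage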

@iwilltry42 iwilltry42 added the help wanted label Feb 18, 2020
@iwilltry42 iwilltry42 removed this from the v2.0 milestone Apr 21, 2020
@iwilltry42
Member

Hey 👋 Is there any more need or input for this feature?

@iwilltry42 iwilltry42 added this to the Backlog milestone Feb 5, 2021
@vincentgerris

What I am missing, at least in the documentation, is where the data is stored locally.
Does Docker map /var/lib/rancher/k3s/storage to /tmp?
Here it is documented to point to /opt:
https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage
A clarification / documentation of this would be greatly appreciated, as would the option to add a PV pointing to a local folder for storage in an existing k3d cluster.

@vincentgerris

I also think cleanup is not needed from k3d when it comes to a mapped local file: a PV could be mapped to multiple clusters, and otherwise the cleanup logic would have to check for all of that.
The PVC could be cleaned up, but doesn't that just get removed when the whole cluster is removed?
I noticed one can add a volume at creation, but not map a file system path after the cluster is created (unless I missed this and should run a docker command).
If that can be considered a feature of k3d, then there could be a delete option too, which would do what @deiwin suggests in a more or less complete way. Just my thoughts :)

@iwilltry42
Member

iwilltry42 commented Oct 12, 2021

Hi @vincentgerris , thanks for your input!
The local-path-provisioner is a "feature" (as in an auto-deployed service) of K3s, so it's not directly related to k3d; k3d also does not manage anything about it.
You can find the configuration of the provisioner here: https://github.com/k3s-io/k3s/blob/master/manifests/local-storage.yaml and you can edit it e.g. as mentioned here: #787 (comment).
In k3d, the path generated by K3s is /var/lib/rancher/k3s/storage, so you can use k3d cluster create -v /my/local/path:/var/lib/rancher/k3s/storage to map it to some directory on your host.

I noticed one can add a volume but not a mapping to a file system after the cluster is created (unless I missed this and should run a docker command).

Docker does not offer any way of adding mounts to existing containers, so there's no possibility for us to achieve this in k3d. You'd have to set up the volume mounts upfront when creating the cluster.

Update 1: I created an issue to put up some documentation on K3s features and how to use/modify them in k3d: #795
