How to restore data through snapshot of localpv LVM #175

Closed
lixiaopengy opened this issue Jan 26, 2022 · 11 comments
Comments

@lixiaopengy

I only see examples related to creating snapshots in the documentation. How can I restore data from the generated snapshots?

@pawanpraka1
Contributor

Restore is not supported yet. We can create a clone volume out of the snapshot. See #13.

@murkylife

Can you please provide more info on how I can create a clone volume out of a snapshot?

@pawanpraka1
Contributor

@murkylife clone is not supported yet. It is on our roadmap. Could you add your use case to issue #13? It will help us prioritize this feature.

@murkylife

@pawanpraka1, done, thanks. If there's no support for any kind of restore, what's the point of the snapshot support? Can I access it somehow?

@pawanpraka1
Contributor

@murkylife there are manual steps required to create the clone volume, and a few CRs have to be created manually to make it available to a new pod.

@nkwangleiGIT
Contributor

What's the status of snapshot restore support? Is anyone working on this?

@phhutter
Contributor

I'm also quite interested in this feature.
Do you have any ETA for the 3.3 release?

@PraveenJP

PraveenJP commented Nov 2, 2022

Any update on this? Please share the manual steps to restore, @pawanpraka1.

@oscar-martin

oscar-martin commented Sep 28, 2023

This is how I restored a snapshot.

Assumptions:

  • lvm-localpv is deployed
  • local-path is the StorageClass
  • lvmpv-snapclass is the VolumeSnapshotClass
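
If the lvmpv-snapclass VolumeSnapshotClass does not exist yet, a minimal one could look like this (the driver name is the same CSI driver referenced in the PV manifest further below; the deletionPolicy is an assumption):

kubectl apply -f -<<<'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: lvmpv-snapclass
driver: local.csi.openebs.io
deletionPolicy: Delete
'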

Creation of a VolumeSnapshot for a PVC using local-path

Preparation

This creates a PVC and a Pod. Once the Pod is running, exec into the container and create some content inside the /data folder (an example command follows the manifest below).

kubectl apply -f - <<<'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    test: backup
  name: test-local-path
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi
  storageClassName: local-path
  volumeMode: Filesystem
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  namespace: default
spec:
  containers:
  - image: busybox:latest
    command:
      - tail
      - "-f"
      - "/dev/null"
    imagePullPolicy: IfNotPresent
    name: container
    volumeMounts:
      - mountPath: /data
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-local-path
'
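
For example, a test file can be written and verified like this (Pod name as above; the file name is just an example):

kubectl exec -n default test-local-path -- sh -c 'echo "hello before snapshot" > /data/hello.txt'
kubectl exec -n default test-local-path -- cat /data/hello.txt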

Creation of a VolumeSnapshot

This creates a VolumeSnapshot for the PVC created above.

kubectl apply -f -<<<'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-shared-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: lvmpv-snapclass
  source:
    persistentVolumeClaimName: test-local-path
'
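
To check that the snapshot is ready to use (names as above):

kubectl get volumesnapshot test-shared-snapshot -n default -o jsonpath='{.status.readyToUse}'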

VolumeSnapshot status:

status:
  boundVolumeSnapshotContentName: snapcontent-7ba359d9-2f4f-463b-a533-5fc34784def5
  creationTime: "2023-09-28T07:44:50Z"
  readyToUse: true
  restoreSize: "0"

Restore process

Creation of an LVMVolume

  • Set .spec.capacity to the value of the original PV.spec.capacity, but expressed in bytes (e.g. 5Mi → 5242880).
  • Set .metadata.name to the VolumeSnapshot.metadata.uid.
  • Set .spec.ownerNodeID to PV.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0].
  • Set .spec.volGroup and .spec.vgPattern to SC.parameters.volgroup and SC.parameters.vgpattern, where SC is the StorageClass of the PV (PV.spec.storageClassName).
  • If vgpattern is empty in the StorageClass, derive .spec.vgPattern as ^{volgroup}$ (the volgroup anchored as an exact regex match).
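
A sketch of how these values can be gathered with kubectl, assuming the example resource names used here (adjust to your cluster):

# Hypothetical helper variables for the manifest below
VS_UID=$(kubectl get volumesnapshot test-shared-snapshot -n default -o jsonpath='{.metadata.uid}')
PV_NAME=$(kubectl get pvc test-local-path -n default -o jsonpath='{.spec.volumeName}')
NODE=$(kubectl get pv "$PV_NAME" -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}')
VG=$(kubectl get sc local-path -o jsonpath='{.parameters.volgroup}')
VG_PATTERN=$(kubectl get sc local-path -o jsonpath='{.parameters.vgpattern}')
# Fall back to ^<volgroup>$ when vgpattern is not set in the StorageClass
[ -z "$VG_PATTERN" ] && VG_PATTERN="^${VG}$"
# .spec.capacity must be given in bytes, e.g. 5Mi -> 5242880
echo "$VS_UID $NODE $VG $VG_PATTERN"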

kubectl apply -f -<<<'
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
  labels:
    kubernetes.io/nodename: worker-2
  name: 7ba359d9-2f4f-463b-a533-5fc34784def5
  namespace: default
spec:
  capacity: "5242880"
  ownerNodeID: worker-2
  shared: "no"
  thinProvision: "no"
  vgPattern: ^test$
  volGroup: test
'
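
The LVMVolume state can be checked with (name and namespace as in the manifest above):

kubectl get lvmvolumes.local.openebs.io 7ba359d9-2f4f-463b-a533-5fc34784def5 -n default -o jsonpath='{.status.state}'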

LVMVolume status:

status:
  state: Ready

Creation of a PV

  • Set .spec.capacity to the value from PVC.spec.capacity.
  • Add the annotation pv.kubernetes.io/provisioned-by: local.csi.openebs.io so the PV can be deleted appropriately.
  • Set .metadata.name to pvc-<VolumeSnapshot.metadata.uid>.
  • Set .spec.csi.volumeHandle to the VolumeSnapshot.metadata.uid.
  • Set .spec.csi.volumeAttributes["openebs.io/volgroup"] to SC.parameters.volgroup.
  • Set .spec.nodeAffinity to the original PV's .spec.nodeAffinity.
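
Continuing the sketch from the LVMVolume step (same assumed variables), the remaining values can be read like this:

kubectl get pv "$PV_NAME" -o jsonpath='{.spec.nodeAffinity}'   # copy into .spec.nodeAffinity
echo "pvc-${VS_UID}"                                           # .metadata.name (and the PVC volumeName later)
echo "$VS_UID"                                                 # .spec.csi.volumeHandle
echo "$VG"                                                     # openebs.io/volgroup volume attribute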

kubectl apply -f -<<<'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-7ba359d9-2f4f-463b-a533-5fc34784def5
  annotations:
    pv.kubernetes.io/provisioned-by: local.csi.openebs.io
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Mi
  csi:
    driver: local.csi.openebs.io
    fsType: ext4
    volumeAttributes:
      openebs.io/cas-type: localpv-lvm
      openebs.io/volgroup: test
    volumeHandle: 7ba359d9-2f4f-463b-a533-5fc34784def5
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - worker-2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
'

PV status:

status:
  phase: Available

Creation of a PVC

Set .spec.volumeName to the PV.metadata.name created above.

kubectl apply -f -<<<'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-path-from-snapshot
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-7ba359d9-2f4f-463b-a533-5fc34784def5
'

PVC status:

status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Mi
  phase: Bound

Check the restored volume from a new Pod

kubectl apply -f -<<<'
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path-from-snapshot
  namespace: default
spec:
  containers:
  - image: busybox:latest
    command:
      - tail
      - "-f"
      - "/dev/null"
    imagePullPolicy: IfNotPresent
    name: container
    volumeMounts:
      - mountPath: /data
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-local-path-from-snapshot
'

Once the Pod is running, exec into the container and check that the data is in /data.
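
For example, reading back the test file written earlier (example file name):

kubectl exec -n default test-local-path-from-snapshot -- cat /data/hello.txt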

Tear down

  • Deletion of the PVC
    • Releases the PV
  • Deletion of the VolumeSnapshot
    • Deletes the VolumeSnapshotContent
      • Deletes the PV
      • Deletes the LVMVolume
        • Deletes the LV on the node
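
A possible command sequence for this (example names; the Pods have to go first so the volumes are released):

kubectl delete pod test-local-path test-local-path-from-snapshot -n default
kubectl delete pvc test-local-path-from-snapshot test-local-path -n default
kubectl delete volumesnapshot test-shared-snapshot -n default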

@dsharma-dc
Contributor

This is to be added as part of the roadmap for enhancements.

@avishnu
Member

avishnu commented Sep 17, 2024

Thanks @oscar-martin for documenting the steps. We'll be closing this ticket and tracking the enhancement via #13.

avishnu closed this as completed Sep 17, 2024