Data mover might not work with local PVs/PVCs CSIs #7044
Neither would work in my case, because it needs to bind to the same node as the PVC being snapshotted. If I have 3 PVCs on 3 different nodes, I need to allow-list all of them, but I'll still have the problem mentioned in this ticket.
I double-checked the code for openebs/lvm-localpv, and I don't think it supports restoring/cloning a volume from a CSI snapshot; see the code here: if the volume is created from a snapshot, it returns an error. Also see the features in its README: Clone is not supported. So maybe you need to use openebs/zfs-localpv.
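For reference, a minimal way to check this limitation outside of Velero is to attempt a clone-from-snapshot directly. The sketch below assumes an existing VolumeSnapshot named redis-snap in the redis namespace and a StorageClass named openebs-lvmpv; all of these names are hypothetical and should be adjusted to your setup. On a driver without clone support, the resulting PVC is expected to stay Pending or fail.

```bash
# Hypothetical names: adjust the namespace, VolumeSnapshot and StorageClass to your setup.
# On openebs/lvm-localpv this clone-from-snapshot request is expected not to provision,
# since the driver does not implement creating a volume from a snapshot source.
kubectl apply -n redis -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-test
spec:
  storageClassName: openebs-lvmpv
  dataSource:
    name: redis-snap            # existing VolumeSnapshot (hypothetical name)
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```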
Finally, I figured out that the current problem is not that the data mover is running on a node where the snapshot doesn't exist or isn't accessible, but that the snapshot cannot be cloned as a volume at all, since openebs/lvm-localpv doesn't support it. The snapshot location is not a problem for local volumes.
@trunet Since openebs/lvm-localpv doesn't support snapshot clone, I don't think you can use either CSI snapshot backup or CSI snapshot data movement backup for volumes provisioned by it. As mentioned above, openebs/zfs-localpv may be an alternative for you.
ok, I’ll give it a shot, thanks.
@Lyndon-Li If I get you right, the problem with openebs/lvm-localpv is that the data mover clones the snapshot into a new PV, which is then uploaded to the remote backup location. This is not possible with openebs/lvm-localpv, as it does not implement the clone feature.

Now, as I want to use openebs/lvm-localpv but still need backups, I have an idea and want to know whether it could work. The snapshot created by openebs/lvm-localpv can be mounted on the host it is stored on, so the snapshot can easily be copied over to a remote location. It's easy to do manually for a single snapshot, or to write a script that performs the needed steps (mount, copy, unmount).

My idea is to tell Velero to use my own data mover, called "openebs-lvm-mover" for example. This data mover could be implemented as a script running on the worker host that looks for DataUpload resources which have "openebs-lvm-mover" set as the data mover and are related to a snapshot stored on that host. In that case it would mount and copy the snapshot to a remote location. After the script finishes, the DataUpload needs to be marked as finished, and the snapshot can be removed. Is this correct, or did I miss something?
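A very rough sketch of that watcher loop could look like the following. It assumes the DataUpload CRs live in the velero namespace, that the custom mover name is exposed as .spec.datamover and the phase as .status.phase; these field names, the phase values, and the status transition Velero expects from a third-party mover are assumptions that must be checked against your Velero version's CRD and the BYOD documentation.

```bash
#!/usr/bin/env bash
# Rough sketch of a host-local "openebs-lvm-mover" loop (not a definitive implementation).
# Assumptions to verify against your Velero version:
#   - DataUpload CRs live in the "velero" namespace
#   - the custom mover name is exposed as .spec.datamover
#   - the phase field is .status.phase
set -euo pipefail

NODE="$(hostname)"

while true; do
  # List DataUploads with their mover name and phase, tab-separated.
  kubectl -n velero get datauploads \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.datamover}{"\t"}{.status.phase}{"\n"}{end}' |
  while IFS=$'\t' read -r name mover phase; do
    [ "$mover" = "openebs-lvm-mover" ] || continue
    [ "$phase" = "New" ] || [ -z "$phase" ] || continue

    echo "handling DataUpload ${name} on node ${NODE}"
    # 1. Locate the LVM snapshot for this DataUpload (driver-specific, not shown).
    # 2. Mount it, copy the data to the remote location (e.g. with rsync), unmount.
    # 3. Mark the DataUpload as finished; the exact patch Velero expects is an
    #    assumption and must be confirmed against the Velero docs before use:
    # kubectl -n velero patch dataupload "${name}" --subresource=status \
    #   --type=merge -p '{"status":{"phase":"Completed"}}'
  done
  sleep 30
done
```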
@DommDe
What steps did you take and what happened:
Because the data mover pod was scheduled to a different node than the original PV/PVC, it is stuck in Pending and the upload never starts.
I'm trying to use Velero CSI to upload snapshots to a MinIO cluster. All my persistent volumes are backed by openebs/lvm-localpv.
At first I was getting the error from #6964; using the velero/velero:main image, which contains the fix from #6976, resolved it. I created my backup with the following:
velero backup create trunettest --include-namespaces redis --snapshot-move-data
Now the problem is that the data mover pod was scheduled to a node that doesn't have the snapshot (remember, it's a local PV). It has therefore been stuck in Pending for about 30 minutes now.
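For anyone hitting the same symptom, a couple of commands to inspect the stuck upload, assuming the default velero namespace (the mover pod name is a placeholder and comes from the DataUpload resource):

```bash
# List data-movement CRs and their phases (namespace assumed to be "velero").
kubectl -n velero get datauploads.velero.io

# The Pending pod's events usually show the scheduling conflict,
# e.g. "volume node affinity conflict" for a local PV.
kubectl -n velero describe pod <datamover-pod-name> | tail -n 20
```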
What did you expect to happen:
The snapshot should be uploaded successfully to the backup location.
The following information will help us better understand what's going on:
bundle-2023-11-01-02-22-29.tar.gz
Anything else you would like to add:
The original PVC contains the
volume.kubernetes.io/selected-node
annotation, which could help the data mover pod set an affinity to the correct node.
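For illustration, the annotation can be read as shown below (the namespace and PVC name are placeholders); its value is the node a nodeSelector or node affinity on the mover pod would need to match.

```bash
# Prints the node the local PV was provisioned on (namespace/PVC name are placeholders).
kubectl -n redis get pvc <pvc-name> \
  -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'
```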
Environment:
- Velero version (use velero version):
- Velero features (use velero client config get features): features: <NOT SET>
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release): talos

Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.