[Doc Source]: how to delete volumes from a pool #1082
Comments
I have the same problem and no understanding of how to clean up the orphan volumes. I'm in the process of deleting all resources in the mayastor namespace and detaching and wiping the disks!
@orboan Try to use mayastor-client in the pod. With that binary you are able to list all nexuses and all children, and it can be used for rebuilding degraded volumes as well.

```
/bin # ./mayastor-client -b 10.72.4.111 pool list
/bin # ./mayastor-client -b 10.72.4.112 nexus list
```

on node 10.72.4.113:

```
/bin # ./mayastor-client -b 10.72.4.113 nexus remove 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=bac9cc57-9
/bin # ./mayastor-client -b 10.72.4.113 nexus children 30ab9ef3-f787-410b-a38a-1d828600ddc6
/bin # ./mayastor-client -b 10.72.4.113 rebuild state 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-
```

I was experimenting with this in version 0.8.0, and it was working there. Unfortunately, this is completely undocumented, although I believe it is a very important part of the system.
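For completeness, reaching mayastor-client usually means exec'ing into one of the mayastor pods first. A minimal sketch — the namespace (`mayastor`) and pod label (`app=mayastor`) are assumptions and will differ between installs:

```shell
# Sketch only: locate a mayastor pod and run mayastor-client inside it.
# Namespace and label selector are assumptions; adjust to your deployment.
POD=$(kubectl -n mayastor get pods -l app=mayastor -o name | head -n 1)

# List pools via the client, pointing it at a mayastor node address
# (replace the IP with one of your own nodes, as in the commands above).
kubectl -n mayastor exec -it "$POD" -- mayastor-client -b 10.72.4.111 pool list
```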
We're seeing the same issue. Removing a PV via …
A question to those experiencing this issue: is the reclaimPolicy of the StorageClass set to Delete?

If so, then the PV created during dynamic provisioning should also carry the Delete policy, and that being the case, Mayastor assets (i.e. replicas) should be deleted automatically when the PVC (and hence PV) is deleted. If that is not the case, then this is a bug that we should look into, but it isn't behaviour currently seen during system testing of the current release.

As has been noted, setting the reclaimPolicy to …

A later version of Mayastor (expected Q4 2022) will likely feature automated garbage collection of orphaned volumes (i.e. volumes set to Retain whose PVC has been deleted).
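Since (as far as I know) the reclaimPolicy of an existing StorageClass cannot be changed, a workaround for volumes that already exist is to change the reclaim policy on the PV itself. A hedged sketch using standard kubectl; `<pv-name>` is a placeholder:

```shell
# Inspect the reclaim policy of existing PVs.
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy

# Switch an existing PV from Retain to Delete, so that deleting the
# PVC (and hence the PV) also releases the underlying Mayastor volume.
# <pv-name> is a placeholder for the actual PersistentVolume name.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```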
Indeed, the reclaimPolicy is set to "Retain". Thanks for the info.
It's worth mentioning that mayastor was not removing the underlying volumes after the PV was deleted, not the PVC. So, sure, setting the reclaim policy to … Setting the reclaimPolicy to …

Basically, it was a manual cleanup process that was fairly dangerous, I'm sure. Just wanted to put my thoughts here for a future person who finds themselves in this same situation.

Looking forward to support for Retained volumes. I don't want some SRE to accidentally delete a PVC for a major database and then have to go into a disaster-recovery panic. There are a lot of places where running …
Looks like this is a known k8s bug: …

We've added automatic GC of orphaned volumes: openebs/mayastor-control-plane#724
Are you proposing new content, or a change to the existing documentation layout or structure?
I would like to propose adding documentation on how to delete volumes from a pool. I apologize in advance if this is already explained somewhere, but I could not find it.
The reason is that, since only thick provisioning is currently supported, the available pool space can run out in some situations, and it may therefore be necessary to remove volumes that are no longer needed.
What I just did was delete the corresponding PVC and PV using regular kubectl, but when I check the volumes with the mayastor plugin (kubectl mayastor get volumes), the volumes are still there, wasting space in the pool.
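One way to spot such orphaned volumes is to compare the UUIDs reported by the mayastor plugin against the UUIDs of existing PVs (dynamically provisioned PVs are named `pvc-<uuid>`, and the Mayastor volume should carry the same UUID — an assumption worth verifying on your cluster). A minimal sketch, using sample data in place of real `kubectl mayastor get volumes` / `kubectl get pv` output:

```shell
# Sketch: diff the volume UUIDs known to Mayastor against the UUIDs of
# existing PVs. The here-docs stand in for real cluster output.
cat > mayastor-volumes.txt <<'EOF'
30ab9ef3-f787-410b-a38a-1d828600ddc6
aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
EOF
cat > pv-uuids.txt <<'EOF'
30ab9ef3-f787-410b-a38a-1d828600ddc6
EOF

# comm requires sorted input; -23 suppresses lines unique to the PV list
# and lines common to both, leaving only candidate orphans.
sort mayastor-volumes.txt > mv.sorted
sort pv-uuids.txt > pv.sorted
comm -23 mv.sorted pv.sorted
```

Each UUID printed is a Mayastor volume with no matching PV, i.e. a candidate for manual cleanup.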
Thank you.