
[Doc Source]: how to delete volumes from a pool #1082

Closed
orboan opened this issue Jan 29, 2022 · 7 comments
Labels: backlog, BUG (Something isn't working), documentation (Improvements or additions to documentation)

Comments


orboan commented Jan 29, 2022

Are you proposing new content, or a change to the existing documentation layout or structure?

I would like to propose adding documentation on how to delete volumes from a pool. I apologize in advance if this is already explained somewhere, but I could not find it.
The reason is that only thick provisioning is currently supported, so the available pool space can run out in some situations, making it necessary to remove volumes that are no longer needed.
What I did was delete the corresponding PVC and the PV using regular kubectl, but when I check the volumes with the mayastor plugin (kubectl mayastor get volumes), the volumes are still there, wasting space in the pool.
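
For reference, what I ran was roughly the following (resource names are illustrative):

kubectl delete pvc my-pvc            # remove the claim
kubectl delete pv pvc-<uuid>         # remove the PV as well (left behind when reclaimPolicy is Retain)
kubectl mayastor get volumes         # the volume still shows up here, holding pool space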

Thank you.

@orboan orboan added the documentation Improvements or additions to documentation label Jan 29, 2022
@orboan orboan changed the title [Doc Source]: [Issue Summary] [Doc Source]: how to delete volumes from a pool Jan 31, 2022

bugslifesolutions commented Mar 19, 2022

I have the same problem and no understanding of how to clean up the orphaned volumes. I'm in the process of deleting all resources in the mayastor namespace, then detaching and wiping the disks!
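
For anyone taking the same route, a rough sketch of what that entails (the device path is illustrative, and this destroys all data on the disk):

kubectl delete namespace mayastor    # removes all Mayastor resources, including the pool CRs
sudo wipefs -a /dev/vdc              # on each storage node: clear the pool's on-disk signatures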


kpoos commented Apr 21, 2022

@orboan Try using mayastor-client inside the mayastor pod. With that binary you can list all nexuses and their children, and it can also be used for rebuilding degraded volumes.

/bin # ./mayastor-client -b 10.72.4.111 pool list
NAME STATE CAPACITY USED DISKS
pool-on-openebs-test-02-san online 268171214848 5368709120 aio:///dev/vdd?uuid=ae169e36-2c3c-4559-bb8a-3785bcb2c7b4
pool-on-openebs-test-02 online 9991058620416 612087365632 aio:///dev/vdc?uuid=768c1f01-4bfd-4618-a4cf-921a751a7116

/bin # ./mayastor-client -b 10.72.4.112 nexus list
NAME SIZE STATE REBUILDS PATH
774ba995-9521-45dd-ae61-de188ee4e9dc 21474836480 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-774ba995-9521-45dd-ae61-de188ee4e9dc
49216324-cd59-4afd-9a93-3e16f1a2b70d 10737418240 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-49216324-cd59-4afd-9a93-3e16f1a2b70d
b9fee01a-7309-4e04-ba0f-861ea228aa08 10737418240 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-b9fee01a-7309-4e04-ba0f-861ea228aa08
70a57e2f-4a42-4461-9e44-a4e9a4002f6b 21474836480 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-70a57e2f-4a42-4461-9e44-a4e9a4002f6b
f690b066-6809-498d-808c-9d428c1ee09e 10737418240 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-f690b066-6809-498d-808c-9d428c1ee09e
68057349-a5ad-4c43-adb5-a1402dab9042 536870912000 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-68057349-a5ad-4c43-adb5-a1402dab9042
0807a310-a00d-4d3a-9a2f-8a4456927a45 52428800 online 0 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:nexus-0807a310-a00d-4d3a-9a2f-8a4456927a45
/bin # ./mayastor-client -b 10.72.4.112 replica list
POOL NAME THIN SHARE SIZE URI
pool-on-openebs-test-03-san 0807a310-a00d-4d3a-9a2f-8a4456927a45 false none 54525952 bdev:///0807a310-a00d-4d3a-9a2f-8a4456927a45?uuid=72a895eb-4925-4ec8-ae94-abe67bf746df
pool-on-openebs-test-03-san 30ab9ef3-f787-410b-a38a-1d828600ddc6 false nvmf 10737418240 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190
pool-on-openebs-test-03-san f690b066-6809-498d-808c-9d428c1ee09e false none 10737418240 bdev:///f690b066-6809-498d-808c-9d428c1ee09e?uuid=43320b71-83c1-4662-aea0-90938351d2f2
pool-on-openebs-test-03-san 176d5553-b525-4a46-989f-6b62d9af1483 false none 21474836480 bdev:///176d5553-b525-4a46-989f-6b62d9af1483?uuid=5a49e2ee-7589-4648-92e2-372f76bdd8c6
pool-on-openebs-test-03-san 82f10ebd-6177-4e6f-ae1c-a393755f96dd false none 21474836480 bdev:///82f10ebd-6177-4e6f-ae1c-a393755f96dd?uuid=db2e9bdd-6838-40fd-89e2-f9a59b1a5cb0
pool-on-openebs-test-03 b9fee01a-7309-4e04-ba0f-861ea228aa08 false none 10737418240 bdev:///b9fee01a-7309-4e04-ba0f-861ea228aa08?uuid=f5cfe4f0-9cad-4f7b-97f4-9d982844cd94
pool-on-openebs-test-03 49216324-cd59-4afd-9a93-3e16f1a2b70d false none 10737418240 bdev:///49216324-cd59-4afd-9a93-3e16f1a2b70d?uuid=ce09fdf8-7dde-4bca-b3f1-be4248eb7e1e
pool-on-openebs-test-03 70a57e2f-4a42-4461-9e44-a4e9a4002f6b false none 21474836480 bdev:///70a57e2f-4a42-4461-9e44-a4e9a4002f6b?uuid=177c511c-1000-4cd9-ac18-19df73a11a5c
pool-on-openebs-test-03 774ba995-9521-45dd-ae61-de188ee4e9dc false none 21474836480 bdev:///774ba995-9521-45dd-ae61-de188ee4e9dc?uuid=8c9607fd-1763-4778-bf98-43b2b6fdbe00
pool-on-openebs-test-03 68057349-a5ad-4c43-adb5-a1402dab9042 false none 536870912000 bdev:///68057349-a5ad-4c43-adb5-a1402dab9042?uuid=fd08ebc9-d7be-4c71-9013-2dd6b398eac1

On node 10.72.4.113:
/bin # ./mayastor-client -b 10.72.4.113 nexus children 30ab9ef3-f787-410b-a38a-1d828600ddc6
NAME STATE
bdev:///30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=ad02bcea-b171-41d4-98c7-9d8b48469432 online
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190 faulted
nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=bac9cc57-943a-47cb-a029-0a8aa0ac95ae faulted

/bin # ./mayastor-client -b 10.72.4.113 nexus remove 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=bac9cc57-943a-47cb-a029-0a8aa0ac95ae
nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=bac9cc57-943a-47cb-a029-0a8aa0ac95ae

/bin # ./mayastor-client -b 10.72.4.113 nexus children 30ab9ef3-f787-410b-a38a-1d828600ddc6
NAME STATE
bdev:///30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=ad02bcea-b171-41d4-98c7-9d8b48469432 online
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190 faulted
/bin #

/bin # ./mayastor-client -b 10.72.4.113 nexus children 30ab9ef3-f787-410b-a38a-1d828600ddc6
NAME STATE
bdev:///30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=ad02bcea-b171-41d4-98c7-9d8b48469432 online
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190 faulted
nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=4225a123-10aa-445c-9d36-f6c5a13237e9 degraded
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-aa95-4796-9d79-c815f7ef8ff1 degraded
/bin # ./mayastor-client -b 10.72.4.113 nexus remove 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=cf3bd1f9-2272-4819-9c1d-1ba238670190
/bin # ./mayastor-client -b 10.72.4.113 nexus children 30ab9ef3-f787-410b-a38a-1d828600ddc6
NAME STATE
bdev:///30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=ad02bcea-b171-41d4-98c7-9d8b48469432 online
nvmf://10.72.4.111:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=4225a123-10aa-445c-9d36-f6c5a13237e9 online
nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-aa95-4796-9d79-c815f7ef8ff1 degraded
/bin #

/bin # ./mayastor-client -b 10.72.4.113 rebuild state 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-aa95-4796-9d79-c815f7ef8ff1
state
running
/bin # ./mayastor-client -b 10.72.4.113 rebuild progress 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-aa95-4796-9d79-c815f7ef8ff1
progress (%)
23
/bin # ./mayastor-client -b 10.72.4.113 rebuild progress 30ab9ef3-f787-410b-a38a-1d828600ddc6 nvmf://10.72.4.112:8420/nqn.2019-05.io.openebs:30ab9ef3-f787-410b-a38a-1d828600ddc6?uuid=5fad4b54-aa95-4796-9d79-c815f7ef8ff1
progress (%)
24

I was experimenting with this in version 0.8.0, and it worked there. Unfortunately it is completely undocumented, even though I believe it is a very important part of the system.

@crazyscience

We're seeing the same issue. Removing a PV via kubectl does not appear to remove the underlying storage allocation, based on what we're seeing in kubectl mayastor get volumes. This is all well and good for a very short time, but after putting CI/CD processes into the cluster, things can get out of hand pretty quickly. There definitely needs to be some cleanup. I'm sure it can be done manually as mentioned above, but manual interaction should not be required, as the destruction of the data was already ordered via kubectl.

@GlennBullingham
Member

A question for those experiencing this issue: is the reclaimPolicy of the corresponding StorageClass set to Delete?

If so, then the PV created during dynamic provisioning should also carry the Delete policy, and in that case Mayastor assets (i.e. replicas) should be deleted automatically when the PVC (and hence the PV) is deleted. If that is not happening, then it is a bug that we should look into, but it isn't behaviour currently seen during system testing of the current release.
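
A quick way to check both policies (the StorageClass and PV names here are illustrative):

kubectl get sc mayastor -o jsonpath='{.reclaimPolicy}'
kubectl get pv pvc-<uuid> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'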

As has been noted, setting the reclaimPolicy to anything other than Delete will lead to orphaned assets. There is currently no benefit in setting this policy to Retain, since once the corresponding PVC and/or PV has been deleted there is no way to restore access to the volume from the underlying, orphaned replicas. This is by design: Mayastor is currently predicated on dynamic provisioning and doesn't support static provisioning.

A later version of Mayastor (expected Q4 2022) will likely feature automated garbage collection of orphaned volumes (i.e. set to Retain when the PVC has been deleted).

@crazyscience

Indeed, the reclaimPolicy is set to "Retain". Thanks for the info.

@crazyscience

It's worth mentioning that mayastor was not removing the underlying volumes after the PV was deleted, not just the PVC. Sure, setting the reclaim policy to Retain is expected to keep the PV around after the PVC is removed. But the expectation is that after deleting the PV (in this case, manually), the underlying volume would be destroyed. It is not.

Setting the reclaimPolicy to Delete does in fact fix the problem, but dynamic provisioning doesn't necessitate automatic deletion of the underlying volumes. I don't understand how or why this would be a design feature, as it results in a broken cluster state. I managed to clean it up manually by using the mayastor CLI, then going into the etcd cluster and deleting references, and then restarting some of the mayastor pods (I don't remember which ones).
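
A rough sketch of the cross-referencing step, using only commands already shown in this thread (the node address is illustrative, and I'm deliberately omitting the etcd key deletion since the exact keys vary by release):

./mayastor-client -b 10.72.4.111 replica list    # replica UUIDs still held on a storage node
kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeHandle}{"\n"}{end}'    # volume handles Kubernetes still references
kubectl mayastor get volumes    # replicas whose UUID matches no remaining volume handle are the orphans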

Basically, it was a manual cleanup process that was fairly dangerous, I'm sure. I just wanted to put my thoughts here for a future person who finds themselves in this same situation. Looking forward to support for Retained volumes. I don't want some SRE to accidentally delete a PVC for a major database and then have to go into a disaster-recovery panic. There are a lot of places where a spec passed to kubectl delete -f some-spec-that-was-previously-applied.yml may contain a PVC. For mission-critical stuff, we would use a different class that requires an additional step to destroy the underlying PV. We can't do this with mayastor currently.

@tiagolobocastro
Contributor

Looks like this is a known k8s bug:
kubernetes-csi/external-provisioner#546
https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2644-honor-pv-reclaim-policy/README.md

We've added automatic GC of orphaned volumes: openebs/mayastor-control-plane#724
As a workaround on the current release, please restart the csi-controller pod.
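
A sketch of that workaround, assuming the default mayastor namespace and a deployment named csi-controller (adjust both to your install):

kubectl -n mayastor rollout restart deploy/csi-controller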
