
fix(zfspv): fixing data loss in case of pod deletion #91

Merged
merged 1 commit into openebs:v0.6.x on Apr 22, 2020

Conversation

pawanpraka1
Contributor

cherry-pick: #89

This looks like a bug in ZFS: changing the mountpoint property to none should automatically unmount the file system, but that does not always happen. When we delete the pod, we get an unmount request for the old pod and a mount request for the new pod. The driver performs the unmount by setting mountpoint to none, assumes the unmount has completed, and proceeds to delete the mountpath, but here ZFS has not unmounted the dataset:

```
$ sudo zfs get all zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765 | grep mount
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  mounted     yes   -
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  mountpoint  none  local
zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765  canmount    on
```

Here the driver assumes the dataset has been unmounted, proceeds to delete the mountpath, and ends up deleting the data as part of the cleanup for the NodeUnpublish request.
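To make the failure mode concrete, here is a rough Go sketch (illustrative only, not the driver's actual code; the mountpath is a placeholder) of the old cleanup sequence: set mountpoint to none, assume the dataset is now unmounted, then wipe the mountpath with os.RemoveAll, which destroys the still-mounted data.

```
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	dataset := "zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765"
	mountPath := "/path/to/mountpath" // placeholder for the real mountpath

	// Old approach: rely on "zfs set mountpoint=none" to unmount the dataset.
	if out, err := exec.Command("zfs", "set", "mountpoint=none", dataset).CombinedOutput(); err != nil {
		log.Fatalf("zfs set failed: %v: %s", err, out)
	}

	// If ZFS did not actually unmount (mounted is still "yes" as in the
	// output above), RemoveAll deletes the live data under the mountpath.
	if err := os.RemoveAll(mountPath); err != nil {
		log.Fatalf("cleanup failed: %v", err)
	}
}
```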

Shifting to `zfs umount` instead of `zfs set mountpoint=none` for unmounting the dataset. Also, the driver was using os.RemoveAll, which is very risky because it removes any children as well; since the mountpoint is not supposed to contain anything, os.Remove is sufficient, and it will fail if anything is still there.
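For comparison, a minimal sketch of the fixed cleanup path under the same assumptions (helper names and the mountpath are illustrative, not the driver's actual code): unmount explicitly with `zfs umount`, then remove the mountpath with os.Remove so the cleanup fails instead of deleting data if the directory is not empty.

```
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// umountVolume unmounts the dataset with "zfs umount" so the driver only
// proceeds once ZFS has actually released the mount.
func umountVolume(dataset string) error {
	out, err := exec.Command("zfs", "umount", dataset).CombinedOutput()
	if err != nil {
		return fmt.Errorf("zfs umount %s failed: %v: %s", dataset, err, out)
	}
	return nil
}

// cleanupMountPath removes the now-empty mountpath. os.Remove returns an
// error on a non-empty directory, which guards against deleting live data
// if the dataset somehow remained mounted.
func cleanupMountPath(mountPath string) error {
	return os.Remove(mountPath)
}

func main() {
	dataset := "zfspv-pool/pvc-3fe69b0e-9f91-4c6e-8e5c-eb4218468765"
	mountPath := "/path/to/mountpath" // placeholder for the real mountpath

	if err := umountVolume(dataset); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := cleanupMountPath(mountPath); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```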

Signed-off-by: Pawan <[email protected]>

@pawanpraka1 pawanpraka1 requested a review from kmova April 22, 2020 18:43
@pawanpraka1 pawanpraka1 added the "bug (Something isn't working)" label Apr 22, 2020
@pawanpraka1 pawanpraka1 added this to the v0.7.0 milestone Apr 22, 2020
@kmova kmova modified the milestones: v0.7.0, v0.6.1 Apr 22, 2020
@kmova kmova merged commit 920e9fa into openebs:v0.6.x Apr 22, 2020
@pawanpraka1 pawanpraka1 deleted the v0.6.x branch April 23, 2020 06:50