fix(zfspv): fixing data loss in case of pod deletion
When a pod is deleted, the driver receives an unmount request for the old pod
and a mount request for the new pod. If the driver issues the umount but the
system is slow to finish processing it, the driver assumes the unmount has
completed and proceeds to delete the mount path. Because the unmount has not
actually finished, the volume's data is still visible at that path, and the
driver ends up deleting it while cleaning up for the NodeUnpublish request.

The driver was using os.RemoveAll, which is risky because it also removes the
directory's children. Since the mount point is not supposed to contain
anything, os.Remove is sufficient: it fails if anything is still present
there, leaving the data intact.
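
A minimal sketch of the safer cleanup pattern described above, not the actual
driver code; the helper name cleanupMountPath, the direct call to the umount
binary, and the example path are assumptions made purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanupMountPath unmounts targetPath and then removes the now-empty
    // directory. os.Remove refuses to delete a non-empty directory, so if
    // the umount has not really completed yet, the volume data underneath
    // is left untouched and an error is returned instead.
    func cleanupMountPath(targetPath string) error {
    	// Ask the OS to unmount the path; a failure here must stop the cleanup.
    	if out, err := exec.Command("umount", targetPath).CombinedOutput(); err != nil {
    		return fmt.Errorf("umount %s failed: %v, output: %s", targetPath, err, out)
    	}

    	// os.Remove deletes only an empty directory (or a single file).
    	// Unlike os.RemoveAll, it cannot recursively wipe data that is still
    	// visible because the unmount is lagging behind.
    	if err := os.Remove(targetPath); err != nil {
    		return fmt.Errorf("failed to remove mount path %s: %v", targetPath, err)
    	}
    	return nil
    }

    func main() {
    	// Hypothetical target path, for illustration only.
    	if err := cleanupMountPath("/var/lib/kubelet/pods/example/volumes/pvc-xyz/mount"); err != nil {
    		fmt.Println(err)
    	}
    }

The design choice is simply to let the filesystem enforce the invariant: an
empty mount point can always be removed, while a lagging or failed unmount
leaves the directory non-empty and makes the removal fail loudly.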

Signed-off-by: Pawan <[email protected]>
pawanpraka1 committed Apr 22, 2020
1 parent 280949c commit b0c0f87
Showing 1 changed file with 1 addition and 2 deletions.
pkg/zfs/mount.go (1 addition, 2 deletions)
@@ -78,9 +78,8 @@ func UmountVolume(vol *apis.ZFSVolume, targetPath string,
 		}
 	}
 
-	if err := os.RemoveAll(targetPath); err != nil {
+	if err := os.Remove(targetPath); err != nil {
 		logrus.Errorf("zfspv: failed to remove mount path Error: %v", err)
 		return err
 	}
 
 	logrus.Infof("umount done path %v", targetPath)
