
Add support for standalone snapshot creation and cloning support #693

Closed
wants to merge 3 commits

Conversation

Madhu-1
Collaborator

@Madhu-1 Madhu-1 commented Oct 18, 2019

Add support for standalone snapshot creation and creating new PVC from snapshot

With the current implementation in ceph-csi, it's not possible to delete a cloned volume while its snapshot is present, because of the parent/child linking. To remove this dependency we had a discussion (thanks to @dillaman for suggesting this idea) and came up with an approach that separates the clone from the snapshot, so that the snapshot and the cloned image can be deleted in any order.

The steps followed to create an independent snapshot are as follows.

Create a snapshot

  • Create a temporary snapshot from the parent volume
  • Clone a new image from the temporary snapshot with the options
    --rbd-default-clone-format 2 --image-feature layering,deep-flatten
  • Delete the temporary snapshot
  • Create a snapshot with the requested name from the cloned volume

Create PVC from a snapshot

  • Clone a new image from the snapshot with the user-provided options
  • Check the depth of the image, since the number of nested volume
    clones is limited (max 10 by default, configurable); if the limit is
    reached, flatten the newly cloned image (see the sketch below)
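
A minimal Go sketch of that depth check, assuming the PR's package context (rbdVolume, util.Credentials) and hypothetical helpers getCloneDepth and flattenImage; the limit value is only illustrative:

```
// maxCloneDepth is illustrative; the PR describes a configurable limit
// (default 10) on nested volume clones.
const maxCloneDepth = 10

// ensureCloneDepth flattens a freshly cloned image once the nesting limit
// is reached. getCloneDepth and flattenImage are hypothetical helpers
// standing in for the PR's real implementation.
func ensureCloneDepth(ctx context.Context, vol *rbdVolume, cr *util.Credentials) error {
	depth, err := getCloneDepth(ctx, vol, cr)
	if err != nil {
		return err
	}
	if depth >= maxCloneDepth {
		// flattening detaches the clone from its parent chain, so the
		// snapshot and the cloned image can later be deleted in any order
		return flattenImage(ctx, vol, cr)
	}
	return nil
}
```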

Delete a snapshot

  • Delete the temporary cloned image which was created for the snapshot
  • Delete the snapshot

Example commands:

1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature layering,deep-flatten
   <RBD image for src k8s volume>@<random snap name> <RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<k8s snap name>
5) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
   <RBD image for temporary snap image>@<k8s snap name> <RBD image for k8s dst vol>

Signed-off-by: Madhu Rajanna [email protected]

Partial Fixes: #675
Fixes: #403
#467 will be abandoned with this PR

Note: @dillaman I have addressed the review comments from PR #467, PTAL.


@dillaman dillaman left a comment


Looks like a good start. One concern that I couldn't easily comment on inline since it wasn't a changed line was the structure of rbdSnapshot and how RbdImageName sometimes points to the original volume and sometimes to the clone. I think that it should always refer to the snapshot clone image name -- and therefore RbdSnapName would always be the same as RbdImageName in the case of this new v2 snapshot implementation. For the legacy CSI snapshots, they could be different.

}()
RESTORE:
// create clone image and delete snapshot
err = restoreSnapshot(ctx, cloneRbdVol, snap, cr)


Nit: can we give this method a better name? Perhaps something like cloneRbdImageFromSnapshot?

pkg/rbd/rbd_util.go (outdated review thread, resolved)
vol := new(rbdVolume)
vol.ClusterID = rbdSnap.ClusterID
vol.Monitors = rbdSnap.Monitors
vol.Pool = rbdSnap.Pool


Nit: we should be able to (optionally) store snapshots in a different pool from the source image. It doesn't need to be part of this PR, but just a heads-up.

func createSnapshot(ctx context.Context, pOpts *rbdSnapshot, cr *util.Credentials) error {
var output []byte

image := pOpts.RbdImageName
snapName := pOpts.RbdSnapName

klog.V(4).Infof(util.Log(ctx, "rbd: snap create %s using mon %s, pool %s"), image, pOpts.Monitors, pOpts.Pool)
klog.V(4).Infof(util.Log(ctx, "rbd: snap create %s/%s using mon %s, pool %s"), image, snapName, pOpts.Monitors, pOpts.Pool)


Keep it consistent w/ the standardized RBD image-spec <pool-name>/<image-name>@<snap-name>:

Suggested change
klog.V(4).Infof(util.Log(ctx, "rbd: snap create %s/%s using mon %s, pool %s"), image, snapName, pOpts.Monitors, pOpts.Pool)
klog.V(4).Infof(util.Log(ctx, "rbd: snap create %s/%s@%s using mon %s"), pOpts.Pool, image, snapName, pOpts.Monitors)

@@ -552,32 +592,13 @@ func createSnapshot(ctx context.Context, pOpts *rbdSnapshot, cr *util.Credential
return nil
}

func unprotectSnapshot(ctx context.Context, pOpts *rbdSnapshot, cr *util.Credentials) error {


Nit: won't you need to handle backwards compatibility? While you won't be creating protected snapshots anymore, you might need to unprotect legacy snapshots when deleting images.
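
One possible shape for that backwards-compatibility path (a sketch only, with an assumed error-string match, not the PR's code): keep unprotectSnapshot around and tolerate snapshots that are already unprotected, so legacy protected snapshots and new v2 snapshots share the same delete path.

```
// unprotectLegacySnapshot is a hypothetical helper: new snapshots are never
// protected, but snapshots created by older ceph-csi versions may be, so the
// delete path tries to unprotect and ignores the already-unprotected case.
// The matched error string is an assumption, not taken from the PR.
func unprotectLegacySnapshot(ctx context.Context, pOpts *rbdSnapshot, cr *util.Credentials) error {
	err := unprotectSnapshot(ctx, pOpts, cr)
	if err != nil && !strings.Contains(err.Error(), "snap is already unprotected") {
		return err
	}
	return nil
}
```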


// attempt to use Ceph manager based deletion support if available
rbdCephMgrSupported, err := rbdManagerTaskDeleteImage(ctx, pOpts, cr)
id, err = getImageIDfromTrash(ctx, pOpts, cr)


This seems potentially dangerous. It seems like it would be safer to record the image id in the omap tables at CO snapshot / RBD image creation time, or prior to moving it to the trash.

Collaborator Author


Do I need to record the image ID, or can I fetch the image ID from the image prior to moving it to the trash?


I guess it depends if volume/snapshot IDs are 100% guaranteed to be unique for the lifetime of the CO. If, for example, we had a volume/snapshot named "X" that the CO deleted and we moved to the trash but cannot delete, can the CO ever create a new volume named "X"? If there is no guarantee that these CO names cannot be reused, then for safety we would need to record the image id prior to removing the image.

... but thinking about that some more, I guess we would just immediately fail in the create step since we would see existing omap entries for the volume/snapshot the CO thought it had deleted and then attempted to re-use?

Collaborator Author


We generate a unique ID for each PVC name and create a mapping between the unique ID and the PVC name. If the image deletion fails we don't remove the mapping from the omap; if it has been removed from the omap, that means the image is already deleted.
Even if the unique-ID-to-PVC-name mapping is not present in the omap, we generate a new ID and use it as the image name, so I think it should be fine.


Yeah, but we have an ask out to the external-provisioner sidecar to provide a deterministic name to the CSI based on the namespace and PVC name. If you then delete a PVC (and the CSI keeps it behind the scenes) and then re-create it, you are saying the CSI will generate a unique internal volume id? If so, I think the current implementation is fine. It could be more efficient to just look up the image id before moving it to the trash, but then you would have unique logic on the recovery path where it needs to scan the full trash which might have hundreds or thousands of images. The middle ground would be to record the id before moving it to the trash to avoid the full trash list and search.
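
A sketch of the "middle ground" described above, assuming the PR's package context and made-up helper names (getImageID, storeImageIDInOMap, trashMoveImage): persist the image ID just before the trash move so a failed deletion can be retried without listing the whole trash.

```
// moveImageToTrash records the RBD image ID in the volume's omap entry
// before moving the image to the trash, so a later retry of "trash remove"
// can use the stored ID instead of scanning the full trash list.
// All three helpers are hypothetical stand-ins, not the PR's API.
func moveImageToTrash(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) error {
	id, err := getImageID(ctx, pOpts, cr) // e.g. wraps `rbd info --format json`
	if err != nil {
		return err
	}
	if err := storeImageIDInOMap(ctx, pOpts, id, cr); err != nil {
		return err
	}
	return trashMoveImage(ctx, pOpts, cr)
}
```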

func deleteSnapshot(ctx context.Context, pOpts *rbdSnapshot, cr *util.Credentials) error {
var output []byte

image := pOpts.RbdImageName
snapName := pOpts.RbdSnapName

klog.V(4).Infof(util.Log(ctx, "rbd: snap rm %s using mon %s, pool %s"), image, pOpts.Monitors, pOpts.Pool)
klog.V(4).Infof(util.Log(ctx, "rbd: snap rm %s@%s using mon %s, pool %s"), image, snapName, pOpts.Monitors, pOpts.Pool)


Nit: use image-spec style

@@ -601,12 +616,11 @@ func restoreSnapshot(ctx context.Context, pVolOpts *rbdVolume, pSnapOpts *rbdSna
image := pVolOpts.RbdImageName
snapName := pSnapOpts.RbdSnapName

klog.V(4).Infof(util.Log(ctx, "rbd: clone %s using mon %s, pool %s"), image, pVolOpts.Monitors, pVolOpts.Pool)
klog.V(4).Infof(util.Log(ctx, "rbd: clone %s@%s %s using mon %s, pool %s"), pSnapOpts.RbdImageName, snapName, image, pVolOpts.Monitors, pVolOpts.Pool)


Nit: use image-spec style


output, err := execCommand("ceph", args)
if err != nil {
if strings.Contains(string(output), rbdTaskRemoveCmdInvalidString1) &&


Nit: can we move these to a common helper?
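
Something like the following could serve as that common helper (a sketch; the helper name is made up, and the second constant is assumed from the trailing && in the diff context):

```
// isCephMgrCommandUnsupported reports whether a `ceph rbd task ...` call
// failed because the cluster's manager does not provide the command, so the
// caller can fall back to the synchronous code path.
func isCephMgrCommandUnsupported(output string) bool {
	return strings.Contains(output, rbdTaskRemoveCmdInvalidString1) &&
		strings.Contains(output, rbdTaskRemoveCmdInvalidString2)
}
```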

pkg/util/voljournal.go (outdated review thread, resolved)
@Madhu-1
Collaborator Author

Madhu-1 commented Oct 18, 2019

Looks like a good start. One concern that I couldn't easily comment on inline since it wasn't a changed line was the structure of rbdSnapshot and how RbdImageName sometimes points to the original volume and sometimes to the clone. I think that it should always refer to the snapshot clone image name -- and therefore RbdSnapName would always be the same as RbdImageName in the case of this new v2 snapshot implementation. For the legacy CSI snapshots, they could be different.

rbdSnapshot refers to the original image only the first time, when we need to reserve the snapshot in the omap; once that's done it will point to the cloned image name.

Will add support to remove older snapshots.

@dillaman

rbdSnapshot refers to the original image only the first time, when we need to reserve the snapshot in the omap; once that's done it will point to the cloned image name.

Will add support to remove older snapshots.

Except you are also passing the rbdVol for the source at the same time. Therefore, there is no need to have the confusion w/ the name switching back-and-forth.

@Madhu-1
Collaborator Author

Madhu-1 commented Oct 18, 2019

rbdSnapshot refers to the original image only the first time, when we need to reserve the snapshot in the omap; once that's done it will point to the cloned image name.
Will add support to remove older snapshots.

Except you are also passing the rbdVol for the source at the same time. Therefore, there is no need to have the confusion w/ the name switching back-and-forth.

Setting RbdImageName from the source volume here because we are generating the snapshotID and reserving the snapshot details in the omap at this point; prior to this we don't have the snapshotID to set RbdImageName in the snapshot struct. Once the reservation is done we set it back to the cloned volume here, to create the final snapshot from the cloned volume.

If I am missing anything or doing it wrong, let me know.

@dillaman

If I am missing anything or doing it wrong, let me know.

If you need it for reserving the snapshot, pass the source volume struct instead. This is for clarity to avoid the back-and-forth.

@Madhu-1
Collaborator Author

Madhu-1 commented Oct 18, 2019

@dillaman addressed review comments PTAL

@Madhu-1 Madhu-1 force-pushed the clone branch 3 times, most recently from bf49e29 to 88c94b5 on October 18, 2019 at 18:14
@Madhu-1
Collaborator Author

Madhu-1 commented Oct 19, 2019

@dillaman I am still facing the mounting issue for the cloned PVC; below are the details:

Linux version 4.15.0-1040-gcp (buildd@lgw01-amd64-044) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #42-Ubuntu SMP Wed Aug 7 15:17:54 UTC 2019
W1019 05:16:16.699187   24874 rbd_attach.go:225] ID: 101 Req-ID: 0001-0024-3df72d6e-396a-4789-a181-a5aa0b38e2c8-0000000000000001-8c2e5b6d-f22f-11e9-9ac1-0242ac11000c rbd: map error exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable replicapool/csi-vol-8c2e5b6d-f22f-11e9-9ac1-0242ac11000c".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
E1019 05:16:16.699657   24874 utils.go:161] ID: 101 Req-ID: 0001-0024-3df72d6e-396a-4789-a181-a5aa0b38e2c8-0000000000000001-8c2e5b6d-f22f-11e9-9ac1-0242ac11000c GRPC error: rpc error: code = Internal desc = rbd: map failed exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable replicapool/csi-vol-8c2e5b6d-f22f-11e9-9ac1-0242ac11000c".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

But in my local testing on kernel version 5.3.7-1.el7.elrepo.x86_64 the tests passed.

@Madhu-1 Madhu-1 force-pushed the clone branch 2 times, most recently from b61c4c5 to 90a77a1 on October 19, 2019 at 06:26
@Madhu-1
Collaborator Author

Madhu-1 commented Oct 20, 2019

@dillaman PTAL let me know if anything is pending here

@nixpanic what would be the kernel version in the CentOS CI?

@nixpanic
Member

Currently the kernel on the CentOS-7 has version 3.10.0-1062.4.1.el7 (see the updates here). I do not know what parts related to Ceph have been backported. You would need to check the changelog of the package or check the online version from RHEL.

@Madhu-1
Collaborator Author

Madhu-1 commented Oct 22, 2019

Currently the kernel on the CentOS-7 has version 3.10.0-1062.4.1.el7 (see the updates here). I do not know what parts related to Ceph have been backported. You would need to check the changelog of the package or check the online version from RHEL.

Thanks for the info @nixpanic. Anyway, we can create a vagrant-based kube cluster; any idea when a CentOS 8 libvirt vagrant box will be available? I tried to find some info: https://bugs.centos.org/view.php?id=16455

@Madhu-1
Collaborator Author

Madhu-1 commented Oct 22, 2019

We can even upgrade the kernel version in the VM created using vagrant. @nixpanic any idea when we can have an E2E setup on the CentOS CI?

@dillaman

@Madhu-1 Ouch -- I thought krbd passed the optional read-only flag [1] to the get_features call since that would have let the older kernels use deep-flatten on the snapshots since those are only opened in read-only mode. I think w/o that support, we are going to need to force flattening of snapshots prior to the creation of any RBD snapshots on the CO snapshot backing RBD image.

@idryomov It's too late to help us now, but can we get that flag added to krbd so that in the future read-write image features don't affect read-only images?

[1] https://github.com/ceph/ceph/blob/master/src/cls/rbd/cls_rbd.cc#L862

@dillaman

What's the status of snapshot / clone support for the v2.1 release?

Signed-off-by: Madhu Rajanna <[email protected]>
```
* create temporary RBD snapshot on source volume's RBD image
* RBD clone CO source volume's RBD image's snapshot to new RBD image
* remove temporary RBD snapshot on CO source volume's RBD image

* serialize work on RBD snapshot image clone
* test RBD snapshot image clone's depth
* if depth over hard limit flatten clone
* create RBD snapshot on RBD snapshot image clone
* create RBD volume image clone from RBD snapshot image clone

* delete temporary RBD image
* delete temporary RBD snapshot
```

Signed-off-by: Madhu Rajanna <[email protected]>
@humblec
Collaborator

humblec commented Mar 3, 2020

@Madhu-1 can you revisit this PR ?

@Madhu-1
Collaborator Author

Madhu-1 commented Mar 3, 2020

@humblec I am already working on this one; the PR is already updated (need to do more work).

@humblec humblec mentioned this pull request Mar 17, 2020
@humblec
Collaborator

humblec commented Mar 17, 2020

[Status update]
Last week we had a meeting around the snapshot/clone design/implementation introduced with this PR, to support further use cases like backup/DR, data replication, etc. The main reason for this meeting was to avoid another revamp of the snapshot/clone implementation when we start supporting those use cases in the near future. At the moment we are listing the requirements to support them, along with the possible changes/enhancements needed in the current implementation/design. Further technical notes pointing to those discussions will follow here. Considering we are at the code freeze for v2.1.0 and the state of this feature, the decision is to defer it to next month's release (v3.0.0 #865) and release v2.1.0 as planned with the other bug fixes and enhancements.

@ShyamsundarR
Contributor

[Status update]
Last week we had a meeting around the snapshot/clone design/implementation introduced with this PR, to support further use cases like backup/DR, data replication, etc. The main reason for this meeting was to avoid another revamp of the snapshot/clone implementation when we start supporting those use cases in the near future. At the moment we are listing the requirements to support them, along with the possible changes/enhancements needed in the current implementation/design. Further technical notes pointing to those discussions will follow here. Considering we are at the code freeze for v2.1.0 and the state of this feature, the decision is to defer it to next month's release (v3.0.0 #865) and release v2.1.0 as planned with the other bug fixes and enhancements.

Details of the other use cases are present here.

The decision is to continue with the current design and implementation, as it does not impede the future use cases as understood at present.

@nixpanic nixpanic added this to the release-3.0.0 milestone Apr 9, 2020
@mergify
Contributor

mergify bot commented May 21, 2020

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

@ShyamsundarR
Contributor

@dillaman an image whose ancestor is undergoing a flatten is still mountable by the clients, right?

e.g., if the image chain is image1 -> image2 (CSI-snap) -> image3 (CSI-clone) when cloning from a CSI-snapshot, image2 could be undergoing a flatten (either due to soft or hard flatten limits), whereas image3 could be mounted and used in the interim.

An image that is undergoing flattening is not mountable by the clients, right? As a result, our base rule is to always flatten the images that serve as the intermediate CSI-snapshot.

If the above is true, when cloning from a CSI-volume rather than from a CSI-snapshot, where the current image chain would be image1 (source) -> image2 (RBD clone serving as the hidden CSI-snapshot) -> image3 (RBD clone serving as the CSI-clone), could we flatten image2, assuming image3 is at the hard limit of clone depth, but still return success for the operation?

@dillaman

dillaman commented Jun 4, 2020

e.g., if the image chain is image1 -> image2 (CSI-snap) -> image3 (CSI-clone) when cloning from a CSI-snapshot, image2 could be undergoing a flatten (either due to soft or hard flatten limits), whereas image3 could be mounted and used in the interim.

Yes, but only if at the "soft" limit. If the total chain length is >15, krbd cannot properly handle opening that image with 14 ancestors. If your entire chain is >=15, we need to pause the creation of the new PVC that violates the limit.

Now, we could always just choose to restrict the chain depth upon k8s snapshot creation by just taking the k8s snapshot chain length + 1 (i.e. assume it will get a new child PVC eventually). That would allow you to use the "snapshot creation pending / uploading" status return while it's being flattened and therefore prevent its use via a new PVC from snapshot operation.
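
A sketch of that approach (names and the exact cut-off are assumptions drawn from the numbers above): budget one extra level at CreateSnapshot time so a later PVC-from-snapshot clone cannot push the chain past what krbd can open, and keep the snapshot in the "not ready" state while the flatten runs.

```
// hardMaxChainLength is illustrative only; per the discussion above, krbd
// cannot open an image once the entire chain reaches 15 images.
const hardMaxChainLength = 14

// snapshotNeedsFlatten is a hypothetical CreateSnapshot-time check: assume
// the snapshot will eventually get a child PVC (chain length + 1) and start
// flattening now, reporting the snapshot as not ready until it completes.
func snapshotNeedsFlatten(chainLength uint) bool {
	return chainLength+1 >= hardMaxChainLength
}
```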

@dillaman

dillaman commented Jun 5, 2020

@ShyamsundarR I went down the rabbit hole on this one this afternoon. In order to support thin-provisioned clones and PVCs from snapshots, we require kernel v5.1 or later (RHEL 8.2 or later). Any platform that attempts to use this functionality on older kernels (I won't name names because it's very silly that they are still using such an old OS), will need to fully flatten the clone/PVC from snapshot prior to giving access to the end-user. That means we will need to support flatten checks upon PVC creation (i.e. max image chain length is 1 -- no parents are supported).

@ShyamsundarR
Contributor

@ShyamsundarR I went down the rabbit hole on this one this afternoon. In order to support thin-provisioned clones and PVCs from snapshots, we require kernel v5.1 or later (RHEL 8.2 or later). Any platform that attempts to use this functionality on older kernels (I won't name names because it's very silly that they are still using such an old OS), will need to fully flatten the clone/PVC from snapshot prior to giving access to the end-user. That means we will need to support flatten checks upon PVC creation (i.e. max image chain length is 1 -- no parents are supported).

Here is what I understand the CSI must do:

  • Add the image feature deep-flatten to RBD images representing CSI-snapshots (this includes the intermediate snapshot image created for a CSI-clone from a CSI-volume)
    • These intermediate images are never mapped, and hence provide enough exclusivity during flatten to ensure RBD snapshots of these images are also flattened
  • Set (or document) a hard limit of 1 for depth, for setups using kernel versions older than v5.1
    • This should prevent any change in the implementation, and help with setups having older kernel versions
  • For kernel versions at or newer than v5.1, hard/soft limits can be set higher

Thoughts/Questions:

  • It is technically feasible to use rbd-nbd where kernel versions are older than v5.1, to leverage higher hard/soft limits
    • Barring the pod eviction requirements currently needed when updating the nodeplugin pods

@ShyamsundarR
Contributor

@ShyamsundarR I went down the rabbit hole on this one this afternoon. In order to support thin-provisioned clones and PVCs from snapshots, we require kernel v5.1 or later (RHEL 8.2 or later). Any platform that attempts to use this functionality on older kernels (I won't name names because it's very silly that they are still using such an old OS), will need to fully flatten the clone/PVC from snapshot prior to giving access to the end-user. That means we will need to support flatten checks upon PVC creation (i.e. max image chain length is 1 -- no parents are supported).

Here is what I understand the CSI must do:

  • Add the image feature deep-flatten to RBD images representing CSI-snapshots (this includes the intermediate snapshot image created for a CSI-clone from a CSI-volume)
    • These intermediate images are never mapped, and hence provide enough exclusivity during flatten to ensure RBD snapshots of these images are also flattened
  • Set (or document) a hard limit of 1 for depth, for setups using kernel versions older than v5.1
    • When the hard limit is 1, do not flatten the intermediate images that represent CSI-snapshots; this avoids one round of flattening (and reduces space consumption) and keeps the CSI-snapshot operation relatively quick. NOTE: intermediate images will still be flattened when the per-image hard/soft snapshot limits are reached, though (Limits for RBD create PVC from snapshot #1098).
    • This is an optimization for older kernels, as discussed with @Madhu-1 and as pointed out by @dillaman above ("will need to fully flatten the clone/PVC from snapshot prior to giving access to the end-user"), but stating it more explicitly here
    • This should prevent any change in the implementation, and help with setups having older kernel versions
  • For kernel versions at or newer than v5.1, hard/soft limits can be set higher

Thoughts/Questions:

  • It is technically feasible to use rbd-nbd where kernel versions are older than v5.1, to leverage higher hard/soft limits
    • Barring the pod eviction requirements currently needed when updating the nodeplugin pods

@dillaman

dillaman commented Jun 8, 2020

When hard limit is 1, do not flatten intermediate images that represent CSI-Snapshots, this reduces one round of flattening (reduces space consumption as well) and keeps CSI-snapshot operation relatively quick. NOTE: Intermediate images will be flattened when per image hard/soft snapshot limits are reached though #1098 .

... or just have a node kernel capability flag in the RBD controller saying deep-flatten is supported or not. If it's not supported, always flatten the PVC image.
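
A hedged sketch of such a capability flag (the version cut-off follows the kernel v5.1 requirement mentioned earlier; the function is hypothetical):

```
// kernelSupportsDeepFlatten reports whether krbd on the node can map images
// that still carry the deep-flatten feature; per the discussion above this
// needs kernel v5.1 or later. If it returns false, the controller would
// always flatten the PVC image before handing it to the end-user.
func kernelSupportsDeepFlatten(major, minor int) bool {
	return major > 5 || (major == 5 && minor >= 1)
}
```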

Member

@nixpanic nixpanic left a comment


Not sure what reviews were still pending, GitHub suggests I did not post the comments yet?

Will spend some more time on reviewing today.

@@ -1,12 +1,11 @@
---
# need for docker build
sudo: true
dist: xenial
dist: bionic
Member


Hmm, this is already done? Does the branch need a rebase on master?

e2e/rbd.go Outdated
@@ -91,19 +91,31 @@ var _ = Describe("RBD", func() {
deleteConfigMap(rbdDirPath)
deleteResource(rbdExamplePath + "secret.yaml")
deleteResource(rbdExamplePath + "storageclass.yaml")
<<<<<<< HEAD
Member


Definitely not correct.

Please explain in the commit message why the revert is needed. No need to mention that it is a revert in the subject, just point to the commit in the message.

cloneRbd := generateVolFromSnap(rbdSnap)
// add image feature for cloneRbd
cloneRbd.ImageFormat = rbdImageFormat2
cloneRbd.ImageFeatures = "layering"
Member


You can use the constant rbd.FeatureNameLayering for this when you move to go-ceph v0.3 or newer.

// generate cloned volume details from snapshot
cloneRbd := generateVolFromSnap(rbdSnap)
// add image feature for cloneRbd
cloneRbd.ImageFormat = rbdImageFormat2
Member


imageFormat2 is the default, I think

if err != nil {
if _, ok := err.(ErrImageNotFound); !ok {
klog.Errorf(util.Log(ctx, "failed to delete rbd image: %s/%s with error: %v"), rbdVol.Pool, rbdVol.VolName, err)
return nil, status.Error(codes.Internal, err.Error())
Member


I think that deleting an image should succeed in case the image does not exist? Check the CSI spec for that.

// it asynchronously. If command is not found returns a bool set to false
func rbdManagerTaskDeleteImage(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) (bool, error) {
// rbdManagerTaskTrashRemove adds a ceph manager task to remove image from trash, thus removing it asynchronously. If command is not found returns a bool set to false
func rbdManagerTaskTrashRemove(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) (bool, error) {
Member


Changes related to deleting images could be their own commit, reducing the noise in the main change.


klog.V(4).Infof(util.Log(ctx, "rbd: trash mv %s using mon %s, pool %s"), image, pOpts.Monitors, pOpts.Pool)

args := []string{"trash", "mv", image, "--pool", pOpts.Pool, "--id", cr.ID, "-m", pOpts.Monitors,
Member


I think you can use go-ceph for this, like

img := pOpts.open()
img.Trash()
img.Close()
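
Slightly expanded, a go-ceph based version might look like this (a sketch; it assumes go-ceph's GetImage and Image.Trash(delay) APIs and an already-connected rados.IOContext):

```
import (
	"github.com/ceph/go-ceph/rados"
	librbd "github.com/ceph/go-ceph/rbd"
)

// trashImage moves an RBD image to the trash via go-ceph instead of
// shelling out to the rbd CLI. GetImage only binds the name to the ioctx
// (no open/watch), and a zero delay makes the image purgeable immediately.
func trashImage(ioctx *rados.IOContext, imageName string) error {
	img := librbd.GetImage(ioctx, imageName)
	return img.Trash(0)
}
```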

@Madhu-1
Collaborator Author

Madhu-1 commented Jun 16, 2020

This will be closed by #1160; keeping this open till #1160 is merged.


Successfully merging this pull request may close these issues.

  • Add support for smart cloning in rbd
  • dump cephcsi container logs in E2E