Stale volumesnapshot and volumesnapshotcontents created for volumegroupsnapshot #1050

Open · Madhu-1 opened this issue Apr 2, 2024 · 22 comments
Labels: lifecycle/stale (denotes an issue or PR that has remained open with no activity and has become stale)

Comments

Madhu-1 (Contributor) commented Apr 2, 2024

for _, snapshot := range snapshots {
    volumeSnapshotContentName := GetSnapshotContentNameForVolumeGroupSnapshotContent(string(groupSnapshotContent.UID), snapshot.SourceVolumeId)
    volumeSnapshotName := GetSnapshotNameForVolumeGroupSnapshotContent(string(groupSnapshotContent.UID), snapshot.SourceVolumeId)
    volumeSnapshotNamespace := groupSnapshotContent.Spec.VolumeGroupSnapshotRef.Namespace
    volumeSnapshotContent := &crdv1.VolumeSnapshotContent{
        ObjectMeta: metav1.ObjectMeta{
            Name: volumeSnapshotContentName,
        },
        Spec: crdv1.VolumeSnapshotContentSpec{
            VolumeSnapshotRef: v1.ObjectReference{
                Kind:      "VolumeSnapshots",
                Name:      volumeSnapshotName,
                Namespace: volumeSnapshotNamespace,
            },
            DeletionPolicy: groupSnapshotContent.Spec.DeletionPolicy,
            Driver:         groupSnapshotContent.Spec.Driver,
            Source: crdv1.VolumeSnapshotContentSource{
                SnapshotHandle: &snapshot.SnapshotId,
            },
            // TODO: Populate this field when volume mode conversion is enabled by default
            SourceVolumeMode: nil,
        },
        Status: &crdv1.VolumeSnapshotContentStatus{
            VolumeGroupSnapshotHandle: &groupSnapshotContent.Name,
        },
    }
    if groupSnapshotSecret != nil {
        klog.V(5).Infof("createGroupSnapshotContent: set annotation [%s] on volume snapshot content [%s].", utils.AnnDeletionSecretRefName, volumeSnapshotContent.Name)
        metav1.SetMetaDataAnnotation(&volumeSnapshotContent.ObjectMeta, utils.AnnDeletionSecretRefName, groupSnapshotSecret.Name)
        klog.V(5).Infof("createGroupSnapshotContent: set annotation [%s] on volume snapshot content [%s].", utils.AnnDeletionSecretRefNamespace, volumeSnapshotContent.Name)
        metav1.SetMetaDataAnnotation(&volumeSnapshotContent.ObjectMeta, utils.AnnDeletionSecretRefNamespace, groupSnapshotSecret.Namespace)
    }
    label := make(map[string]string)
    label["volumeGroupSnapshotName"] = groupSnapshotContent.Spec.VolumeGroupSnapshotRef.Name
    volumeSnapshot := &crdv1.VolumeSnapshot{
        ObjectMeta: metav1.ObjectMeta{
            Name:       volumeSnapshotName,
            Namespace:  volumeSnapshotNamespace,
            Labels:     label,
            Finalizers: []string{utils.VolumeSnapshotInGroupFinalizer},
        },
        Spec: crdv1.VolumeSnapshotSpec{
            Source: crdv1.VolumeSnapshotSource{
                VolumeSnapshotContentName: &volumeSnapshotContentName,
            },
        },
    }
    vsc, err := ctrl.clientset.SnapshotV1().VolumeSnapshotContents().Create(context.TODO(), volumeSnapshotContent, metav1.CreateOptions{})
    if err != nil {
        return groupSnapshotContent, err
    }
    snapshotContentNames = append(snapshotContentNames, vsc.Name)
    _, err = ctrl.clientset.SnapshotV1().VolumeSnapshots(volumeSnapshotNamespace).Create(context.TODO(), volumeSnapshot, metav1.CreateOptions{})
    if err != nil {
        return groupSnapshotContent, err
    }
    _, err = ctrl.updateSnapshotContentStatus(volumeSnapshotContent, snapshot.SnapshotId, snapshot.ReadyToUse, snapshot.CreationTime.AsTime().UnixNano(), snapshot.SizeBytes, groupSnapshotID)
    if err != nil {
        return groupSnapshotContent, err
    }
}

The above code seems to be buggy: I got around 200 volumesnapshots/volumesnapshotcontents created for a single volumegroupsnapshot with a single PVC.

It works as follows:

  1. Create volumegroupsnapshot with csi driver
  2. Get the response and generate VS and VSC name based on the timestamp
  3. Create the volumesnapshotcontent
  4. Create the volumesnapshot
  5. Update volumesnapshotcontent

Steps 3, 4, and 5 are repeated for all the snapshots in the volumegroupsnapshot RPC response, and finally the volumegroupsnapshotcontent status is updated with the VSC names.

If anything fails in steps 3, 4, or 5, or if the csi-snapshotter is restarted, we might end up with stale resources that need to be garbage collected.
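
To illustrate the retry problem (this sketch uses the name helper quoted in a comment below; the inputs are placeholders): because the generated names embed time.Now() at second granularity, two attempts with identical inputs produce different names, so a retry of steps 3, 4, and 5 creates a brand-new pair of objects instead of finding the earlier ones.

uid := string(groupSnapshotContent.UID)
volumeID := snapshot.SourceVolumeId
name1 := GetSnapshotNameForVolumeGroupSnapshotContent(uid, volumeID) // e.g. snapshot-<sha256>-2024-04-10-9.12.5
time.Sleep(2 * time.Second)                                          // a retry happens some time later
name2 := GetSnapshotNameForVolumeGroupSnapshotContent(uid, volumeID) // e.g. snapshot-<sha256>-2024-04-10-9.12.7
fmt.Println(name1 == name2) // false: the retry can never find the earlier objects by name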

cc @xing-yang @RaunakShah @nixpanic

nixpanic (Member) commented Apr 3, 2024

I guess that an error like AlreadyExists on a Create() call should be handled as success (of a previous try), and if needed a Get() might be done instead. That is probably easier than garbage collecting created objects.
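
A minimal sketch of that against the loop quoted above (assuming the IsAlreadyExists helper from k8s.io/apimachinery/pkg/api/errors, imported as apierrors; illustrative, not a tested patch):

vsc, err := ctrl.clientset.SnapshotV1().VolumeSnapshotContents().Create(context.TODO(), volumeSnapshotContent, metav1.CreateOptions{})
if apierrors.IsAlreadyExists(err) {
    // A previous attempt already created this object; fetch it and carry on.
    vsc, err = ctrl.clientset.SnapshotV1().VolumeSnapshotContents().Get(context.TODO(), volumeSnapshotContentName, metav1.GetOptions{})
}
if err != nil {
    return groupSnapshotContent, err
}
snapshotContentNames = append(snapshotContentNames, vsc.Name)

Note that this only helps if the generated names are stable across retries, which is exactly the problem discussed next.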

Madhu-1 (Contributor, Author) commented Apr 3, 2024

@nixpanic

// GetSnapshotNameForVolumeGroupSnapshotContent returns a unique snapshot name for the
// passed in VolumeGroupSnapshotContent.
func GetSnapshotNameForVolumeGroupSnapshotContent(groupSnapshotContentUUID, pvUUID string) string {
    return fmt.Sprintf("snapshot-%x-%s", sha256.Sum256([]byte(groupSnapshotContentUUID+pvUUID)), time.Now().Format("2006-01-02-3.4.5"))
}

// GetSnapshotContentNameForVolumeGroupSnapshotContent returns a unique content name for the
// passed in VolumeGroupSnapshotContent.
func GetSnapshotContentNameForVolumeGroupSnapshotContent(groupSnapshotContentUUID, pvUUID string) string {
    return fmt.Sprintf("snapcontent-%x-%s", sha256.Sum256([]byte(groupSnapshotContentUUID+pvUUID)), time.Now().Format("2006-01-02-3.4.5"))
}
Here the names are generated based on the timestamp, so they will not remain the same across retries; we need to generate deterministic names, or we need to keep track of the generated names.

nixpanic (Member) commented Apr 3, 2024

Yes, tracking the timestamp of the initial request is required for this. All objects can use the same timestamp, possibly stored and retrieved in an annotation of the groupSnapshotContent object.
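
A rough sketch of that idea (the annotation key and helper are hypothetical, not existing external-snapshotter code):

// annNameTimestamp is a hypothetical annotation key used to pin the timestamp
// that all generated VS/VSC names for this group snapshot should share.
const annNameTimestamp = "groupsnapshot.storage.kubernetes.io/name-timestamp"

// getOrSetNameTimestamp returns the pinned timestamp, recording time.Now() on
// the first attempt so that every retry generates the same names. The caller
// must persist the annotation to the API server before creating the
// per-volume objects.
func getOrSetNameTimestamp(content *crdv1alpha1.VolumeGroupSnapshotContent) string {
    if ts, ok := content.Annotations[annNameTimestamp]; ok {
        return ts
    }
    ts := time.Now().Format("2006-01-02-3.4.5")
    metav1.SetMetaDataAnnotation(&content.ObjectMeta, annNameTimestamp, ts)
    return ts
}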

Madhu-1 (Contributor, Author) commented Apr 3, 2024

How about generating the SHA of groupSnapshotContentUUID+pvUUID+snapshotID and removing the timestamp?

func GetSnapshotNameForVolumeGroupSnapshotContent(groupSnapshotContentUUID, pvUUID, snapshotID string) string {
    return fmt.Sprintf("snapshot-%x", sha256.Sum256([]byte(groupSnapshotContentUUID+pvUUID+snapshotID)))
}

// GetSnapshotContentNameForVolumeGroupSnapshotContent returns a unique content name for the
// passed in VolumeGroupSnapshotContent.
func GetSnapshotContentNameForVolumeGroupSnapshotContent(groupSnapshotContentUUID, pvUUID, snapshotID string) string {
    return fmt.Sprintf("snapcontent-%x", sha256.Sum256([]byte(groupSnapshotContentUUID+pvUUID+snapshotID)))
}
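
With the timestamp gone, both helpers become pure functions of their inputs, so a retry of steps 3, 4, and 5 regenerates exactly the same names, and an AlreadyExists error on Create() could then safely be treated as success.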

There could be a reason why the timestamp was chosen, but I am not sure what it is.

xing-yang (Collaborator) commented:

We use a timestamp because that's usually how people check at which point in time they want to restore the data. Also, you could create multiple snapshots for the same groupSnapshotContentUUID+pvUUID+snapshotID combination.

xing-yang (Collaborator) commented:

I think we need to do cleanup if steps 3, 4, or 5 fail.
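
Something like the following could work as a best-effort sketch (the helper and its receiver are hypothetical; the client calls mirror the loop quoted at the top of the issue):

// cleanupMemberObjects is a hypothetical helper: if step 4 or 5 fails, delete
// whatever steps 3 and 4 created for this group member so that no stale
// VolumeSnapshot/VolumeSnapshotContent is left behind. Note that the
// VolumeSnapshotInGroupFinalizer set in step 4 would also have to be removed
// for the VolumeSnapshot deletion to complete.
func (ctrl *csiSnapshotSideCarController) cleanupMemberObjects(vscName, vsName, vsNamespace string) {
    err := ctrl.clientset.SnapshotV1().VolumeSnapshots(vsNamespace).Delete(context.TODO(), vsName, metav1.DeleteOptions{})
    if err != nil && !apierrors.IsNotFound(err) {
        klog.Errorf("cleanup: failed to delete VolumeSnapshot %s/%s: %v", vsNamespace, vsName, err)
    }
    err = ctrl.clientset.SnapshotV1().VolumeSnapshotContents().Delete(context.TODO(), vscName, metav1.DeleteOptions{})
    if err != nil && !apierrors.IsNotFound(err) {
        klog.Errorf("cleanup: failed to delete VolumeSnapshotContent %s: %v", vscName, err)
    }
}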

xing-yang (Collaborator) commented:

@Madhu-1 Are the failures you encountered coming from the VolumeSnapshot/VolumeSnapshotContent API object creation/update, or from the creation of the snapshot resource on the underlying storage system?

Madhu-1 (Contributor, Author) commented Apr 3, 2024

> We use a timestamp because that's usually how people check at which point in time they want to restore the data. Also, you could create multiple snapshots for the same groupSnapshotContentUUID+pvUUID+snapshotID combination.

@xing-yang Should users use the creationTime field on the volumesnapshotcontent to restore the data instead? The timestamp in the generated name represents when the volumesnapshot object was created, not when the snapshot was created in the storage backend.

Madhu-1 (Contributor, Author) commented Apr 3, 2024

> @Madhu-1 Are the failures you encountered coming from the VolumeSnapshot/VolumeSnapshotContent API object creation/update, or from the creation of the snapshot resource on the underlying storage system?

I think it was due to the VolumeSnapshotContent update (I will run the tests again and update the issue with logs), but that is only one source of the problem; I was also thinking about the case where the sidecar is restarted, which can also lead to stale resources.

Madhu-1 (Contributor, Author) commented Apr 10, 2024

@xing-yang here are the logs from the csi-snapshotter:

I0410 09:11:15.060484       1 connection.go:252] GRPC error: <nil>
I0410 09:11:15.060508       1 main.go:226] Start NewCSISnapshotSideCarController with snapshotter [rook-ceph.cephfs.csi.ceph.com] kubeconfig [] csiTimeout [2m30s] csiAddress [unix:///csi/csi-provisioner.sock] resyncPeriod [15m0s] snapshotNamePrefix [snapshot] snapshotNameUUIDLength [824636560368]
I0410 09:11:15.060573       1 connection.go:244] GRPC call: /csi.v1.GroupController/GroupControllerGetCapabilities
I0410 09:11:15.060582       1 connection.go:245] GRPC request: {}
I0410 09:11:15.061465       1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0410 09:11:15.061483       1 connection.go:252] GRPC error: <nil>
I0410 09:11:15.063045       1 snapshot_controller_base.go:169] Starting CSI snapshotter
I0410 09:11:15.063804       1 reflector.go:289] Starting reflector *v1alpha1.VolumeGroupSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063803       1 reflector.go:289] Starting reflector *v1.VolumeSnapshotContent (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063851       1 reflector.go:325] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063829       1 reflector.go:325] Listing and watching *v1alpha1.VolumeGroupSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063803       1 reflector.go:289] Starting reflector *v1alpha1.VolumeGroupSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063915       1 reflector.go:325] Listing and watching *v1alpha1.VolumeGroupSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063804       1 reflector.go:289] Starting reflector *v1.VolumeSnapshotClass (15m0s) from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.063994       1 reflector.go:325] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.076190       1 reflector.go:351] Caches populated for *v1alpha1.VolumeGroupSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.076182       1 reflector.go:351] Caches populated for *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.076182       1 reflector.go:351] Caches populated for *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.076182       1 reflector.go:351] Caches populated for *v1alpha1.VolumeGroupSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v7/informers/externalversions/factory.go:142
I0410 09:11:15.163923       1 snapshot_controller_base.go:402] controller initialized
I0410 09:12:04.906973       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:04.907071       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:04.907260       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:04.907493       1 util.go:245] storeObjectUpdate: adding groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", version 153838
I0410 09:12:04.907527       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:04.907893       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:04.908015       1 groupsnapshot_helper.go:186] syncGroupSnapshotContent: Call CreateGroupSnapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:04.908135       1 groupsnapshot_helper.go:356] createGroupSnapshot for group snapshot content [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: started
I0410 09:12:04.908258       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:04.908287       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:04.908297       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:04.916046       1 groupsnapshot_helper.go:571] setAnnVolumeGroupSnapshotBeingCreated: set annotation [groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] on groupSnapshotContent [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c].
I0410 09:12:04.926552       1 util.go:245] storeObjectUpdate: adding content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", version 153842
I0410 09:12:04.926668       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:04.926579       1 groupsnapshot_helper.go:596] setAnnVolumeGroupSnapshotBeingCreated: volume group snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c GenerateName: Namespace: SelfLink: UID:bc7b18e0-fc63-4d22-bc1b-a9014984da14 ResourceVersion:153842 Generation:1 CreationTimestamp:2024-04-10 09:12:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[groupsnapshot.storage.kubernetes.io/deletion-secret-name:rook-csi-cephfs-provisioner groupsnapshot.storage.kubernetes.io/deletion-secret-namespace:rook-ceph groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] OwnerReferences:[] Finalizers:[groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection] ManagedFields:[{Manager:csi-snapshotter Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{"f:groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created":{}}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-name":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-namespace":{}},"f:finalizers":{".":{},"v:\"groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandles":{}},"f:volumeGroupSnapshotClassName":{},"f:volumeGroupSnapshotRef":{}}} Subresource:}]} Spec:{VolumeGroupSnapshotRef:{Kind:VolumeGroupSnapshot Namespace:default Name:my-group-snapshot-1 UID:aecb80a8-6c51-41a9-81e2-0f623ddefb6c APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 ResourceVersion:153836 FieldPath:} DeletionPolicy:Delete Driver:rook-ceph.cephfs.csi.ceph.com VolumeGroupSnapshotClassName:0xc0003a6300 Source:{VolumeHandles:[0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe] GroupSnapshotHandles:<nil>}} Status:<nil>}
I0410 09:12:04.927474       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:04.928492       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:04.928518       1 connection.go:245] GRPC request: {}
I0410 09:12:04.938698       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:04.938772       1 connection.go:252] GRPC error: <nil>
I0410 09:12:04.938787       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:04.938796       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:05.073158       1 connection.go:251] GRPC response: {}
I0410 09:12:05.073178       1 connection.go:252] GRPC error: rpc error: code = Internal desc = Quiesce operation is in progress
I0410 09:12:05.073197       1 groupsnapshot_helper.go:409] createGroupSnapshotWrapper: CreateGroupSnapshot for groupSnapshotContent groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c returned error: rpc error: code = Internal desc = Quiesce operation is in progress
I0410 09:12:05.084179       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.084179       1 groupsnapshot_helper.go:622] Removed VolumeGroupSnapshotBeingCreated annotation from volume group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.084232       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153843
I0410 09:12:05.084258       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.092815       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.093072       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153844
E0410 09:12:05.093103       1 groupsnapshot_helper.go:360] createGroupSnapshot for groupSnapshotContent [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred in createGroupSnapshotWrapper: failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = Quiesce operation is in progress"
E0410 09:12:05.093303       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = Quiesce operation is in progress"
I0410 09:12:05.093347       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = Quiesce operation is in progress"
I0410 09:12:05.093429       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.093451       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153844
I0410 09:12:05.093462       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.093513       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:05.093525       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:05.093533       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:05.093538       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.093543       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:05.093650       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153844", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotCreationFailed' Failed to create group snapshot: failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = Quiesce operation is in progress"
I0410 09:12:05.095897       1 groupsnapshot_helper.go:571] setAnnVolumeGroupSnapshotBeingCreated: set annotation [groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] on groupSnapshotContent [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c].
I0410 09:12:05.108797       1 util.go:273] storeObjectUpdate updating content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153846
I0410 09:12:05.108817       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.108834       1 groupsnapshot_helper.go:596] setAnnVolumeGroupSnapshotBeingCreated: volume group snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c GenerateName: Namespace: SelfLink: UID:bc7b18e0-fc63-4d22-bc1b-a9014984da14 ResourceVersion:153846 Generation:1 CreationTimestamp:2024-04-10 09:12:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[groupsnapshot.storage.kubernetes.io/deletion-secret-name:rook-csi-cephfs-provisioner groupsnapshot.storage.kubernetes.io/deletion-secret-namespace:rook-ceph groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] OwnerReferences:[] Finalizers:[groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection] ManagedFields:[{Manager:snapshot-controller Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-name":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-namespace":{}},"f:finalizers":{".":{},"v:\"groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandles":{}},"f:volumeGroupSnapshotClassName":{},"f:volumeGroupSnapshotRef":{}}} Subresource:} {Manager:csi-snapshotter Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:05 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{"f:groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created":{}}}} Subresource:} {Manager:csi-snapshotter Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:05 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{".":{},"f:error":{".":{},"f:message":{},"f:time":{}},"f:readyToUse":{}}} Subresource:status}]} Spec:{VolumeGroupSnapshotRef:{Kind:VolumeGroupSnapshot Namespace:default Name:my-group-snapshot-1 UID:aecb80a8-6c51-41a9-81e2-0f623ddefb6c APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 ResourceVersion:153836 FieldPath:} DeletionPolicy:Delete Driver:rook-ceph.cephfs.csi.ceph.com VolumeGroupSnapshotClassName:0xc000374e10 Source:{VolumeHandles:[0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe] GroupSnapshotHandles:<nil>}} Status:0xc000351180}
I0410 09:12:05.109060       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.109178       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:05.109187       1 connection.go:245] GRPC request: {}
I0410 09:12:05.110006       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:05.110022       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.110030       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:05.110036       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:05.285651       1 connection.go:251] GRPC response: {}
I0410 09:12:05.285671       1 connection.go:252] GRPC error: rpc error: code = Internal desc = quiesce operation is in progress
I0410 09:12:05.285688       1 groupsnapshot_helper.go:409] createGroupSnapshotWrapper: CreateGroupSnapshot for groupSnapshotContent groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c returned error: rpc error: code = Internal desc = quiesce operation is in progress
I0410 09:12:05.293419       1 groupsnapshot_helper.go:622] Removed VolumeGroupSnapshotBeingCreated annotation from volume group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.293446       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153847
I0410 09:12:05.293481       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.293531       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.304093       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.304100       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153848
E0410 09:12:05.304119       1 groupsnapshot_helper.go:782] checkandUpdateGroupSnapshotContentStatus [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = quiesce operation is in progress"
E0410 09:12:05.304141       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = quiesce operation is in progress"
I0410 09:12:05.304131       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153848", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotContentCheckandUpdateFailed' Failed to check and update group snapshot content: failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = quiesce operation is in progress"
I0410 09:12:05.304162       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: failed to take group snapshot of the volumes [0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe]: "rpc error: code = Internal desc = quiesce operation is in progress"
I0410 09:12:05.304208       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.304223       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153848
I0410 09:12:05.304238       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.304248       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:05.304257       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:05.304264       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:05.304271       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.304275       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:05.307789       1 groupsnapshot_helper.go:571] setAnnVolumeGroupSnapshotBeingCreated: set annotation [groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] on groupSnapshotContent [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c].
I0410 09:12:05.323290       1 util.go:273] storeObjectUpdate updating content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153850
I0410 09:12:05.323337       1 groupsnapshot_helper.go:596] setAnnVolumeGroupSnapshotBeingCreated: volume group snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c GenerateName: Namespace: SelfLink: UID:bc7b18e0-fc63-4d22-bc1b-a9014984da14 ResourceVersion:153850 Generation:1 CreationTimestamp:2024-04-10 09:12:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[groupsnapshot.storage.kubernetes.io/deletion-secret-name:rook-csi-cephfs-provisioner groupsnapshot.storage.kubernetes.io/deletion-secret-namespace:rook-ceph groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created:yes] OwnerReferences:[] Finalizers:[groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection] ManagedFields:[{Manager:snapshot-controller Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-name":{},"f:groupsnapshot.storage.kubernetes.io/deletion-secret-namespace":{}},"f:finalizers":{".":{},"v:\"groupsnapshot.storage.kubernetes.io/volumegroupsnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandles":{}},"f:volumeGroupSnapshotClassName":{},"f:volumeGroupSnapshotRef":{}}} Subresource:} {Manager:csi-snapshotter Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:05 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{"f:groupsnapshot.storage.kubernetes.io/volumegroupsnapshot-being-created":{}}}} Subresource:} {Manager:csi-snapshotter Operation:Update APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 Time:2024-04-10 09:12:05 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{".":{},"f:error":{".":{},"f:message":{},"f:time":{}},"f:readyToUse":{}}} Subresource:status}]} Spec:{VolumeGroupSnapshotRef:{Kind:VolumeGroupSnapshot Namespace:default Name:my-group-snapshot-1 UID:aecb80a8-6c51-41a9-81e2-0f623ddefb6c APIVersion:groupsnapshot.storage.k8s.io/v1alpha1 ResourceVersion:153836 FieldPath:} DeletionPolicy:Delete Driver:rook-ceph.cephfs.csi.ceph.com VolumeGroupSnapshotClassName:0xc00059a7c0 Source:{VolumeHandles:[0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373 0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe] GroupSnapshotHandles:<nil>}} Status:0xc000644680}
I0410 09:12:05.323512       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.323531       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:05.323541       1 connection.go:245] GRPC request: {}
I0410 09:12:05.323808       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.324670       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:05.324750       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.324776       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:05.324795       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:05.407085       1 connection.go:251] GRPC response: {"group_snapshot":{"creation_time":{"nanos":404368730,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"snapshots":[{"creation_time":{"nanos":256400000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f","source_volume_id":"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"},{"creation_time":{"nanos":185273000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37","source_volume_id":"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373"}]}}
I0410 09:12:05.407123       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.407138       1 group_snapshotter.go:72] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c driver name [rook-ceph.cephfs.csi.ceph.com] group snapshot ID [0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130] time stamp [seconds:1712740325 nanos:404368730] snapshots [[size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f" source_volume_id:"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe" creation_time:<seconds:1712740325 nanos:256400000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130"  size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37" source_volume_id:"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373" creation_time:<seconds:1712740325 nanos:185273000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130" ]] readyToUse [true]
I0410 09:12:05.408066       1 groupsnapshot_helper.go:420] Created group snapshot: driver rook-ceph.cephfs.csi.ceph.com, groupSnapshotId 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130, creationTime 2024-04-10 09:12:05.40436873 +0000 UTC, readyToUse true
I0410 09:12:05.409533       1 groupsnapshot_helper.go:461] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-name] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5].
I0410 09:12:05.409623       1 groupsnapshot_helper.go:464] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-namespace] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5].
I0410 09:12:05.423536       1 snapshot_controller_base.go:206] enqueued "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" for sync
I0410 09:12:05.423656       1 snapshot_controller_base.go:247] syncContentByKey[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5]
I0410 09:12:05.423690       1 util.go:245] storeObjectUpdate: adding content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5", version 153851
I0410 09:12:05.423714       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5]
I0410 09:12:05.423912       1 snapshot_controller.go:597] Check if VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5] should be deleted.
I0410 09:12:05.423954       1 snapshot_controller.go:137] checkandUpdateContentStatus[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5] started
I0410 09:12:05.424121       1 snapshot_controller.go:259] checkandUpdateContentStatusOperation: call GetSnapshotStatus for snapshot which is pre-bound to content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5]
I0410 09:12:05.424194       1 snapshotter.go:113] GetSnapshotStatus: 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f
I0410 09:12:05.424426       1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0410 09:12:05.424463       1 connection.go:245] GRPC request: {}
I0410 09:12:05.427229       1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":13}}}]}
I0410 09:12:05.427397       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.427446       1 snapshot_controller.go:290] checkandUpdateContentStatusOperation: driver rook-ceph.cephfs.csi.ceph.com, snapshotId 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, creationTime 0001-01-01 00:00:00 +0000 UTC, size 0, readyToUse true, groupSnapshotID 
I0410 09:12:05.427481       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740325427475344, size 0, groupSnapshotID 
I0410 09:12:05.433358       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740325256400000, size 1073741824, groupSnapshotID 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130
I0410 09:12:05.439685       1 util.go:273] storeObjectUpdate updating content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" with version 153853
I0410 09:12:05.517405       1 request.go:629] Waited for 80.03487ms due to client-side throttling, not priority and fairness, request: PUT:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5/status
I0410 09:12:05.525164       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.716706       1 request.go:629] Waited for 191.314658ms due to client-side throttling, not priority and fairness, request: PATCH:https://10.96.0.1:443/apis/groupsnapshot.storage.k8s.io/v1alpha1/volumegroupsnapshotcontents/groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c/status
I0410 09:12:05.732799       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153855
E0410 09:12:05.732851       1 groupsnapshot_helper.go:782] checkandUpdateGroupSnapshotContentStatus [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5": the object has been modified; please apply your changes to the latest version and try again
E0410 09:12:05.732876       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:05.732897       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:05.732929       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.732809       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:05.732948       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153855
I0410 09:12:05.732956       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:05.732968       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:05.732975       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:05.732986       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:05.732993       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.732998       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:05.733328       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153855", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotContentCheckandUpdateFailed' Failed to check and update group snapshot content: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:05.745951       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:05.745982       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:05.745991       1 connection.go:245] GRPC request: {}
I0410 09:12:05.747420       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:05.747441       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.747463       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:05.747509       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:05.909405       1 connection.go:251] GRPC response: {"group_snapshot":{"creation_time":{"nanos":908522518,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"snapshots":[{"creation_time":{"nanos":256400000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f","source_volume_id":"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"},{"creation_time":{"nanos":185273000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37","source_volume_id":"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373"}]}}
I0410 09:12:05.909438       1 connection.go:252] GRPC error: <nil>
I0410 09:12:05.909455       1 group_snapshotter.go:72] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c driver name [rook-ceph.cephfs.csi.ceph.com] group snapshot ID [0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130] time stamp [seconds:1712740325 nanos:908522518] snapshots [[size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f" source_volume_id:"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe" creation_time:<seconds:1712740325 nanos:256400000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130"  size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37" source_volume_id:"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373" creation_time:<seconds:1712740325 nanos:185273000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130" ]] readyToUse [true]
I0410 09:12:05.909596       1 groupsnapshot_helper.go:420] Created group snapshot: driver rook-ceph.cephfs.csi.ceph.com, groupSnapshotId 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130, creationTime 2024-04-10 09:12:05.908522518 +0000 UTC, readyToUse true
I0410 09:12:05.909704       1 groupsnapshot_helper.go:461] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-name] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5].
I0410 09:12:05.909718       1 groupsnapshot_helper.go:464] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-namespace] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5].
I0410 09:12:05.926412       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:06.090512       1 snapshot_controller_base.go:206] enqueued "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" for sync
I0410 09:12:06.090558       1 snapshot_controller_base.go:247] syncContentByKey[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5]
I0410 09:12:06.090569       1 util.go:273] storeObjectUpdate updating content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" with version 153858
I0410 09:12:06.090576       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5]
I0410 09:12:06.090597       1 snapshot_controller.go:597] Check if VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5] should be deleted.
I0410 09:12:06.117376       1 request.go:629] Waited for 190.771419ms due to client-side throttling, not priority and fairness, request: PATCH:https://10.96.0.1:443/apis/groupsnapshot.storage.k8s.io/v1alpha1/volumegroupsnapshotcontents/groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c/status
I0410 09:12:06.129228       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153859
E0410 09:12:06.129255       1 groupsnapshot_helper.go:782] checkandUpdateGroupSnapshotContentStatus [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" already exists
I0410 09:12:06.129261       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
E0410 09:12:06.129268       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" already exists
I0410 09:12:06.129284       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" already exists
I0410 09:12:06.129312       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:06.129319       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153859", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotContentCheckandUpdateFailed' Failed to check and update group snapshot content: volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.5" already exists
I0410 09:12:06.129328       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153859
I0410 09:12:06.129343       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:06.129349       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:06.129355       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:06.129362       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:06.129383       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:06.129389       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:06.136014       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:06.136033       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:06.136040       1 connection.go:245] GRPC request: {}
I0410 09:12:06.136678       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:06.136695       1 connection.go:252] GRPC error: <nil>
I0410 09:12:06.136709       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:06.136715       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:06.220828       1 connection.go:251] GRPC response: {"group_snapshot":{"creation_time":{"nanos":220315713,"seconds":1712740326},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"snapshots":[{"creation_time":{"nanos":256400000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f","source_volume_id":"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"},{"creation_time":{"nanos":185273000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37","source_volume_id":"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373"}]}}
I0410 09:12:06.220847       1 connection.go:252] GRPC error: <nil>
I0410 09:12:06.220862       1 group_snapshotter.go:72] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c driver name [rook-ceph.cephfs.csi.ceph.com] group snapshot ID [0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130] time stamp [seconds:1712740326 nanos:220315713] snapshots [[size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f" source_volume_id:"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe" creation_time:<seconds:1712740325 nanos:256400000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130"  size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37" source_volume_id:"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373" creation_time:<seconds:1712740325 nanos:185273000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130" ]] readyToUse [true]
I0410 09:12:06.220943       1 groupsnapshot_helper.go:420] Created group snapshot: driver rook-ceph.cephfs.csi.ceph.com, groupSnapshotId 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130, creationTime 2024-04-10 09:12:06.220315713 +0000 UTC, readyToUse true
I0410 09:12:06.221032       1 groupsnapshot_helper.go:461] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-name] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6].
I0410 09:12:06.221051       1 groupsnapshot_helper.go:464] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-namespace] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6].
I0410 09:12:06.316589       1 request.go:629] Waited for 95.170684ms due to client-side throttling, not priority and fairness, request: POST:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents
I0410 09:12:06.323666       1 snapshot_controller_base.go:206] enqueued "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6" for sync
I0410 09:12:06.323705       1 snapshot_controller_base.go:247] syncContentByKey[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6]
I0410 09:12:06.323725       1 util.go:245] storeObjectUpdate: adding content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6", version 153861
I0410 09:12:06.323764       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6]
I0410 09:12:06.323794       1 snapshot_controller.go:597] Check if VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6] should be deleted.
I0410 09:12:06.323807       1 snapshot_controller.go:137] checkandUpdateContentStatus[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6] started
I0410 09:12:06.323812       1 snapshot_controller.go:259] checkandUpdateContentStatusOperation: call GetSnapshotStatus for snapshot which is pre-bound to content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6]
I0410 09:12:06.323826       1 snapshotter.go:113] GetSnapshotStatus: 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f
I0410 09:12:06.323842       1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0410 09:12:06.323852       1 connection.go:245] GRPC request: {}
I0410 09:12:06.324646       1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":13}}}]}
I0410 09:12:06.324659       1 connection.go:252] GRPC error: <nil>
I0410 09:12:06.324668       1 snapshot_controller.go:290] checkandUpdateContentStatusOperation: driver rook-ceph.cephfs.csi.ceph.com, snapshotId 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, creationTime 0001-01-01 00:00:00 +0000 UTC, size 0, readyToUse true, groupSnapshotID 
I0410 09:12:06.324681       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740326324678085, size 0, groupSnapshotID 
I0410 09:12:06.516979       1 request.go:629] Waited for 193.316816ms due to client-side throttling, not priority and fairness, request: POST:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots
I0410 09:12:06.524565       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740325256400000, size 1073741824, groupSnapshotID 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130
I0410 09:12:06.717340       1 request.go:629] Waited for 392.608937ms due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6
I0410 09:12:06.917137       1 request.go:629] Waited for 392.473331ms due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6
I0410 09:12:07.117581       1 request.go:629] Waited for 397.590336ms due to client-side throttling, not priority and fairness, request: PUT:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6/status
I0410 09:12:07.128457       1 util.go:273] storeObjectUpdate updating content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6" with version 153867
I0410 09:12:07.317126       1 request.go:629] Waited for 396.526382ms due to client-side throttling, not priority and fairness, request: PUT:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6/status
I0410 09:12:07.320787       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:07.517215       1 request.go:629] Waited for 196.3424ms due to client-side throttling, not priority and fairness, request: PATCH:https://10.96.0.1:443/apis/groupsnapshot.storage.k8s.io/v1alpha1/volumegroupsnapshotcontents/groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c/status
I0410 09:12:07.527527       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
I0410 09:12:07.527522       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153870
E0410 09:12:07.527567       1 groupsnapshot_helper.go:782] checkandUpdateGroupSnapshotContentStatus [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6": the object has been modified; please apply your changes to the latest version and try again
E0410 09:12:07.527586       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:07.527600       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:07.527620       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:07.527608       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153870", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotContentCheckandUpdateFailed' Failed to check and update group snapshot content: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:07.527637       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153870
I0410 09:12:07.527645       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:07.527654       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:07.527659       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:07.527665       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...
I0410 09:12:07.527671       1 groupsnapshot_helper.go:518] getCSIGroupSnapshotInput for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:07.527676       1 groupsnapshot_helper.go:548] getGroupSnapshotClass: VolumeGroupSnapshotClassName [csi-cephfsplugin-groupsnapclass]
I0410 09:12:07.530431       1 group_snapshotter.go:52] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c
I0410 09:12:07.530454       1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
I0410 09:12:07.530460       1 connection.go:245] GRPC request: {}
I0410 09:12:07.531148       1 connection.go:251] GRPC response: {"name":"rook-ceph.cephfs.csi.ceph.com","vendor_version":"canary"}
I0410 09:12:07.531166       1 connection.go:252] GRPC error: <nil>
I0410 09:12:07.531173       1 connection.go:244] GRPC call: /csi.v1.GroupController/CreateVolumeGroupSnapshot
I0410 09:12:07.531178       1 connection.go:245] GRPC request: {"name":"groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","parameters":{"clusterID":"rook-ceph","csi.storage.k8s.io/volumegroupsnapshot/name":"my-group-snapshot-1","csi.storage.k8s.io/volumegroupsnapshot/namespace":"default","csi.storage.k8s.io/volumegroupsnapshotcontent/name":"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c","fsName":"myfs"},"secrets":"***stripped***","source_volume_ids":["0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373","0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"]}
I0410 09:12:07.612851       1 connection.go:251] GRPC response: {"group_snapshot":{"creation_time":{"nanos":612307315,"seconds":1712740327},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"snapshots":[{"creation_time":{"nanos":256400000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f","source_volume_id":"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe"},{"creation_time":{"nanos":185273000,"seconds":1712740325},"group_snapshot_id":"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130","ready_to_use":true,"size_bytes":1073741824,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37","source_volume_id":"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373"}]}}
I0410 09:12:07.612869       1 connection.go:252] GRPC error: <nil>
I0410 09:12:07.612878       1 group_snapshotter.go:72] CSI CreateGroupSnapshot: groupsnapshot-aecb80a8-6c51-41a9-81e2-0f623ddefb6c driver name [rook-ceph.cephfs.csi.ceph.com] group snapshot ID [0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130] time stamp [seconds:1712740327 nanos:612307315] snapshots [[size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f" source_volume_id:"0001-0009-rook-ceph-0000000000000001-48c212fa-a983-483f-92cd-016ebdfae9fe" creation_time:<seconds:1712740325 nanos:256400000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130"  size_bytes:1073741824 snapshot_id:"0001-0009-rook-ceph-0000000000000001-4ff62c50-eaed-4c71-9b7d-677229b9bd37" source_volume_id:"0001-0009-rook-ceph-0000000000000001-e869764e-d890-407b-8ad8-4b6e4ed91373" creation_time:<seconds:1712740325 nanos:185273000 > ready_to_use:true group_snapshot_id:"0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130" ]] readyToUse [true]
I0410 09:12:07.612959       1 groupsnapshot_helper.go:420] Created group snapshot: driver rook-ceph.cephfs.csi.ceph.com, groupSnapshotId 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130, creationTime 2024-04-10 09:12:07.612307315 +0000 UTC, readyToUse true
I0410 09:12:07.613041       1 groupsnapshot_helper.go:461] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-name] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7].
I0410 09:12:07.613059       1 groupsnapshot_helper.go:464] createGroupSnapshotContent: set annotation [snapshot.storage.kubernetes.io/deletion-secret-namespace] on volume snapshot content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7].
I0410 09:12:07.717523       1 request.go:629] Waited for 104.343224ms due to client-side throttling, not priority and fairness, request: POST:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents
I0410 09:12:07.724623       1 snapshot_controller_base.go:206] enqueued "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7" for sync
I0410 09:12:07.724651       1 snapshot_controller_base.go:247] syncContentByKey[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7]
I0410 09:12:07.724664       1 util.go:245] storeObjectUpdate: adding content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7", version 153873
I0410 09:12:07.724833       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7]
I0410 09:12:07.724967       1 snapshot_controller.go:597] Check if VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7] should be deleted.
I0410 09:12:07.724985       1 snapshot_controller.go:137] checkandUpdateContentStatus[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7] started
I0410 09:12:07.724992       1 snapshot_controller.go:259] checkandUpdateContentStatusOperation: call GetSnapshotStatus for snapshot which is pre-bound to content [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7]
I0410 09:12:07.725151       1 snapshotter.go:113] GetSnapshotStatus: 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f
I0410 09:12:07.725175       1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0410 09:12:07.725182       1 connection.go:245] GRPC request: {}
I0410 09:12:07.726067       1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":9}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":13}}}]}
I0410 09:12:07.726086       1 connection.go:252] GRPC error: <nil>
I0410 09:12:07.726110       1 snapshot_controller.go:290] checkandUpdateContentStatusOperation: driver rook-ceph.cephfs.csi.ceph.com, snapshotId 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, creationTime 0001-01-01 00:00:00 +0000 UTC, size 0, readyToUse true, groupSnapshotID 
I0410 09:12:07.726127       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740327726122253, size 0, groupSnapshotID 
I0410 09:12:07.891465       1 snapshot_controller_base.go:206] enqueued "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6" for sync
I0410 09:12:07.891533       1 snapshot_controller_base.go:247] syncContentByKey[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6]
I0410 09:12:07.891547       1 util.go:273] storeObjectUpdate updating content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6" with version 153874
I0410 09:12:07.892200       1 snapshot_controller.go:58] synchronizing VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6]
I0410 09:12:07.892877       1 snapshot_controller.go:597] Check if VolumeSnapshotContent[snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.6] should be deleted.
I0410 09:12:07.917375       1 request.go:629] Waited for 192.632955ms due to client-side throttling, not priority and fairness, request: POST:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots
I0410 09:12:07.925115       1 snapshot_controller.go:435] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7], snapshotHandle 0001-0009-rook-ceph-0000000000000001-b2756afe-a5f6-427b-a5b2-a9850fea241f, readyToUse true, createdAt 1712740325256400000, size 1073741824, groupSnapshotID 0001-0009-rook-ceph-0000000000000001-39b344e5-89af-4f4f-8ee2-6dbbaf7fb130
I0410 09:12:08.117044       1 request.go:629] Waited for 390.858168ms due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7
I0410 09:12:08.317622       1 request.go:629] Waited for 392.373863ms due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7
I0410 09:12:08.517277       1 request.go:629] Waited for 396.634547ms due to client-side throttling, not priority and fairness, request: PUT:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7/status
I0410 09:12:08.523990       1 util.go:273] storeObjectUpdate updating content "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7" with version 153879
I0410 09:12:08.717137       1 request.go:629] Waited for 396.945118ms due to client-side throttling, not priority and fairness, request: PUT:https://10.96.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7/status
I0410 09:12:08.721440       1 groupsnapshot_helper.go:707] updateGroupSnapshotContentErrorStatusWithEvent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:08.916906       1 request.go:629] Waited for 195.350654ms due to client-side throttling, not priority and fairness, request: PATCH:https://10.96.0.1:443/apis/groupsnapshot.storage.k8s.io/v1alpha1/volumegroupsnapshotcontents/groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c/status
I0410 09:12:08.931489       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153881
I0410 09:12:08.931532       1 groupsnapshot_helper.go:53] enqueued "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" for sync
E0410 09:12:08.931541       1 groupsnapshot_helper.go:782] checkandUpdateGroupSnapshotContentStatus [groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]: error occurred snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:08.931555       1 event.go:364] Event(v1.ObjectReference{Kind:"VolumeGroupSnapshotContent", Namespace:"", Name:"groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", UID:"bc7b18e0-fc63-4d22-bc1b-a9014984da14", APIVersion:"groupsnapshot.storage.k8s.io/v1alpha1", ResourceVersion:"153881", FieldPath:""}): type: 'Warning' reason: 'GroupSnapshotContentCheckandUpdateFailed' Failed to check and update group snapshot content: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7": the object has been modified; please apply your changes to the latest version and try again
E0410 09:12:08.931559       1 groupsnapshot_helper.go:149] could not sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c": snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:08.931653       1 groupsnapshot_helper.go:71] Failed to sync group snapshot content "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c", will retry again: snapshot controller failed to update snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7 on API server: Operation cannot be fulfilled on volumesnapshotcontents.snapshot.storage.k8s.io "snapcontent-c16610e4567816237a8dee3c4069d7c52738451106d0f90bd2891cbf9694dc0a-2024-04-10-9.12.7": the object has been modified; please apply your changes to the latest version and try again
I0410 09:12:08.931687       1 groupsnapshot_helper.go:82] syncGroupSnapshotContentByKey[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:08.931704       1 util.go:273] storeObjectUpdate updating groupsnapshotcontent "groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c" with version 153881
I0410 09:12:08.931713       1 groupsnapshot_helper.go:165] synchronizing VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c]
I0410 09:12:08.931719       1 groupsnapshot_helper.go:327] Check if VolumeGroupSnapshotContent[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] should be deleted.
I0410 09:12:08.931728       1 groupsnapshot_helper.go:778] checkandUpdateGroupSnapshotContentStatus[groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c] started
I0410 09:12:08.931734       1 groupsnapshot_helper.go:375] createGroupSnapshotWrapper: Creating group snapshot for group snapshot content groupsnapcontent-aecb80a8-6c51-41a9-81e2-0f623ddefb6c through the plugin ...

@nixpanic
Member

@Madhu-1 is it possible for the CephFS implementation to not return an error like this:

I0410 09:12:05.073178       1 connection.go:252] GRPC error: rpc error: code = Internal desc = Quiesce operation is in progress

But instead return a VolumeGroupSnapshot that has readyToUse: false until quiesce and snapshot creation is complete?

@Madhu-1
Contributor Author

Madhu-1 commented Apr 12, 2024

@Madhu-1 is it possible for the CephFS implementation to not return an error like this:

I0410 09:12:05.073178       1 connection.go:252] GRPC error: rpc error: code = Internal desc = Quiesce operation is in progress

But instead return a VolumeGroupSnapshot that has readyToUse: false until quiesce and snapshot creation is complete?

@nixpanic I think it will contain only the groupID (it can change if something goes wrong and a rollback happens). To avoid corner cases, and since no snapshot operations have technically started yet, I prefer to return an Internal error until the quiesce is completed.

@nixpanic
Member

Ok, I was also not sure if the group/snapshot IDs would be available to return early.

Other operations return Aborted when an operation is still in progress. The CSI spec is not very clear on using Aborted for this procedure, but I think it is nicer to use that than Internal. We should probably mention Aborted explicitly in the CSI specification (like is done for CreateVolume, CreateSnapshot and so on).

@nixpanic
Member

Ok, I was also not sure if the group/snapshot IDs would be available to return early.

Other operations return Aborted when an operation is still in progress. The CSI spec is not very clear on using Aborted for this procedure, but I think it is nicer to use that than Internal. We should probably mention Aborted explicitly in the CSI specification (like is done for CreateVolume, CreateSnapshot and so on).

Created a PR for that: container-storage-interface/spec#563
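
For illustration, returning Aborted from the driver side could look roughly like the sketch below. This is not the actual Ceph-CSI code; the server type and the quiesceInProgress helper are hypothetical stand-ins for the driver's real quiesce bookkeeping.

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// groupControllerServer is a hypothetical stand-in for the driver's real
// GroupController implementation.
type groupControllerServer struct {
	csi.UnimplementedGroupControllerServer
}

// quiesceInProgress is a hypothetical helper standing in for the driver's
// actual quiesce state tracking.
func (s *groupControllerServer) quiesceInProgress(name string) bool { return false }

func (s *groupControllerServer) CreateVolumeGroupSnapshot(
	ctx context.Context, req *csi.CreateVolumeGroupSnapshotRequest,
) (*csi.CreateVolumeGroupSnapshotResponse, error) {
	if s.quiesceInProgress(req.GetName()) {
		// Aborted marks the rejection as temporary so the sidecar retries,
		// rather than surfacing Internal while the quiesce is in flight.
		return nil, status.Errorf(codes.Aborted,
			"quiesce for group snapshot %q is still in progress", req.GetName())
	}
	// ... create the group snapshot once quiesce has completed ...
	return &csi.CreateVolumeGroupSnapshotResponse{}, nil
}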

@leonardoce
Contributor

In my tests, I traced it down to this commit, which does not introduce the issue but increases the probability of triggering it by adding another call to updateSnapshotContentStatus from createGroupSnapshotWrapper, which is not idempotent.

And, unfortunately, every call to createGroupSnapshotWrapper results in more VolumeSnapshots & VolumeSnapshotContents getting created.
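
(For illustration only, and not the fix proposed in the commits below: the usual shape of an idempotent create is to tolerate AlreadyExists and reuse the existing object, sketched here against the snapshot clientset. Note that the time-suffixed names in the logs above mean this alone would only catch same-second retries. Import paths assume the v6 client module.)

package main

import (
	"context"

	crdv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	clientset "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ensureContent creates the VolumeSnapshotContent if it is missing and
// reuses the object left behind by a previous attempt otherwise.
func ensureContent(cs clientset.Interface, desired *crdv1.VolumeSnapshotContent) (*crdv1.VolumeSnapshotContent, error) {
	vsc, err := cs.SnapshotV1().VolumeSnapshotContents().Create(
		context.TODO(), desired, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		// Reuse the object created by a previous attempt instead of failing.
		return cs.SnapshotV1().VolumeSnapshotContents().Get(
			context.TODO(), desired.Name, metav1.GetOptions{})
	}
	return vsc, err
}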

leonardoce added a commit to leonardoce/external-snapshotter that referenced this issue Apr 15, 2024
This patch replaces the usage of `UpdateStatus` with a patch against the
`status` subresource in `updateSnapshotContentStatus`.

The reason behind this change is to avoid conflicts that could
potentially arise, as they pose a risk within the context of the calling
function `createGroupSnapshotWrapper`.

If `createGroupSnapshotWrapper` is called multiple times for the same
GroupSnapshot, this will lead to multiple stale `VolumeSnapshot` and
`VolumeSnapshotContent` being created.

Partially closes: kubernetes-csi#1050
leonardoce added a commit to leonardoce/external-snapshotter that referenced this issue Apr 16, 2024
This patch replaces the usage of `UpdateStatus` with a patch against the
`status` subresource in `updateSnapshotContentStatus`.

The reason behind this change is to avoid conflicts that could
potentially arise, as they pose a risk within the context of the calling
function `createGroupSnapshotWrapper`.

If `createGroupSnapshotWrapper` is called multiple times for the same
GroupSnapshot, this will lead to multiple stale `VolumeSnapshot` and
`VolumeSnapshotContent` being created.

Related to: kubernetes-csi#1050
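
For context, the change these commit messages describe amounts roughly to the sketch below: a merge patch against only the status subresource carries no resourceVersion, so it cannot hit the "object has been modified" conflict that a read-modify-write UpdateStatus can. The patch payload and names here are illustrative, not the exact code from the PR.

package main

import (
	"context"

	clientset "github.com/kubernetes-csi/external-snapshotter/client/v6/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// patchContentStatus updates only the status subresource via a merge patch
// instead of UpdateStatus, avoiding resourceVersion conflicts on retry.
func patchContentStatus(cs clientset.Interface, contentName string) error {
	patch := []byte(`{"status":{"readyToUse":true}}`)
	_, err := cs.SnapshotV1().VolumeSnapshotContents().Patch(
		context.TODO(), contentName,
		types.MergePatchType, patch,
		metav1.PatchOptions{},
		"status") // target the status subresource
	return err
}
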
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 14, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 13, 2024
@Madhu-1
Contributor Author

Madhu-1 commented Aug 14, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 14, 2024
@yati1998
Contributor

yati1998 commented Sep 4, 2024

Hi, I tested the bug with the latest builds, and I think it works fine.

  1. Created PVCs with label group=test
yatipadia:ceph-csi$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc     Bound    pvc-9c7c26f6-9340-4ce2-9a8e-e6970811c36d   1Gi        RWO            rook-cephfs       <unset>                 17h
cephfs-pvc-1   Bound    pvc-83723b72-c960-4533-8381-f8b4b50afec8   1Gi        RWO            rook-cephfs       <unset>                 17h
cephfs-pvc-2   Bound    pvc-df7a60f9-6cfe-4620-aa2b-18310f3d74fc   2Gi        RWO            rook-cephfs       <unset>                 5m7s

  2. Created a VolumeGroupSnapshot
yatipadia:ceph-csi$ kubectl get volumegroupsnapshot
NAME                       READYTOUSE   VOLUMEGROUPSNAPSHOTCLASS          VOLUMEGROUPSNAPSHOTCONTENT                              CREATIONTIME   AGE
new-groupsnapshot-demo     true         csi-cephfsplugin-groupsnapclass   groupsnapcontent-f37117e6-b3c0-41c2-9ab9-18705a78eee0   26m            26m
new-groupsnapshot-demo-1   true         csi-cephfsplugin-groupsnapclass   groupsnapcontent-b8b1c10d-5c07-47c3-bc36-42d4294628e4   4m42s          4m43s

Output for Volumegroupsnapshotcontent, volumesnapshotcontent and volumesnapshot

yatipadia:ceph-csi$ kubectl get volumegroupsnapshotcontent
NAME                                                    READYTOUSE   DELETIONPOLICY   DRIVER                          VOLUMEGROUPSNAPSHOTCLASS          VOLUMEGROUPSNAPSHOTNAMESPACE   VOLUMEGROUPSNAPSHOT        AGE
groupsnapcontent-b8b1c10d-5c07-47c3-bc36-42d4294628e4   true         Delete           rook-ceph.cephfs.csi.ceph.com   csi-cephfsplugin-groupsnapclass   default                        new-groupsnapshot-demo-1   5m25s
groupsnapcontent-f37117e6-b3c0-41c2-9ab9-18705a78eee0   true         Delete           rook-ceph.cephfs.csi.ceph.com   csi-cephfsplugin-groupsnapclass   default                        new-groupsnapshot-demo     26m

yatipadia:ceph-csi$ kubectl get volumesnapshot
NAME                                                                                            READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT                                                                              RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                                                                    CREATIONTIME   AGE
snapshot-4a40303d602a48851596ec77f492befe4c9ba77e268ec15250f4d52c1927c47c-2024-08-29-11.23.13   true                     snapcontent-4a40303d602a48851596ec77f492befe4c9ba77e268ec15250f4d52c1927c47c-2024-08-29-11.23.13   1Gi                           snapcontent-4a40303d602a48851596ec77f492befe4c9ba77e268ec15250f4d52c1927c47c-2024-08-29-11.23.13   5m59s          5m59s
snapshot-c06b464a14d60c4bcf91848f69f77178a210a434335897f4f738a7023c1123ad-2024-08-29-11.23.14   true                     snapcontent-c06b464a14d60c4bcf91848f69f77178a210a434335897f4f738a7023c1123ad-2024-08-29-11.23.14   2Gi                           snapcontent-c06b464a14d60c4bcf91848f69f77178a210a434335897f4f738a7023c1123ad-2024-08-29-11.23.14   5m59s          5m57s
snapshot-d8aa98437b4461cfddfa08d970171b5d971eacd00aea485ca6c730f66a9886d4-2024-08-29-11.1.55    true                     snapcontent-d8aa98437b4461cfddfa08d970171b5d971eacd00aea485ca6c730f66a9886d4-2024-08-29-11.1.55    1Gi                           snapcontent-d8aa98437b4461cfddfa08d970171b5d971eacd00aea485ca6c730f66a9886d4-2024-08-29-11.1.55    27m            27m
snapshot-eee979df5ec649d929d9a31b78d7aaed57003d9bd8c4d3e1ff2d7ba9e105c452-2024-08-29-11.23.13   true                     snapcontent-eee979df5ec649d929d9a31b78d7aaed57003d9bd8c4d3e1ff2d7ba9e105c452-2024-08-29-11.23.13   1Gi                           snapcontent-eee979df5ec649d929d9a31b78d7aaed57003d9bd8c4d3e1ff2d7ba9e105c452-2024-08-29-11.23.13   5m59s          5m58s
snapshot-f65a4678955339d03bb7fe2368f1f2a450f1425b3fbebac62a42a4c1e608f242-2024-08-29-11.1.55    true                     snapcontent-f65a4678955339d03bb7fe2368f1f2a450f1425b3fbebac62a42a4c1e608f242-2024-08-29-11.1.55    1Gi                           snapcontent-f65a4678955339d03bb7fe2368f1f2a450f1425b3fbebac62a42a4c1e608f242-2024-08-29-11.1.55    27m            27m

yatipadia:ceph-csi$ kubectl get volumesnapshotcontent
NAME                                                                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                          VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT                                                                                  VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-4a40303d602a48851596ec77f492befe4c9ba77e268ec15250f4d52c1927c47c-2024-08-29-11.23.13   true         1073741824    Delete           rook-ceph.cephfs.csi.ceph.com                         snapshot-4a40303d602a48851596ec77f492befe4c9ba77e268ec15250f4d52c1927c47c-2024-08-29-11.23.13   default                   6m24s
snapcontent-c06b464a14d60c4bcf91848f69f77178a210a434335897f4f738a7023c1123ad-2024-08-29-11.23.14   true         2147483648    Delete           rook-ceph.cephfs.csi.ceph.com                         snapshot-c06b464a14d60c4bcf91848f69f77178a210a434335897f4f738a7023c1123ad-2024-08-29-11.23.14   default                   6m22s
snapcontent-d8aa98437b4461cfddfa08d970171b5d971eacd00aea485ca6c730f66a9886d4-2024-08-29-11.1.55    true         1073741824    Delete           rook-ceph.cephfs.csi.ceph.com                         snapshot-d8aa98437b4461cfddfa08d970171b5d971eacd00aea485ca6c730f66a9886d4-2024-08-29-11.1.55    default                   27m
snapcontent-eee979df5ec649d929d9a31b78d7aaed57003d9bd8c4d3e1ff2d7ba9e105c452-2024-08-29-11.23.13   true         1073741824    Delete           rook-ceph.cephfs.csi.ceph.com                         snapshot-eee979df5ec649d929d9a31b78d7aaed57003d9bd8c4d3e1ff2d7ba9e105c452-2024-08-29-11.23.13   default                   6m24s
snapcontent-f65a4678955339d03bb7fe2368f1f2a450f1425b3fbebac62a42a4c1e608f242-2024-08-29-11.1.55    true         1073741824    Delete           rook-ceph.cephfs.csi.ceph.com                         snapshot-f65a4678955339d03bb7fe2368f1f2a450f1425b3fbebac62a42a4c1e608f242-2024-08-29-11.1.55    default                   27m

The above result is after following the steps below:

  1. Created 2 PVCs with the group=test label
  2. Created a VolumeGroupSnapshot
  3. Created another PVC with 2Gi
  4. Labelled the new PVC with the same selector
  5. Created another VolumeGroupSnapshot

Result:

A total of 5 VolumeSnapshotContents can be seen at the end of the above testing, which is expected.
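
For reference, the VolumeGroupSnapshots in these steps select PVCs by label; the manifest looks roughly like this (v1alpha1 API, with names taken from the outputs above):

apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshot
metadata:
  name: new-groupsnapshot-demo
  namespace: default
spec:
  volumeGroupSnapshotClassName: csi-cephfsplugin-groupsnapclass
  source:
    selector:
      matchLabels:
        group: test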

@yati1998
Contributor

yati1998 commented Sep 4, 2024

cc @Madhu-1

@Madhu-1
Contributor Author

Madhu-1 commented Sep 4, 2024

@yati1998 it's still possible to reproduce this one with a flow that involves a restart of the snapshotter.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 3, 2024