
OADP-1668: VolumeSnapshot-related CRs, namely VolumeSnapshot and VolumeSnapshotContent, are not being included by the OADP version of the Velero server in the backup bundle #974

Closed
vjaincatalogic opened this issue Apr 14, 2023 · 28 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@vjaincatalogic

Only developers will use GitHub issues for development purposes: agree.

@kaovilai
Member

Thanks for your report. Tracking in JIRA.

@kaovilai changed the title on Apr 14, 2023, replacing the "Bug:" prefix with "OADP-1668:".
@kaovilai
Member

@vjaincatalogic Is this from building from source, or are you using official releases?

@vjaincatalogic
Author

@kaovilai, yes, this is from an official release; I'm not building from source. I used the official oadp-operator from the marketplace.

@kaovilai
Member

Can you verify the version?

@vjaincatalogic
Author

The OADP version is v1.1.3.

@shubham-pampattiwar
Member

@vjaincatalogic I do see volumesnapshotcontent being backed up in the bundle content you provided. Could you provide more details on:

  • the backup CR used (a typical example is sketched below)
  • the DPA CR used + the DPA CR status
  • the Velero logs
  • the volume-snapshot-mover logs
  • the VolumeSnapshotBackup CR and its status (this would be in the application namespace)
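
For context, a typical Velero Backup CR for a CSI-based OADP backup looks roughly like the sketch below. The backup name my-app-backup, the application namespace my-app, and the BSL name velero-sample-1 are placeholder assumptions for illustration, not values taken from this issue.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup        # hypothetical backup name
  namespace: openshift-adp
spec:
  includedNamespaces:
    - my-app                 # hypothetical application namespace
  storageLocation: velero-sample-1   # hypothetical BSL name
  snapshotVolumes: true      # snapshot PVCs; goes through CSI when the csi plugin is enabled
  ttl: 720h0m0s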

@vjaincatalogic
Author

Hi @shubham-pampattiwar, I will provide all these details, but the cluster has since been deleted, so it will take me some time to get back to you.

@vjaincatalogic
Author

Backup bundle:
backup_bundle.tar.gz

I am providing the first three: the backup CR used, the DPA CR used + the DPA CR status, and the Velero logs.
requested.tar.gz

I'm not able to find the volume-snapshot-mover logs or the VolumeSnapshotBackup CR and its status, as I had not used Volume Snapshot Mover. Is that the reason I'm not able to see the VolumeSnapshot in the backup bundle in my BSL?

@shubham-pampattiwar
Member

@vjaincatalogic Which type of OADP backup are you trying to take here: a native CSI backup or a Datamover CSI backup?

@draghuram

@shubham-pampattiwar, this was a CSI backup. By "Datamover" CSI backup, do you mean a Restic or Kopia backup?

@kaovilai
Member

kaovilai commented May 2, 2023

"Datamover" CSI backup, do you mean Restic or Kopia backup?

We mean our own implementation of "Data Mover" in OADP v1.1, which takes CSI snapshots and makes a restic copy of the data from each snapshot to S3. This is an OADP-specific capability that upstream will gain with vmware-tanzu/velero#5968.

See the doc at
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/backup_and_restore/index#oadp-using-data-mover-for-csi-snapshots_backing-up-applications

TL;DR: we do not mean restic or kopia.

@kaovilai
Member

kaovilai commented May 2, 2023

If you do not have enable: true here, then you did not use our Data Mover:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  features:
    dataMover:
      enable: true

@vjaincatalogic
Author

vjaincatalogic commented May 4, 2023

Hi @shubham-pampattiwar / @kaovilai, so I understand that my DataProtectionApplication does not have dataMover enabled. Is it required for CSI VolumeSnapshots? Without the OADP dataMover, will we not see the VolumeSnapshot data in the BackupStorageLocation?

@kaovilai
Member

kaovilai commented May 4, 2023

It should not be required.
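
For reference, a native CSI backup (without Data Mover) is generally configured roughly like the sketch below: the csi plugin is added to the DPA's defaultPlugins, and the cluster's VolumeSnapshotClass is labeled so Velero can find it. The snapshot class name example-snapclass and the driver example.csi.driver.io are placeholder assumptions, not values from this issue.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi        # enables Velero's CSI snapshot support
        - aws        # object-store plugin; depends on the backend in use
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass              # hypothetical name
  labels:
    velero.io/csi-volumesnapshot-class: "true"   # marks the class Velero should use
driver: example.csi.driver.io          # hypothetical CSI driver
deletionPolicy: Retain

Note that with a native CSI backup the snapshot data itself generally stays with the storage provider; only the VolumeSnapshot and VolumeSnapshotContent definitions travel with the backup bundle in the BSL.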

@vjaincatalogic
Author

In that case, I'm just confirming with the OADP team: has the team received all the information required to investigate this issue, and is it still investigating?

@kaovilai
Member

kaovilai commented May 8, 2023

At this time we are not requesting any more information. This issue is still open for investigation.

@vjaincatalogic
Author

Hi team, any updates on this issue?

@weshayutin
Contributor

weshayutin commented Jun 16, 2023

We had to push it out further. Tracked in OADP-1669

@kaovilai
Member

Still looking into it. Thanks for the details!

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 9, 2023
@kaovilai
Member

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2023
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 8, 2024
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 8, 2024
@kaovilai
Member

kaovilai commented Feb 8, 2024

/remove-lifecycle stale

@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Mar 9, 2024

openshift-ci bot commented Mar 9, 2024

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
