Error restoring restic volumes - invalid id \"\": multiple IDs with prefix found #1998

Closed
robertgates55 opened this issue Oct 25, 2019 · 4 comments
Labels: Restic (Relates to the restic integration)

Comments

@robertgates55

What steps did you take and what happened:
When attempting to restore a successful Velero backup of a namespace with restic enabled, I get the following error in the restic logs:

time="2019-10-25T10:05:57Z" level=error msg="Error restoring volume" controller=pod-volume-restore error="error running restic restore, cmd=restic restore --repo=s3:s3-eu-west-1.amazonaws.com/duco-qa1-backups-dea6831f/restic/knowledgehub --password-file=/tmp/velero-restic-credentials-knowledgehub194842862 --cache-dir=/scratch/.cache/restic  --target=., stdout=, stderr=invalid id \"\": multiple IDs with prefix found\n: exit status 1" error.file="/go/src/github.com/heptio/velero/pkg/controller/pod_volume_restore_controller.go:343" error.function="github.com/heptio/velero/pkg/controller.(*podVolumeRestoreController).restorePodVolume" logSource="pkg/controller/pod_volume_restore_controller.go:298" name=knowledgehub-1571946878-20191025110546-wmsh7 namespace=velero restore=velero/knowledgehub-1571946878-20191025110546

What did you expect to happen:
A successful, clean restore of all my volumes.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)

velero backup describe <backupname>:

Name:         knowledgehub-1571946878
Namespace:    velero
Labels:       velero.io/backup=knowledgehub-1571946878
              velero.io/pv=pvc-c615f4f3-f67d-11e9-8225-068a97741dda
              velero.io/storage-location=default
Annotations:  <none>

Phase:  Completed

Namespaces:
  Included:  knowledgehub
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2019-10-24 20:54:41 +0100 BST
Completed:  2019-10-24 20:54:47 +0100 BST

Expiration:  2019-11-23 19:54:41 +0000 GMT

Persistent Volumes:  5 of 5 snapshots completed successfully (specify --details for more information)

Restic Backups (specify --details for more information):
  Completed:  2

Anything else you would like to add:

Environment:

  • Velero version (use velero version):
Client:
        Version: v1.0.0
        Git commit: 72f5cadc3a865019ab9dc043d4952c9bfd5f2ecb
Server:
        Version: v1.1.0
  • Velero features (use velero client config get features): features: <NOT SET>
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes installer & version: EKS
@skriss (Contributor) commented Oct 28, 2019

Can you provide the output of kubectl -n velero get podvolumebackups -l velero.io/backup-name=knowledgehub-1571946878 -o yaml?
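If it helps narrow things down: assuming (as in the current restic integration) that each PodVolumeBackup records its restic snapshot in .status.snapshotID, a quick way to spot the empty-ID case is to print just that field per volume:

kubectl -n velero get podvolumebackups -l velero.io/backup-name=knowledgehub-1571946878 \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.snapshotID}{"\n"}{end}'
# any name printed with a blank snapshot ID would explain the restore error above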

skriss added the Restic and Waiting for info labels Oct 28, 2019
@schaze commented Nov 1, 2019

I had the same issue today (the backup contained only restic backups, no volume snapshots).

When I checked the backup in question (using your command above), I could see that the volume was empty. I had mounted the volume at the wrong path in the pod, so the container never wrote any data to it.
Once there was data on the volume, backups and restores went fine.

However, I think there should be a more graceful way for velero to handle this, instead of failing on restore.
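For anyone hitting the same mismatch, a quick way to verify it is sketched below; the pod name and the /data path are placeholders for your own workload:

kubectl -n knowledgehub get pod <pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.volumeMounts[*].mountPath}{"\n"}{end}'
# then confirm the application actually writes into the mounted path
kubectl -n knowledgehub exec <pod-name> -- ls -la /data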

@skriss (Contributor) commented Nov 1, 2019

@schaze thanks for that - it's possible we inadvertently introduced a regression around this, as empty restic backups/restores were supposed to be gracefully handled already. I'll look into it.

@skriss (Contributor) commented Nov 8, 2019

Closing this out per discussion & bug fix in v1.2 - feel free to reach out again as needed.

skriss closed this as completed Nov 8, 2019