
feat(zfspv): adding support to restore on different setup/nodes #118

Merged

merged 1 commit into openebs:master from restore on Sep 30, 2020

Conversation


@pawanpraka1 (Contributor) commented on Sep 23, 2020

fixes: openebs/zfs-localpv#200

Signed-off-by: Pawan <[email protected]>

Why is this PR required? What issue does it fix?:

The PV and the ZFSVolume object have node affinity set to the original node name. While doing the restore, if the original nodes are not present, the Pod will not come into the running state.
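
For illustration, the pinning looks like the sketch below: a hedged example of the node affinity a zfs-localpv PV carries, where the topology key (openebs.io/nodename) and the PV name are assumptions for this example, and the node name is taken from the mapping further down.

```yaml
# Illustrative sketch (not taken from this PR): a PV whose node
# affinity pins it to the original node, which no longer exists
# on the new setup, so the Pod cannot be scheduled.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example                      # hypothetical PV name
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodename   # assumed topology key
              operator: In
              values:
                - pawan-old-node1        # original node, absent after restore
```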

What this PR does?:

Doc: https://velero.io/docs/v1.4/restore-reference/#changing-pvc-selected-node

Here we can create a ConfigMap that holds the node mapping, like below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-pvc-node-selector-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-pvc-node: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # node name and the value is the new node name.
  pawan-old-node1: pawan-new-node1
  pawan-old-node2: pawan-new-node2
```

While doing the restore, we will create the volume and the PV on the mapped node, as per the node mapping provided in the ConfigMap.
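
To make the effect concrete, here is a hedged sketch of a ZFSVolume after such a restore; the apiVersion and the spec fields shown (poolName, capacity, fsType, ownerNodeID) are assumptions for illustration, with the node names taken from the ConfigMap above.

```yaml
# Illustrative sketch (not taken from this PR): a restored ZFSVolume
# whose owner node has been remapped according to the ConfigMap.
apiVersion: zfs.openebs.io/v1      # assumed API version
kind: ZFSVolume
metadata:
  name: pvc-example                # hypothetical volume name
  namespace: openebs
spec:
  poolName: zfspv-pool             # assumed pool name
  capacity: "4294967296"
  fsType: zfs
  ownerNodeID: pawan-new-node1     # remapped from pawan-old-node1
```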

Checklist:

  • Fixes #
  • PR Title follows the convention of <type>(<scope>): <subject>
  • Has the change log section been updated?
  • Commit has unit tests
  • Commit has integration tests
  • (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
  • (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

@kmova merged commit b4ac369 into openebs:master on Sep 30, 2020
@pawanpraka1 deleted the restore branch on September 30, 2020 06:12