Commit

add changelog
Signed-off-by: Ming <[email protected]>
qiuming-best committed Jan 17, 2023
1 parent 5899287 commit a388d6c
Showing 2 changed files with 10 additions and 9 deletions.
1 change: 1 addition & 0 deletions changelogs/unreleased/5773-qiuming-best
@@ -0,0 +1 @@
Design for Handling backup of volumes by resources filters
18 changes: 9 additions & 9 deletions design/handle-backup-of-volumes-by-resources-filters.md
@@ -5,7 +5,7 @@ Currently, Velero doesn't have one flexible way to filter volumes.

If users want to skip backup of volumes, or to back up only some volumes across different namespaces in batch, they currently need to apply the opt-in or opt-out approach pod by pod, or use label selectors, which is cumbersome when the related pods carry very different labels and there are many volumes to handle. It would be convenient if Velero could provide one way to filter the backup of volumes just by `some specific volumes attributes`.
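For reference, the existing opt-in/opt-out approach works through per-pod annotations; the following is a minimal sketch (pod, namespace, and volume names are made up for illustration) showing why annotating every pod becomes tedious at scale:

```yaml
# Opt-in: only the volumes listed in this annotation are backed up by pod
# volume backup; the opt-out variant uses the annotation
# "backup.velero.io/backup-volumes-excludes" instead. Every pod needs its own
# annotation, which is what makes this approach cumbersome in batch.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod            # illustrative name
  namespace: ns-a
  annotations:
    backup.velero.io/backup-volumes: "data"
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```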

Also, currently, it's not accurate enough if the users want to select a specific volume to do a backup or skip by without patching labels or annotations to the pods. It would be useful if users could accurately select target volume by `one specific resource selector. for the users could accurately select the volume to backup or skip in their console when using velero for secondary development.
Also, it's currently not accurate enough when users want to select a specific volume to back up or skip without patching labels or annotations onto the pods. It would be useful if users could accurately select the target volume by `one specific resource selector`. Users could then accurately select the volume to back up or skip in their own console when using Velero for secondary development.

## Background
As of today, Velero has many filters for deciding whether to back up or skip resources, including resource filters like `IncludedNamespaces, ExcludedNamespaces`, label selectors like `LabelSelector, OrLabelSelectors`, annotations like `backup.velero.io/must-include-additional-items`, etc. But these are not flexible enough to handle volumes; we need one generic way to filter volumes.
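For comparison, here is a minimal sketch of a Backup spec using the existing resource filters named above (namespaces and labels are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup01
  namespace: velero
spec:
  includedNamespaces:      # existing namespace filters
    - ns-a
  excludedNamespaces:
    - kube-system
  labelSelector:           # existing label selector
    matchLabels:
      app: nginx
```

None of these filters can target individual volumes by their own attributes, which is the gap this design addresses.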
@@ -39,12 +39,12 @@ When Velero handles volumes backup should respect the filter rules defined in the
The resource filter rules should contain both `include` and `exclude` rules.

For the rules on `one specific resource selector`, we introduce a `GVRN` style of resource filter, where resources are identified by their resource type and resource name, or GVRN.
Here we call it `Volumes Attributes Selector`, which matches volumes with the same attributes defined.

Here we call it `GVRN Selector` which exactly matches the resources to be handled.
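As a purely illustrative sketch (the notation below is an assumption, not the final syntax; the actual schema is given in the filter fields format section), a GVRN selector exactly matches resources by type and name:

```yaml
# Hypothetical GVRN entries: each entry names a resource type plus a resource
# name, so exactly one resource is matched per entry. The
# "resource/namespace/name" notation here is illustrative only.
include:
  - persistentvolumeclaims/ns-a/data-pvc
exclude:
  - persistentvolumes/pv-scratch-01
```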

For the attributes on `some specific volumes attributes`, we basically follow the data struct defined in [PersistentVolumeSpec](https://github.com/kubernetes/kubernetes/blob/v1.26.0/pkg/apis/core/types.go#L304), and currently only handle some of its common fields.

Here we call it `GVRN Selector` which exactly matches the resources to be handled.
Here we call it `Volumes Attributes Selector`, which matches volumes with the same attributes defined.

### Filter fields format
The filter JSON config file would look like this:
Expand Down Expand Up @@ -114,11 +114,11 @@ In the storage part, we defined `Volumes Attributes Selector` to filter resource

The storage part defines rules under two keys, `pv` and `volume`, which correspond to `Volume snapshot` and to `Kopia, Restic` backups respectively.

A filter in storage with a specific key and empty value, which means the value matches any value. For example, if the `storage.pv.exclude.persistentVolumeSource.nfs` is `{}` it means if `NFS` is used as `persistentVolumeSource` in Persistent Volume will be skipped no matter if the NFS server or NFS Path is,
A filter in storage may have a specific key with an empty value, which means the value matches any value. For example, if `storage.pv.exclude.persistentVolumeSource.nfs` is `{}`, any Persistent Volume that uses `NFS` as its `persistentVolumeSource` will be skipped, no matter what the NFS server or NFS path is.

A filter may have multiple values, concatenated by commas. For example, if `storage.pv.include.storageClassName` is `gp2, ebs-sc`, Persistent Volumes with either the gp2 or the ebs-sc storage class will be backed up.

The size of each single filter value should limit to 256 bytes in case of an unfriendly variable assignment.
The size of each single filter value should be limited to 256 bytes to guard against unreasonably long value assignments.
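Putting the storage rules above together, a `storage` section could look like the following sketch; the field paths are the ones named above, while the YAML rendering and surrounding layout are assumptions for illustration:

```yaml
storage:
  pv:                                # applied when taking volume snapshots
    include:
      storageClassName: gp2,ebs-sc   # multiple values concatenated by commas
    exclude:
      persistentVolumeSource:
        nfs: {}                      # empty value: matches any NFS server/path
  volume: {}                         # applied for Kopia/Restic backups (no rules here)
```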

If a user defines `pv` filter rules but uses Kopia or Restic to do the backup, the backup will fail when validating the resource filter configuration. The same applies if `volume` filter rules are defined but CSI or plugins are used to take volume snapshots.

Expand Down Expand Up @@ -167,7 +167,7 @@ data:
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-16T14:08:12Z"
  name: backup01
  name: cm-backup01
  namespace: default
  resourceVersion: "17891025"
  uid: b73e7f76-fc9e-4e72-8e2e-79db717fe9f1
@@ -184,7 +184,7 @@ The name of the configmap is `cm-`+`$BackupName`, and it's in Velero install nam
- The resource filter configmap should also persist into object storage and be synchronized automatically at startup.

### Display of volume resource filter
As the resource filter configmap is referenced by backup CR, the rules in configmap are not so intuitive, so we need to integrate rules in configmap to the output of the command `velero backup describe`
As the resource filter configmap is only referenced by the backup CR, the rules in the configmap are not easy to see directly, so we need to integrate the rules from the configmap into the output of the command `velero backup describe` and make them more readable.
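This excerpt does not show the exact mechanism by which the backup CR references the configmap; as one hypothetical illustration (the annotation key below is invented for this sketch, not part of the design):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup01
  namespace: velero
  annotations:
    # Hypothetical reference; the design only states that the backup CR
    # references the filter configmap named cm-<backup name>.
    velero.io/resource-filter-configmap: cm-backup01
```

Under this sketch, `velero backup describe` would look up `cm-backup01` and render its rules alongside the rest of the backup description.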

## Compatibility
Currently, we have these resource filters:
