As a part of #6734, related to #8044 and #8186. At present, Velero Generic Restore doesn't consider the distribution of restore pods across nodes because of #8044. Once that is enhanced, restore activities should be distributed evenly across nodes whenever possible.
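A minimal sketch of one way to achieve even spreading, assuming the restore pod spec is built in Go with the standard Kubernetes API types: a topology spread constraint keyed on the node hostname. This is not Velero's current implementation, and the `velero.io/exposer-pod-group` label key is hypothetical.

```go
// Sketch only: spread restore pods evenly across nodes via a topology
// spread constraint on the pod spec. Label key below is hypothetical.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func restorePodSpreadConstraints() []corev1.TopologySpreadConstraint {
	return []corev1.TopologySpreadConstraint{
		{
			// Allow at most a difference of 1 restore pod between any two nodes.
			MaxSkew:     1,
			TopologyKey: "kubernetes.io/hostname",
			// Prefer even spread, but still schedule if it cannot be satisfied.
			WhenUnsatisfiable: corev1.ScheduleAnyway,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{
					"velero.io/exposer-pod-group": "restore-pod", // hypothetical label
				},
			},
		},
	}
}
```

Using `ScheduleAnyway` rather than `DoNotSchedule` keeps this a soft preference, so restores still proceed on skewed clusters.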
I'm not sure this is better than letting the k8s scheduler decide the placement of the restore pod, so we may instead introduce a configuration set for the restore pod in the "node-agent-configuration".
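A hedged sketch of what such a configuration section might look like. The `RestorePodConfig` type, its fields, and their JSON keys are all hypothetical; no such fields exist in Velero's node-agent configuration today.

```go
// Hypothetical sketch only: a possible shape for a restore-pod section
// in the node-agent configuration. None of these fields exist in Velero.
package config

// RestorePodConfig could let users either keep the default k8s scheduler
// behavior or opt in to explicit spreading of restore pods across nodes.
type RestorePodConfig struct {
	// SpreadAcrossNodes enables even distribution via topology spread
	// constraints instead of leaving placement entirely to the scheduler.
	SpreadAcrossNodes bool `json:"spreadAcrossNodes,omitempty"`
	// NodeSelector restricts which nodes restore pods may run on.
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
```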
If #8044 is fixed, i.e., we introduce a flag that allows the restore pod to ignore the "WaitForFirstConsumer" volume binding mode in the storage class, we can run DataDownloads in parallel and achieve better distribution of restore pods.
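One possible mechanism for such a flag, sketched under the assumption that #8044 is addressed by pre-selecting a node: the `volume.kubernetes.io/selected-node` annotation (a real annotation, normally set by the kube-scheduler for delayed-binding volumes) lets a `WaitForFirstConsumer` storage class provision the volume immediately. The helper name here is ours, not an existing Velero function.

```go
// Sketch: pin a restore PVC's provisioning to a chosen node so a
// WaitForFirstConsumer storage class provisions without waiting for a
// consumer pod, allowing restore pods to be placed and run in parallel.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// annotateSelectedNode pins provisioning of pvc to nodeName.
func annotateSelectedNode(pvc *corev1.PersistentVolumeClaim, nodeName string) {
	if pvc.Annotations == nil {
		pvc.Annotations = map[string]string{}
	}
	pvc.Annotations["volume.kubernetes.io/selected-node"] = nodeName
}
```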