[velero] Chart ux improvements #138
base: main
Conversation
There are probably a few additional UX clean-up tasks we should roll in if we are revving as a breaking change before we merge. I'll try to file at least a single proposal issue with a list shortly for discussion. I'd also like to clean up the README so a new user knows exactly which values they must provide for a minimal functional deployment.
/rebase
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
--set image.pullPolicy=IfNotPresent
I like the idea that the user no longer needs to specify the plugin images, and from my usage the lines below are a bit redundant:
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
However, we have a use case that we'll mount two plugin images, one for AWS and the other one for CSI, so our command is:
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--set initContainers[1].name=velero-plugin-for-csi \
--set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1 \
--set initContainers[1].volumeMounts[0].mountPath=/target \
--set initContainers[1].volumeMounts[0].name=plugins
With this PR, I think our use case can't be fulfilled. I'd rather see whether it's possible to remove the redundant volumeMounts[0] entries and have the chart generate them automatically, which becomes:
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[1].name=velero-plugin-for-csi \
--set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1
You can add additional initContainers as before; the one for the provider will be added automatically. At present you'd add (assuming aws is the provider):
--set initContainers[0].name=velero-plugin-for-csi \
--set initContainers[0].image=velero/velero-plugin-for-csi:v0.1.1 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
Are you using both because the other provider is in a backup storage location or snapshot location? Can we just add initContainers for all the various providers across the values file and leave initContainers as a standard extension point for fallback?
I'd recommend against automounting plugins at /target for all initContainers, as that is a general extension point. If you really want something similar, perhaps it's worth adding a list of plugin images to add over and above the provider one? Something like
additionalPlugins:
  - name: velero-plugin-for-csi
    image: velero/velero-plugin-for-csi:v0.1.1
which then creates an initContainer entry with the volume mount.
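A minimal sketch of the template logic this implies, assuming a hypothetical additionalPlugins value (the file layout and value names are illustrative, not the chart's actual templates):

```yaml
# deployment.yaml fragment (illustrative): expand additionalPlugins into
# initContainers that copy each plugin binary into the shared plugins volume.
initContainers:
{{- range .Values.additionalPlugins }}
  - name: {{ .name }}
    image: {{ .image }}
    volumeMounts:
      - name: plugins
        mountPath: /target
{{- end }}
```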
> Are you using both because the other provider is in a backup storage location or snapshot location? Can we just add initContainers for all the various providers across the values file and leave initContainers as a standard extension point for fallback?
Yes, we're using an S3-compatible store (MinIO), which requires velero-plugin-for-aws, and we also deploy ceph-csi to take volume snapshots via the Kubernetes volume snapshot CRDs with velero-plugin-for-csi.
> I'd recommend against automounting plugins at /target for all initContainers, as that is a general extension point. If you really want something similar, perhaps it's worth adding a list of plugin images to add over and above the provider one? Something like additionalPlugins: - name: velero-plugin-for-csi image: velero/velero-plugin-for-csi:v0.1.1, which then creates an initContainer entry with the volume mount.
Sounds good. Personally I'd prefer this approach; then we could remove initContainers from values.yaml.
Any feedback on this? cc @carlisia @nrb @ashish-amarnath
I added initContainers for each provider configured at the various levels in the values so if you have different providers for configuration, backupStorageLocation, and snapshotLocation it will add all the plugins.
It looks like only the configuration.provider has a secret injected into the primary container. Is that a concern? Should we be refactoring the credentials to allow for multiple providers here as well?
Also worth mentioning: I removed the "data" in helpers and put it into values. That simplifies the logic nicely and moves the images to standard patterns, which makes them Flux HelmOperator friendly and I'm sure has other similar benefits.
Velero installs plugins as initContainers with the expectation that they will be copied to the correct mountpoint for invocation; all plugins should end up in the same directory for them to work properly at runtime.
Shouldn't be a problem. They are all copied to /target in the plugins volume as expected. As you mention re the secret, though, we'll only have one set of credentials, right?
@kav sorry for the late reply.
I'd prefer to change the installation from
helm install \
... \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--set initContainers[1].name=velero-plugin-for-csi \
--set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1 \
--set initContainers[1].volumeMounts[0].mountPath=/target \
--set initContainers[1].volumeMounts[0].name=plugins
to
helm install \
... \
--set plugins=velero/velero-plugin-for-aws:v1.1.0,velero/velero-plugin-for-csi:v0.1.1
then the installation command is aligned with velero install.
WDYT?
Sounds great to me! I wrote a reply discussing the merits of that vs a YAML list, then realized it's a single split call, erased it, and implemented support for both.
Updated, we now support
--set plugins=velero/velero-plugin-for-aws:v1.1.0,velero/velero-plugin-for-csi:v0.1.1
and
plugins:
  - velero/velero-plugin-for-aws:v1.1.0
  - velero/velero-plugin-for-csi:v0.1.1
and even
plugins:
  - velero/velero-plugin-for-aws:v1.1.0
  - repository: velero/velero-plugin-for-csi
    digest: sha256:60d47fd25216f13073525823a067eab223d12e695d4b41e480aa3ff13a58c916
    pullPolicy: Always
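For illustration, here is a sketch of template logic that could normalize all three forms (a CSV string, a list of strings, or a list of maps). This is an assumption about the approach, not necessarily the merged implementation:

```yaml
{{- /* Normalize .Values.plugins: may be "repo/a:v1,repo/b:v2" or a YAML list */}}
{{- $plugins := .Values.plugins }}
{{- if kindIs "string" $plugins }}
{{- $plugins = splitList "," $plugins }}
{{- end }}
initContainers:
{{- range $plugins }}
{{- if kindIs "string" . }}
  {{- /* "repo/image:tag" shorthand: derive the name from the image */}}
  - name: {{ splitList "/" . | last | splitList ":" | first }}
    image: {{ . }}
{{- else }}
  {{- /* map form with repository/digest/pullPolicy */}}
  - name: {{ splitList "/" .repository | last }}
    image: "{{ .repository }}@{{ .digest }}"
    imagePullPolicy: {{ .pullPolicy | default "IfNotPresent" }}
{{- end }}
    volumeMounts:
      - name: plugins
        mountPath: /target
{{- end }}
```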
Also added unit tests for all of that using https://github.com/quintush/helm-unittest
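For reference, a helm-unittest test for the plugins list would look roughly like this (the file path and assertion path are illustrative):

```yaml
# tests/deployment_test.yaml (illustrative)
suite: plugin initContainers
templates:
  - deployment.yaml
tests:
  - it: renders an initContainer per plugin entry
    set:
      plugins:
        - velero/velero-plugin-for-aws:v1.1.0
    asserts:
      - equal:
          path: spec.template.spec.initContainers[0].image
          value: velero/velero-plugin-for-aws:v1.1.0
```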
Also, any open questions and changes for #139 should be resolved before this goes to main. Are there other breaking changes under discussion? I can start a major version branch if we've got other stuff to roll together.
Switched the version to 3.0.0-alpha.1 so that if we do merge and have additional breaking changes, our versioning properly reflects that.
Tagged version as alpha since there are other breaking changes under discussion.
We create a secret if we have contents or envVars to put in it. We use a secret if we created one or if we have an existing one.
Sorry for the late review.
Thanks for bringing up helm-unittest; I've been researching Helm unit-test solutions recently and will try it later.
Yep
I did, but I suppose we can re-add it fairly easily. If we are moving to a breaking-change version, do you see back compat as a large issue here? A release note saying to remove the configuration key and place values at the root seems like a fairly minor amount of work for anyone migrating to the new version. On the other hand, allowing new patterns without forcing them is probably friendlier to users.
I'm not sure if the pattern is driven by static typing in the various languages backing Helm and Kubernetes, but outside of file-name cases it's uncommon to have a dynamic key as a name in values rather than an array of elements with name properties. As a new user of the Velero chart I tripped over this and expect other folks used to Kubernetes and Helm standards to do so as well. That said, it was a change to match standard patterns; if you folks prefer it as an object, that's fine, this one is a bit subjective.
In general, though, we should discuss back compat here. Part of the assumption in the changed UX is that some changes will require breaking the existing values structure. We can avoid that, but it will complicate the chart logic a bit. Perhaps allowing the new structures in a minor version and then enforcing them, and removing the legacy structures, in a later major version is a better strategy if you're concerned about a smooth transition? That seems like a reasonably balanced solution. I'll update the PR.
@carlisia @nrb do you have any other concerns about this PR?
Added back compat and tests for
Also added back compat and tests for schedules
A touch less graceful than I'd like, but it does work.
OK, I think this is baked. Apologies for not setting it to draft when I agreed to add back compat.
Y'all, this branch is getting incredibly stale. I'm happy to bring it up to date, but last time it was merge-ready there were crickets from the repository team. Is there any interest in merging here, or should I just go fork myself?
Not a maintainer, but the changes in this PR seem great. I'd recommend splitting them up into comprehensive chunks, though, as reviewing is a nightmare. Breakage might mean some backup system somewhere suddenly stops working, which isn't great. Love the changes otherwise!
I'd like to have separate PRs addressing these issues, especially PRs that preserve backward compatibility.
This is fully back compatible and non-breaking for all changes at this point, but I for sure hear the concern about breaking backups; never a good look. I can split out the changes, but they'll still likely need to be merged in sequence, or the combinatorics of merge order and back compat will get a bit nightmarish.
Addressing the Chart UX issues filed previously:
- existingSecretKey now specifies the key inside the existingSecret containing the IAM credentials. This also controls a generated secret in this case.
- Replaced the pluginImage key: plugin initContainers are now generated from the configured providers. I did guess on the alicloud config, so if someone could verify that I'd appreciate it. You may end up with extra initContainers if using "old" values.
- deployRestic is now restic.enabled. Fully supports back compat.
- useSecret is now computed based on the existence of secret contents or an existing secret.
- Removed the configuration wrapper; added tests to ensure behavior with or without the wrapper.
- schedules is now an array with a name property in each entry. Also with back compat.
- Started on adding a values table to the README, as is standard in most Helm charts.
As an aside: I'm not entirely sure I grok the credentials key. Prior to these changes it appears that credentials.secretContents and credentials.extraEnvVars end up injected as both files and env vars, and the deployments look for a key called cloud, but that isn't specified in the docs. Perhaps there is velero CLI context I don't have here.
Fixes #88
Fixes #89
Fixes #90
Fixes #139
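For illustration, the schedules change described above turns a map keyed by schedule name into an array with a name property per entry (the schedule name and field values here are hypothetical):

```yaml
# Before: schedules as a map with dynamic keys
schedules:
  daily-backup:
    schedule: "0 0 * * *"
    template:
      ttl: "240h"

# After: schedules as an array of named entries
schedules:
  - name: daily-backup
    schedule: "0 0 * * *"
    template:
      ttl: "240h"
```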