Clean up helm chart + kustomize overlays #797
+1. We also need to go over the values.yaml to see if we can restructure it to be more consistent. For example, controller values are at the top level but daemonset values are under
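As a purely illustrative sketch of what "more consistent" could mean (the key names below are hypothetical, not the chart's actual layout), each component would get its own top-level block so nothing component-specific floats at the root:

```yaml
# Hypothetical restructured values.yaml: every component nests its own
# settings, so controller and daemonset values sit at the same depth.
controller:
  replicaCount: 2
  tolerations: []
node:                   # the daemonset
  tolerateAllTaints: true
  tolerations: []
```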
should be the default everywhere
I can take a look at this. I will likely need some info from someone more familiar with the driver itself. I don't believe I have the necessary permissions to assign myself this issue. I can fold in several of the other helm issues while looking at this as well.
Ideally we should have at least one version where we support both the old fields and the new fields, with new fields being prioritized. Then we can remove the old values in the next update. Is that feasible? I am not sure how big the changes are, so the templates might get spaghetti real fast :/
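One way to support both for a release is a fallback in the template. This is a sketch, not the chart's actual code, and it assumes `controller.tolerations` is the new nested field while top-level `tolerations` is the deprecated one:

```yaml
# Sketch: prefer the new nested field, fall back to the deprecated
# top-level one. Sprig's `default` returns its second argument when
# that argument is non-empty, otherwise the first.
tolerations:
  {{- toYaml (default .Values.tolerations .Values.controller.tolerations) | nindent 2 }}
```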
@ayberk @wongma7 Here is the list of other issues I have looked at and think I can incorporate into the PR for this issue: #448, #758, #746, #722, and #512. I will likely need some advice on dealing with the CRDs. I will also need some information about what to call the organization on artifacthub.io when registering this project (they require a name for the org and can optionally have a display name, home URL, and description). Let me know if you see any issues with my selections or any other information you think would be relevant. Is this project still supporting Helm v2, or should we bump the apiVersion in the Chart.yaml?
I can do this in the chart. Do you want an additional PR that does the second part of the move to new fields? Also, there are a fair number of values that are currently shared between the controller and the snapshot controller. Is that desired, or should those be separated while we are making these changes?
@wongma7 It seems to me like we should leave tolerateAllTaints on for the node, as that is a fairly common toleration for daemonsets, and just get rid of that functionality for the controller and the snapshot controller. Both of the controllers have the ability to add arbitrary tolerations anyway, and if someone really wants that they can just add it back. It isn't really a normal toleration to have on something that isn't a daemonset, and it causes issues with draining nodes and evictions. Thoughts?
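For reference, a tolerate-all-taints toleration is the standard empty-key/`Exists` form:

```yaml
# Tolerates every taint: an empty key with operator Exists matches
# all taint keys, values, and effects.
tolerations:
  - operator: Exists
```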
@krmichel Really appreciate the help here.
Let's skip the artifacthub for now because there might be some internal approval requirements. I can follow up on that.
Hmmm. I'm pretty sure we've been using helm v3 exclusively, so we can bump it.
In the first PR, if you can add the new fields and give them priority over the old ones, that'd be great. Then in the next PR, with a new release, we can eventually remove the old fields. Does that answer your question?
They should be separated. The recommendation from upstream is to deploy the snapshot controller with your Kubernetes distribution, so we will probably just delete the snapshot controller from the driver. It's better if we split everything now.
On second thought, maybe we shouldn't split the snapshot controller because it might end up being throw-away work.
I will bump the Helm apiVersion in the second PR where I remove the deprecated values, since it will be a breaking change anyway.
@wongma7 @ayberk I need to know whether to remove the snapshot controller or to add the CRDs. It seems to me like removing the snapshot controller is the better course of action. One issue would be that if we add the CRDs to the chart, I can't make it so they only get installed when snapshotting is enabled. CRDs aren't templated, so they would just install if we added them (unless the
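Some context on Helm 3's CRD handling: files placed in the chart's `crds/` directory are installed verbatim before templates render, are never templated, and are skipped on `helm upgrade` and `helm uninstall` (or entirely, with `--skip-crds`). Gating CRDs on a value only works if they live under `templates/` instead, roughly like this sketch:

```yaml
# Sketch: a CRD kept under templates/ (not crds/) can be gated on a
# value, at the cost of losing Helm's special CRD lifecycle handling.
{{- if .Values.enableVolumeSnapshot }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: volumesnapshots.snapshot.storage.k8s.io
# ... spec omitted; the upstream external-snapshotter CRDs would go here
{{- end }}
```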
@wongma7 Is this indicating to rename the kustomize files to have the same names as the files from the helm chart that generate them?
@wongma7 I assume this means to move the snapshot-related kustomize files from alpha to stable? Is that correct? Also, is the resizer still in alpha, or should it move too?
While we are splitting things out, do we want to have different resources for each of the containers in the controller, or continue to have them all use the same values? Same question for the containers in the node. I don't know a lot about the resource requirements of the various images, but it does seem unlikely that a sidecar image for liveness would need the same amount of resources as the controller itself.
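Per-container resources could look something like the following values sketch (the key names here are illustrative assumptions, not the chart's actual structure):

```yaml
# Hypothetical per-container resources: each container gets its own
# block instead of sharing one value across the whole pod.
controller:
  resources:                  # the ebs-plugin container itself
    requests: {cpu: 100m, memory: 128Mi}
sidecars:
  livenessProbe:
    resources:                # the liveness probe needs far less
      requests: {cpu: 10m, memory: 40Mi}
```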
We should just get rid of the snapshot controller, honestly. We just need to update the docs to point to where to find the installation steps. This will most certainly break some customers, but we have to pull the plug eventually. @krmichel Let's not change what we have right now. One step at a time. We'll have perf testing later; we can adjust them then.
I can remove the snapshot controller when I remove the old values and bump the helm api. I will create an issue for round two of these changes. |
@krmichel sorry I missed your questions. What I meant by "snapshot is not alpha anymore" is we should not gate its installation by a flag, and it should not be in the alpha kustomize overlay; it should be installed by default. As for the snapshot controller and its associated CRDs, if there is a way to install it only if it DNE, then I would like that. I think the vast majority of users will appreciate it being installed together with the driver, even if technically it can be installed separately.
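A sketch of the "install only if it doesn't already exist" idea, using Helm's built-in Capabilities object (this checks API group availability at render time; it is an approximation, not the chart's actual logic):

```yaml
# Sketch: install the snapshot controller only when the snapshot API
# group is not already served by the cluster.
{{- if not (.Capabilities.APIVersions.Has "snapshot.storage.k8s.io/v1") }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snapshot-controller
# ... container spec for the upstream snapshot-controller image
{{- end }}
```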
(sorry, just catching up with past discussion). However, I am fine if we postpone the snapshot controller/CRD question for later. Currently we don't do it, and if we start doing it then we are setting the expectation that we will always do it. BTW, I was planning to do a refactor similar to kubernetes-sigs/aws-efs-csi-driver#406 that more clearly splits up the
Latest discussion on enableVolumeSnapshot: maybe we should deprecate it and rename it to installSnapshotController. On one hand, I have heard the argument that the snapshot controller should be pre-installed by the k8s distro/vendor or cluster admin, because it is a reusable component for multiple CSI drivers, not specific to EBS, so we should not even install it. On the other, I know for a fact that EKS doesn't do that : ) and so users should at least have the option to deploy the snapshot controller.
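The rename could be phased in the same way as the other deprecated values, honoring both flags for one release. A sketch of the guard (flag names as proposed above):

```yaml
# Sketch: treat the deprecated flag as an alias for one release cycle,
# with the new flag taking effect whenever either is set.
{{- if or .Values.installSnapshotController .Values.enableVolumeSnapshot }}
# ... snapshot controller manifests
{{- end }}
```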
/kind bug