Migrating Helm chart template logic to server side #1121
Comments
This makes it harder for users to hack in changes (now they have to compile tidb-operator) or even to understand ahead of time what would happen. I wonder whether we could have a dry-run feature that emits all the resources tidb-operator would create. I guess this is similar to another feature request for a plan feature like Terraform's, but that one may be more focused on updates.
Since we will have the aggregated apiserver, we can emit a plan (server-side dry run) before the CRs are written into Kubernetes. I guess this can address your concern.
Yes, a server-side dry run (for both create and update) would help.
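For illustration, here is a minimal sketch of a server-side dry-run create with client-go (a recent, context-aware version of the client is assumed, and the ConfigMap is just a stand-in for the resources tidb-operator would generate). The apiserver runs admission and validation but persists nothing, returning the object as it would have been stored:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Stand-in object; in tidb-operator this would be a generated resource.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Data:       map[string]string{"key": "value"},
	}

	// DryRun: admission and validation run server-side, nothing is persisted.
	result, err := client.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}},
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server would store: %+v\n", result.ObjectMeta)
}
```

On recent kubectl versions the same behavior is available from the CLI via `kubectl apply --dry-run=server -f <file>`.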
Should we remove the dependency on these annotation keys? See tidb-operator/charts/tidb-cluster/templates/tidb-cluster.yaml, lines 12 to 15 at commit b918463.
Lots of code uses these keys.
Yes, we should; the controller should not rely on specific labels.
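To make the concern concrete, a hypothetical sketch of this kind of coupling; the annotation key below is invented for illustration, and the real keys are the ones referenced above in charts/tidb-cluster/templates/tidb-cluster.yaml:

```go
package controller

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleAnnotationKey is hypothetical, not an actual key used by tidb-operator.
const exampleAnnotationKey = "example.pingcap.com/bootstrapped"

// needsBootstrap reads metadata that the chart template is expected to set.
// If the chart renames or drops the annotation, the controller silently
// misbehaves -- which is why the controller should not depend on labels or
// annotations injected by templates.
func needsBootstrap(meta metav1.ObjectMeta) bool {
	return meta.Annotations[exampleAnnotationKey] != "true"
}
```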
Tracked in #1418.
Feature Request
Currently, many resources are created and managed solely by the Helm chart. Embedding logic in the chart templates makes it hard to add certain features, such as unit computation and conversion between PD/TiKV's units and Kubernetes's. It also makes it difficult to write automated tests, validation logic, and backward-compatible code. Besides, users have to use both kubectl and helm to manage TiDB clusters, which adds extra complexity.
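As one example of the unit-conversion logic that is awkward to express in chart templates but simple in controller code, a minimal Go sketch; the exact capacity format PD/TiKV expect (here a "GiB" string) is an assumption for illustration:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// quantityToCapacity converts a Kubernetes resource.Quantity (e.g. "10Gi")
// into a human-readable byte-size string of the kind PD/TiKV configs take.
func quantityToCapacity(q resource.Quantity) string {
	bytes := q.Value() // quantity rounded up to an integer number of bytes
	return fmt.Sprintf("%dGiB", bytes/(1<<30))
}

func main() {
	q := resource.MustParse("10Gi")
	fmt.Println(quantityToCapacity(q)) // prints "10GiB"
}
```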
Now that we have introduced the aggregated apiserver, we should migrate the Helm chart template logic to the controller-manager or the aggregated apiserver.
Below is a list of the resources we should migrate to the server side:
ref: server dry-run #999