
Migrating Helm chart template logic to server side #1121

Closed · 6 of 8 tasks
tennix opened this issue Nov 8, 2019 · 9 comments · Fixed by pingcap/tipocket#5

tennix (Member) commented Nov 8, 2019

Feature Request

Currently, many resources are created and managed solely by the Helm chart. Embedding logic in the chart templates makes it hard to add features such as unit computation and conversion between PD/TiKV's formats and Kubernetes's. It also makes it difficult to write automated tests, validation logic, and backward-compatible code. Besides, users have to use both kubectl and helm to manage TiDB clusters, which adds extra complexity.
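
To make the conversion point concrete, here is a minimal Go sketch of the kind of logic a chart template cannot express well. The helper name tikvCapacity and the exact suffix handling are assumptions for illustration, not tidb-operator's actual code:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// tikvCapacity renders a Kubernetes storage quantity (e.g. "100Gi") as a
// whole-GB capacity string for TiKV's config. Hypothetical helper; the
// suffix TiKV expects here is an assumption.
func tikvCapacity(q resource.Quantity) string {
	const gib = int64(1024 * 1024 * 1024)
	return fmt.Sprintf("%dGB", q.Value()/gib)
}

func main() {
	fmt.Println(tikvCapacity(resource.MustParse("100Gi"))) // prints "100GB"
}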

Now that we have introduced the aggregated apiserver, we should migrate the Helm chart template logic to the controller-manager or the aggregated apiserver.

Below is the list of resources we should migrate to the server side:

ref: server dry-run #999

gregwebs (Contributor) commented Nov 8, 2019

This makes it harder for users to hack changes (now they have to compile tidb-operator) or even to understand ahead of time what would happen. I am wondering if we can have a dry-run feature to emit all the resources that tidb-operator would create. I guess this is similar to another feature request for a plan feature like Terraform's, but that may be more focused on updates?

tennix (Member, Author) commented Nov 8, 2019

Since we will have the aggregated apiserver, we can emit a plan (a server-side dry run) before the CRs are written into k8s. I guess this can address your worries.
But yes, changing the spec or resources we don't expose requires changing Go code and re-compiling tidb-operator. We lose some flexibility but gain more correctness, just as with dynamic vs. static programming languages.

gregwebs (Contributor) commented Nov 8, 2019

Yes, server-side dry-run (create and update) would help.
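
For reference, the Kubernetes API supports this natively: setting DryRun on a write request makes the apiserver run validation, defaulting, and admission, then return the resulting object without persisting it. A minimal client-go sketch; the ConfigMap is just a placeholder object:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "dry-run-demo", Namespace: "default"},
		Data:       map[string]string{"key": "value"},
	}
	// DryRunAll asks the apiserver to validate, default, and admit the
	// object server-side, returning the result without persisting it.
	out, err := client.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		panic(err)
	}
	fmt.Println("would create:", out.Name)
}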

aylei (Contributor) commented Nov 11, 2019

@tennix @gregwebs I referenced the dry-run issue in the description. However, one thing that worries me is that PD/TiKV/TiDB are still created with ordered dependencies in our controller. I remember you have a proposal to create them simultaneously; how is it going? @gregwebs
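
To illustrate the ordered-dependency concern, here is a hypothetical sketch of the shape of such a reconcile pass (not the operator's actual code): an earlier member's failure or wait short-circuits the later ones, so a single dry-run pass cannot show resources gated behind an unready member.

package main

import "fmt"

// TidbCluster stands in for the real CR type; the sync functions are
// hypothetical stand-ins for the per-component reconcile logic.
type TidbCluster struct{ Name string }

func syncPD(tc *TidbCluster) error   { fmt.Println("sync PD"); return nil }
func syncTiKV(tc *TidbCluster) error { fmt.Println("sync TiKV"); return nil }
func syncTiDB(tc *TidbCluster) error { fmt.Println("sync TiDB"); return nil }

// syncOrdered reconciles members in dependency order: an error from an
// earlier member stops reconciliation of the later ones for this round.
func syncOrdered(tc *TidbCluster) error {
	for _, f := range []func(*TidbCluster) error{syncPD, syncTiKV, syncTiDB} {
		if err := f(tc); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = syncOrdered(&TidbCluster{Name: "demo"})
}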

aylei self-assigned this Nov 11, 2019

gregwebs (Contributor) commented

> how is it going then? @gregwebs

We just need to finalize testing on the PR now.

cofyc (Contributor) commented Dec 25, 2019

Should we remove dependencies on these label keys?

app.kubernetes.io/name: {{ template "chart.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: tidb-cluster

Lots of code uses these keys.

aylei (Contributor) commented Dec 25, 2019

Yes, we should; the controller should not rely on specific labels.
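
A minimal sketch of that direction, with the controller deriving its labels from the cluster name in the CR instead of Helm's release values; the helper names are hypothetical, not tidb-operator's actual API:

package label

import "k8s.io/apimachinery/pkg/labels"

// clusterLabels builds the label set from the CR itself, so nothing
// depends on Helm's {{ .Release.Name }} or {{ .Release.Service }}.
// Hypothetical helper, for illustration.
func clusterLabels(clusterName string) map[string]string {
	return map[string]string{
		"app.kubernetes.io/name":       "tidb-cluster",
		"app.kubernetes.io/managed-by": "tidb-operator",
		"app.kubernetes.io/instance":   clusterName,
		"app.kubernetes.io/component":  "tidb-cluster",
	}
}

// clusterSelector lets the controller list the resources it owns,
// independent of how they were originally rendered.
func clusterSelector(clusterName string) labels.Selector {
	return labels.SelectorFromSet(clusterLabels(clusterName))
}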

aylei (Contributor) commented Dec 25, 2019

Tracked in #1418.

aylei (Contributor) commented Dec 30, 2019

closed

aylei closed this as completed Dec 30, 2019