Expose system component ports to the cluster #358
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: kawych. Assign the PR to them by writing /assign @kawych in a comment. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/cc @serathius
@kawych: GitHub didn't allow me to request PR reviews from the following users: serathius. Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
@serathius: changing LGTM is restricted to assignees, and only kubernetes-sigs/cluster-api repo collaborators may be assigned issues. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/uncc @timothysc
Please change the release notes to be written from the action-based perspective of a user, for example, something like:
Ideally the notes would specify an example of what a "system component" is.
Done
Also added Kube Proxy to the exposed components. PTAL |
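For context: kube-proxy serves its metrics endpoint on localhost by default, so exposing it means rebinding that endpoint. A minimal sketch of the relevant setting (the 0.0.0.0 binding is illustrative, not necessarily this PR's exact change):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# default is 127.0.0.1:10249; binding to 0.0.0.0 makes /metrics
# reachable from elsewhere in the cluster
metricsBindAddress: 0.0.0.0:10249
```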
@@ -252,6 +252,12 @@ data:
      api:
        advertiseAddress: ${PUBLICIP}
        bindPort: ${PORT}
      etcd:
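For readers without the full diff: the added etcd block is part of threading extra flags through the kubeadm MasterConfiguration. A hypothetical sketch of how such a v1alpha1 configuration can rebind the master components (the field names are kubeadm's; the 0.0.0.0 values are illustrative, not necessarily the PR's exact change):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: ${PUBLICIP}
  bindPort: ${PORT}
etcd:
  extraArgs:
    # expose etcd's client port beyond localhost (the change debated below)
    listen-client-urls: "https://0.0.0.0:2379"
controllerManagerExtraArgs:
  address: "0.0.0.0"   # metrics port 10252; default bind is 127.0.0.1
schedulerExtraArgs:
  address: "0.0.0.0"   # metrics port 10251; default bind is 127.0.0.1
```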
Please update clusterctl/examples/vsphere/provider-components.yaml.template as well.
There is no analogous MasterConfiguration there. For vSphere, this is configured in cloud/vsphere/templates.go (hardcoded in the machine controller image), and this PR edits that file as well. Long term I think it would be useful to have consistency, although I don't have the full context; it was probably implemented differently for a reason.
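(To make the inconsistency concrete: on vSphere the equivalent settings live inside a template string hardcoded in cloud/vsphere/templates.go and rendered by the machine controller, rather than arriving as clusterctl input. Roughly this shape; the placeholder names are hypothetical:)

```yaml
# embedded as a Go string constant in cloud/vsphere/templates.go
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: {{ .PublicIP }}   # hypothetical template field names
  bindPort: {{ .Port }}
```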
@kawych - I don't think it's a good idea to expose etcd to the cluster. etcd should only be accessible to the apiserver; otherwise you can bypass authn/z and have root-level access to the cluster. It's going to be difficult to lock it down later if it's exposed now, as we will inevitably gain new dependencies that assume they can reach it. Can you explain more about why this is necessary? I've enqueued this PR into the community meeting tomorrow if you'd rather do it verbally than through comments.
We would like to have access to metrics from the cluster. Could we upgrade the deployed etcd to 3.3+?
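(Why 3.3 matters here: etcd 3.3 added a dedicated metrics listener, so /metrics can be exposed without opening the client API that the comment above warns about. A sketch in kubeadm extraArgs form; the port and URLs are illustrative:)

```yaml
etcd:
  extraArgs:
    # client API stays on localhost, reachable only by the apiserver
    listen-client-urls: "https://127.0.0.1:2379"
    # etcd >= 3.3: separate listener serving only /metrics and /health
    listen-metrics-urls: "http://0.0.0.0:2381"
```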
@roberthbailey |
All of the files modified by this PR have been migrated out of these repositories, so I'm going to close this PR. Please re-open against the specific provider repositories that are affected.
What this PR does / why we need it:
Expose system component ports to the cluster. The goal is to expose metrics from the master to a monitoring agent that runs in the cluster.
Special notes for your reviewer:
I'm adding these endpoints to make sure the components running on the master can be monitored. Is it OK to expose these endpoints from a security point of view?
Also, the way MasterConfiguration is specified is inconsistent between GCP (part of the clusterctl input) and vSphere (part of the machine controller image). Is there some fundamental reason to do it differently, or are there plans to converge?
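(For illustration of the end goal: once these ports are reachable, a monitoring agent can scrape them. A hypothetical Prometheus snippet using the components' conventional metrics ports of that era; the ${MASTER_IP} target is a placeholder:)

```yaml
scrape_configs:
  - job_name: kube-controller-manager
    static_configs:
      - targets: ["${MASTER_IP}:10252"]   # controller-manager /metrics
  - job_name: kube-scheduler
    static_configs:
      - targets: ["${MASTER_IP}:10251"]   # scheduler /metrics
  - job_name: kube-proxy
    static_configs:
      - targets: ["${MASTER_IP}:10249"]   # kube-proxy /metrics
```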
Release note:
@kubernetes/kube-deploy-reviewers
@krousey @karan @jessicaochen