
Kubelet use cluster-wide root CA, not per-node (bsc#1155810) #832

Merged
merged 1 commit into from Nov 21, 2019

Conversation


@jenting jenting commented Nov 18, 2019

Why is this PR needed?

The kubelet self-signs a root CA per node and also self-signs the kubelet server certificate. This means external services like metrics-server cannot use trusted TLS to get CPU/memory information from the kubelet.

Currently, the recommended way to integrate metrics-server is to add the kubelet-insecure-tls flag, which is not suitable for a production-grade cluster.

Fixes https://github.com/SUSE/avant-garde/issues/1005

Reminder: Add the "fixes bsc#XXXX" to the title of the commit so that it will
appear in the changelog.

What does this PR do?

  1. skuba self-signs a cluster-wide kubelet root CA with common name kubelet-ca and stores kubelet-ca.crt/kubelet-ca.key in the local bootstrap folder.
  2. Upload the root CA certificate and key kubelet-ca.crt/kubelet-ca.key to every control plane node as /var/lib/kubelet/pki/kubelet-ca.crt and /var/lib/kubelet/pki/kubelet-ca.key.
  3. Upload the root CA certificate kubelet-ca.crt to every worker node as /var/lib/kubelet/pki/kubelet-ca.crt.
  4. skuba signs a per-node kubelet server certificate/key with the cluster-wide kubelet root CA (steps 1 and 4 are sketched below).
  5. Upload the server certificate/key kubelet.crt/kubelet.key to each node as /var/lib/kubelet/pki/kubelet.crt and /var/lib/kubelet/pki/kubelet.key.
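For illustration, a minimal standard-library Go sketch of steps 1 and 4 (this is not the actual skuba code: the helper names newKubeletCA/newServerCert, key size, and validity periods are assumptions; the kubelet-ca common name and the SAN layout follow this PR):

// Minimal sketch (not the skuba implementation): self-sign a cluster-wide
// "kubelet-ca" root CA, then sign a per-node kubelet server certificate with it.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newKubeletCA creates a self-signed root CA with common name "kubelet-ca".
func newKubeletCA() (*x509.Certificate, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "kubelet-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

// newServerCert signs a kubelet serving certificate for one node with the CA,
// adding the nodename, "localhost" and the loopback/node IPs as SANs.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, nodeName string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: nodeName},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{nodeName, "localhost"},
		IPAddresses:  append(ips, net.ParseIP("127.0.0.1"), net.IPv6loopback),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	ca, caKey, err := newKubeletCA()
	if err != nil {
		panic(err)
	}
	der, _, err := newServerCert(ca, caKey, "caasp-worker-0", []net.IP{net.ParseIP("10.86.0.196")})
	if err != nil {
		panic(err)
	}
	fmt.Println("server certificate DER bytes:", len(der))
}

The real implementation additionally PEM-encodes the files and uploads them over SSH, as described in steps 2, 3 and 5.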

Anything else a reviewer needs to know?

This PR includes the following scenarios:

  1. Deploy a greenfield k8s cluster: generate a cluster-wide kubelet root CA (the customer can provide their own kubelet root CA cert/key before skuba node bootstrap)
  2. Upgrade an existing brownfield k8s cluster: re-generate a cluster-wide kubelet root CA
  3. Rotate the server certificate when doing a node upgrade.

Info for QA

Please follow the steps in Status BEFORE/AFTER applying the patch to validate this.
Please double-verify on all platforms: OpenStack, VMware, bare metal.

Related info

Info that can be relevant for QA:

Status BEFORE applying the patch

The only way to install metrics-server is to use insecure-TLS mode.

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --tiller-image registry.suse.com/caasp/v4/helm-tiller:2.14.2 --service-account tiller

helm install stable/metrics-server --set 'args={--kubelet-insecure-tls,--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS}' --namespace kube-system --name metrics

metrics-server can fetch CPU/memory from the kubelet on port 10250 without TLS verification, so kubectl top node/kubectl top pod return data.

Status AFTER applying the patch

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --tiller-image registry.suse.com/caasp/v4/helm-tiller:2.14.2 --service-account tiller

helm fetch stable/metrics-server
tar zxvf metrics-server-2.8.8.tgz
vi metrics-server/values.yaml

Mount the kubelet root CA certificate from the host path and tell metrics-server where the kubelet-certificate-authority file is:

args:
- --kubelet-certificate-authority=/var/lib/kubelet/pki/kubelet-ca.crt
- --kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS

extraVolumeMounts:
- name: pki
  mountPath: /var/lib/kubelet/pki/kubelet-ca.crt
  readOnly: true
 
extraVolumes:
- name: pki
  hostPath:
    path: /var/lib/kubelet/pki/kubelet-ca.crt
    type: File

helm install metrics-server --namespace kube-system --name metrics

metrics-server can fetch CPU/memory from the kubelet on port 10250 over verified TLS, so kubectl top node/kubectl top pod return data.
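For illustration, a minimal Go sketch (not part of the PR) of what "verified TLS" means here: the client trusts only kubelet-ca.crt, and the kubelet's serving certificate must chain to it. The node address below is a placeholder:

// Minimal sketch (not from the PR): check that the kubelet serving certificate
// on port 10250 chains to the cluster-wide kubelet-ca.crt.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse kubelet-ca.crt")
	}
	// Placeholder node address; the IP (or name) used here must appear in the
	// kubelet server certificate SANs for verification to succeed.
	nodeAddr := "10.86.0.196:10250"
	conn, err := tls.Dial("tcp", nodeAddr, &tls.Config{RootCAs: pool})
	if err != nil {
		panic(err) // handshake or certificate verification failed
	}
	defer conn.Close()
	fmt.Println("kubelet serving certificate verified against kubelet-ca.crt")
}

This is essentially the same check the openssl s_client command in the QA section below performs.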

Docs

SUSE/doc-caasp#602

Merge restrictions

(Please do not edit this)

We are in v4-maintenance phase, so we will restrict what can be merged to prevent unexpected surprises:

What can be merged (merge criteria):
    2 approvals:
        1 developer: code is fine
        1 QA: QA is fine
    there is a PR for updating documentation (or a statement that this is not needed)

Signed-off-by: JenTing Hsiao [email protected]

@jenting jenting added bug Something isn't working wip labels Nov 18, 2019
@jenting jenting changed the title Kubelet use cluster-wide root CA, not per-node Kubelet use cluster-wide root CA, not per-node (bsc#1155810) Nov 18, 2019
@jenting jenting self-assigned this Nov 19, 2019
@jenting jenting added documentation and removed wip labels Nov 19, 2019
@innobead
Contributor

LGTM, just one more thing to confirm.

Do you have any idea for cert rotation?
Should we support it in node upgrade?
Or just support a manual operation in the documentation?

P.S. IIRC we also do not take care of kubelet cert rotation in the current skuba, because ServerTLSBootstrap defaults to false.

@jenting
Author

jenting commented Nov 21, 2019

Do you have any idea for cert rotation?
Should we support it in node upgrade?
Or just support a manual operation in the documentation?

This PR also addresses server cert rotation via node upgrade.

P.S. IIRC we also do not take care of kubelet cert rotation in the current skuba, because ServerTLSBootstrap defaults to false.

If we want server certificate rotation to happen automatically rather than via node upgrade, we need a PKI server in the Kubernetes cluster to sign the CSR requests from the kubelet. I did not take that design into skuba, since an internal CSR signer like this repo might have other security issues.
Therefore, I chose to let skuba manage the kubelet server certificate.

Ref: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation

Contributor

@c3y1huang c3y1huang left a comment

LGTM

@jenting jenting merged commit c574727 into SUSE:master Nov 21, 2019
@jenting jenting deleted the bsc-1155810 branch November 21, 2019 04:39
Contributor

@ereslibre ereslibre left a comment

Thank you @jenting. Sorry for not being able to review before.

Comment on lines +94 to +114
	// Add the nodename as either an IP or a DNS SAN, depending on what it parses as.
	host := t.target.Nodename
	altNames := certutil.AltNames{}
	if ip := net.ParseIP(host); ip != nil {
		altNames.IPs = append(altNames.IPs, ip)
	} else {
		altNames.DNSNames = append(altNames.DNSNames, host)
	}

	// Add every IP address reported by the node itself ("hostname -I") to the SANs.
	stdout, _, err := t.silentSsh("hostname", "-I")
	if err != nil {
		return err
	}
	for _, addr := range strings.Split(stdout, " ") {
		if ip := net.ParseIP(addr); ip != nil {
			altNames.IPs = append(altNames.IPs, ip)
		}
	}

	// Default SANs: the bootstrap/join target address plus the loopback addresses and "localhost".
	alternateIPs := []net.IP{net.ParseIP(t.target.Target), net.IPv4(127, 0, 0, 1), net.IPv6loopback}
	alternateDNS := []string{"localhost"}
Contributor

@ereslibre ereslibre Nov 21, 2019

I need to better understand the metrics-server, but I'm going to make some assumptions, please correct me if I'm wrong.

  • The metrics-server lists the nodes in the cluster using the kube api
  • The metrics-server then lists for each node the IP addresses and DNS names [if any] (internal, external...)

As I understand, its --kubelet-preferred-address-types argument determines how it will try to reach each Kubelet API (e.g. InternalIP,ExternalIP,Hostname -- as per https://github.com/kubernetes-sigs/metrics-server/blob/019bda9fb8562dbc7996f0c38e31c709628df316/README.md).

Then, what I wonder is what our recommendation will be. Some thoughts:

  • The InternalIP will be detected by this script by running hostname -I (a command that doesn't work on all distros btw, but that's a different story). However, the metrics-server might be unable to talk to the InternalIP of the kubelets -- maybe it isn't routable.

  • The ExternalIP won't always be available, e.g. if we deployed on top of OpenStack without cloud integration. Also, if we had it enabled, our certificate would not include this external IP: the machine doesn't know about this IP address, since some upper layer uses it and routes traffic to the internal IP (the one that we know about). So the certificate wouldn't include the external IP, and we wouldn't have any way to autodetect it prior to the kubelet registering.

  • About Hostname, I don't think our certificate would include this in any way, because we cannot know what external hostname the node has without cloud integration.

So I think we need to think about how we are going to do this integration, mostly for the different cases, and also about to what extent we want to play "smart" and autodetect many things, versus to what extent we just want to ask the user on the CLI for extra SANs directly, so they can populate those directly from the CLI, since I fear there are some cases that we will not be able to autodetect.

Author

@jenting jenting

I need to better understand the metrics-server, but I'm going to make some assumptions, please correct me if I'm wrong.

  • The metrics-server lists the nodes in the cluster using the kube api

Yes, https://github.com/kubernetes-sigs/metrics-server/blob/019bda9fb8562dbc7996f0c38e31c709628df316/pkg/sources/summary/summary.go#L261

  • The metrics-server then lists for each node the IP addresses and DNS names [if any] (internal, external...)

https://github.com/kubernetes-sigs/metrics-server/blob/019bda9fb8562dbc7996f0c38e31c709628df316/pkg/sources/summary/addrs.go#L56-L67

As I understand, its --kubelet-preferred-address-types argument determines how it will try to reach each Kubelet API (e.g. InternalIP,ExternalIP,Hostname -- as per https://github.com/kubernetes-sigs/metrics-server/blob/019bda9fb8562dbc7996f0c38e31c709628df316/README.md).

Then, what I wonder is what our recommendation will be. Some thoughts:

  • The InternalIP will be detected by this script by running hostname -I (a command that doesn't work on all distros btw, but that's a different story). However, the metrics-server might be unable to talk to the InternalIP of the kubelets -- maybe it isn't routable.

In my opinion, the main purpose of metrics-server is to let the kubectl top [pod|node] commands work, and further, to let Horizontal Pod Autoscaling work.
Therefore, I assume that the user would install metrics-server inside the Kubernetes cluster, where the in-cluster networking is routable to each component.

But there are other scenarios, like the user using curl or other tools outside the Kubernetes cluster to scrape the kubelet node metrics; I have not thought about this too much, to be honest.

  • The ExternalIP won't always be available, e.g. if we deployed on top of OpenStack without cloud integration. Also, if we had it enabled, our certificate would not include this external IP: the machine doesn't know about this IP address, since some upper layer uses it and routes traffic to the internal IP (the one that we know about). So the certificate wouldn't include the external IP, and we wouldn't have any way to autodetect it prior to the kubelet registering.

The ExternalIP is filled in from skuba bootstrap/join --target <ip-address/fqdn>.
I tested on OpenStack, and you are right, metrics-server can only access the kubelet via the InternalIP, not the ExternalIP.

  • About Hostname, I don't think our certificate would include this in any way, because we cannot know what external hostname the node has without cloud integration.

Right now, the hostname is filled in the same way as for skuba bootstrap/join.
Actually, the kubelet self-signed server certificate SAN only contains the nodename, and mostly it is not accessible from metrics-server since the hostname cannot be resolved in CoreDNS.

So I think we need to think about how we are going to do this integration, mostly for the different cases, and also about to what extent we want to play "smart" and autodetect many things, versus to what extent we just want to ask the user on the CLI for extra SANs directly, so they can populate those directly from the CLI, since I fear there are some cases that we will not be able to autodetect.

Letting the user input extra SANs might not help too much, since each node has a different IP address/FQDN.
The smarter way is to sign as many SANs into the server certificate as possible.

I have only double-verified this PR on OpenStack; I think we could ask QA for help to double-verify on VMware and bare metal (KVM might be needed in the future).
The one case I am not quite sure about is integration with a cloud provider. I am not sure which DHCP server assigns each node its IP address. If you have any idea, please share it with me.

BTW, I took etcd as a reference for the SANs field; that's why the SANs have the IPv4/IPv6 loopback interfaces in them.

Contributor

@ereslibre ereslibre Nov 22, 2019

In my opinion, the main purpose of metrics-server is to let the kubectl top [pod|node] commands work, and further, to let Horizontal Pod Autoscaling work.

👍, yes.

Therefore, I assume that the user would install metrics-server inside the Kubernetes cluster, where the in-cluster networking is routable to each component.

Absolutely, all my comments were assuming the metrics-server is installed as a regular cluster workload.

Letting the user input extra SANs might not help too much, since each node has a different IP address/FQDN.

Hm, what I had in mind is to provide the extra SANs on every bootstrap/join command for that specific Kubelet API, keep reading on the next point :)

The smarter way is to sign as many SANs into the server certificate as possible.

Let's keep it as it is for now, as simple as possible, and revisit if we see that we are lacking some entries in environments that we cannot foresee right now.

If we see we are missing some SANs, I would be in favour of being more secure, so for each skuba node bootstrap and skuba node join the user would be able to provide a --kubelet-api-extra-sans argument to both commands. Ideally, this should be saved inside our skuba-config ConfigMap in a map or something similar (e.g. with key nodename and value a list of extra SANs), so we can keep this configuration when renewing certificates without having to ask for these arguments again in the future.

Also, as an aside, I would have to check, but I wonder how the kubelet behaves if it has cloud integration enabled. Will it recreate the kubelet API certificate when ExternalDNS or ExternalIP is filled in by the cloud integration, and will it add those entries to the local kubelet API certificate? In that case our solution wouldn't be strictly a superset of the certificate generated by the kubelet.
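For illustration only, a minimal sketch of the data shape such a per-node extra-SANs entry could take inside the skuba-config ConfigMap; the key name and layout are hypothetical, not an agreed design:

// Hypothetical sketch: per-node extra SANs kept as a map keyed by nodename,
// serialized so it could be stored as a single value in the skuba-config
// ConfigMap. The key name and layout are assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	extraSANs := map[string][]string{
		"caasp-master-0": {"master-0.example.com", "203.0.113.10"},
		"caasp-worker-0": {"worker-0.example.com"},
	}
	raw, err := json.Marshal(extraSANs)
	if err != nil {
		panic(err)
	}
	// e.g. data["kubelet-api-extra-sans"] = string(raw) inside the ConfigMap
	fmt.Println(string(raw))
}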

Contributor

@ereslibre ereslibre

Just checked, everything should be fine, even with cloud integration enabled. This is the certificate we are generating for the kubelet API with this PR:

caasp-master-ereslibre-0:/var/lib/kubelet/pki # openssl x509 -in kubelet.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 6430524848358850734 (0x593dd04779c080ae)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubelet-ca
        Validity
            Not Before: Nov 22 11:36:08 2019 GMT
            Not After : Nov 21 11:36:52 2020 GMT
        Subject: CN = caasp-master-ereslibre-0
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b4:0e:fb:51:84:5c:dc:5c:61:05:11:e3:bc:bf:
                    32:77:c4:8a:57:67:34:9b:17:0d:1e:ce:47:93:35:
                    de:dd:81:e6:56:07:ef:14:bc:1e:e5:09:2c:84:94:
                    de:57:39:0e:59:01:c2:37:01:71:f5:8e:45:cf:f3:
                    35:c4:29:6e:5a:17:2f:99:f7:74:55:6d:49:2c:0c:
                    e8:5d:54:d9:8e:53:44:fe:c3:02:3f:f7:06:4e:23:
                    b1:09:a3:04:c4:62:c0:9c:ca:d4:44:bc:d4:1a:48:
                    85:bd:d6:b7:47:9d:fe:9c:a2:81:a4:b5:5f:a4:4c:
                    5d:a6:de:d1:32:3f:ee:7a:6c:21:f2:65:e8:6e:7a:
                    97:97:0d:aa:7c:e5:85:b5:f4:8a:81:95:cd:b3:12:
                    e7:f8:30:f0:54:ca:04:06:5c:fd:3e:ac:97:3e:17:
                    32:10:ec:6a:c6:f4:48:6a:db:cf:bc:69:05:b5:52:
                    22:b2:3b:a1:79:d8:15:33:ea:5a:13:0e:c3:f1:e9:
                    a8:48:ca:dc:e3:a7:b8:cb:b0:b6:44:69:c5:38:86:
                    77:08:c2:38:a5:ab:0a:bf:d3:0a:5f:2a:64:a3:c7:
                    de:71:2e:0b:34:4c:e0:de:d6:c0:b2:70:e2:de:47:
                    4c:fb:79:a9:27:e1:35:72:53:03:75:a0:86:c8:69:
                    2a:3b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Subject Alternative Name: 
                DNS:caasp-master-ereslibre-0, DNS:localhost, IP Address:172.28.0.6, IP Address:10.86.0.196, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
    Signature Algorithm: sha256WithRSAEncryption
         7e:c3:2a:e2:bb:fc:48:07:aa:f7:9f:61:c7:75:d8:17:f8:c2:
         fc:6d:24:bf:8c:ed:b3:43:97:ee:54:c0:1d:73:1c:7b:0c:9b:
         58:3f:83:4e:4d:00:d8:93:22:a2:c8:d2:0b:f3:78:c3:6e:2d:
         a1:3e:9e:3b:6f:63:a2:96:2e:55:59:69:f9:f1:f2:36:2b:5e:
         68:3d:6b:7c:d3:62:81:b8:95:18:a0:21:b6:4d:00:a0:a1:b2:
         37:7b:1a:d3:d7:fb:be:2a:00:37:ec:c9:71:d3:91:0d:ca:f9:
         cc:f1:12:c1:96:8b:16:50:03:53:42:35:21:77:9f:43:fa:2c:
         f5:8a:75:4f:c6:eb:70:61:c9:53:2b:f8:bf:23:78:3f:fc:60:
         b9:a2:76:2b:69:15:71:dd:69:03:cd:c6:f1:c4:d7:00:08:0a:
         c9:2e:69:81:07:9e:3a:ed:13:49:f0:93:83:8e:c9:e4:ac:d2:
         ac:31:aa:c6:a4:28:36:24:77:ff:9e:15:e1:43:68:9b:4c:2a:
         e6:74:65:84:12:ec:21:e4:0b:db:22:ef:83:96:a8:3f:e9:5d:
         4b:45:dc:80:69:e0:93:9e:a4:9a:0e:29:4f:a9:bb:1e:1c:67:
         8c:89:8b:9a:b9:da:c9:49:6d:ec:71:e5:69:dc:13:b1:7e:f9:
         43:a3:9d:de

I tried removing all the files (kubelet-ca.{crt,key} and kubelet.{crt,key}) and restarting the kubelet, and this was the certificate I got for kubelet.crt:

caasp-master-ereslibre-0:/var/lib/kubelet/pki # openssl x509 -in kubelet.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2 (0x2)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = caasp-master-ereslibre-0-ca@1574422971
        Validity
            Not Before: Nov 22 10:42:50 2019 GMT
            Not After : Nov 21 10:42:50 2020 GMT
        Subject: CN = caasp-master-ereslibre-0@1574422972
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c2:03:fb:70:52:d1:4a:53:8e:62:a9:1f:93:09:
                    00:a6:78:ee:59:e4:15:c4:cb:9c:24:90:59:f3:51:
                    a3:d5:25:d4:55:69:f0:3d:7d:43:70:f1:14:a5:77:
                    2a:e5:c5:65:2a:15:3e:7e:1a:8a:28:67:80:83:ad:
                    be:c4:ae:6a:cb:e4:27:8f:55:d6:90:85:2a:69:09:
                    d1:c1:72:b6:7b:b2:5e:80:fc:6d:1b:eb:b7:4b:b9:
                    9f:fb:57:2c:4c:59:35:ca:98:c2:61:f7:45:ad:ef:
                    6e:a8:dd:b1:fb:ad:eb:8f:ae:57:17:5a:67:25:2a:
                    d0:43:54:3f:c2:4b:db:e1:74:b5:ff:83:8f:54:dd:
                    b6:e3:89:2e:69:e7:24:38:8a:63:1a:8d:f4:18:be:
                    4f:9a:70:98:27:c8:85:8f:3c:5e:65:db:98:8b:fb:
                    6e:87:02:c0:6b:9b:9c:01:e2:ab:15:cc:e4:59:bc:
                    20:cd:c3:81:c4:75:28:1c:8e:78:45:f2:b6:32:cf:
                    92:23:65:37:76:b7:d9:0e:c4:72:9b:44:04:c1:89:
                    81:e6:c5:9b:b5:8d:b9:42:2d:ce:15:d6:7f:1d:58:
                    87:6b:3f:49:f7:eb:10:4a:e1:af:b8:f3:a8:b9:32:
                    21:78:3e:dc:2e:b0:a7:ee:f8:6c:8d:01:c3:fd:41:
                    a8:4f
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name: 
                DNS:caasp-master-ereslibre-0
    Signature Algorithm: sha256WithRSAEncryption
         0a:b7:2d:d3:82:fd:17:7e:44:b7:60:d8:14:79:e9:08:66:a7:
         f2:f1:fb:7c:d3:89:37:8a:4b:45:45:17:fb:82:28:06:d0:58:
         91:75:dd:5e:5e:b3:c1:8f:83:b2:e5:e1:7a:12:e1:31:15:a5:
         c1:22:b9:7f:86:f7:65:cd:da:56:2e:d3:0d:25:e7:86:d9:e1:
         a8:5f:ec:55:ac:cd:26:02:1c:3f:93:16:89:18:76:d9:bb:73:
         a0:c1:77:f9:ba:19:9a:b7:c0:b0:75:a3:60:5f:70:05:4c:58:
         76:02:6d:f5:a3:93:48:98:84:51:6f:78:f3:3f:4b:a9:a0:bc:
         3d:15:6e:35:8d:3f:a4:4f:3b:fe:c4:fc:fa:b7:35:20:57:b8:
         1f:02:19:f6:e7:0c:3f:22:59:cd:e7:a9:90:d6:82:bc:41:2c:
         c9:9d:b6:2a:9e:b0:6f:14:48:92:c6:b8:8e:03:f6:77:52:98:
         39:a7:f4:fd:d5:15:93:ee:76:a8:ed:e5:0d:9d:e3:d6:2d:3b:
         a9:c3:00:d1:e7:12:51:81:9e:d2:65:d7:a8:5e:83:a1:c8:48:
         47:2d:90:4a:01:d0:41:4a:d4:d6:89:57:69:16:e0:37:7f:8c:
         b8:bd:09:b6:22:fc:82:0b:56:ed:24:ae:d0:56:7e:b1:c1:13:
         47:e1:d0:fb

So everything should be fine. Thank you @jenting for all the information.

@jenting
Author

jenting commented Nov 22, 2019

Just a thought that came to mind, correct me if I am wrong.
Maybe we could leverage the Kubernetes API, as metrics-server does, to get the node's IP addresses/DNS names.

Would that be better?

@ereslibre
Contributor

ereslibre commented Nov 22, 2019

Maybe we could leverage the Kubernetes API, as metrics-server does, to get the node's IP addresses/DNS names.

Hm, I'm not sure. Because we would have to let the kubelet register first, then read the node using the Kubernetes API, create our certificate, upload the certificate overriding the one created by the kubelet, and restart the kubelet service.

I think the current approach is better in terms of simplicity, and we can evolve it with more knowledge from the field and stakeholders. The only thing I'm concerned about is how the kubelet behaves with cloud integration, i.e. whether it also adds the ExternalDNS and ExternalIP to the kubelet API certificate SANs or not, because this is something that we just cannot anticipate prior to bootstrapping/joining the node.
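For reference, a minimal client-go sketch of reading the node addresses (InternalIP, ExternalIP, Hostname, ...) through the Kubernetes API, the same data metrics-server relies on; the kubeconfig path is an assumption and the List call uses the newer context-taking client-go signature:

// Minimal sketch: list the addresses Kubernetes knows for each node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes an admin kubeconfig at this path; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, addr := range node.Status.Addresses {
			fmt.Printf("%s\t%s\t%s\n", node.Name, addr.Type, addr.Address)
		}
	}
}

metrics-server then picks among these addresses according to its --kubelet-preferred-address-types argument.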

@ereslibre
Contributor

ereslibre commented Nov 22, 2019

More questions (sorry, I'm starting to dig into this):

  • How does this relate to --rotate-server-certificates on the kubelet? I understand that it would be good to leverage this automatic certificate server rotation, but then it would override our certificates at some point. With this PR we are handling the rotation, but it requires an action from the user side for us to rotate them, whereas if we leverage the RotateKubeletServerCertificate feature this would happen in an automatic fashion. How would this rotation work with a shared CA?

  • Do we need to upload the kubelet-ca.crt and especially the kubelet-ca.key to all nodes? I think it would be better not to upload either of them -- if any, only the kubelet-ca.crt, but I don't think we need that one either.

@jenting
Author

jenting commented Nov 22, 2019

More questions (sorry, I'm starting to dig into this):

  • How does this relate to --rotate-server-certificates on the kubelet? I understand that it would be good to leverage this automatic certificate server rotation, but then it would override our certificates at some point. With this PR we are handling the rotation, but it requires an action from the user side for us to rotate them, whereas if we leverage the RotateKubeletServerCertificate feature this would happen in an automatic fashion. How would this rotation work with a shared CA?

I had tested this feature before; it does not work, since we do not have a custom CSR signer in the current Kubernetes cluster. Therefore, when this feature is enabled, the kubelet generates a CSR by default, but the kube-controller-manager won't sign the CSR for the kubelet unless we provide a CSR signer in the cluster.

Please refer to this comment: #832 (comment)
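For context, a rough sketch (hypothetical, and using the newer certificates.k8s.io/v1 API rather than the v1beta1 API available at the time) of what an in-cluster approver for kubelet serving CSRs would have to do; this is exactly the piece skuba avoids by managing the kubelet server certificate itself:

// Hypothetical sketch: approve pending kubelet serving CSRs so that
// RotateKubeletServerCertificate could work. Not part of this PR.
package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	csrs, err := clientset.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range csrs.Items {
		csr := &csrs.Items[i]
		// Only consider kubelet serving CSRs that have no conditions yet (still pending).
		if csr.Spec.SignerName != certv1.KubeletServingSignerName || len(csr.Status.Conditions) > 0 {
			continue
		}
		csr.Status.Conditions = append(csr.Status.Conditions, certv1.CertificateSigningRequestCondition{
			Type:    certv1.CertificateApproved,
			Status:  corev1.ConditionTrue,
			Reason:  "AutoApproved",
			Message: "kubelet serving certificate approved by example approver",
		})
		if _, err := clientset.CertificatesV1().CertificateSigningRequests().UpdateApproval(context.TODO(), csr.Name, csr, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("approved", csr.Name)
	}
}

This matches the comment above: without such an approver (and a signer configured for it), RotateKubeletServerCertificate never gets the CSR signed.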

  • Do we need to upload the kubelet-ca.crt and especially the kubelet-ca.key to all nodes? I think it would be better not to upload either of them -- if any, only the kubelet-ca.crt, but I don't think we need that one either.

Two reasons:

  1. To align with kubeadm behavior: kubeadm copies all the keys to the control plane nodes.
  2. If the admin loses the bootstrap cluster folder and the cluster certificates expire, the admin still has a way to ssh to a control plane node, find the original cluster CA certificate and key, and manually renew the cluster client and server certificates and keys.

@jenting
Author

jenting commented Nov 22, 2019

--rotate-server-certificates flag description
https://kubernetes.io/docs/reference/command

Auto-request and rotate the kubelet serving certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. Requires the RotateKubeletServerCertificate feature gate to be enabled, and approval of the submitted CertificateSigningRequest objects. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

@jenting jenting mentioned this pull request Nov 25, 2019
@maximenoel8

maximenoel8 commented Dec 3, 2019

QA information:

Building skuba:

  • git pull on pull request 853
  • make (for devel env)

Testing environment:

  • 1 LB / 3 masters / 3 worker nodes on VMware

Cluster setting:

skuba cluster init --control-plane <IP_LB> --strict-capability-defaults --kubernetes-version 1.15.2 cluster

I'm using the new --kubernetes-version option to be able to upgrade once the cluster is deployed.

Scenario:

I want to deploy metrics-server on my cluster WITHOUT insecure mode and get access to kubectl top nodes and kubectl top pods -n kube-system.

Before:

  • Checking the kubelet certificates are correctly deployed on the 6 nodes
    Command:
openssl s_client -connect <Node Ip>:10250 -CAfile pki/kubelet-ca.crt <<< "Q" 2>&1

Result:

CONNECTED(00000003)
depth=1 CN = kubelet-ca
verify return:1
depth=0 CN = caasp-worker-mnoel021219-0
verify return:1

and

    Verify return code: 0 (ok)
  • Deploying metrics-server with helm using the custom values described in AFTER
  • Checking metrics-server is correctly deployed and running

Test:

  • Get status from top command for all the nodes
    Command:
kubectl top node

Result:

NAME                                                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
caasp-master-mnoel-certificate-upgrade-metric-031219-0   247m         6%     1848Mi          23%       
caasp-master-mnoel-certificate-upgrade-metric-031219-1   178m         4%     1465Mi          18%       
caasp-master-mnoel-certificate-upgrade-metric-031219-2   207m         5%     1477Mi          18%       
caasp-worker-mnoel-certificate-upgrade-metric-031219-0   86m          1%     1091Mi          6%        
caasp-worker-mnoel-certificate-upgrade-metric-031219-1   72m          0%     1053Mi          6%        
caasp-worker-mnoel-certificate-upgrade-metric-031219-2   131m         1%     1117Mi          7%   
  • Get status from top command for all the pods
    Command:
kubectl top pods -n kube-system

Result:

NAME                                                                             CPU(cores)   MEMORY(bytes)   
cilium-7tmgt                                                                     36m          304Mi           
cilium-dmd42                                                                     40m          290Mi           
cilium-k6mds                                                                     29m          296Mi           
cilium-nrslt                                                                     41m          280Mi           
cilium-operator-585f97b879-xxl2c                                                 2m           41Mi            
cilium-qhqqb                                                                     40m          311Mi           
cilium-rs5lz                                                                     62m          308Mi   
...

Comments

The test PASSES on VMware. I was able to access the metrics from metrics-server, and the certificates are correctly deployed on all the nodes.
