
kubeadm -1.21 ignoring enable-admission-plugins #2496

Closed
jvrahav opened this issue May 31, 2021 · 4 comments
Labels
kind/support Categorizes issue or PR as a support question. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@jvrahav

jvrahav commented May 31, 2021

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT or FEATURE REQUEST

Versions

kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0+vmware.1-wcp", GitCommit:"8b28790eac77cadc246ec1f3b5a67892239a21c8", GitTreeState:"clean", BuildDate:"2021-05-25T03:27:46Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Environment:

  • Kubernetes version (use kubectl version): 1.21
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): vmware photon
  • Kernel (e.g. uname -a):
  • Others:

What happened?

I'm performing an upgrade of my cluster from 1.20 to 1.21.
As part of 1.21, there are some changes to the admission plugins that I want to enable on the 1.21 cluster.
I have generated a kubeadm.yaml for 1.21 with the modified set of enable-admission-plugins.
When I run kubeadm join with the above kubeadm.yaml passed as config, the generated kube-apiserver.yaml manifest does not have the admission plugins specified in kubeadm.yaml.
The manifest has the admission plugins retrieved from the kubeadm-config ConfigMap of the 1.20 cluster.

What you expected to happen?

I expected the admission plugins from kubeadm.yaml to be consumed, so that the kube-apiserver.yaml generated as part of kubeadm join would contain the new admission plugins.
This mechanism works fine when upgrading from a 1.19 cluster to a 1.20 cluster; however, it fails when upgrading from 1.20 to 1.21.

How to reproduce it (as minimally and precisely as possible)?

deploy a 1.20 cluster with a few admission plugins
generate a kubeadm.yaml for 1.21, adding a few admission plugins or removing a few relative to the 1.20 cluster
pass the kubeadm.yaml to kubeadm join control-plane-prepare all --config=kubeadm.yaml
the kube-apiserver.yaml generated in the manifest directory will not have the new admission plugins (a quick check is sketched below)
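
A quick way to confirm which ClusterConfiguration kubeadm actually used is to compare the kubeadm-config ConfigMap (the copy that joining control-plane nodes download) with the flags rendered into the generated manifest. A minimal check, assuming kubeadm's default manifest path:

# ClusterConfiguration stored in the cluster and reused by joining control-plane nodes
kubectl -n kube-system get configmap kubeadm-config -o yaml

# flags actually rendered into the static Pod manifest on the node
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml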

Anything else we need to know?

@neolit123
Member

neolit123 commented May 31, 2021

This mechanism works fine when upgrading from a 1.19 cluster to a 1.20 cluster; however, it fails when upgrading from 1.20 to 1.21.

I doubt that is the case; the logic around enable-admission-plugins / extraArgs for the kube-apiserver has not changed in a long time.

kubeadm join control-plane-prepare all --config=kubeadm.yaml

What configuration file are you passing to join?
join only supports JoinConfiguration and does not support configuring extraArgs for the kube-apiserver, which are part of ClusterConfiguration.

kubeadm treats all control-plane nodes as replicas and uses the shared ClusterConfiguration for them.

The only way to apply custom settings to individual control-plane nodes currently is to use --experimental-patches.

related to:
#2367
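
As a hedged illustration of the --experimental-patches mechanism mentioned above (accepted by kubeadm join in this release, later renamed --patches): the flag points at a directory of patch files named <target><suffix>+<patchtype>.<json|yaml> (for example kube-apiserver0+json.json), which kubeadm applies to the static Pod manifests it generates. The directory path, the plugin list, and the command-array index below are placeholders, not values from this issue; the real index of the --enable-admission-plugins argument has to be read from the kube-apiserver.yaml that kubeadm generates on that node.

Contents of /etc/kubeadm/patches/kube-apiserver0+json.json (hypothetical path), an RFC 6902 JSON patch against the kube-apiserver static Pod:

[
  {
    "op": "replace",
    "path": "/spec/containers/0/command/5",
    "value": "--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,PodSecurityPolicy"
  }
]

The directory is then passed to join on the node that needs the different flags:

kubeadm join --config=kubeadm.yaml --experimental-patches /etc/kubeadm/patches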

@neolit123 neolit123 added kind/support Categorizes issue or PR as a support question. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 31, 2021
@neolit123 neolit123 added this to the v1.22 milestone May 31, 2021
@neolit123
Member

Also, note that configuring a cluster with a different set of admission plugins across kube-apiserver instances is not really advised.
This can happen temporarily during an immutable-node upgrade, but it should not be a configuration drift that persists.

@jvrahav
Author

jvrahav commented Jun 1, 2021

Thanks @neolit123.
This is a sample kubeadm.yaml that I'm passing to join:

apiServer:
  certSANs:
  - 127.0.0.1
  - 10.92.101.108
  - supervisor.default.svc
  extraArgs:
    admission-control-config-file: /etc/somename/admission-control.yaml
    anonymous-auth: "false"
    audit-log-maxage: "30"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: /var/log/somename/audit/kube-apiserver.log
    audit-policy-file: /etc/somename/wcp/audit-policy.yaml
    enable-admission-plugins: NamespaceLifecycle,ServiceAccount,NodeRestriction,EventRateLimit,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota,ValidatingAdmissionWebhook,PodSecurityPolicy,MutatingAdmissionWebhook
    enable-bootstrap-token-auth: "true"
    experimental-encryption-provider-config: /etc/somename/encryption-config.yaml
    feature-gates: RemoveSelfLink=false,BoundServiceAccountTokenVolume=false
    insecure-port: "0"
    kubelet-https: "true"
    oidc-ca-file: /etc/somename.pem
    oidc-client-id: somename-tes:vc:vns:k8s
    oidc-groups-claim: group_names
    oidc-groups-prefix: 'sso:'
    oidc-issuer-url: someurl
    oidc-username-prefix: 'sso:'
    profiling: "false"
    runtime-config: admissionregistration.k8s.io/v1
    service-account-lookup: "true"
    service-cluster-ip-range: x.x.x.x/16
    tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    tls-filter-cert-key: somevalue
    tls-min-version: VersionTLS12
    tls-sni-cert-key: /etc/kubernetes/pki/apiserver.crt,/etc/kubernetes/pki/apiserver.key
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
controlPlaneEndpoint: x.x.x.x
controllerManager:
  extraArgs:
    address: 127.0.0.1
    client-ca-file: /etc/somename.pem
    feature-gates: RotateKubeletServerCertificate=true
    profiling: "false"
    terminated-pod-gc-threshold: "1000"
    tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    tls-min-version: VersionTLS12
dns:
  imageRepository: localhost:5000
  imageTag: v1.21
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      election-timeout: "50000"
      heartbeat-interval: "5000"
      initial-cluster-token: domain-c50
      max-wals: "80"
      strict-reconfig-check: "false"
    imageRepository: somename
    imageTag: KUSTOMIZE
    peerCertSANs:
    - x.x.x.x
imageRepository: somename
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
metadata:
  name: kubeadm-cluster
networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.24.0.0/16
scheduler:
  extraArgs:
    address: 127.0.0.1
    policy-configmap: wcp-scheduler-extender-policy-config
    policy-configmap-namespace: kube-system
    profiling: "false"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.92.98.3
  bindPort: 6443
metadata:
  name: kubeadm-init
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    pod-infra-container-image: somename/pause:1.21.0
  name: 42079d8243c669a7e0f3e2d069737448
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.92.98.3
    bindPort: 6443
discovery:
  file:
    kubeConfigPath: /dev/shm/bootstrap/node_k8s_bootstrap.conf
kind: JoinConfiguration
metadata:
  name: kubeadm-join
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    pod-infra-container-image: somename/pause:1.21.0
  name: 42079d8243c669a7e0f3e2d069737448
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoExecute
    key: node-role.kubernetes.io/cluster-network-unavailable
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
clusterDNS:
- 127.0.0.53
containerLogMaxFiles: 10
containerLogMaxSize: 10M
eventRecordQPS: 0
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
metadata:
  name: kubeadm-kubelet
protectKernelDefaults: false
readOnlyPort: 0
resolvConf: /run/systemd/resolve/stub-resolv.conf
rotateCertificates: false
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
tlsMinVersion: VersionTLS12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: kubeadm-kube-proxy

Yes, this is a transient step in the upgrade. Once the upgrade is done, all nodes in the cluster will have the same configuration.

@neolit123
Member

neolit123 commented Jun 1, 2021

This is a sample kubeadm.yaml that I'm passing to join

As I've mentioned, join does not support ClusterConfiguration, only JoinConfiguration:
https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#hdr-Kubeadm_join_configuration_types
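
For reference, a join-side config containing only the document that kubeadm join actually consumes, reusing the discovery and node settings from the file posted above, would look roughly like this (note there is nowhere here to set kube-apiserver extraArgs; per-node flag changes go through the patches mechanism):

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.92.98.3
    bindPort: 6443
discovery:
  file:
    kubeConfigPath: /dev/shm/bootstrap/node_k8s_bootstrap.conf
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    pod-infra-container-image: somename/pause:1.21.0
  name: 42079d8243c669a7e0f3e2d069737448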

Please see the --help screen for the --experimental-patches flag to understand how to customize control-plane instances.

Closing as this is not a bug.
