From 52cecc3dbc1b25d8617b094ba3d1906185aa50b0 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Mon, 14 Oct 2019 11:15:12 +1100 Subject: [PATCH 01/54] Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...l-OpenStack-Cloud-Provider-With-Kubeadm.md | 742 ++++++++++++++++++ 1 file changed, 742 insertions(+) create mode 100644 content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md new file mode 100644 index 0000000000000..c989cec5e158b --- /dev/null +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -0,0 +1,742 @@ +This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin in order to use Cinder volumes as persistent volumes in Kubernetes. + +This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. + +* a project/tenant for this kubernetes cluster +* a user in this project for Kubernetes, to query node information and attach volumes etc +* a private network and subnet +* a router for this private network and connect it to a public network for floating IPs +* a security group for all Kubernetes VMs +* a VM as master node and a few VMs as worker nodes + +The security group will have the following rules to open ports for Kubernetes. + +**Master Node** + +|Protocol | Port Number | Description| +|----------|-------------|------------| +|TCP |6443|Kubernetes API Server| +|TCP|2379-2380|etcd server client API| +|TCP|10250|Kubelet API| +|TCP|10251|kube-scheduler| +|TCP|10252|kube-controller-manager| +|TCP|10255|Read-only Kubelet API| + +**Worker Nodes** + +|Protocol | Port Number | Description| +|----------|-------------|------------| +|TCP|10250|Kubelet API| +|TCP|10255|Read-only Kubelet API| +|TCP|30000-32767|NodePort Services| + +**CNI Ports on both master and worker nodes** + +|Protocol | Port Number | Description| +|----------|-------------|------------| +|TCP|179|Calico BGP network| +|TCP|9099|Calico felix (health check)| +|UDP|8285|Flannel| +|UDP|8472|Flannel| +|TCP|6781-6784|Weave| +|UDP|6783-6784|Weave| + +The master needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. +If the hostname is not resolvable, add it to /etc/hosts. + +For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to /etc/hosts and set hostname to master1. +``` +echo "192.168.1.4 master1" >> /etc/hosts + +hostnamectl set-hostname master1 +``` +Next we'll follow official documents to install docker and Kubernetes using kubeadm. + +Install docker following the steps in https://kubernetes.io/docs/setup/production-environment/container-runtimes/ + +Note that it is recommend to use systemd as the cgroup driver for Kubernetes. +If you use internal repository servers, add them to docker's config too. +``` +# Install Docker CE +## Set up the repository +### Install required packages. + +yum install yum-utils device-mapper-persistent-data lvm2 + +### Add Docker repository. + +yum-config-manager \ + --add-repo \ + https://download.docker.com/linux/centos/docker-ce.repo + +## Install Docker CE. + +yum update && yum install docker-ce-18.06.2.ce + +## Create /etc/docker directory. 
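# NOTE: the daemon.json written in the next step is what normally selects the
# systemd cgroup driver mentioned above. A minimal sketch of its typical
# contents (assuming the defaults suggested in the container runtime docs;
# adjust the log and storage options for your environment):
#
#   {
#     "exec-opts": ["native.cgroupdriver=systemd"],
#     "log-driver": "json-file",
#     "log-opts": { "max-size": "100m" },
#     "storage-driver": "overlay2"
#   }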
+ +mkdir /etc/docker + +# Setup daemon. + +cat > /etc/docker/daemon.json < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF + +# Set SELinux in permissive mode (effectively disabling it) +setenforce 0 +sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config + +yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes + +systemctl enable --now kubelet + +cat < /etc/sysctl.d/k8s.conf +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +EOF +sysctl --system + +# check if br_netfilter module is loaded +lsmod | grep br_netfilter + +# if not, load it explicitly with +modprobe br_netfilter +``` + +The official document about how to create a single-master cluster can be found in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ + +We'll largely follow that document but also add additional things for the cloud provider. +To make things clearer we'll use a kubeadm-config.yml for the master. +In this config we specify to use an external OpenStack cloud provider, and where to find its config. +We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernets. + +``` +apiVersion: kubeadm.k8s.io/v1beta1 +kind: InitConfiguration +nodeRegistration: + kubeletExtraArgs: + cloud-provider: "external" +--- +apiVersion: kubeadm.k8s.io/v1beta2 +kind: ClusterConfiguration +kubernetesVersion: "v1.15.1" +apiServer: + extraArgs: + enable-admission-plugins: NodeRestriction + runtime-config: "storage.k8s.io/v1=true" +controllerManager: + extraArgs: + external-cloud-volume-plugin: openstack + extraVolumes: + - name: "cloud-config" + hostPath: "/etc/kubernetes/cloud-config" + mountPath: "/etc/kubernetes/cloud-config" + readOnly: true + pathType: File +networking: + serviceSubnet: "10.96.0.0/12" + podSubnet: "10.224.0.0/16" + dnsDomain: "cluster.local" +``` + +Now we'll create cloud config, /etc/kubernetes/cloud-config, for OpenStack. +Note that the tenant here is the one we created for all Kubernets VMs in the beginning. +All VMs should be launched in this project/tenant. +In addition you need to create a user in this tenant for Kubernetes to do queries. +The ca-file is the CA root certitiface for OpenStack's API endpoint https://openstack.cloud:5000/v3 +At the time of writing the cloud provider doesn't allow insecure connections (skip CA check). + +``` +[Global] +region=RegionOne +username=username +password=password +auth-url=https://openstack.cloud:5000/v3 +tenant-id=14ba698c0aec4fd6b7dc8c310f664009 +domain-id=default +ca-file=/etc/kubernetes/ca.pem + +[LoadBalancer] +subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1 +floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5 + +[BlockStorage] +bs-version=v2 + +[Networking] +public-network-name=public +ipv6-support-disabled=false +``` + +Next run kubeadm to initiate the master +``` +kubeadm init --config=kubeadm-config.yml +``` + +When the initialisation is finished, copy admin config to .kube +``` + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +At this stage the master node is created but not ready. 
All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and waiting for being initialized by cloud-controller-manager. +``` +# kubectl describe no master1 +Name: master1 +Roles: master +...... +Taints: node-role.kubernetes.io/master:NoSchedule + node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule + node.kubernetes.io/not-ready:NoSchedule +...... +``` +Now deploy openstack cloud controller manager into the cluster as per the instruction: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md + +Create a secret with cloud-config for openstack cloud provider. +``` +kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml +kubectl apply -f cloud-config-secret.yaml +``` + +Get ca certs of OpenStack API endpoints and put it in /etc/kubernetes/ca.pem. + +Create RBAC resources. +``` +kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml +kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml +``` + +We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. +The manager will only run on the master, so if there are multiple masters, multiple pods will be run for high availability. +Create the DaemonSet yaml, openstack-cloud-controller-manager-ds.yaml, and apply it. + +``` +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: cloud-controller-manager + namespace: kube-system +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: openstack-cloud-controller-manager + namespace: kube-system + labels: + k8s-app: openstack-cloud-controller-manager +spec: + selector: + matchLabels: + k8s-app: openstack-cloud-controller-manager + updateStrategy: + type: RollingUpdate + template: + metadata: + labels: + k8s-app: openstack-cloud-controller-manager + spec: + nodeSelector: + node-role.kubernetes.io/master: "" + securityContext: + runAsUser: 1001 + tolerations: + - key: node.cloudprovider.kubernetes.io/uninitialized + value: "true" + effect: NoSchedule + - key: node-role.kubernetes.io/master + effect: NoSchedule + - effect: NoSchedule + key: node.kubernetes.io/not-ready + serviceAccountName: cloud-controller-manager + containers: + - name: openstack-cloud-controller-manager + image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0 + args: + - /bin/openstack-cloud-controller-manager + - --v=1 + - --cloud-config=$(CLOUD_CONFIG) + - --cloud-provider=openstack + - --use-service-account-credentials=true + - --address=127.0.0.1 + volumeMounts: + - mountPath: /etc/kubernetes/pki + name: k8s-certs + readOnly: true + - mountPath: /etc/ssl/certs + name: ca-certs + readOnly: true + - mountPath: /etc/config + name: cloud-config-volume + readOnly: true + - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec + name: flexvolume-dir + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + resources: + requests: + cpu: 200m + env: + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + hostNetwork: true + volumes: + - hostPath: + path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec + type: DirectoryOrCreate + name: flexvolume-dir + - hostPath: + path: /etc/kubernetes/pki + type: DirectoryOrCreate + name: k8s-certs + - hostPath: + path: /etc/ssl/certs 
+ type: DirectoryOrCreate + name: ca-certs + - name: cloud-config-volume + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert +``` + +When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you'll see the VM's UUID in OpenStack. +``` +# kubectl describe no master1 +Name: master1 +Roles: master +...... +Taints: node-role.kubernetes.io/master:NoSchedule + node.kubernetes.io/not-ready:NoSchedule +...... +sage:docker: network plugin is not ready: cni config uninitialized +...... +PodCIDR: 10.224.0.0/24 +ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 + +``` +Now install your favourite CNI and the master node will become ready. + +For example, to install weave net, run this command +``` +kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" +``` + +Next we'll set up worker nodes. + +Firstly install docker and kubeadm in the same way as how they were installed in the master. +To join them to the cluster we need a token and ca cert hash from the output of master installation. +If it is expired or lost we can recreate it using these commands. + +``` +# check if token is expired +kubeadm token list + +# re-create token and show join command +kubeadm token create --print-join-command + +``` + +Create kubeadm-config.yml for worker nodes with the above token and ca cert hash. +``` +apiVersion: kubeadm.k8s.io/v1beta2 +discovery: + bootstrapToken: + apiServerEndpoint: 192.168.1.7:6443 + token: 0c0z4p.dnafh6vnmouus569 + caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"] +kind: JoinConfiguration +nodeRegistration: + kubeletExtraArgs: + cloud-provider: "external" + +``` +apiServerEndpoint is the master node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. + +Run kubeadm and the worker nodes will be joined to the cluster. +``` +kubeadm join --config kubeadm-config.yml +``` + +At this stage we'll have a working Kubernetes cluster with an external OpenStack cloud provider. +The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. +If Kubernetes wants to attach a persistent volume to a pod, it can find out which OpenStack VM the pod is running on from the mapping, and attach the underlying OpenStack volume to the VM accordingly. + +The integration with Cinder is provided by an external Cinder CSI plugin, as described in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md + +We'll perform the following steps to install the Clinder CSI plugin. +Firstly create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. +``` +kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml +kubectl apply -f openstack-ca-cert.yaml +``` +Then create RBAC resources. +``` +kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml +kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml +``` + +The Cinder CSI plugin includes a controller plugin and a node plugin. 
+The controller communicates with Kubernetes APIs and Cinder APIs to create/attach/detach/delete Cinder volumes, while node plugin runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion. +Create cinder-csi-controllerplugin.yaml and apply it to create csi controller. +``` +kind: Service +apiVersion: v1 +metadata: + name: csi-cinder-controller-service + namespace: kube-system + labels: + app: csi-cinder-controllerplugin +spec: + selector: + app: csi-cinder-controllerplugin + ports: + - name: dummy + port: 12345 + +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + name: csi-cinder-controllerplugin + namespace: kube-system +spec: + serviceName: "csi-cinder-controller-service" + replicas: 1 + selector: + matchLabels: + app: csi-cinder-controllerplugin + template: + metadata: + labels: + app: csi-cinder-controllerplugin + spec: + serviceAccount: csi-cinder-controller-sa + containers: + - name: csi-attacher + image: quay.io/k8scsi/csi-attacher:v1.0.1 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /var/lib/csi/sockets/pluginproxy/ + - name: csi-provisioner + image: quay.io/k8scsi/csi-provisioner:v1.0.1 + args: + - "--provisioner=csi-cinderplugin" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /var/lib/csi/sockets/pluginproxy/ + - name: csi-snapshotter + image: quay.io/k8scsi/csi-snapshotter:v1.0.1 + args: + - "--connection-timeout=15s" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: Always + volumeMounts: + - mountPath: /var/lib/csi/sockets/pluginproxy/ + name: socket-dir + - name: cinder-csi-plugin + image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0 + args : + - /bin/cinder-csi-plugin + - "--v=5" + - "--nodeid=$(NODE_ID)" + - "--endpoint=$(CSI_ENDPOINT)" + - "--cloud-config=$(CLOUD_CONFIG)" + - "--cluster=$(CLUSTER_NAME)" + env: + - name: NODE_ID + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: CSI_ENDPOINT + value: unix://csi/csi.sock + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + - name: CLUSTER_NAME + value: kubernetes + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: secret-cinderplugin + mountPath: /etc/config + readOnly: true + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + volumes: + - name: socket-dir + hostPath: + path: /var/lib/csi/sockets/pluginproxy/ + type: DirectoryOrCreate + - name: secret-cinderplugin + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert +``` + + +Create cinder-csi-nodeplugin.yaml and apply it to create csi node. 
+``` +kind: DaemonSet +apiVersion: apps/v1 +metadata: + name: csi-cinder-nodeplugin + namespace: kube-system +spec: + selector: + matchLabels: + app: csi-cinder-nodeplugin + template: + metadata: + labels: + app: csi-cinder-nodeplugin + spec: + serviceAccount: csi-cinder-node-sa + hostNetwork: true + containers: + - name: node-driver-registrar + image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)" + lifecycle: + preStop: + exec: + command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"] + env: + - name: ADDRESS + value: /csi/csi.sock + - name: DRIVER_REG_SOCK_PATH + value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock + - name: KUBE_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: registration-dir + mountPath: /registration + - name: cinder-csi-plugin + securityContext: + privileged: true + capabilities: + add: ["SYS_ADMIN"] + allowPrivilegeEscalation: true + image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0 + args : + - /bin/cinder-csi-plugin + - "--nodeid=$(NODE_ID)" + - "--endpoint=$(CSI_ENDPOINT)" + - "--cloud-config=$(CLOUD_CONFIG)" + env: + - name: NODE_ID + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: CSI_ENDPOINT + value: unix://csi/csi.sock + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: pods-mount-dir + mountPath: /var/lib/kubelet/pods + mountPropagation: "Bidirectional" + - name: kubelet-dir + mountPath: /var/lib/kubelet + mountPropagation: "Bidirectional" + - name: pods-cloud-data + mountPath: /var/lib/cloud/data + readOnly: true + - name: pods-probe-dir + mountPath: /dev + mountPropagation: "HostToContainer" + - name: secret-cinderplugin + mountPath: /etc/config + readOnly: true + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + volumes: + - name: socket-dir + hostPath: + path: /var/lib/kubelet/plugins/cinder.csi.openstack.org + type: DirectoryOrCreate + - name: registration-dir + hostPath: + path: /var/lib/kubelet/plugins_registry/ + type: Directory + - name: kubelet-dir + hostPath: + path: /var/lib/kubelet + type: Directory + - name: pods-mount-dir + hostPath: + path: /var/lib/kubelet/pods + type: Directory + - name: pods-cloud-data + hostPath: + path: /var/lib/cloud/data + type: Directory + - name: pods-probe-dir + hostPath: + path: /dev + type: Directory + - name: secret-cinderplugin + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert + +``` +When they are both running, create a storage class for Cinder. + +``` +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: csi-sc-cinderplugin +provisioner: csi-cinderplugin +``` +Then we can create a PVC with this class. +``` +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myvol +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: csi-sc-cinderplugin + +``` + +When the PVC is created, a Cinder volume is created correspondingly. 
+``` +# kubectl get pvc +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s + +``` +In OpenStack the volume will be called "*pvc*-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad" + +Now we can create a pod with the PVC. +``` +apiVersion: v1 +kind: Pod +metadata: + name: web +spec: + containers: + - name: web + image: nginx + ports: + - name: web + containerPort: 80 + hostPort: 8081 + protocol: TCP + volumeMounts: + - mountPath: "/usr/share/nginx/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myvol +``` +When the pod is running, the volume will be attached to the pod. +If we go back to OpenStack, we can see the Cinder volume is mounted to the worker node where the pod is running on. +``` +# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| attachments | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4', u'attachment_id': u'11a15b30-5c24-41d4-86d9-d92823983a32', u'attached_at': u'2019-07-24T05:02:34.000000', u'host_name': u'compute-6', u'volume_id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f', u'device': u'/dev/vdb', u'id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}] | +| availability_zone | nova | +| bootable | false | +| consistencygroup_id | None | +| created_at | 2019-07-24T05:02:18.000000 | +| description | Created by OpenStack Cinder CSI driver | +| encrypted | False | +| id | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f | +| migration_status | None | +| multiattach | False | +| name | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad | +| os-vol-host-attr:host | rbd:volumes@rbd#rbd | +| os-vol-mig-status-attr:migstat | None | +| os-vol-mig-status-attr:name_id | None | +| os-vol-tenant-attr:tenant_id | 14ba698c0aec4fd6b7dc8c310f664009 | +| properties | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes' | +| replication_status | None | +| size | 1 | +| snapshot_id | None | +| source_volid | None | +| status | in-use | +| type | rbd | +| updated_at | 2019-07-24T05:02:35.000000 | +| user_id | 5f6a7a06f4e3456c890130d56babf591 | ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +``` + From 705eb3c6573a45933389b077acdd9374bbc9abbb Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:04:01 +1100 Subject: [PATCH 02/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- ...Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 3 +-- 1 file changed, 1 
insertion(+), 2 deletions(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c989cec5e158b..16f03cf5dae78 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -2,7 +2,7 @@ This document describes how to install a single-master Kubernetes cluster v1.15 This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. -* a project/tenant for this kubernetes cluster +* A project/tenant for this kubernetes cluster * a user in this project for Kubernetes, to query node information and attach volumes etc * a private network and subnet * a router for this private network and connect it to a public network for floating IPs @@ -739,4 +739,3 @@ If we go back to OpenStack, we can see the Cinder volume is mounted to the worke +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ``` - From 7c32f8e8f141837665fad46c89fbea16e9f7cffd Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:06:05 +1100 Subject: [PATCH 03/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 16f03cf5dae78..86ec15b58135a 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -11,7 +11,7 @@ This cluster will be running on OpenStack VMs so we'll create a few things in Op The security group will have the following rules to open ports for Kubernetes. -**Master Node** +**Control Plane Node** |Protocol | Port Number | Description| |----------|-------------|------------| From 742009b5e7545fe2a1f51d8818dd6fae9fb177dc Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:06:41 +1100 Subject: [PATCH 04/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 86ec15b58135a..0e411a8901984 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -52,7 +52,7 @@ hostnamectl set-hostname master1 ``` Next we'll follow official documents to install docker and Kubernetes using kubeadm. 
-Install docker following the steps in https://kubernetes.io/docs/setup/production-environment/container-runtimes/ +Install docker following the steps from the [Container Runtime Documentation.](/docs/setup/production-environment/container-runtimes/) Note that it is recommend to use systemd as the cgroup driver for Kubernetes. If you use internal repository servers, add them to docker's config too. From b721a868b42ab375a19087beaf8de1ed2196e726 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:07:41 +1100 Subject: [PATCH 05/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 0e411a8901984..2b39ee5c55701 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -359,7 +359,7 @@ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl versio Next we'll set up worker nodes. -Firstly install docker and kubeadm in the same way as how they were installed in the master. +Firstly, install docker and kubeadm in the same way as how they were installed in the master. To join them to the cluster we need a token and ca cert hash from the output of master installation. If it is expired or lost we can recreate it using these commands. From cbaf5c38000526dec98be112d1d76c118562ac9c Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:08:17 +1100 Subject: [PATCH 06/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 2b39ee5c55701..552d512778c74 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -412,7 +412,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele ``` The Cinder CSI plugin includes a controller plugin and a node plugin. -The controller communicates with Kubernetes APIs and Cinder APIs to create/attach/detach/delete Cinder volumes, while node plugin runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion. +The controller communicates with Kubernetes APIs and Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in-turn runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion. Create cinder-csi-controllerplugin.yaml and apply it to create csi controller. 
``` kind: Service From 8b718869ea862b11fedb9828878f6de8547c98a0 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:08:33 +1100 Subject: [PATCH 07/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 552d512778c74..1b6aed6090759 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -400,7 +400,7 @@ If Kubernetes wants to attach a persistent volume to a pod, it can find out whic The integration with Cinder is provided by an external Cinder CSI plugin, as described in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md We'll perform the following steps to install the Clinder CSI plugin. -Firstly create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. +Firstly, create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. ``` kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml kubectl apply -f openstack-ca-cert.yaml From 1772ff4ab1c843480d851cbe17efe2a1be0e9e59 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:18:56 +1100 Subject: [PATCH 08/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 1b6aed6090759..d257aa795281b 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -171,7 +171,7 @@ networking: dnsDomain: "cluster.local" ``` -Now we'll create cloud config, /etc/kubernetes/cloud-config, for OpenStack. +Now we'll create cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. Note that the tenant here is the one we created for all Kubernets VMs in the beginning. All VMs should be launched in this project/tenant. In addition you need to create a user in this tenant for Kubernetes to do queries. 
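For reference, the dedicated project/tenant and the Kubernetes user described above can be created up front with the OpenStack CLI. A minimal sketch, assuming admin credentials are loaded; the names `k8s` and `kubernetes` and the `member` role are placeholders that vary between deployments:
```
# Project/tenant that will hold all Kubernetes VMs and volumes
openstack project create k8s

# User the cloud provider and CSI plugin will authenticate as
openstack user create --project k8s --password-prompt kubernetes

# Grant the user a role in the project (role name depends on your cloud)
openstack role add --project k8s --user kubernetes member
```
The project's ID, for example from `openstack project show k8s -f value -c id`, is what goes into the `tenant-id` field of the cloud config.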
From 06128ececa73a4a3d44fe81da4a70671a09e03e3 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:19:12 +1100 Subject: [PATCH 09/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index d257aa795281b..002d5da7a19fa 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -399,7 +399,7 @@ If Kubernetes wants to attach a persistent volume to a pod, it can find out whic The integration with Cinder is provided by an external Cinder CSI plugin, as described in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md -We'll perform the following steps to install the Clinder CSI plugin. +We'll perform the following steps to install the Cinder CSI plugin. Firstly, create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. ``` kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml From ea79e5db4cac042b369e43fd498fb1be932079fa Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:19:34 +1100 Subject: [PATCH 10/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 002d5da7a19fa..98b3f7296616e 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -680,7 +680,7 @@ NAME STATUS VOLUME CAPACITY ACCESS MO myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s ``` -In OpenStack the volume will be called "*pvc*-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad" +In OpenStack the volume name will match the Kubernetes persistent volume generated name. In this example it would be: _pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad_ Now we can create a pod with the PVC. 
``` From ac0c9044f95daae6708111387e02e5856cbbd3bd Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:19:58 +1100 Subject: [PATCH 11/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 98b3f7296616e..fac65aa6710e6 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -1,4 +1,4 @@ -This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin in order to use Cinder volumes as persistent volumes in Kubernetes. +This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. From 8051f7c3c0a00356c8d1fa6396698df78ff9a067 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:20:28 +1100 Subject: [PATCH 12/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index fac65aa6710e6..6b6d697bc40b9 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -7,7 +7,7 @@ This cluster will be running on OpenStack VMs so we'll create a few things in Op * a private network and subnet * a router for this private network and connect it to a public network for floating IPs * a security group for all Kubernetes VMs -* a VM as master node and a few VMs as worker nodes +* a VM as control plane node and a few VMs as worker nodes The security group will have the following rules to open ports for Kubernetes. 
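As a sketch of how such a security group can be built with the OpenStack CLI (the group name `k8s-cluster` is a placeholder, and only a few of the rules from the tables are shown; repeat the pattern for the remaining ports and your chosen CNI):
```
openstack security group create k8s-cluster

# Kubernetes API server
openstack security group rule create --protocol tcp --dst-port 6443 k8s-cluster

# Kubelet API
openstack security group rule create --protocol tcp --dst-port 10250 k8s-cluster

# NodePort services
openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-cluster

# Example CNI rule (Flannel VXLAN)
openstack security group rule create --protocol udp --dst-port 8472 k8s-cluster
```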
From 62f23ea3d737980bac00237f56cded24160ed9ed Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:20:42 +1100 Subject: [PATCH 13/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 6b6d697bc40b9..f51f98f05a8bf 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -50,7 +50,7 @@ echo "192.168.1.4 master1" >> /etc/hosts hostnamectl set-hostname master1 ``` -Next we'll follow official documents to install docker and Kubernetes using kubeadm. +Next, we'll follow official documents to install docker and Kubernetes using kubeadm. Install docker following the steps from the [Container Runtime Documentation.](/docs/setup/production-environment/container-runtimes/) From 7f0b118d8524bfad9b2aa8f0ab91275032520f63 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:20:55 +1100 Subject: [PATCH 14/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index f51f98f05a8bf..c06706ad876f2 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -138,7 +138,7 @@ modprobe br_netfilter The official document about how to create a single-master cluster can be found in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ We'll largely follow that document but also add additional things for the cloud provider. -To make things clearer we'll use a kubeadm-config.yml for the master. +To make things more clear, we'll use a kubeadm-config.yml for the master. In this config we specify to use an external OpenStack cloud provider, and where to find its config. We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernets. 
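Before running the real initialization, the kubeadm configuration described above can be sanity-checked, for example (assuming kubeadm v1.15, where both commands are available):
```
# Print the defaults kubeadm would use, to compare against kubeadm-config.yml
kubeadm config print init-defaults

# Walk through the init phases without changing the host
kubeadm init --config=kubeadm-config.yml --dry-run
```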
From 22b16906a82b8f7e4ff4a88834f358c37d4b57bd Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:21:26 +1100 Subject: [PATCH 15/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c06706ad876f2..99bc25d873a24 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -140,7 +140,7 @@ The official document about how to create a single-master cluster can be found i We'll largely follow that document but also add additional things for the cloud provider. To make things more clear, we'll use a kubeadm-config.yml for the master. In this config we specify to use an external OpenStack cloud provider, and where to find its config. -We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernets. +We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes. ``` apiVersion: kubeadm.k8s.io/v1beta1 From d024c3f1ff351bae41508dfd9c23640d70e14604 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:21:48 +1100 Subject: [PATCH 16/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 99bc25d873a24..e3055b0090b3d 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -205,7 +205,7 @@ Next run kubeadm to initiate the master kubeadm init --config=kubeadm-config.yml ``` -When the initialisation is finished, copy admin config to .kube +With the initialization completed, copy admin config to .kube ``` mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config From 19de783c8846c04391f850fa85c7fb6136a81f1a Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:24:57 +1100 Subject: [PATCH 17/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index e3055b0090b3d..c63911d89c31e 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -212,7 +212,7 @@ With the initialization completed, copy admin config to .kube sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` -At this stage the master node is created but not ready. 
All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and waiting for being initialized by cloud-controller-manager. +At this stage, the control plane node is created but not ready. All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and waiting for being initialized by cloud-controller-manager. ``` # kubectl describe no master1 Name: master1 From 04058890f582502e9f945750ab15b6f9bb0840b6 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sat, 18 Jan 2020 22:32:07 +1100 Subject: [PATCH 18/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c63911d89c31e..30609271d15e5 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -386,7 +386,7 @@ nodeRegistration: cloud-provider: "external" ``` -apiServerEndpoint is the master node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. +apiServerEndpoint is the control plane node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. Run kubeadm and the worker nodes will be joined to the cluster. ``` From bfa8a0c5b4667a1800e14a1ccdccc91839577890 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 10:54:50 +1100 Subject: [PATCH 19/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...eploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 30609271d15e5..ebd5848e88202 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -1,3 +1,7 @@ +--- +layout: blog +title: "Deploying External OpenStack Cloud Provider with Kubeadm" +--- This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. 
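A few of those OpenStack pieces, the private network, subnet and router, can be created with the CLI. A minimal sketch (the resource names, the `192.168.1.0/24` range and the external network name `public` are examples; match them to your cloud):
```
# Private network and subnet for the cluster VMs
openstack network create k8s-net
openstack subnet create --network k8s-net --subnet-range 192.168.1.0/24 k8s-subnet

# Router connecting the private subnet to the public network for floating IPs
openstack router create k8s-router
openstack router set --external-gateway public k8s-router
openstack router add subnet k8s-router k8s-subnet
```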
From 5f1e5384bddc9530e54e3499fded63a75b31a71c Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 10:57:09 +1100 Subject: [PATCH 20/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index ebd5848e88202..35b81bbb60837 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -1,6 +1,8 @@ --- layout: blog title: "Deploying External OpenStack Cloud Provider with Kubeadm" +date: 2020-01-20 +slug: Deploying-External-OpenStack-Cloud-Provider-with-Kubeadm --- This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. From 589f961f3ced93ce0c4ff107f55e83883794e918 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:02:54 +1100 Subject: [PATCH 21/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 35b81bbb60837..d435248ebdd6a 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -47,7 +47,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|6781-6784|Weave| |UDP|6783-6784|Weave| -The master needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. +The control plane needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts. For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to /etc/hosts and set hostname to master1. 
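A quick way to double-check the requirement above, that the hostname matches the instance name in Nova, is to compare the two directly (using the example name `master1`):
```
# On the VM: the static hostname the kubelet will register with
hostnamectl --static

# From a machine with OpenStack credentials: the instance name in Nova
openstack server show master1 -f value -c name
```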
From b958f0ddc745bc03306d3e6fd9f2d012abf8361b Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:03:48 +1100 Subject: [PATCH 22/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index d435248ebdd6a..7f1c5e2026294 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -60,7 +60,7 @@ Next, we'll follow official documents to install docker and Kubernetes using kub Install docker following the steps from the [Container Runtime Documentation.](/docs/setup/production-environment/container-runtimes/) -Note that it is recommend to use systemd as the cgroup driver for Kubernetes. +Note that it is best practice to use systemd as the cgroup driver for Kubernetes. If you use internal repository servers, add them to docker's config too. ``` # Install Docker CE From ca798f0e9e8bfc1b2e42a9a6743d612586c86f6f Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:49:18 +1100 Subject: [PATCH 23/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...l-OpenStack-Cloud-Provider-With-Kubeadm.md | 32 ++++++++++++------- 1 file changed, 21 insertions(+), 11 deletions(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 7f1c5e2026294..7446b4e000e8e 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -6,6 +6,8 @@ slug: Deploying-External-OpenStack-Cloud-Provider-with-Kubeadm --- This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. +### Preparation in OpenStack + This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. * A project/tenant for this kubernetes cluster @@ -48,17 +50,19 @@ The security group will have the following rules to open ports for Kubernetes. |UDP|6783-6784|Weave| The control plane needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. -If the hostname is not resolvable, add it to /etc/hosts. +If the hostname is not resolvable, add it to `/etc/hosts`. -For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to /etc/hosts and set hostname to master1. +For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to `/etc/hosts` and set hostname to master1. ``` echo "192.168.1.4 master1" >> /etc/hosts hostnamectl set-hostname master1 ``` +### Install Docker and Kubernetes + Next, we'll follow official documents to install docker and Kubernetes using kubeadm. 
-Install docker following the steps from the [Container Runtime Documentation.](/docs/setup/production-environment/container-runtimes/) +Install docker following the steps from the [Container Runtime](/docs/setup/production-environment/container-runtimes/) Documentation. Note that it is best practice to use systemd as the cgroup driver for Kubernetes. If you use internal repository servers, add them to docker's config too. @@ -107,7 +111,7 @@ systemctl restart docker systemctl enable docker ``` -Install kubeadm following the steps in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ +Install kubeadm following the steps from the [Installing Kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) documentation. ``` cat < /etc/yum.repos.d/kubernetes.repo @@ -141,7 +145,7 @@ lsmod | grep br_netfilter modprobe br_netfilter ``` -The official document about how to create a single-master cluster can be found in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ +The official document about how to create a single-master cluster can be found from the [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) documentation. We'll largely follow that document but also add additional things for the cloud provider. To make things more clear, we'll use a kubeadm-config.yml for the master. @@ -237,7 +241,7 @@ kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.c kubectl apply -f cloud-config-secret.yaml ``` -Get ca certs of OpenStack API endpoints and put it in /etc/kubernetes/ca.pem. +Get ca certs of OpenStack API endpoints and put it in `/etc/kubernetes/ca.pem`. Create RBAC resources. ``` @@ -247,7 +251,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager will only run on the master, so if there are multiple masters, multiple pods will be run for high availability. -Create the DaemonSet yaml, openstack-cloud-controller-manager-ds.yaml, and apply it. +Create the DaemonSet yaml, `openstack-cloud-controller-manager-ds.yaml`, and apply it. ``` --- @@ -356,7 +360,7 @@ PodCIDR: 10.224.0.0/24 ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 ``` -Now install your favourite CNI and the master node will become ready. +Now install your favourite CNI and the control plane node will become ready. For example, to install weave net, run this command ``` @@ -378,7 +382,7 @@ kubeadm token create --print-join-command ``` -Create kubeadm-config.yml for worker nodes with the above token and ca cert hash. +Create `kubeadm-config.yml` for worker nodes with the above token and ca cert hash. ``` apiVersion: kubeadm.k8s.io/v1beta2 discovery: @@ -403,6 +407,8 @@ At this stage we'll have a working Kubernetes cluster with an external OpenStack The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. If Kubernetes wants to attach a persistent volume to a pod, it can find out which OpenStack VM the pod is running on from the mapping, and attach the underlying OpenStack volume to the VM accordingly. 
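That mapping is recorded in each node's `providerID`, so it can be inspected directly, for example with the node and VM UUID shown earlier:
```
# The provider ID embeds the Nova VM UUID discovered by the cloud provider
kubectl get node master1 -o jsonpath='{.spec.providerID}{"\n"}'
# openstack:///548e3c46-2477-4ce2-968b-3de1314560a5

# The same UUID resolves back to the VM in OpenStack
openstack server show 548e3c46-2477-4ce2-968b-3de1314560a5 -f value -c name
```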
+### Deploy Cinder CSI + The integration with Cinder is provided by an external Cinder CSI plugin, as described in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md We'll perform the following steps to install the Cinder CSI plugin. @@ -419,7 +425,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele The Cinder CSI plugin includes a controller plugin and a node plugin. The controller communicates with Kubernetes APIs and Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in-turn runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion. -Create cinder-csi-controllerplugin.yaml and apply it to create csi controller. +Create `cinder-csi-controllerplugin.yaml` and apply it to create csi controller. ``` kind: Service apiVersion: v1 @@ -534,7 +540,7 @@ spec: ``` -Create cinder-csi-nodeplugin.yaml and apply it to create csi node. +Create `cinder-csi-nodeplugin.yaml` and apply it to create csi node. ``` kind: DaemonSet apiVersion: apps/v1 @@ -745,3 +751,7 @@ If we go back to OpenStack, we can see the Cinder volume is mounted to the worke +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ``` + +### Summary + +In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with OpenStack using an external OpenStack cloud provider. Then on this Kubernetes cluster we deployed Cinder CSI plugin which can create Cinder volumes and expose them in Kubernetes as persistent volumes. From a6971e52e2ced0b8c48f6e469f736eb6d55c8fb2 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:50:10 +1100 Subject: [PATCH 24/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 7446b4e000e8e..2eb52c869dcf8 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -185,7 +185,7 @@ Now we'll create cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. Note that the tenant here is the one we created for all Kubernets VMs in the beginning. All VMs should be launched in this project/tenant. In addition you need to create a user in this tenant for Kubernetes to do queries. -The ca-file is the CA root certitiface for OpenStack's API endpoint https://openstack.cloud:5000/v3 +The ca-file is the CA root certificate for OpenStack's API endpoint, for example `https://openstack.cloud:5000/v3` At the time of writing the cloud provider doesn't allow insecure connections (skip CA check). 
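If your cloud operator hasn't published the CA bundle anywhere convenient, one possible way to grab the certificate chain presented by Keystone is with openssl. This is only a sketch, assuming the example endpoint `openstack.cloud:5000`; it trusts whatever the server sends, so verify the result against the CA your operator actually publishes:

```shell
# Save the PEM certificates presented by the Keystone endpoint
openssl s_client -connect openstack.cloud:5000 -showcerts </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > /etc/kubernetes/ca.pem
```

Note that servers usually present only the leaf and intermediate certificates, so you may still need to obtain the root CA itself from your operator.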
``` From 2cb28bb20d55d6be4ec21dc8ccbd5b5bc6a6b125 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:50:26 +1100 Subject: [PATCH 25/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 2eb52c869dcf8..c11802d34c92b 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -362,7 +362,7 @@ ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 ``` Now install your favourite CNI and the control plane node will become ready. -For example, to install weave net, run this command +For example, to install weave net, run this command: ``` kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` From 623a995b6b0f95fb9988aa38842586ddbfca01be Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:51:08 +1100 Subject: [PATCH 26/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c11802d34c92b..c48914e988077 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -409,7 +409,7 @@ If Kubernetes wants to attach a persistent volume to a pod, it can find out whic ### Deploy Cinder CSI -The integration with Cinder is provided by an external Cinder CSI plugin, as described in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md +The integration with Cinder is provided by an external Cinder CSI plugin, as described in [the Cinder CSI documentation](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) We'll perform the following steps to install the Cinder CSI plugin. Firstly, create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. From bf9ffa4dbc7418264cbcbce0f2a233f9a7fb1dc6 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:57:47 +1100 Subject: [PATCH 27/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...g-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c48914e988077..133447cb87570 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -46,8 +46,10 @@ The security group will have the following rules to open ports for Kubernetes. 
|TCP|9099|Calico felix (health check)| |UDP|8285|Flannel| |UDP|8472|Flannel| -|TCP|6781-6784|Weave| -|UDP|6783-6784|Weave| +|TCP|6781-6784|Weave net| +|UDP|6783-6784|Weave net| + +(CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group.) The control plane needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to `/etc/hosts`. @@ -233,7 +235,7 @@ Taints: node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoSchedule ...... ``` -Now deploy openstack cloud controller manager into the cluster as per the instruction: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md +Now deploy openstack cloud controller manager into the cluster as per the [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md) documentation. Create a secret with cloud-config for openstack cloud provider. ``` @@ -409,7 +411,7 @@ If Kubernetes wants to attach a persistent volume to a pod, it can find out whic ### Deploy Cinder CSI -The integration with Cinder is provided by an external Cinder CSI plugin, as described in [the Cinder CSI documentation](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) +The integration with Cinder is provided by an external Cinder CSI plugin, as described in the [Cinder CSI](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) documentation. We'll perform the following steps to install the Cinder CSI plugin. Firstly, create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. From c57a8ba6b8bb054771a33d43300a08eaf5279ca1 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:58:13 +1100 Subject: [PATCH 28/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 133447cb87570..94b07186783ac 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -49,7 +49,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|6781-6784|Weave net| |UDP|6783-6784|Weave net| -(CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group.) +CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group. The control plane needs at least 2 cores and 4GB RAM. 
After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to `/etc/hosts`. From 68f633f4b3fc6e797636de66cde88e8e54efac02 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Sun, 19 Jan 2020 12:58:38 +1100 Subject: [PATCH 29/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 94b07186783ac..a9dac26a1bd4a 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -51,7 +51,7 @@ The security group will have the following rules to open ports for Kubernetes. CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group. -The control plane needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. +The control plane node needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to `/etc/hosts`. For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to `/etc/hosts` and set hostname to master1. From bc1fa52857af58e4c616ffc121a0b3e8abcf191d Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:10:28 +1100 Subject: [PATCH 30/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index a9dac26a1bd4a..9beb762902bbf 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -10,7 +10,7 @@ This document describes how to install a single-master Kubernetes cluster v1.15 This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. 
-* A project/tenant for this kubernetes cluster +* A project/tenant for this Kubernetes cluster * a user in this project for Kubernetes, to query node information and attach volumes etc * a private network and subnet * a router for this private network and connect it to a public network for floating IPs From beb3de8b7546572f5adf9c6e5e5d7ef7dadd65e6 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:10:44 +1100 Subject: [PATCH 31/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 9beb762902bbf..308664849659a 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -38,7 +38,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|10255|Read-only Kubelet API| |TCP|30000-32767|NodePort Services| -**CNI Ports on both master and worker nodes** +**CNI ports on both master and worker nodes** |Protocol | Port Number | Description| |----------|-------------|------------| From 6c12f2ae59e9ca2721b556b637a7f3bc125e81df Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:11:23 +1100 Subject: [PATCH 32/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 308664849659a..9fdfbc0cf50d2 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -64,7 +64,7 @@ hostnamectl set-hostname master1 Next, we'll follow official documents to install docker and Kubernetes using kubeadm. -Install docker following the steps from the [Container Runtime](/docs/setup/production-environment/container-runtimes/) Documentation. +Install Docker following the steps from the [container runtime](/docs/setup/production-environment/container-runtimes/) documentation. Note that it is best practice to use systemd as the cgroup driver for Kubernetes. If you use internal repository servers, add them to docker's config too. 
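For instance, both settings can live in `/etc/docker/daemon.json`. This is a minimal sketch; `registry.example.internal:5000` is a hypothetical mirror, and in practice you would merge these keys into the daemon.json created during the install steps rather than replace it:

```shell
# Example daemon.json keys; combine with any other settings you already use
# (log-driver, storage-driver, ...). "insecure-registries" is only needed if
# the internal mirror has no trusted TLS certificate.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.example.internal:5000"],
  "insecure-registries": ["registry.example.internal:5000"]
}
EOF

systemctl restart docker
```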
From 459fe0dc93d62718a512f51af1f1cc5aadb96009 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:11:48 +1100 Subject: [PATCH 33/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 9fdfbc0cf50d2..3964d6c4e4126 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -224,7 +224,7 @@ With the initialization completed, copy admin config to .kube sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` -At this stage, the control plane node is created but not ready. All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and waiting for being initialized by cloud-controller-manager. +At this stage, the control plane node is created but not ready. All the nodes have the taint `node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule` and are waiting to be initialized by the cloud-controller-manager. ``` # kubectl describe no master1 Name: master1 From e6571b804a78539463dd57024fd636661759050f Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:12:04 +1100 Subject: [PATCH 34/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 3964d6c4e4126..7723e640a8eb6 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -235,7 +235,7 @@ Taints: node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoSchedule ...... ``` -Now deploy openstack cloud controller manager into the cluster as per the [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md) documentation. +Now deploy the OpenStack cloud controller manager into the cluster, following [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md). Create a secret with cloud-config for openstack cloud provider. 
``` From 0bd62b38c11608c533f45ef93f0b841780356230 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:12:22 +1100 Subject: [PATCH 35/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 7723e640a8eb6..7754ed383124a 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -243,7 +243,7 @@ kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.c kubectl apply -f cloud-config-secret.yaml ``` -Get ca certs of OpenStack API endpoints and put it in `/etc/kubernetes/ca.pem`. +Get the CA certificate for OpenStack API endpoints and put that into `/etc/kubernetes/ca.pem`. Create RBAC resources. ``` From 938cafa6652974a433805ecbe986c28038311201 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:12:41 +1100 Subject: [PATCH 36/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 7754ed383124a..029a4521a0647 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -253,7 +253,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager will only run on the master, so if there are multiple masters, multiple pods will be run for high availability. -Create the DaemonSet yaml, `openstack-cloud-controller-manager-ds.yaml`, and apply it. +Create `openstack-cloud-controller-manager-ds.yaml` containing the following manifests, then apply it. ``` --- From 015c655d6751e5c85a6d84cb8f6f2843967a7cd6 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:15:17 +1100 Subject: [PATCH 37/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 029a4521a0647..c87950586384d 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -89,7 +89,7 @@ yum update && yum install docker-ce-18.06.2.ce mkdir /etc/docker -# Setup daemon. 
+# Configure the Docker daemon cat > /etc/docker/daemon.json < Date: Wed, 22 Jan 2020 11:15:52 +1100 Subject: [PATCH 38/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c87950586384d..c6f5197b113bc 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -150,7 +150,7 @@ modprobe br_netfilter The official document about how to create a single-master cluster can be found from the [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) documentation. We'll largely follow that document but also add additional things for the cloud provider. -To make things more clear, we'll use a kubeadm-config.yml for the master. +To make things more clear, we'll use a `kubeadm-config.yml` for the master. In this config we specify to use an external OpenStack cloud provider, and where to find its config. We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes. From 671d6c9e0bd63e23e8742a58009bc0b2fd8f3b60 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:16:11 +1100 Subject: [PATCH 39/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index c6f5197b113bc..06a84fced9320 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -184,7 +184,7 @@ networking: ``` Now we'll create cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. -Note that the tenant here is the one we created for all Kubernets VMs in the beginning. +Note that the tenant here is the one we created for all Kubernetes VMs in the beginning. All VMs should be launched in this project/tenant. In addition you need to create a user in this tenant for Kubernetes to do queries. 
The ca-file is the CA root certificate for OpenStack's API endpoint, for example `https://openstack.cloud:5000/v3` From 3538645ca3435a905ccd6ad443ec193bb3b7dc63 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Wed, 22 Jan 2020 11:32:43 +1100 Subject: [PATCH 40/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...l-OpenStack-Cloud-Provider-With-Kubeadm.md | 57 ++++++++++--------- 1 file changed, 29 insertions(+), 28 deletions(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 06a84fced9320..0cdd8b0924742 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -38,7 +38,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|10255|Read-only Kubelet API| |TCP|30000-32767|NodePort Services| -**CNI ports on both master and worker nodes** +**CNI ports on both control plane and worker nodes** |Protocol | Port Number | Description| |----------|-------------|------------| @@ -46,8 +46,8 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|9099|Calico felix (health check)| |UDP|8285|Flannel| |UDP|8472|Flannel| -|TCP|6781-6784|Weave net| -|UDP|6783-6784|Weave net| +|TCP|6781-6784|Weave Net| +|UDP|6783-6784|Weave Net| CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group. @@ -55,7 +55,7 @@ The control plane node needs at least 2 cores and 4GB RAM. After the VM is launc If the hostname is not resolvable, add it to `/etc/hosts`. For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to `/etc/hosts` and set hostname to master1. -``` +```shell echo "192.168.1.4 master1" >> /etc/hosts hostnamectl set-hostname master1 @@ -68,7 +68,7 @@ Install Docker following the steps from the [container runtime](/docs/setup/prod Note that it is best practice to use systemd as the cgroup driver for Kubernetes. If you use internal repository servers, add them to docker's config too. -``` +```shell # Install Docker CE ## Set up the repository ### Install required packages. @@ -115,7 +115,7 @@ systemctl enable docker Install kubeadm following the steps from the [Installing Kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) documentation. -``` +```shell cat < /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes @@ -127,6 +127,7 @@ gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cl EOF # Set SELinux in permissive mode (effectively disabling it) +# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux setenforce 0 sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config @@ -154,7 +155,7 @@ To make things more clear, we'll use a `kubeadm-config.yml` for the master. In this config we specify to use an external OpenStack cloud provider, and where to find its config. We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes. 
-``` +```yaml apiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration nodeRegistration: @@ -190,7 +191,7 @@ In addition you need to create a user in this tenant for Kubernetes to do querie The ca-file is the CA root certificate for OpenStack's API endpoint, for example `https://openstack.cloud:5000/v3` At the time of writing the cloud provider doesn't allow insecure connections (skip CA check). -``` +```ini [Global] region=RegionOne username=username @@ -213,19 +214,19 @@ ipv6-support-disabled=false ``` Next run kubeadm to initiate the master -``` +```shell kubeadm init --config=kubeadm-config.yml ``` With the initialization completed, copy admin config to .kube -``` +```shell mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` At this stage, the control plane node is created but not ready. All the nodes have the taint `node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule` and are waiting to be initialized by the cloud-controller-manager. -``` +```console # kubectl describe no master1 Name: master1 Roles: master @@ -238,7 +239,7 @@ Taints: node-role.kubernetes.io/master:NoSchedule Now deploy the OpenStack cloud controller manager into the cluster, following [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md). Create a secret with cloud-config for openstack cloud provider. -``` +```shell kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml kubectl apply -f cloud-config-secret.yaml ``` @@ -246,7 +247,7 @@ kubectl apply -f cloud-config-secret.yaml Get the CA certificate for OpenStack API endpoints and put that into `/etc/kubernetes/ca.pem`. Create RBAC resources. -``` +```shell kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml ``` @@ -255,7 +256,7 @@ We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager will only run on the master, so if there are multiple masters, multiple pods will be run for high availability. Create `openstack-cloud-controller-manager-ds.yaml` containing the following manifests, then apply it. -``` +```yaml --- apiVersion: v1 kind: ServiceAccount @@ -348,7 +349,7 @@ spec: ``` When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you'll see the VM's UUID in OpenStack. -``` +```console # kubectl describe no master1 Name: master1 Roles: master @@ -365,7 +366,7 @@ ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 Now install your favourite CNI and the control plane node will become ready. For example, to install weave net, run this command: -``` +```shell kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` @@ -375,7 +376,7 @@ Firstly, install docker and kubeadm in the same way as how they were installed i To join them to the cluster we need a token and ca cert hash from the output of master installation. If it is expired or lost we can recreate it using these commands. 
-``` +```shell # check if token is expired kubeadm token list @@ -385,7 +386,7 @@ kubeadm token create --print-join-command ``` Create `kubeadm-config.yml` for worker nodes with the above token and ca cert hash. -``` +```yaml apiVersion: kubeadm.k8s.io/v1beta2 discovery: bootstrapToken: @@ -401,7 +402,7 @@ nodeRegistration: apiServerEndpoint is the control plane node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. Run kubeadm and the worker nodes will be joined to the cluster. -``` +```shell kubeadm join --config kubeadm-config.yml ``` @@ -415,12 +416,12 @@ The integration with Cinder is provided by an external Cinder CSI plugin, as des We'll perform the following steps to install the Cinder CSI plugin. Firstly, create a secret with CA certs for OpenStack's API endpoints. It is the same cert file as what we use in cloud provider above. -``` +```shell kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml kubectl apply -f openstack-ca-cert.yaml ``` Then create RBAC resources. -``` +```shell kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml ``` @@ -428,7 +429,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele The Cinder CSI plugin includes a controller plugin and a node plugin. The controller communicates with Kubernetes APIs and Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in-turn runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion. Create `cinder-csi-controllerplugin.yaml` and apply it to create csi controller. -``` +```yaml kind: Service apiVersion: v1 metadata: @@ -543,7 +544,7 @@ spec: Create `cinder-csi-nodeplugin.yaml` and apply it to create csi node. -``` +```yaml kind: DaemonSet apiVersion: apps/v1 metadata: @@ -664,7 +665,7 @@ spec: ``` When they are both running, create a storage class for Cinder. -``` +```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: @@ -672,7 +673,7 @@ metadata: provisioner: csi-cinderplugin ``` Then we can create a PVC with this class. -``` +```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: @@ -688,7 +689,7 @@ spec: ``` When the PVC is created, a Cinder volume is created correspondingly. -``` +```console # kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s @@ -697,7 +698,7 @@ myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO In OpenStack the volume name will match the Kubernetes persistent volume generated name. In this example it would be: _pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad_ Now we can create a pod with the PVC. -``` +```yaml apiVersion: v1 kind: Pod metadata: @@ -721,7 +722,7 @@ spec: ``` When the pod is running, the volume will be attached to the pod. If we go back to OpenStack, we can see the Cinder volume is mounted to the worker node where the pod is running on. 
-``` +```console # openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | From 155c835c22685a80c74366430f142d81e87a223c Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Thu, 30 Jan 2020 16:47:57 +1100 Subject: [PATCH 41/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 0cdd8b0924742..6cf7d4565ee39 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -8,7 +8,7 @@ This document describes how to install a single-master Kubernetes cluster v1.15 ### Preparation in OpenStack -This cluster will be running on OpenStack VMs so we'll create a few things in OpenStack for it. +This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first. * A project/tenant for this Kubernetes cluster * a user in this project for Kubernetes, to query node information and attach volumes etc From 0b6eb7d91e3b72d29458a1a6b1bb9e84f3fc4e39 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Thu, 30 Jan 2020 16:48:32 +1100 Subject: [PATCH 42/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 6cf7d4565ee39..1fa40817b5243 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -10,7 +10,7 @@ This document describes how to install a single-master Kubernetes cluster v1.15 This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first. -* A project/tenant for this Kubernetes cluster +* A project/tenant for this Kubernetes cluster. 
* a user in this project for Kubernetes, to query node information and attach volumes etc * a private network and subnet * a router for this private network and connect it to a public network for floating IPs From 4ec762cefd47b50153ca455a6b5e59097605649b Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Thu, 30 Jan 2020 16:58:12 +1100 Subject: [PATCH 43/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...l-OpenStack-Cloud-Provider-With-Kubeadm.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 1fa40817b5243..388af64fa3cfb 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -4,22 +4,22 @@ title: "Deploying External OpenStack Cloud Provider with Kubeadm" date: 2020-01-20 slug: Deploying-External-OpenStack-Cloud-Provider-with-Kubeadm --- -This document describes how to install a single-master Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. +This document describes how to install a single control-plane Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes. ### Preparation in OpenStack This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first. * A project/tenant for this Kubernetes cluster. -* a user in this project for Kubernetes, to query node information and attach volumes etc -* a private network and subnet -* a router for this private network and connect it to a public network for floating IPs -* a security group for all Kubernetes VMs -* a VM as control plane node and a few VMs as worker nodes +* A user in this project for Kubernetes, to query node information and attach volumes etc +* A private network and subnet +* A router for this private network and connect it to a public network for floating IPs +* A security group for all Kubernetes VMs +* A VM as control-plane node and a few VMs as worker nodes The security group will have the following rules to open ports for Kubernetes. -**Control Plane Node** +**Control-Plane Node** |Protocol | Port Number | Description| |----------|-------------|------------| @@ -38,7 +38,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|10255|Read-only Kubelet API| |TCP|30000-32767|NodePort Services| -**CNI ports on both control plane and worker nodes** +**CNI ports on both control-plane and worker nodes** |Protocol | Port Number | Description| |----------|-------------|------------| @@ -51,7 +51,7 @@ The security group will have the following rules to open ports for Kubernetes. CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group. -The control plane node needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. +The control-plane node needs at least 2 cores and 4GB RAM. 
After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to `/etc/hosts`. For example, if the VM is called master1, and it has an internal IP 192.168.1.4. Add that to `/etc/hosts` and set hostname to master1. @@ -148,10 +148,10 @@ lsmod | grep br_netfilter modprobe br_netfilter ``` -The official document about how to create a single-master cluster can be found from the [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) documentation. +The official document about how to create a single control-plane cluster can be found from the [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) documentation. We'll largely follow that document but also add additional things for the cloud provider. -To make things more clear, we'll use a `kubeadm-config.yml` for the master. +To make things more clear, we'll use a `kubeadm-config.yml` for the control-plane node. In this config we specify to use an external OpenStack cloud provider, and where to find its config. We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes. @@ -213,7 +213,7 @@ public-network-name=public ipv6-support-disabled=false ``` -Next run kubeadm to initiate the master +Next run kubeadm to initiate the control-plane node ```shell kubeadm init --config=kubeadm-config.yml ``` @@ -225,7 +225,7 @@ With the initialization completed, copy admin config to .kube sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` -At this stage, the control plane node is created but not ready. All the nodes have the taint `node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule` and are waiting to be initialized by the cloud-controller-manager. +At this stage, the control-plane node is created but not ready. All the nodes have the taint `node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule` and are waiting to be initialized by the cloud-controller-manager. ```console # kubectl describe no master1 Name: master1 @@ -253,7 +253,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele ``` We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. -The manager will only run on the master, so if there are multiple masters, multiple pods will be run for high availability. +The manager will only run on the control-plane node, so if there are multiple control-plane nodes, multiple pods will be run for high availability. Create `openstack-cloud-controller-manager-ds.yaml` containing the following manifests, then apply it. ```yaml @@ -363,7 +363,7 @@ PodCIDR: 10.224.0.0/24 ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 ``` -Now install your favourite CNI and the control plane node will become ready. +Now install your favourite CNI and the control-plane node will become ready. For example, to install weave net, run this command: ```shell @@ -372,8 +372,8 @@ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl versio Next we'll set up worker nodes. -Firstly, install docker and kubeadm in the same way as how they were installed in the master. -To join them to the cluster we need a token and ca cert hash from the output of master installation. +Firstly, install docker and kubeadm in the same way as how they were installed in the control-plane node. 
+To join them to the cluster we need a token and ca cert hash from the output of control-plane node installation. If it is expired or lost we can recreate it using these commands. ```shell @@ -399,7 +399,7 @@ nodeRegistration: cloud-provider: "external" ``` -apiServerEndpoint is the control plane node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. +apiServerEndpoint is the control-plane node, token and caCertHashes can be taken from the join command printed in the output of 'kubeadm token create' command. Run kubeadm and the worker nodes will be joined to the cluster. ```shell From 2279c021715b6fb9754bcf098d6375d3fa53afdf Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:07:17 +1100 Subject: [PATCH 44/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 388af64fa3cfb..11dadfbedd9c1 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -15,7 +15,7 @@ This cluster runs on OpenStack VMs, so let's create a few things in OpenStack fi * A private network and subnet * A router for this private network and connect it to a public network for floating IPs * A security group for all Kubernetes VMs -* A VM as control-plane node and a few VMs as worker nodes +* A VM as a control-plane node and a few VMs as worker nodes The security group will have the following rules to open ports for Kubernetes. From 2ba5282875431e4935cfd73e82d09430b1abe3a5 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:07:29 +1100 Subject: [PATCH 45/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 11dadfbedd9c1..42de0d69dc548 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -62,7 +62,7 @@ hostnamectl set-hostname master1 ``` ### Install Docker and Kubernetes -Next, we'll follow official documents to install docker and Kubernetes using kubeadm. +Next, we'll follow the official documents to install docker and Kubernetes using kubeadm. Install Docker following the steps from the [container runtime](/docs/setup/production-environment/container-runtimes/) documentation. 
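Once Docker is installed, it is worth checking which cgroup driver the daemon actually ended up with. A quick check, assuming the Docker CLI is available on the node:

```shell
# Should print "Cgroup Driver: systemd" once the daemon.json exec-opts are in place
docker info 2>/dev/null | grep -i "cgroup driver"
```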
From 3cd6546f3452d3b1263f4b58b835f0afecd6b8c6 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:07:58 +1100 Subject: [PATCH 46/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 42de0d69dc548..a0cd465919207 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -66,7 +66,7 @@ Next, we'll follow the official documents to install docker and Kubernetes using Install Docker following the steps from the [container runtime](/docs/setup/production-environment/container-runtimes/) documentation. -Note that it is best practice to use systemd as the cgroup driver for Kubernetes. +Note that it is a [best practice to use systemd as the cgroup driver](/docs/setup/production-environment/container-runtimes/#cgroup-drivers) for Kubernetes. If you use internal repository servers, add them to docker's config too. ```shell # Install Docker CE From aaf1b60191c542ef6fbf240ee6746e581948a8ce Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:08:34 +1100 Subject: [PATCH 47/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index a0cd465919207..b467c90ee0c0b 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -184,7 +184,7 @@ networking: dnsDomain: "cluster.local" ``` -Now we'll create cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. +Now we'll create the cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. Note that the tenant here is the one we created for all Kubernetes VMs in the beginning. All VMs should be launched in this project/tenant. In addition you need to create a user in this tenant for Kubernetes to do queries. 
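If the project and user don't exist yet, they can be created with the OpenStack CLI. The names `k8s-cluster` and `kubernetes` below are placeholders, this assumes admin credentials are loaded in the shell, and the role name varies between clouds (`member`, `_member_` or `Member`):

```shell
# Create the project/tenant that will hold all Kubernetes VMs and volumes
openstack project create k8s-cluster

# Create the user the cloud provider will authenticate as
openstack user create --project k8s-cluster --password-prompt kubernetes

# Grant the user a role in the project so it can query Nova/Cinder/Neutron
openstack role add --project k8s-cluster --user kubernetes member
```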
From 760115f978239179f6cd76adf5d3aba7f5ba9b17 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:08:56 +1100 Subject: [PATCH 48/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index b467c90ee0c0b..0729ea1e5dabc 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -238,7 +238,7 @@ Taints: node-role.kubernetes.io/master:NoSchedule ``` Now deploy the OpenStack cloud controller manager into the cluster, following [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md). -Create a secret with cloud-config for openstack cloud provider. +Create a secret with the cloud-config for the openstack cloud provider. ```shell kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml kubectl apply -f cloud-config-secret.yaml From dd6292c31a59c5673a10e307fef9f3227eb9d1c4 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:09:05 +1100 Subject: [PATCH 49/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 0729ea1e5dabc..a1c3bae48fb34 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -365,7 +365,7 @@ ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 ``` Now install your favourite CNI and the control-plane node will become ready. 
-For example, to install weave net, run this command: +For example, to install Weave Net, run this command: ```shell kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" ``` From 6b32b30995521d1a75047f988cc2df91d82e3ce8 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:09:35 +1100 Subject: [PATCH 50/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index a1c3bae48fb34..695bb545e4239 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -252,7 +252,7 @@ kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/rele kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml ``` -We'll run OpenStack cloud controller manager as a DaemonSet rather than a pod. +We'll run the OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager will only run on the control-plane node, so if there are multiple control-plane nodes, multiple pods will be run for high availability. Create `openstack-cloud-controller-manager-ds.yaml` containing the following manifests, then apply it. From 2c016fa4b11ed5d211fd4170a3cd6fff13211b88 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:09:51 +1100 Subject: [PATCH 51/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 695bb545e4239..2fecae670112f 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -67,7 +67,7 @@ Next, we'll follow the official documents to install docker and Kubernetes using Install Docker following the steps from the [container runtime](/docs/setup/production-environment/container-runtimes/) documentation. Note that it is a [best practice to use systemd as the cgroup driver](/docs/setup/production-environment/container-runtimes/#cgroup-drivers) for Kubernetes. -If you use internal repository servers, add them to docker's config too. +If you use an internal container registry, add them to the docker config. 
```shell # Install Docker CE ## Set up the repository From 60983de8aff064f7208c9c76c2307c9400b6bd99 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:10:23 +1100 Subject: [PATCH 52/54] Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 2fecae670112f..e092059672007 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -49,7 +49,7 @@ The security group will have the following rules to open ports for Kubernetes. |TCP|6781-6784|Weave Net| |UDP|6783-6784|Weave Net| -CNI specific ports are only required to be opened when that particular CNI plugin is used. In this instruction we use Weave net, thus only those Weave net ports, TCP 6781-6784 and UDP 6783-6784, need to be opened in the security group. +CNI specific ports are only required to be opened when that particular CNI plugin is used. In this guide, we will use Weave Net. Only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784), will need to be opened in the security group. The control-plane node needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to `/etc/hosts`. From bb621500f064dc7747059ca983cc66936178cdb5 Mon Sep 17 00:00:00 2001 From: Shunde Zhang Date: Fri, 7 Feb 2020 11:11:32 +1100 Subject: [PATCH 53/54] Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- .../Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index e092059672007..8f83c04ffb7ba 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -10,7 +10,7 @@ This document describes how to install a single control-plane Kubernetes cluster This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first. -* A project/tenant for this Kubernetes cluster. 
+* A project/tenant for this Kubernetes cluster * A user in this project for Kubernetes, to query node information and attach volumes etc * A private network and subnet * A router for this private network and connect it to a public network for floating IPs From 5b3c6533c51796adf4b191b0d2955acf96663739 Mon Sep 17 00:00:00 2001 From: Kaitlyn Barnard Date: Fri, 7 Feb 2020 09:31:10 -0800 Subject: [PATCH 54/54] Update and rename Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md to 2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md --- ...Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md} | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename content/en/blog/_posts/{Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md => 2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md} (99%) diff --git a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md b/content/en/blog/_posts/2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md similarity index 99% rename from content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md rename to content/en/blog/_posts/2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md index 8f83c04ffb7ba..eee67b3c6461b 100644 --- a/content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md +++ b/content/en/blog/_posts/2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md @@ -1,7 +1,7 @@ --- layout: blog title: "Deploying External OpenStack Cloud Provider with Kubeadm" -date: 2020-01-20 +date: 2020-02-07 slug: Deploying-External-OpenStack-Cloud-Provider-with-Kubeadm --- This document describes how to install a single control-plane Kubernetes cluster v1.15 with kubeadm on CentOS, and then deploy an external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes.