The steps for performing a UPI-based install are outlined here. Several Terraform templates are provided as an example to help model your own.
- Compute
- Network topology requirements
- DNS requirements
- Getting Ignition configs for machines
- Getting OS related assets for machines
- Configuring RHCOS VMs with Ignition configs
- Watching your installation (bootstrap_complete, cluster available)
- Example vSphere UPI deployment
The smallest OpenShift 4.x clusters require the following VMs:
- 1 bootstrap machine.
- 3 control plane machines.
- at least 1 worker machine.
NOTE: The cluster requires the bootstrap machine to deploy the OpenShift cluster onto the 3 control plane machines. You can remove the bootstrap machine after cluster bootstrapping is complete.
The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
All of the VMs created must reside within the same folder in all of the vCenters used. For example, if the VM folder used is named MyFolder, then all of the VMs running OpenShift must be in the MyFolder folder.
The disk UUID on the VMs must be enabled: the `disk.EnableUUID` value must be set to `True`. This step is necessary so that the VMDK always presents a consistent UUID to the VM, thus allowing the disk to be mounted properly.
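If you manage the VMs with `govc`, the setting can be applied with something like the following sketch; the property key casing and the VM inventory path are assumptions to verify against your environment:

```sh
# Enable consistent disk UUIDs on an existing VM (hypothetical inventory path).
$ govc vm.change -vm /your-datacenter/vm/your-vm -e disk.enableUUID=TRUE
```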
The minimum resource requirements cover:

- Processing
- Memory
- Storage
- Networking

[todo-link-to-minimum-resource-requirements]
OpenShift 4.x requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Before you install OpenShift, you must provision two load balancers.
- A load balancer for the control plane machines that targets port 6443 (Kubernetes APIServer) and port 22623 (Machine Config Server). Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster, and port 22623 must be accessible to nodes within the cluster.
- A load balancer for the machines that run the ingress router pods that balances ports 443 and 80. Both ports must be accessible to both clients external to the cluster and nodes within the cluster.
NOTE: A working configuration for the ingress router is required for an OpenShift 4.x cluster.
NOTE: The default configuration for the Cluster Ingress Operator deploys the ingress router to `worker` nodes in the cluster. The administrator needs to configure the ingress after the control plane has been bootstrapped.
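A minimal HAProxy sketch of the control plane load balancer described above is shown below; the choice of HAProxy, the hostnames, and the IP addresses are assumptions, not installer requirements. The ingress load balancer follows the same pattern for ports 80 and 443, targeting the machines that run the ingress router pods.

```
# Sketch: TCP pass-through load balancing for the Kubernetes APIServer (6443)
# and the Machine Config Server (22623). The bootstrap machine is included
# until cluster bootstrapping completes.
defaults
    mode tcp
    timeout connect 5s
    timeout client 1m
    timeout server 1m

frontend kube-api
    bind *:6443
    default_backend kube-api

backend kube-api
    server bootstrap       10.0.0.5:6443   check
    server control-plane-0 10.0.0.10:6443  check
    server control-plane-1 10.0.0.11:6443  check
    server control-plane-2 10.0.0.12:6443  check

frontend machine-config
    bind *:22623
    default_backend machine-config

backend machine-config
    server bootstrap       10.0.0.5:22623  check
    server control-plane-0 10.0.0.10:22623 check
    server control-plane-1 10.0.0.11:22623 check
    server control-plane-2 10.0.0.12:22623 check
```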
You must configure the network connectivity between machines to allow cluster components to communicate.
- etcd: As the etcd members are located on the control plane machines, each control plane machine requires connectivity to the etcd server, etcd peer, and etcd-metrics ports on every other control plane machine.
- OpenShift SDN: All the machines require connectivity to certain reserved ports on every other machine to establish in-cluster networking. For more details, refer to the [doc][sdn-ports].
- Kubernetes NodePort: All the machines require connectivity to the Kubernetes NodePort range 30000-32767 on every other machine for OpenShift platform components.
- OpenShift reserved: All the machines require connectivity to the reserved port ranges 10250-12252 and 9000-9999 on every other machine for OpenShift platform components.
All the RHCOS machines require networking to be configured in `initramfs` during boot to fetch their Ignition configs from the [Machine Config Server][machine-config-server]. During the initial boot, the machines require a DHCP server in order to establish a network connection to download their Ignition configs. After the initial boot, the machines can be configured to use a static IP address, although it is recommended that you continue to use DHCP to configure IP addresses after the initial boot.
- Kubernetes API: OpenShift 4.x requires the DNS records `api.$cluster_name.$base_domain` and `api-int.$cluster_name.$base_domain` to point to the load balancer targeting the control plane machines. Both records must be resolvable from all the nodes within the cluster. The `api.$cluster_name.$base_domain` record must also be resolvable by clients external to the cluster.
- etcd: For each control plane machine, OpenShift 4.x requires a DNS record `etcd-$idx.$cluster_name.$base_domain` pointing to the `$idx`'th control plane machine. The DNS record must resolve to a unicast IPv4 address for the control plane machine, and the records must be resolvable from all the nodes in the cluster.

  For each control plane machine, OpenShift 4.x also requires a SRV DNS record for the etcd server on that machine with priority `0`, weight `10`, and port `2380`. For a cluster with 3 control plane machines, the records look like:

      # _service._proto.name.                          TTL   class SRV priority weight port target.
      _etcd-server-ssl._tcp.$cluster_name.$base_domain 86400 IN    SRV 0        10     2380 etcd-0.$cluster_name.$base_domain.
      _etcd-server-ssl._tcp.$cluster_name.$base_domain 86400 IN    SRV 0        10     2380 etcd-1.$cluster_name.$base_domain.
      _etcd-server-ssl._tcp.$cluster_name.$base_domain 86400 IN    SRV 0        10     2380 etcd-2.$cluster_name.$base_domain.
- OpenShift Routes: OpenShift 4.x requires the DNS record `*.apps.$cluster_name.$base_domain` to point to the load balancer targeting the machines running the ingress router pods. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
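Purely as an illustration, the A records for a cluster named `test` under the base domain `example.com` might look like the following zone-file sketch; all IP addresses are hypothetical, and the SRV records shown above complete the set:

```
api.test.example.com.      IN  A  10.0.0.100  ; control plane load balancer
api-int.test.example.com.  IN  A  10.0.0.100  ; control plane load balancer
etcd-0.test.example.com.   IN  A  10.0.0.10   ; control plane machine 0
etcd-1.test.example.com.   IN  A  10.0.0.11   ; control plane machine 1
etcd-2.test.example.com.   IN  A  10.0.0.12   ; control plane machine 2
*.apps.test.example.com.   IN  A  10.0.0.101  ; ingress router load balancer
```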
The OpenShift Installer provides administrators with the various assets that are required to create an OpenShift cluster, namely:
- Ignition configs: The OpenShift Installer provides Ignition configs that should be used to configure the RHCOS-based bootstrap and control plane machines, using `bootstrap.ign` and `master.ign` respectively. The OpenShift Installer also provides `worker.ign`, which can be used to configure the RHCOS-based `worker` machines, but can also be used as a source for configuring RHEL-based machines [todo-link-to-BYO-RHEL].
- Admin kubeconfig: The OpenShift Installer provides a kubeconfig with admin-level privileges to the Kubernetes APIServer.

  NOTE: This kubeconfig is configured to use the `api.$cluster_name.$base_domain` DNS name to communicate with the Kubernetes APIServer.
The OpenShift Installer uses an Install Config to drive all install-time configuration.
An example install config for vSphere UPI is as follows:
apiVersion: v1beta4
## The base domain of the cluster. All DNS records will be sub-domains of this base and will also include the cluster name.
baseDomain: example.com
compute:
- name: worker
replicas: 1
controlPlane:
name: master
replicas: 3
metadata:
## The name for the cluster
name: test
platform:
vsphere:
## The hostname or IP address of the vCenter
vcenter: your.vcenter.server
## The name of the user for accessing the vCenter
username: your_vsphere_username
## The password associated with the user
password: your_vsphere_password
## The datacenter in the vCenter
datacenter: your_datacenter
## The default datastore to use.
defaultDatastore: your_datastore
## The pull secret that provides components in the cluster access to images for OpenShift components.
pullSecret: ''
## The default SSH key that will be programmed for `core` user.
sshKey: ''
Create a directory that will be used by the OpenShift Installer to provide all the assets. For example, `test-vsphere`,
$ mkdir test-vsphere
$ tree test-vsphere
test-vsphere
0 directories, 0 files
Copy your `install-config` to the `INSTALL_DIR`. For example, using `test-vsphere` as our `INSTALL_DIR`,
$ cp <your-install-config> test-vsphere/install-config.yaml
$ tree test-vsphere
test-vsphere
└── install-config.yaml
0 directories, 1 file
NOTE: The filename for the `install-config` in the `INSTALL_DIR` must be `install-config.yaml`.
Given that you have set up the `INSTALL_DIR` with the appropriate `install-config`, you can create the Ignition configs by using the `create ignition-configs` target. For example,
$ openshift-install --dir test-vsphere create ignition-configs
INFO Consuming "Install Config" from target directory
$ tree test-vsphere
test-vsphere
├── auth
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
1 directory, 5 files
TODO RHEL CoreOS does not have assets for vSphere.
Set the vApp properties of each VM to provide its Ignition config. The `guestinfo.ignition.config.data` property is the base64-encoded Ignition config, and the `guestinfo.ignition.config.data.encoding` property should be set to `base64`.
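For example, the base64-encoded value for `guestinfo.ignition.config.data` can be produced from an Ignition file like this (the paths are illustrative, and `-w0` assumes GNU coreutils):

```sh
$ base64 -w0 test-vsphere/master.ign > master.ign.b64
```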
The Ignition configs supplied in the vApp properties of the control plane and worker VMs should be copies of the `master.ign` and `worker.ign` files created by the OpenShift Installer.
The Ignition config supplied in the vApp properties of the bootstrap VM should be an Ignition config that has a URL from which the bootstrap VM can download the `bootstrap.ign` created by the OpenShift Installer. Note that the URL must be accessible by the bootstrap VM.
The Ignition config created by the OpenShift Installer cannot be used directly, because there is a size limit on the length of vApp properties and the Ignition config will exceed that size limit. An appending Ignition config such as the following can be used instead:
{
"ignition": {
"config": {
"append": [
{
"source": "bootstrap_ignition_config_url",
"verification": {}
}
]
},
"timeouts": {},
"version": "2.1.0"
},
"networkd": {},
"passwd": {},
"storage": {},
"systemd": {}
}
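The `bootstrap.ign` file therefore has to be hosted at a URL the bootstrap VM can reach, and that URL used as the `source` above. As one sketch, assuming the machine running the installer is reachable from the bootstrap network, a throwaway HTTP server can serve the install directory:

```sh
$ cd test-vsphere
$ python3 -m http.server 8080
```

The `bootstrap_ignition_config_url` placeholder above would then be `http://<host>:8080/bootstrap.ign`.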
The hostname of each control plane and worker machine must be resolvable from all nodes within the cluster.
Preferably, the hostname will be set via DHCP. If you need to manually set a hostname, you can create a hostname file by adding an entry to the `.storage.files` list in an Ignition config.
For example, the following Ignition config will create a hostname file that sets the hostname of a machine to `control-plane-0`.
{
"ignition": {
"config": {},
"timeouts": {},
"version": "2.1.0"
},
"networkd": {},
"passwd": {},
"storage": {
"files": [
{
"filesystem": "root",
"group": {},
"path": "/etc/hostname",
"user": {},
"contents": {
"source": "data:text/plain;charset=utf-8,control-plane-0",
"verification": {}
},
"mode": 420
}
]
},
"systemd": {}
}
Preferably, the IP address for each machine will be set via DHCP. If you need to use a static IP address, you can set one for a machine by creating an ifcfg file. You can create an ifcfg file by adding an entry to the `.storage.files` list in an Ignition config.
For example, the following Ignition config will create an ifcfg file that sets the IP address of the `ens192` device to 10.0.0.2.
{
"ignition": {
"config": {},
"timeouts": {},
"version": "2.1.0"
},
"networkd": {},
"passwd": {},
"storage": {
"files": [
{
"filesystem": "root",
"group": {},
"path": "/etc/sysconfig/network-scripts/ifcfg-ens192",
"user": {},
"contents": {
"source": "data:text/plain;charset=utf-8;base64,VFlQRT1FdGhlcm5ldApCT09UUFJPVE89bm9uZQpOQU1FPWVuczE5MgpERVZJQ0U9ZW5zMTkyCk9OQk9PVD15ZXMKSVBfQUREUj0xMC4wLjAuMgpQUkVGSVg9MjQKR0FURVdBWT0xMC4wLjAuMQpET01BSU49bXlkb21haW4uY29tCkROUzE9OC44LjguOAo=",
"verification": {}
},
"mode": 420
}
]
},
"systemd": {}
}
The ifcfg file will have the following contents.
TYPE=Ethernet
BOOTPROTO=none
NAME=ens192
DEVICE=ens192
ONBOOT=yes
IP_ADDR=10.0.0.2
PREFIX=24
GATEWAY=10.0.0.1
DOMAIN=mydomain.com
DNS1=8.8.8.8
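Assuming the contents above are saved locally as `ifcfg-ens192` (a hypothetical filename), the base64 payload embedded in the data URL can be regenerated with GNU `base64`, for example:

```sh
$ echo "data:text/plain;charset=utf-8;base64,$(base64 -w0 ifcfg-ens192)"
```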
The administrators can use the `wait-for bootstrap-complete` command of the OpenShift Installer to monitor cluster bootstrapping. The command succeeds when it notices the `bootstrap-complete` event from the Kubernetes APIServer. This event is generated by the bootstrap machine after the Kubernetes APIServer has been bootstrapped on the control plane machines. For example,
$ openshift-install --dir test-vsphere wait-for bootstrap-complete
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
INFO API v1.12.4+c53f462 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
The administrators can use the `wait-for install-complete` command of the OpenShift Installer to monitor install completion. The command succeeds when it notices that the Cluster Version Operator has completed rolling out the OpenShift cluster from the Kubernetes APIServer.
$ openshift-install --dir test-vsphere wait-for install-complete
INFO Waiting up to 30m0s for the cluster to initialize...
The Terraform [templates][upi-vsphere] provide an example of using the OpenShift Installer to create a vSphere UPI OpenShift cluster.
- Compute: Uses public IPv4 addresses for each machine, so that all the machines are accessible on the Internet.
- DNS and Load Balancing: Uses AWS Route53 to configure all the DNS records. Uses round-robin DNS (RRDNS) in place of load balancing solutions.
Refer to the prerequisites for using the example here.
Use the OpenShift Installer to create [Ignition configs](#getting-ignition-configs-for-machines) that will be used to create the bootstrap, control plane, and worker machines.
Use the example `create_tfvars.sh` script to create a Terraform variable file, and then edit the `tfvars` file in your favorite editor.
cd test-vsphere
create_tfvars.sh
At a minimum, you will need to provide values for the following variables.
- bootstrap_ignition_url
- bootstrap_ip
- control_plane_ips
- compute_ips
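As a purely illustrative sketch, a `terraform.tfvars` for the example might look like the following; the value types (strings vs. lists) are assumptions, so check the variable definitions that ship with the example:

```
bootstrap_ignition_url = "http://10.0.0.50:8080/bootstrap.ign"
bootstrap_ip           = "10.0.0.5"
control_plane_ips      = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
compute_ips            = ["10.0.0.20"]
```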
Move the `tfvars` file to the directory where the example Terraform configuration is located.
Initialize Terraform to download all the required providers. For more information on `terraform init` and Terraform providers, see the Terraform documentation.
terraform init
Create all the resources using Terraform by invoking `apply`:
terraform apply -auto-approve
Use the bootstrap [monitoring](#monitor-for-bootstrap-complete) to track when cluster bootstrapping has finished. After the Kubernetes APIServer has been bootstrapped on the control plane machines, the bootstrap VM can be destroyed as follows:
terraform apply -auto-approve -var 'bootstrap_complete=true'
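The `oc` commands in the remaining steps assume the admin kubeconfig created by the OpenShift Installer is in use; for example, with `test-vsphere` as the install directory (the path is illustrative):

```sh
$ export KUBECONFIG=/path/to/test-vsphere/auth/kubeconfig
$ oc get nodes
```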
To allow the Kubernetes APIServer to communicate with the kubelet running on nodes (for logs, `rsh`, etc.), the administrator needs to approve the CSR [requests][csr-requests] generated by each kubelet.
You can approve all `Pending` CSR requests using,
oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
The Cluster Image Registry [Operator][cluster-image-registry-operator] does not pick a storage backend for the vSphere platform. Therefore, the cluster operator will remain stuck in progressing, because it is waiting for the administrator to [configure][cluster-image-registry-operator-configuration] a storage backend for the image registry. You can pick `emptyDir` for non-production clusters by running:
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
Use the cluster-completion [monitoring](#monitor-for-cluster-completion) to track when the cluster has completely finished deploying.
Use `terraform destroy` to destroy all the resources for the cluster. For example,
terraform destroy -auto-approve