This project deploys the OpenShift Assisted Installer in Minikube and spawns libvirt VMs that represent bare metal hosts.
Table of contents
- Test-Infra
- Prerequisites
- Installation Guide
- OS parameters used for configuration
- Instructions
- Usage
- Full flow cases
- Run full flow with install
- Run full flow without install
- Run full flow with ipv6
- Run only deploy nodes (without pre deploy of all assisted service)
- Redeploy nodes
- Redeploy with assisted services
- Cleaning
- Install cluster
- Create cluster and download ISO
- Deploy Assisted Service and Monitoring stack
- `deploy_assisted_service` and Create cluster and download ISO
- start_minikube and Deploy UI and open port forwarding on port 6008, allows to connect to it from browser
- Kill all open port forwarding commands, will be part of destroy target
- Test `assisted-service` image
- Test installer, controller, `assisted-service` and agent images in the same flow
- In case you would like to build the image with a different `assisted-service` client
- Test with Authentication
- Single Node - Bootstrap in place with Assisted Service
- Single Node - Bootstrap in place with Assisted Service and IPv6
- On-prem
- CentOS 8 or RHEL 8 host
- File system that supports d_type
- Ideally on a bare metal host with at least 64G of RAM.
- Run as a user with password-less `sudo` access, or be ready to enter the `sudo` password for the prepare phase.
- Make sure to unset the KUBECONFIG variable in the same shell where you run `make`.
- Get a valid pull secret (JSON string) from redhat.com if you want to test the installation (not needed for testing only the discovery flow). Export it as:
export PULL_SECRET='<pull secret JSON>'
# or alternatively, define PULL_SECRET_FILE="/path/to/pull/secret/file"
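A quick way to sanity-check the exported pull secret before running anything (a hypothetical check, not part of this repository's Makefile) is to confirm it parses as JSON and contains the expected `auths` key:

```shell
# Hypothetical sanity check: a valid pull secret is a JSON object with
# an "auths" key. python3 is used here to avoid a jq dependency.
# The PULL_SECRET value below is a placeholder for illustration only.
PULL_SECRET='{"auths":{"cloud.openshift.com":{"auth":"ZXhhbXBsZQ=="}}}'
echo "$PULL_SECRET" | python3 -c 'import json,sys; d=json.load(sys.stdin); print("valid" if "auths" in d else "invalid")'
```

If the secret was copied incorrectly (truncated, or with shell-mangled quotes), the check raises a JSON decode error instead of printing `valid`.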
Check the Install Guide for installation instructions.
Variable | Description |
---|---|
AGENT_DOCKER_IMAGE | agent docker image to use, will update assisted-service config map with given value |
ASSISTED_SERVICE_HOST | FQDN or IP address to where assisted-service is deployed. Used when DEPLOY_TARGET="onprem". |
BASE_DNS_DOMAINS | base DNS domains that are managed by assisted-service, format: domain_name:domain_id/provider_type. |
BASE_DOMAIN | base domain, needed for DNS name, default: redhat.com |
CLUSTER_ID | cluster ID, used for the install_cluster command, default: the last spawned cluster |
CLUSTER_NAME | cluster name, used as prefix for virsh resources, default: test-infra-cluster |
DEPLOY_MANIFEST_PATH | the location of a manifest file that defines the image tags to be used |
DEPLOY_MANIFEST_TAG | the Git tag of a manifest file that defines image tags to be used |
DEPLOY_TAG | the tag to be used for all images (assisted-service, assisted-installer, agent, etc.); this overrides any other OS parameters |
DEPLOY_TARGET | Specifies where assisted-service will be deployed. Defaults to "minikube". "onprem" will deploy assisted-service in a pod on the localhost. |
AUTH_TYPE | configure the type of authentication assisted-service will use, default: none |
HTTPS_PROXY_URL | A proxy URL to use for creating HTTPS connections outside the cluster |
HTTP_PROXY_URL | A proxy URL to use for creating HTTP connections outside the cluster |
IMAGE_BUILDER | image-builder image to use, will update assisted-service config map with given value |
INSTALLER_IMAGE | assisted-installer image to use, will update assisted-service config map with given value |
IPv4 | Boolean value indicating if IPv4 is enabled. Default is yes |
IPv6 | Boolean value indicating if IPv6 is enabled. Default is no |
ISO | path to an ISO to spawn VMs with; if set, VMs will be spawned from this ISO without creating a cluster. The file must have the '.iso' suffix |
KUBECONFIG | kubeconfig file path, default: /.kube/config |
MASTER_MEMORY | memory for master VM, default: 16984MB |
NETWORK_CIDR | network CIDR to use for virsh VM network, default: "192.168.126.0/24" |
NETWORK_NAME | virsh network name for VMs creation, default: test-infra-net |
NO_PROXY_VALUES | A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying |
NUM_MASTERS | number of VMs to spawn as masters, default: 3 |
NUM_WORKERS | number of VMs to spawn as workers, default: 0 |
OCM_BASE_URL | OCM API URL used to communicate with OCM and AMS, default: https://api.integration.openshift.com/ |
OCM_CLIENT_ID | ID of Service Account used to communicate with OCM and AMS for Agent Auth and Authz |
OCM_CLIENT_SECRET | Password of Service Account used to communicate with OCM and AMS for Agent Auth and Authz |
OC_MODE | if set, use oc instead of minikube |
OC_SCHEME | Scheme for assisted-service url on oc, default: http |
OC_SERVER | server for oc login, required if oc-token is provided, default: https://api.ocp.prod.psi.redhat.com:6443 |
OC_TOKEN | token for oc login (an alternative for oc-user & oc-pass) |
OFFLINE_TOKEN | token used to fetch JWT tokens for assisted-service authentication (from https://cloud.redhat.com/openshift/token) |
OPENSHIFT_VERSION | OpenShift version to install, default: "4.7" |
PROXY | Set HTTP and HTTPS proxy with default proxy targets. The target is the default gateway in the network having the machine network CIDR |
PULL_SECRET | pull secret to use for the cluster installation command; the cluster cannot be installed without it |
PULL_SECRET_FILE | path to the file containing the pull secret to use for the cluster installation command; the cluster cannot be installed without it |
REMOTE_SERVICE_URL | URL to remote assisted-service - run infra on existing deployment |
ROUTE53_SECRET | Amazon Route 53 secret to use for DNS domains registration. |
SERVICE | assisted-service image to use |
SERVICE_BASE_URL | update assisted-service config map SERVICE_BASE_URL parameter with given URL, including port and protocol |
SERVICE_BRANCH | assisted-service branch to use, default: master |
SERVICE_NAME | assisted-service target service name, default: assisted-service |
SERVICE_REPO | assisted-service repository to use, default: https://github.com/openshift/assisted-service |
SSH_PUB_KEY | SSH public key to use for image generation, gives option to SSH to VMs, default: ssh_key/key_pub |
SSO_URL | URL used to fetch JWT tokens for assisted-service authentication |
WITH_AMS_SUBSCRIPTIONS | configure assisted-service to create AMS subscription for each registered cluster, default: false |
WORKER_MEMORY | memory for worker VM, default: 8892MB |
PUBLIC_CONTAINER_REGISTRIES | comma-separated list of registries that do not require authentication for pulling assisted installer images |
CHECK_CLUSTER_VERSION | If "True", the controller will wait for CVO to finish |
ENABLE_KUBE_API | If set, deploy assisted-service with Kube API controllers (minikube only) |
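Most of the parameters above can be passed directly on the make command line. A hypothetical invocation that overrides a few defaults from the table might look like (the values shown are illustrative, not recommended settings):

```shell
# Example only: variable names are taken from the table above.
export PULL_SECRET='<pull secret JSON>'
make run_full_flow_with_install \
    NUM_MASTERS=3 \
    NUM_WORKERS=2 \
    OPENSHIFT_VERSION=4.7 \
    CLUSTER_NAME=test-infra-cluster-example
```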
On the bare metal host:
Note: don't do it from the /root folder - it will break build image mounts and fail to run
dnf install -y git make
cd /home/test
git clone https://github.com/openshift/assisted-test-infra.git
When using this infra for the first time on a host, run:
make create_full_environment
This will install required packages, configure libvirt, pull relevant Docker images, and start Minikube.
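If you want to confirm the environment came up correctly, the usual pieces can be spot-checked by hand. These are standard minikube/libvirt/Docker commands, not targets of this repository:

```shell
# Spot-check what create_full_environment sets up.
minikube status        # the minikube cluster should report Running
sudo virsh net-list    # libvirt networks, including the test-infra network
docker images          # the pulled images
```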
There are different options for using test-infra; they can be found in the Makefile.
The following is a list of stages that will be run:
- Start Minikube if not started yet
- Deploy services for assisted deployment on Minikube
- Create cluster in the `assisted-service` service
- Download ISO image
- Spawn required number of VMs from downloaded ISO with parameters that can be configured by OS environment (check makefile)
- Wait until nodes are up and registered in `assisted-service`
- Set node roles in `assisted-service` by matching VM names (worker/master)
- Verify all nodes have the required hardware to start installation
- Install nodes
- Download `kubeconfig-noingress` to build/kubeconfig
- Wait until nodes are in the `installed` state, while verifying that they don't move to the `error` state
- Verify the cluster is in the `installed` state
- Download kubeconfig to build/kubeconfig
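Once the flow finishes, the downloaded kubeconfig can be used to inspect the freshly installed cluster. A sketch, assuming `oc` is installed and you are in the repository root (the path comes from the stages above):

```shell
# Point oc at the cluster installed by the flow.
export KUBECONFIG=$PWD/build/kubeconfig
oc get nodes
oc get clusterversion
```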
Note: Please make sure no previous cluster is running before running a new one (otherwise it will overwrite its build files).
To run the full flow, including installation:
make run_full_flow_with_install
Or, to run it together with create_full_environment (requires the `sudo` password):
make all
To run the flow without the installation stage:
make run_full_flow
To run the flow with default IPv6 settings and without install:
make run_full_flow_with_ipv6
This is identical to running:
make run_full_flow IPv4=no IPv6=yes PROXY=yes VIP_DHCP_ALLOCATION=no
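As noted in the parameters table, PROXY=yes targets the default gateway of the machine network CIDR. Assuming, for illustration, that the gateway is the first host address of the default IPv4 NETWORK_CIDR, it can be computed like this:

```shell
# Compute the first host address (assumed to be the default gateway)
# of the default NETWORK_CIDR from the parameters table.
python3 -c 'import ipaddress; print(next(ipaddress.ip_network("192.168.126.0/24").hosts()))'
```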
make deploy_nodes or make deploy_nodes_with_install
make redeploy_nodes or make redeploy_nodes_with_install
make redeploy_all or make redeploy_all_with_install
The following sections show how to clean up the test-infra environment.
make destroy
make destroy_nodes
Sometimes you may need to delete all libvirt resources:
make delete_all_virsh_resources
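To see what would be removed first, the test-infra resources can be listed by their name prefix. These are plain virsh commands, assuming the default CLUSTER_NAME/NETWORK_NAME prefix of test-infra:

```shell
# List libvirt domains and networks created by test-infra
# (default name prefix "test-infra", per the parameters table).
sudo virsh list --all --name | grep '^test-infra' || true
sudo virsh net-list --all | grep 'test-infra' || true
```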
Install a cluster after the nodes were deployed. Can take CLUSTER_ID as an OS environment variable:
make install_cluster
make download_iso
make run
make deploy_monitoring
make download_iso_for_remote_use
Run start_minikube, deploy the UI, and open port forwarding on port 6008, allowing connection to it from a browser:
make deploy_ui
make kill_all_port_forwardings
make redeploy_all SERVICE=<image to test>
or
export PULL_SECRET='<pull secret JSON>'; make redeploy_all_with_install SERVICE=<image to test>
make redeploy_all AGENT_DOCKER_IMAGE=<image to test>
or
make redeploy_all_with_install AGENT_DOCKER_IMAGE=<image to test>
make redeploy_all INSTALLER_IMAGE=<image to test> CONTROLLER_IMAGE=<image to test>
or
export PULL_SECRET='<pull secret JSON>'; make redeploy_all_with_install INSTALLER_IMAGE=<image to test> CONTROLLER_IMAGE=<image to test>
make redeploy_all INSTALLER_IMAGE=<image to test> AGENT_DOCKER_IMAGE=<image to test> SERVICE=<image to test>
or
export PULL_SECRET='<pull secret JSON>'; make redeploy_all_with_install INSTALLER_IMAGE=<image to test> CONTROLLER_IMAGE=<image to test> AGENT_DOCKER_IMAGE=<image to test> SERVICE=<image to test>
Assisted-test-infra builds an image that includes all the prerequisites needed to work with this repository.
make image_build
In case you would like to build the image with a different assisted-service client:
make image_build SERVICE=<assisted service image URL>
To test with Authentication, the following additional environment variables are required:
export AUTH_TYPE=rhsso
export OCM_CLIENT_ID=<SSO Service Account Name>
export OCM_CLIENT_SECRET=<SSO Service Account Password>
export OCM_BASE_URL=https://api.openshift.com
export OFFLINE_TOKEN=<User token from https://cloud.redhat.com/openshift/token>
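For reference, OFFLINE_TOKEN is exchanged for short-lived JWTs against the SSO URL. A manual exchange, sketched under the assumption of the public sso.redhat.com endpoint and the cloud-services client ID, looks like:

```shell
# Sketch: exchange the offline token for an access token.
# Endpoint and client_id are assumptions based on the public Red Hat SSO;
# assisted-service performs the equivalent exchange internally.
curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
    -d grant_type=refresh_token \
    -d client_id=cloud-services \
    -d refresh_token="$OFFLINE_TOKEN"
```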
- UI is not available when Authentication is enabled.
- The PULL_SECRET variable should be taken from the same Red Hat cloud environment as defined in OCM_BASE_URL (integration, stage or production).
To test the single node bootstrap-in-place flow with the assisted service:
export PULL_SECRET='<pull secret JSON>'
export OPENSHIFT_INSTALL_RELEASE_IMAGE=<relevant release image if needed>
export NUM_MASTERS=1
make redeploy_all_with_install
or, if the service is already up:
make redeploy_nodes_with_install
To test the single node bootstrap-in-place flow with the assisted service and IPv6:
export PULL_SECRET='<pull secret JSON>'
export OPENSHIFT_INSTALL_RELEASE_IMAGE=<relevant release image if needed>
export NUM_MASTERS=1
make run_full_flow IPv6=yes IPv4=no PROXY=yes VIP_DHCP_ALLOCATION=no
To test on-prem in the e2e flow, two additional environment variables need to be set:
export DEPLOY_TARGET=onprem
export ASSISTED_SERVICE_HOST=<fqdn-or-ip>
Setting DEPLOY_TARGET to "onprem" configures assisted-test-infra to deploy the assisted-service using a pod on your local host.
ASSISTED_SERVICE_HOST defines where the assisted-service will be deployed. For "onprem" deployments, set it to the FQDN or IP address of the host.
Optionally, you can also provide OPENSHIFT_INSTALL_RELEASE_IMAGE and PUBLIC_CONTAINER_REGISTRIES:
export OPENSHIFT_INSTALL_RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64
export PUBLIC_CONTAINER_REGISTRIES=quay.io
If you do not export the optional variables, the defaults specified in assisted-service/onprem-environment are used.
Then run the same commands described in the instructions above to execute the test.
To run the full flow:
make all
To cleanup after the full flow:
make destroy