Runtime Component Operator v1.2.0+

This generic Operator can deploy any application image and can be imported into any runtime-specific Operator as a library of application capabilities. This architecture ensures compatibility and consistency among all runtime Operators, allowing everyone to benefit from the functionality added in this project.

System requirements

Your environment must meet the cluster, sizing, persistent storage, and network requirements for the Runtime Component operator.

OpenShift Container Platform Requirements

If you are installing the Runtime Component operator on a Red Hat OpenShift cluster, your environment must meet the Red Hat OpenShift Container Platform (OCP) cluster requirements.

OCP requirements

The Runtime Component operator requires an OCP version 4.16, 4.15, 4.14, or 4.12 cluster on the Linux x86_64 (amd64), Linux on Power (ppc64le), or Linux on IBM Z (s390x) platform, with cluster-admin permissions. To manage OCP projects with OCP CLI (oc) commands, the installation also requires the OCP CLI.

By default, certificates are generated by using the OpenShift certificate manager. If you want to use the manageTLS capability and use a different certificate manager (such as cert-manager) to generate and manage the certificates, you must install it.

Kubernetes Requirements

If you are installing a Runtime Component operator on a Kubernetes cluster, your environment must meet the Kubernetes cluster requirements.

Kubernetes requirements

The Runtime Component operator requires a Kubernetes version 1.29, 1.28, 1.27, 1.26, or 1.25 cluster on the Linux x86_64 (amd64), Linux on Power (ppc64le), or Linux on IBM Z (s390x) platform, with cluster-admin permissions.

If you plan to use Operator Lifecycle Manager (OLM), it must be installed on your cluster.

If you want to use the manageTLS capability, you must have a certificate manager (such as cert-manager) installed.

Before you can use the Ingress resource to expose your application, you must install an ingress controller such as Nginx or Traefik.

Sizing Requirements

Your environment must meet sizing requirements for Runtime Component operator.

Runtime Component operator sizing

Table 1. Operator sizing requirements

Project: Runtime Component operator
CPU request (cores): 0.2 (limit: 0.4)
Memory request (Mi): 128 (limit: 1024)
Disk space (Gi): N/A
Notes: Applications that are deployed and managed by the operator have their own resource requests and limits as specified in the Runtime Component operator custom resources.

Note
The values in the tables do not include any requirements inherent in the storage provider. The storage infrastructure might require more resources (for example, CPU or memory) from the worker nodes.

Storage requirements

No storage requirements exist for Runtime Component operator.

You are responsible for configuring and managing storage for any applications that you deploy with Runtime Component operator.

Network requirements

Your environment must meet network requirements for Runtime Component operator.

Table 2. External network requirements

Hostnames: icr.io, cp.icr.io
Ports and Protocols: 443 (HTTP over TLS)
Purpose: These domains serve the container image registry that is used as part of the Runtime Component operator installation. The registry is also used when Runtime Component operator and dependency software levels are updated.

Operator installation

Use the instructions for one of the releases to install the operator into a Kubernetes cluster.

The Runtime Component Operator is available for the following CPU architectures:

  • Linux® x86_64 (amd64)

  • Linux® on IBM® Z (s390x)

  • Linux® on Power® (ppc64le)

The Runtime Component Operator can be installed to:

  • watch its own namespace

  • watch another namespace

  • watch all namespaces in the cluster

Appropriate cluster roles and bindings are required to watch another namespace, or to watch all namespaces.

Note
The Runtime Component Operator can interact only with resources that it is given permission to interact with through role-based access control (RBAC). Some of the operator features described in this document require interacting with resources in other namespaces. In that case, the operator must be installed with the correct ClusterRole definitions.

Overview

The architecture of the Runtime Component Operator follows the basic controller pattern: the Operator container with the controller is deployed into a Pod and listens for incoming resources with Kind: RuntimeComponent. Creating a RuntimeComponent custom resource (CR) triggers the Runtime Component Operator to create, update or delete Kubernetes resources needed by the application to run on your cluster.

Each instance of RuntimeComponent CR represents the application to be deployed on the cluster:

apiVersion: rc.app.stacks/v1
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    type: ClusterIP
    port: 9080
  expose: true
  statefulSet:
    storage:
      size: 2Gi
      mountPath: "/logs"

Configuration

Custom Resource Definition (CRD)

The following table lists the configurable fields of the RuntimeComponent CRD. For the complete OpenAPI v3 representation of these values, view the files under /deploy/releases/<operator-version>/kubectl/runtime-component-crd.yaml. For example, see the RuntimeComponent CRD for release 0.8.2.

Each RuntimeComponent CR must at least specify the .spec.applicationImage field. Specifying other fields is optional.

Table 3. Runtime Component Resource Definition

affinity

Configures pods to run on specific nodes. For examples, see Limit a pod to run on specified nodes.

affinity.architecture

An array of architectures to be considered for deployment. Their position in the array indicates preference.

affinity.nodeAffinity

A YAML object that represents a NodeAffinity.

affinity.nodeAffinityLabels

A YAML object that contains a set of required labels and their values.

affinity.podAffinity

A YAML object that represents a PodAffinity.

affinity.podAntiAffinity

A YAML object that represents a PodAntiAffinity.
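
As an illustration, the affinity fields might be combined as in the following sketch. The image name and the disktype node label are hypothetical; substitute values that exist in your cluster:

```yaml
apiVersion: rc.app.stacks/v1
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  affinity:
    # Preferred architectures, most preferred first
    architecture:
      - amd64
      - s390x
    # Require nodes carrying this label (hypothetical label)
    nodeAffinityLabels:
      disktype: ssd
```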

applicationImage

The absolute name of the image to be deployed, containing the registry and the tag. On OpenShift, it can also be set to <project name>/<image stream name>[:tag] to reference an image from an image stream. If <project name> and <tag> values are not defined, they default to the namespace of the CR and the value of latest, respectively.

applicationName

The name of the application this resource is part of. If not specified, it defaults to the name of the CR.

applicationVersion

The current version of the application. The label app.kubernetes.io/version is added to all resources when the version is defined.

autoscaling

Configures the desired resource consumption of pods. For examples, see Configure multiple application instances for high availability.

autoscaling.maxReplicas

Required field for autoscaling. Upper limit for the number of pods that can be set by the autoscaler. It cannot be lower than the minimum number of replicas.

autoscaling.minReplicas

Lower limit for the number of pods that can be set by the autoscaler.

autoscaling.targetCPUUtilizationPercentage

Target average CPU utilization (represented as a percentage of requested CPU) over all the pods.
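
The autoscaling fields might be combined as in the following sketch. Because autoscaling requires maxReplicas and resources.requests.cpu, both are set here; the specific values are illustrative only:

```yaml
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  resources:
    requests:
      cpu: 500m        # required for CPU-based autoscaling
  autoscaling:
    minReplicas: 2
    maxReplicas: 5     # required; must not be lower than minReplicas
    targetCPUUtilizationPercentage: 70
```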

createKnativeService

A Boolean to toggle the creation of Knative resources and use of Knative serving. To create a Knative service, set the parameter to true. For examples, see Deploy serverless applications with Knative and Expose applications externally.

deployment

The desired state and lifecycle of the deployment and resources owned by the deployment.

deployment.annotations

Annotations to be added only to the deployment and resources owned by the deployment.

deployment.updateStrategy

A field to specify the update strategy of the deployment. For examples, see updateStrategy

deployment.updateStrategy.type

The type of update strategy of the deployment. The type can be set to RollingUpdate or Recreate, where RollingUpdate is the default update strategy.

dns

DNS settings for the application pods. For more information, see Configure DNS

dns.config

The DNS Config for the application pods.

dns.policy

The DNS Policy for the application pod. Defaults to ClusterFirst.

disableServiceLinks

Disables injection of information about services into the application pod as environment variables. The default value for this field is false.

env

An array of environment variables following the format of {name, value}, where value is a simple string. It may also follow the format of {name, valueFrom}, where valueFrom refers to a value in a ConfigMap or Secret resource. For examples, see Set environment variables for an application container and Override console logging environment variable default values.

envFrom

An array of references to ConfigMap or Secret resources containing environment variables. Keys from ConfigMap or Secret resources become environment variable names in your container. For examples, see Set environment variables for an application container.
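
A sketch combining both formats follows. The variable names and the app-config and db-credentials resources are hypothetical:

```yaml
spec:
  env:
    - name: LOG_LEVEL
      value: info                 # simple string value
    - name: DB_PASSWORD
      valueFrom:                  # value pulled from a Secret
        secretKeyRef:
          name: db-credentials
          key: password
  envFrom:
    - configMapRef:
        name: app-config          # all keys become environment variables
```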

expose

A boolean that toggles the external exposure of this deployment via a Route or a Knative Route resource.

initContainers

The list of Init Container definitions.

manageTLS

A boolean to toggle automatic certificate generation and mounting TLS secret into the pod. The default value for this field is true.

monitoring

Specifies parameters for Service Monitor. For examples, see Monitor resources and Specify multiple service ports.

monitoring.endpoints

A YAML snippet representing an array of Endpoint component from ServiceMonitor.

monitoring.labels

Labels to set on ServiceMonitor.
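
A minimal monitoring sketch might look as follows; the release label and metrics path are assumptions that depend on how your Prometheus instance selects ServiceMonitor resources:

```yaml
spec:
  monitoring:
    labels:
      release: prometheus     # hypothetical label your Prometheus selects on
    endpoints:
      - interval: 30s
        path: /metrics
```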

networkPolicy

Defines the network policy. For examples, see Allowing or limiting incoming traffic.

networkPolicy.disable

A Boolean to disable the creation of the network policy. The default value is false. By default, network policies for an application are created and limit incoming traffic.

networkPolicy.fromLabels

The labels of one or more pods from which incoming traffic is allowed.

networkPolicy.namespaceLabels

The labels of namespaces from which incoming traffic is allowed.
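
For illustration, incoming traffic might be limited to specific pods and namespaces as in this sketch; the label values are hypothetical:

```yaml
spec:
  networkPolicy:
    # Allow traffic only from pods carrying this label
    fromLabels:
      app.kubernetes.io/name: frontend
    # ...and only from namespaces carrying this label
    namespaceLabels:
      kubernetes.io/metadata.name: my-namespace
```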

probes

Defines health checks on an application container to determine whether it is alive or ready to receive traffic. For examples, see Configure probes.

probes.liveness

A YAML object configuring the Kubernetes liveness probe that controls when Kubernetes needs to restart the pod.

probes.readiness

A YAML object configuring the Kubernetes readiness probe that controls when the pod is ready to receive traffic.

probes.startup

A YAML object configuring the Kubernetes startup probe that controls when Kubernetes needs to start up the pod on its first initialization.
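
The three probe types might be configured together as in this sketch. The health endpoints and port are hypothetical and depend on what your application exposes:

```yaml
spec:
  probes:
    startup:                    # gates liveness/readiness until first start completes
      httpGet:
        path: /health/started
        port: 9080
      failureThreshold: 30
      periodSeconds: 10
    liveness:                   # restart the pod when this fails
      httpGet:
        path: /health/live
        port: 9080
    readiness:                  # remove the pod from service endpoints when this fails
      httpGet:
        path: /health/ready
        port: 9080
```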

pullPolicy

The policy used when pulling the image. One of Always, Never, or IfNotPresent.

pullSecret

If using a registry that requires authentication, the name of the secret containing credentials.

replicas

The static number of desired replica pods that run simultaneously.

resources.limits.cpu

The upper limit of CPU cores. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core).

resources.limits.memory

The memory upper limit in bytes. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

resources.requests.cpu

The minimum required CPU cores. Specify integers, fractions (for example, 0.5), or millicore values (for example, 100m, where 100m is equivalent to 0.1 core). Required field for autoscaling.

resources.requests.memory

The minimum memory in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
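
Putting the four resource fields together, a sketch with illustrative values:

```yaml
spec:
  resources:
    requests:
      cpu: 500m        # 0.5 core minimum
      memory: 256Mi
    limits:
      cpu: "1"         # whole cores can be quoted integers
      memory: 512Mi
```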

route.annotations

Annotations to be added to the Route.

route.certificateSecretRef

The name of a secret that already contains the TLS key, certificate, and CA to be used in the Route. It can also contain the destination CA certificate. The following keys are valid in the secret: ca.crt, destCA.crt, tls.crt, and tls.key.

route.host

Hostname to be used for the Route.

route.insecureEdgeTerminationPolicy

The HTTP traffic policy with TLS enabled. Can be one of Allow, Redirect, or None.

route.path

Path to be used for the Route.

route.pathType

Path type to be used. Required field for Ingress. See Ingress path types.

route.termination

The TLS termination policy. Can be one of edge, reencrypt, or passthrough.
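
The route fields might be combined as in this sketch; the hostname and secret name are hypothetical:

```yaml
spec:
  expose: true
  route:
    host: my-app.example.com          # hypothetical hostname
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect
    certificateSecretRef: my-app-route-tls   # hypothetical existing secret
```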

securityContext

A security context to control privilege and permission settings for the application container. For examples, see Set privileges and permissions for a pod or container. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. For examples, see Configure a Security Context for a Pod or Container.

securityContext.allowPrivilegeEscalation

A Boolean that controls whether a process can gain more privileges than its parent process. This Boolean controls whether the no_new_privs flag is set on the container process. AllowPrivilegeEscalation is always true when the container runs as privileged or has the CAP_SYS_ADMIN capability.

securityContext.capabilities

The capabilities to add or drop when containers are run. Defaults to the default set of capabilities that the container runtime grants.

securityContext.capabilities.add

An array of added capabilities of POSIX capabilities type.

securityContext.capabilities.drop

An array of removed capabilities of POSIX capabilities type.

securityContext.privileged

A Boolean to specify whether to run a container in privileged mode. Processes in privileged containers are equivalent to root on the host. The default is false.

securityContext.procMount

The type of proc mount to use for the containers. The default is DefaultProcMount, which uses the container runtime defaults for read-only paths and masked paths. To use procMount, the ProcMountType feature flag must be enabled.

securityContext.readOnlyRootFilesystem

A Boolean to specify whether this container has a read-only root file system. The default is false.

securityContext.runAsGroup

The GID to run the entrypoint of the container process. If unset, runAsGroup uses the runtime default. The value can be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the SecurityContext value takes precedence.

securityContext.runAsNonRoot

A Boolean that specifies whether the container must run as a nonroot user. If true, the kubelet validates the image at run time to ensure that it does not run as UID 0 (root), and fails to start the container if it does. If unset or false, the validation is not performed. The value can be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the SecurityContext value takes precedence.

securityContext.runAsUser

The UID to run the entrypoint of the container process. If unset, the default is the user that is specified in image metadata. The value can be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the SecurityContext value takes precedence.

securityContext.seLinuxOptions

The SELinux context to be applied to the container. Its properties include level, role, type, and user. If unspecified, the container runtime allocates a random SELinux context for each container. The value can be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the SecurityContext value takes precedence.

securityContext.seccompProfile

The seccomp options to use by this container. If seccomp options are set at both the pod and container level, the container options override the pod options.

securityContext.seccompProfile.localhostProfile

A profile that is defined in a file on the node. The profile must be preconfigured on the node to work. Specify a descending path, relative to the kubelet configured seccomp profile location. Only set localhostProfile if type is Localhost.

securityContext.seccompProfile.type

(Required) The kind of seccomp profile to use. Valid options are Localhost (use a profile that is defined in a file on the node), RuntimeDefault (use the container runtime default profile), and Unconfined (use no profile).

securityContext.windowsOptions

The Windows specific settings to apply to all containers. If unset, the options from the PodSecurityContext are used. If set in both SecurityContext and PodSecurityContext, the SecurityContext value takes precedence. The windowsOptions properties include gmsaCredentialSpec, gmsaCredentialSpecName, hostProcess, and runAsUserName.

service

Configures parameters for the network service of pods. For an example, see Specify multiple service ports.

service.annotations

Annotations to be added to the service.

service.bindable

A Boolean to toggle whether the operator exposes the application as a bindable service. Defaults to false. For examples, see Bind applications with operator-managed backing services.

service.certificate

Configures the TLS certificates for the service. The annotations property is available for this parameter. Set annotations on the .spec.service.certificate.annotations parameter to add them to the certificate.

service.certificateSecretRef

The name of a secret that already contains the TLS key, certificate, and CA to be mounted in the pod. The following keys are valid in the secret: ca.crt, tls.crt, and tls.key.

service.nodePort

The node proxies this port into your service. Note that after this port is set to a non-zero value, it cannot be reset to zero.

service.port

The port exposed by the container.

service.ports

An array consisting of service ports.

service.portName

The name for the port exposed by the container.

service.targetPort

The port that the operator assigns to containers inside pods. Defaults to the value of service.port.

service.type

The Kubernetes Service Type.
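
A sketch combining the service fields, with an additional port in the ports array; the port numbers are illustrative:

```yaml
spec:
  service:
    type: ClusterIP
    port: 9080          # primary port exposed by the service
    targetPort: 9080    # container port; defaults to service.port
    portName: http
    ports:              # additional service ports
      - port: 9443
        name: https
```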

serviceAccountName

Deprecated. Use serviceAccount.name instead.

serviceAccount

The service account to use for application deployment. If a service account name is not specified, a service account is automatically created. For examples, see Configure a service account.

serviceAccount.name

Name of the service account to use for deploying the application.

serviceAccount.mountToken

A Boolean to toggle whether the service account’s token should be mounted in the application pods. If unset or true, the token will be mounted.

sidecarContainers

The list of sidecar containers. These are additional containers to be added to the pods. Note: Sidecar containers should not be named app.

statefulSet

The desired state and lifecycle of stateful applications. For examples, see Persist resources.

statefulSet.annotations

Annotations to be added only to the StatefulSet and resources owned by the StatefulSet.

statefulSet.storage.mountPath

The directory inside the container where this persisted storage is mounted.

statefulSet.storage.size

A convenience field to set the size of the persisted storage. It can be overridden by the storage.volumeClaimTemplate property. The operator creates a StatefulSet instead of a Deployment when storage is configured. For examples, see Persist resources.

statefulSet.storage.volumeClaimTemplate

A YAML object representing a volumeClaimTemplate component of a StatefulSet.

statefulSet.updateStrategy

A field to specify the update strategy of the StatefulSet. For examples, see updateStrategy

statefulSet.updateStrategy.type

The type of update strategy of the StatefulSet. The type can be set to RollingUpdate or OnDelete, where RollingUpdate is the default update strategy.
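
For illustration, configuring storage together with an update strategy might look as follows; configuring storage causes the operator to create a StatefulSet instead of a Deployment:

```yaml
spec:
  statefulSet:
    updateStrategy:
      type: RollingUpdate    # default; OnDelete is the alternative
    storage:
      size: 2Gi
      mountPath: "/logs"     # where the persistent volume is mounted
```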

tolerations

Tolerations to be added to application pods. Tolerations allow the scheduler to schedule pods on nodes with matching taints. For more information, see Configure tolerations.

volumeMounts

A YAML object representing a pod volumeMount. For examples, see Persist Resources.

volumes

A YAML object representing a pod volume.
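
A sketch pairing volumes with volumeMounts; the app-config ConfigMap and mount path are hypothetical:

```yaml
spec:
  volumes:
    - name: config-volume
      configMap:
        name: app-config        # hypothetical ConfigMap
  volumeMounts:
    - name: config-volume       # must match a volume name above
      mountPath: /etc/app/config
      readOnly: true
```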

Basic usage

To deploy a Docker image that contains a runtime component to a Kubernetes environment, you can use the following CR:

apiVersion: rc.app.stacks/v1
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0

The applicationImage value must be defined in the RuntimeComponent CR. On OpenShift, the operator tries to find an image stream name with the applicationImage value. The operator falls back to the registry lookup if it is not able to find any image stream that matches the value. If you want to distinguish an image stream called my-company/my-app (project: my-company, image stream name: my-app) from the Docker Hub my-company/my-app image, you can use the full image reference as docker.io/my-company/my-app.

To get information on the deployed CR, use either of the following:

oc get runtimecomponent my-app
oc get comp my-app

The short name for runtimecomponent is comp.

Viewing operator application status

An application administrator can view the status of an application that is deployed in a container. To get information about the deployed custom resource (CR), use a CLI or the Red Hat OpenShift console.

Status types for .status.condition

The status types for the .status.condition parameter in the RuntimeComponent CR are Ready, ResourcesReady, and Reconciled.

Reconciled

  • Indicates whether the current version of the operator successfully processed the configurations in the CR.

ResourcesReady

  • Indicates whether the application resources created and managed by the operator are ready.

Ready

  • Indicates the overall status of the application. If true, the application configuration was reconciled and its resources are in a ready state.

Viewing status with the CLI

To use the CLI to get information about a deployed CR, run a kubectl get or oc get command.

To run kubectl commands, you need the Kubernetes command line tool or the Red Hat OpenShift command-line interface (CLI). To run oc commands, you need the Red Hat OpenShift CLI.

In the following get commands, replace my-app with your CR name. Run any one of the commands. comp and comps are short names for runtimecomponent and runtimecomponents.

  • Run any of the following kubectl get commands.

kubectl get comp my-app
kubectl get comps my-app
kubectl get runtimecomponent my-app
  • Run any of the following oc get commands.

oc get comp my-app
oc get comps my-app
oc get runtimecomponent my-app

The results of the command resemble the following.

NAME     IMAGE                       EXPOSED   RECONCILED   RESOURCESREADY   READY   AGE
my-app   quay.io/my-repo/my-app:1.0            True         True             True    18m

The value in the READY column is True when the application is successfully installed. If the value in the READY column is not True, see Troubleshooting Runtime Component operators.

Viewing status with the Red Hat OpenShift console

To use the Red Hat OpenShift console to get information about a deployed CR, view the deployed RuntimeComponent instance and inspect the .status section.

status:
  conditions:
    - lastTransitionTime: '2022-05-10T15:59:04Z'
      status: 'True'
      type: Reconciled
    - lastTransitionTime: '2022-05-10T15:59:16Z'
      message: 'Deployment replicas ready: 3/3'
      reason: MinimumReplicasAvailable
      status: 'True'
      type: ResourcesReady
    - lastTransitionTime: '2022-05-10T15:59:16Z'
      message: Application is reconciled and resources are ready.
      status: 'True'
      type: Ready
  imageReference: 'quay.io/my-repo/my-app:1.0'
  references:
    svcCertSecretName: my-app-svc-tls-ocp
  versions:
    reconciled: 1.0.0
  observedGeneration: 1

If the .status.conditions.type Ready type does not have a status of True, see Troubleshooting Runtime Component operators.

The value of the .status.versions.reconciled parameter is the version of the operand that is deployed into the cluster after the reconcile loop completes.

At the end of the reconcile loop, the operator will also update the .status.observedGeneration parameter to match the value of .metadata.generation.

Viewing reconciliation frequency in the status

The operator controller periodically runs reconciliation to match the current state to the desired state so that the managed resources remain functional. The Runtime Component operator allows increasing the reconciliation interval to reduce the controller’s workload when the status remains unchanged. The reconciliation frequency can be configured with the Operator ConfigMap settings.

The value of the .status.conditions.unchangedConditionCount parameter represents the number of reconciliation cycles during which the condition status type remains unchanged. Each time this value becomes an even number, the reconciliation interval increases according to the configurations in the ConfigMap. The reconciliation interval increase feature is enabled by default but can be disabled if needed.

The .status.reconcileInterval parameter represents the current reconciliation interval of the instance. The parameter increases by the increase percentage, which is specified in the ConfigMap, based on the current interval. The calculation uses the base reconciliation interval, the increase percentage, and the count of unchanged status conditions, with the increases compounding over time. The maximum reconciliation interval is 240 seconds for repeated failures and 120 seconds for repeated successful status conditions.

Operator ConfigMap

The ConfigMap named runtime-component-operator is used for configuring the managed resources. It is created once when the operator starts and is located in the namespace where the operator is installed.

For more information on ConfigMap configurations, see the Open Liberty Operator’s "Operator ConfigMap" section. Any references to Open Liberty Operator-specific resources can be mapped to the Runtime Component Operator by using the following table.

Table 4. Open Liberty Operator Name Mapping

Data        Open Liberty Operator     Runtime Component Operator
Kind        OpenLibertyApplication    RuntimeComponent
ConfigMap   open-liberty-operator     runtime-component-operator

Operator configuration examples

Browse the RuntimeComponent examples to learn how to use custom resource (CR) parameters to configure your operator. The complete component documentation can be found in the Open Liberty Operator’s "Common Component" section. Any references to Open Liberty Operator-specific resources can be mapped to the Runtime Component Operator by using the following table.

Table 5. Open Liberty Operator Name Mapping

Data                Open Liberty Operator                                Runtime Component Operator
Api Version         apps.openliberty.io/v1beta2                          rc.app.stacks/v1
Kind                OpenLibertyApplication                               RuntimeComponent
ConfigMap           open-liberty-operator                                runtime-component-operator
ClusterRole Prefix  openlibertyapplications.apps.openliberty.io-v1beta2  runtimecomponents.rc.app.stacks-v1
Resource Prefix     olo-*                                                rco-*

Day-2 Operations

You can perform day-2 operations by using the RuntimeOperation custom resource (CR), which allows you to specify commands to run on a container within a pod.

Table 6. Configurable Fields

podName

The name of the Pod, which must be in the same namespace as the RuntimeOperation CR.

containerName

The name of the container within the Pod. The default value is the name of the main container, which is app.

command

Command to run. The command doesn’t run in a shell.

Example:

apiVersion: rc.app.stacks/v1
kind: RuntimeOperation
metadata:
  name: example-runtime-operation
spec:
  # Specify the name of the pod. The pod must be in the same namespace as this RuntimeOperation CR.
  podName: Specify_Pod_Name_Here
  # Specify the name of the container. The default value is the name of the main container, which is `app`.
  containerName: app
  # Run the following command. The command does not run in a shell.
  command:
    - /bin/sh
    - '-c'
    - echo "Hello" > /tmp/runtime-operation.log

You can check the status of a runtime operation by using the status field inside the CR YAML file. You can also run the oc get runtimeop -o wide command to see the status of all operations in the current namespace.

The operator retries the RuntimeOperation when it fails to start because the specified pod or container is not found or the pod is not in a running state. The retry interval doubles with each failed attempt.

Note
The RuntimeOperation CR must be created in the same namespace as the pod to operate on. After the RuntimeOperation CR starts, the CR cannot be reused for more operations. A new CR needs to be created for each day-2 operation. The operator can process only one RuntimeOperation instance at a time. Long-running commands can cause other runtime operations to wait before they start.

Troubleshooting

See the troubleshooting guide for information on how to investigate and resolve deployment problems.