This generic Operator is capable of deploying any application image and can be imported into any runtime-specific Operator as a library of application capabilities. This architecture ensures compatibility and consistency between all runtime Operators, allowing everyone to benefit from the functionality added in this project.
This documentation refers to the latest codebase. For documentation and samples of older releases, please check out the main releases page and navigate to the corresponding tag.
Important: This user guide only applies to operator versions 0.8.0, 0.8.1 and 0.8.2. For operator versions 1.2.0 and above, refer to this user guide. For operator versions 0.7.1 and below, refer to this user guide.
Important: If you are upgrading from Runtime Component Operator versions 0.7.1 and below, note that the API version of the RuntimeComponent and RuntimeOperation custom resources (CRs) changed. Custom resources with apiVersion: app.stacks/v1beta1 are not handled by Runtime Component Operator versions 0.8.0 and above. You must delete existing custom resources with apiVersion: app.stacks/v1beta1 and create new custom resources with apiVersion: rc.app.stacks/v1beta2.
Use the instructions for one of the releases to install the operator into a Kubernetes cluster. The Runtime Component Operator can be installed to:
- watch its own namespace
- watch another namespace
- watch all namespaces in the cluster
Appropriate cluster roles and bindings are required to watch another namespace, or to watch all namespaces.
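As a rough sketch of what cluster-wide access can look like, the following ClusterRoleBinding grants a ClusterRole to the operator's service account. All names here (runtime-component-operator, rco-ns) are illustrative assumptions and must match your actual installation:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: runtime-component-operator-watch-all
subjects:
  # The operator's service account; name and namespace are assumed.
  - kind: ServiceAccount
    name: runtime-component-operator
    namespace: rco-ns
roleRef:
  # A ClusterRole carrying the operator's permissions (assumed name).
  kind: ClusterRole
  name: runtime-component-operator
  apiGroup: rbac.authorization.k8s.io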
Note: The Runtime Component Operator can only interact with resources it is given permission to interact with through Role-based access control (RBAC). Some of the operator features described in this document require interacting with resources in other namespaces. In that case, the operator must be installed with the correct ClusterRole definitions.
The architecture of the Runtime Component Operator follows the basic controller pattern: the Operator container with the controller is deployed into a Pod and listens for incoming resources with Kind: RuntimeComponent. Creating a RuntimeComponent custom resource (CR) triggers the Runtime Component Operator to create, update or delete the Kubernetes resources needed by the application to run on your cluster. Each instance of the RuntimeComponent CR represents the application to be deployed on the cluster:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
service:
type: ClusterIP
port: 9080
expose: true
statefulSet:
storage:
size: 2Gi
mountPath: "/logs"
The following table lists the configurable fields of the RuntimeComponent CRD. For the complete OpenAPI v3 representation of these values, see the RuntimeComponent CRD. Each RuntimeComponent CR must at least specify the .spec.applicationImage field. Specifying other fields is optional.
Field | Description |
---|---|
`version` | The current version of the application. Label `app.kubernetes.io/version` will be added to all resources when the version is defined. |
`serviceAccountName` | The name of the OpenShift service account to be used during deployment. |
`applicationImage` | The Docker image name to be deployed. On OpenShift, it can also be set to `<project name>/<image stream name>[:tag]` to reference an image stream. |
`applicationName` | The name of the application this resource is part of. If not specified, it defaults to the name of the CR. |
`pullPolicy` | The policy used when pulling the image. One of: `Always`, `Never`, and `IfNotPresent`. |
`pullSecret` | If using a registry that requires authentication, the name of the secret containing credentials. |
`initContainers` | The list of Init Container definitions. |
`sidecarContainers` | The list of sidecar containers. These are additional containers to be added to the pods. |
`service.bindable` | A boolean to toggle whether the operator exposes the application as a bindable service. The default value for this field is `false`. |
`service.port` | The port exposed by the container. |
`service.targetPort` | The port that the operator assigns to containers inside pods. Defaults to the value of `service.port`. |
`service.portName` | The name for the port exposed by the container. |
`service.ports` | An array consisting of service ports. |
`service.type` | The Kubernetes Service Type. |
`service.nodePort` | Node proxies this port into your service. Note that once this port is set to a non-zero value, it cannot be reset to zero. |
`service.annotations` | Annotations to be added to the service. |
`service.certificateSecretRef` | A name of a secret that already contains TLS key, certificate and CA to be mounted in the pod. The following keys are valid in the secret: `ca.crt`, `tls.crt`, and `tls.key`. |
`createKnativeService` | A boolean to toggle the creation of Knative resources and usage of Knative serving. |
`expose` | A boolean that toggles the external exposure of this deployment via a Route or a Knative Route resource. |
`deployment.updateStrategy` | A field to specify the update strategy of the deployment. For more information, see updateStrategy. |
`deployment.updateStrategy.type` | The type of update strategy of the deployment. The type can be set to `RollingUpdate` or `Recreate`. |
`deployment.annotations` | Annotations to be added only to the deployment and resources owned by the deployment. |
`statefulSet.updateStrategy` | A field to specify the update strategy of the StatefulSet. For more information, see updateStrategy. |
`statefulSet.updateStrategy.type` | The type of update strategy of the StatefulSet. The type can be set to `RollingUpdate` or `OnDelete`. |
`statefulSet.annotations` | Annotations to be added only to the StatefulSet and resources owned by the StatefulSet. |
`statefulSet.storage.size` | A convenient field to set the size of the persisted storage. Can be overridden by the `statefulSet.storage.volumeClaimTemplate` field. |
`statefulSet.storage.mountPath` | The directory inside the container where this persisted storage will be bound to. |
`statefulSet.storage.volumeClaimTemplate` | A YAML object that represents a volumeClaimTemplate component of a `StatefulSet`. |
`replicas` | The static number of desired replica pods that run simultaneously. |
`autoscaling.maxReplicas` | Required field for autoscaling. Upper limit for the number of pods that can be set by the autoscaler. It cannot be lower than the minimum number of replicas. |
`autoscaling.minReplicas` | Lower limit for the number of pods that can be set by the autoscaler. |
`autoscaling.targetCPUUtilizationPercentage` | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. |
`resources.requests.cpu` | The minimum required CPU core. Specify integers, fractions (e.g. 0.5), or millicore values (e.g. 100m, where 100m is equivalent to .1 core). Required field for autoscaling. |
`resources.requests.memory` | The minimum memory in bytes. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
`resources.limits.cpu` | The upper limit of CPU core. Specify integers, fractions (e.g. 0.5), or millicore values (e.g. 100m, where 100m is equivalent to .1 core). |
`resources.limits.memory` | The memory upper limit in bytes. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
`env` | An array of environment variables following the format of the Kubernetes `EnvVar` type. |
`envFrom` | An array of references to `ConfigMap` or `Secret` resources containing environment variables. |
`probes.readiness` | A YAML object configuring the Kubernetes readiness probe that controls when the pod is ready to receive traffic. |
`probes.liveness` | A YAML object configuring the Kubernetes liveness probe that controls when Kubernetes needs to restart the pod. |
`probes.startup` | A YAML object configuring the Kubernetes startup probe that controls when Kubernetes needs to start up the pod on its first initialization. |
`volumes` | A YAML object that represents a pod volume. |
`volumeMounts` | A YAML object that represents a pod volumeMount. |
`monitoring.labels` | Labels to set on ServiceMonitor. |
`monitoring.endpoints` | A YAML snippet representing an array of Endpoint components from ServiceMonitor. |
`route.annotations` | Annotations to be added to the Route. |
`route.host` | Hostname to be used for the Route. |
`route.path` | Path to be used for the Route. |
`route.pathType` | Path type to be used. Required field for Ingress. See Ingress path types. |
`route.termination` | TLS termination policy. Can be one of `edge`, `reencrypt` and `passthrough`. |
`route.insecureEdgeTerminationPolicy` | HTTP traffic policy with TLS enabled. Can be one of `Allow`, `Redirect` and `None`. |
`route.certificateSecretRef` | A name of a secret that already contains TLS key, certificate and CA to be used in the route. It can also contain the destination CA certificate. The following keys are valid in the secret: `ca.crt`, `destCA.crt`, `tls.crt`, and `tls.key`. |
`affinity.nodeAffinity` | A YAML object that represents a NodeAffinity. |
`affinity.nodeAffinityLabels` | A YAML object that contains a set of required labels and their values. |
`affinity.podAffinity` | A YAML object that represents a PodAffinity. |
`affinity.podAntiAffinity` | A YAML object that represents a PodAntiAffinity. |
`affinity.architecture` | An array of architectures to be considered for deployment. Their position in the array indicates preference. |
To deploy a Docker image that contains a runtime component to a Kubernetes environment, you can use the following CR:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
The applicationImage value must be defined in the RuntimeComponent CR. On OpenShift, the operator tries to find an image stream name with the applicationImage value. The operator falls back to the registry lookup if it is not able to find any image stream that matches the value. If you want to distinguish an image stream called my-company/my-app (project: my-company, image stream name: my-app) from the Docker Hub my-company/my-app image, you can use the full image reference as docker.io/my-company/my-app.
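For example, this sketch pins the image to Docker Hub explicitly so that it can never be mistaken for an image stream lookup (the repository name reuses the example above):

apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  # The docker.io/ prefix forces a registry lookup instead of an
  # image stream lookup on OpenShift.
  applicationImage: docker.io/my-company/my-app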
To get information on the deployed CR, use either of the following:
oc get runtimecomponent my-app
oc get comp my-app
The short name for runtimecomponent is comp.
To deploy an image from an image stream, use the following CR:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: my-namespace/my-image-stream:1.0
The previous example looks up the 1.0 tag from the my-image-stream image stream in the my-namespace project and populates the CR .status.imageReference field with the exact referenced image, similar to the following: image-registry.openshift-image-registry.svc:5000/my-namespace/my-image-stream@sha256:8a829d579b114a9115c0a7172d089413c5d5dd6120665406aae0600f338654d8. The operator watches the specified image stream and deploys new images as they become available for the specified tag.
To reference an image stream, the .spec.applicationImage field must follow the <project name>/<image stream name>[:<tag>] format. If <project name> or <tag> is not specified, the operator defaults the values to the namespace of the CR and the value latest, respectively. For example, the applicationImage: my-image-stream configuration is the same as the applicationImage: my-namespace/my-image-stream:latest configuration. The operator first tries to find an image stream name with the <project name>/<image stream name>[:<tag>] format and falls back to the registry lookup if it is not able to find any image stream that matches the value.
This feature is only available if you are running on OKD or OpenShift.
Note: The operator requires ClusterRole permissions if the image stream resource is in another namespace.
The operator can create a ServiceAccount resource when deploying a RuntimeComponent custom resource (CR). If .spec.serviceAccountName is not specified in a CR, the operator creates a service account with the same name as the CR (e.g. my-app).
Users can also specify serviceAccountName when they want to use a service account that they created manually. If applications require specific permissions but you still want the operator to create a ServiceAccount, you can manually create a role binding to bind a role to the service account created by the operator. To learn more about Role-based access control (RBAC), see the Kubernetes documentation.
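The following is a minimal sketch of such a role binding. It assumes a CR named my-app (so the operator created a service account named my-app) and a pre-existing Role named my-app-role, which is a hypothetical name used only for illustration:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-role-binding
subjects:
  # The service account the operator created for the my-app CR.
  - kind: ServiceAccount
    name: my-app
roleRef:
  # A hypothetical Role with the permissions the application needs.
  kind: Role
  name: my-app-role
  apiGroup: rbac.authorization.k8s.io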
By default, the operator adds the following labels into all resources created for a RuntimeComponent CR:
Label | Default | Description |
---|---|---|
`app.kubernetes.io/instance` | `metadata.name` | A unique name or identifier for this component. This cannot be modified. |
`app.kubernetes.io/name` | `metadata.name` | A name that represents this component. |
`app.kubernetes.io/managed-by` | `runtime-component-operator` | The tool being used to manage this component. |
`app.kubernetes.io/component` | `backend` | The type of component being created. See OpenShift documentation for full list. |
`app.kubernetes.io/part-of` | `applicationName` | The name of the higher-level application this component is a part of. Configure this if the component is not a standalone application. |
`app.kubernetes.io/version` | `version` | The version of the component. |
You can set new labels in addition to the pre-existing ones or overwrite them, excluding the app.kubernetes.io/instance label. To set labels, specify them in your CR as key/value pairs.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
labels:
my-label-key: my-label-value
spec:
applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of RuntimeComponent, any changes to its labels are applied only when one of the fields from spec is updated.
When running in OpenShift, there are additional labels and annotations that are standard on the platform. It is recommended that you overwrite the defaults where applicable and, using the above instructions, add any labels from the list that are not set by default. See the documentation for a full list.
To add new annotations into all resources created for a RuntimeComponent, specify them in your CR as key/value pairs. Annotations specified in the CR override any annotations specified on a resource, except for the annotations set on Service using .spec.service.annotations.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
annotations:
my-annotation-key: my-annotation-value
spec:
applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of RuntimeComponent, any changes to its annotations are applied only when one of the fields from spec is updated.
When running in OpenShift, there are additional annotations that are standard on the platform. It is recommended that you overwrite the defaults where applicable and, using the above instructions, add any annotations from the list that are not set by default. See the documentation for a full list.
You can set environment variables for your application container. To set environment variables, specify the env and/or envFrom fields in your CR. The environment variables can come directly from key/value pairs, ConfigMaps, or Secrets. The environment variables set using the env or envFrom fields override any environment variables specified in the container image.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
env:
- name: DB_NAME
value: "database"
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: db-config
key: db-port
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credential
key: adminUsername
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credential
key: adminPassword
envFrom:
- configMapRef:
name: env-configmap
- secretRef:
name: env-secrets
Use envFrom to define all data in a ConfigMap or a Secret as environment variables in a container. Keys from ConfigMap or Secret resources become environment variable names in your container.
Run multiple instances of your application for high availability using one of the following mechanisms:
- specify a static number of instances to run at all times using the .spec.replicas field, OR
- configure auto-scaling to create (and delete) instances based on resource consumption using the .spec.autoscaling field. The .spec.autoscaling.maxReplicas and .spec.resources.requests.cpu fields MUST be specified for auto-scaling; see the sketch after this list.
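The following is a minimal sketch of an auto-scaling configuration. The CPU request and the replica and utilization thresholds are illustrative values, not recommendations from this guide:

apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  resources:
    requests:
      # Required for auto-scaling: the CPU request that the target
      # utilization percentage is measured against.
      cpu: 500m
  autoscaling:
    minReplicas: 2
    # Required for auto-scaling: upper limit for the autoscaler.
    maxReplicas: 5
    targetCPUUtilizationPercentage: 60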
Runtime Component Operator allows you to provide multiple service ports in addition to the primary service port. The primary port is exposed from the container running the application, and its values are used to configure the Route (or Ingress), Service binding and Knative service. The primary service port can be configured using the .spec.service.port, .spec.service.targetPort, .spec.service.portName, and .spec.service.nodePort fields.
You can also specify an alternative port for the Service Monitor using the .spec.monitoring.endpoints field and specifying either the port or targetPort field; otherwise, it defaults to the primary port. The primary port is specified using the .spec.service.port field and the additional ports can be specified using the .spec.service.ports field, as shown below.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
service:
type: NodePort
port: 9080
portName: http
targetPort: 9080
nodePort: 30008
ports:
- port: 9443
name: https
monitoring:
endpoints:
- basicAuth:
password:
key: password
name: metrics-secret
username:
key: username
name: metrics-secret
interval: 5s
port: https
scheme: HTTPS
tlsConfig:
insecureSkipVerify: true
labels:
app-monitoring: 'true'
Runtime Component Operator is capable of creating a StatefulSet and PersistentVolumeClaim for each pod if storage is specified in the RuntimeComponent CR. If storage is not specified, the StatefulSet resource is created without persistent storage. Users can also provide mount points for their application. There are two ways to enable storage. You can choose to create StatefulSet resources without storage if you only require ordering and uniqueness of a set of pods.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
statefulSet: {}
With the RuntimeComponent CR definition below, the operator creates a PersistentVolumeClaim called pvc with a size of 1Gi and the ReadWriteOnce access mode. The operator also creates a volume mount for the StatefulSet, mounting to the /data folder. You can use the volumeMounts field instead of statefulSet.storage.mountPath if you need to persist more than one folder.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
statefulSet:
storage:
size: 1Gi
mountPath: "/data"
Runtime Component Operator allows users to provide an entire volumeClaimTemplate for full control over the automatically created PersistentVolumeClaim. It is also possible to create multiple volume mount points for the persistent volume using the volumeMounts field, as shown below. You can still use statefulSet.storage.mountPath if you require only a single mount point.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
volumeMounts:
- name: pvc
mountPath: /data_1
subPath: data_1
- name: pvc
mountPath: /data_2
subPath: data_2
statefulSet:
storage:
volumeClaimTemplate:
metadata:
name: pvc
spec:
accessModes:
- "ReadWriteMany"
storageClassName: 'glusterfs'
resources:
requests:
storage: 1Gi
The Service Binding Operator enables application developers to bind applications together with operator-managed backing services. This can be achieved by creating a ServiceBindingRequest custom resource.
This feature is only available if you have the Service Binding Operator installed on your cluster.
A RuntimeComponent application can be configured to behave as a Provisioned Service, as defined by the Service Binding Specification. According to the specification, a Provisioned Service resource must define a .status.binding.name, which is a reference to a Secret.
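For example, for a CR named my-app, the status stanza populated by the operator would look roughly like this sketch (the secret name follows the <CR_NAME>-expose-binding convention described below):

status:
  binding:
    # Reference to the binding secret created by the operator.
    name: my-app-expose-binding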
To expose your application as a Provisioned Service, set the .spec.service.bindable field to true. The Runtime Component Operator creates a binding secret named <CR_NAME>-expose-binding and adds the following entries to the secret: host, port, protocol, basePath and uri.
To override the default values for the entries in the binding secret, or to add new entries to the secret, create an override secret named <CR_NAME>-expose-binding-override and add any entries to it. The operator reads the content of the override secret and overrides the default values in the binding secret.
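A minimal sketch of such an override secret for a CR named my-app; the host and port values here are purely illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: my-app-expose-binding-override
stringData:
  # Entries here replace the defaults in my-app-expose-binding.
  host: my-app.mycompany.com
  port: "443"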
Once a RuntimeComponent application is exposed as a Provisioned Service, a service binding request can refer to the application as a backing service.
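Putting it together, this sketch enables the Provisioned Service behavior on a RuntimeComponent (the names and ports reuse the examples from this guide):

apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
  name: my-app
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    port: 9080
    # Expose the application as a bindable (Provisioned) service.
    bindable: true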
Runtime Component Operator can create a ServiceMonitor resource to integrate with the Prometheus Operator.
This feature does not support integration with Knative Service. Prometheus Operator is required to use ServiceMonitor.
At minimum, a label needs to be provided that Prometheus expects to be set on ServiceMonitor objects. In this case, it is apps-prometheus.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
monitoring:
labels:
apps-prometheus: ''
For advanced scenarios, it is possible to set many ServiceMonitor settings, such as an authentication secret, using Prometheus Endpoint:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
monitoring:
labels:
app-prometheus: ''
endpoints:
- interval: '30s'
basicAuth:
username:
key: username
name: metrics-secret
password:
key: password
name: metrics-secret
tlsConfig:
insecureSkipVerify: true
Runtime Component Operator can deploy serverless applications with Knative on a Kubernetes cluster. To achieve this, the operator creates a Knative Service resource, which manages the whole life cycle of the workload. To create a Knative service, set createKnativeService to true:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
createKnativeService: true
By setting the .spec.createKnativeService field, the operator creates a Knative service in the cluster and populates the resource with applicable RuntimeComponent fields. It also ensures that non-Knative resources, including the Kubernetes Service, Route, and Deployment, are deleted.
The CRD fields which are used to populate the Knative service resource include .spec.applicationImage, .spec.serviceAccountName, .spec.probes.liveness, .spec.probes.readiness, .spec.service.port, .spec.volumes, .spec.volumeMounts, .spec.env, .spec.envFrom, .spec.pullSecret and .spec.pullPolicy. The startup probe is not fully supported by Knative, so .spec.probes.startup does not apply when Knative service is enabled.
For more details on how to configure Knative for tasks such as enabling HTTPS connections and setting up a custom domain, check out the Knative documentation.
Autoscaling-related fields in RuntimeComponent are not used to configure the Knative Pod Autoscaler (KPA). To learn more about how to configure KPA, see Configuring the Autoscaler.
This feature is only available if you have Knative installed on your cluster.
To expose your application externally, set expose to true:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
By setting the .spec.expose field, the operator creates an unsecured route based on your application service. Setting this field is the same as running oc expose service <service-name>.
To create a secured HTTPS route, see the Certificates section.
Before you can use the Ingress resource to expose your cluster, you must install an ingress controller, such as Nginx or Traefik.
The Ingress resource is created only if the Route resource is not available on the cluster.
To use the Ingress resource, set the defaultHostName variable in the runtime-component-operator ConfigMap object to a host name such as mycompany.com, as shown in the sketch below.
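A minimal sketch of that ConfigMap; it assumes the ConfigMap lives in the namespace where the operator is installed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: runtime-component-operator
data:
  # Host name used to build Ingress hostnames for exposed applications.
  defaultHostName: mycompany.com

With the ConfigMap in place, expose the application: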
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: backend
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
With the default hostname of mycompany.com, the application is available at the http://my-app-backend.mycompany.com URL.
Generate your certificate and specify the secret containing the certificate using the .spec.route.certificateSecretRef field:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: backend
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
route:
certificateSecretRef: mycompany-tls
Most of the Ingress configuration is achieved through annotations, which are specific to the ingress controller implementation, such as Nginx, HAProxy, or Traefik.
You can provide an existing TLS secret and set a custom hostname.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: backend
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
route:
annotations:
# You can use this annotation to specify the name of the ingress controller to use.
# You can install multiple ingress controllers to address different types of incoming traffic such as an external or internal DNS.
kubernetes.io/ingress.class: "nginx"
# The following nginx annotation enables a secure pod connection:
      nginx.ingress.kubernetes.io/ssl-redirect: 'true'
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# The following traefik annotation enables a secure pod connection:
traefik.ingress.kubernetes.io/service.serversscheme: https
# Use a custom hostname for the Ingress
host: app-v1.mycompany.com
# Reference a pre-existing TLS secret:
certificateSecretRef: mycompany-tls
To expose your application as a Knative service externally, set expose to true:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
createKnativeService: true
expose: true
When expose is not set to true, the Knative service is labeled with serving.knative.dev/visibility=cluster-local, which makes the Knative route available only on the cluster-local network (and not on the public internet). However, if expose is set to true, the Knative route is accessible externally.
To configure secure HTTPS connections for your Knative deployment, see Configuring HTTPS with TLS certificates for more information.
Specify your own certificates for the Service and Route using the .spec.service.certificateSecretRef and .spec.route.certificateSecretRef fields. Example of certificates specified for the Route:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
route:
host: myapp.mycompany.com
termination: reencrypt
certificateSecretRef: my-app-rt-tls
service:
port: 9443
Example of a manually provided route secret:
kind: Secret
apiVersion: v1
metadata:
name: my-app-rt-tls
data:
ca.crt: >-
Certificate Authority public certificate...(base64)
tls.crt: >-
Route public certificate...(base64)
tls.key: >-
Route private key...(base64)
destCA.crt: >-
Pod/Service Certificate Authority certificate (base64). Might be required when using the reencrypt termination policy.
type: kubernetes.io/tls
Using affinity, you can constrain a Pod to run only on particular nodes, or to prefer to run on particular nodes. Use the nodeAffinityLabels field to set required labels for pod scheduling on specific nodes:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
nodeAffinityLabels:
customNodeLabel: label1, label2
customNodeLabel2: label3
The following example requires a node type of large and sets preferences for two zones, named zoneA and zoneB:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: my-app
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- large
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 60
preference:
matchExpressions:
- key: failure-domain.beta.kubernetes.io/zone
operator: In
values:
- zoneA
- weight: 20
preference:
matchExpressions:
- key: failure-domain.beta.kubernetes.io/zone
operator: In
values:
- zoneB
Pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on, based on labels of pods that are already running on the node rather than labels of the node itself.
The following example shows that pod affinity is required and that the pods for Service-A and Service-B must be in the same zone. Through pod anti-affinity, it is preferred not to schedule Service-B and Service-C on the same host.
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeComponent
metadata:
name: Service-B
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- Service-A
topologyKey: failure-domain.beta.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: service
operator: In
values:
- Service-C
topologyKey: kubernetes.io/hostname
See the Affinity Example for more details.
You can easily perform day-2 operations using the RuntimeOperation custom resource (CR), which allows you to specify the commands to run on a container within a Pod.
Field | Description |
---|---|
`podName` | The name of the Pod, which must be in the same namespace as the RuntimeOperation CR. |
`containerName` | The name of the container within the Pod. The default value is the name of the main container, which is `app`. |
`command` | Command to run. The command doesn't run in a shell. |
Example:
apiVersion: rc.app.stacks/v1beta2
kind: RuntimeOperation
metadata:
name: example-runtime-operation
spec:
# Specify the name of the pod. The pod must be in the same namespace as this RuntimeOperation CR.
podName: Specify_Pod_Name_Here
# Specify the name of the container. The default value is the name of the main container, which is `app`.
containerName: app
# Run the following command. The command does not run in a shell.
command:
- /bin/sh
- '-c'
- echo "Hello" > /tmp/runtime-operation.log
You can check the status of a runtime operation by using the status field inside the CR YAML file. You can also run the oc get runtimeop -o wide command to see the status of all operations in the current namespace.
The operator retries running the RuntimeOperation when it fails to start because the specified pod or container is not found, or when the pod is not in a running state. The retry interval doubles with each failed attempt.
Note: The RuntimeOperation CR must be created in the same namespace as the Pod to operate on. After the RuntimeOperation CR starts, the CR cannot be reused for more operations. A new CR needs to be created for each day-2 operation. The operator can process only one RuntimeOperation instance at a time. Long running commands can cause other runtime operations to wait before they start.
See the troubleshooting guide for information on how to investigate and resolve deployment problems.