---
title: kata-operator
authors:
reviewers:
approvers:
creation-date: 2020-05-11
last-updated: 2020-05-11
status:
see-also:
replaces:
superseded-by:
---
This enhancement proposes the creation of a new Operator to facilitate the installation of the Kata Runtime on all or selected worker nodes of the cluster. The Kata Operator is positioned as the preferred alternative to manually installing the Kata Runtime on an OpenShift/Kubernetes cluster.
- Enhancement is implementable
- Design details are appropriately documented from clear requirements
- Test plan is defined
- Graduation criteria for dev preview, tech preview, GA
- User-facing documentation is created in [openshift/docs]
Customers that require stronger isolation than that provided by standard cgroups/namespace-based containers opt for Kata as their runtime. The Kata Runtime uses virtualization to launch containers inside virtual machines, providing elevated levels of isolation for containerized workloads.
However, in order to use the Kata Runtime seamlessly with OpenShift/Kubernetes, customers first need to make sure the Kata Runtime is installed and configured correctly on the worker nodes. The Kata Operator aims to provide a seamless experience not only for installing the Kata Runtime but also for performing lifecycle management of the runtime itself.
Over the years, due to the discovery of vulnerabilities in the kernel, such as CVE-2016-5195 (aka Dirty COW), and in container runtimes, such as CVE-2019-5736 (the runc container breakout), it became very evident that certain security-sensitive workloads need better protection than the simple cgroups/namespace isolation provided for regular containers.
This is when Kata started getting more attention, because it mitigates those threats quite successfully by using highly optimized virtualization targeted at containers, without breaking the standard Kubernetes container workflow, since it is an OCI-compliant runtime. This way, users of OpenShift/Kubernetes can benefit from the ease and flexibility of the containers they have come to love, while also gaining the additional isolation provided by traditional virtualization.
In order to use the Kata Runtime today, administrators need to log in to the worker nodes of the OpenShift/Kubernetes cluster and manually install and configure the Kata Runtime as well as a CRI runtime, such as CRI-O. The Kata Operator aims to help administrators install, upgrade, and uninstall the Kata Runtime (and its dependencies) by extending the OpenShift/Kubernetes API with Custom Resources and an Operator.
To help us develop the Kata Operator, we will leverage the CoreOS Operator Framework, which provides the necessary boilerplate code.
Create an API which supports:
- Installation of the Kata Runtime on all or selected worker nodes
- Configuration of the CRI runtime, such as CRI-O, to use the Kata Runtime on those worker nodes
- Updates to the Kata Runtime
- Uninstallation of the Kata Runtime, reconfiguring CRI-O to no longer use it
- To keep the Kata Operator's scope limited to lifecycle management of the Kata Runtime, it will only support installation configuration of the Kata Runtime. This Operator will not interact with any runtime configuration, such as the Pod Annotations supported by Kata.
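To illustrate what is out of scope: runtime-level tuning of this kind happens through pod annotations interpreted by the Kata Runtime itself, not by the Operator. A minimal sketch follows; the annotation key uses the upstream Kata `io.katacontainers.config.*` convention, but the exact set of supported keys depends on the Kata version, and all other names in the pod are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-tuned-example
  annotations:
    # Read by the Kata Runtime, not by the Kata Operator.
    io.katacontainers.config.hypervisor.default_vcpus: "2"
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: app
    image: registry.example.com/app:latest
```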
The following new API, `kataconfiguration.openshift.io/v1`, is proposed:
```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// KataConfig is the Schema for the kataconfigs API.
type KataConfig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   KataConfigSpec   `json:"spec,omitempty"`
	Status KataConfigStatus `json:"status,omitempty"`
}

// KataConfigSpec defines the desired state of KataConfig.
type KataConfigSpec struct {
	// MachineConfigPoolSelector selects the worker nodes on which the
	// Kata Runtime is installed (see the example below).
	// +optional
	MachineConfigPoolSelector *metav1.LabelSelector `json:"machineConfigPoolSelector,omitempty"`

	Config KataInstallConfig `json:"config"`
}

// KataInstallConfig describes how the Kata Runtime is installed.
type KataInstallConfig struct {
	// RuntimeClassName is the name of the RuntimeClass created for Kata.
	// +optional
	RuntimeClassName string `json:"runtimeClassName"`

	// KataImage is the container image carrying the Kata binaries.
	// +required
	KataImage string `json:"kataImage"`
}

// KataConfigStatus defines the observed state of KataConfig.
type KataConfigStatus struct {
	// Nodes lists the worker nodes on which the Kata Runtime is installed.
	Nodes []string `json:"nodes"`
}
```
One of the ways administrators can interact with the Kata Operator is by providing a YAML file to the standard `oc` or `kubectl` command:
```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: install-kata-1.0
spec:
  machineConfigPoolSelector:
    matchLabels:
      install-kata: kata-1.0
  config:
    runtimeClassName: kata-qemu # optional
    kataImage: quay.io/kata-image/kata-install:1.0
```
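Once the binaries are installed on the selected nodes, the operator can expose the runtime to workloads by creating a corresponding `RuntimeClass`. A minimal sketch follows, assuming CRI-O on those nodes has been configured with a matching `kata` runtime handler; the handler name and node labels are illustrative assumptions, not part of this proposal:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-qemu # taken from spec.config.runtimeClassName
handler: kata # must match the runtime handler name registered in CRI-O
scheduling:
  nodeSelector:
    install-kata: kata-1.0
```

Workloads would then opt in by setting `runtimeClassName: kata-qemu` in their pod spec.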
A detailed list of User Stories is maintained in this document.
One of the critical aspects of this Operator is how the Kata Runtime binaries are installed on the worker nodes. We have the following choices:
- Install the Kata packages on the host machine: easy to implement, but installing hundreds of megabytes on the CoreOS-based rootfs is not desirable.
- Download a container image containing the Kata binaries and extract them on the host machine: either Kata and its dependencies need to be statically built and packaged in a container image (very similar to kata-deploy), or the Kata binaries (and their dependencies) need to be built against exactly the same OS level as that used by the worker nodes.
- Download a container image containing the Kata binaries and bind-mount the overlay path containing that image on the host: this is the desirable long-term method.
For the first iteration of the Kata Operator, we will implement option 2 from the list above.
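As a rough sketch of what option 2 could look like (loosely modeled on kata-deploy, not a committed design), the operator could run a privileged daemonset on the selected nodes whose image carries the statically built binaries and whose entrypoint copies them onto the host. The namespace, image, and host path below are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kata-install
  namespace: kata-operator
spec:
  selector:
    matchLabels:
      name: kata-install
  template:
    metadata:
      labels:
        name: kata-install
    spec:
      nodeSelector:
        install-kata: kata-1.0
      containers:
      - name: kata-install
        image: quay.io/kata-image/kata-install:1.0
        # Privileged, with the host's /opt mounted, so the entrypoint
        # can copy the Kata binaries out of the image onto the host.
        securityContext:
          privileged: true
        volumeMounts:
        - name: host-opt
          mountPath: /opt
      volumes:
      - name: host-opt
        hostPath:
          path: /opt
```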
Not assessed yet.
- All the major components will have corresponding unit tests.
- Integration tests will run on an OpenShift/Kubernetes cluster, and they will be executed before merging any future PR to this project.
- Proof-of-concept ready and showcased.
If Kata installation were supported by a project like the Machine Config Operator, then we would not need a dedicated Operator for Kata installation.
Kata Deploy is an upstream project to install Kata on Kubernetes. It is not a full-fledged Operator, but rather a simple daemonset that downloads and extracts statically built Kata and QEMU binaries.