You’ll need a Kubernetes cluster to run against. Our recommendation for the time being is to use OpenShift Local (formerly known as CRC / Code Ready Containers). We have companion development tools available that will install OpenShift Local for you.
- Install Instances of Custom Resources:
kubectl apply -f config/samples/
- Build and push your image to the location specified by `IMG`:
make docker-build docker-push IMG=<some-registry>/cinder-operator:tag
- Deploy the controller to the cluster with the image specified by `IMG`:
make deploy IMG=<some-registry>/cinder-operator:tag
To delete the CRDs from the cluster:
make uninstall
Undeploy the controller from the cluster:
make undeploy
The Cinder services can be configured to interact with an external Ceph cluster.
In particular, the `customServiceConfig` parameter must be used, for each defined `cinder-volume` and `cinder-backup` instance, to override the `enabled_backends` parameter and inject the Ceph-related parameters.
The `ceph.conf` file and the client keyring must exist as secrets, and can be mounted by the Cinder pods using the `extraMounts` feature.
Create a secret by generating the following file and then apply it using the `oc` CLI.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-client-conf
  namespace: openstack
stringData:
  ceph.client.openstack.keyring: |
    [client.openstack]
    key = <key>
    caps mgr = "allow *"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=images"
  ceph.conf: |
    [global]
    fsid = 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
    mon_host = 10.1.1.2,10.1.1.3,10.1.1.4
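For example, assuming the manifest above is saved to a file named ceph-client-conf.yaml (the filename is just an illustration), it can be applied and verified with:

oc apply -f ceph-client-conf.yaml
oc get secret ceph-client-conf -n openstack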
Add the following to the spec of the Cinder CR and then apply it using the `oc` CLI.
extraMounts:
  - name: v1
    region: r1
    extraVol:
      - propagation:
          - CinderVolume
          - CinderBackup
        volumes:
          - name: ceph
            projected:
              sources:
                - secret:
                    name: ceph-client-conf
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
The following is an example of a complete Cinder object that can be used to trigger the Cinder service deployment and enable a Cinder backend that points to an external Ceph cluster.
apiVersion: cinder.openstack.org/v1beta1
kind: Cinder
metadata:
  name: cinder
  namespace: openstack
spec:
  serviceUser: cinder
  databaseInstance: openstack
  databaseUser: cinder
  cinderAPI:
    replicas: 1
    containerImage: quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified
  cinderScheduler:
    replicas: 1
    containerImage: quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified
  cinderBackup:
    replicas: 1
    containerImage: quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified
    customServiceConfig: |
      [DEFAULT]
      backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
      backup_ceph_pool = backups
      backup_ceph_user = admin
  secret: cinder-secret
  cinderVolumes:
    volume1:
      containerImage: quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified
      replicas: 1
      customServiceConfig: |
        [DEFAULT]
        enabled_backends=ceph
        [ceph]
        volume_backend_name=ceph
        volume_driver=cinder.volume.drivers.rbd.RBDDriver
        rbd_ceph_conf=/etc/ceph/ceph.conf
        rbd_user=admin
        rbd_pool=volumes
        rbd_flatten_volume_from_snapshot=False
        rbd_secret_uuid=<Ceph_FSID>
  extraMounts:
    - name: cephfiles
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - CinderBackup
          extraVolType: Ceph
          volumes:
            - name: ceph
              projected:
                sources:
                  - secret:
                      name: ceph-client-files
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true
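As a sketch, assuming the CR above is saved to a file named cinder.yaml (an arbitrary name), it can be applied and the resulting pods checked with:

oc apply -f cinder.yaml
oc get pods -n openstack | grep cinder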
When the service is up and running, it's possible to interact with the Cinder API and create the Ceph `cinder type` backend, which is associated with the Ceph tier specified in the config file.
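For example, a volume type bound to the ceph backend defined above could be created and exercised roughly as follows (the type name and volume name are just illustrations):

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume create --type ceph --size 1 test-volume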
The Cinder spec can be used to attach the Cinder pods to additional networks, e.g. to connect to a Ceph RBD server on a dedicated storage network.
Create a NetworkAttachmentDefinition which can then be referenced from the Cinder CR.
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: storage
  namespace: openstack
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "storage",
      "type": "macvlan",
      "master": "enp7s0.21",
      "ipam": {
        "type": "whereabouts",
        "range": "172.18.0.0/24",
        "range_start": "172.18.0.50",
        "range_end": "172.18.0.100"
      }
    }
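Assuming the definition above is saved to a file named storage-nad.yaml (an arbitrary name), it can be applied and verified with the commands below; the second command requires the Multus NetworkAttachmentDefinition CRD to be installed:

oc apply -f storage-nad.yaml
oc get network-attachment-definitions -n openstack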
The following is an example of a Cinder resource that can be used to trigger the service deployment and have the Cinder Volume and Cinder Backup service pods attached to the storage network using the above NetworkAttachmentDefinition.
apiVersion: cinder.openstack.org/v1beta1
kind: Cinder
metadata:
  name: cinder
spec:
  ...
  cinderBackup:
    ...
    networkAttachments:
    - storage
  cinderVolumes:
    volume1:
      ...
      networkAttachments:
      - storage
...
When the service is up and running, the pod will now have an additional NIC configured for the storage network:
# oc rsh cinder-volume-volume1-0
Defaulted container "cinder-volume" out of: cinder-volume, probe, init (init)
sh-5.1# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if334: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 0a:58:0a:82:01:44 brd ff:ff:ff:ff:ff:ff link-netns 7ea23955-d990-449d-81f9-fd57cb710daa
inet 10.130.1.68/23 brd 10.130.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::bc92:5cff:fef3:2e27/64 scope link
valid_lft forever preferred_lft forever
4: net1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 26:35:09:03:71:b3 brd ff:ff:ff:ff:ff:ff link-netns 7ea23955-d990-449d-81f9-fd57cb710daa
inet 172.18.0.20/24 brd 172.18.0.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::2435:9ff:fe03:71b3/64 scope link
valid_lft forever preferred_lft forever
The Cinder spec can be used to configure the Cinder API to register e.g. the internal endpoint on an isolated network. MetalLB is used for this scenario.
As a prerequisite, MetalLB needs to be installed and the worker nodes prepared to work as MetalLB nodes to serve the LoadBalancer service.
In this example the following MetalLB IPAddressPool is used:
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: osp-internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.200-172.17.0.210
  autoAssign: false
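Depending on the MetalLB mode in use, the pool usually also has to be advertised; a minimal sketch for L2 mode, assuming recent MetalLB CRDs, could look like:

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: osp-internalapi
  namespace: metallb-system
spec:
  ipAddressPools:
  - osp-internalapi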
The following is an example of a Cinder resource that can be used to trigger the service deployment, have the cinderAPI endpoint registered as a MetalLB service using the IPAddressPool `osp-internalapi`, request the IP `172.17.0.202` as the VIP, and share the IP with other services.
apiVersion: cinder.openstack.org/v1beta1
kind: Cinder
metadata:
name: cinder
spec:
  ...
  cinderAPI:
    ...
    override:
      service:
        internal:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: osp-internalapi
              metallb.universe.tf/allow-shared-ip: internalapi
              metallb.universe.tf/loadBalancerIPs: 172.17.0.202
          spec:
            type: LoadBalancer
  ...
...
The internal Cinder endpoint gets registered with its service name. This service name needs to resolve to the `LoadBalancerIP` on the isolated network, either by DNS or via /etc/hosts:
# openstack endpoint list -c 'Service Name' -c Interface -c URL --service cinderv3
+--------------+-----------+------------------------------------------------------------------+
| Service Name | Interface | URL |
+--------------+-----------+------------------------------------------------------------------+
| cinderv3 | internal | http://cinder-internal.openstack.svc:8776/v3 |
| cinderv3 | public | http://cinder-public-openstack.apps.ostest.test.metalkube.org/v3 |
+--------------+-----------+------------------------------------------------------------------+
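For a quick test without DNS, a hedged example of an /etc/hosts entry on the client host, using the VIP requested above, could look like:

172.17.0.202 cinder-internal.openstack.svc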
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
- Install the CRDs into the cluster:
make install
- Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
make run
NOTE: You can also run this in one step by running: make install run
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
make manifests
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation
Copyright 2022.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.