To successfully run a Kubernetes cluster in OpenStack, you will need to configure a few essential properties and then ensure they are added to your machines.yaml file. The following components are necessary:
- Private network
- Public network
- Floating IP address
- Router connecting the private network to the public network
- Specific security group rules
- At least one of the supported operating system images
After running generate-yaml.sh, the file cmd/clusterctl/examples/openstack/out/machines.yaml will be created. This file stores information on which OpenStack elements to create the cluster on and which cluster components to create. We provide a template that creates one master and one worker node; however, the template is incomplete and needs to be filled in. The sections below explain how to fill in each placeholder.
Most OpenStack clouds come with a private network already, but if you would like to create a private network just for Kubernetes, the following openstack commands will create it, along with a subnet for the nodes:
openstack network create <name of network>
openstack subnet create <name of subnet> --network <name of network> --subnet-range <CIDR ip range>
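For instance, to create a network and subnet matching the subnet name used in the router example later in this guide (the network name and CIDR here are only illustrative choices, not values required by the installer):

# Illustrative names and CIDR; pick values that fit your environment.
openstack network create kube-private-net
openstack subnet create kube-nodes-subnet --network kube-private-net --subnet-range 10.0.0.0/24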
Once you have a network that you want to host the cluster on, add the ID of that network to the machines.yaml file where it says <Kubernetes Network ID>.
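As a rough sketch of where that placeholder lives, the nesting shown here follows the network examples later in this document, so treat it as an assumption rather than the full template:

spec:
  providerSpec:
    value:
      networks:
      - uuid: <Kubernetes Network ID>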
If your OpenStack cloud does not already have a public network, you should contact your cloud service provider. We will not review how to troubleshoot this here.
Create a floating IP for the master node via OpenStack before creating a cluster with the clusterctl tool. clusterctl uses this IP to access the master node via SSH and retrieve the kubeconfig file generated by kubeadm on the master node. For example, to create a floating IP:
openstack floating ip create <public_net>
Example output:
+--------------------------------------+------------------+---------------------+---------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| aeefa79a-0e76-4c14-9da7-e5a9a6cc3787 | | 172.17.0.117 | |
+--------------------------------------+------------------+---------------------+---------+
Once you have an available floating IP, add it to the machines.yaml file where it says <Available Floating IP>. You only need to create and use one floating IP.
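If you want to double-check which floating IPs exist and whether they are already associated before picking one (an optional step, not required by the installer), you can list them:

openstack floating ip list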
Your Kubernetes cluster must be reachable from wherever cluster-api-provider-openstack is being run in order to set it up, and it probably needs to be reachable by external traffic once in use. To make your cluster reachable by external traffic, you will need to set up an OpenStack router that connects your private network to your public network. For this example, let's say you have a subnet named kube-nodes-subnet in the private network you created and a public network named public that you want to connect with a router named kube-router.
openstack router create kube-router
openstack router set kube-router --external-gateway public
openstack router add subnet kube-router kube-nodes-subnet
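To confirm the router was created with the expected external gateway and attached subnet (an optional sanity check), you can inspect it:

openstack router show kube-router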
For another example of networking in OpenStack, see https://developer.openstack.org/firstapp-libcloud/networking.html
For the installer to work, a few security group rules are required to be open. These may be different from the rules needed to reach a cluster once it's running. The following security group rules should be added to the security group of your choosing. For this example, we will suppose you created a security group named kubernetes that you will use for the cluster.
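If that security group does not exist yet, it can be created first; the name kubernetes is simply the one assumed in this example:

openstack security group create kubernetes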
openstack security group rule create --ingress --protocol tcp --dst-port 22 kubernetes
openstack security group rule create --ingress --protocol tcp --dst-port 3000:32767 kubernetes
openstack security group rule create --ingress --protocol tcp --dst-port 443 kubernetes
openstack security group rule create --egress kubernetes
In machines.yaml, you can specify OpenStack security groups to be applied to each server in the securityGroups section of the YAML. You can specify a security group in three ways: by ID, by name, or by filters. Specifying a security group by ID always returns exactly one security group, or an error if the specified group cannot be found. Note that more than one security group may be added to your machine when using a name or a filter to specify it. The following filters are available to you:
- TenantID
- ProjectID
- Limit
- Marker
- SortKey
- SortDir
- Tags
- TagsAny
- NotTags
- NotTagsAny
Each security group can be specified by its UUID, its name, or a filter. It is recommended that you use an openstack query to check that the name or filters you use return the security group or groups you are expecting. An example of the correct syntax for each of these use cases is below:
securityGroups:
- uuid: < your security group ID >
- name: < your security group Name >
- filter:
    projectId: < your project ID >
    tags: < a tag >
- name: < your security group Name >
  filter:
    tags: < a tag >
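To verify that a name actually resolves to the group you expect (the name kubernetes here is just the example group used above), you can query it directly:

openstack security group show kubernetes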
We don't currently have specific version requirements, so the choice is yours. However, we do require that you have either an Ubuntu image or a CentOS image available in your cloud. For this step, we refer you to the following documentation: https://docs.openstack.org/image-guide/obtain-images.html.
Reference the operating system image you want to use in the machines.yaml file where it says <Image Name>. If you are using Ubuntu, replace <SSH Username> in machines.yaml with ubuntu. If you are using CentOS, replace <SSH Username> in machines.yaml with centos.
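If a suitable image is not yet available in your cloud, you can upload one you downloaded by following the documentation above. In the command below, the file path and image name are placeholders, and qcow2 is only an assumption about the format of the image you downloaded:

openstack image create --disk-format qcow2 --container-format bare --file <path to downloaded image file> <Image Name>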
Rather than just using a network, you have the option of connecting your server to a specific subnet. The following example shows how to specify a particular subnet of a network to use for a server.
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: openstack-node-
    labels:
      set: node
  spec:
    providerSpec:
      value:
        networks:
        - subnet_id: < subnet id >
If you have a complex query that you want to use to look up a network, you can do this by using a network filter. The filter allows you to look up a network by the following network features:
- status
- name
- adminStateUp
- tenantId
- projectId
- shared
- id
- marker
- limit
- sortKey
- sortDir
- tags
- tagsAny
- notTags
- notTagsAny
When using filters to look up a network, note that it is possible to get multiple networks as a result. This should not be a problem; however, please test your filters with openstack network list to be certain that they return the networks you want. Please refer to the following usage example:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: openstack-node-
    labels:
      set: node
  spec:
    providerSpec:
      value:
        networks:
        - filters:
            name: myNetwork
            tags: myTag
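To see what such a filter would match before committing it to machines.yaml (myNetwork and myTag mirror the example above; the --name and --tags options are assumed to be available in your openstack client version), you can run:

openstack network list --name myNetwork --tags myTag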
You can specify multiple networks (or subnets) to connect your server to. To do this, simply add another entry in the networks array. The following example connects the server to three different networks using all of the connection methods discussed above:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: openstack-node-
    labels:
      set: node
  spec:
    providerSpec:
      value:
        networks:
        - filters:
            name: myNetwork
            tags: myTag
        - uuid: your_network_id
        - subnet_id: your_subnet_id
By default, all resources will be tagged with the values clusterName and cluster-api-provider-openstack. The minimum microversion of the Nova API needed to support server tagging is 2.52. If your cloud does not support this, disable server tagging by setting disableServerTags: true in cluster.yaml. By default, this value is false, so there is no need to set it in machines.yaml. If your cloud supports server tagging, you can tag all resources created by the cluster in the cluster.yaml file. Here is an example of the tagging options available in cluster.yaml:
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: test1
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"
  providerSpec:
    value:
      apiVersion: "openstackproviderconfig/v1alpha1"
      kind: "OpenstackProviderSpec"
      disableServerTags: false
      tags:
      - cluster-tag
To tag resources specific to a machine, add a value to the tags field in machines.yaml like this:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: openstack-node-
    labels:
      set: node
  spec:
    providerSpec:
      value:
        tags:
        - machine-tag
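To check that the tags were applied, you can inspect a server while requesting a compute API microversion that exposes tags (2.26 or later shows them, and 2.52 is the minimum mentioned above for the provider's server tagging); the server name here is a placeholder:

openstack --os-compute-api-version 2.52 server show <your server name>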
Instead of tagging, you also have the option to add metadata to instances. This functionality should be more commonly available than tagging. Here is a usage example:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  metadata:
    generateName: openstack-node-
    labels:
      set: node
  spec:
    providerSpec:
      value:
        serverMetadata:
          name: bob
          nickname: bobbert
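To confirm the metadata landed on the instance (an optional check; the server name is a placeholder), note that the metadata appears under the properties column of the server details:

openstack server show <your server name> -c properties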
In examples/openstack/<os>/out/machines.yaml, generated with generate-yaml.sh, setting spec.providerSpec.value.rootVolume.diskSize to a value greater than 0 means the machine boots from a volume:

items:
- apiVersion: "cluster.k8s.io/v1alpha1"
  kind: Machine
  ...
  spec:
    providerSpec:
      value:
        ...
        rootVolume:
          diskSize: 0
          sourceType: ""
          SourceUUID: ""
        securityGroups:
        ...
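As a loose illustration only, a filled-in rootVolume block might look like the following; the values are hypothetical, so check the provider spec for the exact accepted sourceType values and the expected SourceUUID in your version:

rootVolume:
  diskSize: 50                     # hypothetical size; any value greater than 0 enables boot from volume
  sourceType: "image"              # assumption: the root volume is created from an image
  SourceUUID: "<your image ID>"    # placeholder for the UUID of the source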
On a heavily loaded cloud, creating and deleting an OpenStack instance might take a long time; by default, the timeout is 5 minutes. You can set:
- CLUSTER_API_OPENSTACK_INSTANCE_DELETE_TIMEOUT for the instance delete timeout value.
- CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT for the instance create timeout value.
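As a hedged example, both timeouts could be raised in the environment before running clusterctl; the values here are assumed to be in minutes, so verify the expected format against the provider documentation for your version:

export CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT=30
export CLUSTER_API_OPENSTACK_INSTANCE_DELETE_TIMEOUT=30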