This repository has been archived by the owner on Jun 25, 2024. It is now read-only.

Update the docs
Signed-off-by: Fabricio Aguiar <[email protected]>
fao89 committed Mar 26, 2024
1 parent 7099ef0 commit f3d3a67
Showing 9 changed files with 305 additions and 243 deletions.
5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -1,6 +1,11 @@
repos:
- repo: local
hooks:
- id: kustomize-docs
name: kustomize-docs
language: system
require_serial: true
entry: docs/kustomize_to_docs.sh
- id: gotidy
name: gotidy
language: system
36 changes: 12 additions & 24 deletions docs/assemblies/proc_creating-a-set-of-data-plane-nodes.adoc
@@ -8,31 +8,19 @@ Create an `OpenStackDataPlaneNodeSet` CR for each logical grouping of nodes in your data plane.

.Procedure

. Copy the https://kustomize.io/[kustomize] examples on your workstation:
+
----
git clone -n --depth=1 --filter=tree:0 \
    https://github.com/openstack-k8s-operators/dataplane-operator
cd dataplane-operator
git sparse-checkout set --no-cone examples
git checkout
cd examples
----
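The clone above combines a no-checkout, tree-filtered clone with a non-cone sparse checkout so that only the `examples` directory is materialized on disk. The same pattern can be exercised against a throwaway local repository (all paths below are illustrative stand-ins, not the real dataplane-operator layout):

```shell
# Build a tiny stand-in repository with an examples/ directory and other content
set -e
tmp=$(mktemp -d)
git init -q "$tmp/src"
(
  cd "$tmp/src"
  mkdir -p examples docs
  echo demo > examples/kustomization.yaml
  echo other > docs/readme.adoc
  git add .
  git -c user.email=demo@example.com -c user.name=demo commit -qm init
)
# Clone without checking out, then sparse-checkout only examples/
git clone -qn "$tmp/src" "$tmp/clone"
cd "$tmp/clone"
git sparse-checkout set --no-cone examples
git checkout -q
ls   # only examples/ is present in the working tree
```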

. Use kustomize to create the `OpenStackDataPlaneDeployment` and `OpenStackDataPlaneNodeSet` CRs:

* Pre-provisioned nodes:
+
----
oc kustomize --load-restrictor LoadRestrictionsNone preprovisioned > openstack-edpm.yaml
----

* Bare metal nodes:
+
----
oc kustomize --load-restrictor LoadRestrictionsNone baremetal > openstack-edpm.yaml
----

. Create an `OpenStackDataPlaneNodeSet` CR and save it to a file named `openstack-edpm.yaml` on your workstation:
+
[NOTE]
====
If desired, update the `values.yaml` file with values from your environment and add other kustomizations as needed before generating the CRs. Alternatively, you can edit the generated CRs directly before applying them.
====
----
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm-ipam
spec:
...
----
+
* xref:ref_example-OpenStackDataPlaneNodeSet-CR-for-preprovisioned-nodes_dataplane[Example `OpenStackDataPlaneNodeSet` CR for pre-provisioned nodes]
* xref:ref_example-OpenStackDataPlaneNodeSet-CR-for-bare-metal-nodes_dataplane[Example `OpenStackDataPlaneNodeSet` CR for bare metal nodes]
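Before applying the generated file, you can quickly confirm which CRs it contains. A minimal sketch, using an illustrative stand-in file in place of the real kustomize output:

```shell
# Stand-in for a generated openstack-edpm.yaml (contents are illustrative)
cat > /tmp/openstack-edpm.yaml <<'EOF'
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
---
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm
EOF
# List the kind of each document in the multi-document manifest
grep '^kind:' /tmp/openstack-edpm.yaml
```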

. The sample `OpenStackDataPlaneNodeSet` CR is connected to `cell1` by default. If you added additional Compute cells to the control plane and you want to connect the node set to one of the other cells, then you must create a custom service for the node set that includes the `Secret` CR for the cell:

@@ -310,7 +298,7 @@ endif::[]

. Save the `openstack-edpm.yaml` definition file.

. Create and <<proc_deploying-the-data-plane_dataplane,deploy>> the data plane resources:
. Create the data plane resources:
+
----
$ oc create -f openstack-edpm.yaml
8 changes: 4 additions & 4 deletions docs/assemblies/proc_deploying-the-data-plane.adoc
@@ -6,9 +6,7 @@ You use the `OpenStackDataPlaneDeployment` CRD to configure the services on the data plane nodes.

.Procedure

. Complete the steps from <<proc_creating-a-set-of-data-plane-nodes_dataplane,Creating a DataPlane>> procedure, ensuring that you have a file named `openstack-edpm.yaml` on your workstation.

. Optional: Update `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs that you want to deploy:
. Create an `OpenStackDataPlaneDeployment` CR and save it to a file named `openstack-edpm-deploy.yaml` on your workstation.
+
----
apiVersion: dataplane.openstack.org/v1beta1
@@ -25,10 +23,12 @@ spec:
+
* Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.
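Filled in, a minimal deployment manifest might look like the following; the `nodeSets` entry is illustrative and must match the name of an existing `OpenStackDataPlaneNodeSet` CR:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm
spec:
  nodeSets:
    - openstack-edpm-ipam
```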

. Save the `openstack-edpm-deploy.yaml` deployment file.

. Deploy the data plane:
+
----
$ oc create -f openstack-edpm.yaml
$ oc create -f openstack-edpm-deploy.yaml
----
+
You can view the Ansible logs while the deployment executes:
@@ -9,119 +9,103 @@ apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm-ipam
namespace: openstack
spec:
env: <1>
baremetalSetTemplate: #<1>
bmhLabelSelector:
app: openstack
cloudUserName: cloud-admin
ctlplaneInterface: enp1s0
env: #<2>
- name: ANSIBLE_FORCE_COLOR
value: "True"
services: <2>
networkAttachments: #<3>
- ctlplane
nodeTemplate: #<4>
ansible:
ansibleUser: cloud-admin #<5>
ansibleVars: #<6>
edpm_network_config_template: | #<7>
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in role_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
edpm_sshd_allowed_ranges:
- 192.168.111.0/24
enable_debug: false
gather_facts: false
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret #<8>
networks: #<9>
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-compute-0: #<10>
hostName: edpm-compute-0
preProvisioned: false
services: #<11>
- bootstrap
- download-cache
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
- install-certs
- ovn
- neutron-metadata
- libvirt
- nova
- telemetry
baremetalSetTemplate: <3>
bmhLabelSelector:
app: openstack
ctlplaneInterface: enp1s0
cloudUserName: cloud-admin
nodes:
edpm-compute-0: <4>
hostName: edpm-compute-0
networkAttachments: <5>
- ctlplane
nodeTemplate: <6>
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret <7>
networks: <8>
- name: CtlPlane
subnetName: subnet1
defaultRoute: true
- name: InternalApi
subnetName: subnet1
- name: Storage
subnetName: subnet1
- name: Tenant
subnetName: subnet1
managementNetwork: ctlplane
ansible:
ansibleUser: cloud-admin <9>
ansiblePort: 22
ansibleVars: <10>
service_net_map:
nova_api_network: internal_api
nova_libvirt_network: internal_api
edpm_chrony_ntp_servers:
- pool.ntp.org
edpm_network_config_hide_sensitive_logs: false
edpm_network_config_template: | <11>
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in role_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic1
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in role_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask:
{{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
edpm_network_config_hide_sensitive_logs: false
# These vars are for the network config templates themselves and are
# considered EDPM network defaults.
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
role_networks:
- InternalApi
- Storage
- Tenant
networks_lower:
External: external
InternalApi: internal_api
Storage: storage
Tenant: tenant
# edpm_nodes_validation
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
gather_facts: false
enable_debug: false
# edpm firewall, change the allowed CIDR if needed
edpm_sshd_configure_firewall: true
edpm_sshd_allowed_ranges: ['192.168.122.0/24']
# SELinux module
edpm_selinux_mode: enforcing
----

<1> Optional: A list of environment variables to pass to the pod.
<2> The services that are deployed on the data plane nodes in this `OpenStackDataPlaneNodeSet` CR.
<3> Configure the bare metal template for bare metal nodes that must be provisioned when creating the resource.
<4> The node definition reference, for example, `edpm-compute-0`. Each node in the node set must have a node definition.
<5> The networks the `ansibleee-runner` connects to, specified as a list of `netattach` resource names.
<6> The common configuration to apply to all nodes in this set of nodes.
<7> The name of the secret that you created in xref:proc_creating-the-SSH-key-secrets_{context}[Creating the SSH key secrets].
<8> Networks for the bare metal nodes.
<9> The user associated with the secret you created in xref:proc_creating-the-SSH-key-secrets_{context}[Creating the SSH key secrets].
<10> The Ansible variables that customize the set of nodes. For a complete list of Ansible variables, see https://openstack-k8s-operators.github.io/edpm-ansible/.
<11> The network configuration template to apply to nodes in the set. For sample templates, see https://github.com/openstack-k8s-operators/edpm-ansible/tree/main/roles/edpm_network_config/templates.
<1> Configure the bare metal template for bare metal nodes that must be provisioned when creating the resource.
<2> Optional: A list of environment variables to pass to the pod.
<3> The networks the `ansibleee-runner` connects to, specified as a list of `netattach` resource names.
<4> The common configuration to apply to all nodes in this set of nodes.
<5> The user associated with the secret you created in xref:proc_creating-the-SSH-key-secrets_{context}[Creating the SSH key secrets].
<6> The Ansible variables that customize the set of nodes. For a complete list of Ansible variables, see https://openstack-k8s-operators.github.io/edpm-ansible/.
<7> The network configuration template to apply to nodes in the set. For sample templates, see https://github.com/openstack-k8s-operators/edpm-ansible/tree/main/roles/edpm_network_config/templates.
<8> The name of the secret that you created in xref:proc_creating-the-SSH-key-secrets_{context}[Creating the SSH key secrets].
<9> Networks for the bare metal nodes.
<10> The node definition reference, for example, `edpm-compute-0`. Each node in the node set must have a node definition.
<11> The services that are deployed on the data plane nodes in this `OpenStackDataPlaneNodeSet` CR.
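The `min_viable_mtu` logic in the network configuration template selects the largest MTU among the ctlplane network and all role networks, so the bridge and its member interface can carry every VLAN without fragmentation. A plain-shell sketch of the same computation, with illustrative MTU values standing in for the template's variable lookups:

```shell
# Illustrative per-network MTUs (stand-ins for ctlplane_mtu and the
# <network>_mtu variables that the Jinja2 template looks up)
ctlplane_mtu=1500
internal_api_mtu=1500
storage_mtu=9000
tenant_mtu=1500

# min_viable_mtu = maximum of all MTUs, mirroring the template
min_viable_mtu=$ctlplane_mtu
for mtu in "$internal_api_mtu" "$storage_mtu" "$tenant_mtu"; do
  if [ "$mtu" -gt "$min_viable_mtu" ]; then
    min_viable_mtu=$mtu
  fi
done
echo "$min_viable_mtu"   # prints 9000
```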
