Rework Ceph RBD migration documentation
This patch represents a rework of the current RBD documentation to move
it from a POC to a procedure that we can test in CI.
In particular:
- the procedure is split between Ceph Mgr and Ceph Mon migration
- the Ceph Mgr and Ceph Mon docs read more like procedures that the user
  should follow
- the order is fixed so that RBD is migrated last

Signed-off-by: Francesco Pantano <[email protected]>
fmount committed May 30, 2024
1 parent 5f0ff6b commit 61b75d7
Showing 4 changed files with 527 additions and 6 deletions.
14 changes: 12 additions & 2 deletions docs_user/assemblies/assembly_migrating-ceph-rbd.adoc
@@ -11,6 +11,16 @@ To migrate Red Hat Ceph Storage Rados Block Device (RBD), your environment must
* {Ceph} is running version 6 or later and is managed by cephadm/orchestrator.
* NFS (ganesha) is migrated from a {OpenStackPreviousInstaller}-based deployment to cephadm. For more information, see xref:creating-a-ceph-nfs-cluster_migrating-databases[Creating a NFS Ganesha cluster].
* Both the {Ceph} public and cluster networks are propagated, with {OpenStackPreviousInstaller}, to the target nodes.
* Ceph Monitors need to keep their IP addresses to avoid a cold migration.
* Ceph MDS, the Ceph Monitoring stack, Ceph RGW, and other services have already
been migrated to the target nodes.
ifeval::["{build}" != "upstream"]
* The daemon distribution follows the cardinality constraints that are described in link:https://access.redhat.com/articles/1548993[Red Hat Ceph Storage: Supported configurations].
endif::[]
* The Ceph cluster is healthy, and the `ceph -s` command returns `HEALTH_OK`.
A minimal health check is shown in the example after this list.
* The procedure keeps the Ceph Monitor IP addresses by moving them to the {Ceph} nodes.
* The procedure drains the existing Controller nodes.
* The procedure deploys additional Ceph Monitors to the existing nodes, and promotes them as
`_admin` nodes that administrators can use to manage the {CephCluster} cluster and perform day 2 operations against it.
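
For example, a minimal health check before you start, run from a node that has
access to the {Ceph} admin keyring (a sketch only; the exact node depends on
your environment):

[source,bash]
----
# The cluster must report HEALTH_OK before the RBD migration starts.
sudo cephadm shell -- ceph -s
sudo cephadm shell -- ceph health detail
----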

-include::../modules/proc_migrating-mon-and-mgr-from-controller-nodes.adoc[leveloffset=+1]
+include::../modules/proc_migrating-mgr-from-controller-nodes.adoc[leveloffset=+1]

+include::../modules/proc_migrating-mon-from-controller-nodes.adoc[leveloffset=+1]
8 changes: 4 additions & 4 deletions docs_user/main.adoc
@@ -22,12 +22,12 @@ include::assemblies/assembly_adopting-openstack-control-plane-services.adoc[leveloffset=+1]

include::assemblies/assembly_adopting-the-data-plane.adoc[leveloffset=+1]

-include::assemblies/assembly_migrating-ceph-rbd.adoc[leveloffset=+1]
+include::assemblies/assembly_migrating-the-object-storage-service.adoc[leveloffset=+1]

-include::assemblies/assembly_migrating-ceph-rgw.adoc[leveloffset=+1]
+include::assemblies/assembly_migrating-ceph-monitoring-stack.adoc[leveloffset=+1]

include::modules/proc_migrating-ceph-mds.adoc[leveloffset=+1]

-include::assemblies/assembly_migrating-ceph-monitoring-stack.adoc[leveloffset=+1]
+include::assemblies/assembly_migrating-ceph-rgw.adoc[leveloffset=+1]

-include::assemblies/assembly_migrating-the-object-storage-service.adoc[leveloffset=+1]
+include::assemblies/assembly_migrating-ceph-rbd.adoc[leveloffset=+1]
100 changes: 100 additions & 0 deletions docs_user/modules/proc_migrating-mgr-from-controller-nodes.adoc
@@ -0,0 +1,100 @@
[id="migrating-mgr-from-controller-nodes_{context}"]

= Migrating Ceph Mgr daemons to {Ceph} nodes

The following section describes how to move Ceph Mgr daemons from the
OpenStack controller nodes to a set of target nodes. Target nodes might be
pre-existing {Ceph} nodes, or OpenStack Compute nodes if Ceph is deployed by
{OpenStackPreviousInstaller} with an HCI topology.

.Prerequisites

Configure the target nodes (CephStorage or ComputeHCI) to have both the `storage`
and `storage_mgmt` networks to ensure that you can use both the {Ceph} public and
cluster networks from the same node. This step requires you to interact with
{OpenStackPreviousInstaller}. In {rhos_prev_long} {rhos_prev_ver} and later,
you do not have to run a stack update.
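
As an optional sanity check (a sketch only; interface names and subnets depend
on your environment), confirm that a target node has an address on both the
`storage` and `storage_mgmt` networks before you proceed:

[source,bash]
----
# List the IPv4 addresses on the target node and confirm that one address
# belongs to the Ceph public (storage) network and one to the cluster
# (storage_mgmt) network.
ssh heat-admin@<target_node> ip -o -4 addr show
----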

.Procedure

This procedure assumes that cephadm and the orchestrator are the tools that
drive the Ceph Mgr migration. As with the other Ceph daemons (MDS, Monitoring,
and RGW), the procedure uses the Ceph spec to modify the placement and
reschedule the daemons. Ceph Mgr runs in an active/passive fashion, and it is
also responsible for providing many modules, including the orchestrator.
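
Before you make any changes, you can record the current Ceph Mgr layout. This
is only an illustrative check; the daemon names depend on your deployment:

[source,bash]
----
# Show which Ceph Mgr instance is currently active and list all Mgr daemons.
sudo cephadm shell -- ceph mgr stat
sudo cephadm shell -- ceph orch ps | grep -i mgr
----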

. Before you start the migration, SSH into each target node and enable the
firewall rules that are required to reach a Mgr service:
+
[source,bash]
----
dports="6800:7300"
ssh heat-admin@<target_node> sudo iptables -I INPUT \
    -p tcp --match multiport --dports $dports -j ACCEPT;
----

[NOTE]
Repeat this step for each target node.

. Check that the rules are properly applied and persist them (an optional verification sketch follows this step):
+
[source,bash]
----
sudo iptables-save
sudo systemctl restart iptables
----
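
Optionally, you can confirm that the rule is in place on the target node. This
is only an illustrative check:

[source,bash]
----
# Confirm that the Ceph Mgr port range is accepted on the target node.
ssh heat-admin@<target_node> sudo iptables -L INPUT -n -v | grep '6800:7300'
----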

. Prepare the target node to host the new Ceph Mgr daemon, and add the `mgr`
label to the target node:
+
[source,bash]
----
ceph orch host label add <target_node> mgr
----

- Replace <target_node> with the hostname of a host that is listed in the {Ceph}
cluster by the `ceph orch host ls` command.

Repeat this action for each node that will host a Ceph Mgr daemon, for example
with a loop like the sketch below.
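
The following is a minimal sketch of labeling several nodes at once; the
hostnames are placeholders, and the loop assumes that each node is already part
of the {Ceph} cluster:

[source,bash]
----
# Placeholder hostnames: replace with the hosts returned by `ceph orch host ls`.
for node in ceph-0 ceph-1 ceph-2; do
    sudo cephadm shell -- ceph orch host label add "$node" mgr
done
----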

Get the Ceph Mgr spec and update the `placement` section to use `label` as the
main scheduling strategy.

. Get the Ceph Mgr spec:
+
[source,bash]
----
sudo cephadm shell -- ceph orch ls --export mgr > mgr.yaml
----
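
The exported spec typically pins the Ceph Mgr daemons to the Controller hosts.
The following is only an illustrative example of what `mgr.yaml` might contain
before you edit it; the hostnames and layout depend on your deployment:

[source,yaml]
----
service_type: mgr
service_name: mgr
placement:
  hosts:
  - controller-0
  - controller-1
  - controller-2
----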

. Edit the retrieved spec and add the `label: mgr` section to the `placement`
section:
+
[source,yaml]
----
service_type: mgr
service_id: mgr
placement:
  label: mgr
----

. Save the spec in `/tmp/mgr.yaml`.
. Apply the spec with cephadm by using the orchestrator:
+
[source,bash]
----
sudo cephadm shell -m /tmp/mgr.yaml -- ceph orch apply -i /mnt/mgr.yaml
----
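
If you want to preview the placement before applying it, `ceph orch apply` also
accepts a `--dry-run` option. This step is optional and shown only as a sketch:

[source,bash]
----
# Preview where the orchestrator would schedule the Mgr daemons without
# applying the spec.
sudo cephadm shell -m /tmp/mgr.yaml -- ceph orch apply -i /mnt/mgr.yaml --dry-run
----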

Depending on the number of nodes where the `mgr` label is added, you will see a
Ceph Mgr daemon count that matches the number of labeled hosts.

. Verify that the new Ceph Mgr daemons are created on the target nodes:
+
[source,bash]
----
ceph orch ps | grep -i mgr
ceph -s
----
+
[NOTE]
The procedure does not shrink the Ceph Mgr daemons: the count grows by the
number of target nodes, and the xref:migrating-mon-from-controller-nodes[Ceph Mon migration procedure]
decommissions the stand-by Ceph Mgr instances.