Merge pull request #461 from fmount/ceph_doc
Rework Ceph RBD migration documentation
Showing 10 changed files with 612 additions and 63 deletions.
@@ -0,0 +1,44 @@
ifdef::context[:parent-context: {context}]

[id="ceph-migration_{context}"]

= Migrating the {CephCluster} Cluster

:context: migrating-ceph

:toc: left
:toclevels: 3

ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]

In the context of data plane adoption, where the {rhos_prev_long}
({OpenStackShort}) services are redeployed in {OpenShift}, you migrate a
{OpenStackPreviousInstaller}-deployed {CephCluster} cluster by using a process
called “externalizing” the {CephCluster} cluster.

There are two deployment topologies that include an internal {CephCluster}
cluster:

* {OpenStackShort} includes dedicated {CephCluster} nodes to host object
storage daemons (OSDs)

* Hyperconverged Infrastructure (HCI), where Compute and Storage services are
colocated on hyperconverged nodes

In either scenario, there are some {Ceph} processes that are deployed on
{OpenStackShort} Controller nodes: {Ceph} monitors, Ceph Object Gateway (RGW),
Rados Block Device (RBD), Ceph Metadata Server (MDS), Ceph Dashboard, and NFS
Ganesha. To migrate your {CephCluster} cluster, you must decommission the
Controller nodes and move the {Ceph} daemons to a set of target nodes that are
already part of the {CephCluster} cluster.

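Before you plan the migration, you can check which of these daemons currently run on the Controller nodes. The following is a minimal check that assumes `cephadm` can reach the cluster and that the Controller hostnames contain the string `controller`:

----
sudo cephadm shell -- ceph orch ps | grep -i controller
----
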
include::../modules/con_ceph-daemon-cardinality.adoc[leveloffset=+1]

include::assembly_migrating-ceph-monitoring-stack.adoc[leveloffset=+1]

include::../modules/proc_migrating-ceph-mds.adoc[leveloffset=+1]

include::assembly_migrating-ceph-rgw.adoc[leveloffset=+1]

include::assembly_migrating-ceph-rbd.adoc[leveloffset=+1]
docs_user/modules/proc_migrating-mgr-from-controller-nodes.adoc
95 changes: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
= Migrating Ceph Manager daemons to {Ceph} nodes

The following section describes how to move Ceph Manager daemons from the
{rhos_prev_long} Controller nodes to a set of target nodes. Target nodes might
be pre-existing {Ceph} nodes, or {OpenStackShort} Compute nodes if {Ceph} is
deployed by {OpenStackPreviousInstaller} with an HCI topology.

This procedure assumes that Cephadm and the {Ceph} Orchestrator are the tools
that drive the Ceph Manager migration. As with the other Ceph daemons
(MDS, Monitoring, and RGW), the procedure uses the Ceph spec to modify the
placement and reschedule the daemons. Ceph Manager runs in an active/passive
fashion, and it also provides many modules, including the Ceph orchestrator.

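Before you begin, you can optionally confirm that cephadm is the active orchestrator backend and that it responds to commands, for example:

----
sudo cephadm shell -- ceph orch status
----
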
.Prerequisites

* Configure the target nodes (CephStorage or ComputeHCI) to have both `storage`
and `storage_mgmt` networks to ensure that you can use both {Ceph} public and
cluster networks from the same node. This step requires you to interact with
{OpenStackPreviousInstaller}. From {rhos_prev_long} {rhos_prev_ver} and later,
you do not have to run a stack update.

.Procedure

. SSH into the target node and enable the firewall rules that are required to
reach a Manager service:
+
----
dports="6800:7300"
ssh heat-admin@<target_node> sudo iptables -I INPUT \
    -p tcp --match multiport --dports $dports -j ACCEPT;
----
+
Repeat this step for each `<target_node>`.

. Check that the rules are properly applied and persist them:
+
----
$ sudo iptables-save
$ sudo systemctl restart iptables
----
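+
If you want to check for the specific rule before you persist the configuration, a quick filter such as the following usually works; the exact rule text can differ between deployments:
+
----
$ sudo iptables -S INPUT | grep "6800:7300"
----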
. Prepare the target node to host the new Ceph Manager daemon, and add the `mgr`
label to the target node:
+
----
ceph orch host label add <target_node> mgr
----
+
Replace `<target_node>` with the hostname of a host listed in the {CephCluster}
cluster. You can retrieve the hostnames with the `ceph orch host ls` command.
+
Repeat this step for each `<target_node>` that will host a
Ceph Manager daemon.
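+
To confirm that the label was applied to every target node, list the hosts that the orchestrator knows about; the `LABELS` column in the output should include `mgr` for each target node:
+
----
sudo cephadm shell -- ceph orch host ls
----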
. Get the Ceph Manager spec:
+
----
sudo cephadm shell -- ceph orch ls --export mgr > mgr.yaml
----
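+
The exported spec typically resembles the following example. The `placement` section reflects where the Manager daemons currently run, so the host entries shown here are only placeholders:
+
[source,yaml]
----
service_type: mgr
service_id: mgr
placement:
  hosts:
  - <controller_node_1>
  - <controller_node_2>
  - <controller_node_3>
----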
. Edit the retrieved spec and add the `label: mgr` section to the `placement`
section:
+
[source,yaml]
----
service_type: mgr
service_id: mgr
placement:
  label: mgr
----

. Save the spec in the `/tmp/mgr.yaml` file.
. Apply the spec with cephadm by using the orchestrator:
+
----
sudo cephadm shell -m /tmp/mgr.yaml -- ceph orch apply -i /mnt/mgr.yaml
----
+
As a result of this procedure, you see a Ceph Manager daemon count that matches
the number of hosts where the `mgr` label is added.
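+
You can also ask the orchestrator for a summary of the `mgr` service, which reports the updated placement and the number of running daemons versus the expected count:
+
----
sudo cephadm shell -- ceph orch ls mgr
----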
. Verify that the new Ceph Manager daemons are created on the target nodes:
+
----
ceph orch ps | grep -i mgr
ceph -s
----
+
[NOTE]
The procedure does not shrink the Ceph Manager daemons. The count grows by
the number of target nodes, and migrating the Ceph Monitor daemons to {Ceph} nodes
decommissions the stand-by Ceph Manager instances. For more information, see
xref:migrating-mon-from-controller-nodes_migrating-ceph-rbd[Migrating Ceph Monitor
daemons to {Ceph} nodes].
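+
If you want to see which Ceph Manager instance is currently active before the stand-by instances are decommissioned, `ceph mgr stat` reports the active Manager and its availability:
+
----
sudo cephadm shell -- ceph mgr stat
----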