[Doc] correct ceph migration steps
Update/correct a few Ceph RBD migration steps
katarimanojk committed Dec 12, 2024
1 parent f3818fe commit 43e9c56
Showing 4 changed files with 21 additions and 14 deletions.
@@ -53,10 +53,10 @@ $ sudo nft list ruleset | grep ceph_mgr
label to the target node:
+
----
-$ ceph orch host label add <target_node> mgr; done
+$ sudo cephadm shell -- ceph orch host label add <target_node> mgr
----

-. Repeat steps 1-3 for each target node that hosts a Ceph Manager daemon.
+. Repeat steps 1-7 for each target node that hosts a Ceph Manager daemon.

. Get the Ceph Manager spec:
+
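As an aside (not part of this commit), one way to confirm that the `mgr` label was actually applied to the target node is to list the orchestrator hosts and check the LABELS column; this is only a suggested check:

----
# List hosts known to the orchestrator and confirm the target node now shows the "mgr" label
$ sudo cephadm shell -- ceph orch host ls
----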
@@ -2,7 +2,7 @@

= Draining the source node

-Drain the existing Controller nodes and remove the source node host from the {CephCluster} cluster.
+Drain the source node and remove the source node host from the {CephCluster} cluster.

.Procedure

@@ -19,7 +19,7 @@ $ sudo cp -R /etc/ceph $HOME/ceph_client_backup
$ sudo cephadm shell -- ceph mgr stat
----

-. Fail the `ceph-mgr` if it is active on the source node or target node:
+. Fail the `ceph-mgr` if it is active on the source node:
+
----
$ sudo cephadm shell -- ceph mgr fail <mgr_instance>
@@ -31,28 +31,28 @@ $ sudo cephadm shell -- ceph mgr fail <mgr_instance>
+
----
$ for label in mon mgr _admin; do
-sudo cephadm shell -- ceph orch host rm label <source_node> $label;
+sudo cephadm shell -- ceph orch host label rm <source_node> $label;
done
----
+
* Replace `<source_node>` with the hostname of the source node.

-. Remove the running Ceph Monitor daemon from the source node:
+. (Optional) Ensure that you remove the Ceph Monitor daemon from the source node if it is still running:
+
----
-$ sudo cephadm shell -- ceph orch daemon rm mon.<source_node> --force"
+$ sudo cephadm shell -- ceph orch daemon rm mon.<source_node> --force
----

-. Drain the source node:
+. Drain the source node to remove any leftover daemons:
+
----
-$ sudo cephadm shell -- ceph drain <source_node>
+$ sudo cephadm shell -- ceph orch host drain <source_node>
----

. Remove the source node host from the {CephCluster} cluster:
+
----
-$ sudo cephadm shell -- ceph orch host rm <source_node> --force"
+$ sudo cephadm shell -- ceph orch host rm <source_node> --force
----
+
[NOTE]
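A minimal sketch of how the drain can be verified before the host is removed; these checks are an assumption on my part and not part of the patch, and the hostname is illustrative:

----
# List daemons still reported on the source node
$ sudo cephadm shell -- ceph orch ps controller-0
# When no daemons remain, the host can be removed and should disappear from the host list
$ sudo cephadm shell -- ceph orch host ls
----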
@@ -11,12 +11,19 @@ IP address migration assumes that the target nodes are originally deployed by
// w/ an EDPM node that has already been adopted.
.Procedure

-. Get the original Ceph Monitor IP address from the existing `/etc/ceph/ceph.conf` file on the `mon_host` line, for example:
+. Get the original Ceph Monitor IP addresses from `$HOME/ceph_client_backup/ceph.conf` file on the `mon_host` line, for example:
+
----
mon_host = [v2:172.17.3.60:3300/0,v1:172.17.3.60:6789/0] [v2:172.17.3.29:3300/0,v1:172.17.3.29:6789/0] [v2:172.17.3.53:3300/0,v1:172.17.3.53:6789/0]
----

+. Match the IP address retrieved in the previous step with the storage network IP addresses on the source node, and find the Ceph Monitor IP address:
+----
+[tripleo-admin@controller-0 ~]$ ip -o -4 a | grep 172.17.3
+9: vlan30 inet 172.17.3.60/24 brd 172.17.3.255 scope global vlan30\ valid_lft forever preferred_lft forever
+9: vlan30 inet 172.17.3.13/32 brd 172.17.3.255 scope global vlan30\ valid_lft forever preferred_lft forever
+----

. Confirm that the Ceph Monitor IP address is present in the `os-net-config` configuration that is located in the `/etc/os-net-config` directory on the source node:
+
----
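For example, one hedged way to perform that `os-net-config` check, assuming the monitor IP address found above is 172.17.3.60:

----
# Search the os-net-config files on the source node for the Ceph Monitor IP address
$ sudo grep -r 172.17.3.60 /etc/os-net-config/
----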
@@ -39,7 +39,7 @@ The Ceph Monitor daemons are marked as `unmanaged`, and you can now redeploy the
. Delete the existing Ceph Monitor on the target node:
+
----
-$ sudo cephadm shell -- ceph orch daemon add rm mon.<target_node> --force
+$ sudo cephadm shell -- ceph orch daemon rm mon.<target_node> --force
----
+
* Replace `<target_node>` with the hostname of the target node that is included in the {Ceph} cluster.
@@ -84,7 +84,7 @@ The new Ceph Monitor runs on the target node with the original IP address.
. Identify the running `mgr`:
+
----
-$ sudo cephadm shell -- mgr stat
+$ sudo cephadm shell -- ceph mgr stat
----
+
. Refresh the Ceph Manager information by force-failing it:
@@ -101,5 +101,5 @@ $ sudo cephadm shell -- ceph orch reconfig osd.default_drive_group

.Next steps

-Repeat the procedure for each node that you want to decommission.
+Repeat the procedure starting from step xref:draining-the-source-node_{context}[Draining the source node] for each node that you want to decommission.
Proceed to the next step xref:verifying-the-cluster-after-ceph-mon-migration_{context}[Verifying the {CephCluster} cluster after Ceph Monitor migration].
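As an illustrative follow-up (not part of this change), the monitor map and overall cluster state can be checked once the redeployment finishes:

----
# Confirm the monitor listed for the target node uses the original IP address
$ sudo cephadm shell -- ceph mon dump
# Confirm the cluster reports monitor quorum and an active manager
$ sudo cephadm shell -- ceph -s
----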
