Use bdev_rbd_register_cluster #28

Closed
PepperJo opened this issue Jun 30, 2022 · 4 comments

Use bdev_rbd_register_cluster to allow configuring multiple Ceph clusters, or multiple I/O contexts to a single Ceph cluster.
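
For reference, a minimal sketch of what that pairing could look like over SPDK's JSON-RPC socket: register one Rados cluster connection, then create several RBD bdevs that reference it through cluster_name. The socket path, Ceph user, pool/image names and parameter spellings are assumptions to be checked against the SPDK version in use, not details taken from this issue.

    #!/usr/bin/env python3
    # Sketch only: pair bdev_rbd_register_cluster with bdev_rbd_create over
    # SPDK's JSON-RPC Unix socket so several RBD bdevs share one registered
    # Rados cluster connection. Paths, names and parameter spellings below are
    # assumptions -- verify them against the SPDK version actually deployed.
    import json
    import socket

    SPDK_SOCK = "/var/tmp/spdk.sock"  # default SPDK RPC socket path (assumption)


    def spdk_rpc(sock, method, params, req_id):
        """Send one JSON-RPC request over the SPDK socket and read one response."""
        req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        sock.sendall(json.dumps(req).encode())
        buf = b""
        decoder = json.JSONDecoder()
        while True:
            buf += sock.recv(4096)
            try:
                resp, _ = decoder.raw_decode(buf.decode())
                return resp
            except ValueError:
                continue  # full JSON response not received yet


    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SPDK_SOCK)

        # Register one shared Rados cluster connection first ...
        spdk_rpc(s, "bdev_rbd_register_cluster",
                 {"name": "cluster_ctx_0",
                  "config_file": "/etc/ceph/ceph.conf",  # assumed config path
                  "user_id": "admin"},                   # assumed Ceph user
                 req_id=1)

        # ... then create several RBD bdevs that all reference it via
        # cluster_name, instead of each bdev opening its own cluster connection.
        for i in range(4):
            spdk_rpc(s, "bdev_rbd_create",
                     {"name": f"rbd_bdev_{i}",
                      "pool_name": "rbd",           # assumed pool name
                      "rbd_name": f"image_{i}",     # assumed image names
                      "block_size": 4096,
                      "cluster_name": "cluster_ctx_0"},
                     req_id=2 + i)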

rahullepakshi (Contributor) commented Aug 22, 2023

@PepperJo
To address this issue, I believe we should build the Ceph bdev RPC flow as bdev_rbd_register_cluster followed by bdev_rbd_create.

In that case, how do we define the number of RBD bdevs that share the same Rados cluster connection in the librbd module?

Say I have 256 images: I create a Rados cluster object with bdev_rbd_register_cluster and now need to create 256 bdevs. I feel we should come up with a design specifying how many bdevs should share this Rados cluster object, or whether I need another Rados cluster object for the next set of bdevs, taking memory consumption, performance optimization, and any other relevant factors into account. We should also decide how flexible to keep this implementation (one possible grouping policy is sketched after the log excerpt below).

Plus I also get the following in the GW server logs:

[2023-08-22 06:45:23.353164] subsystem.c:1201:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.210.235/5001, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
ERROR:control.server:spdk_get_version failed with:
 Timeout while waiting for response:



INFO:control.server:Terminating SPDK...
INFO:control.server:Stopping the server...
INFO:control.server:Exiting the gateway process.
WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster.
(the warning above is repeated several times, once per bdev_rbd_create call)
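
One possible grouping policy for the question above, as a rough sketch rather than a settled design: register a new cluster context every bdevs_per_cluster bdevs, so that 256 images with bdevs_per_cluster=32 would share 8 cluster connections instead of opening 256. The spdk_rpc helper and all names come from the hypothetical sketch earlier in this thread.

    # Sketch only (not the project's actual design): open a new Rados cluster
    # context for every `bdevs_per_cluster` bdevs and let that group of bdevs
    # share it. `spdk_rpc` is the hypothetical helper sketched above.

    def create_bdevs(sock, images, bdevs_per_cluster=32):
        """images: list of (pool_name, rbd_name) tuples."""
        req_id = 1
        for i, (pool, image) in enumerate(images):
            cluster = f"cluster_ctx_{i // bdevs_per_cluster}"
            if i % bdevs_per_cluster == 0:
                # First bdev of this group: register its shared cluster context.
                spdk_rpc(sock, "bdev_rbd_register_cluster",
                         {"name": cluster, "config_file": "/etc/ceph/ceph.conf"},
                         req_id)
                req_id += 1
            spdk_rpc(sock, "bdev_rbd_create",
                     {"name": f"bdev_{pool}_{image}",
                      "pool_name": pool,
                      "rbd_name": image,
                      "block_size": 4096,
                      "cluster_name": cluster},
                     req_id)
            req_id += 1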

caroav (Collaborator) commented Aug 30, 2023

@rahullepakshi you are right. We plan to use multiple cluster contexts. We need to decide how many cluster contexts we create (it might be a function of the number of bdevs and/or the number of initiators connecting). One way or another, it should be configurable so it can be easily tuned.

sdpeters (Contributor) commented

> @rahullepakshi you are right. We plan to use multiple cluster contexts. We need to decide how many cluster contexts we create (it might be a function of the number of bdevs and/or the number of initiators connecting). One way or another, it should be configurable so it can be easily tuned.

Since some failures affect all the namespaces sharing a cluster context, some users may want the ability to specify which namespaces share each cluster context. I'm sure other users will consider that unnecessary complexity, and want this determined automatically.
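
As a rough illustration of that split, with hypothetical names only: honor an explicit namespace-to-context mapping when the user supplies one (to control the failure domain), and fall back to automatic assignment otherwise.

    # Illustrative sketch of the policy split above; not a proposed API.

    def pick_cluster_context(namespace, explicit_map, num_contexts, counter):
        """Return the cluster context name a namespace's bdev should use.

        explicit_map: optional {namespace: context_name} provided by the user.
        counter: running count of automatically assigned namespaces.
        """
        if explicit_map and namespace in explicit_map:
            return explicit_map[namespace]                  # user-pinned placement
        return f"cluster_ctx_{counter % num_contexts}"      # automatic placement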

caroav moved this from 🆕 New to 🏗 In progress in NVMe-oF Sep 5, 2023
baum pushed commits to baum/ceph-nvmeof that referenced this issue between Sep 11 and Sep 18, 2023 (Signed-off-by: Alexander Indenbaum <[email protected]>)
baum (Collaborator) commented Sep 19, 2023

Merged in #230

baum closed this as completed Sep 19, 2023
github-project-automation bot moved this from 🏗 In progress to ✅ Done in NVMe-oF Sep 19, 2023