Can not create ceph primary storage? #5741
You use 6789 as the port number — do the MONs actually run on that port? Newer monitors might only bind on 3300.
@wido I will run some tests with port 3300. Thanks.
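To see which port the monitors are actually listening on, a quick check might look like this (the MON address 10.29.44.1 is taken from the error message later in the thread; adjust it to your cluster):

```shell
# Probe the Ceph monitor on the v2 (3300) and v1 (6789) ports from the KVM host.
for port in 3300 6789; do
    if timeout 3 bash -c "</dev/tcp/10.29.44.1/${port}" 2>/dev/null; then
        echo "MON reachable on port ${port}"
    else
        echo "MON NOT reachable on port ${port}"
    fi
done

# On the MON host itself, confirm what ceph-mon binds to:
ss -tlnp | grep ceph-mon
```

This only tests TCP reachability, but it quickly rules out firewall or wrong-port problems before digging into CloudStack.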
@wido I have tested with port 3300, but it still does not work. The CloudStack client error message is `Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory` and the details are:
@xuanyuanaosheng |
@weizhouapache I ran the command on the KVM node:
Could you please give some advice?
@weizhouapache @wido Could you please give some advice?
@xuanyuanaosheng |
This seems like something outside CloudStack, double check:
I have checked all the config per your advice.
But I cannot find the storage pool d8dabcb0-1a57-4e13-8a82-339b2052dec1 in the CloudStack UI, and the storage pool UUID changes every time I click the add primary storage button again. After checking all the config, I restarted the management-server and cloudstack-agent services. The error is still the same:
Any ideas? Can you give some test scripts that verify the Ceph storage is OK from the KVM node?
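A minimal smoke test from the KVM host might look like the following. It assumes `/etc/ceph/ceph.conf` and a valid keyring are already deployed on the host, and uses the pool name `cloudstack` from the error messages in this thread; adjust names to your setup:

```shell
# 1. Is the cluster reachable and healthy from this host?
ceph -s

# 2. Does the target pool exist?
ceph osd pool ls | grep -w cloudstack

# 3. Can we write, read, and delete an object in the pool?
echo test > /tmp/ceph-probe
rados -p cloudstack put probe-object /tmp/ceph-probe
rados -p cloudstack get probe-object -
rados -p cloudstack rm probe-object

# 4. Can we create and delete an RBD image (what CloudStack/libvirt will do)?
rbd create cloudstack/probe-image --size 64M
rbd rm cloudstack/probe-image
```

If all four steps succeed, the host-to-cluster path is fine and the problem is on the CloudStack/libvirt side; if step 1 already fails, fix Ceph auth or networking first.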
In my opinion, it is not a CloudStack issue. Could you please try to add the Ceph storage via libvirt on the KVM nodes?
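For reference, defining an RBD pool directly with libvirt might look like this. The MON address, pool name, cephx user, and secret UUID below are placeholder assumptions; libvirt needs a secret holding the cephx key before the pool can authenticate:

```shell
# Register the cephx key as a libvirt secret (the UUID is an example value).
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>f81d4fae-7dec-11d0-a765-00a0c91e6bf6</uuid>
  <usage type='ceph'>
    <name>client.cloudstack secret</name>
  </usage>
</secret>
EOF
virsh secret-define secret.xml
virsh secret-set-value --secret f81d4fae-7dec-11d0-a765-00a0c91e6bf6 \
    --base64 "$(ceph auth get-key client.cloudstack)"

# Then create and start the RBD pool.
cat > pool.xml <<'EOF'
<pool type='rbd'>
  <name>cloudstack</name>
  <source>
    <host name='10.29.44.1' port='3300'/>
    <name>cloudstack</name>
    <auth type='ceph' username='cloudstack'>
      <secret uuid='f81d4fae-7dec-11d0-a765-00a0c91e6bf6'/>
    </auth>
  </source>
</pool>
EOF
virsh pool-create --file pool.xml
virsh pool-list
```

If `virsh pool-create` fails here with the same RADOS error, the problem is clearly between librados and the cluster, not in CloudStack.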
I see the error below when creating the storage pool in virt-manager.
I tried the methods you advised, but the error info is too sparse and I don't know how to fix it. I googled a lot, but failed. `# virsh pool-create --file pool.xml` The Ceph error is:
This seems like a client <> server issue with Ceph and does not have anything to do with CloudStack. `attempt to reclaim global_id 392850 without presenting ticket` suggests that you have something misconfigured in Ceph; there have been recent changes around that. Please refer to the Release Notes of Ceph. Once that is fixed it should also work in CloudStack.
@wido @weizhouapache: This is a problem in Ceph.
To fix this problem:
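For context, the `global_id` reclaim message matches a known Ceph behavior change: since 14.2.20 / 15.2.11 / 16.2.1, monitors reject clients that reconnect without presenting a valid ticket (CVE-2021-20288). A commonly used workaround, assuming not all clients can be upgraded immediately, is to temporarily relax the check on the monitors:

```shell
# Temporarily allow clients that reclaim their global_id insecurely
# (older ceph-common / librados clients). Re-tighten after upgrading them.
ceph config set mon auth_allow_insecure_global_id_reclaim true

# Verify the setting took effect.
ceph config get mon auth_allow_insecure_global_id_reclaim

# Once every client runs a patched version, restore the strict check:
# ceph config set mon auth_allow_insecure_global_id_reclaim false
```

Note the cluster will report a health warning while the insecure setting is enabled; the proper long-term fix is upgrading all clients.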
@wido @weizhouapache Thanks for your help. I will close this issue.
ISSUE TYPE
The doc I am following:
CLOUDSTACK VERSION
The OS is Red Hat Enterprise Linux release 8.3 (Ootpa)
The cloudstack management version: 4.15.2.0
The cloudstack agent version: cloudstack-agent-4.15.2.0-1.el8.x86_64
The ceph version: 15.2.13
The KVM host has ceph-common installed.
The ceph storage is
Using cloudstack web UI:
The cloudstack management error is:
The cloudstack client error is: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
The detail is:
Could you please give some advice?
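The `failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?` error usually means exactly what it asks: the pool is missing or the client cannot see it. A quick check, using the pool name `cloudstack` from the error message (the PG count below is an example value):

```shell
# List the pools visible to this client.
ceph osd pool ls

# If 'cloudstack' is missing, create it and initialize it for RBD use.
ceph osd pool create cloudstack 128
rbd pool init cloudstack
```

If the pool exists but the error persists, check that the cephx user CloudStack authenticates as has capabilities on that pool (`ceph auth get client.<name>`).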