I've managed to set up a Ceph cluster/CephFS using the provided docker-compose file on ARM devices. This repo gave me a head start. Thanks a lot for that!
The only problem I've encountered is that all the OSDs are listed as 10 GB each, which is not the case.
I have two 128 GB SSDs and one 500 GB HDD, and all of them are recognized by the cluster as 10 GB.
Running ceph df outputs:
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 30 GiB 25 GiB 2.4 GiB 5.4 GiB 18.16
TOTAL 30 GiB 25 GiB 2.4 GiB 5.4 GiB 18.16
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
cephfs_data 1 797 MiB 219 2.3 GiB 9.21 7.7 GiB
cephfs_metadata 2 31 MiB 30 93 MiB 0.39 7.7 GiB
Running rados df outputs:
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
cephfs_data 2.3 GiB 219 0 657 0 0 0 922 2.1 GiB 9674 3.0 GiB 0 B 0 B
cephfs_metadata 93 MiB 30 0 90 0 0 0 396 418 KiB 20839 44 MiB 0 B 0 B
total_objects 249
total_used 5.4 GiB
total_avail 25 GiB
total_space 30 GiB
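For completeness, a per-OSD breakdown should be visible with ceph osd df tree, run from wherever the ceph CLI is available (the container name below is just a placeholder for whatever the compose file calls the mon):
docker exec <mon-container> ceph osd df tree
Each OSD shows up there with a SIZE of roughly 10 GiB, matching the 30 GiB total above.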
The volume mounts for the physical disks on the containers are working fine. I can see the actual size of the hard disks from within the container, but somehow Ceph thinks they only have 10 GB available each.
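(For reference, this is the kind of check I mean; the device name is just an example and lsblk/blockdev need to be available in the image:
docker exec <osd-container> lsblk
docker exec <osd-container> blockdev --getsize64 /dev/sda
Both report the full 128 GB / 500 GB sizes.)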
I didn't create any partitions on the hard drives, so I expected them to be fully utilized by Ceph.
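One thing I haven't been able to rule out is whether BlueStore actually ended up on the raw block devices or on a file inside a mounted directory. If it helps, something like this should show what each OSD is backed by (OSD id 0 as an example; the exact path depends on how the container lays things out):
docker exec <osd-container> ceph osd metadata 0
docker exec <osd-container> ls -l /var/lib/ceph/osd/ceph-0/block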
I don't expect the problem to be related to the configuration provided in this repo, but maybe you know why such an issue occurs. My bet is that it has something to do either with the Ceph docker container or with the combination of that and the kernel that the ARM devices use. I've already spent a lot of time searching online and thought I'd also ask you in case you have encountered something similar in the past.
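In case it helps, the exact mounts and devices the OSD containers end up with can be dumped like this (container name is again a placeholder):
docker inspect -f '{{json .Mounts}}' <osd-container>
docker inspect -f '{{json .HostConfig.Devices}}' <osd-container>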
Cheers!
jahnestacado changed the title from "OSDs are not utilizing full disk space of hard disk" to "Question: OSDs are not utilizing full disk space of hard disk" on Aug 6, 2019.