cephfs: save space in the cluster by avoiding roundoff #2471
Conversation
Can you please add an E2E test to validate sizes like 1.1Mi or 1.2Gi?
The same change is required for CephFS resize and for RBD CreateVolume and resize as well; those could be separate PRs.
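Not part of the PR, but as a rough illustration of why exact sizes matter for such a test: a minimal sketch, assuming a hypothetical roundUpToGiB helper that rounds up to the next whole GiB. The helper mimics the up-rounding behaviour being removed; it is not the driver's actual util.RoundOffBytes.

```go
package main

import "fmt"

const giB = int64(1 << 30)

// roundUpToGiB is a hypothetical stand-in for an up-rounding helper: it
// rounds a byte count up to the next whole GiB.
func roundUpToGiB(bytes int64) int64 {
	return ((bytes + giB - 1) / giB) * giB
}

func main() {
	// A 1.1Gi PVC request expressed in bytes.
	requested := int64(1.1 * float64(1<<30))

	rounded := roundUpToGiB(requested)
	fmt.Printf("requested=%d rounded=%d wasted=%d bytes\n",
		requested, rounded, rounded-requested)

	// With the round-off removed, an E2E check on the subvolume size
	// should see exactly the requested byte count, not the rounded one.
}
```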
internal/cephfs/controllerserver.go
Outdated
@@ -185,7 +185,12 @@ func (cs *ControllerServer) CreateVolume(
	defer volOptions.Destroy()

	if req.GetCapacityRange() != nil {
		volOptions.Size = util.RoundOffBytes(req.GetCapacityRange().GetRequiredBytes())
		// we were doing round off ( by util.RoundOffBytes) to the size passed
		// in from the provisioner before ceph csi v3.4.0, however this can
`ceph csi` should be `Ceph CSI`.
Also, after this fix, if anyone uses an older version of external-provisioner they will face the same issue as before. Are we documenting which provisioner version no longer rounds off?
Can you please open an issue with more details about this, like kubernetes-csi/external-provisioner#199?
@Madhu-1 I would rather see this change this way: regardless of the round-off or adjustments the external provisioner does, from the next version of Ceph CSI onwards we try to create a volume of exactly the size requested by the sidecar. On the return path, any round-off or adjustment done by the sidecar, console, or some other client is again left to them. In short, the CephFS driver here acts dumb and serves what has been requested, with no internal manipulation in the driver code in between. That seems like the right approach to me. WDYT?
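A minimal sketch of this "serve exactly what was requested" idea, paraphrasing the diff above rather than reproducing the driver's actual code; the 1 GiB fallback for a nil capacity range is an assumption of the sketch:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// requestedSize returns the exact byte count from the CSI request instead of
// rounding it off. The fallback for a nil capacity range is an assumption,
// not necessarily what the driver defaults to.
func requestedSize(capRange *csi.CapacityRange) int64 {
	if capRange == nil {
		return 1 << 30 // assumed 1 GiB default
	}
	// No util.RoundOffBytes here: serve exactly what was requested.
	return capRange.GetRequiredBytes()
}

func main() {
	req := &csi.CapacityRange{RequiredBytes: 1181116006} // roughly 1.1 GiB
	fmt.Println(requestedSize(req))
}
```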
Force-pushed from 852df7b to 9238207
The CephFS controller currently rounds off (via util.RoundOffBytes) the size passed in from the provisioner; however, this can waste space in the cluster. The CephFS filesystem is capable of creating a volume with the exact size passed in, so there is no need to round off before we reach out to the cluster:

```
$ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>]...
```

Clone scenario: this shouldn't be an issue, as CephFS keeps the parent size while the clone is performed. We also have an extra safeguard at clone time: the requested size is achieved via resize if the backend does not have the requested size. This shouldn't be a case we fall into with this patch, though.

Signed-off-by: Humble Chirammal <[email protected]>
Force-pushed from 9238207 to 8855f81
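For context on the commit message above, a rough sketch of asking the backend for an exact byte count (no rounding) using go-ceph's CephFS admin API; the connection boilerplate and the names "myfs", "csi", and "demo-subvol" are placeholders (the group is assumed to already exist), and the API shape should be checked against the go-ceph version in use:

```go
package main

import (
	"fmt"

	fsadmin "github.com/ceph/go-ceph/cephfs/admin"
	"github.com/ceph/go-ceph/rados"
)

func main() {
	// Connect using the default ceph.conf and keyring on this host.
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	// Ask for an exact byte count (~1.1 GiB) with no rounding; the
	// filesystem, group and subvolume names are placeholders.
	fsa := fsadmin.NewFromConn(conn)
	size := fsadmin.ByteCount(1181116006)
	err = fsa.CreateSubVolume("myfs", "csi", "demo-subvol",
		&fsadmin.SubVolumeOptions{Size: size})
	if err != nil {
		panic(err)
	}
	fmt.Println("created subvolume with exact size", size)
}
```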
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.
This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.
This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.
Removing the milestone tracker from this issue.
This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏
2 similar comments
This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏
This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏
Hello there! There hasn't been any activity on this PR for quite some time, so I have removed the "keepalive" label. To avoid it being closed as stale shortly, please revisit and give it some attention. Thank you!
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.
This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.
The CephFS controller currently rounds off (via util.RoundOffBytes) the size
passed in from the provisioner; however, this can waste space in the
cluster. The CephFS filesystem is capable of creating a volume with the
exact size passed in, so there is no need to round off before we reach
out to the cluster.

Clone scenario:
This shouldn't be an issue, as CephFS keeps the parent size while the
clone is performed. We also have an extra safeguard at clone time: the
requested size is achieved via resize if the backend does not have the
requested size. This shouldn't be a case we fall into with this patch,
though.
Signed-off-by: Humble Chirammal [email protected]
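As an aside on the clone safeguard mentioned in the commit message, a minimal sketch of the idea; both the function and its resize callback are hypothetical and do not reproduce the driver's actual resize path:

```go
package main

import "fmt"

// ensureSize sketches the safeguard described above: if a cloned subvolume
// ended up smaller than requested, grow it to the requested size.
func ensureSize(current, requested int64, resize func(int64) error) error {
	if current >= requested {
		return nil // clone already at (or above) the requested size
	}
	return resize(requested)
}

func main() {
	err := ensureSize(1<<30, 1181116006, func(to int64) error {
		fmt.Println("resizing clone to", to, "bytes")
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```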