
cephfs: save space in the cluster by avoiding roundoff #2471

Closed
wants to merge 1 commit into from

Conversation

@humblec humblec (Collaborator) commented Sep 1, 2021

At present the CephFS controller rounds off (via util.RoundOffBytes) the
size passed in from the provisioner; this can waste space in the cluster.
The CephFS filesystem is capable of creating a volume of the exact size
passed in, so there is no need to round off before we reach out to the
cluster:

```
$ ceph fs subvolume create <vol_name> <subvol_name> [--size <size_in_bytes>]...
```
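
To make the waste concrete, here is a minimal, illustrative Go sketch. The rounding rule is assumed from the behaviour this PR describes (requests are rounded up to the next MiB, or to the next GiB at 1 GiB and above); it is a re-implementation for illustration, not the ceph-csi source:

```go
package main

import "fmt"

const (
	mib = int64(1024 * 1024)
	gib = 1024 * mib
)

// roundOffBytes mimics the round-off behaviour described above:
// requests of at least 1 GiB are rounded up to the next whole GiB,
// smaller requests to the next whole MiB. Illustrative only.
func roundOffBytes(size int64) int64 {
	if size >= gib {
		return ((size + gib - 1) / gib) * gib
	}
	return ((size + mib - 1) / mib) * mib
}

func main() {
	requested := gib + gib/10 // roughly a 1.1 GiB request
	rounded := roundOffBytes(requested)
	fmt.Printf("requested %d bytes, provisioned %d bytes, wasted %d bytes\n",
		requested, rounded, rounded-requested)
	// A ~1.1 GiB request is provisioned as 2 GiB: roughly 0.9 GiB of
	// cluster space is allocated that was never asked for.
}
```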

Clone scenario:

This shouldn't be an issue, as CephFS keeps the parent's size while the
clone is performed. We also have an extra safeguard at clone time: if the
backend does not end up with the requested size, we resize to make sure
the requested size is achieved. That case shouldn't be hit with this
patch, though. (A sketch of this safeguard follows the description.)

Signed-off-by: Humble Chirammal [email protected]
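
The clone-time safeguard mentioned above, sketched minimally in Go; ensureCloneSize, getSize, and resize are hypothetical stand-ins for whatever the driver actually calls, not the ceph-csi API:

```go
package cephfs

// ensureCloneSize sketches the safeguard described above: after a
// clone, verify the subvolume matches the requested size and grow it
// if the backend came up short. The callbacks are placeholders.
func ensureCloneSize(requested int64,
	getSize func() (int64, error),
	resize func(int64) error) error {
	actual, err := getSize()
	if err != nil {
		return err
	}
	if actual < requested {
		// The backend is smaller than requested: resize to match.
		return resize(requested)
	}
	return nil
}
```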

@humblec humblec marked this pull request as draft September 1, 2021 07:12
@mergify mergify bot added the component/cephfs label (Issues related to CephFS) Sep 1, 2021
@humblec humblec marked this pull request as ready for review September 1, 2021 09:00
@humblec humblec requested review from Madhu-1 and nixpanic September 1, 2021 09:00
@Madhu-1 Madhu-1 (Collaborator) left a comment


Can you please add an E2E test to validate sizes like 1.1Mi or 1.2Gi, etc.?

The same changes are required for CephFS resizing and for RBD CreateVolume and resize as well; those could be separate PRs.
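
For illustration, a minimal sketch of the comparison such an E2E could make, assuming the test can fetch the backing subvolume's size from the cluster; checkExactSize and its parameters are hypothetical, not existing test helpers:

```go
package e2e

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// checkExactSize sketches the assertion such an E2E could make: a PVC
// requesting an odd size such as "1.1Gi" should be backed by a
// subvolume of exactly that many bytes, with no round-off applied.
// backendBytes would be fetched from the Ceph cluster by the real test.
func checkExactSize(pvcSize string, backendBytes int64) error {
	want := resource.MustParse(pvcSize).Value()
	if backendBytes != want {
		return fmt.Errorf("expected exactly %d bytes for %q, got %d",
			want, pvcSize, backendBytes)
	}
	return nil
}
```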

```diff
@@ -185,7 +185,12 @@ func (cs *ControllerServer) CreateVolume(
 	defer volOptions.Destroy()
 
 	if req.GetCapacityRange() != nil {
-		volOptions.Size = util.RoundOffBytes(req.GetCapacityRange().GetRequiredBytes())
+		// we were doing round off ( by util.RoundOffBytes) to the size passed
+		// in from the provisioner before ceph csi v3.4.0, however this can
```
Collaborator

ceph csi should be Ceph CSI. Also, after this fix, anyone using an older version of external-provisioner will face the same issue as before. Are we documenting which provisioner versions no longer round off?

Can you please open an issue with more details about this, like kubernetes-csi/external-provisioner#199?

@humblec humblec (Collaborator, Author) Sep 1, 2021

@Madhu-1 I would rather see this change this way: regardless of the round-off or adjustments the external provisioner does, from the next version of Ceph CSI onwards we try to create a volume of exactly the size requested in the sidecar's request. On the return path, any round-off or adjustments done by the sidecar, console, or some other client are again left to them. In short, the CephFS driver acts like a dumb pipe and serves what has been requested, with no internal manipulation in the driver code in between. That seems like the right approach to me. WDYT?
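
Taken together with the hunk above, the resulting sizing path would look roughly like the following sketch. The hunk is truncated, so the exact new code is not shown here; volumeSize and its defaultSize fallback are illustrative, not the patch itself:

```go
package cephfs

import "github.com/container-storage-interface/spec/lib/go/csi"

// volumeSize sketches the post-change sizing: instead of
// util.RoundOffBytes(req.GetCapacityRange().GetRequiredBytes()),
// the driver passes the requested bytes through unchanged, so CephFS
// creates a subvolume of exactly the size the provisioner asked for.
func volumeSize(req *csi.CreateVolumeRequest, defaultSize int64) int64 {
	if req.GetCapacityRange() != nil {
		return req.GetCapacityRange().GetRequiredBytes()
	}
	return defaultSize
}
```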

@github-actions

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Oct 14, 2021
@github-actions

This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.

@github-actions github-actions bot closed this Oct 28, 2021
@humblec humblec reopened this Oct 29, 2021
@github-actions

This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.

@github-actions github-actions bot closed this Nov 12, 2021
@Rakshith-R Rakshith-R added the keepalive label (This label can be used to disable stale bot activity in the repo) and removed the stale label Nov 25, 2021
@Rakshith-R Rakshith-R reopened this Nov 25, 2021
@Rakshith-R Rakshith-R added this to the release-3.5.0 milestone Nov 25, 2021
@humblec humblec (Collaborator, Author) commented Jan 4, 2022

Removing the milestone tracker from this issue.

@humblec humblec removed this from the release-3.5.0 milestone Jan 4, 2022
@mergify mergify bot (Contributor) commented Jul 13, 2022

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

2 similar comments
@mergify mergify bot (Contributor) commented Sep 29, 2022

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

@mergify mergify bot (Contributor) commented Mar 22, 2023

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

@Madhu-1 Madhu-1 (Collaborator) commented Apr 28, 2023

Hello there! There hasn't been any activity on this PR for quite some time, so I have removed the "keepalive" label. To avoid it being closed as stale shortly, please revisit and give it some attention. Thank you!

@Madhu-1 Madhu-1 removed the keepalive label (This label can be used to disable stale bot activity in the repo) Apr 28, 2023
@github-actions

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label May 28, 2023
@github-actions

This pull request has been automatically closed due to inactivity. Please re-open if these changes are still required.

@github-actions github-actions bot closed this Jun 11, 2023
Labels
component/cephfs (Issues related to CephFS), stale