Restoring from a snapshot to a larger size #485
/kind bug
Another open issue: should we apply the same logic to clone?
cc @j-griffith
@Jiawei0227 will help take a look at implementing this
The issue @jsafrane created in the k/k repo has a summary of the discussion so far: kubernetes/kubernetes#94929
The text for the capacity field in the CreateVolumeRequest message in the spec says: "// This field is OPTIONAL. This allows the CO to specify the capacity …". The current implication is that any expansion is magically taken care of inside the CreateVolume call. We definitely need spec language clarifying that, while the new size must be honored as far as the controller's notion of the volume size is concerned, the CO may need to do the NodeExpand portion of the work to bring the node's notion of the volume size into sync.
There is currently a discussion here: container-storage-interface/spec#452
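To make the gap concrete, here is a minimal, hypothetical sketch in Go (using the bindings from github.com/container-storage-interface/spec/lib/go/csi) of the decision a driver's CreateVolume has to make when the requested capacity_range is larger than the source snapshot. restoreSizeBytes and the sizes in main are illustrative and not taken from any real driver: the block device can be created at the larger requested size, but the filesystem captured in the snapshot stays at its original size until a node-side resize happens.

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// restoreSizeBytes decides how big the new block device should be when a
// volume is provisioned from a snapshot, and reports whether the filesystem
// captured in the snapshot will still need a node-side resize afterwards.
// Illustrative sketch only; not code from any real driver.
func restoreSizeBytes(req *csi.CreateVolumeRequest, snapshotSizeBytes int64) (diskBytes int64, needsNodeResize bool, err error) {
	requested := req.GetCapacityRange().GetRequiredBytes()
	if requested < snapshotSizeBytes {
		return 0, false, fmt.Errorf("requested size %d is smaller than source snapshot size %d", requested, snapshotSizeBytes)
	}
	// The disk is created at the requested (larger) size, but the filesystem
	// restored from the snapshot is still snapshotSizeBytes; only a node-side
	// resize (NodeStage/NodeExpandVolume) brings the two back in sync.
	return requested, requested > snapshotSizeBytes, nil
}

func main() {
	req := &csi.CreateVolumeRequest{
		Name:          "restored-pvc",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 20 << 30}, // 20 GiB requested
	}
	size, resize, _ := restoreSizeBytes(req, 10<<30) // snapshot was taken from a 10 GiB volume
	fmt.Println(size, resize)                        // 21474836480 true
}
```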
/remove-lifecycle stale
I am grooming my backlog and found this one. It seems that at the last CSI meeting we decided to let each CSI driver handle this: the driver should perform the expansion itself. So I guess there is no more work involved in the csi-provisioner for now?
Yes, the CSI driver needs to handle this. @jsafrane has updated the CSI spec PR: container-storage-interface/spec#452
@msau42 @xing-yang @jsafrane afaict, this is already handled in the provisioner code, so I am closing this issue. Please reopen if required.
@humblec Could you please point to where in the provisioner code this is handled?
@amacaskill The conclusion of this bug is that each CSI driver needs to change its NodeStage/NodePublish implementation to resize the filesystem if possible. See https://github.com/container-storage-interface/spec/pull/452/files for the spec clarification and https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/595/files for an example. Resize utilities have been exported in https://github.com/kubernetes/kubernetes/pull/99088/files so that CSI drivers can share similar logic. For the PD CSI driver it looks like this was added in kubernetes-sigs/gcp-compute-persistent-disk-csi-driver#972
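For reference, a minimal sketch of what that node-side resize can look like, assuming the ResizeFs helpers (NewResizeFs, NeedResize, Resize) exported in k8s.io/mount-utils; the function name, paths, and error handling here are illustrative, not taken from any particular driver:

```go
package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// expandIfNeeded is an illustrative sketch of the check a CSI node plugin can
// run at the end of NodeStageVolume: if the filesystem restored from a
// snapshot is smaller than the underlying block device, grow it in place.
// devicePath and deviceMountPath would come from the NodeStageVolume request.
func expandIfNeeded(devicePath, deviceMountPath string) error {
	resizer := mount.NewResizeFs(utilexec.New())

	need, err := resizer.NeedResize(devicePath, deviceMountPath)
	if err != nil {
		return fmt.Errorf("checking whether %s needs resize: %v", devicePath, err)
	}
	if !need {
		return nil
	}
	if _, err := resizer.Resize(devicePath, deviceMountPath); err != nil {
		return fmt.Errorf("resizing %s: %v", devicePath, err)
	}
	return nil
}

func main() {
	// Example invocation; on a real node these paths come from the CO.
	if err := expandIfNeeded("/dev/sdb", "/var/lib/kubelet/plugins/example/staging"); err != nil {
		fmt.Println(err)
	}
}
```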
@amacaskill In addition to what @msau42 mentioned above, this check makes sure the controller only allows a requested capacity >= the snapshot restore size: https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L1134
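Roughly, the linked check rejects a request whose capacity is smaller than the restore size reported by the snapshot. A simplified sketch of that validation follows (not the actual external-provisioner code; the function name and example sizes are illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// checkCapacityAgainstSnapshot is a simplified stand-in for the provisioner
// check linked above: the requested PVC capacity must be at least the
// restoreSize recorded on the VolumeSnapshot, otherwise provisioning fails.
func checkCapacityAgainstSnapshot(requestedBytes int64, restoreSize *resource.Quantity) error {
	if restoreSize == nil {
		// Some drivers do not report a restore size; nothing to validate.
		return nil
	}
	if requestedBytes < restoreSize.Value() {
		return fmt.Errorf("requested volume size %d is less than the size %d of the source snapshot",
			requestedBytes, restoreSize.Value())
	}
	return nil
}

func main() {
	restore := resource.MustParse("10Gi")
	// Restoring a 10Gi snapshot into a 20Gi PVC is allowed; the reverse is not.
	fmt.Println(checkCapacityAgainstSnapshot(20<<30, &restore)) // <nil>
	fmt.Println(checkCapacityAgainstSnapshot(5<<30, &restore))  // error
}
```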
Today, what happens with many block-based plugins is that the disk is created at the larger size, but the filesystem (from the snapshot) is still at the smaller size. Filesystem expansion won't be triggered because the PV is created at the requested PVC size.
We discussed this at the CSI standup last week and agreed on what the behavior should be:
This bug can track the following aspects:
Updating individual CSI drivers to implement the proper behavior can be tracked separately.