
resize failed and can't recover due to pvc rejection "Forbidden: field can not be less than previous value " #132

Closed · hoyho opened this issue Dec 16, 2020 · 10 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)


hoyho commented Dec 16, 2020

How to reproduce:

  1. Create a PVC with size 1Gi.

  2. Resize it to 10Pi: `kubectl edit pvc xxx` and update spec.resources.requests.storage to 10Pi.

  3. Assume the new requested size of 10Pi is too large; the CSI driver and the backend storage refuse the update and return an error to the ControllerExpandVolume call.

  4. The PVC status changes to Resizing.

  5. Realizing that 10Pi is too large for the storage backend, re-edit (`kubectl edit pvc xxx`) and set a lower value, for example 1T.

  6. The edit is rejected: spec.resources.requests.storage: Forbidden: field can not be less than previous value

That means we can never update the size again: the CSI driver (storage backend) will only accept a smaller size, but the PVC doesn't allow resizing to a capacity less than the previous value.
Since external-resizer has received a failed RPC response, can it do something to recover the PVC, for example reset the size?
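For concreteness, the failing sequence can be reproduced from the command line roughly as follows. This is a sketch: the PVC name `data-pvc` and storage class `csi-sc` are placeholders, and the class must point at a CSI driver with allowVolumeExpansion enabled.

```shell
# 1. Create a 1Gi PVC (names are placeholders).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-sc
  resources:
    requests:
      storage: 1Gi
EOF

# 2. Request an oversized expansion. The API server accepts it because
#    validation only requires the requested quantity to grow.
kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"10Pi"}}}}'

# 3-4. The backend rejects ControllerExpandVolume; the PVC is stuck in Resizing.
kubectl describe pvc data-pvc

# 5-6. Lowering the request again is rejected by API validation:
kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"1T"}}}}'
# Error: spec.resources.requests.storage: Forbidden: field can not be less than previous value
```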

@ripulpatel commented

cc: @gnufied
I think there should be a way to recover a PVC after any kind of unintended failure or bug in the storage sub-system; otherwise the PVC will be stuck in the resizing state forever.

@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 28, 2021

hoyho commented Mar 29, 2021

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Mar 29, 2021
@stoneshi-yunify commented

Any plan to fix this issue?


gnufied commented Jun 21, 2021

yes - kubernetes/enhancements#1790
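For readers landing here later: that enhancement ("Recover from volume expansion failure", KEP-1790) introduces the `RecoverVolumeExpansionFailure` feature gate. With the gate enabled (availability depends on your Kubernetes version), a pending expansion can be reduced again, as long as the new request is still no smaller than the size the volume last actually reached (`.status.capacity`). A minimal sketch, assuming the gate is on and the PVC from the example above:

```shell
# With RecoverVolumeExpansionFailure enabled on the kube-apiserver (and a
# recent external-resizer), shrinking a *pending* request back down is allowed
# as long as it stays >= the PVC's current .status.capacity:
kubectl patch pvc data-pvc -p '{"spec":{"resources":{"requests":{"storage":"1T"}}}}'
```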

@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 19, 2021
@k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 19, 2021
@k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot commented

@k8s-triage-robot: Closing this issue.


@andreclaro commented

Is there any workaround to manually fix this issue?
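Not confirmed in this thread, but a commonly used manual workaround is to recreate the PVC object while keeping the underlying PV and its data. A sketch, where `<pv-name>` is the PV bound to the stuck claim:

```shell
# a) Make sure the PV survives deletion of the PVC object.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# b) Delete only the PVC object; the volume and its data are untouched.
kubectl delete pvc data-pvc

# c) Clear the stale claimRef so the PV becomes Available again.
kubectl patch pv <pv-name> --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'

# d) Recreate the PVC with a request the backend can satisfy, pinning it to
#    the same PV via spec.volumeName so it re-binds to the existing data.
```

The Retain reclaim policy in step (a) is what guarantees the volume is not deleted or recycled while the claim is recreated.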
