
We should retry Provisioning/Binding when the user corrects the spec #1672

Closed
kibbles-n-bytes opened this issue Jan 19, 2018 · 6 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@kibbles-n-bytes
Contributor

For example, if a user makes a mistake in their instance spec and the provision fails, they often assume they can edit the resource to fix the mistake and that the provision will be retried. Having to delete the resource, make the correction, and recreate the resource is a painful flow. We should make things as easy as possible for the user and let them correct their provisioning mistake within the same resource.

Internally, we should perform orphan mitigation and retry provisioning/binding with the new properties. Instead of judging whether a reconciliation is an update or an add based on ReconciledGeneration, we can use whether ExternalProperties is set.
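
A minimal sketch of that decision, using simplified stand-in types rather than the real ServiceInstance API types; the point is only the nil check on ExternalProperties instead of a generation comparison:

```go
// Sketch only: PropertiesState and InstanceStatus are simplified stand-ins,
// not the actual servicecatalog API types.
package reconcile

// PropertiesState stands in for the properties last sent to the broker.
type PropertiesState struct {
	ClusterServicePlanExternalName string
	Parameters                     map[string]interface{}
}

// InstanceStatus stands in for the relevant ServiceInstance status fields.
type InstanceStatus struct {
	ReconciledGeneration int64
	// ExternalProperties stays nil until a provision/update has actually
	// succeeded, so a failed provision followed by a spec edit still looks
	// like an "add".
	ExternalProperties *PropertiesState
}

type Action string

const (
	ActionProvision Action = "provision" // first attempt, or retry after a failed provision
	ActionUpdate    Action = "update"    // instance already exists at the broker
)

// nextAction decides add-vs-update from whether ExternalProperties is set,
// instead of comparing ReconciledGeneration with the object's generation.
func nextAction(status InstanceStatus) Action {
	if status.ExternalProperties == nil {
		return ActionProvision
	}
	return ActionUpdate
}
```

With that rule, editing the spec after a failed provision naturally falls back into the provision path instead of being treated as an update.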

@kibbles-n-bytes kibbles-n-bytes added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 19, 2018
@dmitri-d
Contributor

dmitri-d commented Feb 6, 2018

Apologies if I'm missing something blatantly obvious.

I just tried updating the spec (clusterServicePlanExternalName, using patch) for a ServiceInstance that had originally failed to provision. It worked, and the instance was successfully provisioned after the update. It also worked for adding a new parameter, but not for updating an existing parameter (not sure why at the moment).
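
For reference, a merge patch along those lines, sketched here with the dynamic client (the instance name, namespace, and plan value are placeholders, not taken from an actual test):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "servicecatalog.k8s.io",
		Version:  "v1beta1",
		Resource: "serviceinstances",
	}

	// Correct the plan name on the instance that failed to provision;
	// "my-instance", "test-ns", and "standard" are placeholders.
	patch := []byte(`{"spec":{"clusterServicePlanExternalName":"standard"}}`)
	obj, err := dyn.Resource(gvr).Namespace("test-ns").Patch(
		context.TODO(), "my-instance", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("patched", obj.GetName())
}
```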

What is the scenario that this is meant to address?

@nilebox
Contributor

nilebox commented Feb 12, 2018

We have noticed the same issue when retrying after a connection timeout or a 400 Bad Request. I agree that we should be able to retry; see the table in #1715.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
