This repository has been archived by the owner on May 6, 2022. It is now read-only.

Implement Custom Resource Definitions storage backend #1088

Closed
1 task
arschles opened this issue Aug 1, 2017 · 23 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. third-party-resources
Milestone

Comments

@arschles
Contributor

arschles commented Aug 1, 2017

As part of the transition from Third Party Resources (TPRs) to Custom Resource Definitions (CRDs) (see #987 for more detail), we'll need to implement a CRD storage backend, similar to the TPR storage backend. It will likely be possible to copy much of the code from the TPR implementation (at /pkg/storage/tpr) to implement CRD storage, but it's important that we don't overwrite the TPR implementation so that we can allow either to be configured until we decide to deprecate and remove TPR support.

In addition to the work to implement this storage backend, the API server will need to gain some configuration (via command line flags) to turn on CRD storage.
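A rough sketch of how such a flag might be wired is below. This is only an illustration, assuming a hypothetical `--storage-type` flag name and hypothetical `tpr`/`crd` storage constructors; none of these identifiers come from the actual service-catalog code.

```go
// Hypothetical sketch: selecting the storage backend via a command-line flag.
package main

import (
	"flag"
	"fmt"
	"os"
)

const (
	storageTypeTPR = "tpr"
	storageTypeCRD = "crd"
)

func main() {
	storageType := flag.String("storage-type", storageTypeTPR,
		"Which storage backend the API server uses: 'tpr' (default) or 'crd'.")
	flag.Parse()

	switch *storageType {
	case storageTypeTPR:
		fmt.Println("using the existing TPR storage backend (/pkg/storage/tpr)")
		// storage = tpr.NewStorage(...)  // existing implementation, left untouched
	case storageTypeCRD:
		fmt.Println("using the new CRD storage backend")
		// storage = crd.NewStorage(...)  // new implementation, adapted from the TPR code
	default:
		fmt.Fprintf(os.Stderr, "unknown --storage-type %q\n", *storageType)
		os.Exit(1)
	}
}
```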

cc/ @nilebox @mengqiy

Tasks

@ash2k
Contributor

ash2k commented Aug 1, 2017

See atlassian/smith#113 and atlassian/smith#114 for inspiration :)

@vaikas
Contributor

vaikas commented Aug 16, 2017

@nilebox
Contributor

nilebox commented Nov 17, 2017

Closing the #1105 PR, as I think we need to revisit the requirements and decide whether we want to support CRDs in Service Catalog.

@kibbles-n-bytes
Contributor

@nilebox Is there anything in particular that blocks using CRDs as a backing store for our custom API server? Are the versioning issues still a problem even in this case?

@carolynvs
Contributor

They just added support for Status sub-resources to CRDs, but versioning isn't in yet.
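For context, a minimal sketch of what enabling the status subresource looks like with the `apiextensions` v1beta1 Go types; the group and kind below are made up for illustration, not actual service-catalog resources.

```go
package crdexample

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleCRD defines a hypothetical custom resource with the status
// subresource enabled, so /status updates are handled separately from spec.
func exampleCRD() *apiextensionsv1beta1.CustomResourceDefinition {
	return &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1alpha1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural: "widgets",
				Kind:   "Widget",
			},
			// Enable the status subresource.
			Subresources: &apiextensionsv1beta1.CustomResourceSubresources{
				Status: &apiextensionsv1beta1.CustomResourceSubresourceStatus{},
			},
		},
	}
}
```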

@pmorie
Contributor

pmorie commented Mar 13, 2018 via email

@carolynvs
Contributor

Oops, I was referring to what's supported if we moved to CRDs (the latter). Never mind! 😊

@duglin
Contributor

duglin commented Mar 13, 2018

Well, it's a good question. I can't remember who from the main k/k team recommended it, but moving to CRDs instead of our own API server is an option we may want to discuss at the f2f.

@carolynvs
Contributor

It was Eric Tune.

@fabiand

fabiand commented Mar 13, 2018

For the record, we (KubeVirt) also faced this question: we actually wrote our own API server before going back to CRDs. The primary pain point was data storage for our custom API server.
Today CRDs look promising: with validation, initializers, and admission controllers you can do quite a lot.
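As an illustration of the validation point, here is a hedged sketch of attaching an OpenAPI v3 schema to a CRD spec using the same v1beta1 types; the field names (`spec.replicas`, etc.) are hypothetical.

```go
package crdexample

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// withValidation attaches a simple OpenAPI v3 schema so the apiextensions
// apiserver rejects objects whose spec.replicas is missing or negative.
// The schema and field names are illustrative only.
func withValidation(spec *apiextensionsv1beta1.CustomResourceDefinitionSpec) {
	minReplicas := float64(0)
	spec.Validation = &apiextensionsv1beta1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1beta1.JSONSchemaProps{
			Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
				"spec": {
					Required: []string{"replicas"},
					Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
						"replicas": {Type: "integer", Minimum: &minReplicas},
					},
				},
			},
		},
	}
}
```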

@pmorie
Contributor

pmorie commented Mar 13, 2018 via email

@n3wscott
Contributor

@pmorie we are interested in bringing up CRDs as a backing store for our existing API server. We want this so we can remove the dependency on a separate etcd instance.

@nilebox
Contributor

nilebox commented Mar 13, 2018

@n3wscott you can take a look at #1105, which had most things working but required quite a lot of code and still had some issues (IIRC watches in kubectl were dropping from time to time for some reason).
I am not sure this is a good idea. The better way would be supporting such storage out of the box in k8s.io/apimachinery or k8s.io/apiserver; otherwise it's a PITA to have to maintain this code ourselves.
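For readers who haven't seen #1105: the core idea is roughly the sketch below, i.e. persisting API objects as custom resources through client-go's dynamic client instead of writing to a dedicated etcd. This is a hypothetical simplification (the type, method set, and resource are invented for illustration and assume the current dynamic client API), not the actual PR code.

```go
package crdstorage

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// crdStore is a hypothetical sketch of the approach: instead of talking to a
// dedicated etcd, objects are persisted as custom resources in the core
// cluster via the dynamic client.
type crdStore struct {
	client dynamic.Interface
	gvr    schema.GroupVersionResource // the CRD backing one API type
}

// Create persists obj as a custom resource in the given namespace.
func (s *crdStore) Create(ctx context.Context, namespace string, obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
	return s.client.Resource(s.gvr).Namespace(namespace).Create(ctx, obj, metav1.CreateOptions{})
}

// Get reads a custom resource back. Watches, updates, and conversion to the
// real API types are where most of the real complexity (and code) lives.
func (s *crdStore) Get(ctx context.Context, namespace, name string) (*unstructured.Unstructured, error) {
	return s.client.Resource(s.gvr).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
}
```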

@nilebox
Contributor

nilebox commented Mar 13, 2018

We do a lot of things in our API that I am not at all sure are on the roadmap for CRD, let alone currently achievable.

Another aspect is that, with so many features in play (admission controllers, validation, etc.), it might take as much code to use CRDs (if not more), and that code could well be more complicated.

So the only real benefit here is not having to manage a dedicated etcd (or share the core etcd directly).

@n3wscott
Contributor

I would like to work on this issue if no one else is. #dibs?

@nilebox
Contributor

nilebox commented Mar 22, 2018

@n3wscott sure, but I think it would be better to first discuss whether we do want to bring it back, and if we do, what the best approach is.

Also, it would be nice to talk to the API machinery folks to check whether they have any new recommendations for this problem (e.g. there might be some "blob store" coming in the future, or they might want to support such a use case in k8s.io/apiserver out of the box).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2019
@jberkhahn jberkhahn removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 21, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 20, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mszostok
Contributor

done by: #2633
