
Hello from kubeclient 👋 #20

Closed
cben opened this issue Aug 13, 2018 · 12 comments
Labels: lifecycle/rotten

Comments

cben commented Aug 13, 2018

Hi 👋, just wanted to reach out from the existing https://github.com/abonas/kubeclient project to ask how you see the relationship between the two projects, and whether there are aspects we should work on together?

Taking a quick look, differences in approach that stand out:

  1. The API in this gem is close to the other official k8s clients. kubeclient tried to evolve a Ruby-esque API from scratch.

  2. This gem is generated from a specific k8s API version; kubeclient uses discovery to define methods dynamically.
    Did you deliberately choose generation for specific benefits, or simply as an outgrowth of the generation tooling that worked well for other languages?
    Discovery in principle allows the same client to talk to future k8s versions, to extra APIs (e.g. openshift), and perhaps most importantly to deal with custom resources (TBH I never tried kubeclient on CRDs myself, but I hear it works).
    How would one deal with CRDs in your approach? Use the same generator to create a separate gem?

  3. This gem feels more "strongly typed". You have tons of classes, e.g. V1PersistentVolume consisting of V1PersistentVolumeSpec and others... kubeclient previously created a class dynamically for top-level resources only, but gave up even that in favor of a generic Resource class.

  4. Your generated docs are WAY more detailed :-)

  5. Communication overhead? kubeclient currently uses rest-client for HTTP, with no persistent connections and a full TLS setup for each request :( What do you use?

  6. I'm interested to learn your approach to API groups, as this is a pain point in kubeclient that we're trying to rethink. Currently kubeclient requires a separate client object for each API group, and lacks an easy way to share cluster configuration between them.
    IIUC (Is configuration global? #19), you still need an object for each group you talk to, but config sharing is trivial in the single-global-config case and easy enough in the multi-cluster case by passing an ApiClient instance? (rough sketch of the kubeclient side after this list)
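
To make the contrast in 2/3/6 concrete, roughly how it looks from the kubeclient side today (the endpoint is a placeholder, and the generated-client line is only my guess at the swagger-codegen naming, not checked against this gem):

```ruby
require 'kubeclient'

# kubeclient discovers the API at runtime and defines methods dynamically,
# but currently needs a separate client object per API group:
core = Kubeclient::Client.new('https://cluster.example.com/api', 'v1')
apps = Kubeclient::Client.new('https://cluster.example.com/apis/apps', 'v1')

# Everything comes back as a generic Kubeclient::Resource, including custom
# resources that show up via discovery:
core.get_pods(namespace: 'default').each { |pod| puts pod.metadata.name }
apps.get_deployments(namespace: 'default')

# A generated client instead exposes typed classes/methods per API object,
# something in the spirit of (illustrative naming only):
#   CoreV1Api.new(api_client).list_namespaced_pod('default')  # => V1PodList of V1Pod objects
```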

@brendandburns
Contributor

@cben Hello, I know this is pretty old and this repo was mostly abandoned, but @drubin and I are going to start working on this. We'll try to answer some of your questions as we go.

cben commented Apr 28, 2019

BTW, kubeclient has recently been getting surprisingly many contributions around config & authentication (we got generic exec, Google default credentials & gcloud cmd-path, OIDC, Amazon EKS...).
They're not yet organized to support renewal of expired credentials (ManageIQ/kubeclient#393), though, so we'll need to rework our design.

I think it'd make sense to extract these into a separate gem ("kubeconfig"? "kubeauth"?) that both kubeclient and kubernetes-client/ruby could use. Are you interested in this as a goal?
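
Concretely, the piece I'd see extracting is roughly what kubeclient does today when building a client from a kubeconfig (sketch from memory; the path is just the usual default):

```ruby
require 'kubeclient'

# Load a kubeconfig and build a client from its current context -- this is the
# code that keeps growing exec/GCP/OIDC/EKS handling and could live in its own gem.
config  = Kubeclient::Config.read(ENV['KUBECONFIG'] || File.join(Dir.home, '.kube', 'config'))
context = config.context   # the current-context by default

client = Kubeclient::Client.new(
  context.api_endpoint, 'v1',
  ssl_options:  context.ssl_options,
  auth_options: context.auth_options
)
```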

grosser commented Jun 4, 2019

Yes please, anything that can handle the raw auth/connection as a standalone gem would be great, ideally not tied to a particular http gem.
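
For example, something shaped like this (everything here is hypothetical, names included):

```ruby
# Hypothetical shape for a standalone auth/config gem: it only resolves
# credentials and connection parameters, and returns plain data that any HTTP
# library (rest-client, faraday, excon, ...) can consume.
module KubeAuth
  Credentials = Struct.new(:server, :ca_data, :client_cert_data,
                           :client_key_data, :token, :expires_at,
                           keyword_init: true)

  def self.from_kubeconfig(path, context: nil)
    # would parse the kubeconfig, run exec plugins / cloud helpers as needed,
    # and return a Credentials value object -- no HTTP client involved here
  end
end

# creds = KubeAuth.from_kubeconfig("#{Dir.home}/.kube/config")
# then hand creds.server / creds.token / cert data to whichever HTTP gem you like
```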

drubin commented Jun 5, 2019

@brendandburns How about potentially taking a similar approach to the Fluent bindings we are experimenting with on the JS client?

I can see huge value in having a shared config/auth library.

The config/auth/exec handling is actually a really complex topic because it's expected to behave exactly the same across all the client libraries as well as kubectl. Most of this behavior is documented, but some of it is pretty subtle.

Wondering if we can somehow abstract all of this into a better feature matrix, so we can see which parts of this logic are supported by each client. I know there is the official one, but it seems a bit too broad?

Especially given some of the issues we have seen recently around some of the subtle features.
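
To illustrate the subtlety: even just the exec plugin flow means spawning a process, parsing an ExecCredential, and honoring its expiry exactly the way kubectl does (rough sketch; the ExecCredential fields are from the Kubernetes docs, the helper itself is made up):

```ruby
require 'json'
require 'open3'
require 'time'

# Run a kubeconfig "exec" credential plugin once and return its token and expiry.
# A real client also has to cache the result and re-run the plugin after
# expirationTimestamp passes, matching kubectl's behavior.
def exec_credential(command, args: [], env: {})
  stdout, status = Open3.capture2(env, command, *args)
  raise "exec plugin #{command} failed" unless status.success?

  cred = JSON.parse(stdout)   # a client.authentication.k8s.io ExecCredential
  {
    token:      cred.dig('status', 'token'),
    expires_at: (ts = cred.dig('status', 'expirationTimestamp')) && Time.parse(ts)
  }
end
```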

kke commented Jul 29, 2019

k8s-client agrees :)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 27, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 26, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

cben commented Dec 26, 2019

non-update: I've since shifted projects at work and barely find time for minimal maintenance of kubeclient 🙁. I still think pulling out a shared config gem is worthwhile, and I'd be happy to help, but for it to happen someone else would need to push it forward.

pre commented Jan 30, 2020

non-update 2: Kontena.io went bankrupt at the end of 2019. There is now a community fork of kontena/k8s-client under a community organization, k8s-ruby/k8s-ruby.

The k8s-ruby library is a community effort to keep k8s-client maintained without any dependencies on the former Kontena.io organization. The library was renamed in order to publish it to RubyGems without conflicting with k8s-client.

It would be wonderful to combine the maintainership for Kubernetes' Ruby projects under a common organization, be it k8s-ruby or something else.

It would make sense to extract e.g. the authentication logic and other common parts into a gem of its own. Ultimately it would be great to kill the k8s-ruby gem and replace it with k8s-auth, k8s-api-client, k8s-resource and similar smaller building blocks.

cben commented Feb 23, 2020

@abonas how do you feel about moving kubeclient under the k8s-ruby org too?
I was thinking an org is the next logical step anyway.
