
protobuf encoding exploration #371

Closed
clux opened this issue Jan 5, 2021 · 26 comments
Assignees
Labels
client-gold gold client requirements

Comments

@clux
Member

clux commented Jan 5, 2021

Is it reasonable/possible for us to get protobuf encoding with generated material?
This is just a bit of a ramble on potential ideas. There's no concrete plans as of writing this. Any help on this is appreciated.

This is for the last Gold Requirement Client Capabilities
Official documentation on kubernetes.io/api-concepts#protobuf

Continuing/summarising the discussion from #127, we see conflicting uses of client-gold in other clients that do not support it, but let us assume good faith and try our best here.

We see that the go api has protobuf codegen hints (api/types.go) like:

// +optional
// +patchMergeKey=name
// +patchStrategy=merge
EphemeralContainers []EphemeralContainer `json:"ephemeralContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,34,rep,name=ephemeralContainers"`

whereas the (huge) swagger codegen has the equivalent json for that part of the PodSpec:

        "ephemeralContainers": {
          "description": "...",
          "items": {
            "$ref": "#/definitions/io.k8s.api.core.v1.EphemeralContainer"
          },
          "type": "array",
          "x-kubernetes-patch-merge-key": "name",
          "x-kubernetes-patch-strategy": "merge"
        },

Here, the ordering 34 is missing so this is probably difficult to solve for k8s-openapi as it stands.

However kubernetes/api does have generated.proto files (see core/v1/generated.proto) and it has the following to say about the entry:

  // +optional
  // +patchMergeKey=name
  // +patchStrategy=merge
  repeated EphemeralContainer ephemeralContainers = 34;

We could maybe load those files with prost, but AFAICT that will create structs that conflict with the generated structs from k8s-openapi, and we rely on k8s-openapi for trait implementations. Unless there's a way to associate these structs with the k8s-openapi structs of the same name, this would be hard. Sounds like another codegen project if it is possible.

On the other hand, if the swagger schemas had these tags, then k8s-openapi could optionally enable prost-tagging, but based on the existence of the kubernetes/api repo, maybe they don't want to evolve the swagger schemas anymore? Maybe it's worth requesting upstream?
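As an illustration of why that wire number matters (a sketch for this issue, not kube-rs code): protobuf prefixes every field on the wire with a key computed as `(tag << 3) | wire_type`, varint-encoded, so without the `34` from the Go/proto sources the `ephemeralContainers` field cannot be encoded compatibly from the swagger schema alone:

```rust
// Sketch: how a protobuf field key is derived from the tag number.
// `encode_varint` and `field_key` are illustrative helpers, not real kube code.

fn encode_varint(mut n: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80); // set the continuation bit
    }
}

/// Encode the key for a length-delimited field (wire type 2) with `tag`.
fn field_key(tag: u32) -> Vec<u8> {
    let mut out = Vec::new();
    encode_varint(((tag as u64) << 3) | 2, &mut out);
    out
}

fn main() {
    // ephemeralContainers is `bytes,34,rep` => key = varint((34 << 3) | 2) = varint(274)
    assert_eq!(field_key(34), vec![0x92, 0x02]);
}
```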

@clux clux added question Direction unclear; possibly a bug, possibly could be improved. client-gold gold client requirements help wanted Not immediately prioritised, please help! labels Jan 5, 2021
@clux
Member Author

clux commented Mar 18, 2021

Now that our Meta trait has been decoupled from k8s_openapi (#385), we can in theory allow for an alternative path to serialisation (which could additionally help avoid pathological problems with swagger like #284 and the option-heaviness).

So, if we had something like an alternative k8s-openapi built on top of kubernetes/api, there could be some big benefits (aside from the expected serialization overhead drop from protobuf).

@nightkr
Member

nightkr commented Mar 18, 2021

This is really three separate questions:

  1. Should we support protobuf at all?
  2. Should the swagger or the protobuf definitions be considered the canonical API reference?
  3. How opinionated should kube-rs be about whatever choice is made? (Should we still support serde? )

IMO, 2 is the big question that we need to start with. If Protobuf aligns better with K8s semantics (and IIRC it does), then switching to it might enable us to get rid of a bunch of the hacks in k8s-openapi (and maybe better support third-party CRDs?).

Regardless, we should probably at least try to keep @Arnavion in the loop before committing to anything. While an ecosystem split might end up inevitable due to backwards compatibility concerns, that would be a sad outcome and should probably be a strategy of last resort rather than a starting assumption.

@Arnavion

I'd looked into supporting protobufs instead of JSON before, and had found that upstream didn't want anything except golang code to use protobufs. I see https://github.com/kubernetes/api#recommended-use still seems to imply that.

If you want to store or interact with proto-formatted Kubernetes API objects, we recommend using the "official" serialization stack in k8s.io/apimachinery. Directly serializing these types to proto will not result in data that matches the wire format or is compatible with other kubernetes ecosystem tools.

That said, I assume whatever clients you found that do use protobufs are doing the double-(de)serialization themselves anyway?

@nightkr
Member

nightkr commented Mar 18, 2021

While that is a bit annoying, presumably that "just" means that we need to wrap Prost's serializer ourselves rather than relying on it directly. It doesn't seem to affect the actual object serialization, and still has to be stable(ish) since they can't just break old Go clients whenever either.

@Arnavion

So does using the .protos in https://github.com/kubernetes/api with prost give you what you want? It's closer to the source than the swagger spec, so a lot of the fixups that k8s-openapi does ought to be unnecessary. IIRC the only things that still require JSON are CRDs and patches, and at least for the former I believe you already have your own custom derive instead of using k8s-openapi-derive.

@clux
Member Author

clux commented Aug 3, 2021

FWIW: The official way to get the protos for a client seems to be through kubernetes-client/gen/proto, and kubernetes is definitely keen on us using that repo.

I still have no idea how good the output is (it doesn't seem to like an up to date libprotobuf?), but might try to spend some time on it.
EDIT: the script is expecting protoc support and we probably need to accommodate prost therein.

@kazk
Member

kazk commented Aug 5, 2021

Default output from prost for k8s.io/api/core/v1: https://gist.github.com/kazk/daa748f8448269591ff61da814896137

Pod

prost:

/// Pod is a collection of containers that can run on a host. This resource is created
/// by clients and scheduled onto hosts.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Pod {
    /// Standard object's metadata.
    /// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    /// +optional
    #[prost(message, optional, tag="1")]
    pub metadata: ::core::option::Option<super::super::super::apimachinery::pkg::apis::meta::v1::ObjectMeta>,
    /// Specification of the desired behavior of the pod.
    /// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    /// +optional
    #[prost(message, optional, tag="2")]
    pub spec: ::core::option::Option<PodSpec>,
    /// Most recently observed status of the pod.
    /// This data may not be up to date.
    /// Populated by the system.
    /// Read-only.
    /// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    /// +optional
    #[prost(message, optional, tag="3")]
    pub status: ::core::option::Option<PodStatus>,
}

k8s_openapi:

/// Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct Pod {
    /// Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    pub metadata: crate::apimachinery::pkg::apis::meta::v1::ObjectMeta,

    /// Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    pub spec: Option<crate::api::core::v1::PodSpec>,

    /// Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    pub status: Option<crate::api::core::v1::PodStatus>,
}

ephemeral_containers

    /// List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing
    /// pod to perform user-initiated actions such as debugging. This list cannot be specified when
    /// creating a pod, and it cannot be modified by updating the pod spec. In order to add an
    /// ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
    /// This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.
    /// +optional
    /// +patchMergeKey=name
    /// +patchStrategy=merge
    #[prost(message, repeated, tag="34")]
    pub ephemeral_containers: ::prost::alloc::vec::Vec<EphemeralContainer>,

https://gist.github.com/kazk/daa748f8448269591ff61da814896137#file-1_k8s-io-api-core-v1-rs-L3528-L3537

To Try

  1. Create a project following https://docs.rs/prost-build/0.8.0
  2. Create protos/ next to src/
  3. Download protos with https://github.com/kubernetes-client/gen/blob/master/proto/dependencies.sh (some 404s, so remove those) in protos/. (Getting release tarballs and extracting .proto files is better.)
  4. Add build.rs with the following code:
use std::io::Result;
fn main() -> Result<()> {
    prost_build::compile_protos(&["protos/k8s.io/api/core/v1/generated.proto"], &["protos/"])?;
    Ok(())
}
  5. Run cargo build and check the output (k8s.io.api.core.v1.rs).

Code generation can be customized (not sure how much yet) with ServiceGenerator:

use std::io::Result;

fn main() -> Result<()> {
    let mut config = prost_build::Config::new();
    config.service_generator(Box::new(KubeServiceGenerator::new()));
    config.compile_protos(&["protos/k8s.io/api/core/v1/generated.proto"], &["protos/"])?;
    Ok(())
}

@clux
Member Author

clux commented Aug 5, 2021

I think that's promising. The output looks good. We don't need the ServiceList, SecretList objects, but i guess that type of logic is what goes into the service_generator. Feel free to make a repo within here to test out, or if you have commands for how you got the output, i'm happy to try a bit as well (never used prost).

@kazk
Member

kazk commented Aug 5, 2021

I had misunderstood ServiceGenerator. It's for service descriptors, of which we have none. So it looks like with prost_build, we can only add attributes with type_attribute and field_attribute.

if you have commands for how you got the output, i'm happy to try a bit as well (never used prost).

I've never used prost either. It's generated with cargo build (see the last section of my previous comment). I might create a repo later.

@kazk
Member

kazk commented Aug 5, 2021

There's also https://github.com/stepancheg/rust-protobuf and https://github.com/tafia/quick-protobuf

@kazk
Member

kazk commented Aug 6, 2021

https://github.com/kazk/k8s-pb

Not very useful, but shows the following:

The generated code looks nice and readable, but I don't think we can do much to it.

How Kubernetes wraps them for responses (application/vnd.kubernetes.protobuf) is documented in https://kubernetes.io/docs/reference/using-api/api-concepts/#protobuf-encoding
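For reference, the wrapper documented there is a 4-byte magic prefix (0x6b 0x38 0x73 0x00, i.e. "k8s" plus a NUL byte) followed by an encoded runtime.Unknown message carrying the actual object bytes. A minimal sketch of detecting and stripping that prefix (illustrative only; `strip_magic` is a hypothetical helper, not existing kube code):

```rust
// Sketch: recognise the application/vnd.kubernetes.protobuf envelope.
// The bytes that remain after the prefix are a runtime.Unknown message,
// which would still need a protobuf decode (e.g. via prost) to unwrap.

const K8S_PROTO_MAGIC: [u8; 4] = [0x6b, 0x38, 0x73, 0x00];

/// Strip the magic prefix, returning the runtime.Unknown bytes,
/// or None if the payload is not in the Kubernetes protobuf envelope.
fn strip_magic(payload: &[u8]) -> Option<&[u8]> {
    payload.strip_prefix(&K8S_PROTO_MAGIC)
}

fn main() {
    let wrapped = [0x6b, 0x38, 0x73, 0x00, 0x0a, 0x02];
    assert_eq!(strip_magic(&wrapped), Some(&[0x0a, 0x02][..]));
    assert_eq!(strip_magic(b"not-protobuf"), None);
}
```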

@kazk
Member

kazk commented Aug 6, 2021

prost has the best output by far. I don't think prost_build provides a way for us to customize much, but we can try writing our code generator based on theirs (using prost_types::FileDescriptorSet). Or maybe extract tags (and anything else useful) for k8s-openapi-codegen.

rust-protobuf

  • Doc comments are lost
  • Doesn't respect package and uses the input file name. generated.proto becomes a module generated.rs.
    • Seems to assume unique input file names, and each output overwrites them, so if we pass all k8s pbs as input (all of them are generated.proto), we end up with only one generated.rs containing whatever was generated last.
  • Imported packages are included in the same module. See ObjectMeta below. This is actually not true. It just looks odd because of the module name issue above.
  • Adds special fields
  • Verbose because it doesn't derive serialize/deserialize
#[derive(PartialEq,Clone,Default)]
pub struct Pod {
    // message fields
    pub metadata: ::protobuf::SingularPtrField<super::generated::ObjectMeta>,
    pub spec: ::protobuf::SingularPtrField<PodSpec>,
    pub status: ::protobuf::SingularPtrField<PodStatus>,
    // special fields
    pub unknown_fields: ::protobuf::UnknownFields,
    pub cached_size: ::protobuf::CachedSize,
}

Provides a crate that can be used in build.rs to generate:

use std::io::Result;

fn main() -> Result<()> {
    protoc_rust::Codegen::new()
        .out_dir("protoc_rust")
        .inputs(&["protos/api/core/v1/generated.proto"])
        .include("protos")
        .run()
        .expect("protoc");
    Ok(())
}

quick-protobuf

  • Doc comments are lost
  • Has borrowed fields
  • Rust code is split into a module tree, which is nice
  • k8s.io.apimachinery.pkg.runtime.schema crashes pb-rs because it doesn't contain any message. Workaround by adding message Dummy {}.
  • Verbose because it doesn't derive serialize/deserialize
// use super::super::super::*;
#[derive(Debug, Default, PartialEq, Clone)]
pub struct Pod<'a> {
    pub metadata: Option<apimachinery::pkg::apis::meta::v1::ObjectMeta<'a>>,
    pub spec: Option<api::core::v1::PodSpec<'a>>,
    pub status: Option<api::core::v1::PodStatus<'a>>,
}

Uses CLI to generate:

cargo install pb-rs
pb-rs -I $(pwd)/protos -d $(pwd)/pbrs protos/**/*.proto

@clux
Member Author

clux commented Aug 6, 2021

Wow. That repo looks great, just tried it all myself and it loads everything in with the given instructions. Also great to see that we're not losing anything by going with prost over the alternatives. The code output looks good, and a quick explore of the cargo doc output shows a great module structure already.

The problem here is that we need to inject the generic properties out-of-band.
Since we're already using include!(concat!(..outputcode)) so heavily, maybe it would suit us to find a way to extract this separately, and then inline the two bits together.

I.e. maybe we also have to grab the swagger schema to get the data for the Resource impls.

Anyway, feel free to move that into kube-rs org if you feel like it, I'd love to help out here :D

@clux
Member Author

clux commented Aug 6, 2021

RE: Generating Resource impls out of band from swagger.json

noting down a naive/incomplete path here because I was curious (there is probably a more precise solution for this in k8s-openapi, please ignore):

  • start loop over d in swgr.definitions to find objects, and identify d's GVKs using x-kubernetes-group-version-kind key
  • cross reference with p in swgr.paths as we go along - listables will have a corresponding x-kubernetes-group-version-kind to cross reference with to identify resource/root url
  • use paths information to identify scope (Namespaced: namespaces in url, subres: resource name in url and something after)
  • should have all information about the current d to write impl Resource to an appropriate file now
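The scope-detection bullet above could be sketched with naive path heuristics like these (assumed heuristics for illustration only, not the final codegen logic):

```rust
// Sketch: classify a swagger path as namespaced and/or a subresource
// by inspecting its URL segments. Both functions are hypothetical helpers.

/// Namespaced resources have a `/namespaces/{namespace}/` segment in their paths.
fn is_namespaced(path: &str) -> bool {
    path.contains("/namespaces/{namespace}/")
}

/// A path like /api/v1/namespaces/{namespace}/pods/{name}/status has a
/// segment after {name}, which marks it as a subresource path.
fn is_subresource(path: &str) -> bool {
    path.split("/{name}/").nth(1).is_some()
}

fn main() {
    assert!(is_namespaced("/api/v1/namespaces/{namespace}/pods"));
    assert!(!is_namespaced("/api/v1/pods"));
    assert!(is_subresource("/api/v1/namespaces/{namespace}/pods/{name}/status"));
    assert!(!is_subresource("/api/v1/namespaces/{namespace}/pods/{name}"));
}
```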

@kazk
Member

kazk commented Aug 8, 2021

Yeah, we'll need to use swagger.json to add missing information.

By the way, some irregular camelCase to snake_case conversions fail (e.g., clusterIPs becomes cluster_i_ps). These need to be adjusted (k8s-openapi does this too), but there's no way to do so at the moment. Maybe we can work with the prost devs to make it possible to customize more.
If not, or if we want to customize more than they want to support, we'll need to write a custom code generator that takes a FileDescriptorSet (a set of parsed protobufs). A custom code generator can take swagger.json as well and output however we want.
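For illustration, an acronym-aware conversion could look like the sketch below (a hedged sketch of the desired behaviour, not what prost-build actually does; it still wouldn't split mixed runs like HTTPGet into http_get):

```rust
// Sketch: treat a run of uppercase letters as one word, so
// clusterIPs becomes cluster_ips instead of prost's cluster_i_ps.
// `snake_case` is a hypothetical helper for this issue, not real kube code.

fn snake_case(s: &str) -> String {
    let chars: Vec<char> = s.chars().collect();
    let mut out = String::new();
    for (i, &c) in chars.iter().enumerate() {
        if c.is_ascii_uppercase() {
            // Start a new word only at the beginning of an uppercase run.
            if i > 0 && chars[i - 1].is_ascii_lowercase() {
                out.push('_');
            }
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(snake_case("clusterIPs"), "cluster_ips");
    assert_eq!(snake_case("ephemeralContainers"), "ephemeral_containers");
}
```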

I'm also thinking of avoiding include!, and creating a k8s-pb-codegen that outputs like k8s-openapi-codegen, by moving api.core.v1.rs to api/core/v1/mod.rs and creating any necessary modules in between. It's nice to be able to diff the output, and we can also continue to support version features like k8s-openapi this way.

Anyway, feel free to move that into kube-rs org if you feel like it, I'd love to help out here :D

I'd like to play around with it some more, but suggestions on what to experiment with, or sharing what you tried is welcomed there. Once we have a better idea, I'll transfer to the org, or create a new repo based on it.

What should be the goal? Are we still trying to fit in gen?

Some things to explore:

  • Compare generated code against k8s-openapi more closely to find differences other than the obvious ones.
  • FileDescriptorSet of all Kubernetes protos. We should be able to use that to extract anything useful. Maybe this can be used to add prost tags in k8s-openapi-codegen, or anything swagger.json is missing.
  • swagger.json contains information of which resource supports protobufs, so we can use that to mark resources that can use protobufs, and add methods for them to support both if we want to.
  • Transforming swagger.json to something easier to use? More like APIResourceList ({name, singularName,namespaced,kind,verbs})?
  • impl Resource
  • Adding serde support. Need to include apiVersion and kind. Can it be derived?
  • application/vnd.kubernetes.protobuf (envelope wrapper. 4 bytes prefix + Unknown message)

    If you want to store or interact with proto-formatted Kubernetes API objects, we recommend using the "official" serialization stack in k8s.io/apimachinery. Directly serializing these types to proto will not result in data that matches the wire format or is compatible with other kubernetes ecosystem tools. The reason is that the wire format includes a magic prefix and an envelope proto. Please see: kubernetes.io/docs/reference/using-api/api-concepts/#protobuf-encoding

    For the same reason, we do not recommend embedding these proto objects within your own proto definitions. It is better to store Kubernetes objects as byte arrays, in the wire format, which is self-describing. This permits you to use either JSON or binary (proto) wire formats without code changes. It will be difficult for you to operate on both Custom Resources and built-in types otherwise.
    https://github.com/kubernetes/api/blob/master/README.md#recommended-use

@MikailBag
Contributor

noting down a naive/incomplete path here because I was curious

I think this should work. However, I'd propose to use API discovery instead. I think it also provides all needed information (resource name, scope, GVK, and so on), and is easier to consume.

I don't have much free time these days, but I will try to make a POC of integrating a prost-generated client with API discovery.

@clux
Member Author

clux commented Aug 8, 2021

I'd like to play around with it some more, but suggestions on what to experiment with, or sharing what you tried is welcomed there. Once we have a better idea, I'll transfer to the org, or create a new repo based on it.

Sounds good. I'll direct PRs in that direction at some point (probably after Tuesday).

I might try my hands at something like:

Transforming swagger.json to something easier to use? More like APIResourceList ({name, singularName,namespaced,kind,verbs})?

I think it sounds like a nice separation of concerns to have as much generic information about api resources as output for the main library. We could even analyse the various subpaths of the api/{resource}/* urls to figure out exactly what verbs are supported (which i'm sure Teo would be happy with).

Adding serde support. Need to include apiVersion and kind. Can it be derived?

I'd assume so. The way we hide the TypeMeta is admittedly slightly awkward in kube-derive (with the optional Default derives, and a ::new method), so I imagine maintaining the k8s-openapi pattern of always injecting the static values would be preferable.

What should be the goal? Are we still trying to fit in gen?

Well, unless you have any objections, I am probably going to propose to the kubernetes/org repo that we start kube-rs as a CNCF sandbox project. Based on the options they presented us with.

So, with that in mind: I think if we can make something we would use and can build upon, and then get that set up in kubernetes-client with gen (even if we only use it for downloading), that would be an ideal scenario to me. We would have accomplished the kubernetes org's philosophical goal of having something official based on the specs for rust, and we would avoid direct competition (because dependency). And if we're building it, we kind of get to have a similar official status anyway (e.g. heavy linkage and recommendation from the generation repo and docs), and we'd likely get help from sig-apimachinery for the core setup.

What do you think?

@clux
Member Author

clux commented Aug 8, 2021

I think this should work. However, I'd propose to use API discovery instead. I think it also provides all needed information (resource name, scope, GVK, and so on), and is easier to consume.

@MikailBag I imagine the desired interface would sit somewhere in the realm between the current dynamic ApiResource, teo's neokubism proposal in #594, and the associated const world of k8s-openapi. Having more information about what a resource supports (like out of discovery) + have it easy to consume would be a goal.

But on the other hand, the core types do not change except between kubernetes releases, so I can't think of a reason to encourage running unnecessary discovery of unchanging core objects like Pod when the information is right there. Is there some other operational concern you have about using inferred properties?

@MikailBag
Contributor

MikailBag commented Aug 8, 2021

Sorry if I expressed my point in an unclear way.

I don't mean that kube should do some discovery at runtime (unless user manually uses Discovery).

I mean that a code generator tool somehow used by kube should infer k8s-specific metadata not from an openapi schema, but from the apiserver.

i.e:

  1. kazk/k8s-pb already contains API definitions with protobuf serialization/deserialization derived. However, someone (or something) has to write impl Resource for ... for all of them.
  2. So there should be a tool that somehow gets a list of all API resources along with metadata (such as GVK) and generates those impl Resource for ... impls.
  3. And I think this tool should consume k8s API rather than the openapi schema, because:
  • It is simpler (no need to collect data from pieces by parsing URLs)
  • It should work better for custom resources

@clux
Member Author

clux commented Aug 18, 2021

@MikailBag : That's an interesting idea. I was thinking a bit about it last week. I fear that would increase the ops complexity of such a repo quite a bit compared to a straight parser with fix-ups. We'd need to stand up a full k8s server as part of CI - one with inclusive feature gates, and even then I'm not sure we would get all the api resources that's available in the spec.

It might be easier from a software point of view than writing a complex parser, but otoh the parser is kind of already done in k8s-openapi.
Do you mind elaborating a bit on why it would be better for custom resources?

@nightkr
Member

nightkr commented Aug 18, 2021

One way would be to basically do it like prost-build, where the application developer "brings their own" manifest. But that has a few downsides..

  1. Discovery isn't a single manifest; you more or less need direct API access, which would be very weird to require at compile time
  2. Depending on how it's implemented we'd lose the shared "API vocabulary", which would make third-party libraries tricky to implement
  3. It'd be easier to confuse CRDs that happen to be installed in your cluster/distro with core resources (not a big issue for people building bespoke software interacting with their own cluster, bigger issue for people writing reusable controllers for clusters they don't operate themselves)

@kazk
Member

kazk commented Aug 19, 2021

I haven't had much time since I commented, but I wrote some jq scripts to explore swagger.json and output something similar to APIResourceList (the output is https://github.com/kazk/k8s-pb/blob/main/openapi/api-resources.json, and it includes some redundant fields right now).

Pod looks like this:

{
  "name": "pods",
  "namespaced": true,
  "subresource": false,
  "apiGroupVersion": "v1",
  "group": "",
  "version": "v1",
  "kind": "Pod",
  "rust": "api::core::v1::Pod",
  "verbs": ["create", "delete", "deletecollection", "get", "list", "patch", "update"],
  "scopedVerbs": {
    "all": ["list"],
    "namespaced": ["create", "delete", "deletecollection", "get", "list", "patch", "update"]
  },
  "paths": [
    "/api/v1/pods",
    "/api/v1/namespaces/{namespace}/pods",
    "/api/v1/namespaces/{namespace}/pods/{name}"
  ]
}

@clux
Member Author

clux commented Aug 19, 2021

hahaha, that is beautiful. i didn't think you'd be able to get that good information from just jq, and in 0.2s to run!

i was envisioning some several hundred lines of rust to do more or less the same thing, but now i'm thinking maybe we just write something on top of the json output there.

@clux
Member Author

clux commented Aug 19, 2021

What limitations of this method can you see?

# Only process definitions with GVK array. <- does this miss anything crucial?

@kazk
Member

kazk commented Aug 20, 2021

i was envisioning some several hundred lines of rust to do more or less the same thing, but now i'm thinking maybe we just write something on top of the json output there.

I love Rust, but I can't think of a good reason to use it to transform JSON like this. It's not much code, and it should be pretty obvious when something goes wrong (e.g., the script will fail, generated code based on these won't compile, etc.).

Regardless of whether we end up using it or not, I'm thinking of creating a repo that keeps track of these JSON files derived from swagger.json for each version of Kubernetes. I think it can be automated (triggered by Kubernetes' GitHub releases), and it's interesting to see the changes between versions concisely (diffing swagger.json works, but that's huge).

What limitations of this method can you see?

I don't see anything new at the moment. swagger.json might be buggy, but that's all we have.
Using the transformed JSON should make code generation trivial for impl Resource. It does involve an extra stage, but it shouldn't make a difference if we download the derived JSON instead of swagger.json.

# Only process definitions with GVK array. <- does this miss anything crucial?

I don't think so; that part is building a translation map from definition name to Rust path for Resource. If a definition doesn't have a GVK defined, we shouldn't need it. We can also change the script to fail if the map doesn't contain the path.

To see definition names without GVK field:

[
  .definitions | to_entries[]
  | select(.value | has("x-kubernetes-group-version-kind") | not)
  | .key
]

It's also excluding definitions with multiple GVKs. To see that:

[
  .definitions | to_entries[]
  | .value["x-kubernetes-group-version-kind"]? as $gvks
  | select($gvks != null and ($gvks | length > 1))
  | .key
]

which is just these two

[
  "io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions",
  "io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent"
]

@clux clux moved this to In Progress in Kube Roadmap Oct 25, 2021
@clux clux assigned clux and kazk Oct 25, 2021
@clux clux removed the question Direction unclear; possibly a bug, possibly could be improved. label Oct 30, 2021
@clux clux changed the title protobuf encoding protobuf encoding exploration Nov 21, 2021
@clux
Member Author

clux commented Nov 21, 2021

Closing this in favour of #725 to keep things more reasonably scoped.

With the creation and buildability of the k8s-pb library, i'd say this is good enough for this chaotic exploration issue.

@clux clux closed this as completed Nov 21, 2021
Repository owner moved this from In Progress to Done in Kube Roadmap Nov 21, 2021
@clux clux removed the help wanted Not immediately prioritised, please help! label Nov 21, 2021