[WIP] Make KPNG more like a library and decouple from the GRPC API #389
Conversation
Signed-off-by: Andrew Stoycos <[email protected]>
Add a first attempt at exposing a simple interface rather than using gRPC and an API. This new job analyzes the proxystore and returns the expected node-local state (ENLS) for the specified node via a Go interface rather than the API. It removes the extra protobuf marshalling that happens as part of the store2localdiff job and allows KPNG to be used as a true library (see example) rather than a daemon. Signed-off-by: Andrew Stoycos <[email protected]>
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: astoycos. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/assign @mcluseau @jayunit100
```diff
@@ -21,14 +21,14 @@ import (
 	proxystore "sigs.k8s.io/kpng/server/proxystore"
 )

-type eventHandler struct {
+type kpngEventHandler struct {
```
why prefix an internal type with the project's name, or any part of the package path btw?
Agreed :)
```go
}

type NodeLocalStateConsumer interface {
	UpdateServices(services <-chan *localnetv1.Service)
```
what about just 1 stream with the Op object but made for internal use, like:

```go
type Op struct {
	Sync   bool // when it's a sync signal
	Set    localnetv1.Set
	Path   string
	Delete bool
	// 1st option:
	Value any
	// 2nd option:
	Service  *localnetv1.Service
	Endpoint *localnetv1.EndpointInfo
}
```
also, if we're speaking internal / same memory space, is forcing a diff the way? We could nearly give the lightdiffstore instances, or maybe just a fullstate-like primary interface that could then be fed to the lightdiffstore in watchstate / the Local gRPC API
> 1 stream with the Op object

Are you referring to a unidirectional gRPC stream here, or just the Go struct? Sorry, getting overloaded with terms :)
I'm fine with anything on the design of the transfer interface (struct, interface, etc.) and just took the inspiration from Tim's slides and this ancient comment from the KEP lol
My end goal here is that it's super simple for folks to write backends, with or without any GRPC knowledge/familiarity.
> also, if we're speaking internal / same memory space, is forcing a diff the way?

I'm not sure :) From an ease-of-use perspective, if we force the diff (or add the option to), it takes the overhead of learning how to use a new library off of the user, which IMO is a good thing?

And if we're diffing ENLS, I don't think it matters whether we do it before or after passing data to the user, from a scale concern at least.
> We could nearly give the lightdiffstore instances, or maybe just a fullstate-like primary interface that could then be fed to the lightdiffstore in watchstate / the Local gRPC API

Totally... I'm not against that either; however, I think collapsing some of the modules (watchstate/fullstate) into a single one wouldn't be the worst thing? It took me quite some time to dig in and understand all the various pieces/modules (and where they live).
> Are you referring to a unidirectional gRPC stream (<https://grpc.io/docs/what-is-grpc/core-concepts/#server-streaming-rpc>) here or just the golang struct? Sorry getting overloaded with terms :)

golang's chan :)
> I'm fine with anything on the design of the transfer interface (struct interface etc) and just took the inspiration from Tim's Slides and this ancient comment <kubernetes/enhancements#2094 (comment)> from the KEP lol
>
> My end goal here is that it's super simple for folks to write backends, with or without any GRPC knowledge/familiarity.

And `fullstate` is probably the simplest (and non-gRPC) model.
> and if we're diffing ENLS I don't think it matters whether we do it before or after passing data to the user, from a scale concern at least.

Without a diff there's no hash, so no direct gRPC dependency on the way (we're still manipulating gRPC-generated models of course, but no gRPC imports show up). But yes, it's an O(N) optimization, so no change on the scaling side.
> Totally... I'm not against that either, however I think collapsing some of the modules (watchstate/fullstate) into a single one wouldn't be the worst thing? It took me quite some time to dig in and understand all the various pieces/modules (and where they live)

The goal of `watchstate` is to hold `lightdiffstore` instances and provide a consistent way to convert them to diffs, be they for the local *or* the global model. `fullstate` is a local-model only thing.
I think we could split the code from `server/jobs/store2localdiff/store2localdiff.go#Update` into one part that builds the local model (i.e. `FullStateFromStore(tx *proxystore.Tx) []*fullstate.ServiceEndpoints`), called from `Update(...)`, and a second part in `store2localdiff.Job.SendDiff` updating the lightdiffstore instances (hashing, keys... all of that goes out of the critical section).

It should then be easy to build a "fullstate callback" style library interface like (why am I coding in a mail client now?!):
```go
package finding_a_name_left_to_andrew

func WatchStore(store *proxystore.Store, callback fullstate.Callback) {
	var (
		rev    uint64
		closed bool
	)
	for {
		var model []*fullstate.ServiceEndpoints
		rev, closed = store.View(rev, func(tx *proxystore.Tx) {
			model = FullStateFromStore(tx)
		})
		if closed {
			return
		}
		ch := make(chan *fullstate.ServiceEndpoints, 1)
		go func() {
			for _, seps := range model {
				ch <- seps
			}
			close(ch)
		}()
		callback(ch)
	}
}
```
Awesome, thanks for all the feedback! Sorry, my bandwidth has been limited while at KubeCon. Let me take a stab at this and I'll keep asking questions here.
@astoycos: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is an attempt to make KPNG a bit more streamlined and align it with the following diagram, i.e. it stops KPNG from ever encoding into protobuf and allows it to be used more like a library.

Specifically, this PR adds a new job, `store2localinterface`, which passes the expected node-local state (services/endpoints) to a backend/consumer via a simple Go interface.

The existing flow (for a single-process deployment) looked like:

K8s API ---> kpng.ProxyStore ---> kpng.store2localdiff (uses the kpng.watchState package) ---[data encoded to protobuf]---> kpng.client.sink ---> backend

With the new job it looks something like this:

K8s API ---> kpng.ProxyStore ---> kpng.store2localinterface ---> backend interface

Please see the included example for more details :)

Also @mcluseau, please feel free to tell me if this makes no sense!!!!