CRD implementation #20
Conversation
…builder output to be top level. Left much of the boilerplate for historical reference. Would like to delete much of the config, and the output as well.
/label tide/merge-method-squash
… a Service objectRef, and regenerating output. Also updating proposal to reflect this change.
/assign Failed my first one, sorry for the abusive ping
@Joffref: GitHub didn't allow me to assign the following users: reminder, --, Just, as, a. Note that only kubernetes-sigs members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign @Joffref
api/README.md (Outdated)
- kubectl version v1.11.3+.
- Access to a Kubernetes v1.11.3+ cluster.

### To Deploy on the cluster
can we update this with the instructions to deploy the CRDs?
Added to the top-level README
// that should be included in the LLMServerPool. ModelServers should not
// be with any other Service or LLMServerPool, that behavior is not supported
// and will result in sub-optimal utilization.
ModelServerSelector metav1.LabelSelector `json:"modelServerSelector,omitempty"`
Thinking out loud: does that imply we manage pod availability directly, e.g. readiness? For example, in a multi-host setup we could probably also select a headless service to route to.
Also, would it add any extra operational flexibility if we allowed more than one selector?
The selector defines the pods that are running the servers. The expectation is that those pods define a readiness probe and that the readiness status is reflected on the pod status; the extension is then expected to send traffic only to pods whose status is Ready.

> Also, would it add any extra operational flexibility if we allowed more than one selector?

The LabelSelector type is already very flexible; see https://github.com/kubernetes/kubernetes/blob/81ce66f059ec9c07cccf4069c8913e31959dea78/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L1256
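A minimal sketch, not part of this PR, of how an extension might combine that single LabelSelector with the pod Ready condition; it uses only standard apimachinery and core/v1 helpers, and the label keys, values, and function name are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// selectsReadyPod reports whether pod matches the pool's modelServerSelector
// and is currently Ready, i.e. eligible to receive traffic from the extension.
func selectsReadyPod(sel *metav1.LabelSelector, pod *corev1.Pod) (bool, error) {
	// Convert the API LabelSelector (matchLabels + matchExpressions) into a
	// labels.Selector that can be evaluated against pod labels.
	s, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		return false, err
	}
	if !s.Matches(labels.Set(pod.Labels)) {
		return false, nil
	}
	// Respect the pod's readiness probe via the Ready condition.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// A single selector already supports set-based requirements, so "more
	// than one selector" can often be expressed with matchExpressions.
	sel := &metav1.LabelSelector{
		MatchLabels: map[string]string{"app": "model-server"},
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: "model", Operator: metav1.LabelSelectorOpIn, Values: []string{"llama", "mixtral"}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "model-server", "model": "llama"}},
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		},
	}
	ready, err := selectsReadyPod(sel, pod)
	fmt.Println(ready, err) // true <nil>
}
```

Set-based matchExpressions cover most of what multiple selectors would buy, which is the point about the type's flexibility above.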
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: ahg-g, kfswain
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
This PR is rather large, but much of it is generated code from these 2 tools:
The actual changes were:
`make generate` command
Controller logic for the LLMServerPool is intended to be handled by Gateway implementations: #19
And controller logic for the LLMService will be handled by the ext-proc deployment.