Sherif akoush/sc 2543/alibi explain mlserver runtime #3707
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@@ -1091,12 +1092,6 @@ func (r *SeldonDeploymentReconciler) createIstioServices(components *components,
func (r *SeldonDeploymentReconciler) createServices(components *components, instance *machinelearningv1.SeldonDeployment, all bool, log logr.Logger) (bool, error) {
	ready := true
	for _, svc := range components.services {
		if !all {
@axsaucedo we now create the svc components without waiting for all pods to be ready, as the explainer pod needs to send a dummy request via the svc to the graph.
/test integration
/test notebook
/test notebooks
/test integration
/test integration
/retest
/test integration
/test integration |
@sakoush: The following tests failed, say /retest to rerun all failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.
* Initial POC to integrate mlserver alibi
* Create svc without waiting for deployment to be ready
* add explainer image_v2 in operator configmap.yaml
* map core explainer type in mlserver type
* wire up explainer image_v2
* add mlserver env constants and use them
* refactor to utils to mlserver.go
* allow to pass explainer init parameters + tests
* add initParameters to explainer in the CRD
* add initial explainer_examples_v2.ipynb
* use mlserver:0.6.0.dev2
* add integration test for explainer v2 (anchor tabular)
* add tests
* add v2 explainer doc reference
What this PR does / why we need it:
This PR adds the ability to run alibi models using mlserver, to serve v2 protocol endpoints for explanation.
Which issue(s) this PR fixes:
Fixes #3675
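For context, explanation requests against the new runtime would follow the V2 (Open Inference Protocol) request shape. The sketch below only builds such a payload; the tensor name, input shape, and endpoint path are hypothetical and not taken from this PR.

```python
import json

# Hypothetical iris-style input; the real tensor name, shape and datatype
# depend on the deployed explainer (assumptions for illustration only).
explain_request = {
    "inputs": [
        {
            "name": "explain",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

# Such a payload would be POSTed to a path like
# /v2/models/<explainer-name>/infer (the path is an assumption).
print(json.dumps(explain_request, indent=2))
```

The response would carry the explanation output in the same V2 envelope, which is what lets a generic v2-protocol client consume it.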
Special notes for your reviewer:
We had to relax the requirement that the deployment be ready before creating the services. This works around the fact that alibi explain models need to make a dummy call to the underlying inference model at load time.
The consequence is that the explainer pod will crash until the inference model has loaded successfully. We can fix that in a follow-up PR.
Also, whitebox explainers are not yet integrated; I will add a separate ticket for that.
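The crash-until-ready behaviour described above could later be smoothed with a retry loop around the load-time dummy call instead of relying on pod restarts. A minimal sketch, assuming a hypothetical `ping` callable standing in for the dummy request the explainer sends via the svc (this is an illustration, not code from this PR):

```python
import time

def wait_for_inference_model(ping, retries=5, delay=0.1):
    """Retry a dummy call until the underlying inference model responds.

    `ping` is a hypothetical callable standing in for the dummy request
    the explainer sends to the inference graph at load time.
    """
    for _ in range(retries):
        try:
            return ping()
        except ConnectionError:
            time.sleep(delay)
    raise RuntimeError("inference model never became ready")

# Simulate a model that only becomes reachable on the third attempt.
calls = {"n": 0}
def flaky_ping():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("model not loaded yet")
    return "ok"

print(wait_for_inference_model(flaky_ping))  # prints "ok" after two retries
```

With something like this in the runtime, the explainer container could wait for the graph instead of crash-looping until the model loads.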
Does this PR introduce a user-facing change?:
TODO: