Ensure request is chained before payload is logged #4301
Conversation
@@ -228,7 +228,6 @@ github.com/jstemmer/go-junit-report
 github.com/jtolds/gls
 github.com/julienschmidt/httprouter
 github.com/jung-kurt/gofpdf
-github.com/kedacore/keda
Is this from a missed license run from a different PR?
It seems that way. Either that, or some subdep got updated. The linter failed within the GH Action, and re-generating the licenses file locally fixed it.
@@ -100,6 +100,11 @@ func (p *PredictorProcess) transformInput(node *v1.PredictiveUnit, msg payload.S
 	modelName := p.getModelName(node)

+	if callModel || callTransformInput {
+		msg, err := p.Client.Chain(p.Ctx, modelName, msg)
This makes sense, also for the Tensorflow protocol - thinking it may be worth also implementing this for the transformOutput method?
Here, if the chain function fails and returns an error, the message will not be logged; is that a concern?
Good spot @axsaucedo, just added the same for transformOutput.
@SachinVarghese that's a good point. Although if the chaining fails, the request will also fail. Should we log the request if we know it's going to fail?
I had a look at this again, current behavior seems to be good.
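The ordering the thread settles on can be sketched as follows. This is a hypothetical stand-in, not the actual executor code: `chain`, `transformInput`, and `logged` are illustrative names for the real `Client.Chain` call and request logger. The point is that chaining happens first, so a payload that fails to chain (and would therefore fail the request anyway) is never logged as a "request".

```go
package main

import (
	"errors"
	"fmt"
)

// logged stands in for the request logger's sink.
var logged []string

// chain mimics converting the previous model's response into the next
// request; name and signature are illustrative, not the executor's API.
func chain(msg string) (string, error) {
	if msg == "" {
		return "", errors.New("chain failed: empty payload")
	}
	return "request:" + msg, nil
}

// transformInput sketches the fixed ordering: chain first, log second.
func transformInput(msg string) (string, error) {
	chained, err := chain(msg)
	if err != nil {
		// Chaining failed, so the whole call fails and nothing is logged.
		return "", err
	}
	// Only a successfully chained payload is recorded as the "request".
	logged = append(logged, chained)
	return chained, nil
}

func main() {
	out, _ := transformInput("response-from-previous-model")
	fmt.Println(out)         // prints "request:response-from-previous-model"
	fmt.Println(len(logged)) // prints 1
}
```

A failed chain short-circuits before the logging line, which matches the "current behavior seems to be good" conclusion above: there is nothing useful to log for a request that cannot be constructed.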
/test integration
@adriangonz: The following test failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository. I understand the commands that are listed here.
LGTM
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: SachinVarghese. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Failed to merge this PR due to:
What this PR does / why we need it:
Ensure the response is chained into the next request before the request is logged. Otherwise, we log the previous model's response in the inference graph as the "request". With the V2 payload, where requests and responses have different fields, this currently results in an error later on in the request logger.
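To make the failure mode concrete: in a V2-style protocol, requests carry an "inputs" field while responses carry "outputs", so a logger that expects request shape rejects a raw response. The sketch below is hypothetical (the `logAsRequest` helper is not the real request-logger code); it only illustrates why logging before chaining breaks.

```go
package main

import (
	"errors"
	"fmt"
)

// logAsRequest imitates a strict V2-protocol request logger: V2 requests
// carry "inputs", responses carry "outputs". Hypothetical stand-in only.
func logAsRequest(payload map[string]interface{}) error {
	if _, ok := payload["inputs"]; !ok {
		return errors.New("request logger: payload has no 'inputs' field")
	}
	return nil
}

func main() {
	response := map[string]interface{}{"outputs": []int{1, 2, 3}}
	request := map[string]interface{}{"inputs": []int{1, 2, 3}}

	// Before the fix: the previous model's response is logged as the
	// "request", and the logger errors out.
	fmt.Println(logAsRequest(response)) // non-nil error
	// After the fix: the payload is chained into request shape first.
	fmt.Println(logAsRequest(request)) // <nil>
}
```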
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer: