Export :tensorflow:serving:... metrics by signature names #1959
Comments
Are you still looking for a resolution? We are planning to prioritise issues based on community interest. Please let us know if this issue still persists with the latest TF Serving 1.12.1 release so that we can work on fixing it. Thank you for your contributions.
@singhniraj08 I wrote a PR for this issue: #2152
@jeongukjae, thank you for your contribution. We will discuss this internally and update this thread.
@singhniraj08 Thank you. I also filed a related issue: #2157
Feature Request
If this is a feature request, please fill out the following form in full:
Describe the problem the feature is intended to solve
Currently, TensorFlow Serving exports metrics only per model. We cannot collect metrics by signature, even when the latencies of different signatures differ significantly.
Related code:
- serving/tensorflow_serving/servables/tensorflow/util.h, lines 118 to 119 (at commit 21360c7)
- serving/tensorflow_serving/servables/tensorflow/util.h, lines 122 to 123 (at commit 21360c7)
Describe the solution
It would be better if runtime latency and request latency were recorded with signature names.
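To illustrate the idea (this is not TF Serving's actual C++ API, whose latency histograms live in tensorflow_serving/servables/tensorflow/util.h), a minimal sketch of a collector that keys latency samples by (model, signature) instead of by model alone; the class and method names here are hypothetical:

```python
from collections import defaultdict


class LatencyCollector:
    """Toy collector recording latencies keyed by (model, signature).

    Hypothetical sketch of the requested behavior: keying by signature
    lets slow and fast signatures be monitored separately instead of
    being blended into a single per-model metric.
    """

    def __init__(self):
        # (model_name, signature_name) -> list of latency samples in microseconds
        self._latencies = defaultdict(list)

    def record_runtime_latency(self, model_name, signature_name, latency_usec):
        self._latencies[(model_name, signature_name)].append(latency_usec)

    def mean_latency(self, model_name, signature_name):
        samples = self._latencies[(model_name, signature_name)]
        return sum(samples) / len(samples) if samples else 0.0


collector = LatencyCollector()
collector.record_runtime_latency("my_model", "serving_default", 1000)
collector.record_runtime_latency("my_model", "serving_default", 3000)
collector.record_runtime_latency("my_model", "embed", 100)
print(collector.mean_latency("my_model", "serving_default"))  # 2000.0
print(collector.mean_latency("my_model", "embed"))            # 100.0
```

With per-model-only metrics, the two signatures above would collapse into one mean of about 1366.7 µs, hiding the fact that one signature is an order of magnitude slower than the other.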
Describe alternatives you've considered
Additional context