
Feedback API is returning empty string, how does send_feedback work? #910

Closed

vackysh opened this issue Oct 4, 2019 · 8 comments

@vackysh

vackysh commented Oct 4, 2019

Hi Seldon experts,

I have a requirement to predict whether a customer will pay off a loan or not, and the predict API is working fine without any issue. I am able to predict the output ("Charged Off" / "Fully Paid"). Now I want to implement send_feedback, which I am assuming will verify whether the truth matches the predicted value and give a reward if they match.

def predict(self, X, feature_names):
    logging.warning(X)
    prediction = self._cl_model.predict(X)
    logging.warning(prediction)
    return prediction

def send_feedback(self,X, feature_names, reward, truth, routing=None):
    print("Send feedback called")
    print('Original Request {}'.format(X))
    print('features names {}'.format(feature_names))
    print('Truth {}'.format(truth))
    print('Reward {}'.format(reward))
    return []

I don't understand what the return value from send_feedback should be and what we can interpret from it. There is not much documentation available on how it works other than code snippets. Could you please explain a little what the purpose of send_feedback is?
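For context, this is roughly what I expected send_feedback to do; the comparison and reward handling here are just my own assumption of how it might work, not working code:

def send_feedback(self, X, feature_names, reward, truth, routing=None):
    # Re-run the prediction on the submitted features and compare with the truth labels.
    predicted = self._cl_model.predict(X)
    matches = sum(1 for p, t in zip(predicted, truth) if p == t)
    print('Matched {} of {} labels, reward received: {}'.format(matches, len(truth), reward))
    # Returning an empty list, as in my code above, since there seems to be no
    # expected response body from send_feedback.
    return []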

Below is the input I am giving to the Feedback API:

{
  "request": {
    "data": {
      "names": ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l"],
      "ndarray": [
        [
          -0.360401125437509,
          -1.6118801888587946,
          4.578304752033222,
          -0.6537760204057684,
          -0.8343145759984982,
          1.117818816508468,
          0.5250776596051238,
          -0.022866028481639277,
          -0.997698231112318,
          1.3715932476110648,
          -0.34868264699412893,
          -0.49288047614152425
        ],
        [
          2.7762444096949888,
          0.6203935049961723,
          -0.2641776865169515,
          -0.26441276083322596,
          -0.8343145759984982,
          0.0696409382326457,
          -0.3541843043004386,
          -0.6884393702300454,
          1.1832526792948979,
          -1.2232972516480227,
          -0.34868264699412893,
          0.49361481432592424
        ],
        [
          2.7762444096949888,
          0.6203935049961723,
          -0.24608622037043779,
          1.0307550242583157,
          -0.8343145759984982,
          -0.9785369400431766,
          1.4043396235106862,
          0.275033793803416,
          1.211761841522443,
          1.9704141320554696,
          -0.34868264699412893,
          -0.6960790094245382
        ]
      ]
    }
  },
  "response": {
    "data": {
      "names": ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l"],
      "ndarray": [
        "Charged Off",
        "Fully Paid",
        "Fully Paid"
      ]
    }
  },
  "reward": 1
}

I would appreciate your inputs.

Regards,
Vackysh

@ukclivecox
Contributor

At present send_feedback does not return a body payload, just success. We should look at adding an informative body as well?

@vackysh
Author

vackysh commented Oct 4, 2019

Hi @cliveseldon ,

Thanks for your reply.

What exactly is the purpose of send_feedback?
We are providing all of the parameters to the function (from its definition), so how does it decide on success and failure?

Could you please reply, as it's crucial for us to implement a feedback loop for our machine learning model?

Our requirement is to get the data, predicted and truth values into CSV format and then feed them back into the training model to check the accuracy. We found that send_feedback has truth values as a parameter, which we assumed it would compare with the predicted values (predicted from the input data) and give rewards if they match. It seems my understanding of send_feedback is incorrect, as it does not return anything.
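For illustration, below is a minimal sketch of the kind of handler we have in mind; the file path and column layout are placeholders we made up, not anything Seldon provides:

import csv
import os

# (method of the same model class as predict above)
def send_feedback(self, X, feature_names, reward, truth, routing=None):
    # Append each feedback example (features, prediction, truth, reward) to a CSV
    # file so it can later be fed back into training. The path is a placeholder.
    path = '/mnt/feedback/feedback_log.csv'
    predicted = self._cl_model.predict(X)
    write_header = not os.path.exists(path)
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(list(feature_names) + ['predicted', 'truth', 'reward'])
        for row, p, t in zip(X, predicted, truth):
            writer.writerow(list(row) + [p, t, reward])
    return []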

Please provide clarification on the questions I mentioned.

Regards,
Vackysh

@ukclivecox
Contributor

Hi @vackysh

The feedback call is meant for online learning and multi-armed bandit scenarios, where users send feedback to the model/MABs to allow them to update. We don't do anything automatic with the data, so it's up to the model/router creator to add the appropriate methods (e.g., via the Python wrapper) to process the requests.
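For example, if the underlying model supports incremental updates, the wrapped class could apply the feedback itself. This is only a rough sketch using scikit-learn's partial_fit as an illustration, not something Seldon does for you:

import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineLoanModel:
    def __init__(self):
        # Illustrative incremental classifier; in practice load/initialise your own model.
        self._clf = SGDClassifier()
        self._classes = np.array(['Charged Off', 'Fully Paid'])
        self._initialised = False

    def predict(self, X, feature_names):
        if not self._initialised:
            # Fallback before the first feedback update has been applied.
            return np.array(['Fully Paid'] * len(X))
        return self._clf.predict(np.array(X))

    def send_feedback(self, X, feature_names, reward, truth, routing=None):
        # Use the ground-truth labels to update the model online.
        self._clf.partial_fit(np.array(X), np.array(truth).ravel(), classes=self._classes)
        self._initialised = True
        return []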

We'd love to hear your use case in more detail, as we want to gather requirements for improving this for the roadmap.

@vackysh
Author

vackysh commented Oct 7, 2019

Hi @cliveseldon,

I agree and understand that automating things with the data won't be simple to implement.

Still, I don't understand how the feedback call sends feedback to the model. As you said, it only indicates whether the call was a success or not.

In my scenario, I send inputs to send_feedback using the Feedback API, and I can see the call is made and the values are printed, but there is nothing showing that it sent feedback to the model.

Could you please explain how send_feedback works in detail?

Regards,
Vackysh

@ukclivecox
Contributor

If it is not reaching the code, this is indeed a major bug. We will need to test more closely to validate.

ukclivecox added this to the 0.5.x milestone Oct 8, 2019
@vackysh
Author

vackysh commented Oct 11, 2019

Hi @cliveseldon ,

I would like to explain my use case to understand more about the relevance of send_feedback.

We are automating an end-to-end machine learning pipeline, which completes with a feedback loop.
As part of the feedback loop, we are looking to re-train the model with a real-time data set, preferably when we see the model is not performing as expected (accuracy drop).

  • We have a classification model which has been deployed on seldon-core. We are using the predict API call to perform predictions on the model.

We assume the Feedback API is the API to perform online training (updating the model to learn from real-time data).
We are validating the feasibility of send_feedback:

  1. How do we use send_feedback in the automation process? Where do we get the reward and truth values to be provided in the request payload? Is there any callback API to get these values from?
  2. How do we know the model has been updated with the new learning?
    Additionally,
  3. Is there any API in seldon-core to fetch the serving model's performance (accuracy)?
  4. Is there any alerting system which can be activated if model performance decreases?

Regards,
Vackysh

@ukclivecox
Contributor

Sorry for the late reply.


> 1. How do we use send_feedback in the automation process? Where do we get the reward and truth values to be provided in the request payload? Is there any callback API to get these values from?

It's assumed that you save the prediction response and can send it back when you do feedback. The reward is up to you. (A sketch of what the feedback payload could look like is at the end of this comment.)

> 2. How do we know the model has been updated with the new learning?

You can add your own logging and/or custom metrics for each update, perhaps (see the custom-metrics sketch at the end of this comment).

> Additionally,
> 3. Is there any API in seldon-core to fetch the serving model's performance (accuracy)?

There is accuracy by default in the Prometheus metrics that are exposed.

> 4. Is there any alerting system which can be activated if model performance decreases?

We don't provide an alerting system by default, but if you are using Grafana, it has this capability.
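For reference, a rough sketch of how you might assemble and send the feedback payload from a saved prediction. The URL below is only a placeholder (the exact path depends on your ingress and API version), and the field layout follows the Feedback message (request, response, truth, reward):

import requests

# The request you originally sent to predict, and the response you got back,
# both saved by your client at prediction time (shortened here).
saved_request = {'data': {'names': ['a', 'b'], 'ndarray': [[0.1, 0.2]]}}
saved_response = {'data': {'ndarray': ['Charged Off']}}
# The ground truth, once known, in the same shape as the response.
truth = {'data': {'ndarray': ['Fully Paid']}}

feedback = {
    'request': saved_request,
    'response': saved_response,
    'truth': truth,
    'reward': 0,  # your own scoring, e.g. 1 if the prediction matched the truth, else 0
}

# Placeholder endpoint -- replace with your deployment's feedback URL.
r = requests.post('http://<ingress-host>/seldon/<namespace>/<deployment>/api/v0.1/feedback',
                  json=feedback)
print(r.status_code, r.text)  # currently succeeds with an empty body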

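And a sketch of exposing a custom accuracy figure through the Python wrapper's metrics hook, so it shows up alongside the default Prometheus metrics; the metric keys and the accuracy bookkeeping here are just an example:

class MyModel:
    def __init__(self):
        # Your trained classifier would be loaded here as self._cl_model (omitted).
        self._correct = 0
        self._total = 0

    def predict(self, X, feature_names):
        return self._cl_model.predict(X)

    def send_feedback(self, X, feature_names, reward, truth, routing=None):
        # Track how often the model's predictions agree with the ground truth.
        predicted = self._cl_model.predict(X)
        self._total += len(truth)
        self._correct += sum(1 for p, t in zip(predicted, truth) if p == t)
        return []

    def metrics(self):
        # Called by the Python wrapper; values are exported to Prometheus.
        accuracy = self._correct / self._total if self._total else 0.0
        return [
            {'type': 'GAUGE', 'key': 'feedback_accuracy', 'value': accuracy},
            {'type': 'COUNTER', 'key': 'feedback_requests_total', 'value': 1},
        ]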

@ukclivecox
Contributor

I'm going to close this now. If there are particular updates you would like please open specific issues.
