
[code_search] Compute an interpretable loss/quality metric #254

Closed
jlewi opened this issue Sep 28, 2018 · 4 comments

Comments

@jlewi
Contributor

jlewi commented Sep 28, 2018

We need an evaluation metric that gives us some qualitative sense of how well a model is performing.

From a performance standpoint, what we care about is whether a search query is correctly mapped to the code that goes with it. So for test/evaluation we can compute the number of correctly and incorrectly matched examples.

Given a training example (Qi, Ci), where Qi is the query and Ci is the code that matches it, the example is correctly classified if

distance(Qi, Ci) <= distance(Qi, Cj) for every j not equal to i in some set of sampled code examples

Related to #239 Train a high quality model
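A minimal sketch of that count, assuming cosine distance over paired query/code embedding matrices (the function names and the negative-sampling scheme here are illustrative, not part of the existing code):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def correct_match_rate(query_emb, code_emb, num_negatives=100, seed=0):
    """Fraction of (Qi, Ci) pairs for which the matching code is at least as
    close to the query as every sampled non-matching code example.

    query_emb, code_emb: arrays of shape (n, d); row i of each forms a pair.
    """
    rng = np.random.default_rng(seed)
    n = query_emb.shape[0]
    correct = 0
    for i in range(n):
        pos = cosine_distance(query_emb[i], code_emb[i])
        # Sample indices j != i of non-matching code examples.
        candidates = np.delete(np.arange(n), i)
        neg_idx = rng.choice(candidates, size=min(num_negatives, n - 1), replace=False)
        neg = [cosine_distance(query_emb[i], code_emb[j]) for j in neg_idx]
        correct += int(pos <= min(neg))
    return correct / n
```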

@cwbeitel
Contributor

So with this metric you're taking the mean distance to all the non-matching examples (that you sample) and asking whether the distance to the matching example is much less? That's a good measure. You can also relate it to the distance you plan to use when looking up queries. Also, maybe a typo: I think you meant distance(Qi, Ci).
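For illustration, that reading of the metric could be computed roughly as below, reusing cosine_distance from the sketch above (margin_over_mean_negative is a hypothetical name):

```python
def margin_over_mean_negative(query_vec, true_code_vec, sampled_code_vecs):
    """Mean distance to sampled non-matching code minus the distance to the
    matching code; a large positive margin means the match is much closer."""
    pos = cosine_distance(query_vec, true_code_vec)
    mean_neg = np.mean([cosine_distance(query_vec, c) for c in sampled_code_vecs])
    return mean_neg - pos
```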

@jlewi
Contributor Author

jlewi commented Sep 28, 2018

Yup, it's a typo.

@jlewi
Contributor Author

jlewi commented Sep 29, 2018

I think we can use our existing inference structure to compute this.

  • We already compute the embeddings for all the code examples
    • These should be available both in nmslib and BQ
  • We have TFServing to compute the embedding of the search query
  • We use nmslib to look up the nearest neighbor

So we need to make the following changes (a rough sketch of the lookup/evaluation step follows the list):

  • Use nmslib to return the K most similar docs and their feature embeddings
  • Look up the actual embedding for the code that goes with the search query
  • Compute the distance between the query embedding and the actual code embedding
  • If we do the above as an RPC, we can write a Beam job to send a batch of these requests and write the results to BigQuery for analysis
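A minimal sketch of the per-query check, assuming an nmslib index already built over the code embeddings with integer ids in insertion order and a lookup table of stored embeddings (evaluate_query and code_emb_by_id are hypothetical names; the TF Serving call and the Beam/BigQuery plumbing are omitted):

```python
import nmslib  # approximate nearest-neighbor index over the code embeddings
import numpy as np

def evaluate_query(index, query_emb, true_code_id, code_emb_by_id, k=10):
    """Return the top-K lookup result for one query plus the distance from the
    query embedding to the embedding of the code that actually matches it.

    index:           nmslib index over all code embeddings (cosine space assumed)
    code_emb_by_id:  dict from code-example id to its stored embedding,
                     e.g. loaded from BigQuery
    """
    ids, dists = index.knnQuery(query_emb, k=k)
    true_emb = code_emb_by_id[true_code_id]
    true_dist = 1.0 - np.dot(query_emb, true_emb) / (
        np.linalg.norm(query_emb) * np.linalg.norm(true_emb))
    ids = list(ids)
    return {
        "in_top_k": true_code_id in ids,
        "rank": ids.index(true_code_id) if true_code_id in ids else None,
        "distance_to_true_code": float(true_dist),
        "nearest_distance": float(dists[0]),
    }
```

The returned dict is the kind of per-example record a Beam job could emit and write to BigQuery for analysis.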

@stale

stale bot commented Jun 27, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot closed this as completed Jul 4, 2019