Support local vector search for the selection of key text segments of page and video context #36801
QA verification notes:

Verification passed. Scenario 2 is verified as part of #41481 (comment).
Component updater: the dir name would be "AIChatLocalModels".
Page content refine: verified (truncated/refined indicator doesn't apply on iOS).
Recording: 2024-10-22_17-04-51.mp4
Key use case to support:
"What is the Gnarly nutrition discount code in this video?"
The answer is revealed at 26:56 of the video, but because of our truncation method, Leo can't answer it.
Oftentimes our summarization and Q&A flows lead to truncation of the input text we send up. For now we warn about it, but users see this warning very often, and often we can't produce the answers people need because they aren't within the first N characters.
Your mission, should you choose to accept it, is to support this local vector search.
This should only be employed when the context is too long. For summarization tasks, it might be useful to select dissimilar segments instead. We could perhaps detect whether the input query is a summarization query via a similar embedding-similarity comparison (a sketch of that idea follows).
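As a rough illustration of that similarity-comparison idea (a sketch only, not existing Brave code): `EmbedText` here is a stand-in hashed bag-of-words embedding that a real local embedding model would replace, and `LooksLikeSummarizationQuery`, the prototype phrases, and the 0.5 threshold are all made up for the example.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

constexpr std::size_t kDims = 256;

// Stand-in embedding: a hashed bag-of-words vector. A real local embedding
// model (e.g. run via TFLite or custom inference code) would replace this.
std::vector<float> EmbedText(const std::string& text) {
  std::vector<float> vec(kDims, 0.0f);
  std::istringstream stream(text);
  std::string token;
  while (stream >> token)
    vec[std::hash<std::string>{}(token) % kDims] += 1.0f;
  return vec;
}

float CosineSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
  float dot = 0.0f, na = 0.0f, nb = 0.0f;
  for (std::size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return (na == 0.0f || nb == 0.0f) ? 0.0f : dot / (std::sqrt(na) * std::sqrt(nb));
}

// Heuristic: treat a query as a summarization request if its embedding is
// close enough to any of a few prototype summarization phrases.
bool LooksLikeSummarizationQuery(const std::string& query, float threshold = 0.5f) {
  static const std::vector<std::string> kPrototypes = {
      "summarize this page", "give me a summary of this",
      "what is this video about", "tl;dr"};
  const std::vector<float> query_vec = EmbedText(query);
  for (const std::string& prototype : kPrototypes) {
    if (CosineSimilarity(EmbedText(prototype), query_vec) >= threshold)
      return true;
  }
  return false;
}
```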
It likely involves steps like chunking the page text or transcript, embedding the chunks and the query, and selecting the most similar chunks to send up (see the sketch below).
I'm not sure if we'd use TFLite or just custom code for this type of stuff, so it involves some investigation.
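For illustration, here is a rough sketch of what those selection steps could look like, building on the `EmbedText`/`CosineSimilarity` helpers in the sketch above. `ChunkText`, `SelectRelevantChunks`, the 64-word chunk size, and `top_k` are hypothetical choices for the example, not a decided design.

```cpp
#include <algorithm>
#include <utility>

// Split the page text or video transcript into fixed-size word chunks.
std::vector<std::string> ChunkText(const std::string& text,
                                   std::size_t words_per_chunk) {
  std::vector<std::string> chunks;
  std::istringstream stream(text);
  std::string token, current;
  std::size_t count = 0;
  while (stream >> token) {
    current += token + " ";
    if (++count == words_per_chunk) {
      chunks.push_back(current);
      current.clear();
      count = 0;
    }
  }
  if (!current.empty())
    chunks.push_back(current);
  return chunks;
}

// Pick the top_k chunks most similar to the query, instead of sending only
// the first N characters of the context.
std::vector<std::string> SelectRelevantChunks(const std::string& context,
                                              const std::string& query,
                                              std::size_t top_k) {
  std::vector<std::string> chunks = ChunkText(context, /*words_per_chunk=*/64);
  const std::vector<float> query_vec = EmbedText(query);
  std::vector<std::pair<float, std::size_t>> scored;
  for (std::size_t i = 0; i < chunks.size(); ++i)
    scored.emplace_back(CosineSimilarity(EmbedText(chunks[i]), query_vec), i);
  std::sort(scored.begin(), scored.end(),
            [](const auto& a, const auto& b) { return a.first > b.first; });
  std::vector<std::string> selected;
  for (std::size_t i = 0; i < std::min(top_k, scored.size()); ++i)
    selected.push_back(chunks[scored[i].second]);
  return selected;
}
```

The selected chunks could be re-sorted back into document/transcript order before being sent up, and for video the chunks could carry their timestamps so an answer like the 26:56 example above stays retrievable.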