[FEATURE] Support conversational search in ML Inference Search Response Processor with memory #3242
Comments
Hey, is the memory ID an index? If so, what are the responsibilities of this index? It looks like the ML inference processor will load the index and store info in it on return. Is there a restriction on this memory ID, or can any other index be used? And how do we know when these memories should be cleaned up?
I worry that this might blur the line between the ML inference response processor and the existing RAG processor. We may be adding too much to the ML inference processor interface.
Hi @austintlee , we are working on the OpenSearch Flow project; you can refer to the tutorial here: https://github.com/opensearch-project/dashboards-flow-framework/blob/main/documentation/tutorial.md. OpenSearch Flow aims to use the ML Inference Processors (ingest/search) as generic processors to run inference during ingest and search within a workflow, simplifying setup and configuration. Of course, if users are familiar with the RAG processor or other existing processors, they can use those as well; there are drop-down options in the processor list that users can pick and combine, so it is up to their choice for their use case.
Yes, the memory would be stored in an index. The memory and message APIs have indeed been released; check out these docs: https://opensearch.org/docs/latest/ml-commons-plugin/api/memory-apis/get-memory/ and https://opensearch.org/docs/latest/ml-commons-plugin/api/memory-apis/get-message/
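For reference, a memory and the messages it contains can be fetched roughly like this (a sketch based on those docs; `<memory_id>` is a placeholder):

```json
GET /_plugins/_ml/memory/<memory_id>
GET /_plugins/_ml/memory/<memory_id>/messages
```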
Is your feature request related to a problem?
To support conversational search, when sending a request to the remote model we need to send not only the question but also the historical context.
For example,
OpenAI API:
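A minimal sketch of such a request, assuming the OpenAI Chat Completions endpoint, with the conversation history carried in the `messages` array (model name and content are placeholders):

```json
POST https://api.openai.com/v1/chat/completions
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is OpenSearch?" },
    { "role": "assistant", "content": "OpenSearch is an open-source search and analytics suite." },
    { "role": "user", "content": "How do I run conversational search on it?" }
  ]
}
```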
Bedrock Converse API:
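Similarly, a sketch of a Bedrock Converse request, where each prior turn is a message with a list of content blocks (model ID and content are placeholders):

```json
POST /model/anthropic.claude-3-5-sonnet-20240620-v1:0/converse
{
  "messages": [
    { "role": "user", "content": [ { "text": "What is OpenSearch?" } ] },
    { "role": "assistant", "content": [ { "text": "OpenSearch is an open-source search and analytics suite." } ] },
    { "role": "user", "content": [ { "text": "How do I run conversational search on it?" } ] }
  ]
}
```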
In the ML inference search response processor, introduce a new parameter, "conversational_search", which can be true or false. When it is true and the input_map is configured to read the memory ID from the query extension, the ML inference processor will read the memory via the GetConversationsRequest action and send the message list together with the question to the remote model API.
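A rough sketch of what such a search pipeline definition could look like; the `conversational_search` flag is the new parameter proposed here, and the model ID and mapped field names are illustrative assumptions:

```json
PUT /_search/pipeline/conversational_search_pipeline
{
  "response_processors": [
    {
      "ml_inference": {
        "model_id": "<llm_model_id>",
        "conversational_search": true,
        "input_map": [
          {
            "question": "ext.ml_inference.question",
            "memory_id": "ext.ml_inference.memory_id"
          }
        ],
        "output_map": [
          {
            "ext.ml_inference.llm_response": "answer"
          }
        ]
      }
    }
  ]
}
```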
When searching, users can use the ML inference search extension to ask a question.
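For example, a search request could carry the question and the memory ID in the `ext.ml_inference` section (the exact field names here are part of this proposal, not an existing API):

```json
GET /my-index/_search?search_pipeline=conversational_search_pipeline
{
  "query": {
    "match": { "text": "population of nyc" }
  },
  "ext": {
    "ml_inference": {
      "question": "What is the population of New York City?",
      "memory_id": "<memory_id>"
    }
  }
}
```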
To reuse the current memory and message APIs, we propose adding a new field to the interaction and message APIs to allow custom messages.
Proposed new interface for message and interaction:
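One possible shape, assuming the existing create-message API is extended with a hypothetical field (here called `custom_messages`) that stores provider-formatted turns alongside the usual `input`/`response` pair:

```json
POST /_plugins/_ml/memory/<memory_id>/messages
{
  "input": "How do I run conversational search on it?",
  "response": "You can configure a search pipeline with an ML inference response processor.",
  "custom_messages": [
    { "role": "user", "content": "What is OpenSearch?" },
    { "role": "assistant", "content": "OpenSearch is an open-source search and analytics suite." }
  ]
}
```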
What alternatives have you considered?
A clear and concise description of any alternative solutions or features you've considered.
Do you have any additional context?
Add any other context or screenshots about the feature request here. (See also #1150 and #1877.)