Preserved custom keys in RetrievalQA #6647

Closed
wants to merge 2 commits into from

Conversation

Atom-101

Description

Currently, RetrievalQA drops any extra keys in the prompt apart from the memory and the user input. So if I want a prompt like this:

"""You are a friendly bot talking to a human.
Use the following pieces of context to answer the users question.
----------------
Context relevant to the question is: \n{context}
----------------
Recent conversation history is: \n{history}
----------------
You also have the following information:  \n{extra_context1}\n{extra_context2}
----------------

Human: {query}
AI:
"""

Here, context is retrieved from the database, history comes from memory, and query is the last input provided by the user. If some extra context has to be supplied beyond the conversation history and the database, like extra_context1 and extra_context2, RetrievalQA cannot currently handle it.
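To see why the dropped keys matter: formatting the prompt requires a value for every placeholder, so a chain that only supplies context, history, and query fails on the extra ones. A framework-free sketch of the failure mode (plain str.format standing in for LangChain's prompt formatting):

```python
# The prompt template has placeholders beyond the ones a
# RetrievalQA-style chain knows how to fill.
template = (
    "Context relevant to the question is: \n{context}\n"
    "Recent conversation history is: \n{history}\n"
    "You also have the following information: \n{extra_context1}\n{extra_context2}\n"
    "Human: {query}\nAI:"
)

# Only context, history, and query are supplied by the chain.
known_inputs = {"context": "retrieved docs", "history": "prior turns", "query": "hi"}

try:
    template.format(**known_inputs)
except KeyError as e:
    # The extra placeholders are unfilled, so formatting raises.
    print(f"missing key: {e}")
```
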

With this change, one can simply provide extra keys in the .run() function of RetrievalQA, like so:

qa = KwargsRetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    chain_type='stuff',
    retriever=retriever,
    verbose=False,
    chain_type_kwargs={
        "verbose": True,
        "prompt": prompt_template,
        "memory": memory,
    }
)

query = input("Human: ")

response = qa.run(
    query=query,
    history=memory,
    extra_context1=extra_context1,
    extra_context2=extra_context2,
)

@Atom-101
Author

@rlancemartin, @eyurtsev can you take a look?

@dosubot dosubot bot added Ɑ: memory Related to memory module 🤖:improvement Medium size change to existing code to handle new use-cases labels Jul 14, 2023
@leo-gan
Collaborator

leo-gan commented Sep 18, 2023

@Atom-101 Hi, could you please resolve the merge conflicts? After that, ping me and I'll push this PR for review. Thanks!

@Atom-101
Author

This has been fixed in the current version of langchain. I am closing this. Thanks!

@Atom-101 Atom-101 closed this Oct 14, 2023