Conversations with some language models on the model-config branch result in the model prepending its output with AI: or Jupyter AI:. Take this example with ai21:j2-jumbo-instruct:
It looks like this is caused by a failure to include an empty AI message, which indicates to the language model that it should not generate its own message prefix. This can be observed from the server logs:
(DefaultActor pid=40821) > Entering new ConversationChain chain...
(DefaultActor pid=40821) Prompt after formatting:
(DefaultActor pid=40821) System: The following is a friendly conversation between a human and an AI, whose name is Jupyter AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
(DefaultActor pid=40821) Human: Hello!
(DefaultActor pid=40821)
(DefaultActor pid=40821) > Finished chain.
Contrast this with a log from the documentation examples:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished ConversationChain chain.
This is happening because we are overriding the default prompt template defined in langchain (langchain/chains/conversation/prompt.py):
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
{history}
Human: {input}
AI:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)
When the default is overridden, an empty AI message must be supplied.
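A minimal sketch of the fix, using plain str.format() to stand in for PromptTemplate.format() (the rendering behavior is the same, and this keeps the example dependency-free). The key point is that the custom template must end with an empty "AI:" line, just like _DEFAULT_TEMPLATE above, so the model completes that turn instead of inventing its own "AI:" or "Jupyter AI:" prefix:

```python
# Sketch of the fix: an overridden conversation template must still end with
# an empty "AI:" line, mirroring langchain's _DEFAULT_TEMPLATE. Plain
# str.format() stands in for PromptTemplate.format() here.
FIXED_TEMPLATE = """System: The following is a friendly conversation between a human and an AI, whose name is Jupyter AI.

Current conversation:
{history}
Human: {input}
AI:"""

prompt = FIXED_TEMPLATE.format(history="", input="Hello!")

# The rendered prompt now ends with "AI:", so the model's completion is the
# assistant message body itself, rather than starting with its own prefix.
print(prompt.endswith("AI:"))  # → True
```

Without the trailing "AI:" line, the model sees the conversation ending at "Human: Hello!" and typically emits its own speaker label, which is exactly the prefix observed in the issue.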