Support LLama-2 prompt style. #5
Hey, thanks for the suggestion. I was thinking about it over the weekend. The thing is that I would assume this would be handled at a lower level, in langchain. For chat messages, the langchain LLM handles the "translation" from AIMessage and HumanMessage into the OpenAI format. On the other hand, I agree that it would look great from the developer's perspective to just define the prompt and let some other layer handle it... BTW, what is the reason here not to use the LLAMA2 tags directly?
Yes, I want prompts compatible with ChatGPT and LLAMA2 at the same time. ChatGPT is the market leader, but LLAMA2 is a game changer with all its derivative models coming up, so interoperability is nice to have. Moreover, they behave differently. I would like to send the same prompts to multiple LLMs to reduce hallucinations, or to generate more variants that are still reproducible with temperature=0, and/or to ask each LLM to check the other's answer. Another nice and complementary addition to your library would be to get the chain and kwargs without running the request to the LLM. Something like:

chain, kwargs = FakeCompanyGenerator().generate(company_business="sells cookies", _mode=RETURN_CHAIN)
chain.run(**kwargs)

That way we could benefit from Chains directly, for example to use a recent langchain addition like the LangChain Expression Language. These two improvements would be strong added value to langchain-decorators, IMO.
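To make the idea concrete, here is a toy sketch of what a `_mode=RETURN_CHAIN` switch could look like. `Mode`, the `prompt` decorator, and `generate` are hypothetical names for illustration only, not existing langchain-decorators API:

```python
from enum import Enum, auto

class Mode(Enum):
    RUN = auto()
    RETURN_CHAIN = auto()

def prompt(template: str):
    """Toy stand-in for a langchain-decorators-style prompt decorator."""
    def decorate(fn):
        def wrapper(_mode: Mode = Mode.RUN, **kwargs):
            if _mode is Mode.RETURN_CHAIN:
                # Hand back the "chain" (here just the raw template) and the
                # bound kwargs instead of calling the LLM.
                return template, kwargs
            return template.format(**kwargs)  # placeholder for the LLM call
        return wrapper
    return decorate

@prompt("List products of a company that {company_business}.")
def generate(company_business: str): ...

chain, kwargs = generate(_mode=Mode.RETURN_CHAIN, company_business="sells cookies")
```

The caller could then hand `chain` and `kwargs` to any LangChain composition (e.g. LCEL) instead of executing immediately.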
Hey, I was kind of busy, so I couldn't take a proper look at this until now. From the beginning, I was thinking that this should be handled similarly to the ChatMessage type... Actually, I expected this to be handled by langchain, but I checked and it is not, and after reviewing their code, I can see why. Anyway, you can define custom prompt blocks.
You can now define your own prompt blocks, and you can control the implementation via prompt_type. You could also build a dynamic builder that constructs the template based on an input kwarg parameter. Regarding the possibility to get the chain without running it: I admit that there are other benefits to getting the Chain itself. As a matter of fact, I tried to implement it since the early days, but I couldn't figure out how to do it in a meaningful way (so that it would be of any advantage)...
Just be aware that the chain returns something different from what executing the prompt function returns.
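The dynamic-builder idea can be illustrated generically, independent of the library's actual API (all names here are assumptions for the sketch): a single input parameter selects which wrapping the same logical prompt gets.

```python
def build_prompt(question: str, style: str = "chatgpt"):
    """Wrap one logical prompt for either a ChatGPT- or Llama-2-style model.

    The `style` argument plays the role of the input kwarg that a dynamic
    template builder would switch on.
    """
    if style == "llama2":
        # Llama-2 expects instruction tags inside a single text prompt.
        return f"<s>[INST] {question} [/INST]"
    # ChatGPT-style APIs take a structured message list instead of tags.
    return [{"role": "user", "content": question}]
```

The same pattern extends to system prompts and multi-turn history; the point is only that the switch lives in one builder rather than in every prompt definition.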
LLama-2-Chat is becoming a serious alternative to ChatGPT-3.5. However, prompts must be structured in a special way for it to be effective: see https://www.pinecone.io/learn/llama-2/
I think it could be quite easy for langchain-decorators to generate prompts that follow that style while staying compatible with ChatGPT. That would open up a lot of opportunities...
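For reference, the format from that guide can be sketched as a plain-Python translation from an OpenAI-style message list into Llama-2 tags (a minimal illustration, not library code):

```python
# Llama-2 chat delimiters, per https://www.pinecone.io/learn/llama-2/
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def to_llama2_prompt(messages: list) -> str:
    """Render [{'role': ..., 'content': ...}, ...] as a Llama-2 chat prompt."""
    msgs = list(messages)
    # Fold the system message into the first user turn, as Llama-2 expects.
    if msgs and msgs[0]["role"] == "system":
        system = msgs.pop(0)["content"]
        msgs[0] = {
            "role": msgs[0]["role"],
            "content": B_SYS + system + E_SYS + msgs[0]["content"],
        }
    parts = []
    # Each completed user/assistant pair becomes <s>[INST] u [/INST] a </s>.
    for user, answer in zip(msgs[::2], msgs[1::2]):
        parts.append(f"<s>{B_INST} {user['content'].strip()} {E_INST} "
                     f"{answer['content'].strip()} </s>")
    if len(msgs) % 2:  # trailing user message awaiting a completion
        parts.append(f"<s>{B_INST} {msgs[-1]['content'].strip()} {E_INST}")
    return "".join(parts)
```

A ChatGPT backend would consume the same message list directly, so one prompt definition could feed both.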
I did some experiments with the tags defined as fields in the model, but that could be made more generic. It runs on https://deepinfra.com/.
The JSON parsing fails sometimes, but I guess that would be easy to fix.
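On the JSON parsing: a common quick fix is to pull the outermost {...} block out of the reply before parsing, since models often wrap JSON in prose or markdown fences. A best-effort sketch, not library code:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Best-effort: parse the first {...} block in an LLM reply.

    Greedily matching from the first '{' to the last '}' strips any
    surrounding prose or ``` fences before handing off to json.loads().
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```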