- Initial release of the package.
- fixed typo that caused broken async executions
- fix #2 (shoutout to @lukestanley who fixed it using Anthropic's LLM :)
- better pydantic and markdown output parsers... they can now self-fix casing problems with keys (e.g. if the field in the schema is "ceo_salary", the parser will now also accept "CEO salary", "CeoSalary" and any other variation)
- improved `pydantic` parser: it is now more tolerant of casing (accepts PascalCase, snake_case, camelCase field names, no matter what casing the model uses) - see the sketch below
- added boolean output parser
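A minimal sketch of the casing tolerance described above (the prompt, field and model reply are made up for the example; the usage assumes the library's pattern of driving the output parser by the return type annotation):

```python
from pydantic import BaseModel, Field
from langchain_decorators import llm_prompt

class Compensation(BaseModel):
    ceo_salary: float = Field(description="annual CEO salary in USD")

@llm_prompt
def company_compensation(company: str) -> Compensation:
    """
    Give me basic compensation info about {company}.
    """

# Whether the model replies with "ceo_salary", "CEO salary", "CeoSalary"
# or "ceoSalary" as the key, the parser now maps it back to `ceo_salary`.
```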
- support for OpenAI functions 🚀
- fix some issues with async prompts
- fixed streaming capture
- better handling for missing docs for llm_function
- support for parsing via OpenAI functions 🚀
- support for controlling function_call
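A rough sketch of how the function-calling support in the entries above can be used (the `llm_function` decorator and passing `functions=` at call time follow the pattern referenced elsewhere in this changelog; the exact keyword for controlling `function_call` is not shown because it isn't specified here):

```python
from langchain_decorators import llm_prompt, llm_function

@llm_function
def get_weather(city: str):
    """
    Get the current weather for a city.

    Args:
        city (str): name of the city to look up
    """
    return f"It is sunny in {city}"

@llm_prompt
def assistant(question: str, functions=None):
    """
    {question}
    """

# the functions are converted to OpenAI function schemas; the parsed
# function call (if any) comes back in the output wrapper
result = assistant(question="What is the weather in Prague?", functions=[get_weather])
```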
- add BIG_CONTEXT prompt type
- ton of bugfixes
- fix some LLM response scenarios that raised errors
- save AIMessage with function call in output wrapper
- fix logging that we are out of stream context when streaming is not on
- async streaming callback support
- LlmSelector for automatic selection of LLM based on the model context window and prompt length
- fixed streaming
- multiple little bugfixes
- option to set the expected generated token count as a hint for LLM selector
- add argument schema option for llm_function
New parameters in the `llm_prompt` decorator:
- support for `llm_selector_rule_key` to restrict the sub-selection of LLMs considered during selection. This enables you to enforce picking only some models (GPT-4, for instance) for particular prompts, or even for particular runs
- support for `function_source` and `memory_source` to point to properties/attributes of the instance the prompt is bound to (aka `self`) as the source of functions and memories, so you don't need to pass them in every time (see the sketch below)
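A hypothetical sketch of how these parameters might be used together; the attribute names are invented, and the exact semantics of `llm_selector_rule_key`, `function_source` and `memory_source` are assumptions based on this entry:

```python
from langchain_decorators import llm_prompt

class PersonalAssistant:
    def __init__(self, functions, memory):
        self.my_functions = functions   # llm_function-decorated callables
        self.my_memory = memory         # a LangChain memory object

    @llm_prompt(
        llm_selector_rule_key="GPT4",     # assumed: restrict selection to LLMs registered under this rule key
        function_source="my_functions",   # assumed: take functions from self.my_functions
        memory_source="my_memory",        # assumed: take memory from self.my_memory
    )
    def reply(self, user_message: str) -> str:
        """
        Reply to the user: {user_message}
        """
```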
- Support for dynamic function schema, which allows augmenting the function schema dynamically based on the input (more here)
- Support for a Functions provider, which allows controlling the function/tool selection that will be fed into the LLM (more here)
- Minor fix for JSON output parser for array scenarios
- Support for custom template building, to support any kind of prompt block types (#5)
- Support for retrieving a chain object with preconfigured kwargs for more convenient use with the rest of LangChain ecosystem
- support for a followup handle, for convenient simple followups to a response without using a history object (see the sketch below)
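The shape of the followup API sketched below is a guess based on this entry; the `FollowupHandle` name and its `followup` method are assumptions, not confirmed signatures:

```python
from langchain_decorators import llm_prompt, FollowupHandle

@llm_prompt
def brainstorm(topic: str) -> str:
    """
    Give me three ideas about {topic}.
    """

handle = FollowupHandle()
ideas = brainstorm(topic="changelog tooling", followup_handle=handle)

# continue the same conversation without managing a history object yourself
more_ideas = handle.followup("Give me three more, but cheaper to build.")
```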
- hotfix support for pydantic v2
- Hotfix of bug causing simple (without prompt blocks) prompts not working
- Minor bugfix of LlmSelector causing error in specific cases
- Fix verbose result logging when not in verbose mode
- fix langchain logging warnings for using deprecated imports
- Support for new OpenAI models (set as default; you can turn it off by setting the env variable `LANGCHAIN_DECORATORS_USE_PREVIEW_MODELS=0`)
- automatically turn on the new OpenAI JSON mode if `dict` is the output type / the JSON output parser is used (see the sketch below)
- added timeouts to default model definitions
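For example (a sketch; the prompt and input are illustrative):

```python
import os

# opt out of the new preview models being used as defaults
os.environ["LANGCHAIN_DECORATORS_USE_PREVIEW_MODELS"] = "0"

from langchain_decorators import llm_prompt

@llm_prompt
def extract_person(text: str) -> dict:
    """
    Extract the name and age from this text as JSON: {text}
    """

# with a `dict` return type the JSON output parser is used, and on the new
# OpenAI models the native JSON mode is switched on automatically
person = extract_person(text="John is 42 years old")
```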
- you can now reference input variables from `__self__` of the object the `llm_function` is bound to (not only the `llm_prompt`)
- a few bug fixes
- Input kwargs augmentation by implementing the llm_prompt function (check out the example: code_examples/augmenting_llm_prompt_inputs.py)
- support for automatic JSON fixing if `json_repair` is installed (even OpenAI's JSON format is not yet perfect) - see the sketch below
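Conceptually the fallback works roughly like this (a sketch of the idea using the third-party `json_repair` package, not the library's internal code):

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Parse JSON from an LLM reply, repairing it when json_repair is available."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        import json_repair              # optional dependency: pip install json_repair
        return json_repair.loads(raw)   # tolerates single quotes, trailing commas, etc.

# a slightly malformed reply still parses
print(parse_llm_json("{'name': 'John', 'age': 42,}"))
```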
- support for func_description passed as part of llm_function decorator
- allowed omitting func_description
- minor fixes
- critical bugfix - assistant messages without content (text), only with arguments, were ignored
- ability to pass in a function to augment function arguments before executing them in OutputWithFunctionCall
- removed the hard dependency on promptwatch
- add new models that support response_format="json"
- minor improvement in JSON output parser
- minor improvement in the llm_function docs parser to handle arg descriptions not starting with a letter
- fix json/dict output for async
- support for the llm control kwarg to be able to pass in an llm dynamically
- hotfix, unbound variable error
- support for Runnables as LLMs, which allows using the `llm.with_fallbacks` syntax when defining LLMs (see the sketch below)
- support for passing an llm directly as a kwarg to the prompt
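A sketch of what this enables (the model names are just examples; `with_fallbacks` is the LangChain Runnable method this entry refers to, and the `llm=` parameter on the decorator and on the call is assumed from the entries above):

```python
from langchain_openai import ChatOpenAI
from langchain_decorators import llm_prompt

# any Runnable can now be used where an llm is expected,
# including one wrapped with fallbacks
llm_with_fallbacks = ChatOpenAI(model="gpt-4o").with_fallbacks(
    [ChatOpenAI(model="gpt-3.5-turbo")]
)

@llm_prompt(llm=llm_with_fallbacks)
def summarize(text: str) -> str:
    """
    Summarize this text: {text}
    """

# ...or pass the llm directly as a kwarg for a single call
summary = summarize(text="a long article ...", llm=llm_with_fallbacks)
```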
- fix `llm.with_fallbacks` (runnable) for sync prompt