
Updating tests as requested by @brainlid #1

Open

wants to merge 58 commits into base: ollama_support_tools
Conversation

@samharnack samharnack commented Jan 12, 2025

I think this covers everything. Thank you for putting this together!

The formats look correct based on:
https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-with-tools
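For quick reference, here is a minimal sketch of the request shape that doc describes, written as an Elixir map mirroring the JSON (the model name, tool name, and field values are illustrative, not taken from this PR):

```elixir
# Shape of an Ollama /api/chat request that includes tools,
# per the linked Ollama API docs. Values are illustrative.
request = %{
  "model" => "llama3.1",
  "stream" => false,
  "messages" => [
    %{"role" => "user", "content" => "What is the weather today in Toronto?"}
  ],
  "tools" => [
    %{
      "type" => "function",
      "function" => %{
        "name" => "get_current_weather",
        "description" => "Get the current weather for a city",
        "parameters" => %{
          "type" => "object",
          "properties" => %{
            "city" => %{"type" => "string", "description" => "The name of the city"}
          },
          "required" => ["city"]
        }
      }
    }
  ]
}
```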

joelpaulkoch and others added 30 commits August 18, 2024 06:33
* 'main' of github.com:brainlid/langchain:
  adds OpenAI project authentication. (brainlid#166)
  Fix PromptTemplate example (brainlid#167)
* return list of exchanged messages after LLMChain.run
- not just the last message

* remove "last_message" from LLMChain.run return tuple
- updates to all usages and tests

* updated code examples in notebooks
…#175)

* 🐛 cast tool_calls inside deltas correctly

* Update test/chat_models/chat_open_ai_test.exs
…#174)

* Do not duplicate tool call parameters if they are identical

* Fix primary_name == new_name

---------

Co-authored-by: Michał <[email protected]>
* feat: add OpenAI's new structured output API

* test(openai): add test cases for response_format

* fix: clean up tests

* chore: revert env comment from testing
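As a rough sketch of what the structured output support enables (the option names and schema wrapper below are assumptions, not confirmed by this PR):

```elixir
alias LangChain.ChatModels.ChatOpenAI

# Hypothetical JSON Schema for a structured response.
schema = %{
  "name" => "person",
  "strict" => true,
  "schema" => %{
    "type" => "object",
    "properties" => %{"name" => %{"type" => "string"}},
    "required" => ["name"],
    "additionalProperties" => false
  }
}

# Assumed option names; the library may spell these differently.
{:ok, chat} =
  ChatOpenAI.new(%{
    model: "gpt-4o-2024-08-06",
    json_response: true,
    json_schema: schema
  })
```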
* Define in-module version of split_system_message

* Split messages prior to processing

* Conditionally add system_instruction to map

* Remove for_api handler for system messages

* Add tests

* Extract util function for splitting system messages

* Use extracted function in ChatGoogleAI

* Use extracted function in ChatAnthropic
* Add test for function

* Implement filter function

* Add tests for empty text parts

* Filter empty text parts
* Add wrapper function for converting functions

* Use wrapper function when building request

* Add test for function with parameters

* Add test for current behaviour for functions without parameters

* Adjust test to reflect required map for API

* Update existing test

* Handle empty parameters

---------

Co-authored-by: Mark Ericksen <[email protected]>
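To illustrate the "functions without parameters" case, a small sketch (the function name, body, and return shape are hypothetical); the point of these commits is that the Google AI API still requires a map for `parameters` even when the function takes none:

```elixir
alias LangChain.Function

# A tool that takes no arguments. When serialized for the Google AI API,
# its parameters should still come out as a map (e.g. an empty object schema).
get_time =
  Function.new!(%{
    name: "get_time",
    description: "Returns the current server time in UTC",
    function: fn _args, _context ->
      # return shape assumed
      {:ok, DateTime.utc_now() |> DateTime.to_iso8601()}
    end
  })
```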
* Update deps for req (>= 0.5.1 is needed)

req >=0.5.1 is needed to support bedrock:
wojtekmach/req#374

* Bedrock support for Anthropic

* Extract BedrockStreamDecoder from ChatAnthropic

* Move function

* Use same relevant_event? function as anthropic api

* Extract BedrockConfig

* Extract module var for aws anthropic version

* Use same tests as anthropic on bedrock

* Move config to setup

* Move anthropic_version to bedrock config

* Rename function

* Pull bedrock url functions to BedrockConfig

* Consistent tag name for anthropic_bedrock

* Pass through case where chunk.bytes isn't present in stream

* Pull aws_sigv4_opts into BedrockConfig

* Improve pattern matching on bedrock config

* Handle bedrock http error messages

* Use Mimic & add tests around error cases

* Update req

* Require latest req for aws path encoding fix + session token support

* Support session token if returned from credentials fn

* Simplify - pass a keyword list through to req sigv4 opts instead of wrangling tuples

* Update tests

* Remove duplicate tests

* Add stub aws creds to github workflow

* Move commented out test back
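A rough sketch of what configuring the Bedrock-backed Anthropic model might look like after these changes (the `bedrock` option shape, model id, and environment variable names are assumptions for illustration):

```elixir
alias LangChain.ChatModels.ChatAnthropic

# Assumed shape: a BedrockConfig-style map with a credentials function
# (returning a keyword list of sigv4 options) and an AWS region.
chat =
  ChatAnthropic.new!(%{
    model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
    bedrock: %{
      credentials: fn ->
        [
          access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
          secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
        ]
      end,
      region: "us-east-1"
    }
  })
```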
- let the LLM know more clearly that a function execution failed.
- This covers situations where the function didn't error during execution, but it couldn't be called at all.
* 'main' of github.com:brainlid/langchain:
  Add AWS Bedrock support to ChatAnthropic (brainlid#154)
  Handle functions with no parameters for Google AI (brainlid#183)
  Handle missing token usage fields for Google AI (brainlid#184)
  Handle empty text parts from GoogleAI responses (brainlid#181)
  Support system instructions for Google AI (brainlid#182)
  feat: add OpenAI's new structured output API (brainlid#180)
  Support strict mode for tools (brainlid#173)
  Do not duplicate tool call parameters if they are identical (brainlid#174)
  🐛 cast tool_calls arguments correctly inside message_deltas (brainlid#175)
* Extract function to map finish reason to status

* Handle all possible finish reasons
* Add field for safety settings

* Add tests for for_api

* Conditionally add safety settings to API request

* Cast safety settings in changeset

* Serialize safety settings config
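For illustration, safety settings as the Google AI API expects them, a list of category/threshold maps (the `safety_settings` field name comes from these commits; the model name and values are examples from Google's docs, not this PR):

```elixir
alias LangChain.ChatModels.ChatGoogleAI

chat =
  ChatGoogleAI.new!(%{
    model: "gemini-1.5-flash",
    safety_settings: [
      %{"category" => "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold" => "BLOCK_ONLY_HIGH"},
      %{"category" => "HARM_CATEGORY_HARASSMENT", "threshold" => "BLOCK_MEDIUM_AND_ABOVE"}
    ]
  })
```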
* add tool_choice to openai and anthropic

* modify tool_choice from struct to map

* rename set_tool_choice to get_tool_choice

* add live tests for tool_choice to openai and anthropic models

* fix finish_reason issue, extend live test for openai tool_choice

---------

Co-authored-by: Mark Ericksen <[email protected]>
- OpenAI module doc updated for example on forcing a `tool_choice`
- Anthropic module doc updated for example on forcing a `tool_choice`
- fixed code merge issue
- added module doc with examples
- support providing examples for consistency

closes brainlid#163
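A minimal sketch of forcing a specific tool (the map shapes below are assumptions modeled on the OpenAI and Anthropic request formats; `get_weather` is a placeholder tool name):

```elixir
alias LangChain.ChatModels.{ChatOpenAI, ChatAnthropic}

# OpenAI-style tool_choice: force a call to a named function.
openai =
  ChatOpenAI.new!(%{
    model: "gpt-4o",
    tool_choice: %{"type" => "function", "function" => %{"name" => "get_weather"}}
  })

# Anthropic-style tool_choice: force use of a named tool.
anthropic =
  ChatAnthropic.new!(%{
    model: "claude-3-5-sonnet-20240620",
    tool_choice: %{"type" => "tool", "name" => "get_weather"}
  })
```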
brainlid and others added 26 commits November 20, 2024 19:20
…rainlid#192)

- added test
- documentation updates
- LLMChain.execute_tool_call supports Elixir functions returning {:ok, "llm result text", elixir_data_to_keep} (see the sketch after this list)
- handle Anthropic "overload_error"
- return {:error, %LangChainError{}} instead of a plain {:error, String.t()}
- updated lots of chat models and chains
- LLMChain.run returns an error struct
- LangChainError.type is a string code that is easier to detect and have coded handling for more scenarios
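A small sketch tying together the 3-tuple tool return and the new error struct mentioned above (the tool name, data, and tuple shapes are illustrative assumptions):

```elixir
alias LangChain.Function
alias LangChain.LangChainError

# A tool function returning the 3-tuple form: text for the LLM plus
# Elixir data the chain keeps alongside the message.
lookup_order =
  Function.new!(%{
    name: "lookup_order",
    description: "Look up an order by id",
    function: fn %{"id" => id} = _args, _context ->
      order = %{id: id, status: :shipped}
      {:ok, "Order #{id} has shipped.", order}
    end
  })

# Errors now carry a %LangChainError{} with a string `type` code that callers
# can match on (tuple arity shown here is an assumption):
# {:error, _chain, %LangChainError{type: type, message: message}} -> ...
```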
* working on documenting AWS Bedrock support with Anthropic

* added documentation and helper functions

* removing unused functions

* cleaning up
- in apply_delta, the error can be returned as a delta because that's how it was received
- adds :with_fallbacks option to LLMChain.run/2 options
- adds :before_fallback function for chain modification
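A rough sketch of the two new options in use (the fallback model and the hook body are placeholders; the option names come from the notes above):

```elixir
alias LangChain.Chains.LLMChain

# Run the chain, falling back to a second model if the first errors.
# The :before_fallback hook can adjust the chain (e.g. swap a system prompt)
# before the fallback model is tried.
result =
  LLMChain.run(chain,
    with_fallbacks: [fallback_chat_model],
    before_fallback: fn chain ->
      # hypothetical adjustment; returning the chain unchanged is also fine
      chain
    end
  )
```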
* Make JsonProcessor process ContentPart properly

* Explicitly remove ```json ```

* Add a failing test for brainlid#209

* Pass the tests for brainlid#209

* Fix JsonProcessor content processing when regex is present

* Add live google ai call tests for messages with image parts
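For context, a minimal sketch of attaching the JsonProcessor so fenced JSON blocks in the assistant reply get stripped and decoded (the regex and wiring are assumptions based on how the processor is typically used):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.MessageProcessors.JsonProcessor

# Attach a processor that extracts JSON from a fenced ```json block in the
# assistant's reply and replaces the content with the decoded map.
chain =
  LLMChain.new!(%{llm: chat_model})
  |> LLMChain.message_processors([JsonProcessor.new!(~r/```json(.*?)```/s)])
```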
* Update documentation to use the LLMChain.run() return type

* Fix specs

* More spec fixes

* More spec fixes

* More spec fixes

retry_count is supposed to be a non-negative integer, not a function.
Since the function raises sometimes, added no_return() as well

* Fix typo

* Fix another typo

* Update README.md

Clarified the description for the list of models
* fixes
- fixed LangChainError type spec
- LLMChain.run - raise specific exception when being run without messages
- try/rescue errors in LLMChain.run and return error tuple (fixes spec)

* TextToTitleChain updates
- updated docs with examples
- support full `override_system_prompt` for greater customization

* tweak

* adds LangChain.Chains.SummarizeConversationChain
- operates on an LLMChain to shorten and summarize the messages

* updated TextToTitleChain docs

* summarized chain's last_message gets updated
- changes when the keep_count is 0

* fixed failing test
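A rough sketch of the new summarizing chain (the entry-point name, the option names other than `keep_count`, and the variables are assumptions for illustration):

```elixir
alias LangChain.Chains.SummarizeConversationChain

# Shorten a long conversation: keep the most recent messages (keep_count)
# and replace the older ones with a generated summary.
summarizer =
  SummarizeConversationChain.new!(%{
    llm: summarizer_chat_model,
    keep_count: 2,
    threshold_count: 6
  })

shortened_chain = SummarizeConversationChain.summarize(summarizer, long_chain)
```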
* update changelog

* updated changelog for migration instructions

* added missing entries to changelog

* formatting

* prep for release
* add support to SummarizeConversationChain to explicitly control the messages
- makes it easier to get better control and performance

* tweaks to resulting output message in summarized chain
- removed an unused import left after the PR merge
* breaking change to consolidate LLM callback functions

* update to CI builds
@samharnack samharnack reopened this Jan 12, 2025