
fix: possible unsent function call in the last chunk of streaming response in OpenAI provider #2422

Merged 1 commit into langgenius:main from last-fc on Feb 9, 2024

Conversation

@bowenliang123 (Contributor) commented Feb 8, 2024

  • Fix a possible unsent function call in the last chunk of a streaming response in the OpenAI provider.
  • delta_assistant_message_function_call_storage caches function-call deltas: when a function_call is found in a chunk, it is stored and processing continues with the next chunk. However, if the last chunk itself contains a complete function call, that call is skipped and never sent (see the sketch after this list).
  • This corner case shows up when using the OpenAI provider to call LLMs behind OpenAI-compatible proxies, as we do with GLM3 and Qwen.
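A minimal sketch of the buffering pattern described above (this is not the actual Dify code; the `iter_function_calls` helper, the dict-based chunk layout, and the variable names are assumptions for illustration). The point is that a buffered function call must still be flushed when the chunk that carries or completes it is also the final chunk of the stream:

```python
# Hypothetical sketch of streaming function-call assembly (not the Dify implementation).
# `stored_function_call` plays the role of delta_assistant_message_function_call_storage.

def iter_function_calls(chunks):
    """Yield assembled function calls from an OpenAI-style chat stream."""
    stored_function_call = None  # partial function call buffered across chunks

    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        function_call = delta.get("function_call")
        finished = choice.get("finish_reason") is not None

        if function_call:
            if stored_function_call is None:
                stored_function_call = dict(function_call)
                stored_function_call.setdefault("arguments", "")
            else:
                stored_function_call["arguments"] += function_call.get("arguments", "")

            # Buggy version: an unconditional `continue` here means a call that
            # arrives (or completes) in the very last chunk is never yielded.
            if not finished:
                continue

        # Flush the buffered call once the stream (or the call) is complete.
        if stored_function_call is not None:
            yield stored_function_call
            stored_function_call = None
```

The key design choice is gating the `continue` on `finish_reason`, so the buffered call is emitted even when no further chunk arrives.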

@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Feb 8, 2024
@bowenliang123 (Contributor, Author) commented Feb 8, 2024

cc @Yeuoly @takatost

@bowenliang123 bowenliang123 changed the title fix: possible unsent function call in last chuck of streaming response in OpenAI provider fix: possible unsent function call in last chunk of streaming response in OpenAI provider Feb 8, 2024
@bowenliang123 bowenliang123 changed the title fix: possible unsent function call in last chunk of streaming response in OpenAI provider fix: possible unsent function call in the last chunk of streaming response in OpenAI provider Feb 8, 2024
@crazywoola crazywoola requested review from takatost and Yeuoly and removed request for takatost February 8, 2024 08:18
@dosubot dosubot bot added the lgtm label Feb 9, 2024
@Yeuoly (Collaborator) commented Feb 9, 2024

BTW, openai_api_compatible will support function calls soon; you can then use it as the proxy for GLM3 and Qwen.

Thanks! Best wishes for the new year.

@Yeuoly Yeuoly merged commit 589099a into langgenius:main Feb 9, 2024
4 checks passed
@bowenliang123 bowenliang123 deleted the last-fc branch February 9, 2024 06:49
@bowenliang123 (Contributor, Author) commented Feb 9, 2024

Thanks. We will probably continue using the OpenAI provider to connect to LLMs via OpenAI-compatible proxies:

  1. For earlier support of new features, Dify often considers OpenAI services first.
  2. The set of predefined OpenAI models lets us provide all the selected services to our internal users, instead of forcing them to add the LLMs one by one in every workspace.

I raised this PR against the OpenAI provider because it wrongly skipped the last message of a function call. This matters especially if, in the future, OpenAI models return the complete function call, arguments included, in a single message.

And Happy New Year!

HuberyHuV1 pushed a commit to HuberyHuV1/dify that referenced this pull request Jul 22, 2024