No tools found! #4
Thanks for flagging that! I fixed the issue with sending a regular message. Now, if a request is sent without tools, or with an empty `tools` key, it simply passes through to Groq untouched. Can you share an example of the parsing problem? If you can show me the LLM output before parsing, I can take a closer look. @Shakkaw
I made some adjustments to avoid that pesky error.
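The pass-through behavior described above could be sketched roughly like this. This is a minimal illustration with made-up names (`is_normal_chat` appears later in this thread, but the rest is not the proxy's actual code):

```python
# Minimal sketch of the no-tools pass-through check. The handler and
# provider names are illustrative, not the proxy's real API.
def is_normal_chat(request: dict) -> bool:
    """True when the request carries no usable tool definitions."""
    tools = request.get("tools")
    return not tools  # covers a missing key, None, and an empty list

def handle(request: dict, provider):
    if is_normal_chat(request):
        # No tools: forward the request to Groq untouched.
        return provider.chat(request)
    # Tools present: run the function-detection flow instead.
    return provider.chat_with_tools(request)
```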
Hi @unclecode, great job on the quick fix: it completely fixed the behavior for normal chat responses. Unfortunately, it only did so when there are no tools on the request. For the examples below I tested with the prompt "Write me a poem about the moon". If I add a print of the response there, I get no output from Python. I'm using a modified version of your example for testing.
So the LLM response is getting lost somewhere, and I believe it's because of the parsing. There was one time in my tests that it did give me a response, but that was because it called one of the tools.
I see. Currently, the flow is as follows:
There is a potential option to define a new property that controls the proxy's behavior when no tools are detected: it can continue to behave as it does now, or we can set it to proceed in default chat mode. Additionally, I noticed that for your query "write a poem about the moon", if you pass the search tools to the proxy, it may sometimes decide to use the "search" function to collect some facts about the moon. While this is a subjective debate, I have made adjustments to be more restrictive in picking up a tool. Finally, the current approach is zero-shot, and I suggest we keep it that way. Once we collect enough data, we can consider a few-shot approach, or fine-tuning a small model such as Phi, Orca-mini, or Gemma:2b for function detection.
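As a sketch, the proposed property might ride along on the request body like this. The flag name `no_tool_behavior` and its values are hypothetical, while the tool schema follows the usual OpenAI function-calling format:

```python
# Hypothetical request payload illustrating the proposed behavior flag.
# "no_tool_behavior" is an invented name, not the project's actual
# parameter; the tool definition follows the OpenAI tools schema.
search_tool = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search the web for facts",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

request = {
    "messages": [{"role": "user", "content": "Write me a poem about the moon"}],
    "tools": [search_tool],
    # "default": answer as a normal chat when no tool fits;
    # "strict": keep today's behavior (fail with empty tool_calls).
    "no_tool_behavior": "default",
}
```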
I understand your point of view, and I agree that a new tool should start small and grow with data and feedback. I always try to look at these kinds of interactions with an LLM from the point of view of a "normal user", so in my opinion the ability to ask a question that requires a function call, or one that does not, should be seamless, and both cases should be available in tandem. Furthermore, I believe a great number of use cases for these tools are some form of interactive chat, which would always require the user to be able to ask normal questions that may or may not trigger a function call. I hope we can work in that direction, maybe with a v2 of the proxy, so people can use the one that makes more sense for their use case.
@Shakkaw Totally agree, so I'll add that extra request parameter to define the proxy's behavior when there is no tool. We need to make sure these changes work well with libraries like LiteLLM, LangChain, or LlamaIndex when they talk to the proxy through the OpenAI interface. In addition, maybe we can think about a client SDK for the proxy; then we can add a more user-friendly interface. I think this would be a great help!
When prompting the LLM through the proxy for something that does not require any of the tools (e.g. "Write me a poem about the moon" or "How are you today"), I get `Error: 500 {"tool_calls":[]}`.
I see there is logic in place for `is_normal_chat`, but as long as the user has any function defined, this variable can never be true. Also, if I force `is_normal_chat`, I get a different error: `Error calling GroqProvider: Object of type ChatCompletion is not JSON serializable`. And I can see with some debug tests that the LLM is returning a good response, but the proxy is messing up the parsing of it.
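The `Object of type ChatCompletion is not JSON serializable` error typically means a Pydantic response object from an OpenAI-style SDK was handed directly to `json.dumps`. A minimal sketch of the usual fix, assuming the proxy gets such an object back from the Groq client (the helper name is illustrative):

```python
import json

def serialize_completion(completion) -> str:
    # Pydantic v2 response models (openai>=1.0 and the groq SDK) expose
    # model_dump(); fall back to passing plain dicts straight through.
    if hasattr(completion, "model_dump"):
        return json.dumps(completion.model_dump())
    return json.dumps(completion)
```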
I'm testing and trying to find a different check to implement so that we can have tools defined but still ask a normal question, but I'm not even sure that's possible without major code changes, so it might take a while.
Maybe @unclecode or someone more skilled than me can find a solution first 👍