What happened?
It seems that specifying a JSON response format leads to an error, because the post-processing done by litellm assumes that the JSON response corresponds to a function call.
LiteLLM version: 1.55.9
from litellm import completion

# Requesting a JSON object response from an Ollama-hosted model
response = completion(
    model="ollama/phi3:latest",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    response_format={"type": "json_object"},
)
The model actually generates the JSON below, but the call errors out during post-processing.
{"answer": "Paris"}
The failure seems to come from this transformation.py code, which expects the parsed response to contain name and arguments fields. Is this the intended behavior?
Relevant log output
File "/<...>/litellm/llms/ollama/completion/transformation.py", line 263, in transform_response
"name": function_call["name"],
KeyError: 'name'
Are you an ML Ops Team?
No
What LiteLLM version are you on?
v1.55.9
Twitter / LinkedIn details
No response