Is it possible to activate json_mode when using openai? #275
Comments
Hi, we don't support it explicitly; instead we support structured responses, as described here: https://ai.pydantic.dev/results/. Internally this uses tool calls. Can you explain your use case?
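For context, a minimal sketch of the structured-responses approach from the linked docs might look like the following; the model name and the fields of the result model are illustrative, not anything from this project:

```python
# Sketch of pydantic-ai structured results (result_type), per
# https://ai.pydantic.dev/results/. Model name and fields are illustrative.
from pydantic import BaseModel
from pydantic_ai import Agent


class CityInfo(BaseModel):
    city: str
    country: str


agent = Agent('openai:gpt-4o', result_type=CityInfo)

result = agent.run_sync('Where were the 2012 Olympics held?')
print(result.data)  # e.g. CityInfo(city='London', country='United Kingdom')
```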
Hi, thanks for the reply! So I guess I could do:

```python
class LanguageLesson(BaseModel):
    languageLesson: Json[Any]
```

When you say that internally this uses tools, do you mean that internally a function is in charge of outputting the required type? Do you also coerce the LLM with a prompt to answer in a certain format? I guess I could fork pydantic-ai and add the code to the OpenAI call to enable json_mode if I needed to?

Our use case is a language learning app (parakeet). We saw that LLMs couldn't generate the language lessons we wanted in terms of structure, so we ask the LLM (ChatGPT) to generate a specific (more or less complex) JSON with all the language lesson parts (characters, translations, etc.). We then take the data from this structure to build a precise language lesson structure. This worked nicely, so we want to keep using this approach for other use cases.
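Rather than a raw `Json[Any]` field, a fully typed model would let pydantic-ai validate the lesson structure directly. A hedged sketch under the same `result_type` API; the field names are assumptions about what a lesson might contain, not parakeet's actual schema:

```python
# Sketch of a typed lesson model instead of Json[Any]; field names are
# illustrative assumptions, not parakeet's real schema.
from pydantic import BaseModel
from pydantic_ai import Agent


class VocabularyItem(BaseModel):
    word: str
    translation: str
    pronunciation: str


class LanguageLesson(BaseModel):
    title: str
    target_language: str
    vocabulary: list[VocabularyItem]
    example_sentences: list[str]


agent = Agent('openai:gpt-4o', result_type=LanguageLesson)
result = agent.run_sync('Create a beginner lesson on ordering food in Spanish.')
print(result.data.vocabulary[0].word)
```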
This issue is stale, and will be closed in 3 days if no reply is received.
Closing this issue as it has been inactive for 10 days.
I couldn't find info in the docs regarding OpenAI's json_mode. Is it possible to activate it when using OpenAI as the model for an agent?
Thanks!
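For reference, json_mode in the raw OpenAI Python SDK is the `response_format={"type": "json_object"}` parameter on a chat completion; a minimal sketch (the model name and prompts are illustrative):

```python
# Minimal sketch of OpenAI JSON mode via the openai Python SDK (v1.x).
# JSON mode requires mentioning JSON in the messages.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "Give me a two-word Spanish vocabulary list."},
    ],
)
print(completion.choices[0].message.content)  # a JSON string
```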