Documentation of `use_cache` in `llm_config` #323
Comments
When will a user write both?
They could do this by accident.
The current doc is: `llm_config` (dict or False) - llm inference configuration. Please refer to `Completion.create` for available options. To disable llm-based auto reply, set to False. https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__ If any clarification is desired, this is the place.
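For anyone landing here, a minimal sketch of the two documented modes of `llm_config`, assuming the 0.1.x-era `pyautogen` API; the agent names, model name, and option values below are illustrative:

```python
from autogen import ConversableAgent

# LLM-backed agent: llm_config is a dict of Completion.create options.
assistant = ConversableAgent(
    name="assistant",
    llm_config={"model": "gpt-4", "temperature": 0},
)

# LLM-based auto reply disabled entirely, as the docstring describes.
echo = ConversableAgent(
    name="echo",
    llm_config=False,
)
```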
We are closing this issue due to inactivity; please reopen if the problem persists.
…323)
* add notebooks for documentation
* Merge remote-tracking branch 'origin/main' into ekzhu-notebooks
* Add install
* Add to checks
@qingyun-wu informed me that `llm_config` has been used in the past to set `use_cache`, but this is not currently documented anywhere. Clear documentation is needed, since our `ChatCompletion.create` method carries a `use_cache` argument, and users will hit a runtime error if they pass `use_cache` values through `llm_config` and directly to `ChatCompletion.create` at the same time.
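A hedged sketch of how that collision can surface, assuming the 0.1.x-era `autogen.ChatCompletion` API and that `llm_config` entries are forwarded to `create` as keyword arguments; the config values here are illustrative:

```python
from autogen import ChatCompletion

llm_config = {
    "model": "gpt-4",
    "use_cache": False,  # undocumented today: forwarded to ChatCompletion.create as a kwarg
}

messages = [{"role": "user", "content": "Hello"}]

# Fine: use_cache arrives exactly once, via the config dict.
ChatCompletion.create(messages=messages, **llm_config)

# Runtime error: use_cache is supplied twice, once explicitly and once via the
# unpacked config, so Python raises something like
# "TypeError: create() got multiple values for keyword argument 'use_cache'".
ChatCompletion.create(messages=messages, use_cache=True, **llm_config)
```

Because the duplicate keyword is rejected by Python itself at call time, the failure happens before any inference runs, which is why documenting the `use_cache` key in `llm_config` (or deduplicating it before the call) matters.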