How to stream text output? #587
Comments
@victordibia, do you know what the current status on streaming is? I know there are other issues related to this, but I'm not up to date.
#491 was close to merging, but a new PR hasn't been created. I assume it'll be created soon.
Please follow #217
Closing this issue due to inactivity. If you have further questions, please open a new issue or join the discussion in the AutoGen Discord server: https://discord.com/invite/Yb5gwGVkE5
I tried adding "stream": True to the llm_config, but it doesn't stream any text; it just crashes. I was planning to use the streamed output for my TTS to reduce response latency (speaking each sentence as soon as it is formed), so I set stream=True to see whether it would stream the output.
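As a side note, the sentence-by-sentence TTS idea described above does not depend on any AutoGen API; it only needs a way to buffer streamed text fragments until a sentence boundary appears. Here is a minimal, self-contained sketch of that buffering step (the fragment list and the ".!?" boundary heuristic are illustrative assumptions, not anything from AutoGen):

```python
def sentences_from_stream(chunks):
    """Buffer streamed text fragments and yield complete sentences.

    `chunks` is any iterable of text fragments (e.g. tokens from a
    streaming LLM response). Sentence boundaries are approximated
    with the characters '.', '!', and '?' -- a deliberate
    simplification for illustration.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while True:
            # Find the earliest sentence terminator in the buffer, if any.
            ends = [buffer.find(p) for p in ".!?"]
            ends = [i for i in ends if i != -1]
            if not ends:
                break
            cut = min(ends) + 1
            sentence = buffer[:cut].strip()
            if sentence:
                yield sentence  # hand this to the TTS engine immediately
            buffer = buffer[cut:]
    tail = buffer.strip()
    if tail:
        yield tail  # flush any unterminated remainder at end of stream

# Example: fragments arriving as they might from a token stream.
fragments = ["Hel", "lo there. How", " are you? Fine"]
print(list(sentences_from_stream(fragments)))
# → ['Hello there.', 'How are you?', 'Fine']
```

In a real setup, the `chunks` iterable would be the deltas from a streaming completion response, and each yielded sentence would be dispatched to the TTS engine while the model is still generating.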
Code used:
import autogen
import random

config_list = [
    {
        "api_type": "open_ai",
        "api_base": "http://localhost:1234/v1",
        "api_key": "NULL",
    }
]

random_seed = random.randint(0, 10000)  # Generate a random seed

llm_config = {
    "request_timeout": 1000,
    "seed": random_seed,  # Use the random seed here
    "config_list": config_list,
    "stream": True,
    "temperature": 0,
}

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a coder specialized in Python",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""End with TERMINATE if the task has been solved to full satisfaction. Otherwise, reply CONTINUE or the reason why the task is not solved yet.""",
)

task = input("Please write a task: ")
user_proxy.initiate_chat(assistant, message=task)
Output:
D:\AI\ChatBots\Autogen>python instruct.py
Please write a task: Hello
user_proxy (to assistant):
Hello
Traceback (most recent call last):
File "D:\AI\ChatBots\Autogen\instruct.py", line 40, in
user_proxy.initiate_chat(assistant, message=task)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 531, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 334, in send
recipient.receive(message, self, request_reply, silent)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 462, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 606, in generate_oai_reply
response = oai.ChatCompletion.create(
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 803, in create
response = cls.create(
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 834, in create
return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 272, in _get_response
cls._cache.set(key, response)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diskcache\core.py", line 772, in set
size, mode, filename, db_value = self._disk.store(value, read, key=key)
File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diskcache\core.py", line 221, in store
result = pickle.dumps(value, protocol=self.pickle_protocol)
TypeError: cannot pickle 'generator' object
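For context on the final error: with stream=True the API response is a Python generator that yields chunks lazily, and the traceback shows the disk cache (diskcache) then trying to pickle that response, which generators do not support. A tiny standalone reproduction of just that failure mode (unrelated to AutoGen itself):

```python
import pickle

def token_stream():
    # A streaming response behaves like this: chunks are produced lazily.
    yield "Hello"
    yield " world"

gen = token_stream()
try:
    pickle.dumps(gen)  # what a pickle-based cache would attempt
except TypeError as e:
    print(e)  # → cannot pickle 'generator' object
```

This is why the crash happens at the caching layer rather than in the request itself: the generator would have to be fully consumed (materialized into a plain object) before anything could cache it.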