
How to stream text output? #587

Closed
ProjCRys opened this issue Nov 7, 2023 · 4 comments
Labels
question Further information is requested

Comments


ProjCRys commented Nov 7, 2023

I tried adding "stream": True to the llm config, but it doesn't stream any text and just crashes. I was planning to use the streamed output for my TTS to cut response time (speaking each sentence as soon as it is formed), so I set stream=True to see whether it would stream output.

Code used:


import autogen
import random

config_list = [
    {
        "api_type": "open_ai",
        "api_base": "http://localhost:1234/v1",
        "api_key": "NULL",
    }
]

random_seed = random.randint(0, 10000)  # generate a random seed

llm_config = {
    "request_timeout": 1000,
    "seed": random_seed,  # use the random seed here
    "config_list": config_list,
    "stream": True,
    "temperature": 0,
}

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a coder specialized in Python",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: "TERMINATE" in x.get("content", ""),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""End with TERMINATE if the task has been solved to full satisfaction. Otherwise, reply CONTINUE or the reason why the task is not solved yet.""",
)

task = input("Please write a task: ")

user_proxy.initiate_chat(assistant, message=task)


Output:


D:\AI\ChatBots\Autogen>python instruct.py
Please write a task: Hello
user_proxy (to assistant):

Hello


Traceback (most recent call last):
  File "D:\AI\ChatBots\Autogen\instruct.py", line 40, in <module>
    user_proxy.initiate_chat(assistant, message=task)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 462, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\agentchat\conversable_agent.py", line 606, in generate_oai_reply
    response = oai.ChatCompletion.create(
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 803, in create
    response = cls.create(
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 834, in create
    return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\autogen\oai\completion.py", line 272, in _get_response
    cls._cache.set(key, response)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diskcache\core.py", line 772, in set
    size, mode, filename, db_value = self._disk.store(value, read, key=key)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\site-packages\diskcache\core.py", line 221, in store
    result = pickle.dumps(value, protocol=self.pickle_protocol)
TypeError: cannot pickle 'generator' object
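
For context on the crash: the traceback shows autogen's completion cache handing the response to diskcache, which calls pickle.dumps on it; with stream=True the response is a generator, and generators can't be pickled. Until streaming lands in autogen itself, one workaround for the TTS use case is to stream directly from the local endpoint and hand off complete sentences as they arrive. A minimal sketch, assuming the server at http://localhost:1234/v1 speaks the OpenAI chat-completions protocol, using the pre-1.0 openai client (the version autogen depended on at the time); the model name and the sentence-splitting regex are placeholders:

import re
import openai  # openai<1.0-style client, matching the era of this issue

openai.api_base = "http://localhost:1234/v1"
openai.api_key = "NULL"

def stream_sentences(prompt):
    """Yield complete sentences as the model streams tokens."""
    buffer = ""
    response = openai.ChatCompletion.create(
        model="local-model",  # placeholder; many local servers ignore this
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in response:
        buffer += chunk["choices"][0]["delta"].get("content", "")
        # Emit each sentence as soon as its terminal punctuation arrives.
        while True:
            match = re.search(r"[.!?]\s*", buffer)
            if not match:
                break
            yield buffer[: match.end()].strip()
            buffer = buffer[match.end():]
    if buffer.strip():
        yield buffer.strip()  # flush whatever remains at end of stream

for sentence in stream_sentences("Hello"):
    print(sentence)  # replace print() with the TTS call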

olaoluwasalami added the question (Further information is requested) label Nov 7, 2023

afourney commented Nov 7, 2023

@victordibia, do you know the current status of streaming? I know there are other issues related to this, but I'm not up to date.


sonichi commented Nov 7, 2023

#491 was close to being merged, but a new PR hasn't been created yet. I assume it'll be created soon.
@Alvaromah for awareness.


sonichi commented Nov 12, 2023

Please follow #217

thinkall commented

Closing this issue due to inactivity. If you have further questions, please open a new issue or join the discussion in the AutoGen Discord server: https://discord.com/invite/Yb5gwGVkE5
