Objective
Remove the state descriptor from what is sent to the LLMs, and have the Proctor track the state of the conversation without explicitly sharing it with the participants. Instead of receiving a state descriptor from the Proctor, the participants would infer what they need to do from the context and prompt sent to them.
Initial Implementation Requirements
Participant LLMs & bots will need individually tailored, conversation-state-based modifications built into how prompts are passed to them (at least at the "family / type" level), since each has different input formatting needs. This lets each participant LLM or bot understand and respond appropriately to the current state of the conversation.
It may be advantageous to use an LLM to generate these modified prompts, rather than hard-coding them.
Context sharing will need to be worked out and enabled.
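The requirements above can be sketched in code. This is a minimal, hypothetical illustration (all names here, such as `ConversationState`, `FAMILY_TEMPLATES`, and `build_prompt`, are assumptions, not existing CCAI code): the Proctor tracks state internally and uses it to shape each participant's prompt per model family, without ever emitting an explicit state descriptor.

```python
# Hypothetical sketch: the Proctor keeps conversation state to itself and
# tailors prompts per participant family. No state descriptor is sent; the
# participant must infer the phase from the wording of the prompt itself.
from dataclasses import dataclass


@dataclass
class ConversationState:
    phase: str           # e.g. "discussion" or "voting" -- known only to the Proctor
    round_number: int


# Per-family templates; real formatting needs would differ per LLM family.
FAMILY_TEMPLATES = {
    "chat":       "You are in a group discussion. Respond to the latest message.\n{history}",
    "completion": "Discussion transcript:\n{history}\nYour reply:",
}


def build_prompt(state: ConversationState, family: str, history: str) -> str:
    """Build a tailored prompt; the state shapes it but is never stated outright."""
    prompt = FAMILY_TEMPLATES[family].format(history=history)
    if state.phase == "voting":
        # Phrased so the participant infers the phase rather than being told it.
        prompt += "\nIt is now time to state your final position."
    return prompt
```

As the second requirement suggests, the `FAMILY_TEMPLATES` dictionary could later be replaced by a call to an LLM that drafts the tailored prompt.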
Other Considerations
This may be difficult to implement in our current interface, and may require more discussion. Adding it since it is an important part of the V3 (or maybe V4?) vision.
The "independent conversation" goal could usefully be compared to the difference between a court proceeding and a peer-level discussion group with a secretary present to take notes and help keep the discussion moving forward. We want to build a discussion group, not a courtroom.
In the court, the judge as "Proctor" calls on each person to speak when it is their turn.
In the discussion group a secretary as "Proctor" would only speak up if the participants got off track.
We can't get to this sort of independent participation in one step, but we want to be working in that direction.
To do this we will need to select the messages which should be included as context in each submission to the LLM participants. Challenges include:
The insertion of messages from humans having side discussions with LLM participants while the main discussion is going on.
Both our LLMs and the commercial ones currently support only two conversational roles, the user and the LLM. Our conversation contains many entities, and choices must be made about how to work within that limitation.
One way to adapt to sharing the context with multiple participants in a conversation is to insert some additional contextualizing text after the system prompt and just before the conversation history. An example might be: "...You have been participating in a CCAI group discussion; you may have already commented, and now you need to respond again. Here is the conversation so far:"
Later, an additional LLM could be used to refine this contextualizing text to be more specific to the LLM's participation in the preceding conversation. For example, "...apparently you have already commented twice in the discussion round, and now it's time to vote…"
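The insertion point described above (system prompt, then contextualizer, then shared history) can be sketched as follows. This assumes the chat-message dict format common to LLM APIs; the function name is hypothetical:

```python
# Hypothetical sketch: place the contextualizing text between the system prompt
# and the shared conversation history.
def assemble_messages(system_prompt, contextualizer, history):
    """history: list of {"role", "content"} dicts already mapped to two roles."""
    preamble = {
        "role": "user",
        "content": contextualizer + " Here is the conversation so far:",
    }
    return [{"role": "system", "content": system_prompt}, preamble] + history
```

The `contextualizer` argument is where a refining LLM could later substitute participant-specific text, as described above.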