Support for Predefined Tags in StreamingText with AI SDK for Component Rendering in useChat streaming decoding #3630
Have you tried message annotations? It's pretty similar to the example you showed for tags. I've been using it for the same use cases you mentioned: determining which component to render based on response type.
Thanks for your suggestion. I have tried message annotations, and they are perfect for appending data from other sources (not from the LLM response text) to a message. However, they are not applicable to the case I proposed. Let's say we give the LLM instructions in the system prompt to wrap certain content in predefined tags (the example tags were stripped by GitHub's rendering). We would then extract the tags and render them. Since the response comes from the LLM text stream, as I understand it, we are not able to decode the predefined tags using useChat or the SDK.
Sorry for the misunderstanding, but I think you can still use message annotations with the LLM response by processing the text stream in the `onChunk` callback. A basic example might look like:

```ts
import { streamText, StreamData } from 'ai';

// `provider` stands in for your configured model provider, e.g. openai('gpt-4o').
const data = new StreamData();
let buffer = '';

const result = await streamText({
  model: provider('gpt-4o'),
  prompt:
    'Write a short story but split the sections into beginning, middle, end. Include the type for the section in this format: "{type:beginning/middle/end}" at the end of that section.',
  onChunk(event) {
    if (event.chunk.type !== 'text-delta') return;
    buffer += event.chunk.textDelta;
    if (buffer.includes('{type:beginning}')) {
      data.appendMessageAnnotation({ type: 'beginning', text: buffer });
      buffer = '';
    } else if (buffer.includes('{type:middle}')) {
      data.appendMessageAnnotation({ type: 'middle', text: buffer });
      buffer = '';
    } else if (buffer.includes('{type:end}')) {
      data.appendMessageAnnotation({ type: 'end', text: buffer });
      buffer = '';
    }
  },
  onFinish() {
    data.close();
  },
});
```

and the message annotations would get streamed back alongside the response. It's not perfect, but it might be helpful to get things rolling until there's support for it.
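On the client, the annotations appended this way arrive on each `useChat` message's `annotations` array. As a minimal, self-contained sketch of the rendering side (the `SectionAnnotation` shape and `groupSections` helper are assumptions for illustration, not part of the SDK), the dispatch logic can stay a pure function:

```typescript
// Annotation shape assumed to match the appendMessageAnnotation calls above.
type SectionAnnotation = { type: 'beginning' | 'middle' | 'end'; text: string };

// Hypothetical helper: strip the inline "{type:...}" markers and group the
// annotated sections by type, ready to hand to per-section components.
function groupSections(
  annotations: SectionAnnotation[],
): Partial<Record<SectionAnnotation['type'], string>> {
  const sections: Partial<Record<SectionAnnotation['type'], string>> = {};
  for (const a of annotations) {
    sections[a.type] = a.text.replace(/\{type:(beginning|middle|end)\}/g, '').trim();
  }
  return sections;
}
```

A component could then render `sections.beginning` as plain text, `sections.middle` as another component, and so on, instead of re-parsing the raw stream.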
@ShervK Thank you so much for your help, really appreciate it. I think you are right; it would be a good approach for now. It looks like we can keep matching the message annotation values against the content when rendering, to replace the content tags. I will try it and report back here.
For those who may need it: I think a long-term solution should allow developers to specify which tags they need to process on the frontend. The message chunks within the specified tags would then be buffered and sent to the frontend independently (as a single chunk and message). This way, we could simply iterate through the messages and check whether a given message matches a tag. As a temporary solution, I used htmlparser2 on the frontend to extract the specified tags and separate them from the text stream, and it works for now. Hope this helps!
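The comment above uses htmlparser2; as a dependency-free illustration of the same idea, a simplified incremental extractor for one known tag (the `search` tag name below is a hypothetical example) could buffer across chunk boundaries like this:

```typescript
// Simplified stand-in for the htmlparser2 approach: incrementally scans a
// streamed text for a single known tag and separates tagged spans from
// plain text. Real code would use htmlparser2's streaming Parser instead.
type Part = { kind: 'text' | 'tag'; content: string };

class TagExtractor {
  private buffer = '';
  constructor(private tag: string) {}

  // Feed one streamed chunk; returns the parts that are complete so far.
  write(chunk: string): Part[] {
    this.buffer += chunk;
    const parts: Part[] = [];
    const open = `<${this.tag}>`;
    const close = `</${this.tag}>`;
    for (;;) {
      const start = this.buffer.indexOf(open);
      if (start === -1) break;
      const end = this.buffer.indexOf(close, start);
      if (end === -1) break; // tag not closed yet; wait for more chunks
      if (start > 0) parts.push({ kind: 'text', content: this.buffer.slice(0, start) });
      parts.push({ kind: 'tag', content: this.buffer.slice(start + open.length, end) });
      this.buffer = this.buffer.slice(end + close.length);
    }
    return parts;
  }
}
```

Text after the last complete tag stays buffered until the next `write` call; a real implementation would also flush any remaining plain text when the stream ends, which htmlparser2's parser takes care of.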
@rockywangxiaolei Thanks for trying my earlier suggestion and for the detailed feedback. Looking back, I can see why my method might not have been the best fit. One other idea worth exploring is how block/artifact streaming is implemented, using Streaming Data, in the AI Chatbot example Vercel put up. It's similar to how you handle StreamParts, but they've extended it to include custom types, which seems to simplify how the blocks are rendered dynamically on the frontend. Glad to hear you still got something working in the meantime.
Hi @ShervK, yes, I am aware of the AI chatbot streaming function from tool calls. After studying it, it looks like it uses StreamingData on the backend and a customized streaming handler on the frontend to handle the rendering, which actually leverages the global streaming data instead of messages. Anyway, I managed to make it work with htmlparser2 for now.
The tags have been removed by GitHub...
Feature Description
I would like to request a feature in the AI SDK to enhance useChat by supporting predefined tags within StreamingText. The goal is to enable the SDK to recognize specific tags in the streaming response and render appropriate components based on tag types. This functionality would be beneficial for applications that require dynamic rendering of content based on metadata tags, similar to the predefined tags described below.
Proposed Feature
The proposed feature would support tag parsing and component rendering in useChat using StreamingText.
Use Cases
Tag Examples and Desired Behavior (the tag names themselves were stripped by GitHub's rendering):

1. Function: Determines the type of content (e.g., text, image, or interactive message).
   Example: {"type": "NARRATION", "text": "The printing press, invented by Johannes Gutenberg..."}
   Desired Render: Display as a regular text component in the streaming output.

2. Function: Renders rich text with Markdown support.
   Example: {"text": "The printing press, invented by Johannes Gutenberg..."}
   Desired Render: Automatically parse Markdown (e.g., bold text) for rich-text display.

3. Function: Enables search or related content links.
   Example: {"searchQuery": "Gutenberg printing press"}
   Desired Render: Render as a link or button that triggers an external search based on the query.
Additional context
Implementing this feature would:
- Enhance interactivity within useChat by enabling dynamic content display based on metadata.
- Improve user engagement by allowing seamless rendering of rich text, images, links, and buttons directly in StreamingText.
- Simplify front-end development, enabling developers to easily build sophisticated UIs that adapt to content type without manual parsing.
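To make the requested dispatch concrete, here is a hedged sketch (all names are hypothetical, not SDK API) of how a frontend might classify the three example payloads above once a tag's JSON body has been extracted from the stream:

```typescript
// Hypothetical render kinds corresponding to the three example payloads.
type RenderKind = 'narration' | 'markdown' | 'search' | 'unknown';

// Sketch of the dispatch the feature request implies: given the JSON payload
// extracted from a predefined tag, decide which component should render it.
function classifyPayload(json: string): RenderKind {
  let payload: unknown;
  try {
    payload = JSON.parse(json);
  } catch {
    return 'unknown'; // malformed or partially streamed payload
  }
  if (typeof payload !== 'object' || payload === null) return 'unknown';
  const p = payload as Record<string, unknown>;
  if (typeof p.searchQuery === 'string') return 'search';
  if (p.type === 'NARRATION') return 'narration';
  if (typeof p.text === 'string') return 'markdown';
  return 'unknown';
}
```

With SDK-level tag support, a component tree could switch on this kind to pick a text, Markdown, or search-button component without manual stream parsing.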