
Tutorial on how to properly send intermediate LlamaIndex events to vercel ai sdk via server-sent events during RAG.


rsrohan99/rag-stream-intermediate-events-tutorial


In this tutorial, we'll see how to use the LlamaIndex Instrumentation module to send the intermediate steps of a RAG pipeline to the frontend for an intuitive user experience.
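As a rough illustration of the pattern (not this repo's actual code), the Instrumentation module lets you register an event handler that observes pipeline events; the sketch below uses only the standard library, with a hypothetical `PipelineEvent` standing in for real LlamaIndex events, and shows a handler that queues up step labels for a streaming endpoint to drain.

```python
import queue
from dataclasses import dataclass

# Hypothetical stand-in for a LlamaIndex instrumentation event;
# the real events come from llama_index.core.instrumentation.
@dataclass
class PipelineEvent:
    name: str     # e.g. "retrieve", "synthesize"
    payload: str  # human-readable detail for the UI

class StepForwardingHandler:
    """Collects intermediate pipeline steps so a streaming endpoint can send them."""

    def __init__(self) -> None:
        self.events: "queue.Queue[PipelineEvent]" = queue.Queue()

    def handle(self, event: PipelineEvent) -> None:
        # In the real app this method would be called by LlamaIndex's
        # dispatcher; here we just enqueue the event for the streamer.
        self.events.put(event)

handler = StepForwardingHandler()
handler.handle(PipelineEvent("retrieve", "Retrieving top-3 nodes"))
handler.handle(PipelineEvent("synthesize", "Generating answer"))
print([e.name for e in list(handler.events.queue)])  # → ['retrieve', 'synthesize']
```

The backend's streaming response would pull from this queue and forward each event to the client as it arrives.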

Full video tutorial under 3 minutes 🔥👇

Stream Intermediate events in RAG

We use Server-Sent Events, which are received by the Vercel AI SDK on the frontend.
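For reference, a Server-Sent Event is just a text frame of the form `event: <type>` / `data: <payload>` terminated by a blank line, sent over a long-lived HTTP response. A minimal formatter (independent of this repo's code; the event name is a hypothetical example) looks like:

```python
import json

def sse_format(event_type: str, data: dict) -> str:
    """Serialize one Server-Sent Event frame.

    Each frame ends with a blank line; EventSource-style clients
    split the stream on that boundary.
    """
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

frame = sse_format("intermediate_step", {"step": "retrieve", "detail": "top-3 nodes"})
print(frame)
```

The backend yields frames like this from its streaming response, and the frontend surfaces each one as it arrives instead of waiting for the final answer.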

Getting Started

First clone the repo:

```bash
git clone https://github.com/rsrohan99/rag-stream-intermediate-events-tutorial.git
cd rag-stream-intermediate-events-tutorial
```

Start the Backend

cd into the backend directory

```bash
cd backend
```

First create .env from .env.example

```bash
cp .env.example .env
```

Set the OpenAI API key in .env

```
OPENAI_API_KEY=****
```

Install the dependencies

```bash
poetry install
```

Generate the Index for the first time

```bash
poetry run python app/engine/generate.py
```

Start the backend server

```bash
poetry run python main.py
```

Start the Frontend

cd into the frontend directory

```bash
cd frontend
```

First create .env from .env.example

```bash
cp .env.example .env
```

Install the dependencies

```bash
npm i
```

Start the frontend server

```bash
npm run dev
```
