In this second part of the workshop, we will build upon the data and vector embeddings generated in Part 1 and integrate them into a Retrieval Augmented Generation (RAG) application. We’ll use a React frontend and a Node.js backend that leverages OpenAI for embeddings and Couchbase Capella for vector similarity searches.
- Completion of Part 1 of this workshop, where you set up:
  - A Couchbase Capella cluster with a bucket containing documents and their vector embeddings.
  - A functioning vector search index in Capella.
- An OpenAI API key.
- A working Node.js environment.
- Set Up the Frontend (React)
- Set Up the Backend (Node.js)
- Integrate Capella Vector Search
- Integrate OpenAI for RAG
- Run and Test the Application
In this step, you’ll work with a pre-configured React frontend that provides a UI for querying your RAG application. The frontend sends user queries to your backend’s `/api/query` endpoint.
- Navigate to the `frontend` directory.
- Install dependencies: `npm install`
- Start the development server: `npm run dev`
- Open your browser and navigate to `http://localhost:3000`. You should see the RAG application UI.
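On the frontend side, submitting a query is just a POST to the backend. A minimal sketch follows; the `{ query }` request body and `answer` response field are assumptions here, so check the handler in `server.js` for the exact shape your backend expects:

```javascript
// Send the user's question to the backend RAG endpoint and return the answer.
// The request/response field names are assumptions; adjust them to match server.js.
async function askRag(query) {
  const res = await fetch('/api/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.answer;
}
```

A React component would call `askRag` from its submit handler and render the returned string.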
Your backend will:
- Accept user queries from the frontend.
- Transform the queries into vector embeddings using OpenAI.
- Search for similar vectors in your Capella cluster.
- Augment the user query with the retrieved documents and request a response from OpenAI.
- Return the response to the frontend.
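The five steps above can be sketched as a single flow. The helper names (`embed`, `search`, `generate`, `buildPrompt`) are illustrative, not part of any SDK; they are injected as functions so the flow itself stays easy to read and test:

```javascript
// Combine the user's question with the retrieved documents into one prompt
// for the chat model (the "augmentation" step of RAG).
function buildPrompt(query, docs) {
  const context = docs.map((d, i) => `[${i + 1}] ${d}`).join('\n');
  return `Answer the question using only the context below.\n\nContext:\n${context}\n\nQuestion: ${query}`;
}

// The backend flow, with each external call injected as a function:
// embed -> search -> augment -> generate.
async function handleQuery(query, { embed, search, generate }) {
  const vector = await embed(query);        // query -> embedding (OpenAI)
  const docs = await search(vector);        // similar documents from Capella
  const prompt = buildPrompt(query, docs);  // augment the query with context
  return generate(prompt);                  // model response, returned to the frontend
}
```

In the real server, `embed` would call OpenAI's embeddings API, `search` the Capella vector index, and `generate` the chat completion; the `/api/query` route handler can then simply await `handleQuery` and send the result as JSON.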
- Navigate to the `backend` directory.
- Install dependencies: `npm install`
- Start the backend: `node server.js`
Your backend will use the Couchbase Node.js SDK to connect to Capella and execute vector similarity queries against the index created in Part 1.
Verify that your Couchbase Capella connection config is defined in a `.env` file in the `backend` directory:
COUCHBASE_CONNECTION_STRING=your-connection-string
COUCHBASE_USERNAME=your-username
COUCHBASE_PASSWORD=your-password
COUCHBASE_SEARCH_INDEX_NAME=your-index-name
COUCHBASE_BUCKET_NAME=your-bucket-name
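As a sketch of the search step, assuming the Couchbase Node.js SDK 4.x vector search API and an index field named `embedding` (adjust the field name to match the documents you stored in Part 1):

```javascript
// Run a vector similarity search against the Capella index from Part 1.
// The "embedding" field name and limit of 5 are assumptions for this sketch.
async function searchSimilar(queryVector, limit = 5) {
  // Required lazily so this sketch can be loaded without the SDK installed.
  const couchbase = require('couchbase');
  const { SearchRequest, VectorSearch, VectorQuery } = couchbase;

  const cluster = await couchbase.connect(process.env.COUCHBASE_CONNECTION_STRING, {
    username: process.env.COUCHBASE_USERNAME,
    password: process.env.COUCHBASE_PASSWORD,
  });

  const request = SearchRequest.create(
    VectorSearch.fromVectorQuery(
      VectorQuery.create('embedding', queryVector).numCandidates(limit)
    )
  );
  // Rows contain ids and scores; fetch the full documents by id if needed.
  return cluster.search(process.env.COUCHBASE_SEARCH_INDEX_NAME, request);
}

// Small pure helper: pull the matched document ids out of a search result.
function matchedIds(result) {
  return result.rows.map((row) => row.id);
}
```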
To transform user queries into embeddings and generate responses using retrieved context from Capella, you’ll integrate OpenAI’s API.
Verify that your OpenAI API key is defined in the same `.env` file in the `backend` directory:
OPENAI_API_KEY=your-api-key
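A sketch of both OpenAI calls, assuming the official `openai` npm package. The model names below are examples only; in particular, the embedding model must be the same one used to embed your documents in Part 1, or the similarity search will return poor matches:

```javascript
// Build the chat messages: system instruction plus context-augmented question.
function buildMessages(query, contexts) {
  return [
    { role: 'system', content: 'Answer using only the provided context.' },
    { role: 'user', content: `Context:\n${contexts.join('\n---\n')}\n\nQuestion: ${query}` },
  ];
}

// Turn a user query into an embedding. The model here MUST match the one
// used in Part 1 to embed your documents.
async function embedQuery(text) {
  const OpenAI = require('openai'); // lazy so the sketch loads without the package
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const res = await client.embeddings.create({
    model: 'text-embedding-3-small', // example model name
    input: text,
  });
  return res.data[0].embedding;
}

// Generate an answer grounded in the documents retrieved from Capella.
async function answerWithContext(query, contexts) {
  const OpenAI = require('openai');
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const res = await client.chat.completions.create({
    model: 'gpt-4o-mini', // example model name
    messages: buildMessages(query, contexts),
  });
  return res.choices[0].message.content;
}
```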
Once everything is connected, you can run both the frontend and backend together:
- Ensure the backend (`node server.js` in `backend`) and frontend (`npm run dev` in `frontend`) servers are running.
- Visit the frontend URL in your browser.
- Enter a query and submit it.
- The frontend displays the generated response.