A context-loaded chat assistant for answering questions about Editions Winter ’23 using OpenAI's gpt-3.5-turbo API.
- Clone or fork the repo
- Run `npm i`
- Create an `.env` file in the root (this file is not tracked) and add your OpenAI API key, your Pinecone index URL, and your Pinecone API key:

  ```
  OPENAI_API_KEY=superSecretAPIKey
  PINECONE_INDEX_URL=indexURL
  PINECONE_API_KEY=superSecretAPIKey
  ```

- Run `npm run dev`
- Open `http://localhost:3000/` in your browser
- Start playing with the context you wish to add in `/app/context/index.ts`
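Since the server needs all three `.env` values, it can help to fail fast at startup if one is missing. A minimal sketch of such a guard (`requireEnv` is an illustrative helper, not something this repo ships):

```typescript
// Hypothetical startup check for the required environment variables.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at server startup:
// const openAiKey = requireEnv("OPENAI_API_KEY");
// const pineconeUrl = requireEnv("PINECONE_INDEX_URL");
// const pineconeKey = requireEnv("PINECONE_API_KEY");
```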
This is built using Remix (a React-based framework), TypeScript, and TailwindCSS for styling. Some key notes:
- Pages can be found under `/app/routes`
- Custom styling can be found in `/app/stylesheets` and can be added to the `/app/root.tsx` file in the `links()` function
- Context for the chat interaction should be stored in `/app/context/index.ts` and should follow the chat message data format (`role`, `content`)
- This uses the Toolformer model to decide when it needs to fetch data about products
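As a sketch, `/app/context/index.ts` could export something like the following, assuming the standard OpenAI (`role`, `content`) message shape; the actual messages and the `context` name here are illustrative:

```typescript
// Illustrative context module following the OpenAI chat message format.
export type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

export const context: ChatMessage[] = [
  {
    role: "system",
    content:
      "You are a helpful assistant that answers questions about Editions Winter ’23.",
  },
  // Further priming messages, e.g. example Q&A pairs, go here:
  { role: "user", content: "What is this release about?" },
  { role: "assistant", content: "Editions Winter ’23 is the latest release." },
];
```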
- This uses Pinecone.io as the embeddings vector DB
- To generate the embeddings vector JSON, visit `http://localhost:3000/endpoints/generate`. You can then use the resulting file to upsert into the DB
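The generated JSON can be upserted with a plain HTTP call. The sketch below assumes Pinecone's REST `vectors/upsert` endpoint and the two environment variables from the setup steps; `buildUpsertRequest` and the record shape are illustrative, not part of this repo:

```typescript
// Illustrative upsert of generated embedding vectors into Pinecone.
type PineconeVector = {
  id: string;
  values: number[];
  metadata?: Record<string, string>;
};

// Builds the URL and fetch options for Pinecone's vectors/upsert endpoint.
function buildUpsertRequest(
  indexUrl: string,
  apiKey: string,
  vectors: PineconeVector[]
) {
  return {
    url: `${indexUrl}/vectors/upsert`,
    init: {
      method: "POST",
      headers: { "Api-Key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ vectors }),
    },
  };
}

// Usage (network call omitted):
// const { url, init } = buildUpsertRequest(
//   process.env.PINECONE_INDEX_URL!,
//   process.env.PINECONE_API_KEY!,
//   vectorsFromGeneratedJson
// );
// await fetch(url, init);
```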
This repo was set up to deploy to Vercel as the main deployment target, but you can customize it to suit your needs.
Would love to hear some feedback. Please feel free to open issues or hit me up on Twitter.