# Varaamo info RAG demo

This is a [LlamaIndex](https://www.llamaindex.ai/) project bootstrapped with `create-llama`.

## Getting Started

First, start up the backend as described in the backend README.

Second, run the frontend development server as described in the frontend README.

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
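
For reference, starting the two services locally usually looks something like the sketch below. This is only a sketch: it assumes the backend is a Poetry-managed Python app and the frontend a Next.js app, as in a typical create-llama project, so the exact commands in the backend and frontend READMEs take precedence.

```
# Sketch only -- assumes a typical create-llama layout; see the
# backend and frontend READMEs for the actual commands.

# Backend (assumed Poetry-managed Python app), in one terminal:
cd backend
poetry install
poetry run python main.py

# Frontend (assumed Next.js app), in another terminal:
cd frontend
npm install
npm run dev
```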

## How to run with Docker Compose

First, check the setup instructions in both the frontend and backend READMEs and set the required environment variables (an illustrative example follows below). Make sure you have Docker installed, then run:

```
docker compose up
```
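
The environment variables mentioned above typically live in `.env` files that the containers read at startup. The names below are assumptions for illustration only and are not taken from this repository; the backend and frontend READMEs list the real ones.

```
# backend/.env -- illustrative values only; the variable names are assumed,
# check the backend README for the ones this project actually uses.
OPENAI_API_KEY=your-api-key-here
MODEL=gpt-4o-mini
EMBEDDING_MODEL=text-embedding-3-small
```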

Then generate AI embeddings:

```
docker compose exec backend /bin/bash
poetry run generate
exit
```
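
The same step can also be run as a single command, without opening an interactive shell in the container:

```
docker compose exec backend poetry run generate
```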

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- You can check out the [LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!