Question Answering RAG using LlamaIndex in agenta

This template is a question answering application with a RAG architecture, built using LlamaIndex and OpenAI. It provides a playground to experiment with different prompts and parameters in LlamaIndex and to evaluate the results. It runs with agenta, an open-source LLMOps platform that allows you to 1) create a playground from the code of any LLM app to quickly experiment, version, and collaborate with your team, 2) evaluate LLM applications, and 3) deploy applications easily.
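As a rough illustration of how such an app is typically wired up, the sketch below builds a LlamaIndex query engine over a local folder of documents and exposes it through the agenta Python SDK. This is not the repository's app.py: it assumes the pre-0.10 llama_index import paths, a local data/ folder, and agenta's ag.config.default / @ag.entrypoint pattern, so app.py and requirements.txt remain the source of truth.

import agenta as ag
from llama_index import SimpleDirectoryReader, VectorStoreIndex

ag.init()
# Parameters registered here show up as editable knobs in the agenta playground.
ag.config.default(
    prompt_system=ag.TextParam("Answer the question using only the provided context."),
)

@ag.entrypoint
def query(question: str) -> str:
    # Index the local documents in memory and answer with a RAG query engine.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    return str(index.as_query_engine().query(question))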

How to use

0. Prerequisites

  • Install the agenta CLI
pip install -U agenta

1. Clone the repository

git clone https://github.com/Agenta-AI/qa_llama_index_playground.git

2. Initialize the project

cd qa_llama_index_playground
agenta init

3. Set up your OpenAI API key

Create a .env file by copying the .env.example file and add your OpenAI API key to it.

OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx
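If you run the code directly outside agenta, the key also needs to reach the process environment; one common way to do that (an assumption here, not necessarily what app.py does) is python-dotenv:

from dotenv import load_dotenv
load_dotenv()  # reads .env and exports OPENAI_API_KEY into the environment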

4. Deploy the application to agenta

agenta variant serve app.py

5. Experiment with the prompts in a playground and evaluate different variants in agenta

Demo video: job_desc.mp4
