This folder contains notebooks that demonstrate various use cases for Elasticsearch as the retrieval engine and vector store for LLM-powered applications.
The following notebooks are available:
In the `question-answering.ipynb` notebook you'll learn how to:
- Retrieve sample workplace documents from a given URL.
- Set up an Elasticsearch client.
- Chunk documents into 512-token passages with an overlap of 256 tokens using the `RecursiveCharacterTextSplitter` from `langchain`.
- Use `OpenAIEmbeddings` from `langchain` to create embeddings for the content.
- Retrieve embeddings for the chunked passages using OpenAI.
- Persist the passage documents along with their embeddings into Elasticsearch.
- Set up a question-answering system using `OpenAI` and `ElasticKnnSearch` from `langchain` to retrieve answers along with their source documents (see the sketch after this list).
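Condensed, the flow above looks roughly like the sketch below. This is a minimal approximation rather than the notebook itself: it assumes an `OPENAI_API_KEY` in the environment, a local Elasticsearch at `http://localhost:9200`, and a hypothetical `workplace-docs` index name, and it uses `ElasticsearchStore`, the successor to `ElasticKnnSearch` in newer `langchain` releases, so exact class and parameter names may differ with your `langchain` version.

```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import ElasticsearchStore

# Stand-in for the sample workplace documents the notebook downloads
# from the sample-data URL.
workplace_docs = [
    Document(
        page_content="Full-time employees receive 25 days of paid vacation per year.",
        metadata={"source": "Vacation policy"},
    ),
]

# Chunk documents into 512-token passages with a 256-token overlap.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=512, chunk_overlap=256
)
passages = splitter.split_documents(workplace_docs)

# Embed each passage with OpenAI and persist passages plus embeddings
# into an Elasticsearch index (URL and index name are assumptions).
vector_store = ElasticsearchStore.from_documents(
    passages,
    embedding=OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="workplace-docs",
)

# Question answering over the index, returning answers with their sources.
qa = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
)
print(qa({"question": "How many days of vacation do employees get?"}))
```

Running the chain returns a dict containing the generated `answer` together with the `sources` of the passages it was grounded on.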
In the `chatbot.ipynb` notebook you'll learn how to:
- Retrieve sample workplace documents from a given URL.
- Set up an Elasticsearch client.
- Chunk documents into 512-token passages with an overlap of 256 tokens using the `RecursiveCharacterTextSplitter` from `langchain`.
- Use `OpenAIEmbeddings` from `langchain` to create embeddings for the content.
- Retrieve embeddings for the chunked passages using OpenAI.
- Run hybrid search in Elasticsearch to find documents that answer the questions asked (the hybrid retrieval and memory setup are sketched after this list).
- Maintain conversational memory for follow-up questions.
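The chatbot differs mainly in how passages are retrieved and in keeping conversational state. The sketch below reuses the same assumptions as the question-answering sketch (local Elasticsearch, `OPENAI_API_KEY`, a hypothetical `workplace-docs` index already populated, and `ElasticsearchStore` in place of `ElasticKnnSearch`): hybrid retrieval combines BM25 keyword matching with kNN vector search, and a buffer memory carries the chat history into follow-up questions.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import ElasticsearchStore

# Point at the index created during ingestion; the hybrid strategy
# combines BM25 keyword matching with kNN vector search.
vector_store = ElasticsearchStore(
    index_name="workplace-docs",               # assumed index name
    embedding=OpenAIEmbeddings(),
    es_url="http://localhost:9200",            # assumed local cluster
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True),
)

# Conversational memory so follow-up questions can refer back to earlier turns.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chatbot = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vector_store.as_retriever(),
    memory=memory,
)

print(chatbot({"question": "What is the vacation policy?"})["answer"])
print(chatbot({"question": "Does it differ for part-time employees?"})["answer"])
```

Note that the hybrid strategy shown here relies on Elasticsearch's reciprocal rank fusion, which may require a recent Elasticsearch 8.x release.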