The KDB.AI samples in this repository demonstrate the use of the KDB.AI vector database in a range of scenarios, from getting-started guides to industry-specific use cases.
KDB.AI comes in two offerings:
- KDB.AI Cloud - For experimenting with smaller generative AI projects with a vector database in our cloud.
- KDB.AI Server - For evaluating large scale generative AI applications on-premises or on your own cloud provider.
Depending on which you use, different setup steps and connection details are required. You can sign up at the links above and see the notebooks for connection instructions.
KDB.AI is a vector database with time-series capabilities that allows developers to build scalable, reliable, and real-time applications by providing advanced search, recommendation, and personalization for Generative AI applications. KDB.AI is a key component of full-stack Generative AI applications that use Retrieval Augmented Generation (RAG).
Built by KX, the creators of kdb+, KDB.AI lets users combine unstructured vector embedding data with structured time-series datasets, enabling hybrid use cases that benefit from both the rigor of conventional time-series data analytics and the usage patterns of vector databases in the Generative AI space.
KDB.AI supports the following feature set:
- Multiple index types: Flat, qFlat, IVF, IVFPQ, HNSW and qHnsw.
- Multiple distance metrics: Euclidean, Inner-Product, Cosine.
- Top-N and metadata-filtered retrieval.
- Python and REST interfaces.
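To illustrate how top-N retrieval under these three distance metrics works, here is a minimal standalone NumPy sketch of the ranking logic. The metric identifiers (`L2`, `IP`, `CS`) are assumptions based on common vector-database conventions; this is an illustration of the concept, not the KDB.AI client API.

```python
import numpy as np

def top_n(query, vectors, metric="L2", n=3):
    """Return indices of the n nearest vectors under the given metric."""
    if metric == "L2":    # Euclidean distance: smaller is closer
        scores = np.linalg.norm(vectors - query, axis=1)
        order = np.argsort(scores)
    elif metric == "IP":  # inner product: larger is closer
        scores = vectors @ query
        order = np.argsort(-scores)
    elif metric == "CS":  # cosine similarity: larger is closer
        norms = np.linalg.norm(vectors, axis=1) * np.linalg.norm(query)
        scores = (vectors @ query) / norms
        order = np.argsort(-scores)
    else:
        raise ValueError(f"unknown metric: {metric}")
    return order[:n].tolist()

# Toy 2-dimensional "embeddings" and a query vector
vectors = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([1.0, 0.1])
print(top_n(query, vectors, metric="L2", n=2))  # → [0, 2]
```

An approximate index such as IVF or HNSW returns (approximately) the same top-N much faster by searching only a fraction of the vectors, trading a small amount of recall for speed.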
At this time, the repository contains the following samples:
- Python Quickstart: A quick introduction to the KDB.AI APIs in Python.
- TSS_non_transformed: Temporal Similarity Search (Non-Transformed) for time series search.
- TSS_transformed: Temporal Similarity Search (Transformed) for time series search.
- LlamaIndex Advanced RAG: Demonstration on how to use LlamaIndex with KDB.AI for RAG.
- LlamaIndex Samples: Hybrid Search, Multimodal RAG, and Multi Query Retriever LlamaIndex Samples.
- LlamaParse PDF RAG: Use LlamaParse to extract embedded elements from a PDF and build a RAG pipeline.
- Document Search: Semantic Search on PDF Documents.
- Hybrid Search: Combine dense and sparse search to improve accuracy.
- Image Search: Image Search on Brain MRI Scans.
- Metadata Filtering: Metadata Filtering to increase search speed and accuracy.
- Fuzzy Filtering: Fuzzy Filtering on metadata columns to handle typos, international spelling differences, etc.
- Multi-Index Search: Use KDB.AI's multiple index search capability for multimodal retrieval.
- Multimodal RAG ImageBind: Multimodal RAG with images and text.
- Multimodal RAG Unified Text: Multimodal RAG with images descriptions and text.
- Recommendation System: Music Recommendation on Spotify Data.
- Pattern Matching: Pattern Matching on Sensor Data.
- qFlat Index: Document search using KDB.AI's qFlat on-disk index.
- qHnsw Index: Document search using KDB.AI's qHnsw on-disk index.
- Retrieval Augmented Generation with LangChain: Retrieval Augmented Generation (RAG) with LangChain.
- Retrieval Augmented Generation Evaluation with LangChain: Retrieval Augmented Generation (RAG) Evaluation with LangChain.
- Sentiment Analysis: Sentiment Analysis on Disneyland Resort Reviews.
- ChatGPT Retrieval Plugin: Example showing a question and answer session using a ChatGPT retrieval plugin using KDB.AI Vector Database.
- Langchain: Example showing a question and answer session using a Langchain integration with the KDB.AI Vector Database.
- LlamaIndex: KDB.AI integrates with the LlamaIndex framework for working with LLMs.
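The Hybrid Search sample above rests on a simple idea: normalize dense (semantic) and sparse (keyword, e.g. BM25) relevance scores, then blend them with a weight. The following NumPy sketch shows one common fusion scheme (min-max normalization plus a convex combination); the function and the `alpha` parameter are illustrative names, and KDB.AI's actual hybrid search weighting may differ.

```python
import numpy as np

def hybrid_scores(dense, sparse, alpha=0.5):
    """Blend dense and sparse relevance scores into one ranking.

    alpha=1.0 uses only dense (semantic) scores; alpha=0.0 only sparse
    (keyword) scores. Each score list is min-max normalized to [0, 1]
    before blending so the two scales are comparable.
    Illustrative only -- not the KDB.AI hybrid search API.
    """
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng else np.zeros_like(s)
    return alpha * norm(dense) + (1 - alpha) * norm(sparse)

dense = [0.9, 0.2, 0.4]   # e.g. cosine similarities from embeddings
sparse = [1.0, 8.0, 3.0]  # e.g. BM25 keyword-match scores
blended = hybrid_scores(dense, sparse, alpha=0.7)
print(int(np.argmax(blended)))  # → 0 (semantic match wins at alpha=0.7)
```

Lowering `alpha` toward 0 would instead favor document 1, the strongest keyword match, which is why the blend weight is usually tuned per workload.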
This section details the setup steps required to run these samples locally on your machine.
This setup guide assumes the following:
- You are using a Unix terminal or similar
- You have `python` >= 3.8 installed
- You have `pip` installed
**Tip**

Running out of disk space? By default, PyTorch installs both GPU and CPU packages. This repo does not require a GPU and will only use the CPU packages. The install command below installs only the CPU packages, saving approximately 1.5GB of disk space. This command is optional and, if used, should be run at the beginning of the notebook.

```shell
pip install torch --index-url https://download.pytorch.org/whl/cpu
```
- The necessary pip installs are at the beginning of each notebook. (Optional) For a comprehensive list of requirements, see the `requirements.txt` file in the repository:

  ```shell
  pip install -r requirements.txt
  ```
- Run a jupyter notebook session:

  ```shell
  jupyter notebook --no-browser
  ```

  This will load the jupyter session in the background and display a URL on screen for you.
- Paste this URL into your browser. This will bring up the samples for you to interact with.
In this repository, we may make available to you certain datasets for use with the Software. You are not obliged to use such datasets (with the Software or otherwise), but any such use is at your own risk. Any datasets that we may make available to you are provided “as is” and without any warranty, including as to their accuracy or completeness. We accept no liability for any use you may make of such datasets.