🚀 Welcome to Private-AI!

Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.

🌐 What does Private-AI offer?

  • High-level API: Abstracts the complexity of a Retrieval Augmented Generation (RAG) pipeline. Handles document ingestion, chat, and completions.

  • Low-level API: For advanced users implementing custom pipelines. Includes features like embeddings generation and contextual-chunk retrieval. (A usage sketch of both APIs follows below.)
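
As a rough sketch of how these APIs might be used once the server is running (the routes, form field, and payload keys below are assumptions carried over from upstream PrivateGPT and may differ in this fork):

    # Ingest a local document into the index (route and "file" field name assumed)
    curl -F "file=@./my_document.pdf" http://localhost:8001/v1/ingest

    # High-level API: ask a question grounded in the ingested documents
    curl http://localhost:8001/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "What does my_document.pdf say about pricing?"}], "use_context": true}'

    # Low-level API: generate embeddings for arbitrary text (route assumed)
    curl http://localhost:8001/v1/embeddings \
      -H "Content-Type: application/json" \
      -d '{"input": "contract termination clause"}'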

🌟 Why Private-AI?

Privacy is the key motivator! Private-AI addresses concerns in data-sensitive domains like healthcare and legal, ensuring your data stays under your control.

🤖 Installation


Private-Ai Installation Guide

  • Install Python 3.11 (or 3.12)

  • Using apt (Debian-based Linux such as Kali, Ubuntu, etc.):

    sudo apt-get install python3.11
    sudo apt-get install python3.11-venv
    
  • Using pyenv:

    pyenv install 3.11
    pyenv local 3.11
  • Install Poetry for dependency management.

    sudo apt install python3-poetry
    sudo apt install python3-pytest

Installation Without GPU:

  • Git clone the Private-Ai repository, create a virtual environment, install, and run:
    git clone https://github.com/AryanVBW/Private-Ai
    cd Private-Ai && \
    python3.11 -m venv .venv && source .venv/bin/activate && \
    pip install --upgrade pip poetry && poetry install --with ui,local && ./scripts/setup
    python3.11 -m private_gpt

Running Private-AI:

  • To run it again later, just go to the Private-Ai directory and run the following command:
    make run

👍👍 All Done 👍👍

For GPU utilization and customization, follow the steps below:

  • For Private-Ai to run fully locally, GPU acceleration is required (CPU execution is possible, but very slow).

Clone the Repository:

  • Git clone the Private-Ai repository:
    git clone https://github.com/AryanVBW/Private-Ai
    cd Private-Ai

Dependencies Installation:

  • Install make (OSX: brew install make, Windows: choco install make).
  • Install dependencies:
    poetry install --with ui

Local LLM Setup:

  • Install extra dependencies for local execution:
    poetry install --with local
  • Use the setup script to download embedding and LLM models:
    poetry run python scripts/setup

Finalize:

  • Install Private-AI:
    make

Verification and Run:

  • Run make run or poetry run python -m private_gpt.
  • Open http://localhost:8001 to see the Gradio UI with a mock LLM echoing input (a quick command-line check follows below).
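
A quick way to confirm from a terminal that the API is actually listening (the /health route is taken from upstream PrivateGPT and is assumed to exist in this fork):

    # Expect a small success response if the server is up; otherwise check the logs
    curl http://localhost:8001/health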

Customization:

  • Customize low-level parameters in private_gpt/components/llm/llm_component.py.
  • Configure LLM options in settings.yaml (a run-time example follows below).
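
For example, upstream PrivateGPT layers extra settings-<profile>.yaml files over settings.yaml via the PGPT_PROFILES environment variable; assuming this fork kept that mechanism, a custom profile could be selected at run time like this:

    # Hypothetical example: apply settings-local.yaml on top of settings.yaml for this run
    PGPT_PROFILES=local make run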

GPU Support:

  • OSX: Build llama.cpp with Metal support.

    CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
  • Windows NVIDIA GPU: Install VS2022, CUDA toolkit, and run:

    $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
  • Linux NVIDIA GPU and Windows-WSL: Install CUDA toolkit and run:

    CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
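
After reinstalling llama-cpp-python with the GPU flags above, it is worth confirming that the model is actually offloaded to the GPU; a rough check on NVIDIA systems (generic tooling, not specific to this project):

    # While the model is answering a prompt, the python process should appear here
    # with non-zero GPU memory usage
    nvidia-smi

    # In the startup log of "make run", GPU-enabled llama.cpp builds typically print "BLAS = 1"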

Troubleshooting:

  • Check GPU support and dependencies for your platform.
  • For C++ compiler issues, follow the troubleshooting steps below.

Note: If you run into issues, retry the installation in verbose mode with -vvv.

Troubleshooting C++ Compiler:

  • Windows 10/11: Install Visual Studio 2022 and MinGW.
  • OSX: Ensure Xcode is installed, or install clang/gcc with Homebrew (quick checks below).
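
A quick way to confirm a working compiler toolchain before reinstalling llama-cpp-python (standard platform commands, not project-specific):

    # OSX: install the Xcode command line tools if they are missing
    xcode-select --install

    # Linux / WSL / MinGW: check that a C++ compiler is on the PATH
    g++ --version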

🧩 Architecture Highlights:

  • FastAPI-Based API: Follows the OpenAI API standard, making it easy to integrate.

  • LlamaIndex Integration: Leverages LlamaIndex for the RAG pipeline, providing flexibility and extensibility.

  • Present and Future: Evolving into a gateway for generative AI models and primitives. Stay tuned for exciting new features!

💡 How to Contribute?

Contributions are welcome! Check the ProjectBoard for ideas. Ensure code quality with format and typing checks (run make check).

🤗 Supporters:

Supported by Qdrant, Fern, and LlamaIndex. Influenced by projects like LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.

👏 Thank you for contributing to the future of private and powerful AI with Private-AI! 📝 License: Apache-2.0

Copyright Notice

This is a modified version of PrivateGPT. All rights and licenses belong to the PrivateGPT team.

© 2023 PrivateGPT Developers. All rights reserved.
