✅🦙Kllama: Your Local & Private Chatbot :dependabot:

⚡ Your personal & private chatbot running on open LLM model(s) ⚡



Author: Kunal Suri, Ph.D.

Other Info: The symbol {✅🦙} for Kllama stands for OK, Llama (or) Kunal's llama! 😁🙏


🚀 How to use Kllama?

This application can be executed on your local machine using open-source LLM models via the Ollama framework.

Table of Contents

  1. Prerequisites
  2. Running Kllama Locally via CLI


Prerequisites

The Kllama app runs on your local machine via the Ollama framework. To use this app, you need to complete the following steps:

  1. Download, Install, and Run the Ollama Application

    The Ollama framework enables easy interaction between your chatbot and LLM models from the convenience of your local machine.

    Follow the instructions on the Ollama website to download and install the framework on your local machine. Once Ollama is installed, you can run a local open-source LLM model on your machine. The general steps are given below:

    (For more details and info, please check the Ollama website or GitHub: https://github.com/ollama/ollama)

  • Download and install the Ollama app (supported platforms: Windows, Linux, macOS)

    • For Windows users: Once the Ollama application is downloaded, you will need to run the Ollama app from Programs

    • Once the Ollama app is running, go to the command prompt (CLI) and type:

         ollama list

      Note: This command lists all the open-source LLM models available locally on your system. On the first run, there may be no LLM models on your system yet.

  • To download an LLM model, run the command:

    ollama run <Model_Name>

    For example:

       ollama run mistral

    This command checks whether the model is already available in the local repository on your machine; if not, it fetches the LLM model from the Ollama website and then starts running it.

  2. Clone the Git repo to your local machine

    If you have not already cloned the Kllama repo to your local machine, do the following:

git clone https://github.com/kunalsuri/kllama.git

⚠️ Recommendation: To keep the Python installations of your other projects clean, we recommend creating a new Python virtual environment for the Kllama project and installing the Python packages via requirements.txt (in Step 3 below).


  3. Install the Python packages needed to run the Kllama Chatbot

The kllama.py application needs the following packages: ollama and streamlit. They are included in requirements.txt and can be installed easily. To install the packages, go to the folder containing Kllama and enter the command below:

pip install -r requirements.txt
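
If you want to sanity-check your setup before launching the chatbot, the short Python sketch below is one way to do it. It is not part of the Kllama repo; it simply assumes the Ollama app is running locally and that a model (here, mistral) has already been pulled, and uses the installed ollama package to talk to it:

```python
# check_setup.py -- illustrative sketch, not part of the Kllama repo.
# Assumes the Ollama app is running locally and that a model
# (here, mistral) has already been pulled with `ollama run mistral`.
import ollama

# Show which models are available locally (the CLI equivalent is `ollama list`)
print(ollama.list())

# Send a single prompt to the local model and print the reply
response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```

If running it (for example as `python check_setup.py`) prints a reply, both the local Ollama server and the Python packages from requirements.txt are working.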


Running Kllama Locally via CLI

Kllama uses open-source LLM models running on your machine via the Ollama framework. To run these models, you need to install Ollama (as detailed in the Prerequisites section above).

We assume that you have installed the Ollama framework and downloaded one or more open LLM models such as Mistral, Meta's Llama 2, or Code Llama, to name just a few.

To run Kllama via the command-line interface (CLI), use the following command:

streamlit run kllama.py

✅ Once executed, the Kllama Chatbot will open in your web browser and be ready for use.
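
For readers curious about what happens under the hood, the sketch below shows the general pattern a Streamlit + Ollama chatbot follows. It is a minimal illustration, not the actual kllama.py: the model name, title, and layout here are assumptions.

```python
# minimal_chat.py -- a minimal sketch of a Streamlit + Ollama chat loop.
# This is NOT the actual kllama.py; the model name and layout are assumptions.
import ollama
import streamlit as st

st.title("Local chatbot (illustrative sketch)")

# Keep the conversation history across Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Re-render the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# Read the next user prompt, query the local model, and display the answer
if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    response = ollama.chat(model="mistral", messages=st.session_state.messages)
    answer = response["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```

A sketch like this would be launched the same way (e.g., streamlit run minimal_chat.py); kllama.py itself is the source of truth for how the app actually works and may differ.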



Further Reading on Llama | Meta

🛡️Responsible AI

🇪🇺 EU's Guidelines on the responsible use of generative AI in research: https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en


⚠️ Note: Under no circumstances shall the author(s) or copyright holder(s) be held liable for any claim, damages, or other liabilities arising from the use of this code, which incorporates several code snippets generated by artificial intelligence (AI), along with open-source contributions from other programmers sourced from platforms including GitHub and others, pursuant to the terms outlined in the MIT License.