This project makes two AIs talk to each other — not just in text, but out loud, with synthesized audio speech.
This project is built with:
- Llama2, served via Ollama in Docker, to generate text from a prompt locally and offline.
- Coqui TTS, to generate speech from text locally and offline.
- The Llama2 API and gTTS, to generate text and speech through Meta's and Google's hosted APIs.
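At its core, the idea is a turn-taking loop: each AI's reply becomes the other's next prompt, and every reply is also spoken aloud. A minimal sketch of that loop, with `generate()` as a stub standing in for the real Llama2 and TTS calls (the function names here are illustrative, not code from this repo):

```python
# Hypothetical sketch of the core conversation loop. In the real project,
# generate() would call Llama2 (local Ollama or the public API) and the
# reply would also be synthesized to audio via Coqui or gTTS.

def generate(speaker: str, prompt: str) -> str:
    # Stand-in for a Llama2 text-generation call.
    return f"{speaker}'s reply to: {prompt}"

def converse(opening: str, turns: int) -> list[str]:
    transcript = []
    prompt = opening
    for i in range(turns):
        speaker = "AI-1" if i % 2 == 0 else "AI-2"
        reply = generate(speaker, prompt)
        transcript.append(reply)
        # The reply becomes the next speaker's prompt;
        # audio playback would happen here as well.
        prompt = reply
    return transcript

transcript = converse("Hello there!", 4)
```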
For a quick start, install Node.js and Python, then run:

```sh
git clone https://github.com/midnqp/ai-chats-ai
cd ai-chats-ai
npm install
pip3 install -r requirements.txt
npm run trial
```
The trial run uses the Llama2 API and the gTTS API to start a conversation. Since these public APIs are rate-limited, the conversation may be short — but it will be enjoyable ✨
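Since the public APIs are rate-limited, one common way to keep a trial conversation going a little longer is to retry failed calls with exponential backoff. A small sketch of that pattern, with a stubbed callable standing in for a rate-limited API (this helper is illustrative, not part of the repo):

```python
import time

def with_backoff(call, retries: int = 3, base_delay: float = 1.0):
    """Retry a rate-limited API call, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for an HTTP 429 "too many requests"
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Demo: a callable that fails twice, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, retries=3, base_delay=0.01)
```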
To run fully locally, follow these steps:
- Install Ollama with Docker, pull the model, and start the server:

  ```sh
  ollama pull llama2-uncensored
  ollama serve
  ```

- Check that the server is running:

  ```sh
  curl localhost:11434
  ```

- Start the conversation:

  ```sh
  npm run start
  ```

  And that's it 🚀
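Under the hood, the local run talks to Ollama over HTTP. A sketch of how a request to Ollama's documented `/api/generate` endpoint might be built (`build_request` is a hypothetical helper, not code from this repo):

```python
import json

# Ollama listens on localhost:11434 by default; /api/generate is its
# documented text-generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama2-uncensored") -> str:
    """Serialize a non-streaming generation request for Ollama."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload)

body = build_request("Say hello in one sentence.")
```

Sending `body` as the POST payload to `OLLAMA_URL` returns a JSON object whose `response` field holds the generated text.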
For advanced users, it is recommended to speed up the Llama2 model by letting it use all of your device's physical cores. For example, if your device has 10 physical (not logical) cores, create a Modelfile and append the following line:

```
PARAMETER num_thread 10
```

Then create a new model under a new name from that Modelfile.
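Put together, the Modelfile might look like this (the model name `llama2-10t` is just an example):

```
FROM llama2-uncensored
PARAMETER num_thread 10
```

```sh
ollama create llama2-10t -f Modelfile
```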