- First, download the GGUF model file llama-2-13b-chat.Q5_K_S.gguf from
  https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF
- Move the model to:
  model/llama-2-13b-chat.Q5_K_S.gguf
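The two steps above can be scripted; a minimal sketch, assuming `curl` is available and that the file is served under Hugging Face's `resolve/main` download path (`download_model` is a hypothetical helper, not part of this repo):

```shell
# Hypothetical helper: fetch the quantised model into model/.
MODEL_FILE="llama-2-13b-chat.Q5_K_S.gguf"
MODEL_URL="https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/${MODEL_FILE}"

download_model() {
    mkdir -p model
    if [ -f "model/${MODEL_FILE}" ]; then
        # The file is several GB, so skip the download when it is already in place.
        echo "model/${MODEL_FILE} already present, skipping download"
    else
        curl -fL -o "model/${MODEL_FILE}" "$MODEL_URL"
    fi
}
```

Call `download_model` from the repository root; re-running it is a no-op once the file exists.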
- Navigate to the frontend directory:
cd frontend
- Install Node.js 18.17.1
- Install PNPM 9.1.2:
  npm install -g pnpm@9.1.2
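Before installing dependencies, it can help to confirm that the pinned versions above are what is actually on your PATH; a small sketch, where `check_version` is a hypothetical helper and the expected versions are the ones this guide lists:

```shell
# Print installed vs. expected version for each frontend tool.
EXPECTED_NODE="v18.17.1"
EXPECTED_PNPM="9.1.2"

check_version() {
    # $1 = command name, $2 = expected version string
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: $("$1" --version 2>/dev/null) (expected $2)"
    else
        echo "$1: not installed (expected $2)"
    fi
}

check_version node "$EXPECTED_NODE"
check_version pnpm "$EXPECTED_PNPM"
```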
- Install Dependencies:
pnpm install
- Start the application in dev mode:
pnpm dev
- Create a production build:
pnpm build
- To check for linter problems:
  pnpm lint
  To fix them automatically:
  pnpm lint:fix
- Run the Docker containers (see docker compose up -d below).
- After making sure that the database and the frontend are running, run the
  LlmServiceApplication.
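One way to "make sure the database and the frontend are running" before launching the LlmServiceApplication is a small TCP reachability check; a sketch, where `preflight` is a hypothetical helper:

```shell
# Hypothetical preflight helper: reports whether a dependency answers on a
# TCP port before you start LlmServiceApplication. Uses nc when available.
preflight() {
    # $1 = host, $2 = port
    if command -v nc >/dev/null 2>&1 && nc -z "$1" "$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 not reachable"
    fi
}
```

For example, `preflight localhost 5432` for a Postgres database or `preflight localhost 3000` for the frontend dev server; both port numbers are assumptions, not values taken from this guide.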
- To run the backend linter (check only):
  mvn spotless:check
  To fix formatting issues:
  mvn spotless:apply
- The backend uses a C++ plugin to run the model; it requires:
  C++11
  g++
  cmake
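A quick presence check for that native toolchain; `check_tool` is a hypothetical helper:

```shell
# Report whether each required build tool is on the PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING"
    fi
}

check_tool g++
check_tool cmake
```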
- Run:
docker compose up -d
- If something changed in the frontend and you want the latest changes,
  rebuild its image:
  docker compose up -d --no-deps --build build-fe