
LLaMa.cpp Gemma Web-UI

This project uses llama.cpp to load models from local files, delivering fast and memory-efficient inference, and serves them through a Streamlit web UI.
The project currently targets Google Gemma and will support more models in the future.
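
To illustrate the inference path, here is a minimal sketch of loading a local Gemma file with the llama-cpp-python bindings. This is an assumption about how the project wires things up, not code from the repository; the model filename and parameters are placeholders:

    # Minimal sketch using the llama-cpp-python bindings (assumed dependency).
    # The model path and parameters below are placeholders, not project defaults.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/gemma-2b-it.gguf", n_ctx=2048)
    result = llm("Question: What is llama.cpp? Answer:", max_tokens=64)
    print(result["choices"][0]["text"])

Because llama.cpp memory-maps the quantized model file, this keeps RAM usage close to the on-disk model size.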

Deployment

Prerequisites
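
The prerequisites are not listed here. As a rough sketch based on the stack named above (llama.cpp and Streamlit), you will likely need a Python 3 environment with the llama-cpp-python bindings and Streamlit installed; the exact package set may differ:

    pip install llama-cpp-python streamlit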

Installation

  1. Download a Gemma model (in GGUF format, as required by llama.cpp) from Google's model repository.
  2. Edit the model-path entry in config.yaml so that it points to the downloaded model file (see the example after these steps).
  3. Start the web UI with:
    screen -S "webui" bash ./start-ui.sh
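
For reference, a minimal config.yaml might look like the following. Only the model-path key is mentioned in this README; the path value is a placeholder:

    # config.yaml -- model-path is the key referenced above;
    # the value shown is a hypothetical example
    model-path: ./models/gemma-2b-it.gguf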
