Update README.md
mostlygeek authored Jan 2, 2025
1 parent 2e45f56 commit 1b04d03
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion README.md
@@ -8,7 +8,7 @@ llama-swap is a light weight, transparent proxy server that provides automatic m
Written in golang, it is very easy to install (a single binary with no dependencies) and configure (a single yaml file). Download a pre-built [release](https://github.com/mostlygeek/llama-swap/releases) or build it yourself from source with `make clean all`.
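To make the "single yaml file" concrete, here is a minimal sketch of what a configuration might look like. The `models`, `cmd`, and `proxy` keys shown here are illustrative assumptions about the schema; check the project's documentation for the exact format and model names:

```yaml
# hypothetical llama-swap config sketch: each entry maps a `model` name
# to the command that launches its upstream server and the address the
# proxy should forward requests to.
models:
  "llama-3.1-8b":
    cmd: llama-server --port 9001 -m llama-3.1-8b.gguf
    proxy: http://127.0.0.1:9001
  "qwen-2.5":
    cmd: llama-server --port 9002 -m qwen-2.5.gguf
    proxy: http://127.0.0.1:9002
```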

## How does it work?
- When a request is made to one of the OpenAI compatible endpoints, llama-swap extracts the `model` value and makes sure the right server configuration is loaded to serve it. If the wrong model server is running, it will stop it and start the correct one. This is where the "swap" part comes in: the upstream server is swapped to the correct one to serve the request.
+ When a request is made to an OpenAI compatible endpoint, llama-swap extracts the `model` value and loads the appropriate server configuration to serve it. If a server for a different model is already running, it will stop it and start the correct one. This is where the "swap" part comes in: the upstream server is automatically swapped to the correct one to serve the request.

In the most basic configuration llama-swap handles one model at a time. Using Profiles, multiple models can be loaded at the same time. You have complete control over how your GPU resources are used.
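The routing step described above — pulling the `model` field out of an OpenAI-style request body to decide which upstream server must be running — can be sketched in Go. This is a simplified illustration, not llama-swap's actual code; the function name and request shape are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// modelFromRequest is a hypothetical sketch of the first step the proxy
// performs: decode just the "model" field from an OpenAI-compatible
// request body. The proxy would then compare this name against the
// currently running upstream and swap servers if they differ.
func modelFromRequest(body []byte) (string, error) {
	var req struct {
		Model string `json:"model"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return "", err
	}
	return req.Model, nil
}

func main() {
	body := []byte(`{"model":"llama-3.1-8b","messages":[{"role":"user","content":"hi"}]}`)
	m, _ := modelFromRequest(body)
	fmt.Println(m) // llama-3.1-8b
}
```

Everything else in the request passes through unchanged; only the `model` name drives the swap decision.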

