feat: Docker support (#25)
Co-authored-by: Tim Nunamaker <[email protected]>
volod-vana and tnunamak authored Mar 18, 2024
1 parent c7248a4 commit 2c943db
Showing 3 changed files with 113 additions and 0 deletions.
1 change: 1 addition & 0 deletions .dockerignore
@@ -0,0 +1 @@
poetry.lock
53 changes: 53 additions & 0 deletions Dockerfile
@@ -0,0 +1,53 @@
# Base image for building the UI
FROM node:18.19-alpine3.18 AS selfie-ui
WORKDIR /selfie
COPY selfie-ui/package.json selfie-ui/yarn.lock ./
RUN yarn install --frozen-lockfile --non-interactive
COPY selfie-ui/ .
RUN yarn run build

# Base image for the application
FROM python:3.11 AS selfie-base
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PIP_NO_CACHE_DIR=1
WORKDIR /selfie
COPY . .
COPY --from=selfie-ui /selfie/out/ ./selfie-ui/out
RUN pip install poetry --no-cache-dir
RUN poetry config virtualenvs.create false
RUN poetry install --no-interaction --no-ansi
EXPOSE 8181

# CPU-specific configuration
FROM selfie-base AS selfie-cpu
CMD ["python", "-m", "selfie"]

# GPU-specific configuration using an NVIDIA base image
# Ensure that the chosen image version has wheels available at https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels.
# You can see what CUDA version an image uses at https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html.
# For example, the image nvcr.io/nvidia/pytorch:23.10-py3 uses CUDA 12.2, which is available at https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu122.
FROM nvcr.io/nvidia/pytorch:23.10-py3 AS selfie-gpu
COPY --from=selfie-base /selfie /selfie
WORKDIR /selfie

RUN pip install poetry --no-cache-dir
ENV PATH="/root/.local/bin:$PATH"
RUN poetry install --no-interaction --no-ansi

# Install a CUDA-enabled llama-cpp-python build (see the wheel-compatibility notes above)
RUN bash /selfie/scripts/llama-cpp-python-cublas.sh

# --verbose and other options should probably be controlled by the user
CMD ["poetry", "run", "python", "-m", "selfie", "--gpu", "--verbose"]

# ARM64-specific configuration (Apple Silicon)
FROM selfie-base AS selfie-arm64
# Uncomment below if additional dependencies are needed for ARM64
# RUN apt-get update && apt-get install -y --no-install-recommends <arm64-specific-dependencies> && rm -rf /var/lib/apt/lists/*
# Note: inside a Linux container, `uname -m` reports "aarch64" on Apple Silicon, not "arm64"
CMD if [ "$(uname -m)" = "aarch64" ] || [ "$(uname -m)" = "arm64" ]; then \
        echo "Running with GPU support"; \
        python -m selfie --gpu; \
    else \
        echo "Running without GPU support"; \
        python -m selfie; \
    fi
59 changes: 59 additions & 0 deletions README.md
@@ -127,6 +127,65 @@ For most users, the easiest way to install Selfie is to follow the [Quick Start]

> **Note**: You can host selfie at a publicly-accessible URL with [ngrok](https://ngrok.com). Add your ngrok token (and optionally, ngrok domain) in `selfie/.env` and run `poetry run python -m selfie --share`.
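> For example, a minimal `selfie/.env` might look like this (the variable names below are illustrative assumptions, not confirmed by this diff — check the project's configuration for the actual keys):
> ```bash
> # selfie/.env — hypothetical keys for ngrok sharing
> NGROK_AUTHTOKEN=your-ngrok-token
> NGROK_DOMAIN=your-subdomain.ngrok-free.app
> ```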

## Using Docker

You can also run Selfie using Docker. To do so, follow these steps:

1. Ensure that [Docker](https://www.docker.com) is installed.
2. Clone or [download](https://github.com/vana-com/selfie) the Selfie repository.
3. In a terminal, navigate to the project directory.
4. Run the following commands for the image you want to use (CPU, GPU, or ARM64).

Each of these starts the server, with the UI available in your browser at http://localhost:8181.
Your data is stored in the `data` directory.
The commands also mount your Hugging Face cache into the container, so any models you have already downloaded won't be fetched again.

### CPU Image
```bash
docker build --target selfie-cpu -t selfie:cpu .

docker run -p 8181:8181 \
--name selfie-cpu \
-v $(pwd)/data:/selfie/data \
-v $(pwd)/selfie:/selfie/selfie \
-v $(pwd)/selfie-ui:/selfie/selfie-ui \
-v $HOME/.cache/huggingface:/root/.cache/huggingface \
selfie:cpu
```
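
Once the container is running, you can sanity-check that the server is reachable on the published port (a minimal check — this diff doesn't document specific endpoints):

```bash
# Follow the container logs until the server reports it is listening
docker logs -f selfie-cpu

# In another terminal, confirm the server responds on the published port
curl -i http://localhost:8181
```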

### NVIDIA GPU Image
```bash
docker build --target selfie-gpu -t selfie:gpu .

docker run -p 8181:8181 \
--name selfie-gpu \
-v $(pwd)/data:/selfie/data \
-v $(pwd)/selfie:/selfie/selfie \
-v $(pwd)/selfie-ui:/selfie/selfie-ui \
-v $HOME/.cache/huggingface:/root/.cache/huggingface \
--gpus all \
--ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
selfie:gpu
```
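
To confirm that the GPU is actually visible inside the running container, you can use `nvidia-smi`, which ships with the NVIDIA base image:

```bash
# List the GPUs visible inside the container
docker exec selfie-gpu nvidia-smi
```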

### macOS ARM64 Image
```bash
DOCKER_BUILDKIT=0 docker build --target selfie-arm64 -t selfie:arm64 .
docker run -p 8181:8181 \
--name selfie-arm64 \
-v $(pwd)/data:/selfie/data \
-v $(pwd)/selfie:/selfie/selfie \
-v $(pwd)/selfie-ui:/selfie/selfie-ui \
-v $HOME/.cache/huggingface:/root/.cache/huggingface \
selfie:arm64
```
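
Note that each `docker run` command above assigns a fixed `--name`; re-running it after the container has exited will fail with a name conflict, so remove the old container first:

```bash
# Remove the stopped container so its name can be reused
docker rm selfie-arm64   # likewise selfie-cpu or selfie-gpu
```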

> **Note**: On an M1/M2/M3 Mac, you may need to prefix the build command with `DOCKER_BUILDKIT=0` to disable BuildKit, as shown above:
> ```bash
> DOCKER_BUILDKIT=0 docker build --target selfie-arm64 -t selfie:arm64 .
> ```

## Setting Up Selfie

