Run in docker #98

Closed · DrMartiner opened this issue Aug 15, 2023 · 12 comments · Fixed by #1418
Labels: enhancement (New feature or request)
Milestone: 2.2.0

Comments

@DrMartiner

No description provided.

DrMartiner added a commit to DrMartiner/Fooocus that referenced this issue Aug 15, 2023
@dstrop

dstrop commented Aug 17, 2023

I will just leave this here.

Dockerfile:

```Dockerfile
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04

RUN apt-get update && \
    apt-get install --no-install-recommends -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

RUN --mount=type=cache,target=/root/.cache/pip pip3 install virtualenv
RUN mkdir /app

WORKDIR /app

RUN virtualenv /venv
RUN . /venv/bin/activate && \
    pip3 install --upgrade pip

COPY requirements_versions.txt /app/requirements_versions.txt
RUN . /venv/bin/activate && \
    pip3 install -r requirements_versions.txt

COPY . /app/

ENTRYPOINT [ "bash", "-c", ". /venv/bin/activate && exec \"$@\"", "--" ]
CMD [ "python3", "launch.py", "--listen" ]
version: "3.3"
services:
  testui:
    container_name: testui
    build:
      context: .
    ports:
      - "7860:7860"
    stdin_open: true
    tty: true
    volumes:
      - ./:/app
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
```
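Note that the `RUN --mount=type=cache` line in the Dockerfile requires BuildKit. Recent Docker versions enable it by default; on older setups you may need to switch it on explicitly (a sketch, assuming the Compose v2 CLI):

```bash
# Enable BuildKit explicitly so the pip cache mount works, then build and start
DOCKER_BUILDKIT=1 docker compose up --build
```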

@mabushey

mabushey commented Sep 1, 2023

Thank you @dstrop, I was unable to get Fooocus working via the Linux Python directions, but this is working perfectly.

@gusmolinabr

gusmolinabr commented Oct 10, 2023

It appears the instructions are wrong. I could not get it to work either, and I'd prefer not to run it in Docker.

[screenshot]

@mabushey

@gusmolinabr Docker is by far the easiest way to get anything Python or Java to work.

@brianetaveras

brianetaveras commented Dec 2, 2023

I ran into some issues using the config posted by @dstrop - thank you for sharing! Here is my adjusted Dockerfile:

```Dockerfile
FROM nvidia/cuda:12.3.0-runtime-ubuntu22.04

RUN apt-get update && \
    apt-get install --no-install-recommends -y python3 python3-pip libgl1-mesa-glx libglib2.0-0 libsm6 libxrender1 libxext6 && \
    rm -rf /var/lib/apt/lists/*

RUN --mount=type=cache,target=/root/.cache/pip pip3 install virtualenv
RUN mkdir /app

WORKDIR /app

RUN virtualenv /venv
RUN . /venv/bin/activate && \
    pip3 install --upgrade pip

COPY requirements_versions.txt /app/requirements_versions.txt
RUN . /venv/bin/activate && \
    pip3 install -r requirements_versions.txt

COPY . /app/

# may need to adjust the arch
RUN ln -s /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so.1
RUN ln -s /usr/lib/x86_64-linux-gnu/libgthread-2.0.so.0 /usr/lib/libgthread-2.0.so.0

ENTRYPOINT [ "bash", "-c", ". /venv/bin/activate && exec \"$@\"", "--" ]
CMD [ "python3", "launch.py", "--listen" ]

The project runs on port 7865 by default so people may want to adjust the docker-compose file as well
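For example, a direct run command using that default (the `fooocus` image tag is an assumption; the compose file above would change its `ports` mapping to `"7865:7865"` equivalently):

```bash
docker build -t fooocus .
# Publish Fooocus's default port 7865 instead of 7860
docker run --rm --gpus all -p 7865:7865 fooocus
```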

@salatfreak

Stripping away the venv, adding the installation of torch and torchvision that is otherwise done when running launch.py for the first time, making the CUDA version configurable, and flagging the models directory as a volume, I arrived at this:

```Dockerfile
ARG CUDA_VERSION=12.3.0

FROM docker.io/nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu22.04

RUN apt-get update && \
    apt-get install --no-install-recommends -y \
      libgl1-mesa-glx libglib2.0-0 libsm6 libxrender1 libxext6 \
      git python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/lllyasviel/Fooocus /app
WORKDIR /app

RUN pip install --no-cache -r requirements_versions.txt
RUN pip install --no-cache torch==2.1.0 torchvision==0.16.0 \
    --extra-index-url "https://download.pytorch.org/whl/cu121"

VOLUME /app/models

ENTRYPOINT [ "python3", "launch.py", "--listen" ]
```

On Debian stable the latest CUDA image doesn't seem to work, so I use `--build-arg CUDA_VERSION=12.0.1`. It runs rootless via `podman run --rm -v fooocus-models:/app/models -p 7865:7865 fooocus`.
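Spelled out as commands (names and values taken from the comment above):

```bash
# Build with the older CUDA base that works on Debian stable's driver
podman build -t fooocus --build-arg CUDA_VERSION=12.0.1 .
# Rootless run, with a named volume holding the models
podman run --rm -v fooocus-models:/app/models -p 7865:7865 fooocus
```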

@mambari

mambari commented Dec 3, 2023

I tried it with Portainer on my Windows machine.

```
(base) root@85a74394deb6:/Fooocus# conda activate fooocus
(fooocus) root@85a74394deb6:/Fooocus# python launch.py
[System ARGV] ['launch.py']
Python 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
Fooocus version: 2.1.824
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 24564 MB, total RAM 15878 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: /Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.56 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
```

And I have:

[screenshot]

But I cannot test it:

[screenshot]
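The log above shows Gradio binding to 127.0.0.1 because `launch.py` was started without `--listen`, which makes the published port unreachable from the Windows host. A likely fix (the flag is the same one used in the Dockerfiles above) is to bind to all interfaces:

```bash
# Inside the container: listen on 0.0.0.0 so Docker's port mapping can reach it
python launch.py --listen
```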

@dgsiegel

dgsiegel commented Dec 5, 2023

For AMD GPUs I've had success with the following Docker/Podman file:

```Dockerfile
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get install --no-install-recommends -y \
      libgl1-mesa-glx libglib2.0-0 libsm6 libxrender1 libxext6 \
      wget aria2 \
      git python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/lllyasviel/Fooocus /app
WORKDIR /app

RUN pip install --no-cache -r requirements_versions.txt
RUN pip uninstall -y torch torchvision torchaudio torchtext functorch xformers
RUN pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

VOLUME /app/models

ENV HSA_OVERRIDE_GFX_VERSION="10.3.0"

ENTRYPOINT [ "python3", "launch.py", "--listen" ]
```

Build it with `docker build -t fooocus .`, then run it with `docker run -it --rm -v fooocus-models:/app/models --device=/dev/kfd --device=/dev/dri -p 7865:7865 fooocus`.
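The baked-in `HSA_OVERRIDE_GFX_VERSION=10.3.0` targets RDNA2-class GPUs; for other generations the value can be overridden at run time instead of rebuilding. The `11.0.0` value below is an assumption for RDNA3 cards, so check your GPU's actual gfx target:

```bash
# Override the ROCm gfx target at run time (example value, adjust per GPU)
docker run -it --rm -v fooocus-models:/app/models \
  --device=/dev/kfd --device=/dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  -p 7865:7865 fooocus
```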

@mashb1t (Collaborator)

mashb1t commented Jan 1, 2024

Feel free to also contribute your suggestions to PR #1418.

@realies

realies commented Feb 9, 2024

[screenshot]

@mashb1t (Collaborator)

mashb1t commented Feb 9, 2024

I can relate... This is a feature in Phase 2 - Features now; I'm starting to merge PRs for Phase 1 - Bugfixes tomorrow (now finally permitted to do so!).
See #2154

@mashb1t mashb1t added this to the 2.2.0 milestone Feb 25, 2024
@mashb1t mashb1t closed this as completed Feb 26, 2024