Docker image #273
Conversation
Hey @zecloud, thanks for adding the Dockerfile. I tested it out on my Ubuntu rig and it worked.
Here are a couple of changes I'd suggest:
- In the Dockerfile, please set DEBIAN_FRONTEND=noninteractive for all commands, since some of the Python packages prompt for user input during package setup, which usually causes the build to fail. So just add ARG DEBIAN_FRONTEND=noninteractive to the top of the file.
- It'd be nice to also update the README with instructions on how to run this after mounting GPUs to the Docker container. Something like: docker run --gpus all tortoise-tts:latest
But otherwise, this PR works!
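Taken together, the two suggestions would look roughly like this (the tortoise-tts:latest tag comes from the example in this thread; adjust it to whatever you tag the image as):

```dockerfile
# Top of the Dockerfile: suppress interactive prompts during package setup
ARG DEBIAN_FRONTEND=noninteractive
```

and in the README:

```shell
# Build the image, then run it with all host GPUs attached
docker build -t tortoise-tts:latest .
docker run --gpus all tortoise-tts:latest
```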
Hey @snpranav
Hey @zecloud I was having errors running on my 4090 with those older versions of CUDA, but this is working well:

```dockerfile
FROM nvidia/cuda:11.8.0-base-ubuntu20.04
ENV PYTHON_VERSION=3.8

RUN export DEBIAN_FRONTEND=noninteractive \
    && apt-get -qq update \
    && apt-get -qq install --no-install-recommends \
        libsndfile1-dev \
        git \
        python${PYTHON_VERSION} \
        python${PYTHON_VERSION}-venv \
        python3-pip \
    && rm -rf /var/lib/apt/lists/*

RUN ln -s -f /usr/bin/python${PYTHON_VERSION} /usr/bin/python3 && \
    ln -s -f /usr/bin/python${PYTHON_VERSION} /usr/bin/python && \
    ln -s -f /usr/bin/pip3 /usr/bin/pip

RUN pip install --upgrade pip

# 2. Copy files
COPY . /src

RUN pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118

WORKDIR /src

# 3. Install dependencies
RUN pip install -r requirements-docker.txt
RUN python3 setup.py install
```
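Assuming that Dockerfile sits at the repo root, a quick way to build it and sanity-check that the container actually sees the GPU (the cu118 tag here is illustrative, not from this PR):

```shell
docker build -t tortoise-tts:cu118 .
# Should print True if the GPU is visible inside the container
docker run --rm --gpus all tortoise-tts:cu118 \
    python -c "import torch; print(torch.cuda.is_available())"
```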
Hey @kfatehi,
This works for me, but is the desired behaviour for it to download several gigs of model data every time the container runs? I feel there's a bug here somewhere. Even with a persistent models dir and TORTOISE_MODELS_DIR set, it spends about half the time downloading.
After some debugging, it looks like it's not really the Docker image that's the issue, but how Hugging Face caches the transformers. I solved it by mounting a volume to /cache and setting the environment variable TRANSFORMERS_CACHE=/cache.
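To see why the volume mount helps: transformers picks its cache directory from the TRANSFORMERS_CACHE environment variable when it is set, so pointing that at a mounted volume makes the downloads persist across container runs. A minimal sketch of that lookup (the fallback path shown is the commonly documented default, an assumption here rather than something taken from this PR):

```python
import os

def transformers_cache_dir() -> str:
    """Resolve the model cache directory: TRANSFORMERS_CACHE wins if set,
    otherwise fall back to the default under the user's home directory."""
    default = os.path.join(
        os.path.expanduser("~"), ".cache", "huggingface", "transformers"
    )
    return os.environ.get("TRANSFORMERS_CACHE", default)

os.environ["TRANSFORMERS_CACHE"] = "/cache"
print(transformers_cache_dir())  # -> /cache
```

With the container started via `docker run -v /some/host/dir:/cache -e TRANSFORMERS_CACHE=/cache ...`, the second run finds the models already in place.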
hey @bmurray |
Hi,
I've made a Docker image to use it with GPU on Azure with Azure Container Instances