I made an Nvidia TensorRT Extension Guide for A1111 #109
if-ai started this conversation in Show and tell
-
Could you also make a guide for Ubuntu? Much appreciated. <3
-
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
VIDEO LINKS📄🖍️o(≧o≦)o🔥
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
https://github.com/AUTOMATIC1111/stable-diffusion-webui
https://nvidia.custhelp.com/app/answers/detail/a_id/5487/~/tensorrt-extension-for-stable-diffusion-web-ui
https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/tree/main
https://www.python.org/downloads/release/python-31011/
https://git-scm.com/downloads
https://civitai.com/models/125703/protovision-xl-high-fidelity-3d-photorealism-anime-hyperrealism-no-refiner-needed
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🔥NOTES
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
This only works on Nvidia GPUs.
For now, it will only work on the dev branch.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git checkout dev
git pull
run webui.bat to install the venv
cd models
delete the folders inside
replace them with symlinks to your existing models
for example, you could ls to list the folders inside your original models installation and copy the folder names,
then ask Bard, ChatGPT, or Claude to generate a command for each, but be sure to read each one and confirm it is correct before you execute it
mklink /D E:\stable-diffusion-webui\models \\wsl.localhost\Ubuntu\home\impactframes\automatic\models
mklink /D E:\stable-diffusion-webui\models C:\ComfyUI\models
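Instead of hand-writing one mklink per folder, the loop below is a minimal Python sketch of the same idea: link every subfolder of an existing models install into the new webui models folder. The paths in the demo are throwaway placeholders, not your real install, and `link_model_dirs` is a hypothetical helper name. On Windows, `os.symlink` needs an elevated prompt or Developer Mode, just like `mklink`.

```python
import os
import tempfile

def link_model_dirs(src_root: str, dst_root: str) -> list[str]:
    """Symlink each subfolder of src_root into dst_root.

    Equivalent to running `mklink /D` once per model folder;
    point src_root at your original models installation.
    """
    created = []
    for name in sorted(os.listdir(src_root)):
        src = os.path.join(src_root, name)
        dst = os.path.join(dst_root, name)
        if os.path.isdir(src) and not os.path.exists(dst):
            os.symlink(src, dst, target_is_directory=True)
            created.append(name)
    return created

# Demo with temporary directories (replace with your real paths):
with tempfile.TemporaryDirectory() as tmp:
    src_root = os.path.join(tmp, "original_models")
    dst_root = os.path.join(tmp, "stable-diffusion-webui", "models")
    for sub in ("Stable-diffusion", "Lora", "VAE"):
        os.makedirs(os.path.join(src_root, sub))
    os.makedirs(dst_root)
    print(link_model_dirs(src_root, dst_root))
```

As with the AI-generated mklink commands, print what the script would link and double-check it before pointing it at real folders.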
cd ..
cd ./venv/scripts
activate
python.exe -m pip install --upgrade pip
python.exe -m pip install nvidia-cudnn-cu11==8.9.4.25 --no-cache-dir
python.exe -m pip install --pre --extra-index-url https://pypi.nvidia.com/ tensorrt==9.0.1.post11.dev4 --no-cache-dir
python.exe -m pip install polygraphy --extra-index-url https://pypi.ngc.nvidia.com/
python.exe -m pip install onnxruntime
python.exe -m pip install colored
python.exe -m pip install protobuf==3.20.2
python.exe -m pip install onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com/
python.exe -m pip install xformers==0.0.21
python.exe -m pip uninstall -y nvidia-cudnn-cu11
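If something fails later, it is usually one of the pinned packages above. This small sketch (run inside the activated venv) compares installed versions against the guide's pins; `check_pins` is a hypothetical helper, and note that nvidia-cudnn-cu11 is expected to report "not installed" because the last step above deliberately uninstalls it.

```python
from importlib import metadata

# Pins copied from the pip commands in this guide.
PINS = {
    "tensorrt": "9.0.1.post11.dev4",
    "protobuf": "3.20.2",
    "xformers": "0.0.21",
}

def check_pins(pins: dict[str, str]) -> dict[str, str]:
    """Return {package: problem} for anything missing or off-version."""
    problems = {}
    for name, wanted in pins.items():
        try:
            got = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems[name] = "not installed"
        else:
            if got != wanted:
                problems[name] = f"installed {got}, guide pins {wanted}"
    return problems

if __name__ == "__main__":
    for name, problem in check_pins(PINS).items():
        print(f"{name}: {problem}")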
Move into the extensions folder, type CMD in the File Explorer address bar to open a terminal there:
x:\stable-diffusion-webui\extensions>
git clone https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT.git
now add these variables to webui-user.bat:
@echo off
set POLYGRAPHY_AUTOINSTALL_DEPS=1
set CUDA_MODULE_LOADING=LAZY
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --api --xformers
call webui.bat
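As a quick sanity check before launching, this hypothetical snippet parses `set NAME=VALUE` lines from a .bat file and reports which of the two TensorRT-related variables are missing or wrong; `missing_vars` and the regex are illustrative assumptions, not part of the webui itself.

```python
import re

# The two variables this guide adds for the TensorRT extension.
REQUIRED = {
    "POLYGRAPHY_AUTOINSTALL_DEPS": "1",
    "CUDA_MODULE_LOADING": "LAZY",
}

def parse_set_lines(bat_text: str) -> dict[str, str]:
    """Extract NAME=VALUE pairs from `set NAME=VALUE` lines."""
    pairs = {}
    for line in bat_text.splitlines():
        m = re.match(r"\s*set\s+([A-Za-z_][A-Za-z0-9_]*)=(.*)", line, re.IGNORECASE)
        if m:
            pairs[m.group(1)] = m.group(2).strip()
    return pairs

def missing_vars(bat_text: str) -> list[str]:
    """List required variables that are absent or set to the wrong value."""
    pairs = parse_set_lines(bat_text)
    return [k for k, v in REQUIRED.items() if pairs.get(k) != v]

sample = """@echo off
set POLYGRAPHY_AUTOINSTALL_DEPS=1
set CUDA_MODULE_LOADING=LAZY
set COMMANDLINE_ARGS= --api --xformers
call webui.bat
"""
print(missing_vars(sample))  # an empty list means both variables are set
```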
save it, then double-click it and wait; the first start can take a long time (give it at least 3 minutes) before it loads
Go to Settings > User Interface > Quicksettings list > add sd_unet
select a model
In the TensorRT tab, export a default engine or one with the settings you like