This is an example of collaborative inference of the latent diffusion model from the notebook by @multimodalart.
Idea: A swarm of servers from all over the Internet hold a model on their GPUs and respond to clients' queries to run inference. The queries are evenly distributed among all servers connected to the swarm. Any GPU owner who is willing to help may run a server and connect to the swarm, thus increasing the total system throughput.
- Model: CompVis/latent-diffusion
- Dataset: LAION-400M
- NSFW filtering: LAION-AI/CLIP-based-NSFW-Detector
- Distributed inference: hivemind
Warning: This is a demo for research purposes only. Some safety features of the original model may be disabled.
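To make the "evenly distributed" part above concrete: conceptually, each incoming batch of prompts goes to the next available server, roughly like the round-robin sketch below (purely illustrative; this is not the actual hivemind scheduling code, and the server and prompt names are made up):

from itertools import cycle

# Illustrative only: pretend these are servers that joined the swarm
servers = ['server_a', 'server_b', 'server_c']
prompt_batches = [['a cat'], ['a dog'], ['a bridge'], ['a cathedral']]

# Hand batches out in round-robin fashion so the load stays even
for batch, server in zip(prompt_batches, cycle(servers)):
    print(f'{server} <- {batch}')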
conda create -y --name demo-for-laion python=3.8.12 pip
conda activate demo-for-laion
conda install -y -c conda-forge cudatoolkit-dev==11.3.1 cudatoolkit==11.3.1 cudnn==8.2.1.32
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install https://github.com/learning-at-home/hivemind/archive/61e5e8c1f33dd2390e6d0d0221e2de6e75741a9c.zip
pip install opencv-python matplotlib
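After installing, you can sanity-check the environment with a quick import test (a minimal sketch; it only confirms that the key packages import and that PyTorch sees your GPU, which matters if you plan to run a server):

import torch
import hivemind
import cv2  # installed as opencv-python

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('hivemind', hivemind.__version__, '| opencv', cv2.__version__)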
Run remote inference (no GPU is needed on the client side):
from diffusion_client import DiffusionClient
# Here, you can specify one or more addresses of any servers
# connected to the swarm (no need to list all of them)
client = DiffusionClient(initial_peers=[
'/ip4/193.106.95.184/tcp/31334/p2p/QmRbeBn2noC63PWHAM2w4mQCrjLFks2vc4Dgy1YooEpUYJ',
'/ip4/193.106.95.184/tcp/31335/p2p/Qmf3DM44osRjP2xFmomh8oH8HnwLDV9ePDMSvGo5JtjEuL',
])
print(f'Found {client.n_active_servers} active servers')
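# The prompt list is repeated twice, so client.draw() below returns 6 images
# (one per prompt), matching the 2x3 grid plotted later.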
images = client.draw(2 * ['a photo of the san francisco golden gate bridge',
'graphite sketch of a gothic cathedral',
'hedgehog sleeping near a laptop'])
This returns a list of the following data structures:
class GeneratedImage:
    encoded_image: bytes  # WEBP-encoded image
    decoded_image: Optional[np.ndarray]  # Pixel values, a numpy array of shape (height, width, 3)
    nsfw_score: float  # NSFW detector score. May be used for extra filtering
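For example, you can apply your own filtering on top of nsfw_score and save the raw WEBP bytes to disk (a sketch: the 0.9 threshold and the file names are arbitrary choices for illustration, not part of the API):

# Keep only images the NSFW detector considers safe enough and write them out as .webp files
for index, img in enumerate(images):
    if img.nsfw_score < 0.9:  # arbitrary threshold chosen for this example
        with open(f'result_{index}.webp', 'wb') as f:
            f.write(img.encoded_image)  # bytes are already WEBP-encoded, no re-encoding needed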
You can use them to draw results:
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 6))
for index, img in enumerate(images):
    plt.subplot(2, 3, index + 1)
    plt.imshow(img.decoded_image)
    plt.axis('off')
plt.tight_layout()
plt.show()
Expected output: a 2×3 grid with the six generated images.
Pro Tips:
- Use client.draw(..., skip_decoding=True) if you don't need the decoded images.
- Use client.draw(..., nsfw_threshold=value) to change the NSFW detector threshold. client.draw() raises an NSFWOutputError if it detects images with a score exceeding the threshold.
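A sketch combining both options (assuming NSFWOutputError is importable from diffusion_client; adjust the import to wherever it actually lives, and note that 0.95 is just an example threshold):

from diffusion_client import NSFWOutputError  # assumed location of the exception

try:
    # Skip decoding since we only keep the encoded bytes, and tighten the NSFW threshold
    images = client.draw(['graphite sketch of a gothic cathedral'],
                         nsfw_threshold=0.95, skip_decoding=True)
except NSFWOutputError:
    print('Generation was rejected: the NSFW score exceeded the threshold')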
To run your own server and join the swarm, first install all dependencies for the model from the original Colab notebook. Then, you can run:
python -m run_server --identity server1.id --host_maddrs "/ip4/0.0.0.0/tcp/31234" --initial_peers \
"/ip4/193.106.95.184/tcp/31334/p2p/QmRbeBn2noC63PWHAM2w4mQCrjLFks2vc4Dgy1YooEpUYJ" \
"/ip4/193.106.95.184/tcp/31335/p2p/Qmf3DM44osRjP2xFmomh8oH8HnwLDV9ePDMSvGo5JtjEuL"
# Skip --initial_peers if you'd like to start a new swarm
Ensure that --max_batch_size is small enough for your GPU to do inference without running out of memory. The default value is 16.
If your public IP address doesn't match the IP address of your network interface, use --announce_maddrs /ip4/1.2.3.4/tcp/31234 to announce your public IP to the rest of the network.
Servers may still occupy GPU memory after crashing with errors, so run pkill -f run_server before restarting them.
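Once your server is up, you can check that it has joined the swarm by pointing a client at its multiaddress (a sketch; replace the address below with your server's actual host address and peer ID):

from diffusion_client import DiffusionClient

# Hypothetical multiaddress of your own server (host address + peer ID)
my_server = '/ip4/1.2.3.4/tcp/31234/p2p/QmYourServerPeerID'

client = DiffusionClient(initial_peers=[my_server])
print(f'Found {client.n_active_servers} active servers')  # your server should be among them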
[ Based on assorted code by shuf(mryab@ younesbelkada@ borzunov@ timdettmers@ dbaranchuk@ greenfatguy@ artek0chumak@ and hivemind contributors) ]