add SDXL model example to INC 3.x #1887

Merged · 21 commits · Aug 1, 2024
@@ -0,0 +1,67 @@
Step-by-Step
============
This document provides step-by-step instructions to run the [Stable Diffusion XL model](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) using Smooth Quantization (SmoothQuant) to accelerate inference while maintaining the quality of the output images.

# Prerequisite

## Environment
Python 3.9 or a higher version is recommended.

```shell
pip install -r requirements.txt
```
**Note**: IPEX and torch require the nightly (2.4) builds for compatibility. Please refer to the [installation guide](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=main&os=linux%2fwsl2&package=source).

# Run

To quantize the model:
```bash
python sdxl_smooth_quant.py --model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 --quantize --output_dir "./saved_results"
```
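Under the hood, the script applies SmoothQuant to the SDXL UNet through the INC 3.x PyTorch API. The sketch below is illustrative only: the alpha value, example input shapes, and calibration details are assumptions, not the script's exact settings.
```python
# Illustrative sketch of a SmoothQuant flow for the SDXL UNet with the INC 3.x
# PyTorch API; alpha, example input shapes, and calibration are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline
from neural_compressor.torch.quantization import SmoothQuantConfig, prepare, convert

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float32
)

# Representative UNet inputs for a 1024x1024 generation (shapes are assumptions).
example_inputs = {
    "sample": torch.randn(2, 4, 128, 128),
    "timestep": torch.tensor(999),
    "encoder_hidden_states": torch.randn(2, 77, 2048),
    "added_cond_kwargs": {
        "text_embeds": torch.randn(2, 1280),
        "time_ids": torch.randn(2, 6),
    },
}

quant_config = SmoothQuantConfig(alpha=0.5)  # assumed smoothing strength
unet = prepare(pipe.unet, quant_config, example_inputs=example_inputs)

# Calibration: run a few denoising steps (e.g. a handful of prompts through the
# pipeline) so the prepared UNet observes realistic activation ranges.

unet = convert(unet)  # produces the INT8 UNet
# The script then saves the result under --output_dir, which is what produces
# `qconfig.json` and `quantized_model.pt` in ./saved_results.
```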
To load a quantized model:
```bash
python sdxl_smooth_quant.py --model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 --quantize --load --int8
```
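With `--load --int8`, the script restores the previously saved INT8 UNet instead of quantizing again. Roughly (the `load()` call comes from the INC 3.x PyTorch API; how the script wires the UNet back into the pipeline is an assumption here):
```python
# Illustrative sketch of the --load --int8 path; the exact wiring is an assumption.
from neural_compressor.torch.quantization import load

# Expects the files written at quantization time (qconfig.json, quantized_model.pt).
int8_unet = load("./saved_results")

# The script then swaps this INT8 UNet into the StableDiffusionXLPipeline in place
# of the FP32 UNet (typically via a thin wrapper that keeps the original UNet's
# config attributes visible to the pipeline) and runs inference as usual.
```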

# Results
## Generated Images

With the caption `"A brown and white dog runs on some brown grass near a Frisbee that is just sailing above the ground."`, the images generated by the FP32 model (left) and the INT8 model (right) are shown below.

<p float="left">
<img src="./images/fp32.jpg" width="300" height="300" alt="fp32" align="center" />
<img src="./images/int8.jpg" width="300" height="300" alt="int8" align="center" />
</p>

## CLIP evaluation
We also evaluated CLIP scores for the FP32 and INT8 models on 5,000 samples from the COCO2014 validation dataset. The results are listed below.

| Precision | FP32 | INT8 |
|----------------------|-------|-------|
| CLIP on COCO2014 val | 32.05 | 31.77 |

We use the [mlperf_sd_inference](https://github.com/ahmadki/mlperf_sd_inference) repo to evaluate CLIP scores. To support evaluation of the quantized model,
we made some modifications to its `main.py` script. Use it as follows:
```bash
git clone https://github.com/ahmadki/mlperf_sd_inference.git
cd mlperf_sd_inference
mv ../main.py ./
```
After setting up the environment as instructed in the repo, run the modified `main.py` script to generate images:
```bash
# --quantized-unet points at the quantized model directory, which should
# contain `qconfig.json` and `quantized_model.pt`
python main.py \
--model-id stabilityai/stable-diffusion-xl-base-1.0 \
--quantized-unet ./saved_results \
--precision fp32 \
--guidance 8.0 \
--steps 20 \
--latent-path latents.pt \
--base-output-dir ./output
```
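`--latent-path` points at a fixed initial-latent tensor so that the FP32 and INT8 runs start from identical noise and the images stay comparable. If you need to create such a file yourself, a minimal sketch is below; the `1x4x128x128` shape for 1024x1024 SDXL output (derived from the 8x VAE downsampling factor) and the plain `torch.save` format are assumptions.
```python
# Create a reproducible initial latent for 1024x1024 SDXL generation and save it
# as latents.pt; the shape and file format are assumptions for illustration.
import torch

torch.manual_seed(0)
latents = torch.randn(1, 4, 128, 128)  # (batch, latent channels, height/8, width/8)
torch.save(latents, "latents.pt")
```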
Then compute the CLIP score on the images generated by the quantized model:
```bash
# --image-folder is the folder containing the generated images
python clip/clip_score.py \
--tsv-file captions_5k.tsv \
--image-folder ./output \
--device "cpu"
```
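For a quick sanity check on a handful of images, without the full `captions_5k.tsv` run, a CLIP score can also be computed with `torchmetrics`. The snippet below is an assumption-laden alternative (different CLIP backbone, hypothetical file names), not a replacement for the repo's `clip/clip_score.py`:
```python
# Quick CLIP-score sanity check with torchmetrics; the backbone choice and the
# image file names below are assumptions, not the repo's evaluation setup.
import torch
from PIL import Image
from torchvision.transforms.functional import pil_to_tensor
from torchmetrics.multimodal.clip_score import CLIPScore

metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

pairs = [  # (hypothetical generated-image path, its caption)
    ("./output/dog_frisbee.png",
     "A brown and white dog runs on some brown grass near a Frisbee that is just sailing above the ground."),
]

images = [pil_to_tensor(Image.open(path).convert("RGB")) for path, _ in pairs]
captions = [caption for _, caption in pairs]

metric.update(torch.stack(images), captions)
print(f"CLIP score: {metric.compute().item():.2f}")
```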