We primarily assess the compositionality of generated images using T2I-CompBench. This evaluation code is built on the official T2I-CompBench repo.
Please refer to Install.md for installation instructions.
Clone the repo and move its contents into the evaluation directory:

```shell
git clone https://github.com/Karine-Huang/T2I-CompBench.git
mv T2I-CompBench/* eval_t2icombench/
cd eval_t2icombench/
```
Before evaluation, you first need to generate data covering the different evaluation dimensions. The specific prompts for image generation are listed as follows:
The generated images are stored in the `examples` directory. The directory structure is:

```
examples
├── samples/
│   ├── action/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
│   ├── color/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
│   ├── complex/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
│   ├── shape/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
│   ├── spatial/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
│   ├── texture/
│   │   ├── xxx.png
│   │   ├── xxx.png
│   │   ├── ......
```

where the `action` subdirectory is specially designed for the non-spatial dimension.
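Before launching the evaluation, it can save time to verify that every dimension directory exists and actually contains images. The sketch below is a hypothetical pre-flight check (not part of T2I-CompBench); the six dimension names come from the directory tree above.

```python
import os

# The six evaluation dimensions, matching the directory tree above.
DIMENSIONS = ["action", "color", "complex", "shape", "spatial", "texture"]

def check_layout(root="examples"):
    """Return the list of dimensions that are missing or contain no PNGs."""
    missing = []
    for dim in DIMENSIONS:
        d = os.path.join(root, "samples", dim)
        if not os.path.isdir(d) or not any(
            f.endswith(".png") for f in os.listdir(d)
        ):
            missing.append(dim)
    return missing
```

An empty return value means every dimension directory is populated and the evaluation scripts can be run.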
We provide a script to evaluate the performance on all the dimensions in one click:

```shell
bash auto_eval.sh
```
If you would like to run the evaluation for a specific dimension, please refer to the following steps.
To run the BLIP-VQA evaluation:

```shell
export project_dir="BLIPvqa_eval/"
cd $project_dir
out_dir="examples/"
python BLIP_vqa.py --out_dir=$out_dir
```
or run:

```shell
cd T2I-CompBench
bash BLIPvqa_eval/test.sh
```
The output is a JSON file named `vqa_result.json` in the `examples/annotation_blip/` directory.
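To inspect the result, you can average the per-image scores from `vqa_result.json`. The snippet below is a sketch that assumes (not verified against the repo) the file is a JSON list of records whose `"answer"` field holds the score as a string; adjust the field name if your output differs.

```python
import json

def average_score(path):
    """Average the per-image scores in a vqa_result.json file.

    Assumed record schema (an illustration, not confirmed by the repo):
    [{"question_id": 0, "answer": "0.85"}, ...]
    """
    with open(path) as f:
        results = json.load(f)
    return sum(float(r["answer"]) for r in results) / len(results)

# Example: average_score("examples/annotation_blip/vqa_result.json")
```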
Download the detector weights and put them under `experts/expert_weights` in the repo:

```shell
mkdir -p UniDet_eval/experts/expert_weights
cd UniDet_eval/experts/expert_weights
wget https://huggingface.co/shikunl/prismer/resolve/main/expert_weights/Unified_learned_OCIM_RS200_6x%2B2x.pth
```
Run the evaluation:

```shell
export project_dir=UniDet_eval
cd $project_dir
python determine_position_for_eval.py
```
The output is a JSON file named `vqa_result.json` in the `examples/labels/annotation_obj_detection/` directory.
To run the CLIPScore evaluation:

```shell
outpath="examples/"
python CLIPScore_eval/CLIP_similarity.py --outpath=${outpath}
```
or run:

```shell
cd T2I-CompBench
bash CLIPScore_eval/test.sh
```
The output is a JSON file named `vqa_result.json` in the `examples/annotation_clip/` directory.
To run the 3-in-1 evaluation:

```shell
export project_dir="3_in_1_eval/"
cd $project_dir
outpath="examples/"
data_path="examples/dataset/"
python 3_in_1.py --outpath=${outpath} --data_path=${data_path}
```
The output is a JSON file named `vqa_result.json` in the `examples/annotation_3_in_1/` directory.
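After all evaluations finish, each one leaves a `vqa_result.json` in its own output directory. The following is a hypothetical aggregation helper (not part of the repo) that collects a mean score per evaluation; the directory names follow the output paths listed above, and the record schema (`{"answer": "<score>"}`) is an assumption to adjust against your actual output files.

```python
import json
import os

# Output directories as listed in the steps above.
RESULT_DIRS = {
    "BLIP-VQA": "examples/annotation_blip",
    "UniDet": "examples/labels/annotation_obj_detection",
    "CLIPScore": "examples/annotation_clip",
    "3-in-1": "examples/annotation_3_in_1",
}

def collect_scores(result_dirs=RESULT_DIRS):
    """Return {evaluation name: mean score}, skipping missing result files."""
    scores = {}
    for name, d in result_dirs.items():
        path = os.path.join(d, "vqa_result.json")
        if not os.path.exists(path):
            continue
        with open(path) as f:
            results = json.load(f)
        scores[name] = sum(float(r["answer"]) for r in results) / len(results)
    return scores
```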