OPPO-Mente-Lab/attention-mask-control

Code for paper: "Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models"

[Project Page] [Paper]

Requirements

A suitable conda environment named AMC can be created and activated with:

conda env create -f environment.yaml
conda activate AMC

Data Preparation

First, download the COCO dataset; we use COCO 2014 in the paper. Then preprocess your data with this script:

python coco_preprocess.py \
    --coco_image_path /YOUR/COCO/PATH/train2014 \
    --coco_caption_file /YOUR/COCO/PATH/annotations/captions_train2014.json \
    --coco_instance_file /YOUR/COCO/PATH/annotations/instances_train2014.json \
    --output_dir /YOUR/DATA/PATH
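As a rough sketch of what this preprocessing pairs together, each COCO caption can be matched with the instance bounding boxes of its image. This is a minimal illustration using the standard COCO annotation schema; the actual record layout written by coco_preprocess.py (and the function name below) are assumptions, not the repo's code:

```python
import json
from collections import defaultdict

def build_records(caption_file, instance_file):
    """Pair each COCO caption with the bounding boxes of its image.

    Hypothetical sketch of the grouping step; the real output format
    of coco_preprocess.py (webdataset .tar shards) may differ.
    """
    with open(caption_file) as f:
        captions = json.load(f)
    with open(instance_file) as f:
        instances = json.load(f)

    # COCO instance annotations: bbox is [x, y, width, height] in pixels.
    boxes = defaultdict(list)
    for ann in instances["annotations"]:
        boxes[ann["image_id"]].append(ann["bbox"])

    records = []
    for ann in captions["annotations"]:
        records.append({
            "image_id": ann["image_id"],
            "caption": ann["caption"],
            "boxes": boxes[ann["image_id"]],  # [] if the image has no instances
        })
    return records
```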

Training

Before training, update the following settings in train_boxnet.sh:

  • ROOT_DIR: directory where all results are saved.
  • webdataset_base_urls: path pattern of your preprocessed shards, e.g. /YOUR/DATA/PATH/{xxx-xxx}.tar
  • model_path: path to the Stable Diffusion v1-5 checkpoint.

You can then train the BoxNet with this script:

sh train_boxnet.sh $NODE_NUM $CURRENT_NODE_RANK $GPUS_PER_NODE

Text-to-Image Synthesis

With a trained BoxNet, you can run text-to-image synthesis with:

python test_pipeline_onestage.py \
    --stable_model_path /stable-diffusion-v1-5/checkpoint \
    --boxnet_model_path /TRAINED/BOXNET/CKPT \
    --output_dir /YOUR/SAVE/DIR

All test prompts are stored in test_prompts.json.
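If you want to drive the pipeline programmatically rather than through the script, a small loader for the prompt file might look like this. The schema of test_prompts.json is not documented here, so the code below assumes a flat JSON list whose entries are either prompt strings or objects with a "prompt" field; adjust it to the file's real structure:

```python
import json

def load_prompts(path):
    """Load evaluation prompts from a JSON file.

    Assumes a flat list of strings, or of objects carrying a "prompt"
    key; the repo's actual test_prompts.json schema may differ.
    """
    with open(path) as f:
        prompts = json.load(f)
    return [p if isinstance(p, str) else p.get("prompt", "") for p in prompts]
```

You could then iterate over load_prompts("test_prompts.json") and invoke the synthesis pipeline once per prompt.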

TODOs

  • Release data preparation code
  • Release inference code
  • Release training code
  • Release demo
  • Release checkpoint

Acknowledgements

This implementation builds on the diffusers library, the Fengshenbang-LM codebase, and the DETR codebase.
