This repository contains the code for the paper *Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation*, accepted to appear at EMNLP 2023. If you have any questions, please feel free to create an issue or contact us at [email protected].
The repository is organized as follows:

```
├── dataset
│   ├── detoxification-jigsaw
│   ├── sentiment-imdb
│   └── topic-agnews
├── models
│   ├── best_sentiment_classifier
│   ├── best_topic_classifier
│   ├── ckpt_for_detoxification
│   └── ckpt_for_sentiment_and_topic
├── scripts
├── test_data
```
- `air-decoding.py`: Air-Decoding algorithm implementation for generating text with desired attributes
- `train_PCLMs.py`: training PC-LMs with desired attributes
- `eval_sent_acc.py`: evaluates the sentiment accuracy of generated text
- `eval_topic_acc.py`: evaluates the topic accuracy of generated text
- `eval_toxic`: evaluates the average toxicity of generated text
- `eval_perplexity.py`: evaluates the average perplexity of generated text
- `eval_dist.py`: evaluates the Dist-1, Dist-2, and Dist-3 of generated text
- `/scripts`: contains the bash commands for model training, controllable text generation, and evaluation
- Install the environment:

  ```bash
  pip install -r requirements.txt
  ```

- Download the models: click here.

- After downloading, you will get the `models.zip` file; move it to the main directory and unpack it:

  ```bash
  unzip models.zip
  rm models.zip
  ```
The following commands run the training process of the PC-LMs:

```bash
mkdir ckpt
cd ./scripts
bash train_PCLMs.sh
```
Key arguments in `train_PCLMs.sh` (an example invocation follows the list):

- `--model_name_or_path`: path to the pretrained language model, e.g., GPT2-medium or GPT2-large
- `--prefix_len`: the length of the prefix
- `--prefix_mid_size`: the dimension of the reparameterization in prefix-tuning
- `--output_dir`: the save path for the output model
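As a quick reference, here is a minimal sketch of invoking the trainer directly with these flags; calling `train_PCLMs.py` from the repository root and every argument value shown are illustrative assumptions, not the settings shipped in `train_PCLMs.sh`:

```bash
# Hypothetical invocation of the PC-LM trainer from the repository root.
# Flag names are taken from the list above; every value is illustrative.
python train_PCLMs.py \
    --model_name_or_path gpt2-medium \
    --prefix_len 20 \
    --prefix_mid_size 512 \
    --output_dir ./ckpt/sentiment_PCLM
```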
The following commands run the generation process of Air-Decoding:

```bash
cd ./scripts
bash generate_sentiment.sh
bash generate_topic.sh
bash generate_detoxification.sh
```
Key arguments in the generation scripts (an example invocation follows the list):

- `--model_name_or_path`: path to the fine-tuned PC-LMs
- `--length`: the length of the generated text
- `--samples`: the number of generated texts for each prompt
- `--lambda_cs`: the control strength
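For orientation, a minimal sketch of calling the generator directly; the checkpoint path and all values are illustrative assumptions rather than the defaults baked into the generation scripts:

```bash
# Hypothetical invocation of Air-Decoding generation from the repository root.
# Flag names are taken from the list above; every value is illustrative.
python air-decoding.py \
    --model_name_or_path ./ckpt/sentiment_PCLM \
    --length 50 \
    --samples 20 \
    --lambda_cs 1.0
```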
The following commands run the evaluation process:

```bash
cd ./scripts
bash eval_sent_acc.sh
bash eval_topic_acc.sh
bash eval_toxic.sh
bash eval_perplexity.sh
bash eval_dist.sh
```
Key arguments in the evaluation scripts (an example follows the list):

- `--model_name_or_path`: the fine-tuned classifier model for sentiment or topic evaluation, or the GPT2-large model for perplexity evaluation
- `--dataset_path`: the path of the file under test, a JSONL file in which each entry includes a 'text' field and its corresponding attribute label
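To make the expected input concrete, here is a hedged sketch of one dataset entry and one evaluation run; the `text` field comes from the description above, while the label field name, its value, and the dataset path are assumptions (the classifier path matches the `models` directory listed earlier):

```bash
# Hypothetical first line of a dataset file (JSONL: one JSON object per line);
# 'text' is from the README, the label field name and value are assumptions:
#   {"text": "The movie was a delightful surprise from start to finish.", "label": "positive"}

# Hypothetical invocation of the sentiment-accuracy evaluator from the repository root.
python eval_sent_acc.py \
    --model_name_or_path ./models/best_sentiment_classifier \
    --dataset_path ./test_data/sentiment_generations.jsonl
```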
If you use this code, please cite our paper:

```bibtex
@inproceedings{zhong-etal-2023-air,
    title = "Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation",
    author = "Zhong, Tianqi and
      Wang, Quan and
      Han, Jingxuan and
      Zhang, Yongdong and
      Mao, Zhendong",
    editor = "Bouamor, Houda and
      Pino, Juan and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.512",
    pages = "8233--8248",
    abstract = "Controllable text generation (CTG) aims to generate text with desired attributes, and decoding-time-based methods have shown promising performance on this task. However, in this paper, we identify the phenomenon of Attribute Collapse for the first time. It causes the fluency of generated text to rapidly decrease when the control strength exceeds a critical value, rendering the text completely unusable. This limitation hinders the effectiveness of decoding methods in achieving high levels of controllability. To address this problem, we propose a novel lightweight decoding framework named Air-Decoding. Its main idea is reconstructing the attribute distributions to balance the weights between attribute words and non-attribute words to generate more fluent text. Specifically, we train prefixes by prefix-tuning to obtain attribute distributions. Then we design a novel attribute distribution reconstruction method to balance the obtained distributions and use the reconstructed distributions to guide language models for generation, effectively avoiding the issue of Attribute Collapse. Experiments on multiple CTG tasks prove that our method achieves a new state-of-the-art control performance.",
}
```